Course Handout - Motor Programming

Copyright Notice: This material was written and published in Wales by Derek J. Smith (Chartered Engineer). It forms part of a multifile e-learning resource, and subject only to acknowledging Derek J. Smith's rights under international copyright law to be identified as author may be freely downloaded and printed off in single complete copies solely for the purposes of private study and/or review. Commercial exploitation rights are reserved. The remote hyperlinks have been selected for the academic appropriacy of their contents; they were free of offensive and litigious content when selected, and will be periodically checked to have remained so. Copyright © 2010, High Tower Consultants Limited.


First published online 09:00 BST 27th March 2003, Copyright Derek J. Smith (Chartered Engineer). This version [HT.1 - transfer of copyright] dated 18:00 14th January 2010


Earlier versions of this material appeared in Smith (1997; Chapter 6). It is here extended and supported with hyperlinks.

1 - Early Reaction Time Studies

Preliminary Exercise

You are driving your car at 30mph on a clear dry level road and suddenly have to stop. The average emergency stopping distance in such conditions is 23 metres. For how much of this is your foot not yet on the brake, and why?
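One possible worked answer, sketched in code. We assume the UK Highway Code's standard figures (the handout does not say which figures it is built on): a "thinking" time of about two-thirds of a second puts roughly 9 of the 23 metres behind you before your foot even reaches the brake.

```python
# A rough check of the exercise figures, using the UK Highway Code's
# standard assumptions (an assumption on our part - the handout does
# not state which figures it has in mind).

MPH_TO_MS = 0.44704          # metres per second in one mile per hour

speed_ms = 30 * MPH_TO_MS    # 30 mph is roughly 13.4 m/s
total_stop_m = 23.0          # average emergency stopping distance given
reaction_s = 0.67            # assumed driver reaction ("thinking") time

thinking_m = speed_ms * reaction_s   # distance covered before braking starts
braking_m = total_stop_m - thinking_m

print(f"thinking distance: {thinking_m:.1f} m")
print(f"braking distance:  {braking_m:.1f} m")
```

The "why", of course, is the subject of this handout: the reaction time consumed by sensory conduction, central decision making, and motor conduction.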

The tradition that reaction time (RT) can be used as a physical index of underlying neural activity dates from the mid-19th century work of the German polymath Hermann von Helmholtz (1821-1894) (eg. Helmholtz, 1850) and the Dutch physiologist Franciscus Cornelius Donders (1818-1889). Helmholtz set the ball rolling by using RT to calculate the speed of conduction of the nervous impulse, thus .....

"Helmholtz stimulated a man on the toe and on the thigh, and noted the difference in the time between stimulation and response by the hand in the two cases. By these methods he found that the speed of transmission along the motor nerve of the frog was about 90 feet per second and for the sensory nerves of the man something between 50 and 100 feet per second. [.....] It was shown by this discovery that man's body does not instantly obey his mind." (Flugel and West, 1964, p75; bold emphasis added) [For a convenient description of Helmholtz's techniques, see the translation of Helmholtz's papers in Hall (1951).]

Donders' contribution was then to bring central processing time into the equation as well, to help account for whatever mental decision making was needed between the sensory and the motor conduction processes. He devised a technique for comparing the average time for a simple reaction task with the average time for exactly the same reaction delivered or withheld according to the outcome of a broader decision task [this two-stimulus/two-response arrangement was known at the time as Donders' "type b" reaction and is known now as "choice reaction time"; Donders' "type a" reaction was a single response to a single stimulus, and his "type c" reaction was a single response to two stimuli]. The simple reaction was always quicker than the one requiring the additional decision making, and by subtracting one set of timings from the other Donders was able to factor out what we would nowadays conceptualise as "thinking time". Here is the argument as put across by one of the early commentators .....

"Underlying mental chronometry is the idea that since brain processes and mental processes occur together, and brain processes take time, the time of the central occurrence as a whole may be separated off from that of the other parts of the reaction. The time of the entire reaction from sense to muscle - as when I press a key as soon as I see a light - may be divided into three parts: that of the sensory transmission by the optic nerve, that of the central or brain process, and that of the motor transmission to the muscles of the hand. Subtracting from the entire time that required for the first and third parts [.....] the time taken up by the psycho-physical and mental processes may be reached by simple calculation." (Baldwin, 1913, pp76-77.)
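Donders' subtractive logic is simple enough to sketch in a few lines. The millisecond values below are invented purely for illustration - they are not data from Donders or anyone else.

```python
# Donders' subtractive method, sketched with purely illustrative timings
# (the millisecond values below are invented for the example, not data).

simple_rt_ms = [190, 205, 198, 210, 202]   # "type a": one stimulus, one response
choice_rt_ms = [285, 300, 292, 310, 298]   # "type b": two stimuli, two responses

mean = lambda xs: sum(xs) / len(xs)

# Subtracting the simple mean from the choice mean factors out the shared
# sensory and motor conduction stages, leaving the central decision time.
decision_ms = mean(choice_rt_ms) - mean(simple_rt_ms)
print(f"estimated central decision time: {decision_ms:.0f} ms")
```

The subtraction assumes that the sensory and motor stages take the same time in both tasks - an assumption which later became controversial, but which underpins the whole of mental chronometry.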

RT experimentation soon became one of the standard research techniques at the experimental psychology laboratories being set up around the world at the time. The German physiologist-philosopher Wilhelm M. Wundt (1832-1920), one-time assistant to Helmholtz, led the way, founding the world's first dedicated psychology laboratory between 1875 and 1879 at the University of Leipzig, to pursue what he referred to as "physiological psychology". The Leipzig laboratory immediately attracted researchers from across the world, and the development broadly coincided with that of the first US laboratory, the brainchild of William James (1842-1910) at Harvard. Wundt and James were then followed by G. Stanley Hall at Johns Hopkins, James M. Cattell (1860-1944) at the University of Pennsylvania [detail], James Angell at Chicago, and the George Patrick/Carl Seashore team at the University of Iowa. In Europe, one of Wundt's students, Oswald Külpe (1862-1915), set up his own laboratory at the University of Würzburg, and Charles Samuel Myers (1873-1946) eventually followed suit at Cambridge. By the beginning of the 20th century, variations on Helmholtz's and Donders' methods had been used to measure such things as nerve conduction speeds, effective stimulus levels, and summation and integration factors [binocular RT, for example, was found to be faster than monocular, binaural faster than monaural, and so on - see the Brebner and Welford (1980) review]. The method was also being used to reflect upon other areas of human ability. Cattell, for example, saw RT as a legitimate aspect of general assessment testing. Capitalising upon his experiences in his laboratory at Philadelphia, Cattell (1890) proposed a ten-test intelligence/ability battery, of which tests #6 (responding to an aural stimulus) and #7 (colour naming) were RT-based.
The RT method was also adopted by the young psychoanalyst, Carl Jung, who used it to time the responses in his famous "association of ideas" projective testing paradigm (eg. Jung, 1907).

2 - Real-Time Cognition and Cognitive Modularity

Readers unfamiliar with the computer-world concept of "real-time processing" should pre-read Section 3.8 of our e-paper "Short-Term Memory Subtypes in Computing and Artificial Intelligence (Part 6)", before proceeding.

We mention all this, because in measuring RT you are more or less explicitly modelling cognition. Your end-to-end timings are timings of end-to-end cognition, that is to say, of everything the brain has to do in order to respond appropriately to environmental input, and your sub-timings (if you are clever enough to factor them out) are timings of the sub-stages of cognition. When interpreting RT data, therefore, you are constantly having to make judgements as to the internal structure of biological information processing, and to do this effectively you need to combine the rather diverse skills of real-time systems designer and philosopher of mind.

Now one of the reasons Donders' three-component model of cognition caught on so quickly, was that this particular structure was easy to map across onto the models being generated by Donders' contemporaries in the field of physiology and neuropsychology. On the physiological side of things, for example, it was already half a century since Bell (1811) and Magendie (1822) had identified the sensory and motor pathways in the spinal cord [detail], and the brain was already routinely characterised as supporting said pathways with cortical "projection areas", and then coordinating their activity by cortical "association areas" [see the entries from Flourens to Kussmaul (and especially that for John Hughlings Jackson) in our Neuropsychology Timeline]. On the neuropsychological side of things, Broca and Wernicke were in the process of identifying what appeared to be speech production and comprehension areas (respectively), separate from the brain's higher cognitive functions. As a result, Lichtheim (1885) was able to draw up a three-box diagram of end-to-end speech processing in the shape of a house [reproduced in Figure 1 below], a schematic processing hierarchy which explained the basic architecture so neatly that it is still being used over a century later to teach "aphasiology" (the neuropsychology of language) to modern medical students (see, for example, Fuller, 1993).

Nevertheless, other aphasiologists were growing dissatisfied with the three-component model, wanting to take the analysis further. For example, Kussmaul (1878) identified no fewer than six modules without going into the details of lower perceptual analysis or lower motor production. He organised these in much the same shape as "Lichtheim's House", only with three "storeys" to the processing hierarchy rather than just two! The two- and three-storey approaches are compared in Figure 1 .....

Figure 1 - The Classic Three- and Five-Box (Two- and Three-Level) Cognitive Hierarchies: Diagram (a) is Lichtheim's three-box "house" diagram (Lichtheim, 1885) [click for full details]. This is the classic two-level control hierarchy for speech processing, with an input leg [lower right], an output leg [lower left], and a higher functions module [top centre]. However, pathway A-M allows the higher functions module to be bypassed if necessary, by providing a short-cut route similar to (but situated more rostrally than) the spinal reflex arc [not shown]. Diagram (b) shows a more powerful five-module analysis of end-to-end cognition, similar in essence to that proposed by Kussmaul (1878) and subsequently popularised by James (1890) and Wundt (1902). Two boxes have been added to the three shown in Diagram (a), thus creating a middle layer of control, sandwiched between the low-level sensory and motor modules and the apical higher functions module. This gives us a three-stage process of perception [left side, ascending black arrows] and a three-stage process of motor execution [right side, descending black arrows], with the higher functions module being common to both; and this, in turn, requires us to recognise two short-circuit routes rather than one - an upper arc [dotted blue arrow] serving complex reflexes and habits, and a lower arc [solid blue arrow] serving both simple life support reflexes and Pavlovian conditioned reflexes. Craik's (1945) "Neural Geography" model falls neatly into this structure, both Frank's (1963) "Organogramm" and Rasmussen's (1983) "Levels of Performance" model differ only in the nuances, and Norman's (1990) model differs only in the breakdown of higher functions which it proposes.

Diagram (a)


Diagram (b)

Copyright © 2003-2004, Derek J. Smith.

3 - The Intelligence-Performance Debate

"The war will be won through a judicious expenditure of brain power" (General Enoch H. Crowder, cited in Terman, 1918).

Having successfully established its credentials prior to the First World War, RT then seems to have fallen strangely from grace, playing surprisingly little part in the expansion in the psychological evaluation industry which that war brought with it .....

ASIDE: One of the leading figures in wartime psychological evaluation was Robert Mearns Yerkes (1876-1956). Yerkes' presidency of the American Psychological Association coincided with the 1917 entry of the US into the First World War, and he used his influence to push for greater use of psychological assessments and services in the US armed forces. As a result, he was asked to chair both the US National Research Council Psychology Committee and the Committee on the Psychological Examination of Recruits. These committees immediately recognised the potential of the intelligence testing methods developed by Lewis Madison Terman (1877-1956) at Stanford University. In 1906, Terman had translated the Binet-Simon intelligence test from its original French, and in the decade to 1916 had both added in the now familiar "intelligence quotient", or "IQ", and standardised his translation for the American market. By 1917, the "Stanford-Binet" tests were popular and well understood psychometric techniques, and thus natural candidates to fulfil the US military's requirements. 

The explanation for the sidelining of the RT method is that neither it nor other motor skill assessments could be group administered and template scored. The standard US Army "Alpha/Beta" tests were used instead, because they were capable of processing recruits in bulk. Here is Terman's telling of the story .....

"By January 1918 [.....] a Division of Psychology was established in the Office of the Surgeon General, commissioned officers were provided to carry out the program, and a School of Military Psychology for the training of Psychological Officers was established at Fort Oglethorpe, Georgia. By October 1, 1918, approximately one and a half million men and officers had been tested and classified according to intelligence, and tens of thousands of assignments or promotions had been made wholly or in part on the basis of the intelligence ratings. [.....] The general intelligence tests as used in the US Army include three types. 1 - Alpha, a group test for men who read and write English. The Alpha test measures a man's ability to comprehend, to remember and follow instructions, to discriminate between relevant and irrelevant answers to common sense questions, to combine related ideas into a logical whole, to discover by logical reasoning the plan present in a group of abstract terms, to keep the mind directed toward a goal without yielding to suggestion, and finally, to grasp and retain miscellaneous items of information. It is so arranged that its 212 questions are answered by checking or underlining, thus permitting the answers to be scored by the use of stencils. 2 - Beta, a group test for foreigners and illiterates [who] cannot understand or read English well enough to take the Alpha test. Success in it does not depend upon knowledge of English, as the instructions are given entirely by pantomime and demonstration. Like Alpha, Beta measures general intelligence, but it does so through the use of concrete materials instead of by the use of written language [and again] its answers require no writing and are scored by stencils. 3 - Individual Tests. Three forms of individual tests are used in the examination of men who fail to pass the group tests. They are the Yerkes-Bridges Point Scale, the Stanford-Binet Scale, and the Performance Scale."
(Terman, 1918, pp179-180; italics original.)

The Alpha and Beta tests were intelligence tests, not tests of manual skill or physical aptitude. The individual examinations did include picture and block construction tasks to evaluate spatial intelligence, but no RT or similar motor tasks (Yoakum and Yerkes, 1920). RTs seem to have retained some role in selection for flight training in both the Italian and French air forces, but "the British paid little attention to reaction times and the more elaborate studies of resistance to emotional stimuli" (Dockeray and Isaacs, 1921, p127), preferring instead "the MacDougall dotting test, studies of tremor and giddiness, and a study of temperament and service flying". Here is Terman's justification of this deliberate omission .....

"[Recruits] must learn their new tasks from the beginning, and the speed with which they can do this will depend largely on their intelligence. [.....] The mental tests are not intended to replace other methods of judging a man's value to the service [.....]. They merely help to do this by measuring one important element in a soldier's equipment; namely, intelligence. They do not measure loyalty, bravery, power to command, or the emotional traits that make a man 'carry on'. However, in the long run these qualities are far more likely to be found in men of superior intelligence than in men who are intellectually inferior. Intelligence is perhaps the most important single factor in soldier efficiency." (Terman, 1918, pp178/184; italics original; bold emphasis added.)

ASIDE: Terman also observed that "men below C+ are rarely equal to complicated paperwork" (p184). We mention without further comment (a) that the average essay grade in British universities is C, and (b) that it is far from unknown for British politicians to have earned Third Class (D-grade) degrees.

Only after the war did another sensorimotor technique become popular. This was the "continuous pursuit", or "tracking" task, a family of tasks in which the subject has to follow a physically moving stimulus with a pointer of some sort .....

Key Technique - The "Continuous Pursuit" (or "Tracking") Task: The continuous pursuit task was popularised as a tool of psychomotor coordination by workers such as R.H. Seashore (1928) and S. Seashore (1932), who deployed it in the Stanford Motor Skills Battery. In the "rotary pursuit" version of the test (administered using a "pursuit rotor"), the subject is required to keep a metal stylus in touch with a small electrical contact offset on a revolving platform. Task difficulty here depends on the size of the target, its offset distance and the speed of rotation of the platen, and performance is usually quoted as relative "time on target" (TOT) (and can usually be recorded automatically into the bargain because successful contact completes an electrical circuit). In the "linear pursuit" test, the subject has to steer a pointer left or right (on horizontally mounted tests) or up or down (on vertically mounted ones), and difficulty here depends on the distance available to "track ahead", speed of track movement, and response lag in the equipment. Here is how modern ergonomics introduces the problem: "Tracking tasks require continuous control of something and are present in practically all aspects of vehicle control, including driving an automobile, piloting a plane, or steering and maintaining balance on a bicycle. [.....] The basic requirement of a tracking task is to execute correct movements at correct times. In some instances the task is paced by the person doing it, as in driving an automobile. In other instances the task is externally paced, in that the individual has no control over the rate at which the task has to be performed, as in following a racehorse with binoculars" (Sanders and McCormick, 1993, p314).
The underlying processing is then summarised as follows: "In a tracking task, an input, in effect, specifies the desired output of the system; for example, the curves in a road (the input) specify the desired path to be followed by an automobile (the output). Inputs on a tracking task can be constant (eg. steering a ship to a specified heading or flying a plane to an assigned altitude) or variable (eg. following a winding road or chasing a manoeuvering butterfly with a net). Such input typically is received directly from the environment and sensed by mechanical sensors or by people. [.....] The output is usually brought about by a physical response with a control mechanism (if by an individual) or by the transmission of some form of energy (if by a mechanical element). In some systems the output is reflected by some indication on a display, sometimes called a follower, or a cursor; in other systems it can be observed by the outward behaviour of the system, such as the movement of an automobile." (Sanders and McCormick, 1993, pp314-315; italics original.)

Then, of course, came the early rumblings of another war. By now, however, lessons had been learned, and this time the psychologists were amongst the first to be mobilised! During the 1930s, for example, Frederick (later Sir Frederick) Charles Bartlett (1886-1969), director of the experimental psychology laboratories at Cambridge University, carried out research on behalf of the RAF, studying amongst other things ways of reducing air accidents by better selection and training procedures (eg. Bartlett, 1937). This type of work automatically became more important after war was declared in 1939, and drew in a number of talented young postgraduates. At Cambridge, this included Norman Mackworth and Kenneth J.W. Craik (1914-1945), and across the Atlantic it included Alphonse Chapanis (1917-2002) (the "father of ergonomics"), J.P. Guilford, and Paul M. Fitts. Craik joined Bartlett's laboratories in 1936 as a doctoral student, and became involved in researching the design of cockpit simulators (Bartlett, 1946). He played an important part in the development of the "Cambridge Cockpit", a model of good aircraft design [details], and suggested many practical improvements to both controls and instrumentation in recognition of the many physical, physiological, and perceptuo-motor limitations of the pilots being trained.

ASIDE: In time of war, ergonomic knowledge is a priceless strategic resource [for starters, it is nothing less than the science of bomb aiming - see, for example, our e-papers "Short Term Memory Subtypes in Computing and Artificial Intelligence" (Part 4; Section 4.6) and "Mode Error in System Control" (Section 6)]. This explains why so many wartime scientists are allocated bodyguards [although you should start worrying if this happens to you, because that bodyguard's unannounced duty is to make sure you are never taken alive, which could well mean having to shoot you h/self!!]. The earliest Ovid references for continuous tracking research are dated 1947-1948, but we presume the method would have been one of the mainstays of classified research for much of the preceding decade. Much of the US ergonomic research was carried out by the Psychology Branch of the Aero Medical Laboratory. The aforementioned Paul M. Fitts was director between 1945 and 1949, and was followed between 1949 and 1956 by Walter F. Grether [who has a fascinating short memoir online at Grether (2004 online)].

The RT and continuous pursuit paradigms duly helped ergonomists meet the increasingly high-tech demands of the Second World War, but what happened after the war is historically even more significant. By 1945, the demobilising military psychologists had acquired a sort of "critical mass". They were young, highly focussed, and at the cutting edge of ergonomic research, and, as their equipment and methods were gradually declassified, they would go on, in the decade between 1945 and 1955, to found the modern cognitive movement .....

4 - The Issue of the Human Operator and Discontinuous Functioning

Exercise - Continuous Bodily Movement

Extend your right arm, so that your index finger is at shoulder height and pointing half left. Now move it smoothly and slowly to point half right, watching it closely as it goes. Now move it back again, still watching it closely. You should find that it moves smoothly, rather than jerking along, stopping every inch or two. But how so, when the nervous potentials which are doing the moving are known to be discontinuous and crackly?

The issue which now concerns us is how continuous is the continuous in continuous pursuit. The point is that whenever mental time can be micro-measured it seems to advance in discrete quanta, that is to say, stepwise rather than smoothly. Granted, it may seem to move smoothly to the subjective experiencing self, but upon closer analysis it can be shown to be jumpy and discontinuous .....

ASIDE: The irresistible force of this smoothness-from-roughness illusion may be experienced on demand by listening to any digital audio recording or viewing any celluloid or televised movie. With digital audio, we hear smooth shifts in sound frequency and volume, but in fact the waveform is "stepped" up and down by each controlling digit by a technique known as "Pulse Code Modulation" [detail]. It is just that it all happens so quickly that we are unable to notice. Likewise, cine film typically runs at 24 frames per second [actually 48, but each image is shown twice] and lighting on a 50 Hz mains supply actually flickers at twice the supply frequency (brightness peaks on both halves of each cycle), but our eyes do not work at that speed and so we see the light as constant intensity. The "critical flicker" or "flicker fusion" speed is the flicker rate at which the transition from perceived-as-flickering to perceived-as-constant takes place, and has recently been measured at 47.3 Hz [Andrews, White, Binder, and Purves (1996/2004 online)]. As Hick (1948) put it: "If the steps are very short and very numerous, the behaviour of the system approximates to continuity ....." (p37). We need also to mention that a body of neuroscientific evidence has accumulated in the last quarter century that nervous tissue enforces its own periodicity by virtue of its underlying brain rhythms. One rhythm in particular - an oscillation at around 40 Hz - is currently being heavily researched [we recommend the paper by Van Rullen and Koch (2003/2004 online) for a thorough introduction to this topic].

..... meaning that continuous pursuit cannot be as continuous as we might like to think. Instead, it must be a rapid succession of decision-action events, rendered objectively smooth by the weight and inertia of skeletal structures relative to the size of the neurons which are instructing them (just as my car moves smoothly at 30 mph even though it is discretely firing at around 50 sparks per second), and subjectively smooth by some sort of cinematic illusion.
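The Pulse Code Modulation idea mentioned in the aside above can be shown in miniature: a smooth sine wave is sampled at discrete instants and each sample rounded ("quantised") to one of a handful of integer levels, producing exactly the kind of staircase which, stepped through fast enough, passes for a smooth signal. The sample rate and 3-bit depth below are illustrative only - real audio PCM uses, for example, 44,100 samples per second at 16 bits.

```python
import math

# Pulse Code Modulation in miniature: a smooth sine wave is sampled at
# discrete instants and each sample rounded to one of a few integer
# levels. The sample rate and 3-bit depth here are illustrative only.

def pcm_encode(freq_hz, sample_rate, n_samples, levels=8):
    """Return the stepped (quantised) version of a sine wave."""
    half = levels // 2
    samples = []
    for n in range(n_samples):
        t = n / sample_rate
        smooth = math.sin(2 * math.pi * freq_hz * t)   # the continuous signal
        stepped = round(smooth * (half - 1))           # quantise to integer steps
        samples.append(stepped)
    return samples

codes = pcm_encode(freq_hz=1, sample_rate=16, n_samples=16)
print(codes)   # a "staircase" that only approximates the sine
```

At 16 samples per cycle the staircase is obvious; raise the sample rate and the number of levels, and the steps shrink below the resolution of the ear or eye.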

And here we must again mention Cambridge's Kenneth Craik, who drew on the growing body of data from his sensorimotor performance research on behalf of the RAF to put together his theory of "the human operator". Craik was one of the first scientists to realise that many of the principles of brain function were common to all information processing systems, including machines. During the Second World War, he worked on a succession of War Department human performance projects, and soon came to see the idealised human operator as an internally hierarchical part of an equally complex external command and control hierarchy. He was therefore one of the first to attempt to put the human being "into the loop" of system control. Craik was also one of the first to see the value of negative feedback in supporting stable systems, even seeing it as allowing the system in a way to modify its own behaviour. He was particularly fascinated by the role played by "servomechanisms" - downline and largely autonomous automatic regulators - in complex systems, seeing them as vital to the delivery of effective control in both man and machine. He described the human brain as .....

"..... a computing system which responds to the misalinement-input by giving a neural response calculated, on the basis of previous experience, to be appropriate to reduce the misalinement [sic]" (Craik, 1948, p142.)

..... and explained its role in tracking tasks as follows .....

"In psychological terminology, the operator learns the feel of the controls and finally makes the control movements which he judges to be appropriate for reducing the misalinement as much as possible [.....]. There then follows a period of quantitative modification of the ratio of control movement to the misalinement [.....] and finally there may be an appearance of complicated temporal patterns of control movements in response to a misalinement, having the object of compensating for the defects of the operator (such as his time lags) or of the control gear [or of] physical limitations such as the time of flight of the projectile (as in aiming off with a shot gun at a flying bird, where no specific rules for aim-off are given). Thus, viewed from the outside, and regarded as a mechanical system such as we should design to operate in the same way, the operator's brain appears as a computing system and amplifier, with variable characteristics and a variable switch-gear between its different input and output elements. [.....] The first and most marked feature of the cerebral process is its time lag or 'central delay'." (Craik, 1948, pp145-146.)

 "Thus the operator, in tracking, responds intermittently, at a frequency of about 2 per sec [putting] the human operator into the class of 'intermittent definite correction servos' apprehending a misalinement, making a single corrective movement, and so proceeding." (Craik, 1948, pp147-148.)
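Craik's "intermittent definite correction servo" can be caricatured in a few lines of code: sample the misalinement at fixed intervals and make one discrete corrective movement per sample. The 0.5-second interval follows Craik's and Vince's figures; the 80% correction per movement is our own illustrative assumption, not a value from the literature.

```python
# A toy rendering of Craik's "intermittent definite correction servo":
# the operator samples the misalinement only twice a second and, at each
# sample, makes a single ballistic correction. The 0.5 s interval follows
# Craik and Vince; the 80% correction gain is an illustrative assumption.

def intermittent_servo(misalignment, interval_s=0.5, gain=0.8, duration_s=3.0):
    """Return (time, error) pairs recorded at each intermittent correction."""
    history = []
    t = 0.0
    while t <= duration_s:
        history.append((t, misalignment))
        misalignment *= (1.0 - gain)    # one discrete corrective movement
        t += interval_s
    return history

for t, err in intermittent_servo(misalignment=10.0):
    print(f"t = {t:.1f} s   error = {err:.3f}")
```

The error falls in discrete jumps, yet a plot of the pointer's position would look deceptively smooth - which is precisely the point of the preceding section.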

By tragic irony (he was the human operator of a bicycle at the time) Craik was killed in a road traffic accident in May 1945, and so the privilege of developing his ideas fell to his students, notably Margaret A. Vince and W.E. Hick. Vince (1948) continued to address the question of whether the operator responds continually or intermittently. She studied performance data from a number of simple motor tasks involving winding a hand-wheel at constant speed in order to keep a pointer in touch with a moving target. The resulting tracking records all exhibited .....

"..... a periodic or 'wavy' characteristic, which appears to be independent of the rate and shape of the course, and which shows a predominant frequency of about 0.5 sec, with smaller amounts of frequencies from just less than 0.25 to just over 1.0 sec. This periodicity is a common feature of tracking records, being more marked in the performance of unpractised subjects, and indicates that the subject is making a series of intermittent corrective movements separated by short pauses [.....]. It would appear, therefore, that this periodicity is a basic feature of the human being's response to a continuously changing misalinement; he makes a series of intermittent corrective movements, which begin, on the average, at intervals of half a second." (Vince, 1948, pp150-151; bold emphasis added.)

For his part, Hick (1948) considered "discontinuous functioning" in continuous tasks, arguing that the most important problems were "what aspects of the response are continuous and what are discontinuous, and in what manner and with respect to what variables the discontinuities arise" (Hick, 1948, p37). Like his late mentor, he equated the human operator to a servomechanism - a machine which continually "compares its output quantity (or 'signal') with its input quantity, and endeavours to minimise the difference between them" (p37). He then presented data obtained using a repeating stimulus RT design where the interval between consecutive stimuli varied from 0 to 3.7 seconds, and found that at intervals less than 0.5 seconds the RT was typically elevated. Here is how he explained this phenomenon .....

"To sum up, it seems that with this particular apparatus and instructions we can discern a transition from fairly consistently performed responses when the stimuli are less than about 0.3 seconds apart, to equally consistent distinct responses when the stimuli are more than about 0.45 seconds apart. In between, the responses are somewhat chaotic, with occasional very long delays [.....]. In the transition region, the second stimulus may be supposed to have coincided approximately with the decision to make a single response, and thus to have caused some confusion." (Hick, 1948, p49.)

ASIDE: The notion that the execution of one response might render the control system momentarily unable to process a follow-on stimulus was itself nothing new, having originated three quarters of a century earlier with studies of nerve conduction and the spinal reflex, where it was known as the "refractory phase" (eg. Marey, 1876; Woodworth, 1902; Sherrington, 1906; Adrian and Olmstead, 1922). The problem had also been analysed in industrial process control, where it was known as "lag" or "dead time" (eg. Mason and Philbrook, 1940).

5 - The Problem of Information

Readers unfamiliar with information theory in general, or with the concepts of "chunking" and "channel capacity" in particular, should read our e-paper on "The Relevance of Shannonian Communication Theory to Biological Communication". Readers may also be interested in the material on the strangely influential Warren Weaver in our e-paper on "Short-Term Memory Subtypes in Computing and Artificial Intelligence" (Part 4).

Then another sea change rolled in. In the same year that Craik (posthumously), Vince, and Hick were reporting results from the Cambridge applied psychology laboratory, the telecommunications industry was introducing another massively important cluster of concepts. As we have explained elsewhere [divert], it was Ralph Vinton Lyon Hartley (1888-1970), an American electrical engineer, who first attempted to quantify information (Hartley, 1928). Previously, if you had measured information at all you had done it by the sentence-full, or the chapter-full, or the encyclopaedia-full, and while you may have had a lot of it or a little of it, you could not have counted it in any meaningful way. Hartley addressed this problem by pointing out that information is only information if it tells you something you did not already know. He argued that messages which truly informed had to come as a surprise, so to speak, and that if you knew what a message was going to be telling you, then despite any superficial length and complexity that message was actually conveying no information whatsoever. So in order to measure information, you had to start counting the number of things you were being told, relative to the number of things you did not know, and that, in turn, meant counting how many signs you had in your message relative to how many signs there were available in the vocabulary your message was drawn from. Hartley proceeded to quantify this approach in a long and complex mathematical argument [details], culminating in the following equation .....

I = N log S

Where I is the amount of information each message contains, N is the number of signs in a particular message, and S is the number of different signs in your vocabulary.
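Hartley's measure is easily tried out for oneself. The sketch below is ours, not Hartley's, and uses logarithms to base 2 so that the answer comes out in what would later be called bits (Hartley himself left the choice of base open) .....

```python
import math

# Hartley's measure: a message of N signs drawn from an S-sign vocabulary
# carries I = N * log(S). Taking the logarithm to base 2 gives bits.
def hartley_information(n_signs, vocabulary_size):
    return n_signs * math.log2(vocabulary_size)

# A 10-letter message drawn from a 26-letter alphabet:
print(round(hartley_information(10, 26), 1))   # 47.0 bits
```

Note how the formula captures Hartley's insight: the information depends not just on the length of the message but on how many alternative signs *could* have been sent at each position.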

Hartley's formal definition of information was then taken up by Claude Elwood Shannon (1916-2001) of Bell Telephone Laboratories (Shannon, 1948/2003 online; Shannon and Weaver, 1949). Shannon, too, saw information as that which reduced uncertainty, but he was fortunate enough to get his work sponsored by the highly influential Warren Weaver and was therefore able to reach a much wider audience. What Shannon brought to the argument was the concept of the idealised, or "general", communication system. The principles of information and its transmission, he argued, did not vary from the semaphore to the cable telegraph to the telephone to the wireless - there was an underlying logical pattern to them all, and information was the abstract common denominator. Soon Shannon's concepts of "signal-to-noise ratio", "redundancy", and (that favourite of the Internet age) "bandwidth" were beginning to crop up across cognitive research and motor theory, and the period 1948 to 1955 saw a series of groundbreaking studies into biological "channel capacity".
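Two of those terms - bandwidth and signal-to-noise ratio - are in fact tied together in a single formula, nowadays known as the Shannon-Hartley theorem: a channel of bandwidth B hertz with signal-to-noise power ratio S/N can carry at most C = B log2(1 + S/N) bits per second. A quick illustrative calculation (the telephone-line figures are our own example, not Shannon's) .....

```python
import math

# Shannon-Hartley channel capacity: C = B * log2(1 + S/N) bits per second.
def capacity_bps(bandwidth_hz, snr):
    return bandwidth_hz * math.log2(1 + snr)

# A 3 kHz telephone line at a signal-to-noise ratio of 1000 (i.e. 30 dB):
print(round(capacity_bps(3000, 1000)))   # 29902 bits per second
```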

To start with, of course, much of the telecoms jargon was lost on mainstream psychologists, but on 15th April 1955 its relevance to the profession as a whole was famously summarised in an Invited Address by George A. Miller to the Eastern Psychological Association in Philadelphia. The papers Miller particularly drew on were by Hake and Garner (1951), Pollack (1952), Garner (1953), and Pollack and Ficks (1954), and his personal analysis (subsequently published as Miller, 1956/2004 online) was entitled "The magical number seven, plus or minus two". The human operator [not that Miller chose to use this term, or cite Craik] could process an end-to-end cognitive load of around seven alternatives - a little under three bits [formal definition] - of information, it seemed, had available to it an immediate memory capacity of around seven "chunks" of information, worked most impressively when those chunks contained lesser chunks, and had been shaped in geological time by the evolutionary principle that "it was better to have a little information about a lot of things than to have a lot of information about a small segment of the environment".
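Miller's arithmetic is worth checking for oneself: a span of about seven alternatives corresponds to just under three bits per judgment, and it is chunking - packing more bits into each of the roughly seven memory slots - which raises total throughput. An illustrative calculation (ours, not Miller's) .....

```python
import math

# An absolute judgment among k equally likely alternatives conveys
# log2(k) bits, so a span of about seven items is just under three bits.
def bits(k):
    return math.log2(k)

print(round(bits(7), 2))           # 2.81 bits per judgment

# Chunking: seven binary digits versus seven octal "chunks" held in
# immediate memory - same number of slots, three times the information.
print(7 * bits(2), 7 * bits(8))    # 7.0 vs 21.0 bits
```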

Davis (1957) is a typical offering from the period, and is worth looking at in detail because it proposed that the refractory period was a fourth component to go with Donders' basic three, and here - as noted above - was where theoretical interpretations were most lacking. Davis therefore studied two-stimulus situations, where a second stimulus followed the first, carefully timed to arrive centrally during the CRT from the first, when the processor would not yet be ready for it. Sure enough, whenever this happened, the second RT would routinely be additionally delayed. In explaining these observations, Davis drew up a diagram as follows .....

Figure 2 - Davis's (1957) Computation of Central Refractory Time: Here is a microtiming graph showing how a rapid follow-on stimulus can compete for resources with a reaction already in the pipeline. Time has been plotted from left to right across the top of the graph, and the delivery times of the two successive stimuli are shown by vertical dotted lines headed S1 and S2 respectively (separated by Interval I). As explained above, S1 is followed by ST1 [upper white timing bar] as the sensory information makes its way to the central decision making process, and by CT1 [upper green timing bar] as the necessary decision making takes place. CT1 is then followed simultaneously [see the red asterisk] by MT1 [upper mauve timing bar] as the motor information makes its way to the muscles, and by CRT1 [upper brown timing bar], its dead period. The end-to-end reaction time, RT1, is then the sum of ST1, CT1, and MT1. The equivalent S2 profile is shown across the bottom of the diagram, offset to the right by Interval I. This profile is the same as that for S1, except for the delay X [red timing bar]. The point is that CT2 cannot begin until CRT1 is finished, making RT2 greater than RT1 by the delay X, despite ST, CT, and MT factors being identical.

Key: Central Refractory Time (CRT): Having authorised one motor response, this is the time the central processor is then unavailable - for some as-yet-unknown reason - to make further decisions. Sensory Conduction Time: This is the time it takes for sensory information to reach the central decision making process, and the limiting factor here is the conduction speed of the myelinated axons of the afferent pathways of the peripheral and central nervous systems. Typical observed speeds for auditory sensory conduction were in the range 8-10 msec., and for visual ST in the range 20-40 msec. Central Time: This is the time it takes for the central processor - whatever that turns out to be - to decide what motor response is appropriate to the given stimulus. This is the component responsible for thinking time. Motor Conduction Time: This is the time it takes for the decision coming out of the central processor to be conveyed down the motor pathways to the muscles responsible for executing the necessary movement. As with sensory conduction time, the limiting factor here is neural conduction speed, this time of the efferent pathways provided by the upper and lower motor neurons [for quick revision of the motor system, see our e-handout on "The Pyramidal and Extrapyramidal Motor Systems"].

Simplified from a black-and-white original in Davis (1957, p127; Figure 5). This version Copyright © 2003-2004, Derek J. Smith.
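The timing logic of Figure 2 reduces to simple arithmetic: CT2 cannot begin until both ST2 and CRT1 have elapsed, so the second response is delayed by whatever part of CRT1 is still running when the second stimulus arrives centrally. The component values below are purely illustrative, not Davis's measurements .....

```python
# Illustrative timing arithmetic for Figure 2 (values invented for the sketch).
ST, CT, MT = 30, 70, 30   # sensory, central, and motor conduction times, msec
CRT = 100                 # central refractory time, msec (Davis's estimate)
I = 50                    # interval between S1 and S2, msec

RT1 = ST + CT + MT                        # unimpeded first reaction
X = max(0, (ST + CT + CRT) - (I + ST))    # wait imposed on the second stimulus
RT2 = RT1 + X                             # second reaction, refractory-delayed

print(RT1, X, RT2)   # 130 120 250
```

With the interval I longer than CT + CRT, X falls to zero and the two reaction times become equal again, which is exactly the recovery Hick observed at inter-stimulus intervals above about half a second.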

Substituting empirically derived average timings into formulae derived from this diagram, Davis computed the CRT at about 100 msec., and his conclusions as to what actually went on during the refractory period are as follows .....

 "..... it is interesting to speculate on the possible relation of [a Central Refractory Time of the order of 100 milliseconds] to the 10 per second periodicity so often noted in rhythmical activity of the cortex which may itself reflect a refractory period of this duration. [Either way], the experiment seems clearly to establish that delays of a similar order are likely to occur whether signals are given in the same or different modalities. The fact that 'queueing' of signals for central mechanism occurs in both situations suggests that the human operator functions as a single channel through which information from both sense modalities has to pass before appropriate responses are organised." (Davis, 1957, p128.)

Davis's work bogged down, however, in the theoretical vagaries of switching attention between visual and auditory stimuli, and in 1958 the whole area was taken up under a new heading, that of attention theory [SEPARATE SUPPORT MATERIAL UNDER CONSTRUCTION], although, interestingly enough, it has resurfaced recently in the human error literature [see, for example, Van Selst and Jolicoeur (2003 online) in Section 10, whose Figures 2, 3, and 15 take the same basic form as that shown above].

6 - Reactive vs Predictive Control

Readers unfamiliar with control theory in general, or with the concept of "negative feedback" in particular, should read our e-paper on "Basics of Cybernetics" before proceeding, carefully noting the structural similarities between the advanced negative feedback control loop shown in Figure 2 therein, and the arched cognitive hierarchies introduced in Figure 1 of the present paper.

Up to now, we have been considering "reactive control", that is to say, the workings of a system attempting to respond to a detected environmental event - be that a discrete target occurrence (the RT tasks) or else an alteration in a target variable (the tracking tasks) - after that event has happened. This, however, is to ignore a major source of information, namely the periodicity of many events and the predictability of many motions. Thus if you know more or less when something is going to occur then you can be there waiting for it, and if you can compute a prey's logical escape route then you can steer ahead of it and cut it off. The most effective cognitive systems, therefore, are those which incorporate biological predictor technology, the ability to guess what is going to happen next before it has happened.
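The "steer ahead and cut it off" computation is itself a small piece of mathematics: given the target's position and velocity, solve for the point at which pursuer and target arrive simultaneously. The sketch below is our own illustration, assuming a constant-velocity target and a pursuer (or projectile) faster than it .....

```python
import math

# Where to aim so that a projectile of speed s meets a target currently at
# (px, py) and moving with constant velocity (vx, vy). Solving
# |target position at time t| = s*t gives a quadratic in the flight time t.
# Assumes s exceeds the target's speed, so one positive root exists.
def intercept_point(px, py, vx, vy, s):
    a = vx * vx + vy * vy - s * s
    b = 2 * (px * vx + py * vy)
    c = px * px + py * py
    t = (-b - math.sqrt(b * b - 4 * a * c)) / (2 * a)   # positive root
    return px + vx * t, py + vy * t

# Target 100 m away, crossing at 10 m/s; projectile speed 50 m/s:
aim = intercept_point(100.0, 0.0, 0.0, 10.0, 50.0)
print(tuple(round(c, 2) for c in aim))   # (100.0, 20.41) - aim ahead of it
```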

Now it so happened that sustained wartime research and development on anti-aircraft and ship-to-ship gunnery had established a formidable body of expertise in predictor technology .....

Key Concept - Prediction in Gunnery: As we have explained elsewhere, the Second World War saw major advances in the science of ballistic prediction, centred around the development of analog computing devices capable of figuring out where a two- or three-dimensionally moving (and often invisible) target was going to be by the time you could possibly get a projectile to that same point, and of then communicating the necessary parameters to the gunlayers concerned [for a longer introduction to this fascinating story, see the section on "Computation in Ballistics" in our e-paper on "Short-Term Memory Subtypes in Computing and Artificial Intelligence" (Part 2).]

..... and because the mathematics of ballistic prediction is complex in the extreme, the mathematicians involved were forced to identify the natural modularity of the computation required, and map the resulting algorithms onto the intrinsically "parallel processing" nature of analog computing devices. In this way, they hoped to ensure that no one element of the solution would exceed either (a) the physical capabilities of the mechanisms available, or (b) the cognitive capabilities of the humans who would have to design and build it. We see the resulting modularity in the gyroscopic steering systems developed by Elmer Sperry in the period 1912-1922, in the analog "differential analysers" put together in the period 1931 to 1942 by Vannevar Bush at MIT, in battlefield devices such as Bell Labs' AA Predictor No. 9, and in warship fire control computers such as the Vickers HACS (DiGiulian, 2002/2004 online). It was thus an entirely logical next step to try to put predictor concepts to work in theories of biological cybernetics, and enhanced versions of the standard negative feedback control loop were soon developed which went a long way towards replacing reactive control with "predictive control". One such device was Sperry's 1914 "anticipator" (Bennett, 1979), and another was developed by the American electrical engineer Otto J.M. Smith at the University of California at Berkeley in the late 1950s and formally described in Smith (1959). Smith's technique was one of "linear prediction", as shown in Figure 3 .....

Figure 3 - Smith's (1959) Linear Predictor: Here is the logic flow for a Smith Linear Predictor. The principal relationship is between the Controller [mauve highlight] and the thing controlled, known in the abstract as "the Plant" [tan highlight]. Control instructions pass from left to right across the top of the diagram, producing an Outcome [top right] as close as possible to an externally set Target State [top left]. There is then a negative feedback loop [the lowermost right-to-left arrows] by which perturbations in output from the Plant are automatically detected and fed back to prompt revised actions. As with biological homeostasis, when the Plant starts to overperform, the feedback signal arranges for it to be turned down, and when it starts to underperform, the feedback turns it up. The practical difficulties with this otherwise watertight piece of control logic are the two sources of delay [green highlight]. Delay #1 represents the time lag getting hold of performance data from the Plant, whilst Delay #2 represents the time lag getting that data back to the control point. However, these delays can be overcome by having the controller maintain internal "models" of the system being controlled. Such models allow "minor stabilising loops" [long panel centre] capable of making "high frequency" adjustments to the low frequency primary steering flow. Complex internal algorithms [ie. processes] use knowledge acquired from past experience [ie. long-term memory] momentarily to adjust the main control flow. Note that the minor loop in this case taps into the main control flow BEFORE it gets to the Plant, and so is free of the main delay effects. [The terms high frequency and low frequency are, of course, relative. With an electronic circuit they might be measured in megahertz, but with the human operator 10 adjustments per second would be a valuable way of modulating a device otherwise capable of only two adjustments per second.
For a detailed history of feedback in control devices, we recommend Fuller (1976) and Bennett (1979).]

Redrawn from a black-and-white original in Smith (1959, p31; Figure 6). This graphic Copyright © 2004, Derek J. Smith.
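The essence of Figure 3 can be captured in a few lines of simulation. The sketch below is our own toy rendering, not Smith's circuit: a first-order "Plant" whose commands take D steps to arrive is steered by a controller which consults an internal, delay-free model of the Plant, corrected by the (delayed) mismatch between model and measurement .....

```python
# Minimal discrete-time Smith predictor sketch (all parameters invented).
D = 5              # dead time, in control steps (Delays #1 and #2 combined)
a, b = 0.8, 0.2    # first-order plant: y[k+1] = a*y[k] + b*u[k-D]
Kp = 2.0           # proportional controller gain
target = 1.0

y = 0.0                        # real plant output
y_model = 0.0                  # internal, delay-free model of the plant
u_pipe = [0.0] * D             # commands still in transit to the plant
model_pipe = [0.0] * D         # model outputs delayed by D, for the correction

for k in range(60):
    # Predictor output: delay-free model plus measured-vs-modelled mismatch.
    y_pred = y_model + (y - model_pipe[0])
    u = Kp * (target - y_pred)         # controller acts on the *prediction*

    y = a * y + b * u_pipe[0]          # plant sees the command D steps late
    u_pipe = u_pipe[1:] + [u]

    model_pipe = model_pipe[1:] + [y_model]
    y_model = a * y_model + b * u      # model sees the command immediately

print(round(y, 3))   # 0.667 - stable and steady despite the dead time
```

Note that pure proportional control leaves a steady-state offset (the output settles at Kp/(1+Kp) of the target); the point of the predictor is that the loop remains stable at gains and dead times which would set a controller acting on the raw delayed measurement oscillating.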

The Smith Predictor was in essence a hardware implementation of processes previously attributed, in vertebrates, to the cerebellar hemispheres. The idea was clearly stated within motor physiology, for example, by Ruch (1951), thus .....

"According to Ruch the corticocerebellocortical circuit may also represent a part of a mechanism by which an instantaneous order of cortical origin may be 'amplified and extended forward in time'. It should be efficient in starting and in stopping a movement without jerkiness. This might be accomplished by a controlling feedback proportional to the velocity of the movement. In this view 'cerebellar tremor may be comparable to the oscillation of an undamped servomechanism in which the feedback is removed'. Ruch likens the cerebellum to the 'comparator' of a servomechanism which receives from the cerebral cortex some representation of the command, and from the muscle and other exteroceptors representation of the resulting movement. These, compared, may result in a signal which when transmitted to the motor cortex alters its commands to the muscles so as to diminish the discrepancy." (Paillard, 1960, pp208-209.)

In fact, Paillard supports this explanation of Ruch's work with a complex neuroanatomical circuit diagram which, when analysed, turns out to be a seven-module four-layer control hierarchy with the cerebellum providing a short loop anticipator bridging the lowermost two modules.

7 - Schmidt and the "Motor Schema"

Readers unfamiliar with biological motor theory should read our e-paper on "The Motor Hierarchy" before proceeding, and readers unfamiliar with the data processing concept of "program structure" should read the material thereon in our e-paper on "Short-Term Memory Subtypes in Computing and Artificial Intelligence" (Part 6; Section 1).

Let us pause for a moment to take stock. In Sections 1 and 2, we looked at how RT studies first emerged as a simple non-invasive means of investigating the internal structure of cognition. This took us roughly to the beginning of the First World War. In Section 3, we then looked at the development of further motor assessment techniques - notably the continuous tracking task - and at the gradual realisation by motor theorists that hand-eye coordination involved some highly complex mental information processing. And in Sections 4 to 6, we then looked at the emergence of the new science of cognitive psychology, a science whose fundamental purpose was to find out how our information processing minds might be put together using the sensory, central, central refractory, and motor components originally detected in the RT studies. Now all this broadly coincided with advances in two other sciences. The first of these was the continuing physiological work on motor systems, where motor theorists such as Bernstein and Weiss had long regarded motor memory as combining the power to be reactivated as a single unit with the ability of the sub-components of that single unit to be reactivated one by one in rapid succession. Motor memory of this nature was conventionally referred to as a "motor schema", and the point about motor schemas is that by definition they are organised into a "motor hierarchy" [Weiss (1941) is very thorough in his theoretical analysis here]. The other development was the rise of the digital electronic computer, with its central emphasis on the predesign of computation in the form of "stored computer programs". 
By the 1970s, the cognitive psychologists, the motor physiologists, and the computer scientists, were gradually coming together as "cognitive scientists" to create a multi-disciplinary superscience, and one of cognitive science's first achievements was to formulate the theory of "motor programming", the notion that the psychologists' central processes, the physiologists' schemas, and the computer scientists' programs were, mutatis mutandis, one and the same thing. In fact, Schmidt (1980) has since traced the idea of a biological computer program to an early paper by the physiologist Brown (1914), although the term "program" was not borrowed from computer science until much later. Here are two early mentions .....

"..... a rich store of unconscious motor memory is available for the performance of acts of neuromotor skill [.] The neural pattern for a specific and well-coordinated motor act is controlled by a stored program that is used to direct the neuromotor details of its performance. In the absence of an available stored program, an unlearned complicated task is carried out under conscious control, in an awkward, step-by-step, poorly coordinated manner." (Henry and Rogers, 1960, p441.)

"A motor program may be viewed as a set of muscle commands that are structured before a movement sequence begins, and that allows the entire sequence to be carried out uninfluenced by perceptual feedback." (Keele, 1968, p475.)

And here is the main argument for the existence of the instruction sequence .....

".....the strongest human evidence for the motor programming notion seems to be that subjects can initiate, carry out, and stop a limb movement within 100 msec., implying that decisions about when to stop the movement must have been made prior to the initiation of the movement." (Schmidt, 1975, p231; emphasis added.)

So to cut a long story short, motor behaviour is nowadays typically conceptualised as a cycle of program selection (from some sort of program store) followed by program execution. Each motor program converts individual muscle contractions into a fluent total movement. Just as a complex speech act consists of sentences, each of which consists of words, each of which consists of phones, so too with motor behaviour. Firstly there is a top-level intention, then the initial breakdown into the limb and postural movements required, and then the low-level muscle control necessary to make it all happen. In Section 8, we look at probably the best of the individual theories .....
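That selection-then-execution cycle, and the hierarchy beneath it, can be caricatured in a few lines of code. The program names and stores below are invented purely for illustration .....

```python
# Toy motor hierarchy: intention -> limb movements -> muscle commands.
# (All names invented; the point is the selection-then-execution cycle.)
PROGRAM_STORE = {
    "wave": ["raise_arm", "oscillate_wrist", "lower_arm"],   # limb level
}
SUBMOVEMENTS = {
    "raise_arm": ["contract_deltoid", "relax_triceps"],      # muscle level
    "oscillate_wrist": ["contract_extensors", "contract_flexors"],
    "lower_arm": ["relax_deltoid"],
}

def execute(intention):
    program = PROGRAM_STORE[intention]      # program selection from the store
    # Program execution: each limb-level step expands into muscle commands.
    return [cmd for step in program for cmd in SUBMOVEMENTS[step]]

print(execute("wave"))
```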

8 - Sternberg and the "Subprogram Retrieval Model"

One of the most influential workers in modern motor programming theory has been Saul Sternberg of the University of Pennsylvania, and one of the most commonly used research paradigms involves subjects producing short action sequences (such as saying words or typing keystrokes) at high speed. This series of experiments was prompted in part by an observation by Henry and Rogers (1960) to the effect that reaction time increased as more responses were required. The main sources, however, are papers by Sternberg (1969), Monsell and Sternberg (1976), and Sternberg, Monsell, Knoll, and Wright (1978), and this series of experiments is well reviewed in later papers by Sternberg, Knoll, Monsell, and Wright (1988) and Wright (1990).

The typical experiment runs through four stages as follows .....

(1) A short list of words is presented visually at a rate of one per second.

(2) This is followed by a four-second delay to allow rehearsal and response preparation.

(3) This is followed by a three-step countdown, at the end of which the subject has to recite the list as quickly as possible. (It is arranged that some countdowns are false alarms so that subjects cannot afford to "jump the gun".)

(4) Timings of latency (how long it takes to start responding after the "Go" has been given) and duration (how long it takes to complete the list once it has started) are taken.

Some typical response patterns are shown in Figures 4 and 5:

Figure 4 - Mean Latency for Rapid Motor Production: This graph plots the latency of the first response against the number of items to be produced. The hollow dots are words and the solid dots are phonologically matched non-words. It can clearly be seen that latency increases linearly with number of items to be produced, and that it makes no difference whether the items are words or just sound patterns.

[Sternberg et al's (1988) subprogram latency data]

Redrawn from Smith (1997; Figure 6.7), after Sternberg, Knoll, Monsell, and Wright (1988, p187; Figure 7a). This version Copyright © 2003-2004, Derek J. Smith.


Figure 5 - Mean Element Duration in Rapid Motor Production: This graph plots mean element duration against the number of items to be produced. The upper line is for 2-syllable word items, and the equation of this line is 169 + 12n. The lower line is for 1-syllable word items, and the equation is 85 + 12n. The fact that both line slopes are 12 msec per item (that is to say, the fact that they are parallel) indicates that it is unit count, not unit length, which makes the difference.

[Sternberg et al's (1988) subprogram duration data]

Redrawn from Smith (1997; Figure 6.8), after Sternberg, Knoll, Monsell, and Wright (1988, p179; Figure 3b). This version Copyright © 2003-2004, Derek J. Smith.
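The two regression lines in Figure 5 make a convenient calculator for the model's predictions (the equations are Sternberg et al's; the worked values are ours) .....

```python
# Mean element duration from Figure 5: 85 + 12n msec for 1-syllable words,
# 169 + 12n msec for 2-syllable words, where n is the list length.
def element_duration_ms(n, syllables):
    intercept = {1: 85, 2: 169}[syllables]
    return intercept + 12 * n

# Per-item and implied total list duration for 1-syllable lists:
for n in (2, 4):
    d = element_duration_ms(n, 1)
    print(n, d, n * d)   # doubling the list more than doubles total duration
```

Note the diagnostic value of the arithmetic: the per-item cost itself grows with list length, so total duration grows faster than linearly - exactly what one would expect if each item's retrieval involved a search of the whole prepared program.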

Now the point of the curves shown in Figures 4 and 5 is that latency and duration data - properly analysed - can have a lot to say about the way motor behaviour is organised, and Sternberg and his team have incorporated their data into a theoretical model - the "Subprogram Retrieval Model", as shown diagrammatically in Figure 6.

Figure 6 - Sternberg et al's (1988) Subprogram Retrieval Model: Here is a microtiming graph similar to that shown in Figure 2, attempting to explain the relationships shown graphically in Figures 4 and 5. The model's principal assertion is that the overall task is somehow "programmed" and then held in memory while convenient segments of it are read out and acted upon. The following stages are proposed ..... 

(1) When preparing to act, the subject constructs his/her motor program. This identifies the words to be spoken (or the typewriter keys to be struck, etc.), and the order of production.

(2) This is then stored away in what they termed a motor program buffer, a sort of brief memory store. For a longer discussion of the history of the "buffer" concept see our e-paper on "Short-Term Memory Subtypes in Computing and Artificial Intelligence" Part 4 (Section 1.3).

(3) Each item, or "unit", in the program has its own subprogram.

(4) When the "Go" signal is given, the first subprogram is selected, that is to say, it is retrieved from the buffer and given control of the appropriate musculature. This is known as the selection stage. (The authors recognise that the words "selection", "retrieval", and "activation" could be used synonymously to describe the underlying process here.)

(5) The selected subprogram is now executed, that is to say, the muscles are activated. This is known as the command stage.

(6) The full-length utterance is controlled by repeating the retrieval-command cycle (steps 4 and 5) for each unit involved.

[Sternberg et al's (1988) subprogram retrieval model]

Redrawn from Smith (1997; Figure 6.9), after Sternberg, Knoll, Monsell, and Wright (1988, p184; Figure 5). This version Copyright © 2003-2004, Derek J. Smith.
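The stages above can be caricatured computationally. Only the shape of the predictions - latency and per-item duration both linear in list length n - follows Sternberg et al; the parameter values below are invented .....

```python
# Toy subprogram retrieval model. Each selection searches the whole n-unit
# buffer, so retrieval cost grows with n; the command stage is constant per
# unit. (search_ms, command_ms, and base_ms are illustrative, not fitted.)
def produce(n_units, search_ms=12, command_ms=85, base_ms=200):
    retrieval = search_ms * n_units        # selection stage, grows with n
    latency = base_ms + retrieval          # time from "Go" to first sound
    element = retrieval + command_ms       # duration of each produced unit
    return latency, element

print(produce(3))   # (236, 121) - both measures rise linearly with n
```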

9 - The Motor Schema in Robotics

Readers unfamiliar with modelling theory in general, or with the concept of "functional decomposition" in particular, should read our e-tutorial on "How to Draw Cognitive Diagrams" before proceeding.

As explained in Sections 7 and 8, biological motor theorists now model motor memory using terminology borrowed from computer programming. However, the flow of ideas has not been entirely one-way, and Ronald C. Arkin of Georgia Institute of Technology is typical of those who have sought to introduce biological schema theory into robotics. His definition is as follows .....

"A motor schema is the basic unit of motor behaviour from which complex actions can be constructed. It consists of both the knowledge of how to act and the computational process by which it is enacted." (Arkin, 1993, p385.)

Arkin (1993) goes on to argue that one of the main benefits of the schema approach is that it forces due theoretical attention onto the basic shape of the processing network. In his words, schemas "afford fairly large grain modularity, in contrast to neural network models, for expressing the relationships between motor control and perception" (p385). It also makes for efficient "reactive control" robotic systems, the hallmarks of which are (1) that tasks have to be "decomposed" into more "primitive" behaviours, and (2) that global representations are to be avoided. This decomposition will then allow a repertoire of motor behaviours to be established, each of which is supported by "primitive stimulus-response reactions" (p387). Here is an illustrative repertoire of schemas, which should be self-explanatory .....

move ahead; move towards goal; avoid static obstacle; stay on path; escape and dodging; docking; follow-the-leader; move up/down or maintain altitude (these being key skills when navigating undulating terrain, flying, or swimming)
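Arkin's schemas emit velocity vectors which are combined by summation, in the potential-field style of his schema-based reactive control. The following toy sketch - with invented gains and fields - shows two of the repertoire items cooperating .....

```python
import math

# Two toy motor schemas, each emitting a velocity vector (gains invented).
def move_to_goal(pos, goal, gain=1.0):
    dx, dy = goal[0] - pos[0], goal[1] - pos[1]
    d = math.hypot(dx, dy)
    return (gain * dx / d, gain * dy / d)          # unit pull towards goal

def avoid_static_obstacle(pos, obstacle, radius=2.0, gain=1.0):
    dx, dy = pos[0] - obstacle[0], pos[1] - obstacle[1]
    d = math.hypot(dx, dy)
    if d > radius:
        return (0.0, 0.0)                          # outside sphere of influence
    return (gain * dx / d, gain * dy / d)          # repulsion, pointing away

def combine(*vectors):
    # The schema manager's job, at its simplest: sum the active vectors.
    return tuple(map(sum, zip(*vectors)))

v = combine(move_to_goal((0, 0), (10, 0)),
            avoid_static_obstacle((0, 0), (1, 1)))
print(tuple(round(c, 2) for c in v))   # pulled to the goal, pushed off the obstacle
```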

Arkin also proposes a "motor schema manager" to select from this repertoire, and sees this as being one of the five main modules in any "autonomous robot architecture", as now shown .....

Figure 7 - Arkin's (1990) "Autonomous Robot Architecture": Here is the modular structure designed by Arkin (1990) to provide "general purpose navigational capabilities over a wide range of problem domains" (p111). The higher functions module [mauve panel] is shown as a composite of three lesser functions, namely Mission Planner, Navigator, and Pilot [note the internal command hierarchy here], all in the service of an external Human Commander. This sits atop a totally conventional motor hierarchy [the large faint greyscale arrow], comparable in granularity to that proposed by 19th century diagram-makers such as Kussmaul (1878), and in which the Motor Schema Manager [blue panel] intervenes between the Pilot and the final Motor Channel Controllers [bottom centre]. The Motor Schema Manager thus plays a similar role to the "action schema" release function of the Supervisory System in the Norman-Shallice Theory of Executive Function [detail]. Note how the Motor Subsystem has available to it its own Internal Sensors [bottom right], thus supporting the sort of "efference copy" facility recommended by von Holst and Mittelstaedt (1950).

Simplified from a black-and-white original in Arkin (1990, p112; Figure 3), then with additional annotation. This graphic Copyright © 2004, Derek J. Smith.

The concept of decomposition has also proven popular with Rodney A. Brooks, Director of the MIT Artificial Intelligence Laboratory and Fujitsu Professor of Computer Science, although he reaches subtly different conclusions to Arkin. Brooks uses decomposition to build "layered" control systems, that is to say, he "slices" the problem horizontally rather than vertically .....

"There are many possible approaches to building an autonomous intelligent mobile robot. As with most engineering problems, they all start by decomposing the problem into pieces, solving the subproblems for each piece, and then composing the solutions [.....]. Typically mobile robot builders [instances] have sliced the problem into some subset of

mapping sensor data into a world representation

task execution

motor control

This decomposition can be regarded as a horizontal decomposition of the problem into vertical slices. The slices form a chain through which information flows from the robot's environment, via sensing, through the robot and back to the environment, via action, closing the feedback loop (of course most implementations of the above subproblems include internal feedback loops also). An instance of each piece must be built in order to run the robot at all. Later changes to a particular piece (to improve it or extend its functionality) must either be done in such a way that the interfaces to adjacent pieces do not change, or the effects of the change must be propagated to neighbouring pieces, changing their functionality, too. We have chosen instead to decompose the problem vertically as our primary way of slicing up the problem. Rather than slice the problem on the basis of internal workings of the solution, we slice it on the basis of desired external manifestations of the robot control system. To this end we have defined a number of levels of competence for an autonomous mobile robot. A level of competence is an informal specification of a desired class of behaviours for a robot over all environments it will encounter. [.....] We have used the following levels of competence [.....] as a guide in our work:

0) Avoid contact with objects (whether the objects move or are stationary).

1) Wander aimlessly around without hitting things.

2) 'Explore' the world by seeing places in the distance that look reachable and heading for them.

3) Build a map of the environment and plan routes from one place to another.

4) Notice changes in the 'static' environment.

5) Reason about the world in terms of identifiable objects and perform tasks related to certain objectives.

6) Formulate and execute plans that involve changing the state of the world in some desirable way.

7) Reason about the behaviour of objects in the world and modify plans accordingly.

Notice that each level of competence includes as a subset each earlier level of competence [.....] The key idea of levels of competence is that we can build layers of a control system corresponding to each level of competence and simply add a new layer to an existing set to move to the next higher level of overall competence. We start by building a complete robot control system that achieves level 0 competence. It is debugged thoroughly. We never alter that system. We call it the zeroth-level control system. Next we build another control layer, which we call the first-level control system. It is able to examine data from the level 0 system and is also permitted to inject data into the internal interfaces of level 0 suppressing the normal data flow. This layer, with the aid of the zeroth, achieves level 1 competence. The zeroth layer continues to run unaware of the layer above it which sometimes interferes with its data paths. The same process is repeated to achieve higher levels of competence [.....]. We call this architecture a subsumption architecture." (Brooks, 1986, p16; italics original.)
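Brooks's layering can be caricatured in a few lines - with the deliberate simplification that we arbitrate between the layers' outputs, whereas real subsumption wires suppression and inhibition into the lower layer's internal interfaces .....

```python
# Toy two-layer controller in the spirit of the subsumption architecture.
# (Sensor names and commands invented; priority arbitration stands in for
# Brooks's wire-level suppression mechanism.)
def level0_avoid(sensors):
    # Level 0 competence: avoid contact with objects. Built once, never altered.
    if sensors["obstacle_m"] < 1.0:
        return "reverse"
    return None                    # nothing to avoid; no command issued

def level1_wander(sensors):
    # Level 1 competence: wander aimlessly without hitting things.
    return "forward"

def control(sensors):
    # Level 0 keeps running underneath; its reflex wins whenever it fires.
    return level0_avoid(sensors) or level1_wander(sensors)

print(control({"obstacle_m": 0.4}), control({"obstacle_m": 3.0}))
```

Adding a level 2 "explore" competence would mean writing a third function and extending the arbitration, while the two existing layers - like Brooks's debugged zeroth-level system - remain untouched.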

Our own views on the evolution of competence in biological processing systems over geological time are set out in Smith and Stringer (1997). Even in relatively simple animals, the nervous control system is modular and hierarchical, and the sort of three-layered control hierarchy shown in Figure 1(b) can be detected from the large mammals upwards. We show the process of "cognitive meiosis" (= modular "fission") in Smith (1991) and the fine detail of the full-blown adult human control hierarchy in Smith (1993). For more on Brooks' work see our e-paper on "Short-Term Memory Subtypes in Computing and Artificial Intelligence" (Part 5; Section 3.10).

10 - Recent Work

Readers unfamiliar with the computer-world concepts of "multiprogramming" and "job execution scheduling" should read Section 1.2 of our e-paper "Short-Term Memory Subtypes in Computing and Artificial Intelligence (Part 5)", before proceeding.

Despite all the advances in cognitive science and robotics, the problems of lack of clarity as to the fundamental processing architecture of the cognitive system - that is to say, its functional structure and its physical modularity - are as real today as they were when Sperry patented his Anticipator in 1914. Indeed, one has only to compare the late 19th century models with their late 20th century counterparts to see how good the 19th century guesses actually were. The five-module three-level diagram has shown itself to be particularly versatile and robust in this respect, and crops up yet again in the RT literature in a 1986 paper by Frith and Done. Their work was prompted by Leonard's (1959) observation that simple RTs were always much faster than choice RTs, but that two-choice, four-choice, and eight-choice RTs were then effectively equal to one another (Leonard had his subjects place two, four, or eight fingers on vibro-switches, and then depress whichever one started to vibrate). Frith and Done therefore devised variants of the RT task in which they explored the effects of predictability by either holding the interval between successive stimuli constant or letting it vary. Here is their justification .....

"Anticipating the stimulus is strictly speaking a way of cheating, and in the one-choice task it is possible to get away with it, since anticipations can hardly be detected because they do not result in errors" (Frith and Done, 1986, p169).

Frith and Done interpreted their RT data as suggesting three distinct processing routes between stimulus and response, as follows .....

1 - Slow Route: This is the route which supports "strategic" behaviour. It consists of [the usual] five cognitive stages, namely (a) "stimulus registration", whose role is to detect that a stimulus has occurred, (b) "stimulus identification", whose role is to decide which stimulus it was, (c) "response selection", whose role is to select from the available behavioural repertoire the response most appropriate to the identified stimulus, (d) "response specification", whose role is to "load" the "motor program to execute selected response" (p171), and (e) "response initiation", whose role is to start that program running. These five stages act together as shown in Figure 8 below. 

2 - Direct Route: This is the route proposed by Leonard (1959) to cope with one-choice situations where the appropriacy of a particular response to a particular stimulus is not in question. It consists of the stimulus registration, stimulus identification, response specification, and response initiation stages, making it the same as Route #1 but - reasonably enough, since no choice among responses has to be made - without the response selection stage.

3 - Fast Route: This is a route proposed by Frith and Done to cope with situations where both stimulus and response are known. It consists ONLY of the stimulus registration and response initiation processes, making it three processes simpler than Route #1.
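Since the three routes differ only in which stages they traverse, their predicted RT ordering (slow > direct > fast) can be illustrated with a toy additive-stages model. The stage durations below are arbitrary round numbers of our own choosing, not Frith and Done's measurements:

```python
# Illustrative additive-stages model of Frith and Done's (1986) three routes.
# Each route is simply the subset of processing stages it traverses; the
# per-stage durations are invented for illustration, not fitted data.

STAGE_MS = {
    "stimulus registration":   50,
    "stimulus identification": 80,
    "response selection":     120,
    "response specification":  60,
    "response initiation":     30,
}

ROUTES = {
    "slow":   ["stimulus registration", "stimulus identification",
               "response selection", "response specification",
               "response initiation"],
    "direct": ["stimulus registration", "stimulus identification",
               "response specification", "response initiation"],  # no selection
    "fast":   ["stimulus registration", "response initiation"],   # minimal
}

def reaction_time_ms(route):
    return sum(STAGE_MS[stage] for stage in ROUTES[route])

for name in ("slow", "direct", "fast"):
    print(name, reaction_time_ms(name))   # slow > direct > fast, as predicted
```

Whatever durations are plugged in, dropping stages can only shorten the total, so the model necessarily reproduces the qualitative ordering of the three routes.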

This is how the authors summarise their argument .....

"Will has minimal involvement in the direct route. The response is selected and specified entirely by the stimulus. However the response can clearly be inhibited by an act of will. [.....] In the slow route also the response is selected entirely on the basis of the stimulus. However volition is required to maintain the temporary and arbitrary links between stimulus and response. In the 'fast' route the signal has a minimal role, that is, it is reduced to the timing of the response. Response selection and specification are both achieved by an act of will, and the stimulus merely initiates the response. There are many situations in which a stimulus plays no role at all, since the response is not only selected and specified, but also initiated by an act of will. The routes to responding that we have described here suggest that a useful distinction might be made between control of action by an external stimulus and control of action by an internal intention." (Frith and Done, 1986, p175.)

..... and this is how they summarise their proposals graphically .....

Figure 8 - Frith and Done's (1986) Three Processing Routes: Diagram (a) shows the five separate processes involved in dealing with "arbitrary and novel" relationships between environmental stimulation and behaviour. This is exactly as presented in Frith and Done's Figure 2(a). This is the type of cognition required for situations which require some sort of thinking about, and in which responses need to be selected, loaded, and executed from scratch. The response selection process is shown as "Select R", and has access to an "S-R Table" [far left]. This latter is not a process as such, but rather a memory store available to Select R. Diagram (b) shows EXACTLY THE SAME components and flow logic, but redrawn as a five-module three-level cognitive hierarchy, so that it is visually more or less totally cognate with diagrams going back to Kussmaul (1878) [see the caption to Figure 1 above for a fuller list]. Diagram (c) then shows how the three routes described in the main text map onto Diagram (b). Note how the slow route [green arrow] takes in all five processing modules, the direct route [pink arrow] omits the response selection module, and the fast route [blue arrow] omits the identification and specification modules as well. The three routes thus represent full, restricted, and minimal [our terms - Ed.] use of the cognitive system, and we may safely presume that the most effective total use of the available cognitive resources is when any pressing demands are processed minimally and other, more abstract, demands make use of the higher cognitive resources thereby freed up. This, in turn, would require some sort of "multiprogramming" facility, analogous to the "job execution scheduling" carried out in computers equipped with virtual machine operating systems.

Diagram (a) redrawn from a black-and-white original in Frith and Done (1986, p171; Figure 2(a)). Diagrams (b) and (c) our realignment and annotation of (a). This combined graphic Copyright © 2004, Derek J. Smith.

In another recent line of enquiry, Henderson and Dittrich (1998) have studied the effects of "preparedness" in reducing both the sensory and the motor components of an RT. They argue that it is misleading to focus (as most studies have) on "the on-line processes initiated by the arrival of the imperative signal" (p531) to the exclusion of the general preparatory processing. They reviewed the literature back to Wundt's time, and suggest that in all likelihood there is a sensory component to all motor RT tasks, by virtue of the fact that the attentional system has to be tuned to the triggering stimulus. Here is their core argument .....

"The question now arises of whether and how a person can engage in preparation so as to facilitate search and detection in a task where the stimuli are not distinguishable spatially. We suggest that in those circumstances, a stimulus set would involve attending to a mental representation of the target or imperative signal. Such executively controlled attention might hold in a working memory the critical features that best distinguish the target from a particularly noisy background [.....]. This sort of stimulus set has its equivalent in the motor domain where a participant runs covertly through a highly complex motor skill before executing it." (Henderson and Dittrich, 1998, p548)

Elsewhere, the refractory period continues to exercise the minds of those responsible for training aircrew. This is because RT represents a period out of control. For example, in both continuous tracking and detection tasks, the time between the physical stimulus (be it a change or an onset) and the necessary compensating motor adjustment is time spent potentially heading towards a hazard rather than away from it - as the Highway Code has been explaining for nearly a century, braking time can kill. Far from going away in the intervening decades, this problem has actively worsened, because the machines we fly and the systems to be controlled are now much faster. Thus a re-entering space shuttle will travel about half a mile in each 100 msec of central refractory time, two and a half miles in the 0.5 seconds it takes to cancel an incorrect motor decision and authorise a correction, or three and a half miles in the 0.7 seconds it takes a motorist to get his/her foot onto the brake pedal in an emergency; and a lot can go wrong in three and a half miles! As a result, workers at the NASA Ames Research Center have been taking RT research to new levels of sophistication. Van Selst and Jolicoeur (2003 online) are typical of this latest work, and present data evaluating a number of theoretical explanations of "response selection bottlenecks". Following Frith and Done's example, they justify the importance of this issue by pointing out that the task of response selection is the only real difference between a simple one-choice detection task (where the response is known and can be "readied" in advance so that it merely awaits initiating) and a two-choice discrimination task (where a response cannot be readied in advance).
The authors reviewed the competing theories in this area, and carried out a number of carefully designed empirical studies in which they systematically varied "stimulus onset asynchrony" (SOA) and other factors, before concluding that when two tasks interfere it is due "in large part", but not entirely, to a bottleneck at response selection. On the basis of these data, the authors favour the "Multiple Bottleneck Hypothesis", an explanation which derives from the earlier work of De Jong (1993) and proposes both a processing bottleneck at the response specification stage and a refractory period of about 200 msec at the response execution stage.
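The response-selection bottleneck idea can be made concrete with a toy simulation of the classic two-task design. The stage durations below are invented, and the sketch implements only a single bottleneck rather than De Jong's full multiple-bottleneck account; it nevertheless reproduces the characteristic pattern whereby RT to the second stimulus shrinks as SOA grows and then flattens out:

```python
# Toy simulation of a response-selection bottleneck (illustrative only;
# stage durations are invented, and only a single bottleneck is modelled).

PERCEIVE, SELECT, EXECUTE = 100, 150, 80   # ms per stage, hypothetical

def rt2_ms(soa_ms):
    """RT to the second of two stimuli whose onsets are `soa_ms` apart."""
    t1_select_end = PERCEIVE + SELECT            # task 1 occupies the bottleneck
    t2_perceived = soa_ms + PERCEIVE             # task 2's perception runs freely
    t2_select_start = max(t2_perceived, t1_select_end)  # queue if bottleneck busy
    t2_done = t2_select_start + SELECT + EXECUTE
    return t2_done - soa_ms                      # RT measured from stimulus 2 onset

for soa in (0, 100, 200, 400):
    print(soa, rt2_ms(soa))   # RT2 shrinks as SOA grows, then levels off
```

At short SOAs task 2 must wait for the bottleneck and the waiting time is charged to its RT; once the SOA exceeds the slack, RT2 bottoms out at the sum of its own three stage durations.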

Nor may we overlook the latest on predictive control. Miall, Weir, Wolpert, and Stein (1993/2004 online) are among the field leaders here, and are working on the suggestion that the role of the cerebellum in biological motor control is to function as a Smith Predictor, in precisely the sense that such a mechanism was described in Section 6 and illustrated in Figure 3. In fact, they propose two such Smith Predictors, one operating "in a circuit between association cortex and motor cortex, via the lateral [cerebellar] hemispheres" (pp210-211), and another "situated in the intermediate cerebellar cortex, operating on the outflow from the motor cortex", as now shown .....

Figure 9 - Miall et al's (1993) Cerebellar Predictor System: Diagram (a) shows the possible use of two linear predictors [highlighted in grey] in helping the sensorimotor system control a joystick. The proposed control flow takes the same basic shape as that shown in Figure 3, save that the whole thing has now been aligned top-to-bottom rather than left-to-right. Note the double feedback loops [blue captions, left] and the insertion of the two cerebellar predictors into the main control [ie. downward] flow. Diagram (b) shows EXACTLY THE SAME system components and flow logic, but redrawn as a five-module three-level cognitive hierarchy, so that it is visually more or less totally cognate with diagrams of this genre going back to Kussmaul (1878) [see the caption to Figure 1 above for a fuller list]. For the supporting mathematics, we recommend the e-tutorial by Gawthrop (2004 online).

Diagram (a) redrawn from a black-and-white original in Miall, Weir, Wolpert, and Stein (1993/2004 online, p211; Figure 8). Diagram (b) our realignment and annotation of (a). This combined graphic Copyright © 2004, Derek J. Smith.
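The essence of a Smith Predictor - a fast internal model standing in for delayed sensory feedback - can be sketched in discrete time. This is our illustration of the general mechanism only, not of Miall et al's cerebellar circuitry; the plant (a simple integrator), the five-tick delay, and the controller gain are all arbitrary choices:

```python
# Minimal discrete-time Smith-predictor sketch (our illustration of the
# general mechanism; plant, delay length, and gain are arbitrary).

from collections import deque

DELAY = 5        # feedback delay in ticks (stands in for conduction delay)
GAIN = 0.4       # proportional controller gain
TARGET = 1.0     # desired plant output

def simulate(ticks=60):
    y = 0.0                                  # true plant state (an integrator)
    pipe = deque([0.0] * DELAY)              # delayed sensory feedback line
    model_y = 0.0                            # fast internal model (no delay)
    model_pipe = deque([0.0] * DELAY)        # delayed copy of the model
    for _ in range(ticks):
        sensed = pipe[0]                     # what the senses report (stale)
        # Smith predictor: fast model output, corrected by the mismatch
        # between real delayed feedback and the model's delayed output
        predicted = model_y + (sensed - model_pipe[0])
        u = GAIN * (TARGET - predicted)      # control the *predicted* state
        pipe.append(y); pipe.popleft()       # advance true plant + delay line
        y += u
        model_pipe.append(model_y); model_pipe.popleft()   # advance model
        model_y += u
    return y

print(round(simulate(), 3))   # settles on the 1.0 target despite the delay
```

With an accurate internal model the correction term vanishes and the controller effectively sees an undelayed plant, so the output converges smoothly instead of oscillating - which is precisely the virtue claimed for predictive control throughout this section.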

Finally, there has been a parallel recognition of the importance of predictive control within the Artificial Intelligence community. Butz, Sigaud, and Gerard (2003) [details] provide a valuable recent summary of what is going on here. Indeed, their volume is introduced as follows .....

"The matter of anticipation is, as the editors of this volume state in their preface, a rather new topic. Given the almost constant use we make of anticipation in our daily living, it seems odd that the bulk of psychologists have persistently ignored it. However, the reason for this disregard is not difficult to find. The dogma of the scientific revolution had from the outset laid down the principle that future conditions and events could not influence the present. The law of causation clearly demands that causes should precede their effects and, therefore, concepts such as purpose, anticipation, and even intention were taboo because they were thought to involve things and happenings that lay ahead in time. An analysis of the three concepts - purpose, anticipation, and intention - shows that they are rooted in the past and transcend the present only insofar as they contain mental representations of things to be striven for or avoided. [.....] To anticipate means to project into what lies ahead a mental representation abstracted from past experience." (Von Glasersfeld, 2003; bold emphasis added).

The volume in question then offers papers across the spectrum of research, including such areas as "anticipatory behavioural control", "anticipatory learning", and "preventative state anticipation", and the general point is well brought out in the paper by Alexander Riegler of Brussels Free University.

11 - References

See the Master References List


[How to Draw Cognitive Diagrams]