Course Handout - Mode Error in System Control
Copyright Notice: This material was written and published in Wales by Derek J. Smith (Chartered Engineer). It forms part of a multifile e-learning resource, and subject only to acknowledging Derek J. Smith's rights under international copyright law to be identified as author may be freely downloaded and printed off in single complete copies solely for the purposes of private study and/or review. Commercial exploitation rights are reserved. The remote hyperlinks have been selected for the academic appropriacy of their contents; they were free of offensive and litigious content when selected, and will be periodically checked to have remained so. Copyright © 2003-2018, Derek J. Smith.
First published online 11:58 BST 31st March 2003, Copyright Derek J. Smith (Chartered Engineer). This version [2.0 - copyright] 09:00 BST 4th July 2018.
1 - The Concept of Hierarchical Motor Control
For well over a century, motor theorists have favoured the notion that voluntary behaviour is initiated at the top of what is known as a "motor hierarchy", a set of vertically integrated cognitive processes which progressively convert the idea of making a movement into the muscle activity necessary to put that idea into effect. [More on the Motor Hierarchy]
For almost as long, motor theorists have also favoured the notion that there exist high-level mental representations of complex skilled movements, each capable of generating a longitudinally complete and appropriately modulated sequence of muscle contractions. This concept has gone by many names over the years. It is what Weiss (1941) called "behaviour sequences", what Lashley was talking about in his paper "The Problem of Serial Order in Behaviour" (Lashley, 1951), what Schmidt had in mind when he described the "motor schema" (Schmidt, 1975), and what many since Henry and Rogers (1960) have described as "motor programming". [More on Motor Programming]
The topic of motor behaviour is also intricately linked with mainstream Attention Theory. For example, Shallice (1982) invokes the Norman and Shallice (1980) model of the Supervisory Attentional System in his discussion of planned action. This model regards the basic unit of action as the "action schema", a "sensori-motor knowledge structure" (Norman, 1981, p3) "that can control a specific overlearned action or skill such as [.....] doing long division, making breakfast, or finding one's way home from work" (Shallice, 1982, p199). Shallice sees such schemas as being activated in various ways by different aspects of cognition, but especially by other schemas already in progress, and by new perceptual events. [More on Attention Theory - UNDER CONSTRUCTION]
2 - Feedforward and Feedback in Hierarchical Motor Control
Motor theory also overlaps in many respects with the science of control in general, that is to say, with the science of "cybernetics". The concepts which matter most are those of "feedforward", the downward flow of motor instructions, and "feedback", the consequent, and vitally important, upward flow of "knowledge of results" (KR). These flows are shown in Figure 1 below. [More on Cybernetics] [More on Biological Cybernetics]
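By way of a concrete (if much simplified) illustration, the following Python sketch blends the two flows: a feedforward command derived from the movement plan, corrected at each step by the feedback error. The toy first-order plant, the gains, and all the names are our own illustrative assumptions, not anything drawn from the cybernetics literature .....

```python
# Minimal sketch of feedforward plus feedback control, assuming a toy
# first-order plant (position changes at the commanded rate). All
# numbers and names are illustrative.

def run_controller(target=10.0, steps=20, dt=0.1, k=2.0):
    duration = steps * dt
    x = 0.0                           # current state, e.g. limb position
    u_ff = target / duration          # feedforward: rate demanded by the plan
    for n in range(steps):
        x_ref = target * (n * dt) / duration  # where the plan says we should be
        error = x_ref - x             # feedback: knowledge of results (KR)
        u = u_ff + k * error          # downward flow: the corrected command
        x += u * dt                   # the plant responds
    return x

print(run_controller())               # tracks the plan and ends at 10.0
```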
3 - The Concept of Control Modes
The Norman-Shallice model is in fact a three-layer/five-box control hierarchy (similar to Craik, 1945) sculpted on top of a sixth box containing the schema selection process [more on the rules and conventions of this sort of diagram construction]. This latter process is characterised as relying as much on inhibitory mechanisms as upon excitatory, so that the momentary salience of one motor program comes in large part from a carefully synchronised lack of "contention" from all the others (Shallice, 1982, p200). As such, the model potentially has a lot to say about the phenomenon of "control modes" in behaviour.
Control modes were first proposed in the early 1980s (e.g. Norman, 1981), and are significantly broader in scope than schemas. In fact, each control mode has four important qualities, namely (a) that it controls a repertoire of related lower-level motor programs, (b) that it has competing modes, (c) that these competing modes can be momentarily inhibited, and (d) that once it has itself been activated it will therefore possess a behavioural momentum of some sort. Each mode, in other words, is "a manner of behaving" (Degani, Shafto, and Kirlik, 1995/2003 online).
Example: Suppose there existed a control mode for <PLAY CRICKET>. This would have available to it a number of motor programs, covering such behaviours as <BOWLING>, <BATTING (DEFENSIVE)>, <BATTING (ATTACKING)>, etc., and each motor program would be given momentary control of the necessary muscles as the time came for it to be activated. The point is that the muscle activity and the motor programming for an accurate <FIELDING (LONG THROW)> are identical to those for <TOSS GRENADE (LONG THROW)>, the two being distinguished only by the fact that the latter would be controlled by a totally different control mode such as <FIGHT WAR>.
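The four qualities (a) to (d) can likewise be sketched in a few lines of Python. The class design below is purely our own illustration (the mode and program names are borrowed from the cricket example above) .....

```python
# Sketch of a control mode showing: (a) a repertoire of related motor
# programs, (b) competing modes, (c) inhibition of those competitors on
# activation, and (d) behavioural momentum - it stays active until
# explicitly stopped. Illustrative only.

class ControlMode:
    def __init__(self, name, repertoire):
        self.name = name
        self.repertoire = repertoire   # (a) the programs it controls
        self.competitors = []          # (b) the rival modes
        self.active = False
        self.inhibited = False

    def activate(self):
        for rival in self.competitors:
            rival.inhibited = True     # (c) suppress the contention
        self.active = True             # (d) momentum until deactivated

    def run(self, program):
        if self.active and program in self.repertoire:
            return f"<{self.name}>: executing <{program}>"
        return f"<{self.name}>: <{program}> not available in this mode"

cricket = ControlMode("PLAY CRICKET",
                      ["BOWLING", "BATTING", "FIELDING (LONG THROW)"])
war = ControlMode("FIGHT WAR", ["TOSS GRENADE (LONG THROW)"])
cricket.competitors, war.competitors = [war], [cricket]

cricket.activate()
print(cricket.run("FIELDING (LONG THROW)"))
print(war.inhibited)                   # True - the rival is suppressed
```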
4 - Interrupt Mechanisms in Hierarchical Motor Control
Now the problem with any programmable system is that once a particular course of action has been selected it wants to run its course - that, as we have seen, is its role in life. The question therefore arises as to how to abort an ongoing motor program and re-select a more appropriate control mode should an emergency develop. For example, our cricketer may be halfway through executing his long throw only to feel one of the muscles in his arm give way. Not only must all contraction of the affected muscle then immediately stop, but so, too, must the mode which is driving it along (being replaced, perhaps, by the mode <PAUSE TO CHECK INJURY (ARM)>); and because Norman and Shallice's inhibitory mechanisms will have temporarily suppressed most of the alternative modes and programs, this may take not inconsiderable time and effort. Indeed, in engineering terms, there turns out to be only one solution to this problem, and that is for the lower levels in the control hierarchy to have mechanisms in place which can force the higher modules to stop what they are doing. Figure 1 shows how such "Interrupt Mechanisms" fit into the broader control picture .....
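Before turning to Figure 1, the engineering point can be made with a short Python sketch, treating the interrupt as an exception raised at the lowest level and handled at mode level. The layer functions and the injury scenario are our own illustration .....

```python
# Sketch of a lower level forcing the levels above it to stop what they
# are doing. Illustrative only.

class MuscleFailure(Exception):
    """Raised when a pain sensor fires at the lowest level."""

def muscle_level(step):
    if step == 2:                      # injury midway through the throw
        raise MuscleFailure("arm muscle gave way")

def motor_program_level():
    for step in range(5):              # <FIELDING (LONG THROW)> unrolls
        muscle_level(step)

def control_mode_level():
    try:
        motor_program_level()
    except MuscleFailure as interrupt:
        # Both the ongoing program and the mode driving it are
        # discontinued, and a more appropriate mode takes over.
        return f"switching to <PAUSE TO CHECK INJURY (ARM)>: {interrupt}"
    return "long throw completed"

print(control_mode_level())
```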
Figure 1 - Control Flows in a Five-Box Control Hierarchy: Here is the classical three-layer/five-box control hierarchy [reminder], now amended for enhanced flexibility of control and hence greater adaptive worth. Two major changes have been made ..... (1) To Show Feedforward and Feedback Separately: The main ascending and descending flow lines have now each been paired with a backchannel (the descending green arrows on the left, and the ascending green arrows on the right). On the input pathways (left), the resulting two-way flow allows orienting and fine tuning of the body's sensory systems, and thus more accurate perception, whilst on the output pathways (right) it allows feedback - knowledge of results (KR) - to be passed back up the hierarchy. (2) To Show Interrupt Mechanisms: The two motor modules (bottom right) have additionally been given a second up arrow (the red ascending arrows). These are the Interrupt Mechanisms described above. Interrupts will be triggered in the first instance by the body's pain or runaway movement sensors, passed across the left-to-right reflex pathway (red, centre) to act as an Emergency Brake, and then relayed upwards to force discontinuation (a) of the ongoing motor program, and (b) of the overriding control mode. NB: Note how the lower motor system interrupts the upper motor system, which in turn interrupts the higher cognitive system. This is similar to the error handling which needs to be written into complex computer programs whenever the normal flow of control is "nested" into layers [definition and example]. The point is that once the flow of control has been passed downwards several times, it needs to be explicitly retrieved the same number of times, once from each control layer, starting at the one at which the error occurred. Among IBM CICS programmers, where an irrecoverable processing failure is known as an "ABEND" [= ABnormal ENDing], this upward cascading of failure is known as "propagating an ABEND". As indicated by the giant arrows on the right hand side of the diagram, all descending information within the motor hierarchy counts as Feedforward information, and all ascending information counts as Feedback. In normal ongoing behaviour, minor adjustments to feedforward can be made on the strength of this feedback, but only every 500 msec. or so (i.e. approximately twice per second). This is why we find it so difficult to respond to a rapid succession of emergencies at the wheel of our car! Reflex freezing is much faster, at about 30 msec., split approximately 50-50 between pure reaction time and neural transmission time (Schmidt, 1978). NB: There can be no greater demonstration of the robustness and versatility of this particular control architecture than the fact that this is how the cognitive system evolved in higher vertebrate life. After all, Mother Nature unhesitatingly punishes weak or dysfunctional systems with swift oblivion. For more on the phylogenetic perspective to biological information processing, see Smith (1991) and Smith and Stringer (1997).
Enhanced from Figure 1.2(b) in our introductory paper on Human Error. This version Copyright © 2003, Derek J. Smith.
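The "propagating an ABEND" behaviour described in the caption can itself be sketched in Python, with each nested layer catching the failure, tidying up, and re-raising it to the layer above - once per layer, starting at the layer where the error occurred. The layer names are our own illustration, not CICS code .....

```python
# Sketch of an ABEND propagating up through nested control layers.
# Illustrative only.

class Abend(Exception):
    """An irrecoverable processing failure."""

def lower_motor_layer():
    raise Abend("failure at the lowest layer")

def upper_motor_layer():
    try:
        lower_motor_layer()
    except Abend:
        print("upper motor layer: relinquishing control")
        raise                          # propagate upward, one layer at a time

def higher_cognitive_layer():
    try:
        upper_motor_layer()
    except Abend as e:
        print(f"higher cognitive layer: handling ABEND ({e})")

higher_cognitive_layer()
```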
5 - Motor Theory and Human Error
Norman (1981) explicitly links the Norman-Shallice model (see Section 1) to behavioural errors, describing motor programming errors as "action slips", a term which was subsequently incorporated into the error taxonomies of Rasmussen (1983) and Reason (1990). Such errors, Norman claims, are "compelling sources of data" (Norman, 1981, p2), and much can be learned from the sort of verbal slips which abound in everyday conversational speech. However, the phenomenon of control mode error is perhaps most obvious when we take humans and put them "behind the wheel" in some way. For example, skilled motorists have many discrete "manoeuvres" available to them - such operations as turning, reverse parking, lane changing, etc. These operations are selected at control mode level, and then motor programs are activated according to the momentary demands of road and traffic conditions. The same goes for pilots, who have at their disposal a repertoire of discrete mental control modes for such manoeuvres as climbing, level flight, preparing for landing, etc. Degani, Shafto, and Kirlik (1995/2003 online) summarise this highly complex state of affairs as follows .....
"Taken
as a whole, a system can have several ways of behaving; but at any point in
time only a single mode can be active. [.....] Once a mode is active, it will
operate according to its characteristic behaviour while attempting to maintain
[preset] target values." (Degani,
Shafto, and Kirlik, 1995/2003 online; ¶2.1 and ¶2.4.)
As it happens, Rasmussen and Reason arrived at much the same solution, namely a three-level motor hierarchy similar to that shown in Figure 1. In so doing, they brought together into a single explanatory model the hierarchy concept, the programming concept, the mental model concept, the philosophy of knowledge and knowledge levels, the psychology of volition, and the laws of cybernetics. This was an extremely far-sighted cross-disciplinary adventure, and we see it paying off when Reason (1990) thereby becomes able to argue that the element of "feedforward" control - that is to say, of motor preselection by control mode and motor program - is typical of both skill-based and rule-based behaviour, and is absent only in knowledge-based behaviour. This is how he brings the various jargon together .....
"Control
at the KB [= knowledge based] level, however, is primarily of the feedback
kind. This is necessary because the problem solver has exhausted his or her
stock of stored problem-solving routines, and is forced to work 'on line',
using slow, sequential, laborious, and resource-limited conscious processing.
The focus of this effortful functional reasoning will be some internalised
mental model of the problem space." (Reason, 1990, p57).
6 - Mode Error in the "Glass Cockpit"
This brings us to the purpose of the present paper, which is to consider how well the Rasmussen-Reason model can explain command and control failures in today's increasingly automated systems.
HISTORICAL ASIDE: Because they are usually at the cutting edge of invention, aviators have always needed better technology than money could buy. As a result, aviation has always been a rich marketplace for companies specialising in technological innovation, and it was not long after the Wright Brothers took to the air in 1903 that the control and instrumentation companies started to get involved. We have already described elsewhere [reminder] how the Sperry Gyroscope Company grew into a major international corporation on the back of naval control systems, and they were quick to see the same opportunity for lightweight systems in aircraft. By 1913, Sperry was using gyroscopic stabilisers as part of a radio-controlled aircraft (Pearson, 2003 online), and the demands of the First World War extended this work into the control of aerial torpedoes and the design of better bombsights. A military autopilot-bombsight was developed during the 1920s by Carl Norden, an ex-Sperry man, and Sperry's own A-3 autopilot was introduced into commercial aircraft in the 1930s. The Norden bombsight was so successful that it dominated the US Army Air Forces market until the end of World War Two (it even had the dubious honour of putting the crosshairs on downtown Hiroshima one August morn in 1945). At around the same time, the pilotless German V-1 flying bomb and V-2 rocket had autopilots and guidance systems capable of steering them several hundred miles. Digital computers started to enter the cockpit as soon as their size and weight allowed towards the end of the 1950s. The General Dynamics F-16 [picture] was the first mass-produced "fly-by-wire" aircraft. When it was introduced in the late 1970s, its designers actually took the bold step of making it unflyable by an unaided human pilot - the aircraft was so inherently unstable (in the interests of combat manoeuvrability) that the control surfaces had to be automatically adjusted at machine speed by computer-assisted servomechanisms. Airbus Industrie's A320 [picture] was the first mass-produced "glass cockpit" aircraft. Its cockpit is to all intents and purposes a giant computer screen, with computer graphic instrumentation replacing all the old dials and displays, and it is controlled via a sidestick joystick little different to those used in computer gaming [picture].
Unfortunately, it soon emerged that all this automation could kill if aircraft designers did not get the "man-machine interface" exactly right, and one of the most persistent problems under this heading emerges from the woodwork whenever the machine is given control modes of its own. This is because although the purpose of automation is to help pilots, it actually places new demands on them. Specifically, they need a mental control mode to do no more than set and reset their cockpit control modes! This is how Sarter and Woods (1995) explain this troublesome (and potentially deadly) anomaly:
"Because
the human supervisor must select the mode best suited to a particular
situation, he or she must know more than before about system operations
and the operation of the system as well as satisfy new monitoring and
attentional demands to track which mode the automation is in and what it is
doing to manage the underlying processes. [.....] Note that mode error is
inherently a human-machine system breakdown, in that it requires that the users
lose track of which mode the device is in." (Sarter and Woods, 1995, pp5-6; emphasis added.)
Sarter and Woods (1995) support their argument with the story of the 1990 Bangalore air disaster [database entry]. This was a clear automation mode error during an approach for landing, in which the pilots accidentally selected a control mode called <OPEN DESCENT>, and were then unable in the time available to work out what they had done wrong. In this particular mode, the aircraft cuts back engine power and thereafter maintains its speed by progressively losing height. As a result, the rate of descent is immediately too great for safe landing, and, by the same token, the aircraft is guaranteed to undershoot the runway. The <OPEN DESCENT> mode therefore makes it impossible to maintain a meaningful approach to landing, or to override the lack of power, locking the crew into certain disaster unless and until the mode is cancelled. At Bangalore, the pilots only discovered their error 10 seconds before impact, leaving them too little time for the idling engines to spool back up to full thrust. Similar errors contributed to disasters at Habsheim (1988), Strasbourg (1992), and Nagoya (1994) [database entries], and to near-disasters at Moscow (1991) and Paris (1994).
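The geometry of the trap is easily sketched. The two modes and the descent rates in the following Python fragment are purely illustrative (they are not flight data); the point is simply that an idle-thrust mode descends faster than the approach profile allows .....

```python
# Toy comparison of a powered approach with an idle-thrust open descent.
# Rates are illustrative, not flight data.

def descend(mode, height_ft=1000, seconds=60):
    rate_ft_per_s = {"POWERED APPROACH": 12, "OPEN DESCENT": 25}[mode]
    for _ in range(seconds):
        height_ft -= rate_ft_per_s
        if height_ft <= 0:
            return f"<{mode}>: ground reached early - undershoot"
    return f"<{mode}>: {height_ft} ft remaining at the threshold"

print(descend("POWERED APPROACH"))     # stays on profile
print(descend("OPEN DESCENT"))         # undershoots the runway
```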
It is, of course, only mildly reassuring to learn that the root cause of this particular problem has been known about for some time. Degani, Shafto, and Kirlik (1995/2003 online) suggest that when the aerospace industry decided to computerise the cockpit they simply gave a new lease of life to some quite old computing problems. The computer industry, it turns out, had long been having problems with its so-called "human-computer interface" (HCI), and these problems did not disappear when those same systems were installed in aircraft. In turn, the root cause of the HCI problem is that keyboard size has always required a trade-off between the number of inputs required and the internal coding system used. As a result, systems designers have always arranged for the main array of keys to be re-used in a number of different "data entry modes". There are many examples of this, but perhaps the most instantly recognisable is the "shift-plus-26" system for coping with the upper and lower case English alphabets when typing. This was introduced in the 1878 Remington No. 2 typewriter [picture], and means that the A-key on its own delivers a lower case "a", whilst SHIFT-A delivers an upper case "A". The shift-key is thus a rudimentary control mode [and most of us will know how easy it is to flip into tHE WRONG MODE FROM TIME TO TIME]. Teleprinter systems designers were even keener to re-use their available keys in clever ways, because the requirement to send their end product down a wire meant they had even fewer keys to play with .....
ASIDE: In 1874, Jean-Maurice-Émile Baudot (1845-1903) developed the Baudot printing telegraph, a system capable of sending the full alphabet, plus the numbers and the most common punctuation marks, using only a five-key keyboard. The system was adopted in 1877 by the French telegraph service, and improved versions were in use in the British Post Office around the turn of the century. The historical point about Baudot's code (also known as the International Telegraph Code No. 1) is that its five-key keyboard actually allows only a 32-item codebook, so Baudot went instead for a "one-plus-31" system and prefixed his transmissions with a reserved code indicating whether what followed was from the alphabetic list, or a second list comprising the digits and the punctuation marks. [For the technological context to Baudot's invention, click here. For the British version of the Baudot codebook, see Hobbs (1999/2002 online, Figure 1), and for more on binary alphabets in general, click here.]
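Baudot's "one-plus-31" arrangement is a control mode in miniature, and a few lines of Python make the point. The two-entry codebooks below are hypothetical stand-ins for the real 31-item lists; note how losing the reserved shift code leaves the receiver in the wrong mode - mode error in its purest form .....

```python
# Sketch of a Baudot-style shift code: one reserved code toggles the
# receiver between a letters codebook and a figures codebook. The
# codebooks are hypothetical two-entry stand-ins.

LETTERS = {0b00001: "A", 0b00010: "B"}
FIGURES = {0b00001: "1", 0b00010: "2"}
SHIFT = 0b11111                        # reserved code: change codebook

def decode(codes):
    book, out = LETTERS, []
    for code in codes:
        if code == SHIFT:
            book = FIGURES if book is LETTERS else LETTERS
        else:
            out.append(book.get(code, "?"))
    return "".join(out)

message = [0b00001, 0b00010, SHIFT, 0b00001, 0b00010]
print(decode(message))                      # "AB12", as the sender intended
print(decode(message[:2] + message[3:]))    # "ABAB" - the shift code was
                                            # lost, so the receiver stays
                                            # in the wrong mode
```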
There is even a direct causal link between late 19th century typewriter and telegraph systems and the late 20th century glass cockpit. This lies in the fact that when the inventors of the modern digital computer needed input and output devices in the 1940s, they simply requisitioned or cannibalised existing telegraph equipment [fuller story].
7 - Mode Transition
So what does cognitive psychology know about the transition from one control mode to another? Well, again we need look no further than the Norman-Shallice model of the Supervisory Attentional System. This from Norman's (1981) discussion of the psychology of action slips .....
"One
interesting aspect of slips is people's ability (or inability) to detect them.
Many slips are caught at the time they are made. Sometimes they are caught just
prior to their occurrence, but with insufficient time to prevent the act, or at
least the initial stages of the act. For a slip to be started, yet caught,
means that there must exist some monitoring mechanism
of behaviour - a mechanism that is separate from that responsible for the
selection and execution of the act. [Heading] The proposed model, an activation-trigger-schema
system (ATS), assumes that action sequences are controlled by sensori-motor knowledge structures: schemas [.....]
The operation of the model is based on activation and selection of schemas and
uses a triggering mechanism that requires that appropriate conditions be
satisfied for the operation of a schema. [.....] The novelty of the current
model lies in several of its aspects: first, the combination of schema,
activation values, and triggering conditions; second, the application of motor
action sequences; third, the role of intention; fourth, the consideration of
the operation of cognitive systems when several different action sequences are
operative simultaneously; and fifth, the specific application of this framework
to the classification of slips." (Norman, 1981, p3;
italics original, bold added.)
Mode transitions therefore require (a) detection of the need by the monitoring mechanism, (b) suppression of the ongoing action by reducing its activation value, (c) sensitisation of the activation value of the selected alternative action, and (d) the triggering of that alternative action.
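Expressed as a sketch, the four steps might look as follows. The activation values and the threshold are illustrative assumptions on our part, not parameters from Norman's ATS model .....

```python
# Sketch of a four-step mode transition driven by activation values.
# Values and threshold are illustrative.

activation = {"LONG THROW": 0.9, "PAUSE TO CHECK INJURY": 0.2}
THRESHOLD = 0.5

def transition(current, alternative, need_detected):
    if not need_detected:              # (a) the monitoring mechanism
        return current
    activation[current] = 0.0          # (b) suppress the ongoing action
    activation[alternative] = 0.8      # (c) sensitise the alternative
    if activation[alternative] > THRESHOLD:
        return alternative             # (d) trigger it
    return current

print(transition("LONG THROW", "PAUSE TO CHECK INJURY",
                 need_detected=True))  # PAUSE TO CHECK INJURY
```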
More recently, Degani, Shafto, and Kirlik (1995/2003 online) have studied mode occupancy and mode-to-mode transitions during 30 flights of Boeing 757/767 type airliners, cataloguing the mode transitions which may be identified in the control period between take-off and touchdown.
Degani et al provide descriptive data on how often and for how long each mode gets selected, and graphical data [see their Figure 6] on what mode(s) is/are likely to follow. They use the term "mode trajectory" to summarise the sequence of modes activated during any one control period.
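By way of illustration, occupancy times and transition counts can be pulled out of a logged mode trajectory in a few lines of Python. The trajectory below is hypothetical, and is not Degani et al's data .....

```python
# Sketch of summarising a "mode trajectory" as mode occupancy and
# mode-to-mode transition counts. Hypothetical data.

from collections import Counter

trajectory = [("TAKE-OFF", 40), ("CLIMB", 300), ("LEVEL FLIGHT", 1800),
              ("CLIMB", 120), ("LEVEL FLIGHT", 900), ("DESCENT", 600)]

occupancy = Counter()
transitions = Counter()
for i, (mode, seconds) in enumerate(trajectory):
    occupancy[mode] += seconds
    if i > 0:
        transitions[(trajectory[i - 1][0], mode)] += 1

print(occupancy.most_common(1))        # which mode holds control longest
print(transitions)                     # which mode tends to follow which
```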
8 - The Big Mystery: Counting the Control Layers
Hourizi and Johnson (2001) have also considered disaster data, and see the well-trained cognitive system as pursuing a strategic level of control activity rather than simple mode selection. The strategies dictate which modes are going to be acceptable, and the modes then activate the schemas necessary to get the job done. Hourizi and Johnson adopt the concept of "task knowledge structures" to store details of the available tasks, where the tasks are defined as "well-defined, proceduralised, activities" (p7) and strategies are .....
".....
knowledge structures which would allow a system
operator (in our example, a pilot) to coordinate, monitor and (potentially)
alter a lower level activity or 'task', according to the particular context in
which they found themselves. In other words, it would describe the management
activity needed to perform a group of lower level tasks in a complex, changing,
interactive environment." (Hourizi
and Johnson, 2001, p9.)
The Hourizi and Johnson paper is theoretically challenging for two reasons. Firstly, it reminds us how little is really known about the internal dynamics of high-level behavioural selection, and secondly, it shows how easy it is for theorists to introduce new layers into the control hierarchy when it suits them. And at the root of both these problems is a serious difference of opinion between theorists like Weiss who go for six (or so) levels of control [reminder], and those like Craik, Norman, Rasmussen, and Reason, who go for three (or so).
So how can two theoretical camps differ so fundamentally in their analysis of the same physical system? Well, our own view is that both approaches stand up to critical inspection, and that any apparent paradox is explained away as soon as we distinguish between physical module and functional process. We regard Weiss as correctly identifying six (or so) layers of control processing, and Craik, Rasmussen, and all the others as correctly identifying three (or so) layers of modular architecture. Hourizi and Johnson's strategic level of control will thus match well with Weiss's top level, but not at all well with Rasmussen's, because the former is a process and the latter is an entire physical layer capable of supporting many processes, themselves internally hierarchical.
What this strongly implies, of course, is that some physical modules have to carry out more than one functional process; in other words, that there is a one-to-many relationship between a cognitive module and the processes carried out within it. What is far from established, however, is the mapping of the six (or so) processes into the three (or so) modules - it could be 1:3:2, or 2:2:2, or 1:4:1, for example, or it could even vary from moment to moment as neural resources suddenly reschedule themselves to different task demands and/or states of alert. As we have repeatedly argued elsewhere [example], there is as yet little recognition of this issue in the psychological literature, nor of the implications for the inter- and intra-modular coordination necessary to keep everything working smoothly [hence our separate paper on How to Draw Cognitive Diagrams].
9 - Specific Instances of Mode Error
The following instances in the disaster literature either have been, or can be, interpreted using the mode error approach:
Even the Titanic shipping disaster has a control mode angle to it, notwithstanding the fact that it came at the beginning of the automation age. This is because the ship was being controlled as though it were undertaking daytime speed trials in clear seas, and not with the foresight and caution appropriate to a night-time transit through an area of known ice risk.
10 - References
See the Master References List