Course Handout - From Frontal Lobe
Syndrome to Dysexecutive Syndrome
Copyright Notice: This material was
written and published in Wales by Derek J. Smith (Chartered Engineer). It forms
part of a multifile e-learning resource, and subject only to acknowledging
Derek J. Smith's rights under international copyright law to be identified as
author may be freely downloaded and printed off in single complete copies
solely for the purposes of private study and/or review. Commercial exploitation
rights are reserved. The remote hyperlinks have been selected for the academic
appropriacy of their contents; they were free of offensive and litigious
content when selected, and will be periodically checked to have remained so. Copyright © 2003-2018, Derek J. Smith.
First published online 08:59 GMT 13th March 2003, Copyright
Derek J. Smith (Chartered Engineer). This version [2.1 - links to graphics] 09:00 BST 5th July
2018.
The skeleton of this material previously appeared in Smith (1996; Chapter 5). It is repeated here with major extensions and supported with hyperlinks.
1 - Frontal Lobe Neuroanatomy
From
the highest primate to the lowliest invertebrate, the nervous system has
sustained the animal kingdom against predation and environmental hazard for
around half a billion years, and this, in vertebrates, has meant reliance on
progressive specialisation of the brain as
ganglion-in-chief of an axially elongated and segmented system. Admittedly, a
lot of sensory information is processed at the various segments of the spinal
cord, but this is only for reflex or biomechanical purposes (balance, say, or
multiple limb coordination), and as soon as any "higher function" is
needed the information is routed instead "rostrally"
- forwards - to the brain. This basic set-up was already well known by the
early 19th century.
Key Concept - The
Triune Brain: A number of useful neuroanatomical names derive from
the order of evolutionary emergence of the structures in question, that is to
say, from phylogenetic considerations. These are the terms palaeocortex ("ancient cortex"), archicortex ("old cortex") [the term archipallium ("old cloak") is also often
seen], and neocortex ("new cortex") [the term neopallium ("new cloak") is also often
seen, and is in fact more accurate because the term cortex, strictly speaking,
refers solely to the surface grey matter of the cerebrum, not to its overall
bulk]. The palaeocortex is the hippocampal gyrus, a
gyrus which in humans is situated on the medial surface of the temporal lobe.
Historically, it was the first cortex to emerge, and was probably olfactory in
function. The archicortex is the hippocampus, an
important nucleus which in humans is situated deep within the temporal lobe. It
was the next area of cortex to emerge, and was "largely concerned with the
integration of information from different sensory modalities" (Rose, 1976,
p169). The neopallium is the rest of the cerebrum,
and, in humans, includes by far the largest area of cortex. Considerations of
this sort prompted the neurologist Paul MacLean to distinguish between (1) what
he called "the reptilian brain", the most primitive structures
of the brainstem, midbrain, and cerebellum, (2) the "limbic
system", the phylogenetically more recent structures of the
diencephalon and archicortex, and (3) the neopallium, the most recent addition of all.
Here
are the main components of the forebrain, set in the broader context of the
main divisions of the central nervous system .....
Figure 1 - Structures of the Forebrain: Here is a tree-structured analysis showing the
component structures of the vertebrate brain, and with conventional
neuroanatomical naming shown in black. MacLean's reptilian brain is then
indicated by the tan highlighting, and his limbic brain by the mauve highlighting.
Structures not specifically highlighted make up the neopallium,
that is to say, the neocortex, its underlying white matter, and the basal
ganglia.
Simplified from a black-and-white original in Smith (1997c; Figure 1.4). This graphic Copyright © 2003-2004, Derek J. Smith.
The gradual development of rostral processing power during evolution is often
referred to as "encephalisation", and there is a logical
pattern to what happens as you move up through the animal kingdom, because
brain anatomy simply follows lifestyle demand. Thus species which need sharp
eyes (eg. eagles) grow large eyes and "visual brains", those
which need acute hearing (eg. bats) grow large ears and "auditory
brains", and those which need an acute sense of smell (eg. fish,
reptiles, and lower mammals such as rodents) develop extensive olfactory cortex
and exhibit behaviour driven in large part by the sense of smell. Those, on the
other hand, which need to be flexible and solve problems develop the "uncommitted
cortex" needed to support the necessary higher functions. It is
therefore commonly accepted that the best indicator of brain power is forebrain
development in general, and frontal lobe development in particular. Primates,
for example, have been large-brained throughout their known history (Jerison,
1987), and the highest grade of encephalisation is shared by humans and
cetaceans (dolphins). Figure 2 shows some of the steps on the evolutionary
ladder leading ultimately to H. sapiens, and Figure 3 shows the basics
of human frontal lobe neuroanatomy .....
Figure 2 -
Evolution of the Vertebrate Cerebrum:
This diagram shows the schematised brain anatomy of (a) fish, (b) reptile,
(c) bird, (d) small mammal, (e) large mammal (non-primate), and (f) primate.
Note the progressive growth of the cerebrum (grey) with respect to both
midbrain and hindbrain. Note the close association of olfactory bulb with
palaeocortex and optic tract with optic tectum (only really visible in
specimens (a) to (c)), and that this, given the known motor functions of the
cerebellum, leaves little room for any behaviour other than sniffing, seeing,
and locomotion in animals (a) to (d). Only with the development of
"uncommitted cortex" (see text) is there brainpower to spare for
more complicated types of cognition. To cut a long story short, the key
product of encephalisation is the "forebrain", comprising
the lobes and lobules of the cerebral hemispheres, together with the
underlying basal ganglia and the mass of axon tracts which allow these
components to function as an integrated information processing whole. And the
jewels in the forebrain's crown are the frontal lobes, which in humankind
comprise the most rostral 40% of the cerebral hemispheres, anterior to the
Rolandic Fissure. Brodmann (1912) estimated that the prefrontal cortex
constitutes 29% of the total cortex in man, 17% in chimpanzees, 11.5% in
gibbons, and 8.5% in lemurs.
Redrawn from a black-and-white original in Smith (1997c; Figure 1.7). This graphic Copyright © 2003-2004, Derek J. Smith.
Figure 3 -
Frontal Lobe Neuroanatomy: Here is
a left lateral view of the undissected left frontal lobe. The precentral [or
"Pre-Rolandic"] gyrus [red stipple] is primary motor area, and the
band of superior, middle, and third frontal gyri in front of that [blue
stipple] is supplementary motor area. The remainder of the superior and
middle frontal gyri [white stipple] is known as the "prefrontal
region". It is generally involved in managing the activity of the more
caudal areas, and is specifically responsible for such higher cognitive
processes as forward planning and directed attention. The orbital cortex
[bottom centre] is so named because it is situated just above the orbit of
the eye. [For the corresponding Brodmann's Numbers, and an indication of the
layout of the medial aspect of the cerebral hemispheres, see Kleist (1934).]
Redrawn from a black-and-white original in Smith (1997c; Figure 2.1). This graphic Copyright © 2003-2004, Derek J. Smith.
2 - Frontal Lobe Syndrome
So
what happens when frontal lobe structures are damaged by injury or disease?
Well, one of the earliest accounts of a frontal lobe lesion is Bigelow's (1850)
description of the brain-injured American railway labourer Phineas P. Gage [timeline; other sources], who suffered
pronounced personality changes after an accidental prefrontal leucotomy. This
was followed by investigations of deliberately inflicted frontal lesions in
animals by the likes of Fritsch and Hitzig (1870) and Ferrier (1876/1886), and
shortly after that by the pioneer attempts at "psychosurgery",
that is to say, human brain surgery performed expressly for the purpose of
alleviating psychological (rather than medical) symptoms.
The
first planned (as opposed to accidental) attempts at psychosurgery were carried
out by Gottlieb Burckhardt on 29th December 1888, and first reported in 1891.
Burckhardt carried out various unilateral cortical lesions on a series of six
institutionalised psychiatric patients with "not spectacular"
results. The idea was then resurrected following the International Neurological
Congress in London in July 1935, at which the physiologist John F. Fulton and the comparative psychologist Carlyle Jacobsen reported on the removal of the frontal cortex in
chimpanzees, one of whom - an animal called Becky - had been cured of a
particularly temperamental disposition by the procedure [much as the fictitious
Randle P. McMurphy was "cured" of loving life just a bit too much in
the movie One Flew Over the Cuckoo's Nest]. In short, "the animal
without its frontal areas no longer appears to 'worry' over mistakes"
(Jacobsen, 1936). Jacobsen also cited the earlier work by Ferrier (1876/1886)
to the effect that removal of the frontal lobes (again in monkeys) caused no
impairment of the "special sensory or motor faculties", but
"very decided" alterations in the animals' character and behaviour.
Thus .....
"Instead of, as before,
being actively interested in their surroundings, and curiously prying into all
things that came within the field of their observation, they remained apathetic
and dull, or dozed off to sleep [.....]. While not actually deprived of
intelligence, they had lost, to all appearance, the faculty of attentive and
intelligent observation." (Ferrier, 1876, pp231-232.)
"The frontal lobes are
the seat of coordination and fusion of the incoming and outgoing products of
the several sensory and motor areas of the cortex" (Bianchi, 1895, p34).
This,
of course, was classic encephalisation restated, but Bianchi was then more
precise in 1922, when he summarised the animal studies as showing five distinct areas of frontal deficit.
The
point was that Bianchi's (1922) five areas of deficit tended to co-occur, patient after patient, and therefore qualified for
the medical descriptor "syndrome", and so "frontal lobe
syndrome" was born.
3 - Frontal Lobe Psychosurgery
Meanwhile,
one of the other delegates to the 1935 Congress, the Portuguese neurologist Egas Moniz, was sufficiently impressed by these reports to attempt similar lesions in human psychiatric patients.
Despite
the fact that the results of Moniz's first attempt at this procedure were not
wholly conclusive, the method was quickly introduced into the United States by Walter Freeman and James Watts.
ASIDE: Freeman seems to
have been a bit of a psychosurgery zealot - Youngson and Schott (1996)
mischievously suggest that he would cheerfully have used a food mixer to do the
job had one been handy!
Following
surgery, patients did indeed become less anxious and withdrawn, although their
intellectual level remained ostensibly unchanged. Particularly good results
were obtained in cases of paranoia, catatonia, and simple schizophrenia.
Side-effects of the surgery included loss of community consciousness and
feeling for others, cessation of dreaming, complacency, placidity, loss of
initiative, apathy, loss of concentration, loss of enthusiasm for life, and
reduced creativity and artistic expression. Also - since surgery reduced
disruptive emotional responses - the ability to interact socially tended to
return. Sadly, there was also a removal of social inhibitions, leading to
insensitivity to criticism, unrestrained self-exposure, or deteriorating table manners.
In other words, social interaction was not actually any the more meaningful,
but there was far less worry on the part of the patient whether it was or not.
As to why this effect should be seen, Denny-Brown (1951) describes frontal
polar cortex as having small cells, a thick internal granular layer, and
"an almost exclusive connection with the dorsomedial nucleus of the
thalamus" (p400). It is therefore in close communication with what is
known as the "limbic system", and thus with the full range of
the brain's emotional and motivational systems. The prefrontal region in
general is commonly linked with such functions as problem solving, behaviour
planning, working memory [glossary], and eye
movements. It is the more ventrally placed orbital cortex which is
involved in personality and social behaviour.
And
as to the procedures themselves, a number of variants soon emerged, two of
which were compared by Petrie (1952). In what he termed the bilateral
standard operation, an incision was made 3 cm behind the lateral margin of
the orbit and 5-6 cm above the zygoma. A 1 cm burr-hole was then drilled
through the skull, and the dura mater cut through and folded back. Finally, a
needle was pushed down through the exposed cortex and rocked to and fro through
the underlying white matter. In the bilateral rostral operation, the
needle is angled more obliquely forwards. The standard procedure thus isolates
Areas 9, 10, 11, 46, 47, and possibly part of Area 45, whilst the rostral
procedure isolates only Areas 9 and 10 - the prefrontal cortex proper - and
leaves the orbital cortex relatively intact [compare the two top arrows on
Figure 4]. However, with a deteriorating press and the discovery of the antipsychotic drug chlorpromazine in 1952, the numbers started to fall. By
the 1950s, psychosurgery had whipped up a storm of objections on a variety of
grounds, not least the difficulty in obtaining genuinely informed medical
consent in such circumstances [see Restak (op cit) for details]. The economics
were another cause of overeagerness to operate - the lobotomies cost $250 and
needed to be carried out only once, whereas the costs of institutionalisation
were over $35,000 per patient per annum. There were also further improvements
in technique, so that the surgery involved gradually became less extensive.
Knight (1966) reviewed many tractotomies and concluded that the most effective
lesion site was in a relatively small bundle of the thalamocortical tract as it
passes beneath the head of the caudate nucleus level with Area 13. He developed
a procedure known as the restricted undercut specifically to attack this
tract and this tract alone (for details of which, see Blakemore, 1977, p181).
The most modern methods are assisted by three-dimensional computer imaging of
the patient's brain. Electrodes are positioned using an externally mounted
stereotaxic frame, and the lesions produced either by electro-coagulation or minute
radioactive implant. Lesions can therefore be placed very precisely and extend
only a few millimetres. This allows what Girgis (1971) describes as
"unnecessary encroachment" upon uninvolved cortex to be more or less
totally avoided. The target area is still the orbital cortex, although Brian
Simpson of the University Hospital of Wales now attacks the thalamocortical tract very
early on, while it is still within the anterior parts of the internal capsule
(Simpson, 1996). Bridges (1996) reports that between 20 and 30 procedures are
currently carried out annually in Britain, mainly for depression or obsessive
anxiety.
Figure 4 -
Frontal Lobe Psychosurgery: Here is
Figure 3 again, overprinted now with the appropriate Brodmann's Numbers and
locating the various psychosurgical procedures described in the text. Each
arrow shows the point of entry of the instrument and the plane of the
resulting lesion. The Knight restricted undercut is shown as a double-headed
arrow to reflect the fact that the instrument enters laterally and is rocked
up and down. Note how the Freeman transorbital technique involves penetrating
the orbital cortex from below. Note also how the later procedures
concentrate on orbital cortex rather than prefrontal, and in this respect
note carefully the position of Area 13, as implicated by the Knight (1966)
report. [For a more complete display of the Brodmann's Numbers, and an
indication of the layout of the medial aspect of the frontal lobes, see Kleist (1934).]
Redrawn from a black-and-white original in Smith (1997c; Figure 3.8). This graphic Copyright © 2003-2004, Derek J. Smith.
4 - Causes of Frontal Lobe Lesions
Apart
from taking their fair share of the brain's normal exposure to cerebrovascular
and space occupying lesions, the frontal lobes are at risk in multiple
sclerosis, Huntington's disease, Alzheimer's disease, and the normal ageing
process (Stuss and Benson, 1986). They are also regularly involved in traumatic
brain injury, partly for the simple reason that they are frontal and therefore
take the brunt of any forward collision impact, and partly because the orbital
region is close to the relatively sharp contours of the bones of the orbit.
[For further details, including a brief animation of the coup/contre-coup
effect in closed head injury (CHI), click here.]
Another
not uncommon cluster of frontal lobe lesions arises following haemorrhage of the anterior
communicating artery (ACoA), that part of the
circle of Willis which connects the two anterior cerebral arteries just
anterior to the optic chiasm. According to Parkin and Leng (1993), the ACoA is
curiously prone to the development of aneurisms, and when such aneurisms
rupture they reduce the supply to the basal forebrain, the septal area, and the
anterior portions of the cingulate gyrus, fornix, hypothalamus, and corpus
callosum. As to the resulting clinical picture, Parkin and Leng (1993)
summarise a number of separate case reports and describe a severe confusional
state, attention problems, severe retrograde amnesia, and misorientation to an
earlier time period. Language and general knowledge are unaffected, and
recognition memory seems to be relatively well preserved compared to recall.
Confabulation (discussed in detail in Section 9) is common, as in Kapur and
Coughlan's (1980) ACoA patient SB .....
"He would claim, first
thing in the morning, to have fictitious business appointments, when in fact he
was attending a day centre, and would frequently dress for dinner in the
evening in the mistaken belief that guests were coming. He would also attempt
to take cups of tea outside, saying that these were for his foreman, who had
discontinued employment with him several years earlier." (Kapur and
Coughlan, 1980, p461)
5 - Early Attempts at Assessing Frontal Lobe Function
All
in all, therefore, frontal amnesias turned out to be neither as immediately
obvious nor as clinically clear-cut as those arising from temporal lobe damage,
and neither memory nor intelligence tests, generally speaking, were
particularly good at detecting frontal lobe damage. Because of this, clinical
neuropsychologists devised their own special "frontal lobe tasks",
or "frontal batteries", that is to say, tests designed to be
selectively sensitive to frontal lobe damage. One of the first to do this was
Rieger (1888).
One
of the earliest surviving assessments from this period is the Porteus Maze,
developed as a psychometric test of intelligence in 1914 by Stanley Porteus (later of the University of Hawaii), and in constant use ever since [Porteus himself
summarised his lifetime's work in Porteus (1950)]. The technique addresses the
logically opposed hypothetical constructs of "planfulness" and
"impulsiveness", and is scored by counting the number of false
trails on the response sheet. Another early test was devised by John
Ridley Stroop (1897-1973), and requires subjects to name the ink colour in which printed stimulus words are displayed (Stroop, 1935). Thus the correct answer for the stimulus <walk>, printed in red ink, is "red", not "walk". The problem is that
reading printed words is a very automatic act in appropriately trained adult
subjects, and so there is an "attentional conflict" situation
(Pardo et al, 1990), in which the response of first impulse needs to be
consciously suppressed in favour of the colour-response. Even more interference
comes when the printed words are themselves colour names, but not matching the
ink colour. The Stroop test is valuable as a clinical screening tool because
brain injury - and specifically frontal brain injury - renders patients less
able to control this interference, whereupon they respond automatically. The
test has been in constant use ever since it was first introduced, and the
latest research regularly implicates the anterior cingulate gyrus in the
selection of the appropriate response under conditions of doubt (see, for
example, Pardo et al, 1990). Try this for yourself: read the following ink
colours out loud as fast as you can .....
BLUE YELLOW GREEN RED PURPLE BROWN [in the original, each word is printed in a non-matching ink colour]
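For readers who like to see the scoring logic spelled out, here is a minimal sketch (in Python; the trial format and scoring rule are illustrative assumptions, not Stroop's own procedure) of how an incongruent-trial response sheet might be scored .....

```python
# Minimal sketch of Stroop scoring. Each trial is a (word, ink) pair; the
# correct answer is always the ink colour, and an "interference error" is
# a response naming the printed word instead. (Trial format assumed.)

def score_stroop(trials, responses):
    errors = 0
    for (word, ink), said in zip(trials, responses):
        if word != ink and said == word:
            errors += 1  # automatic reading won over colour-naming
    return errors

# The word "blue" printed in red ink: answering "blue" is an error.
trials = [("blue", "red"), ("green", "green"), ("red", "yellow")]
print(score_stroop(trials, ["blue", "green", "yellow"]))  # -> 1
```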
Another
type of test involves "category sorting". The first to make
his name here was Egon Weigl (Weigl, 1927/1941), who had been investigating his
patients' ability to sort coloured wool samples when he noted some strange
idiosyncrasies, thus .....
"The patients have no
principle of classification such as is formed in categorial [sic]
behaviour. The primitive concrete behaviour of patients was demonstrated in a
particularly instructive way in the following experiment. They were given three
colours and told to place the two together which were 'most alike'. Whether or
not the desired result was apparently obtained depended on the way the colours
were combined. Thus, for example, the patient Th. [sic], when presented with a
dark red, a pink, and a medium brown skein, reacted in the way intended by the
experimenter; that is, he put the two reddish skeins together. If, however, he
was given a sky blue skein instead of the medium brown one, the patient usually
could not decide at all, because, for him, the paleness of the sky blue and of
the pink 'cohered' as much as the two variants of [red]. [.....] We are dealing
with a pathological change in these patients which is connected with the
problem of so-called 'isolating abstraction'." (Weigl, 1927/1941, p5;
italics original.)
But
it was not just failures of abstraction which impaired the sorting performance
of frontal patients. They also typically showed "perseveration"
[glossary], an
inability to cancel one sorting strategy in favour of an alternative one when
circumstances or instructions required it. Weigl therefore developed a more
compact version of the test, using simple cardboard shapes rather than skeins
of wool, thus .....
"The patient was
presented with 12 unarranged cardboard figures of different colour and form: four
equilateral triangles (red, green, yellow, blue), four squares (red,
green, yellow, blue), four circles (red, green, yellow, blue). He was
told to sort them. The patient carried out the task as follows: First he put
together a red triangle, a red circle, and a red square in
a vertical row. Similarly he put together the yellow, green, and blue figures,
so that as a result a series of rows, one red, one yellow, one green, one blue,
lay next to each other a few centimetres apart." (Weigl, 1941, p9; italics
original.) [Subsequent revisions of the test use 20 counters and vary
thickness, size, and overprinted shape as well.]
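The logic of the 12-figure version is easy to capture in a few lines of code. The following sketch (the attribute names and two-way partition are my own illustration, not Weigl's materials) shows what sorting by one principle at a time amounts to, and hence what the perseverating patient fails to do when asked to switch .....

```python
# Weigl's 12 tokens: every combination of four colours and three forms.
# A "categorial" sort partitions the tokens by exactly one attribute;
# flexibility means producing the colour partition and then, on request,
# the form partition instead.

from itertools import product

tokens = [{"colour": c, "form": f}
          for c, f in product(["red", "green", "yellow", "blue"],
                              ["triangle", "square", "circle"])]

def sort_by(attribute):
    groups = {}
    for t in tokens:
        groups.setdefault(t[attribute], []).append(t)
    return groups

print(len(sort_by("colour")))  # 4 groups of 3
print(len(sort_by("form")))    # 3 groups of 4
```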
ASIDE - WEIGL AND TRANSCODING: In a later paper
(Weigl, 1974), Weigl was one of the first to use the term "transcoding"
to describe the process of recoding information in transit through the
cognitive system, specifically from spoken to written language form and back
again. For more on the transcoding model genre within psycholinguistics, see
our e-paper on "The Transcoding Model Series".
Another
testing pioneer was Ward C. Halstead (1908-1968), of the University of Chicago
Medical School. In Halstead (1940), for example, he exposed patients to a
display of common objects and invited them to sort them into piles on the basis
of their similarity. He then scored omissions and idiosyncrasies as evidence of
underlying pathology. Other tests were added in Halstead (1947), and further
improvements were then made by Ralph M. Reitan, so that the modern form of the
battery is the Halstead-Reitan Battery (Reitan, 1955). This test package
is still on the market, published by the Reitan
Neuropsychology Laboratories.
Despite
these early developments, not every clinician found the available tests useful.
For example, Hebb and Penfield (1940) reported their examination of patient KM,
a 27-year-old right-handed male who had suffered a depressed fracture of the
frontal bone in a workplace accident in 1928. This had left him subject to
post-traumatic epilepsy, and in 1938 it was decided to operate to remove the
irritant intracranial scar tissue which was causing his fits. The resulting
partial bilateral frontal lobotomy removed "one third of the mass of the
frontal lobes" (p427) [approximately the white stippled area in Figure 3].
Here are some observations from the subsequent case notes .....
"The patient was
conscious and cooperative throughout. After the surgical removal had been
completed it was desired to put the patient to sleep [and] during the
introduction of the tube the patient made inappropriately facetious
remarks" (p425).
"On the tenth day after
operation the patient was again responsive, but was disoriented, irrational,
and slightly facetious and used obscene language" (p426).
"Immediately before
operation the patient was examined with the revised Stanford-Binet and the McGill
revision of the army beta test. Two months after operation he was
[re-examined]. The preoperative intelligence quotient was 83 [and]
postoperative scores were consistently higher" (p431). [In fact, there was
a 13-month improvement of 11 IQ points.]
"The important fact is
the absence of grossly pathologic defects and of 'frontal lobe signs'. [.....]
There seems also to be little question of 'loss of abstract behaviour'
[citation]. No difficulty in categorising was found on a sorting test"
(pp433-434).
"For the effect of lesions of the frontal
lobe on human intelligence, it seems that one will have to look elsewhere than
to clinical observation or ratings by intelligence tests such as are now
available" (p437).
Nevertheless,
most reviewers in that era were continuing to side with Bianchi .....
"..... patients with
frontal lobe lesions or destruction of a frontal lobe by an operation are
changed in a characteristic way in their behaviour in everyday life. Their mental
capacity may be sufficient for executing routine work but they lack
initiative, foresight, activity, and ability to handle new tasks."
(Goldstein, 1944, p192; italics original)
Another
sorting task, the Wisconsin Card Sorting Test (WCST) [glossary] was
developed in 1948 (Berg, 1948; Grant and Berg, 1948), and subsequently modified
in 1976 as the MCST [glossary] (Nelson,
1976). The effect of brain lesions on WCST performance was then summarised by
Brenda Milner of the Montreal Neurological Institute .....
"The results of the
present study provide strong support for the view that the ability to shift
from one mode of solution to another on a sorting task is more impaired by
frontal than by posterior cerebral injury [.....] all removals which encroached
upon the superior frontal region were associated with poor performance on the
sorting test. The critical lesion cannot be more precisely defined, although
area 9 of Brodmann was implicated in most cases. [.....] The impairment shown
by patients after frontal lobectomy reveals itself as a strong perseverative
tendency. In extreme cases, a patient may sort all 128 cards to one preferred
category (for example form), despite the experimenter repeatedly telling him
that his responses are wrong. When the sorting is done rapidly, the patient may
not even wait for the experimenter to say 'right' or 'wrong', before proceeding
to place the next card. In such cases, it is tempting to argue that the main
defect is motivation [but] one can see the same perseverative behaviour in
patients who work slowly, pausing between cards, and who become manifestly
distressed by the frequency of their errors." (Milner, 1963, pp96-97.)
Milner
and Petrides (1984) report further experience with this test, as follows .....
"Although patients with
frontal lobe lesions respond normally to environmental stimuli, they appear to
have difficulty in using these stimuli to regulate their actions. A clear
example of this is provided by the performance of such patients on [the WCST,
where] the impairments observed after a frontal-lobe lesion appear to stem from
the patient's inability to overcome previously established response tendencies,
resulting in the generation of fewer hypotheses and, frequently, in a high
incidence of errors including perseveration." (Milner and Petrides, 1984,
p405.)
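Milner's "strong perseverative tendency" can be made concrete with a small scoring sketch. The following is a simplification of my own, not the clinical scoring manual: after the sorting rule changes, any response still following the abandoned rule counts as perseverative .....

```python
# Count perseverative responses on a WCST-like task. responses[i] is the
# category the patient sorted by on trial i ("colour", "form", "number");
# rule_changes maps a trial index to the rule abandoned at that point.
# (Scoring deliberately simplified for illustration.)

def count_perseverations(responses, rule_changes):
    perseverative = 0
    abandoned = None
    for i, r in enumerate(responses):
        if i in rule_changes:
            abandoned = rule_changes[i]
        if abandoned is not None and r == abandoned:
            perseverative += 1
    return perseverative

# A patient who keeps sorting by form after the rule shifts to colour:
responses = ["form", "form", "form", "form"]
print(count_perseverations(responses, {1: "form"}))  # -> 3
```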
It was this inability to regulate their actions which led Katz, Ford, Moskowitz et al (1963) to develop the Activities of Daily Living (ADL) test [glossary].
Other
authors, meanwhile, were still trying to get to the bottom of the frontal
lobes' role in cognition. Luria and Homskaya (1964), for example, took an
analytical eye to the classic symptomatology of frontal lobe syndrome and
managed to reduce Bianchi's five areas of deficit [see Section 2] to just two,
thus .....
"Disturbances occurring
after lesions of the frontal lobes often manifest themselves as two basic
symptoms: loss of spontaneity or initiative, and lack of critical
attitude toward the results of one's own behaviour. [.....] The latter
symptom [can] be regarded as the result of a general loss of some feedback
mechanism, a disturbance in signals of error, or an inadequate evaluation of
the patient's own action. It can be reduced to a deficit in matching of action
carried out with the original intention [.....] the patient with severe frontal
lobe lesions becomes unable to evaluate the adequacy of his own action, does
not try to modify his behaviour when it fails, and remains satisfied with his
own actions no matter how ineffective they may be." (Luria and Homskaya,
1964, pp373-374; bold emphasis added.)
Milner
was also instrumental in introducing the Corsi blocks task [glossary] (Milner,
1971). The Corsi task is regularly used in cognitive research (Berch,
Krikorian, and Huha, 1998, go so far as to describe it as "arguably the
single most important nonverbal task in neuropsychological research"), and
has recently been analysed for its consumption of working memory [glossary]
resources (see, for example, Vandierendonck et al, 2004).
The
year 1971 also saw a major theoretical review by Walle J.H. Nauta at MIT, who
homed in on the frontal lobes' involvement in the process we now know as
"motor programming" .....
"The frontal lobe,
despite decades of intensive research by physiologists, anatomists, and
clinicians, has remained the most mystifying of the major subdivisions of the
cerebral cortex. Unlike any other of the great cerebral promontories, the
frontal lobe appears not to contain a single sub-field that can be identified
with any particular sensory modality, and its entire expanse must accordingly
be considered association cortex [loss of which] leads to a complex functional
deficit, the fundamental nature of which continues to elude laboratory
investigators and clinicians alike. [.....] It is clear [that] the frontal lobe
disorder is characterised foremost by a derangement of behavioural programming.
One of the essential functional deficits of the frontal lobe patient appears to
be in an inability to maintain in his behaviour a normal stability-in-time: his
action programs, once started, are likely to fade out, to stagnate in
reiteration, or to become deflected away from the intended goal." (Nauta,
1971, pp167-171.)
Just
such a derangement of behavioural programming can also be seen in the following
clinical anecdote .....
"If the patient's hands
are lying on top of his blanket, he can respond to the command 'Lift the hand'
without difficulty. However, during the performance of the second or third
trial, symptoms of inactivity may appear; the movements slow down, the hands
are not lifted as high, and after several repetitions of the orders the
movements may be fully discontinued. Now let us repeat the same
experiment under different conditions. This time, the hands of the patient
(with a massive bilateral lesion of the frontal lobes) are under the blanket;
in order to execute the instruction, 'lift the hand', he must perform a complex
series of movements. First, he must free his hand from under the blanket, and
only then can he lift it. It should be pointed out that the first part of this
program was not mentioned in the instructions, and this intermediate intention
must be formulated by the patient himself. As a rule, patients with massive
bilateral lesions (tumours) of the frontal lobes do not perform such an action
and soon replace the required movements with an echolalic repetition of the
instruction: 'Yes, lift the hand'. This observation supports the conclusion
that patients with massive frontal lobe lesions are able to execute a direct
order [but] unable to execute a complex program of actions if some links of it
have not been formulated in the instruction." (Luria, 1973, p10.)
Luria's
own approach to frontal assessment was set out in the Luria-Nebraska
Neuropsychological Battery, a 14-subscale battery of "unstructured
qualitative" neuropsychological tests [see sales material]. Luria's "tapping
tests" are particularly easy to carry out and readily disclose a
multitude of sins. The tests have many sub-variants, but the best known is the "Go/No-Go"
test, which is based on the deliberately awkward instruction: "Tap
once when I tap once, but do not tap at all when I tap twice".
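The rule itself is trivial to state formally, which is exactly why failures are so diagnostic. In the sketch below only the tapping rule comes from the text; the scoring is my own illustration, treating an "echopraxic" error as simply copying the examiner .....

```python
# Luria Go/No-Go: tap once to one examiner tap, withhold to two.

def correct_response(examiner_taps):
    return 1 if examiner_taps == 1 else 0

def count_errors(examiner_trials, patient_taps):
    return sum(1 for e, p in zip(examiner_trials, patient_taps)
               if p != correct_response(e))

# Copying the examiner's two taps on trial 2 is the classic error:
print(count_errors([1, 2, 1, 2], [1, 2, 1, 0]))  # -> 1
```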
The
Word Fluency Test (WFT) was introduced by Goodglass and Kaplan (1972)
and promoted by Benton and Hamsher (1976), and measures how many words a patient
can generate beginning with a given letter of the alphabet in a measured
minute. The usual stimulus letters are F, A, and S, and the underlying theory
implicates our old friend the semantic network [glossary; further discussion],
thus .....
"The cognitive processes
that are activated during a fluency task can be conceptualised as a memory
search that involves the use or active restriction of associations. Memory has
been theorised as being organised on a conceptually related basis and in
lexicons or categories (Collins and Loftus, 1975). Within these two systems, a
memory search is thought to involve a spreading activation throughout nodes in
the memory structure. The model assumes that everything in memory is linked or
associated. [.....] These codes are further broken down into nodes that have
inter-emanating associations. A concept, such as an animal or a letter category,
would consist of all the associated nodes dominated by a particular node.
[.....] Effective productivity within fluency tasks suggests either a concisely
structured network of associations within a category or a very efficient search
strategy [and] one aspect of the frontal lobe component in a fluency deficit
may involve an impaired ability to effectively search memory stores or,
perhaps, to restrict an active memory search." (Butler et al, 1993, p520.)
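A toy version of the spreading-activation search Butler et al describe can be written in a few lines. Everything below (the network, the link weights, the decay constant) is invented for illustration; only the spreading idea itself is from Collins and Loftus .....

```python
# Toy spreading activation over an association network. Activation leaks
# outward from a starting node, attenuated by link weight and a decay
# factor; retrievable words are those that end up activated.

network = {
    "F-words": {"fish": 1.0, "frog": 0.9, "fork": 0.8},
    "fish": {"frog": 0.5},
    "frog": {}, "fork": {},
}

def spread(start, decay=0.5, depth=2):
    activation = {start: 1.0}
    frontier = [start]
    for _ in range(depth):
        nxt = []
        for node in frontier:
            for neigh, weight in network.get(node, {}).items():
                gain = activation[node] * weight * decay
                if gain > activation.get(neigh, 0.0):
                    activation[neigh] = gain
                    nxt.append(neigh)
        frontier = nxt
    return activation

print(spread("F-words"))  # activation levels of candidate "F" words
```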
Noting
that frontal patients regularly made bizarre estimates of such things as value,
Shallice and Evans (1978) reported on experience with "Cognitive
Estimation" Tasks (CET). They begin with a case study illustration of
the nature of the problem. Patient JS had suffered "a massive right
frontal lesion" in an explosion, but his pre- and post-event intelligence
scores were nevertheless the same. In one particular respect, however, he was
seriously impaired .....
"The one deficit observed
was a gross inability to produce adequate cognitive estimates. For instance,
when questioned, he replied that the height of the highest building in London
was between 18,000 and 20,000 feet, that the largest fish in the world was a 3
foot long trout, that the best paid occupation was that of a long distance
lorry driver, that the number of cars in Britain was over 50,000, and that the
length of the average spine was between 4 and 5 feet. He did not appear to
realise that the answers he gave were bizarre and he justified them when
pressed. When it was pointed out to him that the height he had given for
the highest building was greater than the estimate he had given earlier of
17,000 feet for the highest mountain in Britain he merely reduced his estimate
for the building to 15,000 feet." (Shallice and Evans, 1978, pp294-295.)
6 - Recent Improvements in Assessing Frontal Lobe
Function
"We talk of amnesia,
aphasia, dyslexia, dyscalculia, and so forth, describing the dysfunction, and
leaving open the question of its possible localisation. I would like to suggest
that a similar approach be taken in the case of possible dysfunctions of the
central executive. Unfortunately no term exists for this, and I for one cannot
think of an obvious neat descriptive label [.....] As a stop-gap I suggest the
term dysexecutive syndrome." (Baddeley, 1986, p238; italics
original.)
For
our present purposes, we are dating the modern age of frontal assessment to
1982, when Shallice (1982) devised a variant of the Tower of Hanoi, called the Tower
of London (TOL) task [buy one]. In its
usual form, this problem consists of three pegs of different lengths, capable of
holding one, two, and three beads respectively. However, where the TOH discs
are the same colour but different sizes, the TOL beads are the same size, but
different colours. As a result, the TOL is easier to grade for problem
difficulty (how many moves it takes), and this makes for a more sensitive
psychometric test (Shallice 1988). Shallice (1982) found a significant left
anterior frontal deficit for TOL performance.
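The grading property is worth spelling out: because the TOL state space is tiny, the minimum number of moves for any problem - and hence its difficulty - can be found mechanically by breadth-first search. The sketch below uses a representation of my own (pegs as tuples, beads as colour initials), not Shallice's materials .....

```python
# Breadth-first search for the minimum TOL move count. Pegs hold at most
# 1, 2, and 3 beads; a move transfers the top bead of one peg to another
# peg with space. Difficulty = length of the shortest solution path.

from collections import deque

CAPACITY = (1, 2, 3)

def legal_moves(state):
    for src in range(3):
        if not state[src]:
            continue
        for dst in range(3):
            if dst != src and len(state[dst]) < CAPACITY[dst]:
                pegs = [list(p) for p in state]
                pegs[dst].append(pegs[src].pop())  # move the top bead
                yield tuple(tuple(p) for p in pegs)

def min_moves(start, goal):
    frontier, seen = deque([(start, 0)]), {start}
    while frontier:
        state, n = frontier.popleft()
        if state == goal:
            return n
        for nxt in legal_moves(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, n + 1))

start = (("R",), ("G", "B"), ())  # example starting position
goal = ((), (), ("R", "G", "B"))  # example target position
print(min_moves(start, goal))     # the problem's difficulty grade
```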
Muriel
D. Lezak is another who has pointed to the problems of assessing executive
functions (eg. Lezak, 1982). "With few exceptions," she writes,
"we do not have standardised methods for making objective or reliably
replicable estimates of gradations of impairment of the functions [or] for
making intra- and inter-individual comparisons" (p281). Like Luria, she,
too, is especially concerned about the cognitive processing involved during "goal
formulation", thus .....
"The capacity to
formulate a goal, or to have an intention, is bound up with motivation and with
awareness of self and how one's surroundings impinge on oneself. Goal-directed
motivation differs from the simple arousal states that spur infants, impulsive
adults, and subhuman animals. Simple arousal states lead automatically to
reactive or instinctive activity. In contrast, persons capable of goal
formulation not only can conceptualise their needs and desires before acting
upon them but can entertain motives that may be far removed from organismic
drive states and much more complex than are impulsive acts or automatic
responses to physiological needs or environmental stimuli. The ability to
create motives out of past experiences, out of an appreciation of physically or
temporally distant needs, or out of one's imagination, requires self-awareness
at a number of levels including awareness of internal states, an experiential
sense of self, and self-consciousness vis-à-vis the social and objective
environment. It also requires the ability to identify those aspects of one's
surroundings that may have personal relevance. [.....] It simply does not
occur to [persons lacking this capacity] to do anything." (Lezak,
1982, p286; bold emphasis added.)
There
then came a flurry of innovative assessment techniques. In 1984, for example,
Milner and Petrides (1984) added the Self-Ordered Pointing Test (SOPT)
to the frontal assessment repertoire [glossary]. This
was followed by Reitan and Wolfson's (1985)
resurrection of the Trail Making Test (TMT), a simple
pen-and-paper task in which the patient has to join up specified sequences of
letters and/or numbers printed randomly across the page. The test was
previously part of the Army Individual
Test Battery. Then, in the mid-1980s, François Lhermitte drew attention to "imitation behaviour" (IB) and "utilisation behaviour" (UB) in frontal patients (Lhermitte, 1986), as in the following case .....
"A 52-year-old
right-handed housewife had suffered from [.....] a right hemiparesis. CT scan
revealed an astrocytoma in the basal left frontal lobe. Treatment consisted of
a left frontal lobectomy and radiotherapy. She recovered well, returned home,
and resumed her domestic work but showed a lack of initiative. [During testing]
I put some medical instruments on my desk. She immediately picked up the blood
pressure gauge and very meticulously took my blood pressure [photograph]. After
this she took the tongue depressor and placed it in front of my mouth, which I
opened, and she examined my throat [photograph]. [Later, she] and I were
speaking in my office. Suddenly, I got up and I left the room. She accompanied
me and followed me outside without comment. I walked toward my car and got in.
She went to the other door and got in beside me. [.....] Not a word was
spoken during this outing (which lasted about 40 minutes). When asked about it
a few days later, she recalled the outing clearly and considered it quite
normal." (Lhermitte, 1986, pp336-338.)
Anxious
to get some prevalence data, Lhermitte, Pillon, and Serdaru (1986) screened for
IB and UB in a sample of 125 "patients with a definite diagnosis of
cerebral lesions". IB was tested by having the examiner suddenly interrupt
the clinical examination to perform without explanation or comment various
sequences of bodily gesture, all more or less inappropriate to the setting (eg.
saluting, kicking, combing the hair, etc.), and UB by presenting the patient (again
without explanation or comment) with specific objects. This is what they found
.....
"75 patients demonstrated
IB (35 with and 40 without UB). Almost all patients imitated the examiner
starting with the first gesture. [.....] All gesture sequences were imitated
without surprise: the patients tried to follow as best they could the order
they thought they had to obey. No patient ever forgot a detail of gestural
sequence (eg. when lighting a candle, he would always blow the match out). If
the gestures were not easy to perform, the patient adapted himself perfectly to
overcome the difficulties. Male patients even imitated such socially
unacceptable gestures as using a urinal, or urinating against a wall, in front
of 20 or 30 people. Some of them smiled when imitating unusual gestures
(kneeling as if to receive a blessing or putting on eyeglasses when already
wearing some). Several patients refused to imitate. They indicated that they
considered the gesture ridiculous, or did not want to perform it (eg. a patient
who wore a wig refused to comb his hair)." (Lhermitte, Pillon, and
Serdaru, 1986, p328.)
Lhermitte
et al explained their observations by proposing that regions of the parietal
cortex are responsible for integrating the jumble of multi-channel sensory data
arriving in the various areas of sensory cortex. This sets up "links of
dependence" between the individual and the outside world. However, the
parietal cortex is itself subject to an inhibitory influence from the frontal
lobe. In normal subjects, the authors see this as setting up a dynamic
equilibrium between these two forces. When the frontal lobes are damaged,
however, the inhibition stops, leaving the parietal lobe accordingly
oversensitive to external stimuli. Because the patient's behaviour is more
controlled by external rather than internal factors, Lhermitte calls this "Environmental
Dependency Syndrome". [Small wonder, therefore, that McKenna and
The
late 1980s then saw the sorting tasks being challenged. Anderson, Damasio,
Jones, and Tranel (1991) compared frontal (n=49) and non-frontal (n=24)
performance on the WCST, and found that "the WCST alone should not be used
to group brain-damaged subjects into 'frontal' and 'nonfrontal' groups"
(p920). This is not to say that the frontal lobes are not involved in the
sorting process, merely that "performance on a multifaceted cognitive task
such as the WCST will necessarily involve the coordinated interaction of
multiple and separate brain regions" (p920). Shallice and Burgess (1991)
agree, arguing that more lifelike "multiple subgoal tasks" are the best means of establishing ability in everyday situations. All too often,
they point out, the "relevant tests" are not applied. When preparing
a meal, for example, one has in real life to consider not just the isolated
merits of a particular menu, but also the practical issues of availability of
ingredients, available time, etc. Far better, therefore, if testing tapped into
the "many minor decisions" which everyday living typically consists
of, and especially those which need to be "undertaken in parallel with
other activities" (p728). Shallice and Burgess then report three case
studies where the standard battery of tests was supplemented by two new tests,
the "Six Element" (SE) Test [glossary] and the "Multiple
Errands" (ME) Test [glossary], before
concluding as follows .....
"If one considers what is
involved in carrying out these multiple subgoal tasks, then at a very general level,
four basic types of process are relevant. Motivational and memory processes are
clearly required and so are a variety of special-purpose cognitive processes of
the sort that standard neuropsychological tests assess. In addition there are
certain bridge processes which enable the special-purpose cognitive processes
to be used to satisfy motivational requirements. A deficit in basic
special-purpose cognitive processes seems an implausible explanation of their
difficulties on the experimental tasks, given the performance of the patients
on the baseline tests. Indeed the most difficult Multiple Errands subtest gave
problems for some of the controls as well as the patients; it was the least
sensitive part of the procedure. However, frontal patients often manifest
inappropriate affect and have frequently been described as apathetic or
impulsive [citation], and also they can have memory problems [citation]. The
possibility of motivational or memory difficulties therefore needs to be
considered, especially as the patients, when asked to account for some action,
often said that they had completely forgotten their prior intention. A possible
motivational explanation of the impaired performance of the 3 patients is that
they require continuous social reinforcement to carry out psychological tasks,
and without it their spontaneous motivation would tend to dwindle rapidly;
without it they do not persevere." (Shallice and Burgess, 1991, p735.)
7 - Some Comparative Studies
Ever
since the days of Fritsch and Hitzig and Ferrier [see Section 2], animal brain
vivisection studies have helped inform clinical interpretation of human frontal
performance. Such research has continued to this day, and in this section we
look at some of the studies which have cast light on forebrain involvement in
memory functions. The first major finding came from the same Carlyle Jacobsen
who in 1935 had helped to persuade Moniz to carry out the first psychosurgery
[again see Section 2]. Jacobsen (1936) found that frontally damaged monkeys had
particular difficulties with "delayed response learning", that
is to say, with learning tasks where there is an enforced delay between
stimulus and response. In its simplest form, this experiment offers the animal
two lidded bowls, shows a piece of food being put beneath one of the lids, and
then enforces a delay before allowing the animal to lift one of the lids.
Normal monkeys can respond correctly over delays of one or two minutes, but a
frontal monkey's performance tails off (so to speak) after one or two seconds!
Jacobsen interpreted these observations as suggesting an abnormally rapid decay
of immediate memory; however, contradictory evidence started to emerge with Malmo (1942) .....
"The results of the
present experiment may be summarised briefly as follows: successful performance
in delayed response is possible for monkeys after their frontal association
areas have been removed bilaterally. The difference between normal and operated
monkeys with respect to such performance is not one of presence or absence of
the capacity for delayed response, but rather the difference is one of degree
of susceptibility to the interfering effects of extraneous stimuli occurring
during the delay interval. Jacobsen's hypothesis that immediate memory is
functionally located in the frontal lobes is not sufficient to account for the
results." (Malmo, 1942, p354.)
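The difference between Jacobsen's decay account and Malmo's interference account is easily expressed as two toy response models (entirely illustrative; the parameters and functional forms are mine, not from either paper). Decay predicts failure however quiet the delay; interference predicts success whenever nothing intervenes, which is what Malmo found when delays were spent in darkness .....

```python
# Two toy accounts of delayed-response failure; both return the
# probability of lifting the correct lid (chance = 0.5).

import math

def p_correct_decay(delay_s, half_life_s):
    """Jacobsen-style: the trace decays with time, whatever happens."""
    return 0.5 + 0.5 * math.exp(-delay_s * math.log(2) / half_life_s)

def p_correct_interference(n_distractions, fragility):
    """Malmo-style: each extraneous stimulus risks overwriting the
    trace; an undisturbed delay leaves performance intact."""
    return 0.5 + 0.5 * (1 - fragility) ** n_distractions

# A "frontal" animal over a 60-second delay:
print(p_correct_decay(60, half_life_s=2))        # ~0.5, i.e. chance
print(p_correct_interference(0, fragility=0.8))  # 1.0 if nothing intervenes
```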
More
recently, animal studies have been helping to locate working memory, "the
capacity to retain information no longer present in the environment, to
manipulate and/or transform this information, and to use it to guide
behaviour" (Postle, Druzgal, and D'Esposito, 2003/2004 online,
p1). Here researchers have been aided by technical improvements which allow the
discharge records of individual prefrontal neurons to be examined. Among the
lead authors in this area are Joaquin M. Fuster at the UCLA
Neuropsychiatric Institute and Patricia
S. Goldman-Rakic at the Yale University School of Medicine. Here is Fuster
on the technicalities .....
"In the microelectrode
exploration of the prefrontal cortex during behavioural tasks, one is struck by
the variety of stimuli to which its units are responsive. To be sure, the
magnitude and selectivity (tuning) of prefrontal unit responses to sensory
stimuli are generally much lower than those of units in primary sensory or
postcentral associative cortex to appropriate stimuli. Yet nowhere else in the
neocortex are cells attuned to so many different sensory inputs as in the
prefrontal cortex. On closer analysis it becomes apparent, however, that most
prefrontal cells simply respond primarily, if not exclusively, to the broad
category of stimuli that, in one way or another, are related to the task at
hand, without much regard for their most particular sensory or physical
characteristics. [.....] In a delay trial, for example, it is obvious that some
units are especially responsive to the cue and the visual or auditory stimuli
that constitute it or accompany its presence; other units are mostly responsive
to the stimuli that appear for choice at the end of the delay, others to the
presumably proprioceptive input that results from other instrumental manual
response of the animal, and still others to the delivery of reinforcement for a
correct response." (Fuster, 1992, pp352-355.)
.....
and here is his general theoretical orientation .....
"A large body of
empirical evidence supports the notion of a critical role of the prefrontal
cortex in the temporal organisation of goal-directed behavioural sequences. The
key element of that role is the bridging of cross-temporal contingencies of
behaviour, in other words, the adjustment of the actions of the organism to
temporally distant events and objectives. By the analysis of lesion effects,
neuroelectrical phenomena, and metabolic activity we are led to conclude that
the prefrontal cortex subserves at least three cognitive functions that allow
the mediation of cross-temporal contingencies and, thereby, the formation of
temporally extended structures of behaviour: short-term memory, preparatory
set, and control of interference." (Fuster, 1985, p169.)
For
her part, Goldman-Rakic has resurrected Jacobsen's delayed response paradigm,
but with the added sophistication of modern electrode technology to monitor the
electrical behaviour of single neurons in the prefrontal cortex. As a result of
gaining direct access to brain activity during the learning task she was able
to test Jacobsen's original suspicion that the secret of working memory lay in
the prefrontal cortex, and that this might well explain the "gross
deficiencies" in the ways frontal patients "use knowledge to guide
their behaviour in everyday situations" (Goldman-Rakic, 1992, p73). Here is the method .....
"At Yale, Shintaro
Funahashi, Charles J. Bruce, and I have used the single-neuron technique in
conjunction with a delayed-response experiment that tests spatial memory. For
our experiment, a monkey is trained to fix its gaze on a small spot in the
centre of a television screen. A visual stimulus, typically a small square,
appears briefly in one of eight locations on the screen and then vanishes. At
the end of a delay of three to six seconds, the central light, or fixation
spot, switches off, instructing the animal to move its eyes to the location
where the stimulus was seen before the delay. [.....] Because the animal's gaze
is locked into the fixation spot, each stimulus activates a specific set of
retinal cells. Those cells, in turn, trigger only a certain subset of the
visual pathways in the brain. Using the eye movement experiment, we have
demonstrated that certain neurons in the prefrontal cortex possess what we call
'memory fields': when a particular target disappears from view, an individual
prefrontal neuron switches into an active state, producing electrical signals
at more than twice the baseline rate. The neuron remains activated until the
end of the delay period, when the animal delivers its response. A given neuron
appears always to code the same visual location [and] other neurons code for
other target locations in working memory." (Goldman-Rakic, 1992, p75.)
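The "memory field" criterion quoted above - firing at more than twice the baseline rate for a particular location, sustained across the delay - reduces to a simple test. The spike counts below are invented for illustration .....

```python
# Flag locations for which a neuron is delay-active: firing during the
# delay must exceed twice the baseline rate. (Spike counts invented.)

def is_memory_field(baseline_hz, delay_spikes, delay_s):
    return delay_spikes / delay_s > 2 * baseline_hz

baseline_hz = 5.0                                # pre-trial firing rate
delay_counts = [22, 6, 90, 18, 25, 30, 101, 12]  # spikes per target, 4 s delay
fields = [loc for loc, n in enumerate(delay_counts)
          if is_memory_field(baseline_hz, n, 4.0)]
print(fields)  # -> [2, 6]: the locations this neuron appears to "hold"
```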
Work
of this sort continues apace, with Curtis and D'Esposito
(2003/2004 online)
providing a recent review of the role played by dorsolateral prefrontal cortex
in working memory, and Postle, Druzgal, and D'Esposito (2003/2004 online)
suggesting the involvement of more posterior tissues as well.
8 - Advanced Memory Theory (1): "Online
Representational Memory"
Daigneault,
Braün, and Whitaker (1992) have attempted to test the hypothesis that the
"basic prefrontal function" is "on-line representational
memory", a form of memory which can operate independently of incoming
stimulation. They adopt Goldman-Rakic's (1987) theory of working memory, as
follows .....
"Goldman-Rakic (1987)
postulated that prefrontal cortex receives sensory and mnemonic representations
of reality as well as symbolic representations (eg. concepts, plans) which have
been elaborated in other cerebral areas. These are kept activated ('on line')
by prefrontal cortex in 'representational memory' long enough for this live
memory to modulate behaviour appropriately despite the absence of external
contingencies or despite the presence of external task-irrelevant
'discriminative' stimuli. Different prefrontal areas are postulated to house
different representational memory units which are related to each other
anatomically and functionally [and] any one prefrontal area is assumed to exert
inhibitory as well as excitatory influence on the relevant motor systems.
Discrete prefrontal lesions are understood to hinder specific representational
memory units resulting in specific difficulties in the regulation of
behaviour." (Daigneault, Braün, and Whitaker, 1992, p50.)
Daigneault
et al then exposed 259 normal adults to seven selected frontal lobe tests, and
a factor analysis of the results revealed five "prefrontal functional
constructs", as follows .....
Factor 1 - Planning: This is the
frontal lobe skill tapped by the SOPT and simple errors on the Porteus Maze. It
may also be viewed as "the elaboration of strategy" (p49).
Factor 2 - Self-Regulation: This is the
frontal lobe skill tapped by perseverative errors on the WCST and repeated
errors on the Porteus Maze. It indicates that patients are failing to modify a
chosen course of action despite the availability of error feedback.
Factor 3 - Maintenance of a
"nonautomatic cognitive or behavioural set": This is the
frontal lobe skill tapped by interference errors on the Stroop Test, category
break errors on the WCST, and alternation errors on the Trail Making Test. It
indicates that patients are failing to maintain a complex behavioural plan in
the face of distraction.
Factor 4 - Spontaneity and
sustained mental productivity: This is the frontal lobe skill tapped by verbal
fluency and design fluency tasks.
Factor 5 - Spatiotemporal
Segmentation and Organisation: This is the frontal lobe skill tapped by tests
of recency judgement.
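For readers curious about the mechanics, the Daigneault et al procedure (seven test scores per subject, reduced by factor analysis to five constructs) has the shape sketched below. The data here are random placeholders, so the loadings mean nothing; with real scores, the rows of components_ would correspond to constructs like the five just listed .....

```python
# Skeleton of a seven-test, five-factor analysis. Real inputs would be
# 259 subjects' scores on the SOPT, Porteus Maze, WCST, Stroop, Trail
# Making, and fluency measures; random data stand in here.

import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
scores = rng.normal(size=(259, 7))   # 259 subjects x 7 frontal-lobe tests

fa = FactorAnalysis(n_components=5)
fa.fit(scores)
print(fa.components_.shape)          # (5, 7): factor loadings per test
```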
In
yet another assault on the problem of actually defining executive cognition,
Duffy and Campbell (1994) identify three areas of interest, namely .....
Working Memory: This is the
problem of how the prefrontal cortex supports the "neural chalkboard"
(p380) of short-term memory, that is to say, the memory resource which
"enables the individual to simultaneously evaluate multiple intra- and
inter-personal cognitive representations [by putting together] a reasoned
strategy for dealing with the particular task at hand" (p380). This
resource seems to be situated particularly in the dorsolateral area.
"Mediation of
Cross-Temporal Contingencies": Here Duffy and Campbell buy
into the work of Fuster (1985), who proposed that "the overarching
function of the prefrontal cortex (and the core characteristic of executive
cognition) is 'the integration of sensory information and motor acts into
novel, complex, and purposive behavioural sequences'" (p380).
"Modulation of
Large-Scale Neurocognitive Networks": This is the problem of
finally assuming an executive role in behaviour, enabling the individual
"to respond to a particular stimulus on the basis of a distillate of
previous experience and current environmental stimuli; for example, 'although
I'm tired I must continue studying if I want to pass the exam tomorrow'"
(p381).
They
also distinguish three separate "prefrontal syndromes", namely
.....
(1) Dysexecutive Type: This syndrome
arises from lesions of the "dorsal convexity" [that is to say, the
gentle "hump" in the region of Brodmann's Area 9], and is
characterised by dysfunction in flexibility, sequencing, and planning
ahead.
(2) Disinhibited Type: This syndrome
arises from lesions of the orbitofrontal region [that is to say, the ventral
surface of the frontal lobe, where it contacts (and can easily be damaged by)
the bony roof of the orbits of the eyes], and is characterised by "poor
impulse control, explosive aggressive outbursts, inappropriate verbal lewdness,
jocularity, and a lack of interpersonal sensitivity" (p383).
(3) Apathetic Type: This syndrome
arises from lesions of the medial region of the frontal lobe, at the anterior
end of the cingulate gyrus, and is characterised by apathy and inertness,
perhaps because said area is normally involved in exploration and motivation.
Jacobs
(2004 online) offers
a concise alternative description of the dysexecutive and disinhibited types,
if interested [take me
there].
9 - Advanced Memory Theory (2): Autobiographical
Memory
Another
frontal sign to attract the attention of cognitive theorists is "confabulation",
the inventing of factually spurious explanations to "fit" otherwise
fragmentary and/or inconsistent recollections, and another hot line of enquiry
is into the relationship between confabulation and "autobiographical
memory" [glossary].
Papagno and Baddeley (1997/2004 online)
give an example of how that relationship may present itself, describing the
behaviour of patient MM, a 29-year-old man who had suffered a severe right
hemisphere stroke .....
"His confabulations were
spontaneous and consistent and not triggered by a lack of memory. For example,
each time we left the testing room, which was close to a lift, he claimed that
he had to take the lift and go upstairs (in fact we were on the top floor) to
see his children. MM's wife had given birth ten days before the accident to a
child whose name was Enea; he could remember it exactly (including date of
birth, name of baby, etc.) but he claimed his wife had also had a second child,
born one month later, whose name he could not remember. When asked how it was
possible to have a child one month after the other, he answered 'Ask my wife,
she did it'." (Papagno and
Baddeley, 1997/2004 online, pp744-745.)
Other
theorists have highlighted the processes of "reality monitoring",
that is to say, the ability to maintain an accurate internal representation of
the world and what is going on within it. The key theoretical construct here is
Johnson, Hashtroudi, and Lindsay's (1993) "source monitoring
framework" (SMF). This is a collection of mechanisms capable of
tagging retrieved memory content as actual or imagined. Thus .....
"Typically, memories for
experienced (external) events have information denoting time, location, spatial
arrangement, emotion or sensory perceptual details such as colour or shape. In
contrast, memories for thoughts and imagined events typically have much less or
less vivid information of these types, but often have more information about
cognitive operations (such as intention and planning, deliberate imaging, actively
searching for a piece of information and drawing conclusions). [.....] Memory
monitoring processes capitalise on such differences by evaluating memories (or
mental experiences in general) for their match with the expected
characteristics of a given source. Such attributions are correct sufficiently
often to keep our memories and beliefs constrained by reality, but,
importantly, are subject to error." (Johnson and Raye, 1998, pp137-138.)
Insofar
as "source monitoring" was concerned .....
"..... we trust our
source monitoring processes to indicate not only when memories probably
correspond to reality but also when they might not do so. Various processes
operate to constrain the amount of distortion that arises from our imperfect
memory system and to signal us when we should be cautious about the
truthfulness of a memory. The feeling of remembering is important to our
well-being, but so is the feeling of not remembering that accompanies vague,
inconsistent, or implausible recollections. Accurate memory is knowing when we
do not remember as well as knowing when we do. The subtle balance of the
encoding, consolidation, reactivation, retrieval, and evaluation processes that
underlie the 'meta-memory' function of source monitoring develops throughout
childhood, tends to weaken in old age, and can be disrupted at any age by
distraction, stress, drugs, hypnosis, and social or motivational pressures. A
profound disorganisation of memories and beliefs can occur when memory
monitoring processes are disrupted as a consequence of frontal brain
damage." (Johnson and Raye, 1998, p144.)
Antonio
R. Damasio, Head of Neurology at the University of Iowa School of Medicine, has
recently (Damasio, 2002) turned his high-tech brain scanners onto the problem
of episodic memory [glossary].
He invokes the concept of the "time stamp" to differentiate
episodic from semantic memory content, thus .....
"In the course of
evolution, humans have developed a biological clock set to [the] alternating
rhythm of light and dark. This clock, located in the brain's hypothalamus,
governs what I call body time [.....]. But there is another kind of time
altogether. 'Mind time' has to do with how we experience the passage of time
and how we organise chronology. [..... We] place events in time, deciding when
they occurred, in which order and on what scale, whether that of a lifetime or
of a few seconds. How mind time relates to the biological clock of body time is
unknown. It is also not clear whether mind time depends on a single timekeeping
device or if our experiences of duration and temporal order rely primarily, or
even exclusively, on information processing. [In any event, the] ability to
form memories is an indispensable part of the construction of a sense of our
own chronology. We build our time line event by event, and we connect personal
happenings to those that occur around us. When the hippocampus is impaired,
patients become unable to hold factual memories for longer than about one
minute. Patients so afflicted are said to have anterograde amnesia.
Intriguingly, the memories that the hippocampus helps to create are not stored
in the hippocampus. They are distributed in neural networks [ie. cell
assemblies] located in parts of the cerebral cortex (including the temporal
lobe) related to the material being recorded: areas dedicated to visual
impressions, sounds, tactile information, and so forth. These networks must be
activated to both lay down and recall a memory; when they are destroyed,
patients cannot recover long-term memories, a condition known as retrograde
amnesia. The memories most markedly lost in retrograde amnesia are precisely
those that bear a time stamp: recollections of unique events that happened in a
particular context on a particular occasion. For instance, the memory of one's
wedding bears a time stamp. [.....] The temporal lobe that surrounds the
hippocampus is critical in the making and recalling of such memories."
(Damasio, 2002, pp50-51.)
10 - Advanced Memory Theory (3): Structured Event
Complexes
One
of the most innovative theorist-clinicians of the last 15 years is Jordan Grafman,
Head of the Cognitive Neuroscience Section at the US National Institute of
Neurological Disorders and Stroke (NINDS). Here is how he introduces his notion
of the "Structured Event Complex" .....
"Cognitive science has
long been concerned with trying to understand how people represent in memory
events that occur in our lives [.....]. A variety of cognitive structures have
been hypothesised to account for this kind of knowledge [including] schemas,
scripts, frames, cases, and story grammars [.....]. What these knowledge
structures have in common are that they are composed of a set of events,
actions, or ideas that when linked together form a knowledge unit (eg. a
schema). A unit could be a series of simple movements or the set of rules used
to solve a physics problem. We have called this general class of linked
unitised information the Structured Event Complex (SEC). We have called
the SEC that is specifically involved with planning, social behaviour, and the
management of knowledge, the managerial knowledge unit (MKU). It is the
MKU and its more primitive SEC neighbours that we hypothesise are only stored
in the prefrontal cortex [citation]. What might the informational content of an
MKU be like? It is suggested that an MKU is composed of a series of events.
There should be a typical order to the occurrence of these events. The ordering
of events obeys multiple constraints. Some constraints are physical. You cannot
have coffee in a cup unless it is first poured into the cup. Some constraints
are cultural. [.....]"
Goel
and Grafman (1995) have pointed to the dangers of presuming that the
superficially similar tower-type tasks [the TOH, and the TOL of Section 6] all
index one and the same planning function .....
"[Our] results confirm
the widely held belief that patients with frontal lobe lesions are impaired on
the [TOH] task with respect to normal controls. However, such a conclusion is
only a first step. What we really want to know is whether they are especially
impaired, more so than on other cognitive tasks. And if they are so impaired,
we want to know why." (Goel and Grafman, 1995, pp629-630; italics
original.)
They
then call for greater discipline in conceptualising the term
"planning", thus .....
"To plan is to chart a
course from point A to point B, without 'bumping' into the world. All the
'bumping' must be done in some modelling space, and some satisfactory path
extracted. Once the path has been constructed, the planning component is
complete. The execution or following of a plan is quite a different process. It
is the construction and evaluation of this path in some modelling space that we
are referring to when we use the term 'planning'." (Goel and Grafman,
1995, p638.)
Their
substantive criticism of the TOH puzzle is then that the ability to "look
ahead" is neither necessary nor sufficient to solve the TOH. It is not necessary,
they point out, because computers can be programmed to do the TOH job quite
adequately [this being what Herbert Simon was up to at the end of Section 5],
and computers do not understand. Nor is it sufficient, because "you
can look ahead all you like, but unless you see the 'trick', the
counterintuitive backward move, you won't solve the puzzle" (Goel and
Grafman, 1995, p638). Frontal processing, in other words, often includes the
sort of "insightful problem solving" once so popular with
workers such as Maier and Duncker.
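ASIDE: Goel and Grafman's "not necessary" point is easily demonstrated. The standard recursive algorithm below - the textbook solution, not Simon's original program - solves any n-disc TOH by blind mechanical look-ahead, with no insight whatsoever .....

def hanoi(n, source, target, spare, moves):
    """Solve n-disc Tower of Hanoi in the minimum 2**n - 1 moves."""
    if n == 0:
        return
    hanoi(n - 1, source, spare, target, moves)  # clear the way
    moves.append((source, target))              # move the largest disc
    hanoi(n - 1, spare, target, source, moves)  # rebuild on top of it

moves = []
hanoi(3, "A", "C", "B", moves)
print(len(moves), moves)                        # 7 moves for 3 discs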
As
it happens, the WCST has also been technically criticised of late. For example,
Dunbar and Sussman (1995/2004 online)
have argued that the WCST is "a classic concept attainment task"
(that is to say, subjects have to formulate, apply, and monitor hypotheses as
to the "rules" of the game at hand), and, as such, offers "a
number of different possible sources of perseveration".
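ASIDE: Dunbar and Sussman's point can be made concrete with a toy Python sorter. The rule set and shift logic below are our own simplification, not their experimental task: a healthy hypothesis-tester abandons a disconfirmed rule, while a perseverator keeps applying it .....

RULES = ["colour", "shape", "number"]

def next_hypothesis(current, feedback_correct, perseverative=False):
    """Shift to the next candidate rule on error feedback - unless perseverating."""
    if feedback_correct or perseverative:
        return current                                      # rule retained
    return RULES[(RULES.index(current) + 1) % len(RULES)]   # rule shifted

print(next_hypothesis("colour", feedback_correct=False))                      # "shape"
print(next_hypothesis("colour", feedback_correct=False, perseverative=True))  # stuck on "colour"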
The
frontal lobes are also increasingly being implicated in phenomena such as
consciousness and volition. For example, Badgaiyan
(2000/2004
online) has studied executive control and the will, and reviews
neuroimaging studies which suggest "several cortical areas that mediate
different functions of the central executive" (p39). The cingulate cortex,
for example, is activated in the Stroop task and is apparently involved in
"executive functions such as error detection and response monitoring"
(p39). Indeed the cingulate is involved in so many executive functions that it
may be "considered crucial for execution of supervisory function"
(p39).
11 - Advanced Memory Theory (4): Hemispheric
Differences
Although
Shallice (1982) found a left hemisphere deficit on TOL performance [see Section
6], and Mesulam (1995) found contralateral symptoms of
"motivational neglect" with unilateral lesions of the anterior
cingulate cortex, most hemispheric differences in frontal function are
restricted to the more posterior motor areas. It would be wrong, however, to
proceed without noting the writings of Elkhonon
Goldberg at the New York University Medical Centre. Following a review of
the literature on hemispheric differences, Goldberg, Podell, and Lovell (1994)
have suggested the following principle of hemispheric specialisation .....
"..... the right
hemisphere is critical for the exploratory processing of novel cognitive
situations to which none of the codes or strategies preexisting in the
subject's cognitive repertoire readily applies. The left hemisphere is critical
for processing based on preexisting representations and routinised cognitive
strategies. The traditional language/nonlanguage dichotomy then becomes a
special case of this more fundamental principle. The novelty-routinisation
principle of hemispheric specialisation is different from the more traditional
ones in several major respects. First, the distinction between cognitive
novelty and cognitive routinisation is not limited to humans. [.....] In
addition, the novelty-routinisation approach emphasises individual differences
and argues against the fixed assignment of particular materials and tasks to
one or the other hemisphere. What is cognitively novel to one individual is
familiar and routinised to another. Finally, the novelty-routinisation
hypothesis offers a dynamic rather than a static view of hemispheric
specialisation. It implies that the pattern of hemispheric specialisation is
different in a given individual at different developmental stages.
Specifically, it implies that the locus of cortical control shifts from the
right to the left hemisphere in the course of cognitive skill
development." (Goldberg, Podell, and Lovell, 1994, pp372-373; bold
emphasis added.)
12 - Control Architecture Considerations
"Nothing is more chastening
to human vanity than the realisation that the richness of our mental life - all
the thoughts, feelings, emotions, even what we regard as our intimate self -
arises exclusively from the activity of little wisps of protoplasm in the
brain." (Ramachandran and Hirstein, 1997, p429.)
We
must now make an explicit connection between two study areas - the Tim Shallice
with the reputation as frontal lobe theorist is the same Tim Shallice who
teamed up with Donald Norman to produce the Norman-Shallice model of willed and
automatic action control. McCarthy and Warrington (1990) summarise that model
as follows .....
"Norman & Shallice
(1980) and Shallice (1982) have adopted a computational information-processing
approach to modelling disorders of 'executive' functions. Norman & Shallice
took as their starting point the distinction between habitual and novel action
routines [and] suggested that the selection and integration of these two
classes of action were based on different principles. Norman & Shallice
proposed that control over the sequencing and integration of the components
required for complex but well-established patterns of behaviour is mediated by
hierarchically organised 'schemas' or motor representations [.....]. In driving
to work the highest level of the schema might be a comparatively abstract
representation of the route. Such high-level schemas can call up subordinate
'programs' or subroutines; thus 'driving to work' will have component schemas
including at the lowest level instructions to muscles to press pedals and turn
the steering wheel. Norman & Shallice suggested that under many conditions
we can function on "auto pilot", selecting and integrating cognitive
or behavioural skills on the basis of established schemata. Once a schema has
been triggered it 'competes' for dominance and control of action by a process
of inhibiting other schemas which would be likely to conflict with it [.....]
(a process which they termed contention scheduling). When one needs to
suppress an automatically attractive alternative source of stimulation, to plan
novel solutions to problems, or to change flexibly from one pattern of
behaviour to another, the selection of schemas on the basis of the strength of
their initial activation might be disastrous. Norman & Shallice argued
that, under these circumstances, the selection of schemas was modulated by the
operation of a supervisory attentional system [which] can provide a
boost to a schema's level of activation, thereby enabling it to 'get ahead' in
the competition for dominance despite starting from a handicapped
position." (McCarthy and Warrington, 1990, pp362-363; italics original.)
ASIDE: If interested in
the topic of action schemas and motor programming - itself a major research
area - see our e-paper on "Motor Programming". The term
"contention scheduling" was borrowed originally from computer
science, where it was an important aspect of the sort of "job execution
scheduling" carried out by virtual machine mainframe operating systems -
see Section 1.2 of our e-paper on "Short-Term Memory Subtypes in
Computing and Artificial Intelligence" (Part 5), for a longer
introduction.
Now
we mention the SAS theory because it may well be that defects in contention
scheduling underlie the sort of utilisation behaviour discussed in Section 6.
For example, Shallice, Burgess, Schon, and Baxter (1989) report on signs of UB
in case LE, a 52-year-old right-handed man .....
"On September 17, 1987,
his son reported that the patient was found early in the morning wearing
someone else's shoes, not apparently talking or responding to simple commands
but putting coins into his mouth and grabbing imaginary objects. [.....]"
(Shallice et al, 1989.)
These
authors then fit these observations into the Supervisory System theoretical
framework as follows .....
"First, in the absence of
a working Supervisory System in the frontal lobes, perceptual input alone can
lead to activation of an action schema and its selection in contention
scheduling; for instance, the sight of a pair of scissors and paper activates
the actions associated with the objects, and these behaviours are then carried
out (selected) in the absence of supervisory inhibition. This is precisely what
happens in utilisation behaviour. [.....] Secondly, even when the Supervisory
System is impaired - as can be assumed for LE - the probability of utilisation
behaviour occurring will depend on whether any action schema which is being
randomly triggered in contention scheduling is or is not being inhibited
by an already active schema. This is more likely when the patient has been
instructed to carry out a task and task-irrelevant stimuli are being presented.
The model would predict that utilisation behaviour should occur most frequently
in a patient with an impaired Supervisory System when no task is being
undertaken. Utilisation behaviour should also be more frequently observed when
the required task is in the auditory-verbal domain, when there would be no
overlap in the cognitive subsystems involved, than when a task-irrelevant
visual input is competing with the triggering stimuli. Such a model provides a
more articulated version of Lhermitte's general position and predicts
differences in the frequency of utilisation behaviour according to the
particular activity the patient is performing. It fits the pattern of results found
in our patient LE." (Shallice et al, 1989, pp1596-1597.)
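ASIDE: For illustration only, here is a toy Python rendering of contention scheduling with and without a supervisory boost. It is our own sketch of the logic, not Norman and Shallice's implementation, and the schema names and activation values are invented .....

from dataclasses import dataclass

@dataclass
class Schema:
    name: str
    activation: float   # triggering strength from perceptual input

def select(schemas):
    """Contention scheduling: the strongest schema wins and inhibits rivals."""
    winner = max(schemas, key=lambda s: s.activation)
    for s in schemas:
        if s is not winner:
            s.activation -= 0.5 * winner.activation   # lateral inhibition
    return winner

def supervisory_boost(schemas, goal, boost=1.0):
    """The SAS tops up the goal-relevant schema's activation."""
    for s in schemas:
        if s.name == goal:
            s.activation += boost

schemas = [Schema("pick-up-scissors-and-cut", 0.9),    # triggered by the object
           Schema("continue-assigned-task", 0.4)]
supervisory_boost(schemas, "continue-assigned-task")   # intact SAS
print(select(schemas).name)   # goal-relevant schema wins; omit the boost and the
                              # object-triggered schema wins - utilisation behaviour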
The
concept of control layers was taken up in detail by Donald T. Stuss of the
Rotman Research Institute, Toronto, whose three-level model runs as follows .....
"This model represents a
hierarchy of brain abilities, meaning that there are 'higher' and 'lower' order
functions. [.....] An important component of the model is the feedback loop
present at each level. Incoming information is forwarded to a comparator which
analyses in a pattern-recognition format the incoming specific fact or group of
facts. These comparator values have been developed through previous experience,
modelling, and training. If there is no difference between the input and
comparator values, no adjustment is necessary. If they are different, a change
output is automatically triggered. Depending on the level or the demand, this
could be action to change the environment, a call for increased information
from the environment, or a requirement for direction from higher levels and
alteration of the comparator. A feedforward system is postulated to preset the
system in an anticipatory manner. Three levels of monitoring or
feedback-feedforward systems are proposed [figure]. The lower level(s), at
least, may be considered as modules as described in cognitive psychology
[citation]. One could postulate more levels or smaller feedback loops within
particular systems. The three levels proposed are satisfactory as a skeleton
outline for the specific needs of this paper. Neuropsychological input at the
lowest level presented is sensory/perceptual and is domain- or module-specific:
consequently, multiple systems relating to specific functions may exist. At
this level operations may range from simple to complex. Regardless of their
complexity, they are overlearned and routinised. The processes are thus
virtually automatic - speed of operations is rapid. [.....] The routinised
activity is not conscious or easily changed by conscious effort. The process of
routine selection of routine actions or thought processes has been labelled
'contention scheduling' by Norman and Shallice [citations]. [.....] The second
level described is associated with the executive control or supervisory
functions of the frontal lobes [citation]. The neural input for this second
level derives primarily from the information elaborated by the sensory/perceptual
level. The neural substrate for this second level is the well-documented
reciprocal connections of the frontal regions with all posterior multimodal and
basic limbic structures [citations]. The primary role of this level is the
conscious direction of the lower level systems toward a selected goal. This
control is higher order, an adjustment of the ongoing activities of lower
modules [citations]. This control may well be divided into specific functions
such as anticipation, goal selection, plan formulation, evaluation and
monitoring of behaviour, and anterior attentional functions such as selectivity
and possibly persistence [citations]. [.....] At this level, the feedback loop
is slower, deliberate, effortful, and required in the processing of new or
complex material where routine responses or knowledge are not available. With
repetition, the new complex behaviours requiring active conscious deliberation
may eventually become automatic in the sense that control of these behaviours
in ordinary circumstances is transferred to a lower level. The highest level
described is consciousness - the ability to be aware of oneself and the
relation of self to the environment. This prefrontal self-awareness
appears to be similar to the concept of metacognition, the ability to reflect
on any process itself. This level implies a self-reflectiveness of all levels,
including its own. Inputs are presumably the abstract mental representations of
the executive's alternative choices. The primary anatomical representation of
this highest level has been postulated as the prefrontal region [citations].
The abstract representation of this concept, however, necessitates involvement
of all functionally lower levels (of the brain)." (Stuss, 1992, pp10-12;
bold emphasis added.)
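ASIDE: The comparator loop Stuss describes is at heart a standard control-engineering idea, and can be sketched in Python as follows. This is our own minimal rendering, not Stuss's formal model, and the tolerance value is arbitrary .....

def comparator(observed, expected, tolerance=0.1):
    """Compare input against a learned expectation; emit a change output on mismatch."""
    error = observed - expected
    if abs(error) <= tolerance:
        return None    # no adjustment necessary
    # the change output may act on the world, request more input, or escalate
    return {"error": error, "response": "act / seek input / escalate"}

print(comparator(0.95, 1.00))   # within tolerance -> None
print(comparator(0.60, 1.00))   # mismatch -> change output triggered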
As
far as the modularity of the control architecture is concerned, Godefroy et al (1999/2004 online)
have (quite rightly) lamented cognitive science's general confusion as to what a
biological control system qua control system needs to do
.....
"The underspecification
of control operations stems from the methodology used to assess executive
functions. Most studies have used complex tests such as card sorting, planning,
and problem solving tests, which involve numerous cognitive processes and
greatly load short-term memory [citations]. Thus, the finding of low
performance does not allow to characterise the impairment in terms of cognitive
processes. The underspecification of control operations results in a severe
limitation of any theoretical account. [.....] The cognitive architecture of
the models of Shallice and Baddeley relies on two main assumptions: (1) a
hierarchical organisation [and] (2) that control processes depend on an amodal
central-supervisory system regulating specific-purpose 'slave' modules. The
fractionation of the central control system has been suggested by Shallice
(1994) and Baddeley (1996) mainly on the basis of the few clinical data showing
the large variety of executive deficits. However, it remains largely unknown
whether executive functions depend on a unique control system or on multiple
subsystems." (Godefroy et al,
1999/2004 online, pp2-3.)
Godefroy's
team therefore recommends a more focused attack on the problem, and identifies
three discrete research objectives, namely (1) to decipher the role played by
short-term storage, (2) to establish the "architecture of executive
functions" (p16), and (3) to specify the various "control
operations". It is a rare treat to see such a technical approach in an
area usually reserved for clinicians and philosophers. However, we should not
underestimate the complexity of the task, because .....
"..... studies of
regional cerebral metabolism have identified seventeen functionally
distinct areas within frontal cortex, excluding cingulate and orbital cortex,
and have led Roland (1984) to conclude that in humans any structured processing
of information requires the involvement of one or more regions within the
frontal lobe." (Parker and Crawford, 1992, p267.)
Our
own views on the brain as an instance of a modular real-time control system are
set out in Sections 3.7 and 3.8 of our e-paper on
"Short-Term Memory Subtypes in Computing and Artificial Intelligence"
(Part 6).
Finally
under this section, we may mention Upton and Thompson's (1999) work with the
Twenty Questions task .....
ASIDE: The Twenty
Questions parlour game was made a tool of formal scientific enquiry by Mosher
and Hornsby (1966), who used the method to investigate the developmental stages
of purposeful questioning behaviour in normal children. Subjects were told that
the experimenter had a type of animal in mind, and had to work out what it was
by accumulating question-and-yes/no-answer knowledge about it. They found that
children become increasingly able (a) to guide the enquiry process "by
what [they] found out earlier" (p101), and (b) to rule out whole classes
of irrelevant possibilities at a time. The test was then upgraded for clinical
use with alcoholics by Laine and Butters (1982), and with frontal lobe patients
by Klouda and Cooper (1990) .....
Concerned
that Klouda and Cooper (1990) had only examined five patients, Upton and
Thompson followed up with a much larger sample. They assessed 88 patients with
frontal lobe dysfunction (42 left frontal, 32 right frontal, and 14 bifrontal),
and compared them to 57 temporal lobe neurological controls and 28 normal
controls. Here is the nature of the scoring scheme used .....
"All questions asked by
the participant were recorded verbatim and classified into one of three types:
(1) Constraint: Questions of this type are the most effective search
question. These types of question narrow the field by as much as half, by
eliminating a series of different types of animals (eg. 'Does it have four
legs?' or 'Does it live in water?'). (2) Pseudoconstraint: Questions of
this variety are a less effective search strategy. Although they appear to be
constraining, they only apply to one particular type of animal (eg. 'Does it
have a trunk?' or 'Does it bark?'). (3) Hypothesis Scanning: This is a less
effective search strategy and basically involves guessing with no previous
basis for such a guess (eg. 'Is it a dog?' or 'Is it a cat?'). The number of
each type of question was recorded. Apart from the number of different types of
question asked, two other indices of performance were recorded: (a) the number
of questions needed to arrive at the correct response (maximum 20) and (b) the
number of questions asked before the first guess (presumed to be a measure of
impulsivity." (Upton and Thompson, 1999, pp206-207; italics original.)
Results
indicated that the bifrontal group required the most questions (mean = 17.32),
the left frontal group were next (mean = 14.63), and the right temporal next
(mean = 12.36). Right frontals, left temporals and normals all fell in the
range 10 to 12. Further analysis by question type revealed that the inefficient
hypothesis scanning questions (that is to say, the specific guesses) were most
common in the bifrontal group. In addition, orbitofrontal patients showed
impaired understanding of the strategy to be employed, although - surprisingly
- dorsolateral patients were not thus impaired.
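ASIDE: The efficiency gap between the question types can be made concrete in a few lines of Python. The animal set and features below are invented for illustration: a constraint question splits the field, whereas hypothesis scanning eliminates at most one candidate per turn .....

animals = {"dog":  {"four_legs", "barks"},
           "cat":  {"four_legs"},
           "fish": {"lives_in_water"},
           "duck": {"lives_in_water", "flies"}}

def constraint_question(candidates, feature, answer):
    """'Does it live in water?' - keeps only the matching part of the field."""
    return {a: f for a, f in candidates.items() if (feature in f) == answer}

def hypothesis_scan(candidates, guess, answer):
    """'Is it a dog?' - a bare guess with no previous basis."""
    if answer:
        return {guess: candidates[guess]}
    return {a: f for a, f in candidates.items() if a != guess}

print(sorted(constraint_question(animals, "lives_in_water", False)))  # ['cat', 'dog']
print(sorted(hypothesis_scan(animals, "dog", False)))   # three candidates remain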
Well
that's the frontal theory, folks, and at this point the question may reasonably
be put as to what this enormous cauldron of opinion and data actually boils
down to if you are a clinician who wishes merely to manage a caseload. In the
closing sections of this handout, we look at some of the practical
recommendations which can be made.
13 - Paediatric Frontal Management Issues
Turning
firstly to the problems of paediatric management, Tranel, Anderson, and Benton
(1994) remind us of the normal developmental sequence .....
"Development and
maturation of executive functions in normal children have been addressed in
several recent studies. Levin, Culhane, Hartmann, et al (1991) studied 52 children
aged 7-15, with a battery of tests. [.....] A principal components analysis
revealed a three-factor solution which included factors related to concept
formation, freedom from perseveration, and planning. Similar results were
reported in a study by Welsh, Pennington, and Groisser (1991). They studied
'executive function' in 100 children aged 3-12. Executive function was defined
as 'goal-directed behaviour, including planning, organised search, and impulse
control', and six measures were utilised: visual search, verbal fluency, motor
sequencing, the [WCST], the [TOH], and the Matching Familiar Figures Test. The
authors found that the age at which children achieved adult-level performance
on the tasks varied considerably across different tasks. On visual search and a
simple version of the TOH, for example, 6-year-old children performed at adult
levels. More complex tasks, including the WCST and a complex version of the
TOH, showed more protracted development curves, and even 12-year-olds had not
achieved adult levels on some of the response variables (eg. complex planning
on a four-disc TOH)." (Tranel, Anderson, and Benton, 1994, p131.)
The
frontal lobes play a part in Attention-Deficit (Hyperactivity) Disorders
(ADD/ADHD). It does not take long to spot the potentially frontal aspects in
the following summary of the symptoms .....
"Attention Deficit
Hyperactivity Disorder or ADHD is regarded as the most common neurobehavioural
disorder affecting children. Its prevalence is estimated to be between 3-10% of
school age children with a two-three times greater preponderance in boys
compared to girls. The diagnosis of ADHD is based upon clinical grounds as
defined by DSM-IV criteria. Core symptoms include hyperactivity, impulsivity,
distractibility, and inattentiveness." (e1.)
Anderson
and Pentland (1998) warn of residual attentional deficits following childhood
CHI. They found that head-injured adolescents "exhibited deficits on a
wide range of summary variables extracted from attention tasks" (p283).
"Most tests did not
perform much above base rate levels of positive predictive power for the
subtype of ADD+H and rarely exceeded that which might be achieved by tossing a
coin." (Barkley and Grodzinsky, 1994, p137.)
Bishop
(1993) has speculated on a possible relationship between executive functions
and "theory of mind", thus making frontal lobe psychology directly
relevant to clinicians dealing with autistic children .....
"[Ozonoff, Pennington,
and Rogers (1991)] proposed that both tests [ie. the frontal tests of executive
function already described, and the "Sally-Anne" genre of theory of
mind tests] involve the use of stored information to govern behaviour.
Impairment in using different types of stored information could affect
performance on a wide range of superficially different tasks. To perform well
on a theory of mind task, subjects must access internal representations of the
mental state of others; to perform executive function tasks, they must generate
representations of hypothetical configurations to guide their planning and
problem solving. The problem with this explanation is that it is so general
that it would be easy to explain almost any deficit on any task in terms of the
theory, because most behaviours are governed to some extent by stored
information. [.....] An alternative possible point of similarity between
frontal lobe patients and autistic children could be that in both cases
behaviour is largely driven by external environmental stimulation. Shallice and
Burgess [(1996)] argue that people have available a large but finite set of
action and thought schemas that, like computer programs, can be activated if
well-learned triggers (either from the environment or from the output of other
schemas) are excited. The problem for the organism is how to select the
appropriate schema when several are activated at once. According to Shallice
and Burgess this process of contention scheduling [glossary] is controlled by a
supervisory system, whose operation depends on the integrity of the frontal
lobes. When the supervisory system malfunctions, external stimulation will
elicit responses associated with the stimulus, but there will be little
evidence of planned behaviour. Impairments of cognitive function will be
particularly apparent in situations where the individual is presented with
environmental stimuli but required to withhold the habitual response to these
and to perform some other operation instead. For instance, Baddeley (1986)
described a man with an acquired frontal lobe lesion who was given a piece of
string, a ruler and scissors and instructed to measure out a piece of string in
order to cut it later. He immediately started to cut, and when told not to
replied: 'Yes, I know I'm not to cut it', while continuing to do so. Viewed in
this light, it is of interest to note that in the theory of mind tasks
described so far, the child is required to make a statement that is directly
contradictory to the visible evidence. Thus, in the Sally-Anne experiment,
children must say that Sally will look for her marble in the basket when in
reality they have seen it in the box. [.....] This raises the question as to
whether the child actually does have a theory of mind and appreciates false
beliefs, but is unable to act on this knowledge because of an inability to
inhibit a more prepotent response." (Bishop, 1993, pp287-288.)
Mateer
and Williams (1991) have studied the effects of frontal lobe injury in children
and recommend the following classroom management guidelines .....
1 - Do Not Offer Options: This is because
frontal cases have difficulty recognising and prioritising alternative courses
of action. Prepare instead a simple course of action.
2 - Do Not Bargain: This is because
frontal cases have difficulty recognising contingent benefit.
3 - Structure and
Predictability: These are important aids to memory and the scheduling of effort, so
detailed timetables and explicit deadlines should be provided.
4 - Use Memory Books and
Organisers: These are important to store modelled and directed good practice as it
is established during management.
5 - Use Direct Instruction: Again because
frontal cases are generally disorganised and have difficulty abstracting the
relative importance of things for themselves, their teachers should state
precise final and substage objectives.
6 - Provide Study
Skills/Organisational Assistance: Under this heading, it is useful (a) to distribute
practice over time, and to follow a shared written plan, and (b) to identify
the "main idea" in any new material.
7 - Provide Positive Feedback: Use
"constructive timely feedback" to reinforce students' positive
self-esteem.
At
the same time, clinicians need to guard against doing too much of their
patient's thinking for them. For example, Ylvisaker and Feeney (draft
2004 online) have reviewed the literature on paediatric frontal
rehabilitation and identify the fundamental problem as one of measuring
patients' "self-determination" in a clinician-patient encounter where
the clinician is likely to be doing all the determining. They adopt Wehmeyer,
Agran, and Hughes' (1998) analysis of self-determination into four components
.....
(1) Autonomy: This is Wehmeyer,
Agran, and Hughes' first self-determination factor, namely "acting in a
way that is free from undue external influence or interference"
(p2).
(2) Self-Regulation: This is Wehmeyer,
Agran, and Hughes' second self-determination factor, namely "formulating,
enacting, and evaluating plans of action, with revisions as necessary"
(p2).
(3) Psychological Empowerment: This is Wehmeyer,
Agran, and Hughes' third self-determination factor, namely "acting on the
belief that one can influence important outcomes in the environment and in
life" (p2).
(4) Self-Realisation: This is Wehmeyer,
Agran, and Hughes' fourth self-determination factor, namely "using a
reasonably accurate knowledge of self (strengths and needs) and acting in a
manner that capitalises on this knowledge in a beneficial way" (p2).
Ylvisaker
and Feeney also echo Stuss and Benson's (1986) observation that "in the
context of standardised assessment, the examiner and testing situation function
as prosthetic frontal lobes" (p4). They therefore recommend "a
distrust of clinical programs that fragment integrated aspects of human
function and decontextualise the treatment" (p4), thus .....
"To be successful with
any difficult task, children need to (a) know that it will be difficult
(presupposing some awareness of strengths and limitations), (b) set a
reasonable goal, (c) formulate (however unconsciously) a plan to achieve the
goal, (d) initiate goal-directed action, (e) refrain from actions that
interfere with success, (f) attend to and evaluate how well they are doing, and
(g) try another plan or strategy if things are not going well, remaining
optimistic about the possibility of success. In addition, they need to know
that they can control the outcome of their efforts (at least to some degree)
and take responsibility for that effort (ie. internal locus of control, Rotter,
1966)." (Ylvisaker and Feeney, 2003/2004 online, p4.)
14 - Adult Frontal Management Issues
The
1990s also saw tests increasingly being packaged up as glossy, standardised
commercial psychometric products to sit alongside older packages such as the
Halstead-Reitan and Luria-Nebraska batteries. A good example of this trend is
Wilson et al's (1996) Behavioural Assessment of the Dysexecutive Syndrome
(BADS) Test [glossary]. The
package requires (a) the subject to complete six separate practical tests, and
(b) both subject and carer(s) to complete a 20-item diagnostic questionnaire.
The tests are as follows: (1) Temporal Judgement, (2) Rule Shifting, (3) Action
Programme, (4) Key Search Task, (5) Zoo Map Task, and (6) Modified Six
Elements Test [for the full "horse's mouth" history of the
development of the BADS test, see Wilson et al (1998)]. Other commercially
packaged assessments which wholly or partly address frontal processing include
.....
Riddoch and Humphreys' (1993) Birmingham Object Recognition Battery (BORB), and
Dubois, Slachevsky, Litvan, and Pillon's (2000) Frontal Assessment Battery (FAB).
As
summarised in Chayer (2002/2004
online), the designed-for-bedside FAB includes a similarities test, the WFT
for the letter S, a motor imitation task, two Luria-type tapping tasks, and a
particularly clever test of utilisation behaviour (qv).
As
far as the generally "disinhibited" orbitofrontal patients are
concerned, Varney and Menefee (1993) report the practical problems .....
"Patients with TBI,
particularly when mild, may perform normally on a wide variety of
neuropsychological measures and may appear relatively normal within the
structure of standard psychological interviews. At the same time, they are
often substantially impaired in independent self-determined 'adult' behaviours
and activities of daily living. [.....] Patients with TBI may provide
inaccurate histories, overreport or underreport symptomatology, and lack
insight concerning their behaviour and its effects on others in their
environment. [.....] even the most state-of-the-art testing fails to identify
the manifestly disabled 50% of the time or more. Thus, it is essential that
collateral informants be interviewed and vocational histories be obtained from sources
other than the patient." (Varney and Menefee, 1993, pp33-41).
Jacobs
(2004 online) is not
convinced that the Mini Mental Status Examination adequately addresses frontal
function, and
suggests a "Maxi Mental" test to go with it. While this would need to
evaluate the integrity of all four major cerebral lobes AND the lateralisation
of their functions, its frontal aspects could be assessed using a combination
of the Go/No-Go task, the WFT, and "Serial Seven Subtraction" tests,
in conjunction with "general bedside and neuropsychological testing"
for aphasia, dyspraxia, and neglect [see specific suggestions].
Unfortunately,
nothing is ever easy in cognitive science, and clinicians will regularly face
one essentially insoluble problem, namely that of deciding how much improvement
to go for. The point is that not all "normal" adults attain Piagetian
formal operational thought in the first place (Long, McCrary, and Ackerman,
1979; Shute, 1979), remaining concrete reasoners in adult bodies all their
lives! Indeed, Shute and Huertas (1990) identify four specific formal
operations which may have been more or less lacking in a frontal patient before
their hospitalisation, namely (1) probabilistic reasoning, the ability to
"reason past one's own experience", (2) propositional reasoning, the
ability to "develop hypothetical solutions to presented problems",
(3) combinatorial reasoning, the ability to "see the world in terms of
possibilities rather than absolutes", and (4) proportional reasoning, the
ability to "recognise the relationships between behaviours and the
consequences of those behaviours" (p2). It follows that "if frontal
lobe function spans a substantial range of performance among 'normal'
individuals, the task of identifying frontal dysfunction is bound to be
difficult" (p3).
There
is also the care environment to take into account. Campbell, Duffy, and
Salloway (1994) have argued for an element of "family therapy"
when dealing with dysexecutive syndrome patients, thus .....
"Family therapy is an
important and often neglected treatment modality in managing dysexecutive
syndromes. The significant personality changes induced by frontal lobe
impairment clearly destabilise the family system. When this is not recognised,
optimal treatment cannot be achieved. A destabilised family system creates an
ambiguous, emotionally charged, potentially toxic environment that affects the
patient's behaviour in a decidedly negative way. Family therapy begins with
education. Families must understand the patient's changed behaviour. Signs of
executive dysfunction appear to the uninitiated as willful behaviour. Abulic [glossary] patients are often accused
of being lazy or uncaring. Disinhibited patients appear insensitive.
Disorganised patients with apparently normal cognitive abilities generate
frustration. This adds considerably to the burden on family members who are
struggling to cope with loss of the premorbid family system. The McMaster model
of problem-centred systems family therapy provides a useful assessment and
treatment tool for the families of patients with dysexecutive syndromes
[Epstein, Bishop, and Levin (1978). This] model requires a careful assessment
of six essential areas of family functioning: roles, problem solving, behaviour
control, communication, affective involvement, and affective responsiveness
[citation]. All six areas are affected by executive dysfunction. The model
provides the basis for a structured treatment approach that focuses on the
specific problems identified by the assessment. Active collaboration of the
family is required, with the therapist acting as facilitator. The major objectives
of therapy include family openness, clarity of communication, and the
development of active problem solving skills. [..... Indeed,] the stepwise
assessment of problem solving offered by the McMaster model clarifies this
process for families in a logical way [and] disorganised patients respond well
to the breakdown of problem solving into steps." (Campbell, Duffy, and
Salloway, 1994, pp415-416.) [Click here for a McMaster Model
slideshow, if interested.]
Finally,
Wheatley and McGrath (1997/2004 online)
warn that care staff should be aware that frontal patients' often all-pervading
lack of initiative can easily lead to an unwarranted reputation for
malingering. It is also important to remember that frontal patients -
particularly the orbitofrontal ones - may have raised levels of anxiety, and
that this itself may present management problems.
15 - The Rehabilitation Debate
For
a week in September 2002, a number of leading researchers and clinicians set
out their positions on the state of the art in cognitive rehabilitation, and we
now summarise a selection of their views .....
Paul Burgess (University College London) argued that rehabilitation can proceed
no faster than our understanding of the damaged brain systems allows .....
"Considerable treatment
advances have been made in this area in the last few years. However in order to
develop new methods, and in some cases to explain the success or failure of
existing ones, we need to understand the causes of the particular symptoms. For
this reason rehabilitation can only proceed as fast as our knowledge about the
basic brain systems that are damaged will allow." (Burgess, 2002, p7.)
Max Coltheart (Macquarie University) set out what cognitive neuropsychology has
to offer the therapist .....
"The theories of
cognition which cognitive neuropsychology uses are modular: that is, they offer
descriptions of what the particular information-processing components are that
make up the cognitive system responsible for a person's performance in some
particular cognitive domain. These descriptions are sufficiently explicit that
they define what the actual functions of the components are, and so they tell
us what tasks should be used to assess whether any particular component of the
system is functioning normally or not. Cognitive neuropsychology thus
automatically provides a guide to rational assessment. Examples of assessment
batteries derived from theories in this way are PALPA (for assessing language;
Kay, Lesser, and Coltheart, 1992) and BORB (for assessing visual perception and
recognition; Riddoch and Humphreys, 1993). This kind of assessment allows
targeted therapy programs to be devised. What it does not provide, at least not
currently, are ideas regarding what form the targeted treatment should take.
This is left up to the experience and ingenuity of the therapist."
(Coltheart, 2002, p9)
Coltheart
saw the primary clinical decision as being whether to go for
"compensation" of a function or its "restoration". He then
warned that this decision would never be easy until assessments were improved
to the point of identifying whether neural resources for restoration were
actually available.
John R. Crawford (University of Aberdeen) and Jonathan Evans (MRC, Cambridge)
also contributed position statements.
Elizabeth
Glisky (University of Arizona) argued that modern
approaches to memory rehabilitation showed a distinct improvement over
remediation prior to the mid-1980s, "when the dominant approach to
treatment focused on reducing or eliminating underlying memory impairment by
repetitive drills and practice" (Glisky, 2002, p10). She was particularly
insistent on the need for the "generalisation of training gains beyond the
training context" (ibid.), but saw little value in simple
repetitive practice unless it had day-to-day relevance. Moreover, although the
nature of the brain's various memory systems implied that we often needed to
stimulate the hippocampus, we actually had "no real idea" how to do
so in practice, neither in terms of tasks which would "force episodic
binding" nor of when to apply them if we had them.
Like
Coltheart, David Howard (University of Newcastle) approached rehabilitation
from the standpoint of language, cautioning that the underlying science is far
from complete .....
"..... both language
production and comprehension involve complex and interacting brain systems,
distributed over the cerebral cortex. The nature of the processing taking place
in different brain regions is not yet fully understood." (p11)
Narinder
Kapur (Southampton) spoke on the assessment and rehabilitation of memory
disorders .....
ASIDE: To see how paired
associate learning can be used as a screening task in the early detection of Alzheimer's
disease, see Fowler et al (2002/2004 online).
Catherine
Mateer (University of Victoria) summarised the ground rules for effective
cognitive intervention .....
"A number of basic
assumptions and well-established principles for ethic and effective
rehabilitation have been established. (1) Interventions that address cognitive
impairments must be seen as a collaborative enterprise involving patients,
family, professionals, and communities. (2) Interventions must be goal oriented
and address practical and meaningful aspects of the person's everyday life. (3)
Cognitive interventions are dynamic and often involve a combination of
activities designed to maximise areas of cognitive functioning, to increase
insight and awareness, and to identify and implement internal and external
compensatory strategies. (4) Cognitive abilities are interlinked with
behavioural, emotional, and psychosocial functioning and must be addressed in
any effective treatment programme. (5) Effective neuropsychological
rehabilitation relies on a broad theoretical base incorporating frameworks,
models, and methodologies from many different fields of scientific, medical,
neuropsychological, social, and ethical inquiry." (Mateer, 2002, p13.)
She
went on to recommend that the critical elements for effective cognitive
intervention were a broad range of activities, tailored to the individual, with
multiple (but integrated) targets, and applied collaboratively with the carers
involved. Cognitive behavioural interventions were only appropriate in cases
where some insight and self-regulatory metacognition had been spared.
One
of Coltheart's co-workers, Lyndsey Nickels (Macquarie University), addressed
the treatment of language disorders. Barbara Wilson (MRC, Cambridge) then
turned to the everyday memory and planning problems reported by patients and
their carers .....
"It is these problems
that should be targeted in rehabilitation. Although there is little evidence
that rehabilitation can restore memory functioning, there is considerable
evidence that disabilities can be treated effectively. For example, a
randomised control trial allocating people to a paging system [..... found]
convincing evidence that for the group as a whole and for the majority of
people in the study, the paging system reduced everyday memory and planning
problems." (Wilson, 2002, p16.)
She
went on to stress that rehabilitation was a two-way interactive process .....
"No one model, theory, or
framework, is sufficient to address the many and complex disabilities faced by
people requiring cognitive rehabilitation. Models of cognitive functioning are
necessary but not sufficient. We need to also refer to models of assessment,
learning, behaviour, emotion, compensation, and recovery at the very
least." (Wilson, 2002, p16.)
And
finally, Andrew Worthington (Brain Injury Rehabilitation Trust) complained that
mainstream cognitive models still treat the higher executive functions as an
unanalysed black box .....
ASIDE: ..... which is
fair comment, when one recalls that the large X-shaped psycholinguistic
diagrams such as PALPA habitually treat higher
functions as an unanalysed black box. The authors did this deliberately, in
order to make progress elsewhere, usually dwelling on the peripheral lexical
processing routes implicated in dyslexia and speech production. The time has
now come, in other words, to open up the black box once and for all, and it is
because we fear that psychologists lack the technical modelling skills to do
this that we have made available our e-tutorial on "How to Draw Cognitive
Diagrams".
16 - References
See the
Master References List
[Home]
[How
to Draw Cognitive Diagrams]