Course Handout - The History of the Psycholinguistic Flow Model
Copyright Notice: This material was
written and published in Wales by Derek J. Smith (Chartered Engineer). It forms
part of a multifile e-learning resource and, subject only to acknowledging
Derek J. Smith's rights under international copyright law to be identified as
author, may be freely downloaded and printed off in single complete copies
solely for the purposes of private study and/or review. Commercial exploitation
rights are reserved. The remote hyperlinks have been selected for the academic
appropriacy of their contents; they were free of offensive and litigious
content when selected, and will be checked periodically to ensure they remain so. Copyright © 2002-2018, Derek J. Smith.
First published online 07:26 BST 3rd May 2002,
Copyright Derek J. Smith (Chartered Engineer). This
version [2.0 - copyright] 09:00 3rd July 2018.
An earlier version of this material appeared in Smith (1997; Chapter 5).
1 - Psycholinguistic Modelling in Historical Context
The
first sustained use of modelling techniques to help explain human communication
was by the nineteenth century aphasiologists, and the
efforts of Wernicke
(1874), Kussmaul (1878), Lichtheim
(1885), and Freud
(1891) provide particularly good examples of what could be achieved in this
way. However, in reviewing the achievements of these
so-called "diagram makers", Head (1926) was largely dismissive, and
human communication remained out of favour as a study
area for modelling until the invention of the computer prompted a number of
first generation models of attention and memory in the 1950s and 1960s. Here
are some of the early efforts:
To see Craik's (1945)
hierarchical model of biological control, click here.
To see Broadbent's (1958)
"filter" model of attentional processing, click here.
To see Atkinson and Shiffrin's
(1971) model of memory processing, click here.
Another
influential early worker was the young British psychologist John Morton. He was
studying the perceptual factors affecting word recognition and reading, and summarised the processes he suspected of being at work in a
processing model based around a mental dictionary. Here is how he put it:
"It seems reasonable to
assume that when a particular word is available as a response there is an event
in the nervous system in a particular place regardless of the circumstances
leading to the word availability. Such a part of the nervous system can be
called a 'neural unit' [and the] collection of units makes up a
'dictionary'." (Morton, 1964:217.)
It
is this use of the word "dictionary" which is significant, because
the dictionary concept is a natural metaphor for the structure of long-term
semantic memory [Memory
Glossary]. Words can readily be seen as "units" in a mental
lexicon (i.e. word store) composed of many such units, and to use a word from
this mental dictionary, you simply have to "look it up" somehow, just
as you would with a real dictionary. This means activating that particular word
unit beyond some sort of activation threshold, whilst at the same time ensuring
that no other word unit is allowed to approach its own threshold. Morton's 1964
model proposes a single mental dictionary, activated by one or other of two
main input routes, but also strongly influenced by context effects and
conscious selection. To see the full entry for Morton (1964), click here.
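By way of illustration only, here is a minimal sketch in Python of this "look-up by activation" idea: a single dictionary of word units, each of which becomes available once the evidence reaching it from the input routes and from context pushes its activation past threshold. The class names, the particular input routes, and the numbers are assumptions made for the example, not Morton's own formalism.

```python
# Illustrative sketch only - a toy "mental dictionary" in which each word unit
# accumulates activation from two input routes plus a context contribution,
# and becomes "available" once its threshold is crossed. All names and numbers
# are hypothetical.

from dataclasses import dataclass

@dataclass
class WordUnit:
    word: str
    threshold: float = 1.0
    activation: float = 0.0

    def receive(self, evidence: float) -> None:
        # Add evidence from any source (visual, auditory, or context).
        self.activation += evidence

    def available(self) -> bool:
        # The word becomes "available" once activation crosses its threshold.
        return self.activation >= self.threshold

class MentalDictionary:
    # A single store of word units, activated by more than one input route.
    def __init__(self, words):
        self.units = {w: WordUnit(w) for w in words}

    def stimulate(self, word, visual=0.0, auditory=0.0, context=0.0):
        if word in self.units:
            self.units[word].receive(visual + auditory + context)

    def lookup(self):
        # Return whichever units have crossed threshold (ideally just one).
        return [u.word for u in self.units.values() if u.available()]

# Example: context raises "insect" part of the way; visual input completes it.
d = MentalDictionary(["insect", "insist", "ill"])
d.stimulate("insect", context=0.4)
d.stimulate("insect", visual=0.7)
print(d.lookup())   # ['insect']
```

The sketch deliberately leaves out the second requirement mentioned above - keeping rival word units away from their own thresholds - which a fuller treatment would have to address.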
Two
other important early workers were Marshall and Newcombe
(1966, 1973), who had been studying the clinical phenomenon of acquired
dyslexia, that is to say, dyslexia arising from brain damage or disease in
previously non-dyslexic adults. By painstakingly assessing what acquired
dyslexics could and could not do, these authors concluded that they could be
divided into two main groups. In the first group - the "surface
dyslexics" - the errors were predominantly visually related to the
target word (thus "insect" might be misread as "insist").
However, in the second group - the "deep dyslexics" - such
errors were predominantly semantically related (thus "speak"
might be misread as "talk", or "sick" as "ill").
Marshall and Newcombe (1973) modelled these findings
using block diagram notation, and explained their observations by proposing
that multiple processing routes operated during reading, each responsible for a
different aspect of the process - form, sound, meaning, etc - and each
operating "in parallel" (that is to say, simultaneously). To see the
full entry for Marshall and Newcombe (1973), click here.
Morton
continued to develop this type of model, and in 1979 he gave the name logogen
(from the Greek words logos = "word" and genesis =
"birth") to the processes which called forth whole words in response
to stimuli. A logogen is not a word, note, but rather "the device which
makes a word available" (Morton, 1979, p112). In the jargon of the
dataflow diagram, it is part-process, part-datastore
(or, more accurately, it is a process which contains and manages a datastore). To see the full entry for Morton (1979), click here.
2 - Modern Psycholinguistic Modelling
The
Marshall-Newcombe-Morton diagrams were valuable in
their own right, but typically showed only the main processing modules (six for
Marshall and Newcombe, 1973; five for Morton, 1979).
In practice, however, they regularly proved too small to explain all the
available data, and by the early 1980s Morton and a neuropsychologist named
Andrew W. Ellis had each produced 21-box "supermodels"; it
is at about this level of complexity that things have since stabilised.
To see the full entry for
Morton (1981), click here.
To see the full entry for
Ellis (1982), click here.
These
large models soon became known as "transcoding models",
because they constantly forced theorists to consider (a) what sort of
information might be flowing along the individual flowlines, and (b) how it was
(or was not) being transformed by successive processing. One early use of this
term was by Weigl (1974), and another was by McCarthy and
Warrington (1984). For a full glossary definition, click here.
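To make the term more concrete, the sketch below expresses the transcoding idea in Python: each function stands for a processing module, and the kind of value passed between functions stands for the information carried by the flowline linking them. The module names and the toy letter-to-sound rules are invented purely for this illustration and are not taken from any of the models cited here.

```python
# Illustrative sketch only - "transcoding" as a chain of typed transformations.
# Each function is a stand-in for a processing module; each argument/return
# type is a stand-in for the information carried by the flowline between
# modules. Names and rules are hypothetical.

def visual_analysis(printed_word: str) -> list[str]:
    # Printed word -> abstract letter identities.
    return list(printed_word.upper())

def grapheme_phoneme_conversion(letters: list[str]) -> list[str]:
    # Letter identities -> phoneme string (grossly simplified toy rules).
    toy_rules = {"C": "k", "A": "a", "T": "t"}
    return [toy_rules.get(ch, ch.lower()) for ch in letters]

def phoneme_assembly(phonemes: list[str]) -> str:
    # Phoneme string -> articulable form for the speech output system.
    return "/" + "".join(phonemes) + "/"

# Tracing what flows along each "flowline" when reading a word aloud:
letters = visual_analysis("cat")                   # ['C', 'A', 'T']
phonemes = grapheme_phoneme_conversion(letters)    # ['k', 'a', 't']
print(phoneme_assembly(phonemes))                  # /kat/
```

The point of the exercise is simply that asking "what kind of information travels along this flowline, and what does the next module do to it?" is exactly the discipline which the transcoding models impose on the theorist.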
The
best known transcoding model within mainstream psychology is that by Ellis and
Young (1988), and the best known within speech and language therapy are those
by Kay, Lesser, and Coltheart (1992) and Stackhouse and Wells (1997). However,
we also strongly recommend the smaller Coltheart, Curtis, Atkins, and Haller
(1993) for the sophisticated consideration it gives to the flow processes
linking the processing modules, for this is an important aspect of cognitive
modelling which usually gets overlooked.
To see the full entry for
Ellis and Young (1988), click here.
To see the full entry for Kay,
Lesser, and Coltheart (1992), click here.
To see the full entry for
Coltheart, Curtis, Atkins, and Haller (1993), click here.
To see the full entry for
Stackhouse and Wells (1997), click here.
3 - Theoretical and Clinical Status of Transcoding
Models
The
problem with the larger psycholinguistic models, of course, is that at first
sight they can appear rather daunting. Nevertheless, they are all basically
dataflow diagrams (DFDs) [e-tutorial],
consisting of subsystems, processes, and information flows, and their basic
purpose is to display the totality of cognition in as few words as possible. They
are merely pictorial expressions of arguments which could just as well have
been expressed verbally, and in the final analysis all they can do is convey
their authors' current understanding of the phenomena being modelled.
Moreover, the DFD format itself is recognised as
having certain inherent limitations, including the fact that "we cannot
possibly represent everything we want to do in the same format" (Morton,
1981, p388). DFDs are also poor at presenting the order of events, that is to
say, at showing which processes are operating simultaneously and which
serially, and when and why. Indeed, Morton has always been suitably reserved about
the value of transcoding models, having described his 1979 model thus:
"This model makes it
easier to relate together a large number of experimental findings and so may be
regarded as a useful expository device. I do not believe it is 'true' in any
interesting sense of the word and it is certainly not unique." (Morton, 1979, p109; bold emphasis added.)
The
average transcoding model is "up for debate", in other words, and
students are encouraged to evaluate and criticise
such models as vigorously as they would any other form of argument.
Speech
and language pathologists are not wholly convinced of the utility of the
available psycholinguistic models, either. Most hold that cognitive models are
at best only partly useful in the clinic. Lesser (1987), for example, admits
that by analysing "the mental processes
underlying language into dissociable components or modules" (p189)
transcoding models force you to address the basic organisation
of neural processing, but she then argues that they have little to say on the
higher matters of communication, such as in the realms of discourse [glossary]
and pragmatics [glossary].
Similarly, Bryan (1995) describes the notion of modularity as "the
underlying supposition" of cognitive neuropsychology, but complains that
"clinicians are now asking not just what are the errors but why
did they occur and 'where' might the difficulties in processing be" (p14;
emphasis original), and Hillis (1993) puts it this way: "... there is
nothing within the models of normal cognitive processes that would alone
support the introduction of specific intervention strategies" (p6; italics
original). What clinicians really need, she writes, is a theory of intervention,
not of processing.
Cognitive
neuropsychology is still actively refining its models in the hope of countering
at least some of these criticisms, but is hampered in its efforts (a) by the
sheer impenetrability of the underlying principles and mechanisms of cognition,
and (b) by the fact that every clinical case is unique. Symptoms tend to be
highly individual to a particular patient, rarely cleanly localisable, and often
downright contradictory. Issues of current debate are whether the models should
allow for separate processes for name, face, and picture identification, and
even whether there is one semantic system or several. Only time will tell how
much further they can be developed, and how much closer they can get to
becoming Hillis's theory of intervention.
4 -
References
See the
Master References List