Ellis (1982)

Copyright Notice: This material was written and published in Wales by Derek J. Smith (Chartered Engineer). It forms part of a multifile e-learning resource, and, subject only to acknowledging Derek J. Smith's rights under international copyright law to be identified as author, may be freely downloaded and printed off in single complete copies solely for the purposes of private study and/or review. Commercial exploitation rights are reserved. The remote hyperlinks have been selected for the academic appropriacy of their contents; they were free of offensive and litigious content when selected, and will be periodically checked to have remained so. Copyright © 2002-2018, Derek J. Smith.

First published online 14:28 30th May 2002, Copyright Derek J. Smith (Chartered Engineer). This version [2.0 - copyright] 09:00 BST 3rd July 2018.

 

The Ellis (1982) 21-Box Transcoding Model

See firstly the supporting commentary for the transcoding series of psycholinguistic models.

This is an early attempt at a large-scale psycholinguistic model by one of the authors subsequently responsible for the Ellis and Young (1988) model.

 

The Ellis (1982) Model: This model is historically important, because it has now clearly adopted the very particular "X-shape" used by the later Ellis and Young (1988) and Kay, Lesser, and Coltheart (1992) models. Here are the key points to look for:

·         There is a central module - termed the "cognitive system" and highlighted here in yellow - responsible for higher cognitive processes such as thinking and problem solving, conscious awareness, and volition. This module can be, but by convention is not, further expanded. (In fact, this is a wise restriction, because as soon as you open this particular black box you encounter areas of psychology - not least consciousness studies - where there is major philosophical disagreement and little unequivocal data.)

·         There are four word storage modules - termed the "logogen systems" and highlighted here in pink - arranged diagonally around the central module. [The term "logogen" derives from Morton (1979).]

·         The model is top-to-bottom linearly aligned, with sensory inputs descending from the apex and motor outputs emerging from the base. This means that the various processes of perception end halfway down the model, whilst the various motor hierarchies begin halfway up it. This contrasts with the inverted-U control hierarchy format used in, say, Craik (1945), where the higher cognitive processes are always shown at the apex.

·         The model presents the hearing-speech communication channel to the left, and the reading-writing channel to the right. The model is thus good at accounting for "cross-over" (hence the epithet "transcoding") between the channels, such as occurs when writing down dictated speech (input at top left, but output at bottom right) or reading out loud (input at top right, but output at bottom left). [Trace these routes across the diagram with your fingertip.]

·         There are then some important specialised bypass routes - highlighted here in red - which allow a degree of flexibility of processing. The visual input logogen system, for example, can if necessary bypass the cognitive system and communicate directly with the speech output logogen system [find these processes on the diagram and trace out the optional routes with your fingertip]. Given that the cognitive system is (by definition) the only place where things get understood, this particular bypass route allows reading out loud to proceed without understanding, such as occurs when the material itself makes no sense, or when the reader is tired or mentally overloaded.

·         Some flowlines - highlighted here in blue - are directed UP the page. These may generically be referred to as "feedback" routes, and feedback is vitally important within cognition - click here for additional detail.

·         The model incorporates four "buffers". In this context, a buffer is a memory management process containing a special-purpose short-term store into which a string of instructions produced by an antecedent process can be placed, and from which those instructions may be accessed by a subsequent process. The terminology was borrowed from the computing industry, which routinely uses such technology to speed up processing, the key point being that the first process is freed up to get on with whatever needs doing next as soon as it has transferred its last output into the buffer, rather than having to wait until that output is entirely clear of the system. [For more on the technicalities of buffers and buffering, if interested, click here or here; a minimal sketch of the idea is also given immediately below.]
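By way of illustration only - this is not part of Ellis's model - here is a minimal Python sketch of the buffering idea, using a small bounded queue so that the producing process is free to move on to its next item as soon as the previous one has been handed over:

    import queue
    import threading
    import time

    # A small bounded buffer: the producing process is free to carry on with its
    # next item as soon as the previous one has been handed to the buffer, even
    # if the consuming process has not yet dealt with it.
    buffer = queue.Queue(maxsize=4)

    def producer():
        for item in ["instruction-1", "instruction-2", "instruction-3"]:
            buffer.put(item)            # hand the item over and move straight on
            print("produced", item)

    def consumer():
        for _ in range(3):
            item = buffer.get()         # picked up later, at the consumer's own pace
            time.sleep(0.1)             # simulate slower downstream processing
            print("consumed", item)
            buffer.task_done()

    producing = threading.Thread(target=producer)
    consuming = threading.Thread(target=consumer)
    producing.start(); consuming.start()
    producing.join(); consuming.join()

The point to note is that producer() does not wait for consumer() to finish with an item before producing the next one; it only waits if the buffer itself is full.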

The model is also useful because it shows how a different mental code is used at different stages in the overall process. Most information flowlines are marked with one of the following codes .....

·         Acoustic Code (ac): This is the code used in the outer reaches of the auditory system, before discrete phonemes have been recognised. Eg. sound frequencies and intensities.

·         Visual Code (vis): This is the code used in the outer reaches of the visual system, before discrete visual forms have been recognised. Eg. light frequencies and intensities.

·         Phonemic Code (ph): This is the code used if and when the process of auditory perception succeeds in detecting known phonemes, that is to say, items from the repertoire of stable speech sounds used within the language concerned. Eg. the sounds |puh| for the letter "p", or |kuh| for the letter "c", etc. [For further definitions of this much-debated term, click here, and to see the full list of phonemes (the sounds and their conventionally recognised written symbols) authorised by the International Phonetic Association - the IPA - click here.]

·         Lexical Code (l): This is the code used if and when the processes of auditory or visual perception succeed in detecting a whole known word. [Thus it was the lexical units for the words "a", "whole", "known", and "word" which were activated when reading the tail end of the preceding sentence.] For auditory perception this recoding can be achieved in one of two ways: firstly, by matching the entire, or "unsegmented", input sequence (in terms of the diagram, the encoding sequence here is ac-l); or secondly, by converting successive segments of the input sequence into their individual phonemic codes (see preceding entry), giving the encoding sequence ac-ph-l. The ac-l route is quicker, but will only work for relatively short inputs. [A toy sketch of these two routes is given after this list.]

NB: Activation of the lexical code is the first major stage in input-side language processing, and will normally be followed almost immediately by activation of a corresponding semantic code .....

·         Semantic Code (s): This is the code used if and when lexical items correspond with, and can be linked to, previously established individual mental concepts (ie. units of understanding).

NB: Activation of the semantic code is the second major stage in input-side language processing. The distinction between the lexical and the semantic aspects of a word (that is to say, between the word and its "referent") has been fundamental to psycholinguistics since the days of Broca and Wernicke, and is seen very clearly in the explanatory models provided by Lichtheim (1885) and Freud (1891) (both of which are fully incorporated into the model presently under discussion).

·         Graphemic (gr) and Allographic (all) Codes: The graphemic code is the code used to identify the totally abstract concept of a written letter, such as |ay - the first letter in the alphabet|. Subvariant forms of each grapheme - such as its upper and lower case forms - are not yet differentiated. The allographic code is the code used when the appropriate form of a written letter has been chosen by the Allographic Long Term Store (lower right). Thus for the (abstract) graphemic code |ay - the first letter in the alphabet| we have the (not quite so abstract) allographic alternatives upper case "A" or lower case "a".

·         Graphic Motor Pattern Code (gmp): This is the code issued by the Graphic Motor Pattern Store and used to initiate the motor production of specific allographs.

·         Kinaesthetic Code (kin): This is the code used for the sensory feedback resulting from the muscle activity of writing (shown in the diagram, lower right side) or speaking (not shown in the diagram).
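To illustrate these code sequences, here is a toy Python sketch - again, not part of Ellis's model: the mini-lexicon and the acoustic segmenting scheme are invented for the purpose, and the cognitive-system/semantic stage is omitted for brevity - tracing writing-to-dictation from acoustic input through the two auditory recoding routes described under the Lexical Code entry above, and on into the graphemic and allographic stages:

    # Toy illustration only: writing a dictated word down, traced as a sequence of
    # code conversions (ac -> l, or ac -> ph -> l, then l -> gr -> all). All the
    # dictionaries below are invented for the example.

    # Acoustic code (ac): invented "segments" standing in for raw sound input.
    ACOUSTIC_INPUT = {
        "cat": ("k-burst", "a-vowel", "t-burst"),
        "dog": ("d-burst", "o-vowel", "g-burst"),
    }

    # Acoustic-to-phonemic conversion (ac -> ph).
    AC_TO_PH = {"k-burst": "k", "a-vowel": "a", "t-burst": "t",
                "d-burst": "d", "o-vowel": "o", "g-burst": "g"}

    # Auditory input logogens, segmented route: phoneme string to word (ph -> l).
    PH_TO_LEX = {"kat": "cat", "dog": "dog"}

    # Auditory input logogens, whole-pattern route: acoustic pattern to word (ac -> l).
    AC_TO_LEX = {("k-burst", "a-vowel", "t-burst"): "cat"}

    # Graphemic output logogens: word to abstract graphemes (l -> gr).
    LEX_TO_GR = {"cat": ["C", "A", "T"], "dog": ["D", "O", "G"]}

    # Allographic long-term store: choose a concrete letter form (gr -> all).
    def allograph(grapheme, case="lower"):
        return grapheme.upper() if case == "upper" else grapheme.lower()

    def write_to_dictation(word):
        segments = ACOUSTIC_INPUT[word]                        # acoustic code (ac)
        lexical = AC_TO_LEX.get(segments)                      # fast ac-l route
        if lexical is None:                                    # fall back on ac-ph-l route
            phonemes = "".join(AC_TO_PH[s] for s in segments)  # phonemic code (ph)
            lexical = PH_TO_LEX[phonemes]                      # lexical code (l)
        graphemes = LEX_TO_GR[lexical]                         # graphemic code (gr)
        return "".join(allograph(g) for g in graphemes)        # allographic code (all)

    print(write_to_dictation("cat"))   # resolved by the ac-l route
    print(write_to_dictation("dog"))   # resolved by the ac-ph-l route

In the full model, of course, the lexical item would normally also activate a corresponding semantic code in the cognitive system before the output side was engaged.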

For additional commentary, see the caption to Ellis and Young (1988). Note also that the speech output "leg" of the diagram (lower left quadrant) has deliberately been left incomplete. The same approach was taken in Ellis and Young (1988), and reflects the fact that Ellis was concentrating primarily on the reading-writing system. Other authors have specialised in the hearing-speech system, and for an introduction to theories and models of speech production, see Smith (1997; Chapters 5 - 7), or click here.

If this diagram fails to load automatically, it may be accessed separately at

http://www.smithsrisca.co.uk/PICellis1982.gif


Redrawn from a black-and-white original in Ellis (1982:140). This version Copyright © 2010, High Tower Consultants Limited.

 

References

REFERENCES FOR THE HYPERLINKED SOURCES ARE GIVEN IN THE INDIVIDUAL SUBFILES, QV.

Ellis, A.W. (1982). Spelling and Writing (and Reading and Speaking). In Ellis, A.W. (Ed.), Normality and Pathology in Cognitive Functions. London: Academic Press.

Smith, D.J. (1997). Human Information Processing. Cardiff: UWIC. [ISBN: 1900666081]