Course Handout - How to Draw Cognitive Diagrams

Copyright Notice: This material was written and published in Wales by Derek J. Smith (Chartered Engineer). It forms part of a multifile e-learning resource, and subject only to acknowledging Derek J. Smith's rights under international copyright law to be identified as author may be freely downloaded and printed off in single complete copies solely for the purposes of private study and/or review. Commercial exploitation rights are reserved. The remote hyperlinks have been selected for the academic appropriacy of their contents; they were free of offensive and litigious content when selected, and will be periodically checked to have remained so. Copyright © 2003-2018, Derek J. Smith.

 

First published online 16:28 BST 29th April 2003, Copyright Derek J. Smith (Chartered Engineer). This version [2.0 - copyright] 09:00 BST 3rd July 2018.

 

Earlier versions of this material appeared in Smith (1997; Chapters 3 and 4). It is presented here in slightly expanded form, and supported with hyperlinks.

Although this paper is reasonably self-contained, it is best read as extending and supporting our papers on Transcoding Models and Control Hierarchies.

1 - The Philosophy of Models in Science

Science is basically about being able to demonstrate a phenomenon at will (Radford and Burton, 1974). It is about prediction, about understanding, and (above all) about being able to explain that understanding to others. Moreover, you do not really understand something until you can take it to pieces, and its pieces to pieces, and then state the contribution of every single dismembered component. Which is all well and good if the something in question is simple, safe, and tangible, because you can experiment directly with it (and its pieces) to your heart's content. But if it is complex, and/or conceptual, and/or priceless, and/or dangerous to go near, and/or too small/large/far away, then you need to develop your understanding using models of that something, rather than the thing itself. In this paper, we look at the types of model available to help.

The simplest form of model is a structural model (to scale or otherwise). Thus a toy train is a convenient scaling-down of a real train, a billiard ball is a convenient scaling-up of an atom, and a plastic brain is a convenient same-size scaling of the real thing. So .....

"A model is an alternate, and usually simplified, representation of something. [Modelling] is the means we use to ignore what we cannot understand and to consider what we do understand. The use of models allows us to simulate unfamiliar problems by replacing the unfamiliar with the familiar." (Steidel and Henderson, 1983:345.)

But Radford and Burton (1974) point out, for example, that you can model the brain with a warmish jelly or an electrical circuit, depending upon which attributes of the real thing you are concerned with at the time. To cope with this complication, we need to add:

"The model need not resemble the real object pictorially [.....], but it works in the same way in certain essential respects" (Craik, 1943; bold emphasis added).

Generally speaking, psychological models help us think about things we cannot directly perceive. They are conceptual models rather than physical ones, and are built with hypothetical constructs rather than actual ones. Hypothetical constructs are thus inferred entities, that is to say, they are things whose existence might one day be proven, but whose presumed existence will aid theorising in the meantime. Memory is a good example: you cannot see it, but you need to argue its existence and ponder its nature in order to explain certain overt behavioural phenomena. Similarly with such concepts as perceptual analysers, mental images, word stores, motor programs, etc.

One particularly common form of psychological model is the "black box" model. These are models where it has been decided in advance to ignore as many of the internal complexities as possible. The complexity is consigned to a black box, so to speak, which by common agreement is not going to be opened. Thus if all you want to do is watch your television (rather than take it apart), you do not need to know - and do not care - what goes on inside it: you plug it in, switch it on, and that is that. You observe merely how the mechanism responds to the stimuli you give it. Here is a formal definition .....

"A black box is a system whose contents are unknown to us or do not interest us, and whose relation with the environment is predetermined. By viewing [systems] as black boxes, we can describe them functionally and clearly and study them experimentally, without the risk of damaging the system by opening it." (Kramer and de Smit, 1977:85; italics added.)

In practice, however, you always want to know more than a black box model can readily tell you. Consider:

"The task of explanation lies in deciding what sort of machinery inside the black box could produce the responses in question, given the inputs. Ideally, given sufficient knowledge of that machinery, behaviour could be predicted as a function of inputs and internal states of the system. [] One proposes hypothetical states inside the black box - internal variables, whose variation accounts for the observed regularities." (Clark, 1980:44; italics added.)

Exercise 1 - Devising Models

1 Devise appropriate physical and metaphorical models for the following .....

An Atom; A Skill; The Moon; The Sun; Thinking; A Viral Infection

2 - The Program Flowchart

Modern psychology makes extensive use of information processing models derived from the world of computing, and two diagram types have been particularly popular. The first of these is the "program flowchart" (variously known as the "logic flowchart", the "procedure flowchart", or simply the "flowchart"). The program flowchart is an attempt to display pictorially the "flow of control" within the pre-specified sequence of operations by which a given problem can be solved. Such diagrams are widely used by computer programmers to familiarise themselves with problems prior to trying to code them, and the symbols by which this is all achieved are shown in Figures 1 and 2.

Figure 1 - The Simple Program Flowchart: Here we see the four basic elements of a program flowchart. Note the standard notation used.

(a) The Terminal: [Symbol A = round-ended rectangle] There are two of these, a START and a FINISH, and they state where the sequence of control begins and ends.

(b) The Command Flow: [Symbol B = arrow-headed line] This is how you find your way from the START to the FINISH. The rules are simple. Marking your current position with your finger, you start at START, and you follow the direction of the arrows doing whatever the flowchart instructs you to do until you get to FINISH. This dictates the sequence of actions which will solve your problem. Do not cheat. Do not give up.

(c) The Operation: [Symbol C = square-ended rectangle] This is an action of some sort, to be performed when requested by the command flow. You have to do as instructed before you can move on through the program. Operations which are performed on one processing branch (see next), but not another, are known as "conditionals".

(d) The Decision: [Symbol D = diamond] This is a point in the command flow where processing optionality is offered, and where a decision is accordingly called for. Subsequent action depends upon the answer given. By convention, the decision in question is abbreviated inside the diamond (as shown) or else placed conveniently just outside it, and only two output routes are allowed. The optional output routes are known as "branches", and must be appropriately labelled (in this case, as "Y" and "N").

If this diagram fails to load automatically, it may be accessed separately at

http://www.smithsrisca.co.uk/diagrams-fig1.gif


Redrawn from a black-and-white original in Smith (1997; Figure 3.2(upper elements)). This version Copyright © 2003, Derek J. Smith.
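For readers who prefer code to pictures, Figure 1's four elements map directly onto familiar programming constructs. The short Python sketch below is our own illustration (the test and the operations are invented for the purpose, and do not appear in the original figure) .....

# A minimal rendering of Figure 1 in code. The function's entry and exit
# are the two terminals, each statement is an operation, the if-else is
# the decision diamond (with its Y and N branches), and the order of
# execution is the command flow.
def simple_flowchart(reading):      # START (terminal)
    prepared = reading * 2          # an operation
    if prepared > 10:               # the decision: "prepared > 10?"
        result = "high"             # operation on the Y branch (a "conditional")
    else:
        result = "low"              # operation on the N branch
    return result                   # FINISH (terminal)

print(simple_flowchart(7))          # prints "high", because 14 > 10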

 

Figure 2 - The Looping Program Flowchart: Here we see Figure 1's four basic elements re-arranged so as to "loop" back upon themselves. Loops are enormously important tricks of the programmer's trade, because they enable programs to cope concisely with processes which involve repetitions (or "iterations"). They allow a flowline to return to the same point time after time until a desired condition is met. In the example shown, operations x, y, and z are performed repeatedly via the NO pathway (top red loop), until eventually the YES answer is given. Loops are also commonly "nested", so that several end conditions need to be met before the process is totally discontinued. Thus, in the example shown, operations x, y, and z are performed repeatedly until an initial exit condition is met. Operations p, q, and r are then performed repeatedly until a second exit condition is met (lower red loop). Then, unless a third exit condition is also satisfied, the whole process begins again at the top (blue loop). Massively complicated programs can be written by accumulating building blocks of logic in this way.

If this diagram fails to load automatically, it may be accessed separately at

http://www.smithsrisca.co.uk/diagrams-fig2.gif


Redrawn from a black-and-white original in Smith (1997; Figure 3.2 (lower elements)). This version Copyright © 2003, Derek J. Smith.
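The looping logic of Figure 2 can likewise be written out in code. In the hypothetical Python sketch below, do_xyz() and do_pqr() stand in for the operation sequences x-y-z and p-q-r, and simple counters stand in for the three exit conditions; only the loop structure itself is taken from the figure .....

# Figure 2 as nested loops. The two inner while-loops are the top and
# lower red loops, and the outer while-loop is the blue loop which
# restarts the whole process until the third exit condition is met.
def do_xyz(): print("x y z")
def do_pqr(): print("p q r")

passes = 0
while passes < 2:          # blue loop: third exit condition
    n = 0
    while n < 3:           # top red loop: first exit condition
        do_xyz()
        n += 1
    m = 0
    while m < 3:           # lower red loop: second exit condition
        do_pqr()
        m += 1
    passes += 1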

Looping of the sort shown in Figure 2 can rapidly get extremely difficult to follow (and extremely easy to damage). Exercise 2 provides some further examples.

Exercise 2 - The Program Flowchart

1              Draw the flowchart to control the digit display sequence for the hours, minutes, and seconds of a digital watch, starting at 00:00:00 and ending at 23:59:59. Add another 1 second, and then check that your display reverts to 00:00:00.

2              Draw the flowchart to get an imaginary blind robot out of a rectangular room with a door halfway along one of its walls. Your program must cope with the robot beginning its escape from any start position and pointing in any direction. The robot is capable of turning fully or fractionally left or right, moving forwards and backwards, feeling when it is hitting a wall, and knowing when it has escaped. Nothing else.

3              Draw the flowchart to lift a weight off a waist-high table and to bring it safely up to shoulder height, using the muscles of the upper arm only. [Students wishing to look in greater detail at the servomechanisms, control loops, and safety systems involved in the vertebrate spinal reflexes should divert to our paper on Biological Cybernetics.]

4              Using a highlighting pen, mark all the nested processing loops you have used in your answers.

3 - The Dataflow Diagram

The second important type of diagram is the dataflow diagram (or DFD). This is a powerful tool for describing the internal organisation of complex systems, and it complements the program flowchart by tracking the flow of information rather than the precise instruction execution sequence. It is reasonably non-technical, has high graphical impact, and - compared to conveying the equivalent message in text - is compact and unambiguous. It is also flexible and easily upgraded should your understanding of a system alter or develop over time.

Sadly, even though the basic elements of DFDs are always the same, there are competing sets of graphical standards. Two of the most popular are shown in Figure 3.

Figure 3 - The Simple Block and Bubble Diagrams: Here we see the simplest DFDs of all, based upon one or more circles or rectangles (each representing a processing stage), and using arrows to represent the flow of information. Such "box-and-arrow diagrams" or "bubble charts" have been commonplace in psychology since the second half of the nineteenth century [see Kussmaul (1878) for a good early one and Sperling (1963) for a more recent one], although demand for them died away for a while during the Behaviourist Period. 

If this diagram fails to load automatically, it may be accessed separately at

http://www.smithsrisca.co.uk/diagrams-fig3.gif


Enhanced from a black-and-white original in Smith (1997; Figure 3.3(a)). This version Copyright © 2003, Derek J. Smith.

With the growth of the computer software industry in the 1950s, competing consultancy houses tried to stamp their proprietary image on the basic diagramming conventions. More powerful diagrams and different visual layouts resulted, and two of these are worth looking at in detail. The first is a development of the bubble chart, and is known as the "Yourdon-Constantine notation", or often just "Yourdon" (after, for example, Yourdon and Constantine, 1979). This is set out in Figure 4(a). The other is the "Gane and Sarson notation" (after Gane and Sarson, 1977), and is shown in Figure 4(b).

Figure 4 - More Powerful Diagrams: Simple block diagrams like those shown in Figure 3 do not readily show the distribution of memory resources around the available cognitive modules. Both the Yourdon-Constantine and Gane-Sarson notations (diagrams (a) and (b), respectively) allow and encourage this, although at the cost of additional symbols in the diagramming repertoire. Here are the entry-level symbol sets .....

Externals: These are where the flow of information begins and ends. There are two types of external, namely information sources (where the information comes from), and information destinations (where it goes to once it has been processed). The output information will usually be different in some key respect to the input information, otherwise there would be no value in carrying out the process. There is no theoretical limit to how many externals are allowed, but in practice there will usually be one or two sources, and one or two destinations.

Processes: These are where things happen to the information in transit through a system. This might involve storing it as it stands or amending it in some way. It might also involve making decisions on the basis of that information. One of the most useful "rules" of DFDs is that processes must always be nameable, because the act of being able to name them gives you a valuable clue as to whether you really understand what they are doing.

Stores: These are where information is stored until needed. They are the filing cabinets in the system's internal filing system, so to speak. As with biological memory, they can be for either short or long term storage of information. They can also be for the exclusive use of a single process, or else the common use of a number of processes. Under this latter heading, it is common to find one process writing to a store, and then a subsequent process reading from it.

Information Flows: These are the routes taken by the information en route from its source to its destination. Specifically, there will be flowlines between information sources and the initial processes, between process and process, and between process and store. Should information need to flow both ways in a particular instance, then it may be shown either as a double-headed single flow or (preferably) as two opposed single-headed flows. Some practitioners also scale the width of the flow lines to indicate the relative intensity of traffic along them. 

Ed Yourdon provides detailed online instruction on his method [click here].

If this diagram fails to load automatically, it may be accessed separately at

http://www.smithsrisca.co.uk/diagrams-fig4.gif


Redrawn from a black-and-white original in Smith (1997; Figures 3.3(b and c)). This version Copyright © 2003, Derek J. Smith.
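Although DFDs are drawn rather than programmed, it can help to see the entry-level symbol set as a small data structure. The Python sketch below is our own illustration - the element names are invented, and only the four symbol types come from Figure 4 .....

# A DFD reduced to data: externals, processes, stores, and flows.
dfd = {
    "externals": ["Ear", "Mouth"],                  # one source, one destination
    "processes": ["Understand", "Decide", "Reply"], # each must be nameable
    "stores":    ["Word Store", "Working Memory"],
    "flows": [                                      # (from, to) flowlines
        ("Ear", "Understand"),
        ("Understand", "Word Store"),    # one process writes to a store .....
        ("Word Store", "Decide"),        # ..... and a later process reads from it
        ("Decide", "Working Memory"),
        ("Working Memory", "Reply"),
        ("Reply", "Mouth"),
    ],
}

# A simple consistency check: every flowline must begin and end at a
# declared external, process, or store.
declared = set(dfd["externals"]) | set(dfd["processes"]) | set(dfd["stores"])
assert all(a in declared and b in declared for a, b in dfd["flows"])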

Note that there is never any attempt to draw decisions or branches or loops in DFDs. This sort of detail is deemed inappropriate at this level of description, and is best left to supporting program flowcharts as necessary. Examples of modern DFDs (drawn to a variety of standards) abound in cognitive science generally, but have proven especially popular within psycholinguistics .....

[For a general history of psycholinguistic models, click here.]

4 - Processing Hierarchies

Even though DFDs are compact and unambiguous, they have two inherent weaknesses. Firstly, they take such a broad view of the processes they are describing that they have no space to spare for the decisions, branches, and loops shown in the more detailed program flowcharts. Secondly, the majority of processes turn out to be built up from lesser processes. The first of these weaknesses can be overcome by preparing program flowcharts and DFDs in parallel, so that each complements the other. The second (provided only that the author can make the necessary time available) can be turned into a strength by explicitly recognising the "hierarchical" nature of processing, and by drawing up a "pyramid" of DFDs, each at a different "level of analysis". Fortunately, there are only three rules to follow when drawing up this hierarchy of DFDs:

Rule 1 - Begin at the Top: Begin with "top level" DFDs (sometimes referred to as "context diagrams"), so as to provide an initially superficial view of the process in question. Then progressively add detail at each lower tier.

Rule 2 - Know When to Stop: The further down the pyramid you go, the more technically detailed the material becomes. However, as with a microscope, the more you see of something the less you can see of its context, so go only as far as you need to, given the needs of the investigation at hand, the time available, and the need not to introduce unnecessary complexity. [Figures 7 and 8 show an analysis in which three main processes are identified, but only one of them is analysed in depth.]

Rule 3 - Keep Count: Each DFD layer must be clearly labelled, and the convention is to start from zero at the top. Your top level process is thus a "Level-0 DFD", the next one down is a "Level-1 DFD", the next a "Level-2 DFD", and so on. It is often helpful to show boxes within boxes (within boxes, etc.), like Russian dolls, another practice known as "nesting".
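Rule 3's numbering convention is easy to see in code. Here is a hypothetical Python sketch of a three-level decomposition, using nested dictionaries as the Russian dolls (the shape of the tree anticipates Figures 5 to 8; everything else is invented) .....

# Rule 3's numbering as nested dictionaries. The outermost dictionary is
# the Level-0 context diagram; its children are the Level-1 processes;
# theirs are the Level-2 subprocesses; and so on.
cognition = {
    "1": {                   # Level-0: the single top-level process
        "1.1": {},           # Level-1 processes .....
        "1.2": {             # ..... one of which is expanded further
            "1.2.1": {},     # Level-2 subprocesses
            "1.2.2": {},
            "1.2.3": {},
        },
        "1.3": {},
    }
}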

The top three layers of analysis for the psychological process known as <Cognition> are shown in Figures 5, 6, 7, and 8. Figures 7 and 8 are explicitly "nested".

Figure 5 -  Cognition as a Simple Context Diagram: Here are three visually different but essentially identical one-box renderings of cognition as an unanalysed (that is to say, Level-0) psychological phenomenon. There are differences between the three competing graphical standards, but if you look closely you will see that these are only cosmetic - the underlying message is the same in all cases. Note how formats (b) and (c) use the notation systems set out in Figure 4, and show the location of the memory resource more formally than does format (a). 

If this diagram fails to load automatically, it may be accessed separately at

http://www.smithsrisca.co.uk/diagrams-fig5.gif


Redrawn from a black-and-white original in Smith (1997; Figure 3.4(a)). This version Copyright © 2003, Derek J. Smith.

 

Figure 6 - Cognition as a Level-1 DFD in Gane-Sarson Notation: Here is a Level-1 Gane and Sarson three-box rendering of the Level-0 diagram shown in Figure 5(c). Reading downwards through the embedded captions (red text), we see that this gives three sequential component processes, numbered 1.1 to 1.3. At the same time, what was previously shown as a single memory store can now be subdivided into five more precisely defined memory stores. This allows the logic of memory storage and retrieval to be more accurately conceptualised. Remember that the single Level-0 process and the three Level-1 processes are totally interchangeable, the choice of detail being determined solely by the respective knowledge of author and reader, given the message the author wishes to convey at the time. [Funnell (1983) adopts this format.]

If this diagram fails to load automatically, it may be accessed separately at

http://www.smithsrisca.co.uk/diagrams-fig6.gif


Redrawn from a black-and-white original in Smith (1997; Figure 3.4(b)). This version Copyright © 2003, Derek J. Smith.

 

Figure 7 - Cognition as a Level-2 DFD in Nested Yourdon-Constantine Notation: Here is a Level-2 Yourdon-Constantine rendering of Process 1.2 from above (the equivalent expansions of Processes 1.1 and 1.3 are not shown). Because we are now concentrating on Process 1.2, there is room to show the subprocesses it is hypothesised as containing. These have been numbered 1.2.1 to 1.2.3 [note how a third digit has now been added to the process identification number]. Similarly, it is also possible to show the memory stores now being selectively accessed by different subprocesses.  Note the underlying Level-0 diagram (the backing green bubble). [Smith (1993) adopts this format.]

If this diagram fails to load automatically, it may be accessed separately at

http://www.smithsrisca.co.uk/PICsmith1993.gif


Redrawn from a black-and-white original in Smith (1997; Figure 3.4(c)). This version Copyright © 2003, Derek J. Smith.

 

Figure 8 - Cognition as a Level-2 DFD in Nested Block Notation: Here is Figure 7 redrawn using blocks instead of bubbles, but retaining the memory stores. Note the underlying Level-0 diagram (the backing green rectangle), and the voluntarily unexpanded Processes 1.1 and 1.3.

If this diagram fails to load automatically, it may be accessed separately at

http://www.smithsrisca.co.uk/diagrams-fig8.gif


Published here for the first time. Copyright © 2003, Derek J. Smith.

Analyses of the sort shown in Figures 5, 6, 7, and 8 are called "functional decompositions" of a system. The decomposition begins with the context diagram and continues down the hierarchy of processes until one of two things happens - either (a) you reach the level at which you have seen enough, or (b) you reach a level beyond which no further decomposition is possible (the processes at this level being known as "functional primitives"). The beauty of this approach lies in the fact that it works for all systems, functions, and processes, including biological and psychological ones [for more on the pivotally important role of functional decomposition in the design of successful commercial computer systems, see Yourdon and Constantine (1979), De Marco (1979), Martin and McClure (1985), or Longworth (1989)]. The only drawback in practice - especially if the phenomenon under investigation is poorly understood - is that authors typically start arguing amongst themselves after only one or two diagrams. This is because there are invariably many ways to interpret a given body of evidence. Indeed, most boxes in most information processing models are not only hypothetical constructs in their own right, but, remembering what we said in Section 1, are also black boxes capable of further analysis, and it is totally at the discretion of the DFD author which black box s/he opens up next, and how s/he chooses to explain what s/he sees within.
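The stopping rules for a functional decomposition can also be stated in code. Here is a minimal Python sketch, assuming a process tree of the kind shown in Section 4 (an empty sub-dictionary stands for a functional primitive, and the depth limit stands for having "seen enough") .....

# Walk down the process hierarchy, stopping at a functional primitive
# (an empty subtree) or when the chosen level of analysis is reached.
def decompose(tree, depth=0, max_depth=2):
    for name, subtree in tree.items():
        print("    " * depth + name)
        if subtree and depth < max_depth:
            decompose(subtree, depth + 1, max_depth)

tree = {"1": {"1.1": {}, "1.2": {"1.2.1": {}, "1.2.2": {}, "1.2.3": {}}, "1.3": {}}}
decompose(tree)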

ASIDE: At the risk of overstatement, this probably means that modelling skills are the thing philosophers of mind have most lacked during their history-long (and largely unsuccessful) attempts to decipher the mysteries of the mind.

Exercise 3 provides some further examples.

Exercise 3 - Functional Decomposition

1              Study Ellis and Young's (1988) model, and redraw it (a) as a one-box context diagram, and (b) as a three-box Level-1 DFD.

2              Redraw Ellis and Young's (1988) model so that the semantic system is five times its original size, all the input legs are at the bottom left, and all the output legs are at bottom right. Compare and contrast the resulting layout with Allport's (1985) attribute domain diagram.

3              Redraw Ellis and Young's (1988) model so that the semantic system is five times its original size and situated at top right, and all the input and output legs are at the bottom left. Compare and contrast the resulting layout with Freud's (1891) word-referent diagram.

4              Replace the speech output leg of Ellis and Young's (1988) model with Garrett's (1990) speech production model (in its entirety). Add feedback loops, and incorporate these into the remainder of the Ellis and Young information flows. 

5 - Motor Control Hierarchies

Further examples of cognitive models have been presented from time to time in our papers on the control of motor behaviour .....

[Separate paper on Motor Control Hierarchies]

[Separate paper on Motor Programming]

[Separate paper on Cybernetics]

[Separate paper on Biological Cybernetics]

6 - Processing Networks

In processing hierarchies of any size, there comes a point where there is so much processing to be done that it becomes advisable to share it out amongst more than one processor. The cure is to "go modular", that is to say, to have several relatively simple but specialised processes, rather than a single, more complex, general purpose one. However, the resulting biological modularity is not always easy to model, so here are some guidelines .....

As we have argued elsewhere [divert], the fact that biological cognition invariably involves "networked" (or "distributed", or "modular") processing architectures opens up its own set of problems. Put simply, communication in networks is expensive - it is an overhead, and the more complicated the network, the greater that overhead becomes. What you have to do, therefore, is to find the point of maximum return. You have to trade off the ability to concentrate your resources against the need to seek and convey information from one module to another. This may well put a bit more pressure on your design skills, for it is far from easy, but - properly handled - the benefits can be made to outweigh the costs significantly. Indeed, it is no exaggeration to state (a) that the secret of a successful processing network is the effective management of the modularity problem, and (b) that the secret of understanding the network lies in understanding the communication overheads. Cognitive science is still in its infancy in this latter respect, for even its flagship models fail more or less totally to address the problem of those network overheads.
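To make the trade-off concrete, here is a deliberately toy Python calculation of our own (the figures and the linear overhead model are illustrative assumptions, not the author's) .....

# Dividing a fixed workload among n modules cuts the work per module,
# but each extra module adds a communication overhead. The "point of
# maximum return" is the n which minimises the total cost.
WORK = 100.0        # total processing to be done
OVERHEAD = 4.0      # communication cost added per module

def total_cost(n):
    return WORK / n + OVERHEAD * n

best = min(range(1, 21), key=total_cost)
print(best, total_cost(best))    # prints 5 40.0 - five modules is optimal here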

Key Concept - Modularity of Processing: Modular processing is a system design philosophy which insists that like should go with like; that processing should be separated into functionally dedicated clusters, or "modules", each capable of operating more or less in isolation. Jerry Fodor - one of the main theorists on this issue - defines a module as an "'informationally encapsulated' cognitive facility" (Fodor, 1987:25). As demonstrated in any of the large psycholinguistic models [see the examples listed at the end of Section 3 above], there is a significant amount of modularity in the human communication system, and it is the vulnerability of these modules to partial damage which causes clinical communication syndromes to occur in such amazing variety.

Burns and Wellings (1990) then address the question of questions - how should a large system be decomposed into modules, that is to say, designed in a positive sense rather than allowed to evolve as such by the natural selection of the marketplace. They introduce two related test concepts, namely "cohesion" and "coupling" as now described .....

"Cohesion is concerned with how well a module holds together - its internal strength. [.....] Coupling, by comparison, is a measure of the interdependence of program modules. If two modules pass control information between them they are said to possess high (or tight) coupling." Burns and Wellings, 1990, p20)

7 - Good Modelling Practice

It is a widely held belief that modelling helps psychologists predict, and so helps psychology to be a science. Models do this by helping us to define a "problem space" wherein we can organise our ideas. This helps us (a) to think about our problem, and (b) to communicate our findings to others. In this latter respect, a good model is worth (at least) a thousand words. To paraphrase Ernst Mach (1960, cited in Harré, 1972), models convey the greatest number of facts with the least amount of thought. However, Kelvin (1980) makes a sobering counterclaim. Models, in his view, tend to constrain scientific creativity. All too quickly, they and the constructs on which they rely become conventional wisdoms, and what were originally really only suggestions become accepted instead as truths, resulting in a sense of false security. The original problem seems to have been solved, and so no more research gets done. Consequently, models have "stunted the development of psychology, not promoted it" (Kelvin, 1980:345). With the exception of models' undoubted value as tools for the rapid communication and teaching of complex concepts (what Kelvin terms their pedagogic value), models give a "spurious respectability" and are to be avoided. Indeed, science often makes its most impressive leaps forward when it becomes iconoclastic, and totally replaces one set of hypothetical constructs with a newer and more sophisticated set.

So how are we to tell whether we have a good model or a flawed one? Well, Warr (1980) provides a checklist of the parameters by which competing theories or models can be compared or contrasted .....

We may also note a few points of good DFD practice from computer science .....

On top of that, there are seven distinctive features to look for when including servomechanisms in a modular architecture .....

Finally, when dealing with the control aspects of a system, the following guidelines will apply .....

If in doubt at any point, let yourself be guided by Ockham's Razor, the rule-of-thumb observation that when you are faced with explaining the unexplained, the simplest explanations are usually the best. To borrow Warr's term (above), the best theoretical models are parsimonious, that is to say, they are free from conceptual clutter. Go for symmetry and simplicity at all times, therefore, and if the resulting model still does not fit the facts, start to suspect and challenge what you have taken for "facts" rather than the model.

8 - References

See the Master References List


[Separate paper on Shannonian Communication Theory]

[Separate paper on the History of Transcoding Models]