Course Handout - Smith (1993)
Copyright Notice: This material was
written and published in Wales by Derek J. Smith (Chartered Engineer). It forms
part of a multifile e-learning resource, and subject only to acknowledging Derek
J. Smith's rights under international copyright law to be identified as author
may be freely downloaded and printed off in single complete copies solely for
the purposes of private study and/or review. Commercial exploitation rights are
reserved. The remote hyperlinks have been selected for the academic appropriacy
of their contents; they were free of offensive and litigious content when
selected, and will be periodically checked to have remained so. Copyright © 2018, Derek J. Smith.
First published online 16:28 BST 29th April 2003. This version [2.0 - Copyright] dated 09:00 BST 27th June 2018.
Earlier
versions of this material appeared in Smith (1993) and Smith (1997; Chapters 3 and 4). It is
presented here in slightly expanded form, and supported with hyperlinks.
Although this paper is reasonably self-contained, it is best read as extending and supporting our papers on Transcoding Models and Control Hierarchies.
1 - Processing Hierarchies
Even though Dataflow Diagrams (DFDs) are compact and unambiguous, they have two inherent weaknesses. Firstly, they take such a broad view of the processes they are describing that they have no space to spare to consider the decisions, branches, and loops shown in the more detailed program flowcharts, and secondly, the majority of processes turn out to be built up from lesser processes. The first of these weaknesses can be overcome by preparing program flowcharts and DFDs in parallel, so that each complements the other, and the second can be overcome by recognising the "hierarchical" nature of processing, and by drawing up a "pyramid" of DFDs, each at a different "level of analysis". Fortunately, there are only three rules to follow when drawing up this hierarchy of DFDs:
Rule 1 - Begin at the Top: Begin with "top level" DFDs (sometimes referred to as "context diagrams"), so as to provide an initially superficial view of the process in question. Then progressively add detail at each lower tier.
Rule 2 - Know When to Stop: The further down the pyramid you go, the more technically detailed the material becomes. However, as with a microscope, the more you see of something the less you can see of its context, so go only as far as you need to, given the needs of the investigation at hand, the time available, and the need not to introduce unnecessary complexity.
Rule 3 - Keep Count: Each DFD layer must be clearly labelled, and the convention is to start from zero at the top. Your top level process is thus a "Level-0 DFD", the next one down is a "Level-1 DFD", the next a "Level-2 DFD", and so on. It is often helpful to show boxes within boxes (within boxes, etc.), like Russian dolls, another practice known as "nesting".
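The three rules above can be sketched in code. The following is a minimal illustrative model (the class and process names are hypothetical, not from the source): each process may nest subprocesses, Russian-doll fashion, and the levels are numbered from zero at the top as Rule 3 requires.

```python
# Illustrative sketch of a nested DFD hierarchy (names are hypothetical).
# Each Process may contain subprocesses, and levels count from zero at the top.

class Process:
    def __init__(self, name, subprocesses=None):
        self.name = name
        self.subprocesses = subprocesses or []

    def describe(self, level=0):
        """Yield (level, name) pairs, labelling each layer from Level-0 down."""
        yield (level, self.name)
        for sub in self.subprocesses:
            yield from sub.describe(level + 1)

# A toy three-layer decomposition, for demonstration only.
cognition = Process("Cognition", [
    Process("Perception", [Process("Feature Extraction")]),
    Process("Memory Access"),
])

for level, name in cognition.describe():
    print(f"Level-{level} DFD: {name}")
```

Rule 2 (know when to stop) would correspond to cutting the recursion off at a chosen depth rather than walking the whole tree.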
The top three layers of analysis for the psychological process known as <Cognition> are shown in the figure below ...
Cognition as a Level-2 DFD in Nested Yourdon-Constantine Notation: Here is a Level-2 Yourdon-Constantine rendering of cognition, showing the subprocesses it is hypothesised as containing. Note how different memory stores are now being selectively accessed by different subprocesses.
Redrawn from a black-and-white original in Smith (1997; Figure 3.4(c)), itself a variation of that in Smith (1993). This version Copyright © 2003, Derek J. Smith.
Analyses of the sort shown in the above figure are called "functional decompositions" of a system. The decomposition begins with the context diagram and continues down the hierarchy of processes until one of two things happens - either (a) you reach the level at which you have seen enough, or (b) you reach a level beyond which no further decomposition is possible (the processes at this level being known as "functional primitives"). The beauty of this approach lies in the fact that it works for all systems, functions, and processes, including biological and psychological ones. The only drawback in practice - especially if the phenomenon under investigation is poorly understood - is that authors typically start arguing amongst themselves after only one or two diagrams. This is because there are invariably many ways to interpret a given body of evidence.
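The two stopping conditions can be made concrete in code. The sketch below (the dict-based representation and all names are illustrative assumptions, not from the source) halts decomposition either when a chosen depth has been reached - condition (a), "you have seen enough" - or when a process has no subprocesses and is therefore a functional primitive - condition (b).

```python
# Hedged sketch of functional decomposition with both stopping rules.
# A process is represented as (name, [subprocesses]); names are illustrative.

def decompose(process, max_level, level=0):
    """Return the visible decomposition down to max_level."""
    name, subs = process
    entry = {"level": level, "name": name}
    if not subs:
        entry["primitive"] = True       # rule (b): a functional primitive
    elif level < max_level:
        entry["children"] = [decompose(s, max_level, level + 1) for s in subs]
    else:
        entry["truncated"] = True       # rule (a): we have seen enough
    return entry

system = ("Cognition", [("Perception", [("Edge Detection", [])]),
                        ("Memory Access", [])])

view = decompose(system, max_level=1)
```

Raising `max_level` would expose more of the hierarchy; in a real investigation the level chosen reflects the needs of the analysis, exactly as Rule 2 advises.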
2 - Processing Networks
In processing hierarchies of any size, there comes a point where there is so much processing to be done that it becomes advisable to share it out amongst more than one processor. The cure is to "go modular", that is to say, to have several relatively simple but specialised processes, rather than a single, more complex, general purpose one. However, the resulting biological modularity is not always easy to model, so here are some guidelines ...
As we have argued elsewhere [divert], the fact that biological cognition invariably involves "networked" (or "distributed", or "modular") processing architectures opens up its own set of problems. Put simply, communication in networks is expensive - it is an overhead, and the more complicated the network, the greater that overhead becomes. What you have to do, therefore, is to find the point of maximum return. You have to trade off the ability to concentrate your resources against the need to seek and convey information from one module to another. This may well put a bit more pressure on your design skills, for it is far from easy, but - properly handled - the benefits can be made significantly to outweigh the costs. Indeed, it is no exaggeration to state (a) that the secret of a successful processing network is the effective management of the modularity problem, and (b) that the secret of understanding the network lies in understanding the communication overheads. Cognitive science is still in its infancy in this latter respect, for even its flagship models fail more or less totally to address the problem of those network overheads.
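The trade-off described above - concentrating resources versus paying communication overheads - can be illustrated with a toy cost model. All of the formulas and parameter values below are assumptions made purely for illustration: per-module processing cost falls as work is shared out, but in a fully connected network the number of inter-module links, and hence the overhead, grows with the number of module pairs.

```python
# Toy model (all formulas assumed, not from the source) of the modularity
# trade-off: sharing work across n modules cuts processing cost per module,
# but inter-module communication adds an overhead that grows with link count.

def total_cost(n_modules, work=100.0, link_cost=0.5):
    processing = work / n_modules             # benefit of sharing the work out
    links = n_modules * (n_modules - 1) / 2   # pairs in a fully connected net
    communication = link_cost * links         # the communication overhead
    return processing + communication

# The "point of maximum return" is the module count with the lowest total cost.
best_n = min(range(1, 21), key=total_cost)
print(f"Cheapest configuration: {best_n} modules "
      f"(total cost {total_cost(best_n):.2f})")
```

With these particular illustrative numbers the minimum falls at an intermediate module count: too few modules and the processing burden dominates, too many and the communication overhead does.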
KEY CONCEPT - MODULARITY OF PROCESSING: Modular processing is a system design philosophy which insists that like should go with like; that processing should be separated into functionally dedicated clusters, or "modules", each capable of operating more or less in isolation. Jerry Fodor - one of the main theorists on this issue - defines a module as an "'informationally encapsulated' cognitive facility" (Fodor, 1987:25). As demonstrated in any of the large psycholinguistic models, there is a significant amount of modularity in the human communication system, and it is the vulnerability of these modules to partial damage which causes clinical communication syndromes to occur in such amazing variety.
3 - Good Modelling Practice
Practical tips on effective cognitive modelling are given in Section 7 of the companion resource.