Lecturer's Précis - Smith (1991)

Copyright Notice: This material was written and published in Wales by Derek J. Smith (Chartered Engineer). It forms part of a multifile e-learning resource, and, subject only to acknowledging Derek J. Smith's rights under international copyright law to be identified as author, may be freely downloaded and printed off in single complete copies solely for the purposes of private study and/or review. Commercial exploitation rights are reserved. The remote hyperlinks have been selected for the academic appropriacy of their contents; they were free of offensive and litigious content when selected, and will be periodically checked to have remained so. Copyright © 2010, High Tower Consultants Limited.

 

First published online 09:52 BST 15th May 2003, Copyright Derek J. Smith (Chartered Engineer). This version [HT.1 - transfer of copyright] dated 12:00 13th January 2010

 

This paper was presented verbally to the Eighth Annual Conference of the Cognitive Psychology Section of the British Psychological Society, Oxford, 8th September 1991, at which time the author was combining the duties of database designer in the software industry with those of lecturer in psycholinguistics at the Cardiff School of Speech and Language Therapy. 

Full Title:

"Suppositions of belonging: Insights into the computational principles of meaning"

Official Conference Abstract:

"John Locke (1689) suggested that an idea of substance is nothing but a collection of simpler ideas 'with a supposition of something to which they belong'. With this assertion in mind, this paper considers the nature of large commercial 'distributed processing' systems and highlights the fragmentation of meaning which takes place therein. It emerges that at no one point in such systems can the data be totally meaningful, and that no one networked processor has direct access to all things of relevance. Indeed, the receiving processor of any internal data transmission - in an attempt to overcome this difficulty and in precisely the same way Locke described it - has to implement suppositions of belonging of its own in order to make sense of the incoming signals. The nature of these, and the theoretical implications for those designing neural data processing architectures are discussed." (From the Conference Programme; Copyright © 1991, British Psychological Society.)

 

 

The Philosophical Background - Locke (1689)

For readers unfamiliar with the views of the "British Empiricist" School of Philosophy, here are some extracts from its earliest formal presentation on perception and cognition, namely John Locke's "An Essay Concerning the Human Understanding" (Locke, 1689). The extracts are from Book 2, Chapter 23. The italics are from the original, but the bold type has been added to indicate the key points of relevance to the current paper. Our interpretation of Locke's argument is stated against each major proposition.

¶23.1 (p208): "Ideas of substances, how made. - The mind being, as I have declared, furnished with a great number of the simple ideas conveyed in by the senses [.....] takes notice, also, that a certain number of these simple ideas go constantly together; which being presumed to belong to one thing [are called] by one name; which, by inadvertency, we are apt afterward to talk of and consider as one simple idea [and] accustom ourselves to suppose some substratum wherein they do exist, and from which they do result; which therefore we call 'substance'."

1991 Interpretation: The mind is constantly attempting to make sense of the flood of raw data arriving from the senses, and one of the tricks of its trade is to set up abstract conceptual representations whenever regularities in that data can be observed. These conceptual representations become our ideas of external entities (Locke's "things"), whilst the raw sensations reflect only particular attributes of those entities. However, when a particular entity next appears before us, the very regularities of the sensations it produces allow their source to be recognised. This is the essence of the process we know as "perception". Note the phrases "presumed to belong" and "suppose".

¶23.3 (p209): "Of the sorts of substances. - [.....] It is the ordinary qualities observable in iron or a diamond, put together, that make the true complex idea of those substances [.....] Only we must take notice, that our complex ideas of substances, besides all these simple ideas they are made up of, have always the confused idea of something to which they belong and in which they subsist: and therefore, when we speak of any sort of substance we say it is a thing having such or such qualities [.....] These and the like fashions of speaking, intimate that the substance is supposed always something, besides the extension, figure, solidity, motion, thinking, or other observable ideas, though we know not what it is."

1991 Interpretation: This extends ¶23.1 and repeats the mentions of supposition and belonging. Locke's idea remains that an array of simple fleeting sensations can be made meaningful to the post-sensory regions of the mind by their presuming a unity - an external object - from which said sensory stimulation emanates and to which it therefore ultimately refers. The issue of where the borders of that unity are placed was not formally investigated for another 220 years, until the Gestalt psychologists made sensory organisation in general, and the "figure-ground issue" in particular [example], central to their theory of perception in the early twentieth century; the issue of the microtiming of the process, for its part, was not investigated until Donders introduced psychology to the "reaction time" experiment in the 1860s [background].

¶23.4 (p210): "No clear idea of substance in general. - Hence, when we talk [or] think of any particular sort of corporeal substances, as horse, stone, etc., though the idea we have of either of them be but the complication or collection of those several simple ideas of sensible qualities which we used to find united in the thing called 'horse' or 'stone'; yet because we cannot conceive how they should subsist alone [.....] we suppose them existing in, and supported by, some common subject; which support we denote by the name 'substance', though it be certain we have no clear or distinct idea of that thing we suppose a support."

1991 Interpretation: This paragraph reinforces ¶23.3 with specific examples. The implication is again that there exists a later stage of cognition which is making judgements (i.e. Locke's "suppositions") on the basis of information provided to it by an earlier stage.

¶23.6 (p210-211): "Of the sorts of substances. - Whatever therefore be the secret and abstract nature of substance in general, all the ideas we have of particular, distinct sorts of substances, are nothing but several combinations of simple ideas co-existing in such [.....] as makes the whole subsist of itself. It is by such combinations of simple ideas, and nothing else, that [.....] we, by their special names, signify to others [for example] man, horse, sun, water, iron; upon hearing which words every man, who understands the language, frames in his mind a combination of those several simple ideas [.....] which he has observed to exist united together." 

1991 Interpretation: This paragraph extends ¶23.3 and ¶23.4 with the Aristotelian notion that communication exists in the pairing of concept to word, and back again. A speaker's activated concept(s) (Locke's "combinations of simple ideas") generate his/her words, and those words then activate (hopefully) the same concept(s) in the mind of the listener. We see this exact state of affairs restated in engineering terminology in the mid-twentieth century's Shannonian Communication Theory. Modern psycholinguistics is additionally complicated by issues of grammar and language use, not to mention deciphering the part played by context in choosing one's own words and understanding the words of others, but the concept-word relationship nevertheless remains one of the mind's fundamental building blocks.

¶23.7 (p211-212): "Power, a great part of our complex ideas of substances. - [.....] Thus, the power of drawing iron is one of the ideas of the complex one of that substance we call a 'loadstone' [sic], and a power to be so drawn is a part of the complex one we call 'iron' [.....] therefore it is that I have reckoned these powers amongst the simple ideas, which make the complex ones of the sorts of substances; though these powers, considered in themselves, are truly complex ideas. "

1991 Interpretation: Amongst the repertoire of simple sensory ideas which make up a given object concept are a number of more complex attributes. One cannot, for example, at a single instant in time, see the sort of movement which a magnet will induce in an iron needle. To do that, we need to make repeated observations of a juxtaposition of (in this case) two objects, namely the magnet and the needle, and then make causal judgements on what we see taking place. Once we have done this, however, we may treat the power to move or be moved as though it were a simple attribute such as colour, weight, etc., and we have a distinctly more powerful concept as a result.

¶23.9 (p212): "Three sorts of ideas make our complex ones of substances. - The ideas that make our complex ones of corporeal substances are of these three sorts. First. The ideas of the primary qualities of things which are discovered by our senses, and are in them even when we perceive them not; such are the bulk, figure, number, situation, and motion of the parts of bodies [.....]. Secondly. The sensible secondary qualities which, depending on these, are nothing but the powers those substances have to produce several ideas in us [.....] Thirdly. The aptness we consider in any substance to give or receive such alterations of primary qualities as that the substance so altered should produce in us different ideas from what it did before; these are called 'active and passive powers': all of which powers, as far as we have any notice or notion of them, terminate only in sensible simple ideas." 

1991 Interpretation: This paragraph extends ¶23.7, and implicates a third type of information in the building up of an object concept. This is "aptness", that is to say, the intensity with which a given power operates.

¶23.10 (p213): "Powers make a great part of our complex ideas of substances. - Powers therefore justly make a great part of our complex ideas of substances. He that will examine his complex idea of gold, will find several of its ideas that make it up to be only powers: as the power of being melted [etc.]" 

1991 Interpretation: This paragraph reinforces ¶23.7 and ¶23.9. 

¶23.14 (p216): "Complex ideas of substances. - [.....] Our specific ideas of substances are nothing else but a collection of a certain number of simple ideas, considered as united in one thing. These ideas of substances, though they are commonly called 'simple apprehensions' [.....] yet, in effect, are complex and compounded. Thus the idea which an Englishman signifies by the name 'swan' is white colour, long neck, red beak, black legs, and whole feet, and all these of a certain size, with a power of swimming in the water, and making a certain kind of noise [.....] properties, which all terminate in sensible simple ideas, all united in one common subject." 

1991 Interpretation: This is a general restatement of Locke's argument so far. Note the phrase "considered as united", and the element of supposition implied therein. 

¶23.37 (p225): "Recapitulation. - "And thus we have seen what kind of ideas we have of substances of all kinds, wherein they consist, and how we come by them. From whence, I think, it is very evident, first, that all our ideas of the several sorts of substances are nothing but collections of simple ideas with a supposition of something to which they belong, and in which they subsist; though of this supposed something we have no clear distinct idea at all ....."

1991 Interpretation: This paragraph restates ¶23.1, ¶23.3, ¶23.4, and ¶23.6, and is the one which gave the current paper its title.

[Continued] "Secondly, that all the simple ideas that, thus united in one common substratum, make up our complex ideas of several sorts of substances, are no other but such as we have received from sensation or reflection [.....]"

1991 Interpretation: This is one of the classic statements of the "British Empiricist" school of philosophy, namely that there are no "innate ideas" in the developing mind; that everything, in other words, results from nurture rather than nature. It is presumably one of those accidents of history that the nature-nurture debate has stimulated general psychological discussion, whilst the object recognition debate has been restricted to cognitivists.

[Continued] "Thirdly, that most of the simple ideas that make up our complex ideas of substances, when truly considered, are only powers, however we are apt to take them for positive qualities; eg. the greatest part of the ideas that make up our complex idea of gold are yellowness, great weight, ductility, fusibility, and solubility in aqua regia, etc., all united together in an unknown substratum; all which ideas are nothing else but so many relations to other substances."

1991 Interpretation: This final point reinforces the earlier assertion [see ¶23.7 above] that much of what we know about any one object class derives from the way it acts upon or is affected by other object classes. This view is highly compatible with the modern "semantic network" approach to explaining human conceptual knowledge. 

 

 

1 - Scene Setting: The Cost of Cognitive Modularity

The aim of this 1991 conference presentation was to provide a database designer's perspective on the technicalities traditionally left implicit by the authors of the large modular diagrams which had become popular within psycholinguistics during the 1980s. Our rationale for so doing lay in the observation that such models are routinely left so seriously incomplete in isolation, and so incompatible with each other in combination, that they contribute little to the progressive accumulation of scientific data and common understanding; in other words, models are often theoretically neutral, and sometimes a downright hindrance. We were also tangentially critical of the technical rigour of cognitive modelling, given the number of examples of good practice available in the engineering literature in general, and in the computing literature in particular. Few 1980s cognitive modellers used the full facilities of, say, the Gane and Sarson (1977) or the Yourdon and Constantine (1979) dataflow methodologies.

Our concern for improving the heuristic value of cognitive models led us immediately to the longer-standing debate on cognitive modularity (for this is what most models are models of), and the guiding principle here is simplicity itself - no processing architecture, biological or otherwise, can "go modular" without paying in some way for that privilege. Here are the major pros and cons .....

The Benefits of Modularity: The principal benefits of cognitive modularity are (a) that it brings simplicity in terms of the size and functionality of the individual modules, (b) that because there is accordingly less to know or say about any one module, they are much easier to upgrade successfully, and (c) that compelling prima facie similarities can be seen between highly modular cognitive diagrams and the known functional mapping of the human cerebral cortex.

The Costs of Modularity: The principal cost of modularity lies in the supramodular complexity it brings with it. The whole, on this occasion, really is greater than the sum of its parts. Taking the Ellis and Young (1988) psycholinguistic diagram as class-typical [we presented this model, duly attributed, in manuscript, but now have an e-version of that model available online - click here to view], such models are so constrained by the space available to present them, that they tend to record only core functionality; that is to say, they show none of the processing overheads normally associated with networked processing architectures. Consequently, they miss out a large part of the overall processing, and risk seriously understating the amount of processing required at any one node. 

 

2 - The Standard Elements of a Cognitive Module

Having set the scene, we went on to look at precisely what would need dividing if a unimodular brain were suddenly to start evolving into a bimodular one. Unfortunately, this question can be answered in many different ways, according to which of psychology's basic orientations you subscribe to. A cognitivist, for example, would argue that it would be the capacity for mental information processing which would have to divide, a behaviourist would argue that it would be the ability to respond to stimuli, a neurobiologist would argue that it would be this or that series of biochemical reactions, and so on, school by school. The database designer's perspective is perhaps closest to the cognitivist position in that it sees the biological nervous system as a data processing machine, and would argue that it would be both data and processing which would have to divide. To develop this argument, we therefore simply credited the nervous system with the same basic categories of component as are found in the classical digital computer.

2003 ASIDE: The first coherent vision of a general purpose computing machine was Charles Babbage's 1834 proposal for an "Analytical Engine". A number of mechanical computers were constructed to Babbage's vision during the 19th century, and his basic architecture was then taken as the starting point by those responsible for designing the first digital electronic computers during the Second World War [fuller story]. The culmination of what turned out to be a massive technological investment was the "Eckert-von Neumann Machine", a "General Purpose Computer" in which the major elements were a controller, a calculation unit, a supporting memory, input and output devices, and a wiring loom to connect all the other things together [technical detail]. For reasons space prevents us going into here, it only takes one errant bit of computer data to corrupt the logic of a process, or one errant bit of process to corrupt the data; and, as already noted, cognitive modellers tend not to distinguish data and process at all.
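The fragility just mentioned can be shown with a minimal sketch of our own (it is purely illustrative, and the function name is hypothetical, not from the original paper): a single flipped bit in a stored data word changes its meaning entirely.

```python
# Illustrative sketch: one errant bit is enough to corrupt a data word.
# (Example and names are ours, not from the original paper.)

def flip_bit(value, bit):
    """Return the integer 'value' with the given bit inverted."""
    return value ^ (1 << bit)

stored = 100                       # some stored quantity
corrupted = flip_bit(stored, 10)   # a single errant bit...
print(stored, corrupted)           # 100 has silently become 1124
```

The same single-bit sensitivity applies in reverse when the corruption strikes the process rather than the data, which is why the data/process distinction matters to the designer.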

Figure 1 was used at this juncture to show how those various elements might be invoked to represent an all-purpose biological computer.

Figure 1 - The General Purpose Biological Computer: Here we see the standard elements of the non-biological general purpose computer arranged schematically within a unimodular biological cognitive system (green bubble). Input flows from the real world (far left) via <INPUT DEVICES> to produce behaviour (far right) via <OUTPUT DEVICES>. The totality of perceptual, higher cognitive, and motor control processing power is shown as a pool of <LOGIC>, the totality of the long term memory resources available to that processing is shown as <DATA>, and precisely measured subsets of logic - "programs" - are executed by the <PROCESSOR>. This processing architecture can only function effectively if the <DATA> are appropriately defined for each attempt to access or manipulate them, and this critically important body of data definitions is held in the <SCHEMA>. We recognise that the word "schema" has a number of very precise usages within psychology, but use it here in its database usage, where it acts as the blueprint specification for the contents of a computer's "semantic" memory. The word derives from the work of the computing industry's Codasyl Database Task Group over 30 years ago (Codasyl, 1969, 1971; ANSI/SPARC, 1976), who saw database schemas as repositories of "metadata" - data which describes data. [For a relatively painless introduction to database terminology see Silberschatz, Korth, and Sudarshan (1997 online).]
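The schema-as-metadata idea can be sketched in a few lines (the sketch and all its names are our hypothetical illustration, not part of the original acetate): the same raw record is anonymous numbers without the schema, and named, unit-bearing attributes with it.

```python
# Illustrative sketch (all names hypothetical): a schema is "data which
# describes data" -- without it, a stored record is just anonymous numbers.

raw_record = (7, 1800, 3)    # meaningless in isolation

schema = [                   # the metadata held in <SCHEMA>
    ("object_id", "identifier", None),
    ("weight",    "magnitude",  "grams"),
    ("colour",    "code",       "colour-table entry"),
]

def interpret(record, schema):
    """Apply the schema to raw data, yielding named, unit-bearing attributes."""
    return {name: (value, unit)
            for (name, kind, unit), value in zip(schema, record)}

print(interpret(raw_record, schema)["weight"])   # (1800, 'grams')
```

The point of the sketch is simply that access to the data is only meaningful via the definitions, which is why the <SCHEMA> element is "critically important" in Figure 1.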

[The elements of biological computation]

 

From the original 1991 conference acetate, but with dotted line insets added here to give an explanatory context. Full original in Figure 2. This graphic Copyright © 2003, Derek J. Smith.

 

 

3 - Cognitive Meiosis

The next stage in our argument was to point out that when cognitive modularity first emerged in the Animal Kingdom's evolutionary past, it must have brought with it some sort of evolutionary advantage. Moreover, the modularising process must have been akin to cellular meiosis, the process of sharing out the current contents of a cell nucleus between two offspring gametes. Metaphorically speaking, the processing elements shown in Figure 1 have to share out their current functionality between two offspring (and necessarily smaller) processing modules. We therefore conducted a thought experiment to consider how and where this sharing out might take place, and here are our conclusions as to what this speculative first act of "cognitive meiosis" might have consisted of .....

Stage 1: An organism with a unimodular brain finds it evolutionarily advantageous to mutate in the direction of greater cognitive capacity. Its brain therefore starts to grow in size across the generations.

Stage 2: Beyond a certain size, however, the existing unimodular architecture starts to run into difficulties, eventually reaching the point where it cannot make its "software" more complicated than it currently is. There is then a clear evolutionary pressure for the processor to start to compartmentalise, that is to say, for various key functions to start to draw physically apart in the direction of a bipolar ("dumbbelled") structure.

2003 ASIDE: In 1991, we suggested this merely as a working hypothesis. However, two papers soon came to our attention as precisely expressing the argument we were making. Both Norris (1991) and Hinton, Plaut, and Shallice (1993) have vividly demonstrated that larger and larger real world problems cannot be solved simply by deploying larger and larger monolithic processors. Instead, processing must be modularised in some way, with each module somehow handling a key sub-aspect of the overall problem. Fodor - one of the leading theorists on the modularity issue - defines a module as an "'informationally encapsulated' cognitive facility" (Fodor, 1987:25), and the bipolar structure which emerged in Stage 2 above helps deliver this informational encapsulation, and thus promotes more adaptive behaviour.

Stage 3: Organisms possessed of this compartmentalised processor flourish selectively, and the drawing apart continues until a bimodular brain finally emerges, with the waist of the transitional bipolar module becoming the communication channel between the two derived modules. There are many points at which the mother module might choose to divide, all conjectural, but we chose, again merely for the sake of having a working hypothesis, to separate the perceptual from the conceptual aspects of cognition. This gives us on the one hand an essentially sensory module with retained motor capabilities, and on the other hand an essentially motor module with retained sensory capabilities.

2003 ASIDE: The communication channel thus introduced is a serious processing overhead, and is the principal cost of modularity mentioned in Section 1 above.

This hypothetical sequence of events was summarised diagrammatically in Figure 2 .....

Figure 2 - The Nature of Cognitive Meiosis: Here we see the effects of our initially unimodular mother system (top) dividing into two linked daughter modules (bottom left and right). These derived modules are capable of doing the same job, but now possess spare capacity to support future improvement. Note how all six of the elements identified in Figure 1 have divided. But note also the newly formed communication channel (green flash), and the processing necessary to transmit and receive data along it (green bubbles left and right). This link management processing constitutes a net increase in the overall amount of processing required of the system, albeit transparent to both the end user and any external observers. You need to do more to achieve the same, in other words, but by the same token you have doubled your original processing capacity so you are still "on a profit", so to speak. For the resulting system to perform effectively and efficiently, two in-many-respects incompatible conditions now need to be met. On the one hand, there should be no duplication of storage or processing across the two modules, and on the other the exchange of information between the two modules requires some commonality of understanding. <SCHEMA B>, in other words, must be able to make sense of whatever <SCHEMA A> is sending it, but without unnecessarily duplicating it. This is far from straightforward in practice, and we look at how this might be achieved in Section 4 ......
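The link-management overhead described in the caption can be sketched as follows (our illustration only; the module and message names are hypothetical). The two daughter modules reproduce the mother module's result, but only by paying for an encode step at A and a decode step at B that the unimodular system never needed.

```python
import json

# Illustrative sketch (hypothetical names): the A-B link demands extra
# transmit/receive processing that is invisible to the end user.

def module_a_send(result):
    """Daughter A: encode a result for the communication channel (overhead)."""
    return json.dumps({"node": result})

def module_b_receive(message):
    """Daughter B: decode the incoming signal and recover the result (overhead)."""
    return json.loads(message)["node"]

# The round trip achieves what the unimodular mother did directly,
# at the cost of the encode/decode work on the link.
assert module_b_receive(module_a_send("swan-node")) == "swan-node"
```

Note that the shared message format stands in for the "commonality of understanding" between <SCHEMA A> and <SCHEMA B>: B can only decode what A sends because both sides agree in advance on what the signal means.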

[The networking costs of modular processing]

Redrawn from the original 1991 conference acetate (in manuscript). This graphic Copyright © 2003, Derek J. Smith. The transition from Figure 1 to Figure 2 was done by physically unmasking the lower half of the acetate at the appropriate stage in the proceedings.

  

4 - Suppositions of Belonging

Finally, and as promised in our chosen title, we attempted to reconcile our late 20th century paradigm for parallel distributed processing with John Locke's late 17th century philosophy, and we did this by looking at the way the mother module's <DATA> (along with the <SCHEMA> which defined it) was apportioned between the two offspring modules. Given our explicit presumption (see Section 2) that Daughter A was a perceptual processor, whilst Daughter B was a conceptual-behavioural processor, it followed that Daughter A would be responsible for such processes as sensory data acquisition, signal transcoding, figure-ground organisation, and item matching against perceptual memory, whilst Daughter B would be responsible for activating and making good use of the conceptual memory associated with the perceptual node(s) activated by Daughter A.

But the glaring and immediate problem is that the percept, instead of being a conveniently located local adjunct to its associated concept, now has to activate it from afar. Daughter B therefore does not know what percept has been activated unless and until it has been informed of same along the newly installed intermodular communications link (and, even then, it is being required to make higher cognitive judgements on secondary inputs, for it is no longer itself in direct touch with the outside world). By the same token, Daughter A will totally lack a conceptual context, because the ability to consider the deeper meaning of the current perceptual scene relies on the contents of the semantic network contained in Daughter B.

2003 ASIDE: Here we have another of the Empiricist philosophers' favourite posers. If we know everything we know at second hand, they argue, then at best the resulting knowledge is arbitrary and highly personalised (Locke's position), and at worst reality is nothing but a figment of our imagination (Bishop Berkeley's position). It is also what Locke was on about when he wrote (see Header Panel, ¶23.9) that many of the qualities of an external object "are nothing but the powers those substances have to produce several ideas in us by our senses".

Here is a blow-by-blow account of how this impasse might be resolved. Note that all communications take place across the A-B link highlighted in Figure 2.

MODULE A PROCESSING

·         Module A receives the neural activity generated by external object X and its background.

·         Module A analyses X as being the relevant figure, and largely ignores the remainder as being background.

MODULE B PROCESSING

·         Module B receives enough information across the communications link to allow it to activate the conceptual node(s) corresponding logically to the perceptual node(s) now activated in Module A.

·         Module B adjusts its current understanding of the world [however that works] according to the new conceptual input, and adjusts its decision making and behavioural planning as necessary.
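The steps above can be sketched as follows (our illustration; the memory contents and names are hypothetical). Module B never sees the raw input: it receives only A's best-match label over the link, and must "suppose" the concept that label belongs to.

```python
# Illustrative sketch (hypothetical contents): Module A holds perceptual
# memory, Module B holds conceptual memory, and only a label crosses the link.

PERCEPTUAL_MEMORY = {"white, long neck, red beak": "swan-percept"}
CONCEPTUAL_MEMORY = {"swan-percept": "SWAN: bird, swims, makes a certain noise"}

def module_a(sensory_input):
    """Module A: figure-ground analysis and matching against perceptual memory."""
    return PERCEPTUAL_MEMORY.get(sensory_input)   # best-matching perceptual node

def module_b(link_signal):
    """Module B: activate the conceptual node corresponding to the reported
    percept -- the 'supposition of belonging', made at arm's length."""
    return CONCEPTUAL_MEMORY.get(link_signal)

percept = module_a("white, long neck, red beak")
print(module_b(percept))   # the detached conceptual activation
```

The sketch makes the division of labour explicit: all that crosses the link is a token, and everything B subsequently "knows" about the external object is its own reconstruction from that token.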

The above argument is summarised diagrammatically in Figure 3.

 

Figure 3 - The Limits of Bimodular Cognition: Here we see the two modules produced during our reconstructed act of cognitive meiosis. The system has been exposed to external object X, and we show across the bottom of the diagram the stages of cognition which now need to be gone through, and where we presume they take place. As already profiled in Section 2 and immediately above, we see Daughter A as being essentially a perceptual module. It therefore recognises external object X as a figure against a background, and activates the best matching perceptual memory node in <DATA A>, its long term data store. This, however, is all that it can do, because conceptual memory has been moved across the way into Daughter B. Daughter B therefore has to be told what its sister has decided is the best guess at a current perceptual truth, and this means not just that <DATA A> has to activate the corresponding semantic node in <DATA B>, but that it has to do it "at arm's length", via the communications link. It is the resulting state of detached conceptual activation which, in our submission, constitutes the "supposition of belonging" described by Locke.

[Locating

From the original 1991 conference acetate. This graphic Copyright © 2003, Derek J. Smith.

 

5 - Conclusion

Our conclusions were therefore (a) that cognitive modellers would do well to follow more closely the diagramming standards used within engineering and computing, and (b) that semantic network modellers would do well to follow the methods of analysis used by database designers, because the devil is in the detail.

 

6 - Subsequent Developments of these Themes

Since this paper was first presented in 1991, various threads within the above argument have been further developed .....

·         An enlarged version of Figure 2, depicting the cognitive modularity characteristic of the adult human mind, was published in Smith (1993). This proposed 13 major cognitive processes and 20 separate memory resources, and apportioned these between five physical cognitive modules, plus a "virtual" consciousness module. This five-plus-one model was subsequently simplified for teaching purposes in Smith (1996b; Chapter 1) and Smith (1997a) and both the long and the short form diagrams have recently been placed online [click here]. In addition, the face validity of the model was tested against the ontogenetic cognitive series in Smith (1996a), and against the phylogenetic cognitive series in Smith and Stringer (1997).

·         The cybernetic implications of modularity were set out in Smith (1997b; Chapter 4). This material, too, has recently been placed online [click here (note Figure 2 especially)].

·         Our complaint that cognitive modelling skills qua modelling skills needed to be improved led to our introducing the Gane-Sarson and Yourdon-Constantine diagramming conventions into our psycholinguistics curriculum, and to our duly supporting them with examples and exercises. This was done in Smith (1997b; Chapter 3), and this material, too, has recently been placed online [click here].

·         The notion that database principles may be seen at work in the organisation of biological long-term memory has been addressed at subsequent conferences as Smith (1997a) and Smith (1997c), and is currently [May 2003] being simulated in software.

·         The notion that the communications link overheads are similarly under-debated in cognitive theory has also influenced us greatly, motivated by a deep sense of unease that what cognitive diagrams choose not to show is precisely what non-biological computer networks spend most of their time worrying about. This line of argument has been developed at subsequent conferences as Smith (1997d), Smith (2000a), Smith (2000b), and Smith (2002). 

 

References

ANSI/SPARC (1976). Interim report of the study group on DBMS. ACM Sigmod Bulletin, 7(2).

Codasyl System Committee (1969). A survey of generalised database management systems. ACM Technical Report.

Codasyl System Committee (1971). Feature analysis of generalised database management systems. ACM Technical Report.

Fodor, J.A. (1987). Modules, etc. In Garfield, J.L. (Ed.), Modularity in Knowledge Representation and Natural Language Understanding. Cambridge, MA: MIT Press.

Gane, C. and Sarson, T. (1977). Structured Systems Analysis: Tools and Techniques. New York: Improved System Technologies (IST).

Hinton, G.E., Plaut, D.C., and Shallice, T. (1993). Simulating brain damage. Scientific American, October 1993, 269:58-65.

Locke, J. (1689). An Essay Concerning the Human Understanding. London: Routledge. [Page numbering from the 1896 Lubbock edition.]

Norris, D. (1991). The constraints on Connectionism. The Psychologist, 4:293-296.

Smith, D.J. (1991). Suppositions of belonging: Insights into the computational principles of meaning. [Paper presented 8th September 1991 at the Eighth Annual Conference of the Cognitive Psychology Section of the British Psychological Society, Oxford.]

Smith, D.J. (1996a). Brain and Communication. Cardiff: UWIC. [ISBN: 1900666014]

Smith, D.J. (1996b). Memory, Amnesia, and Modern Cognitive Theory. Cardiff: UWIC. [ISBN: 1900666006]

Smith, D.J. (1997a). The magical name Miller, plus or minus the umlaut. In Harris, D. (Ed.), Proceedings of the First International Conference on Engineering Psychology and Cognitive Ergonomics (Volume 2). Aldershot: Ashgate. [ISBN: 0291398472] [Being the transcript of a paper presented 24th October 1996 to the First International Conference on Engineering Psychology and Cognitive Ergonomics, Stratford-upon-Avon.]

Smith, D.J. (1997b). Human Information Processing. Cardiff: UWIC. [ISBN: 1900666081] 

Smith, D.J. (1997c). The IDMSX set currency and biological memory. Poster presented 10th March 1997 at the Interdisciplinary Workshop on Robotics, Biology, and Psychology, Department of Artificial Intelligence, University of Edinburgh. [Transcript]

Smith, D.J. (1997d). Chunking and cognitive efficiency: Some lessons from the history of military signalling. Paper presented 27th March 1997 to the 11th Annual Conference of the History and Philosophy of Psychology Section of the BPS, York. [Transcript]

Smith, D.J. (2000a). A slow-motion video analysis of information feedback in a computer-animated psycholinguistic model. Computer-animated poster presented 10th April 2000 at the Tucson 2000 - Towards a Science of Consciousness conference, University of Arizona, Tucson, AZ. [Transcript]

Smith, D.J. (2000b). A slow-motion video analysis of the arrival and circulation of initially unbinded input within consciousness. Computer-animated poster presented 30th June 2000 at the Fourth Annual Meeting of the Association for the Scientific Study of Consciousness, Free University, Brussels, Belgium. [Transcript]

Smith, D.J. (2002). Intramodular neurotransmission and the Wichita Lineman. Poster presented 9th April 2002 at the Tucson 2002 - Towards a Science of Consciousness conference, University of Arizona, Tucson, AZ.

Smith, D.J. and Stringer, C.B. (1997). Functional Periodicity in Biological Information Processing Architectures. Cardiff: UWIC. [ISBN: 1900666073]

Yourdon, E. and Constantine, L.L. (1979). Structured Design: Fundamentals of a Discipline of Computer Program and Systems Design. Englewood Cliffs, NJ: Prentice Hall.

 

 
