Course Handout - Short-Term Memory Subtypes in Computing and Artificial Intelligence

Part 6 - Memory Subtypes in Computing

Copyright Notice: This material was written and published in Wales by Derek J. Smith (Chartered Engineer). It forms part of a multifile e-learning resource, and subject only to acknowledging Derek J. Smith's rights under international copyright law to be identified as author may be freely downloaded and printed off in single complete copies solely for the purposes of private study and/or review. Commercial exploitation rights are reserved. The remote hyperlinks have been selected for the academic appropriacy of their contents; they were free of offensive and litigious content when selected, and will be periodically checked to have remained so. Copyright © 2010, High Tower Consultants Limited.

 

First published online 12:30 GMT 19th December 2003, Copyright Derek J. Smith (Chartered Engineer). This version [HT.1 - transfer of copyright] dated 12:00 13th January 2010

 

 

This is the sixth part of a seven-part review of how successfully the psychological study of biological short-term memory (STM) has incorporated the full range of concepts and metaphors available to it from the computing industry. The seven parts are as follows:

 

Part 1: An optional introductory and reference resource on the history of computing technology to 1924. This introduced some of the vocabulary necessary for Parts 6 and 7. To go back to Part 1, click here.

 

Part 2: An optional introductory and reference resource on the history of computing technology from 1925 to 1942. This further introduced the vocabulary necessary for Parts 6 and 7. To go back to Part 2, click here.

 

Part 3: An optional introductory and reference resource on the history of computing technology from 1943 to 1950. This further introduced the vocabulary necessary for Parts 6 and 7. In so doing, it referred out to three large subfiles reviewing the history of codes and ciphers, and another subfile giving the detailed layout of a typical computer of 1950 vintage. To go back to Part 3, click here.

 

Part 4: An optional introductory and reference resource on the history of computing technology from 1951 to 1958. This further introduced the vocabulary necessary for Parts 6 and 7. To go back to Part 4, click here.

 

Part 5: An optional introductory and reference resource on the history of computing technology from 1959 to date. This further introduced the vocabulary necessary for Parts 6 and 7. To go back to Part 5, click here.

 

Part 6: A review of the memory subtypes used in computing (as already identified in Parts 1 to 5). This material follows below and highlights how a succession of individual innovations from the 1930s onwards gradually brought about the software industry as we know it today. The main sections are:

 

1 - Programming as "Structured" Problem Solving

 

2 - COBOL for Beginners

 

3 - The Subtypes of Volatile Electronic Memory

 

Part 7: A comparative review of the penetration (or lack thereof) of those memory subtypes into psychological theory. UNDER CONSTRUCTION.

 

To avoid needless duplication, the references for all seven parts are combined in the menu file. To return to the menu file, click here, and to see the author's homepage, click here. We also strongly recommend the computer history sites maintained corporately by the Charles Babbage Institute at the University of Minnesota, the Bletchley Park Museum, the Computer Conservation Society, and the National Archive for the History of Computing at Manchester University, as well as those maintained individually by the University of Essex's Simon Lavington, Clemson University's Mark Smotherman, and the freelance historians George Gray (editor of the Unisys Newsletter) and Ed Thelen.

 

 

1 - Programming as "Structured" Problem Solving

Although our narrative sequence thus far has been driven primarily by advances in computing hardware, our official objective throughout has been to highlight the software innovations which accompanied them, especially where they involved an innovative use of electronic memory. Hence the various "Key Concept" paragraphs dotted around Parts 1 to 5, and hence our repeated observation that no matter how sophisticated the machinery in question, the basic software problem has always been the same - how to deploy the available electronic memory to best advantage, given the often conflicting demands of the computation and the capabilities of the processor [see, for example, the story of buffer storage in Part 4 (Section 1.3)]. Our task is now to collate and categorise all those key concepts, but before we do that we need (a) to say a few words about the nature of "structure" in computer programming, what it is and where it comes from, and (b) to take an introductory look at the COBOL programming language.

 

1.1 Systems Analysis, Programming, and Programmers

In Part 4 (Section 3.1), we explained how the roles of systems analyst and programmer call for significantly different sets of skills and types of mind. Metaphorically speaking, it is the analyst's job to tease each new problem from its lair, and only then - when it is out in the open - does it become the programmer's job to move in for the kill. Given the right people, the resulting object code is then anything between ruthlessly efficient and downright elegant; cut corners, however, and the whole thing soon degenerates instead into a dog's breakfast. So where exactly does the analyst's work start, and when should they best hand over the results of their preliminary investigations for coding? Let us look in a little more detail at what the two jobs involve .....

 

Scenario: Imagine that you are a rich inventor, who has just developed a new form of digital display. You wish to use your new technology in a bedside alarm clock, so you sketch out the sort of display you have decided to provide, and head off to see your IT Department. Here is what you have in mind .....

 

MON | SEP 08 03 | 09:23 am

 

Now the point is that before any analysis or programming has been done, many of the most important design decisions have already been taken. In other words, you - the system's "sponsor" - have already grossly overspecified your requirement, which is unfortunate, because your sketched design suffers from several potentially serious difficulties. For example .....

 

 

 

 

 

You have straitjacketed your creatives, in other words ["micromanaged" is the current buzzword]. In fact, it would normally be the systems analyst's job, taking due specialist advice, to devise a display which was simultaneously ergonomically sound, commercially promising, and technically feasible. This display would then need to be formally approved by your intellectual property, product design, branding, and marketing people, and only when those individual approvals were forthcoming would the idea be released for development. There are therefore several occasions when a vivid and unambiguous form of documenting the proposed processing would be needed, and preferably one which was equally accessible to non-technical and technical staff. One of the tools available to do this is the "program flowchart", and we included a tutorial on this particular method in Section 1 of our e-paper on "How to Draw Cognitive Diagrams". Here is how the program flowchart for the present scenario might look .....

 

Figure 1 - The Digital Clock Display Program Flowchart: Here we see the flow of decision making required to advance (from left to right) the <DAY>, <MONTH>, <DATE>, <YEAR>, <HOUR>, <MINUTE>, and <AM-PM> display registers on the sort of 2x12-hour sweep digital clock display shown above. We have presumed that this routine will in practice only be called after the user has set the various display elements to desired starting values; however, we have not shown the detailed processing necessary to do this, preferring to collapse it instead into the dotted pink panel [top, centre]. The dotted green panel [top, centre] is then an optional set of validating tests intended to ensure that the pre-set starting values have not been corrupted by the transfer of control. Purists often claim that this is not necessary, because "nothing can go wrong"; pragmatists do it anyway, for safety's sake, because accidental range errors can cause havoc to the meticulously structured branching and looping which is to follow [specifically, we do not want to increment <SECONDS> until it gets to 60, only to be given that field already preset to 61]. There is then an unconditional <INITIALISE> operation [topmost yellow box], where any process-internal fields - that is to say, fields not already set by the prior processing - are set to their own starting values. The captions down the right-hand side of the diagram show the main phases of the processing, but annotation such as this is not normally included.

 

The process then begins in earnest. The intention, of course, is that each separate display register should advance to its logical limit (ie. the 59th second, the 23rd hour, etc.), and then "carry" in the same way that each advancing nine carries in a decimal display. We have presumed that the entire system is driven ultimately by an electronic oscillator which pulses to a TIMER PIN precisely once per second. The underlying electronics need not concern us, for we may refer to the condition as TIMER PIN ON or TIMER PIN OFF. If we then wait on each timing pulse, we can use it to drive the <SECONDS> register directly. This, of course, gives us a problem every 60 seconds, whereupon we need to carry one to the <MINUTES> display and set <SECONDS> back to zero. And whenever we advance <MINUTES>, we need to check for carry over to the <HOURS>, and so on for the <AM-PM> indicator every 12 hours, the <DATE> every 24 hours, the <MONTH> every 30 or so days, and the <YEAR> every 12 months [not all months have the same number of days in them, of course, and February has a leap year problem]. Note that for the hour after noon or midnight, the <HOUR> displays as <12>, not <00>. Finally, we need to convert each advancing <DATE> to its equivalent day of the week, and display that answer in the <DAY> field. However, this caption is already overlong, so we have decided to implement date-day conversion as a subroutine [dotted green panel, lower left], and leave it to someone else to design!

[Figure 1 image: STM6-fig1.gif]

 

Copyright © 2003, Derek J. Smith.

 

One of the strengths of program flowcharts such as the one shown above is that it is possible to generate machine code more or less directly from the diagram. For example, any sequentially "interpreted" programming language would simply follow the diagram down the page, substituting its own syntax for all the operations, tests, and branches identified therein. In the table below, we show what the resulting code might look like, written in syntax similar to that used by Microsoft's BASIC programming language, an interpreted language in which each instruction is executed in strict sequence from the top, unless and until that sequence is overridden by an explicit "GO TO" instruction of the form seen on Lines 20, 30, etc., whereupon control jumps instead to the instruction thereby specified ..... 

 

Key Concept - "GO TOs": A "GO TO" is a specific jumping instruction designed to allow programmers to override their default execution sequence. This enables them to prepare their code in logically discrete chunks, despite the tight physical packing of the instructions. These chunks can then be GO TO'd in and out of when needed. GO TOs are particularly useful when controlled by an "IF ..... THEN ....." condition test (eg. as shown on Line 20 below), allowing condition-specific blocks of instructions to be selected or ignored as appropriate. As we are about to learn in Section 1.2, GO TOs came under heavy criticism in the 1960s, and were largely abandoned in favour of the much more human-friendly "PERFORM" statement.

 

Key Concept - "Iterations" and "Loops": GO TOs are also useful for simple repetitions, or "iterations", of blocks of code (eg. as shown in Lines 20-30 below). These are commonly known as "loops", because control repeatedly loops back to the beginning of the specified block. 

 

[Table: each numbered Instruction below is followed immediately by its explanatory Comment. NB: The GREEN instructions establish important processing loops, and will be discussed further in Section 3.5 below. LT = LESS THAN; GT = GREATER THAN.]

10 SECONDS = 0

INITIAL PROCESSING: This instruction sets the <SECONDS> display register to zero. All other display registers will already have been set to their desired starting values by a preceding routine.

20 IF TIMER PIN ON THEN GO TO LINE 40

AWAIT NEW SECOND: Together with Line 30, this instruction does nothing but wait, repeatedly testing for the TIMER PIN ON signal to occur. Since this happens precisely once a second, this is the mechanism by which the timer hardware communicates with and actuates the display software.

30 GO TO LINE 20

This is the "ELSE" element of Line 20, that is to say, you only get this far if the TIMER PIN ON test is failed. The GO TO LINE 20 then serves to loop back to the preceding instruction, so that the TIMER PIN status can be re-sampled. This will happen many thousands, if not millions, of times a second, depending on the speed of the processor.

40 SECONDS = SECONDS + 1

NEW SECOND PROCESSING: You only get this far when a new second has just been signalled. This instruction adds 1 to whatever the <SECONDS> count currently contains [we are presuming that the TIMER PIN sets itself back to OFF, once sampled].

50 IF SECONDS = 60 THEN GO TO LINE 70

If <SECONDS> has now advanced to 60, then control is taken down to Line 70. A cautious programmer might replace the analyst's "equals 60" test with the more defensive "GT 59", because this is better able to cope with an accidental corruption of the value in question.

60 GO TO LINE 20

This is the "ELSE" element of Line 50, that is to say, you only get this far if <SECONDS> is in the range 1 to 59 inclusive, and no "minutes carry" is required. This instruction takes us back to our TIMER PIN sampling process.

70 SECONDS = 0; MINUTES = MINUTES + 1

NEW MINUTE PROCESSING: You only get this far when a new minute has just been detected. This instruction adds 1 to whatever the <MINUTES> count currently contains, and sets the <SECONDS> register back to zero.

80 IF MINUTES = 60 THEN GO TO LINE 100

If <MINUTES> has now advanced to 60, then control jumps to Line 100. Again, a cautious programmer might replace the analyst's "equals 60" test with the more defensive "GT 59".

90 GO TO LINE 20

This is the "ELSE" element of Line 80, that is to say, you only get this far if <MINUTES> is in the range 1 to 59 inclusive, and no "hours carry" is required.

100 MINUTES = 0; HOURS = HOURS + 1

NEW HOUR PROCESSING: You only get this far when a new hour has just been detected. This instruction adds 1 to whatever the <HOURS> count currently contains, and sets the <MINUTES> register back to zero.

AND SO ON

 

 

The resulting sequence of one-offs and iterations, conditionals and compulsories, GO TOs and returns, routines and subroutines, is known as the "program structure" ..... 

 

Key Concept - A Program's "Structure": A program's "structure" determines the sequence of execution of each machine instruction. The default structure is simple sequential execution, but this can be creatively expanded by conditional branches and loops where appropriate. But beware - just as "ready - fire - aim" is a notably ineffective sequence of instructions to a firing party [think about it], so, too, does the slightest mis-sequencing of machine instructions prejudice a calculation. Indeed, it only takes one rogue instruction to wreak total and immediate havoc. Structuring therefore helps to ensure (a) that only logically related sets of instructions are executed, and (b) that they are executed when and only when necessary. << AUTHOR'S NOTE: Cognitive science has yet to establish how the highly trained mind of an experienced programmer models such structuring, that is to say, how the loops and branches of the mind mirror the loops and branches of the written code. >> 

 

Useful though the program flowchart is, there exists a diagramming format which is typically a whole lot easier on both eye and brain. We refer to the format known as the "program structure diagram". Here is Figure 1's substantive logic expressed in this alternative set of conventions ..... 

 

Figure 2 - The Digital Clock Display Structure Diagram: Here we see exactly the same solution as was proposed in Figure 1, but drawn differently. The rules for this type of diagram are that each box represents a related cluster of instructions, but that the fine detail of those instructions should be excluded because it confuses both eye and mind. Instead, the box is given a title ["Process Control", "Initial Process", "Main Process", etc.] which succinctly states its contribution to the whole. Execution passes downwards and left-to-right in turn, beginning at the top, and the result is a processing "tree" which methodically advances the point of control through a given problem until it has been resolved. In so doing, some blocks of instructions are executed once, some a known number of times, some only under a particular condition, and some until a particular end state is reached. The repeating blocks are conventionally shown with an asterisk in their top right corner, accompanied by a concise textual caption [eg. the "UNTIL RESET", top centre]. The conditional blocks are linked into the tree by a dotted line, again captioned so as to indicate when they are to be invoked [eg. the "SECONDS = 60" condition which controls when the NEW MINUTE PROCESS is to run]. One of the strengths of program structure diagrams such as this, is that it is possible to generate high level source code (that is to say, code ready for compilation rather than interpretation) more or less directly from the diagram. We show this in the column of text to the right of the diagram, and we shall be going into greater detail on how this is done in Section 2.4 below.

[Figure 2 image: STM6-fig2.gif]

 

Copyright © 2003, Derek J. Smith.
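
The paragraph skeleton inset into Figure 2 can be hard to make out at screen resolution, so the following sketch gives a flavour of the sort of nested paragraph structure such a diagram maps onto. It is our own reconstruction, not the original inset: the paragraph names echo the Figure 2 box titles, but the simulated tick counter (standing in for the hardware TIMER PIN) and the two-hour cut-off are purely illustrative assumptions, and the syntax is modern free-format GnuCOBOL rather than any particular period dialect.

IDENTIFICATION DIVISION.
PROGRAM-ID. CLOCK-SKELETON.
DATA DIVISION.
WORKING-STORAGE SECTION.
01  WS-TICKS     PIC 9(6) VALUE ZERO.   *> stands in for the hardware TIMER PIN
01  WS-SECONDS   PIC 9(2) VALUE ZERO.
01  WS-MINUTES   PIC 9(2) VALUE ZERO.
01  WS-HOURS     PIC 9(2) VALUE 12.
PROCEDURE DIVISION.
PROCESS-CONTROL.
    PERFORM INITIAL-PROCESS.
    PERFORM MAIN-PROCESS UNTIL WS-TICKS = 7200.   *> two simulated hours, then stop
    STOP RUN.
INITIAL-PROCESS.
    MOVE ZERO TO WS-SECONDS.                      *> process-internal fields set here
MAIN-PROCESS.
    ADD 1 TO WS-TICKS.                            *> one pass per simulated second
    ADD 1 TO WS-SECONDS.
    IF WS-SECONDS = 60
        PERFORM NEW-MINUTE-PROCESS
    END-IF.
NEW-MINUTE-PROCESS.
    MOVE ZERO TO WS-SECONDS.
    ADD 1 TO WS-MINUTES.
    IF WS-MINUTES = 60
        PERFORM NEW-HOUR-PROCESS
    END-IF.
NEW-HOUR-PROCESS.
    MOVE ZERO TO WS-MINUTES.
    ADD 1 TO WS-HOURS.
    IF WS-HOURS = 13
        MOVE 1 TO WS-HOURS                        *> 12-hour sweep: 12 is followed by 1
    END-IF.
    DISPLAY "HOUR IS NOW " WS-HOURS.

Note how each box of the structure diagram has become a named paragraph, and how the conditional boxes ("SECONDS = 60", "MINUTES = 60") have become IF ..... PERFORM pairs; Section 2.4 below explains the PERFORM statement in detail.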

 

1.2 Structured Programming

In Part 4 (Section 3.2), we explained how assembled and compiled programming languages emerged in the early 1950s to help relieve the monotony and inefficiency of entering machine code instructions the hard way. In fact, the idea goes back to about 1942, when the German engineer Konrad Zuse began devising the Plankalkül programming language to go with his Z4 computer. Zuse described his computer programs as Rechenpläne (singular Rechenplan), German for "calculation plans", and believed that writing them was as much part of owning a computer as collating packs of Jacquard cards was part of owning a loom. Zuse's Plankalkül is nowadays commonly accepted as being the first complete programming language [for a more technical introduction, complete with detailed examples, see Bauer and Wössner (1972/2002 online)]. Other early languages were .....

 

 

 

 

 

However, perhaps the best remembered of all the pioneer programmers was US Navy Lieutenant (later Rear Admiral) Grace Murray Hopper (1906-1992). This Yale mathematician joined the US Navy in 1943, and - despite the fact that "she did not know the difference between a relay and a basket of tomatoes" (Schneider and Schneider, 1998) - was assigned to ballistics computation duties. She learned quickly, and the quality of her work soon took her to Howard Aiken's computer development unit at Harvard, where she helped prepare mathematical formulae for the Harvard Mark I and II [she also found the first software "bug" - story and picture]. She then moved to the Eckert-Mauchly Computer Corporation in 1949 to help out on the BINAC and UNIVAC projects, and began developing early "compiling routines" such as A-0. This ran for the first time in 1952, and is described by Sammet (1992/2003 online) as being "incredibly primitive by today's standards". Sammet ranks the 1955-1958 follow-up product far more highly. This was B-0, later renamed Flow-Matic, a compiled language which allowed programmers to write in a simple form of English. Here is how Hopper herself described the achievement a few years later .....

 

"It was determined that the only common symbology used by the various installations was the English language and it was therefore decided to use a selected English vocabulary as a pseudo-code [note this word]. It was possible to isolate and define 30 verbs

 

ADD CLOSE-OUT COMPARE COUNT DIVIDE EXECUTE FILL HALT IGNORE INPUT INSERT JUMP MOVE MULTIPLY NUMERIC-TEST OVERLAY PRINT-OUT READ-ITEM REPLACE REWIND SELECT SELECT-LEAST SET STOP SUBTRACT SWITCH TEST TRANSFER TYPE-IN WRITE-ITEM

 

which as a start could make possible definition of data-processing problems in a common manner. Thus a sentence could be written

 

(19) COMPARE INV-ON-HAND WITH INV-LIMIT; IF GREATER GO TO OPERATION 29; IF EQUAL GO TO OPERATION 23; IF LESS GO TO OPERATION 23.

 

[.....] While 30 verbs [.....] could be distinguished, no such list of nouns could be prepared. It became clear that each installation must be permitted to define its own nouns, thus describing its own data. It was determined that these definitions must include descriptions of (1) the data as a file [.....], (2) the data as items or records of sets of words, and (3) the data as fields as sets of characters." (Hopper, 1959, pp171-172)

 

[To see a specimen Flow-Matic program, click here; the distinction between file, record, and field remains fundamental within the industry to this day.]

 

Since 1954, meanwhile, the American mathematician John Backus (1924-2007) had been conducting trials with what was to become in 1957 the first commercially successful higher order programming language, FORTRAN. While working on the IBM 704, Backus, too, had grown concerned by how inefficient machine code programming actually was. Mistakes were particularly common when improvements were needed to pre-existing programs (a process known as "software maintenance"), simply because the structure of the original solution was usually far from transparent. Programmers routinely took days making improvements at one point in the program, only to find that they had inadvertently precipitated failures elsewhere. What was needed was a way to keep programs "intellectually manageable".

 

ASIDE: This will all be depressingly familiar to readers who have ever had to maintain software [those who have not had this pleasure will gain some idea of the severity of the problem by now trying to convert the Figure 1 flowchart to offer a single 24-hour sweep rather than two 12-hour sweeps]. Nor is maintenance programming any easier if the original code is your own, because the act of programming is fundamentally an act of linguistic output and your memory for what you have written soon reduces to a garbled gist - presumably the result of the sort of forgetting studied by Bartlett (1932) [detail]. To this day, therefore, it is often quicker to write a new program than it is to repair an old one, even if you wrote that old one yourself.

 

Even more concerned with the need to manage the process of computer programming was the much-honoured Dutchman, Edsger W. Dijkstra (1930-2002), originally of Eindhoven University of Technology, and later of the University of Texas at Austin. His focus, too, was on the fallibility of human cognition, and in particular on what he called the "conceptual gap" between the static program laid out on the page and the dynamic process it gives rise to at run-time (Dijkstra, 1968).

 

ASIDE: Dijkstra's "conceptual gap" is precisely the same explanatory problem as was made famous within psychology by Lashley's (1951) paper "The problem of serial order in behaviour". Lashley was concerned with the technical problem of converting such things as thoughts, images, or intentions out of some initially timeless physical memory trace - a structural "engram" of some sort - into a time-sequenced succession of behaviours. His explanation was - and the explanation of choice still is - that the motor engram is capable (a) of being located as a unit whenever its particular piece of behaviour is called for, and then (b) of being reactivated as a succession of subunits. This sort of motor engram is better known as a "motor schema", and its reactivation as "motor programming". << AUTHOR'S NOTE: Neuroscience has yet to decipher either (a) the nature of the engram, or (b) the mechanisms of program activation. For more on the schema approach to motor behaviour see our e-paper on "The Motor Hierarchy", for more on the mechanisms of program activation see our e-paper on "Motor Programming", and for what happens when things go wrong see our e-paper on "Mode Error in System Control". >>

 

Dijkstra's point was that if you had a good program structure you not only had a good technical solution, but also one which would stand the test of time. It is therefore vital not to keep "jumping about" within your programs, because doing so prevents them from being intellectually manageable [GO TO code is often referred to as "spaghetti code", and with some justification], and this line of argument eventually led Dijkstra to one of his most famous observations, namely that "the quality of programmers is a decreasing function of the density of Go To statements in the programs they produce" (Dijkstra, 1968, p147). To cut a long story short, the software industry liked what Dijkstra was telling it, and the 1970s saw a number of important publications extolling the virtues of this new, mentally disciplined, "structured programming". These include Michael Jackson's "Principles of Program Design" (Jackson, 1975), Tom De Marco's various works on "software engineering" (eg. De Marco, 1979), and Ed Yourdon and Larry Constantine's "Structured Design" (eg. Yourdon and Constantine, 1979), and their common theme was that the main problem was always going to be the utter complexity of the real world. Structured programming would help you get to grips with this complexity, but it called for a language which wore its program structures on its sleeve, and that language was not long in coming .....

 

2 - COBOL for Beginners

 

"The use of COBOL cripples the mind; its teaching should therefore be regarded as a criminal offence" (Dijkstra, 1975).

 

In the event, Backus's FORTRAN did not quite fit the bill. While it was fine for the sort of "scientific" computing carried out in universities and military research establishments, it was out of its league when faced with the larger but more run-of-the-mill applications seen in banks and insurance companies. These latter were "business" applications - arithmetically quite straightforward, batch as often as online, and involving extremely high volumes of data. They therefore demanded a language which was as quick with its READs and WRITEs as FORTRAN had been with its SINEs and COSINEs, and once again it was Grace Hopper who helped provide the breakthrough. She took her Flow-Matic language, upgraded its data handling aspects to cope with larger records and files, and drew up guidelines for a near-English high level language which traded off technical power for greater simplicity and readability. Her proposals were then submitted to a committee of industry experts known as the Conference on Data Systems Languages (CODASYL) [constitution; background], who made further improvements to the conceptual design before beginning the detailed specification process in May 1959. The new language was COBOL (short for "Common Business Oriented Language"), the project was coordinated by the US Department of Defense, the language specification was completed in December 1959, and the first working compilers followed during 1960. For fuller histories and overviews of the COBOL programming language, click here, or here, or here.

 

Nigh on half a century later, COBOL remains quantitatively the most successful programming language ever, thus .....

 

"Cobol has underpinned much of the financial sector's IT infrastructure since its inception in 1959, and it is still going strong. According to industry estimates, there are currently more than 200 billion lines of Cobol code used in today's business applications, with several billion lines of new application code added annually." (Archbell, 2000) [For a longer discussion of the reasons for this success, click here or here.]

 

..... and, by virtue of COBOL's approximation to English, its code makes excellent pseudocode .....

 

Key Concept - "Pseudocode": Pseudocode is code which is not yet code because it is still on the coding sheet. It is "sketchbook" code, if you like; code which is still being considered by the relatively forgiving minds of the humans in the loop, and which has yet to be subjected to the full rigours of the compilation process. Nevertheless, it is code which approximates to English and which can therefore adequately convey at a glance the essence of the processing solution being proposed. This makes pseudocode a very powerful way of linking the conceptualising and problem solving capabilities of the human mind to the number crunching capabilities of the logic circuitry.

 

COBOL's approximation to English also means that once your mind has formed some idea of an appropriate processing sequence (a pseudocode representation of the problem solving structure), you immediately have the skeleton of your source code [see the paragraph structure inset into Figure 2]. In the remainder of this section we present what is effectively a short course on "COBOL Pseudocode", and we do this because we are going to need that pseudocode in earnest to get the most out of the various worked examples yet to come. However, readers eager for the cognitive science should feel free to jump straight to Section 3, and refer back to the technicalities as and when necessary.

 

2.1 The COBOL Program Divisions

To create an executable COBOL program, you have to provide the compiler with three preparatory blocks of definitions and declarations, and one further block of logical instructions. This is done under the following four headings, and a minimal program skeleton is sketched immediately after the list .....

 

  1. Identification Division: The first few statements in a COBOL program address basic requirements such as the program name, its author (and author history), the date it was first written (and its amendment history), and the like. This requires no further explanation at this juncture, because it contributes nothing to the pseudocode facility.

 

  2. Environment Division: The next few statements serve primarily to describe the file(s) which is/are going to be read from, or written to, by the program. We say more on this in Section 2.2.

 

  3. Data Division: There then follows what can end up as a long and tortuous set of data declaration statements. These are the "nouns" talked about by Grace Hopper in Section 1.2 above, and they are vitally important to effective programming because they force you to define the data you wish to process at a very early stage, allocating it byte by byte (and sometimes bit by bit) to the physical storage resource provided in Main Memory. We say more on this in Section 2.3.

 

  4. Procedure Division: Finally, we come to the programming proper, that is to say, to the operations you wish to have carried out on said data, and which you therefore want the compiler to convert to substantive machine code. This is dealt with in Section 2.4.
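
Before looking at each division in detail, here is the minimal program skeleton promised above, showing all four headings in order. It is our own illustrative sketch (compilable with a modern compiler such as GnuCOBOL in free format) rather than anything prescribed by the original course materials, and the program and field names are invented.

IDENTIFICATION DIVISION.
PROGRAM-ID. FOUR-DIVISIONS.
*> The next two entries (archaic, but still widely accepted) carry the housekeeping details.
AUTHOR. A-N-OTHER.
DATE-WRITTEN. JANUARY 2010.
ENVIRONMENT DIVISION.
*> SELECT ... ASSIGN clauses would go here; this particular program needs no files.
DATA DIVISION.
WORKING-STORAGE SECTION.
01  WS-GREETING    PIC X(12) VALUE "HELLO, WORLD".
PROCEDURE DIVISION.
MAIN-PROCESS.
    DISPLAY WS-GREETING.
    STOP RUN.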

 

2.2 The COBOL Environment Division

 

Non-programmers should refamiliarise themselves with the topics of Batch vs Online Processing from Part 2 (Section 1), Online Transaction Processing from Part 5 (Section 1.7), and Job Control Language from Part 5 (Section 1.2), before proceeding.

 

Early experience soon persuaded computer programmers to distinguish carefully between compilation-time and run-time considerations. The most cost-effective programs were those which could be compiled once and then executed repeatedly on no end of different input and output data files. Thus a monthly accounting program might run on January's data at the beginning of February, on February's at the beginning of March, and so on - yes, it would cost you a lot of secondary storage (initially card or paper tape, but later magnetic drums, tapes, or disks) to keep your data safe (for many years in the case of accounting or legal data), but you would only ever need a single copy of the object code [at least until the need for routine repairs or enhancements intervened].

 

This arrangement was managed by having one set of filenames reserved for use by the operating system in its day-to-day management of the physical filestore, and a different, much simpler, set of filenames for use by the programmer. The former were called the "physical filenames", because they referred to data which actually existed somewhere on tangible storage media, and the latter were called the "logical filenames" because they only ever had to make sense to the person actively doing the problem solving. All that then remained to do at run-time was to "assign" each of the file(s) containing the specific data you wanted to process to the "selected" logical filenames by which they would be known for the duration of that processing.

 

ASIDE: By definition, both input files and output files being "extended" (that is to say, added to rather than being created from scratch) are pre-existing files, and are accordingly already known by name to the operating system. Any appropriately structured file can therefore be assigned by its physical filename to the simpler logical filename. Less obviously, empty output files are also pre-existing files, being typically created by the job pack immediately before execution of the program which is to write to them. In fact, it was the need to keep re-creating and re-assigning your data files each time a program was to run which drove the development of the Job Control Languages described in Part 5 (Section 1.2).

 

Turning now to COBOL, this sort of run-time "binding" is controlled by the Environment Division's Input-Output Section (specifically its File-Control paragraph), and the basic syntax is <SELECT {logical filename} ASSIGN TO {physical filename} ORGANISATION IS {SEQUENTIAL, INDEXED, or RELATIVE} [ACCESS MODE IS {SEQUENTIAL, RANDOM, or DYNAMIC} RECORD KEY IS {fieldname} FILE STATUS IS {fieldname}]>.
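
By way of illustration, here is a sketch of how that syntax might be filled in for one sequential transaction file and one indexed customer master file. The filenames and fieldnames are invented, the ORGANIZATION spelling is the one most compilers expect (many also accept the British ORGANISATION used above), and the exact form of the physical filename on the ASSIGN clause varies from installation to installation - a quoted pathname under GnuCOBOL, a JCL DD name on an IBM mainframe.

ENVIRONMENT DIVISION.
INPUT-OUTPUT SECTION.
FILE-CONTROL.
    SELECT TRANSACTION-FILE ASSIGN TO "trans.dat"
        ORGANIZATION IS SEQUENTIAL.          *> records read one after another
    SELECT CUSTOMER-MASTER ASSIGN TO "custmast.dat"
        ORGANIZATION IS INDEXED              *> keyed file
        ACCESS MODE IS DYNAMIC               *> sequential and keyed reads both allowed
        RECORD KEY IS CM-CUSTOMER-PIN        *> a field declared under the file's FD entry
        FILE STATUS IS WS-CUST-STATUS.       *> two-character status field in Working-Storage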

 

2.3 The COBOL Data Division

Early experience also persuaded programmers of the need to deploy their computer's available Main Memory with the utmost forethought and precision. In COBOL, this is done in the "Data Division", the third of the four program divisions. The key facility under this heading is the "Picture Clause" .....

 

Key Concept - The "Picture Clause": A COBOL Picture Clause is a specific allocation of a number of bytes of Main Memory to a particular type of data, under one or more unique names, and optionally pre-set to a required value. It is the mechanism by which the programmer controls the initial build of the Record and Data Areas, and was presented to CODASYL by Bob Bemer, who had developed the idea on the 1957 IBM COMTRAN language. Data arrays can be set up using an OCCURS N TIMES extension to the basic clause, and the optional field pre-setting is done using a VALUE IS "....." extension [various examples below].

 

The following data types are recognised by COBOL .....

 

PIC A: The PIC A data type allocates a specified number of bytes to "alphabetic" characters, that is to say, to the 26 letters of the alphabet plus the space character, on a one-byte-per-character basis [numerals and punctuation need the PIC X type described next]. The number of bytes to be allocated can be determined either by the number of As stated [so PIC AAA will allocate a three-byte field] OR by bracketing that requirement numerically [so PIC A(16) will allocate a 16-byte field]. Data in a PIC A field is automatically left justified during storage and display [so PIC A(16) VALUE IS "HI THERE" will allocate a 16-byte field containing "HI_THERE________"].

 

PIC X: The PIC X data type allocates a specified number of bytes to "alphanumeric" characters, that is to say, to the 26 letters of the alphabet, the ten numerals, plus space, plus the remainder of the available character set, on a one-byte-per-character basis. Again, the number of bytes to be allocated is determined by the number of Xs stated [so PIC XXX will allocate a three-byte field and PIC X(16) will allocate a 16-byte field]. Data in a PIC X field is also automatically left justified during storage and display.

 

PIC 9 DISPLAY: The PIC 9 DISPLAY data type allocates a specified number of bytes to "decimal numeric" characters, that is to say, the ten decimal numerals on a one-byte-per-digit basis. It also allows declaration of the decimal point, using an inset "V" (although no decimal point is stored as such). Thus PIC 999V99 will allocate a five-byte field, in which the two lowest order digits are processed as decimal fractions. This form of representation is intended primarily for human consumption: it is stored one character per digit rather than in binary, and so is inefficient for calculation purposes. Data in a PIC 9 DISPLAY field is automatically right justified during storage and display.

 

PIC 9 COMPUTATIONAL: The PIC 9 COMPUTATIONAL data type allocates an appropriate number of bytes to "binary numeric" data.

 

ASIDE: A single 8-bit byte can store the binary number 11111111 (decimal 255), although if you want signed fields this will effectively steal the highest order bit, reducing the limit to 127. Two bytes can cope with 1111111111111111 (decimal 65535; 32767 if signed), and so on. The inset on floating point arithmetic in Part 4 (Section 1.3) will give some idea how this resource is used.

 

There are a number of subtypes to the PIC 9 COMPUTATIONAL declaration, but these need not concern us at the pseudocode stage. Data in a PIC 9 COMPUTATIONAL field cannot be directly displayed because it is in binary [not entirely true - it can be displayed, but it screens as gibberish in the same way that your word processor can quite happily open, but then make no sense of, a digital photograph]. If numerical display is needed, then the binary data needs to be moved to a suitable DISPLAY field beforehand.
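
To make the move-before-display discipline concrete, here is a minimal sketch of our own (the fieldnames are invented, the syntax GnuCOBOL-style): the running total is kept in binary for the arithmetic, and copied into a DISPLAY field only when it is time to show it.

IDENTIFICATION DIVISION.
PROGRAM-ID. COMP-DEMO.
DATA DIVISION.
WORKING-STORAGE SECTION.
01  WS-BINARY-TOTAL     PIC 9(4) COMP VALUE ZERO.  *> held in binary - compact and quick to add to
01  WS-PRINTABLE-TOTAL  PIC 9(4) VALUE ZERO.       *> one byte per digit - fit for human eyes
PROCEDURE DIVISION.
MAIN-PROCESS.
    ADD 1234 TO WS-BINARY-TOTAL.                   *> arithmetic done on the binary field
    MOVE WS-BINARY-TOTAL TO WS-PRINTABLE-TOTAL.    *> conversion to character form happens here
    DISPLAY "TOTAL IS " WS-PRINTABLE-TOTAL.        *> shows TOTAL IS 1234
    STOP RUN.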

 

<<AUTHOR'S NOTE: Cognitive science has yet to locate either the "biological bit" or the "biological byte". Nevertheless, it is intuitively compelling to propose some sort of "biological picture clause". For example, glance at the exercises in mental arithmetic below [three additions and a multiplication], and try to decide by introspection what preliminaries your mind goes through before actually starting to answer them .....

 

17 + 94 =

173.1 + 947.92 =

1791.36 + 9416.835 =

1791.36 x 9416.835 =

 

Our own experience was that different expectations of answer length seemed to be forming before any calculation was done. The first sum simply generated a PIC 9(3) empty space to the right of the equals sign, which we then loaded digit by digit from right to left, as we added matching pairs of digits. Similarly, the second sum generated a PIC 9999V99 receiving field, and the third generated a PIC 99999V999 receiving field. The multiplication, on the other hand, frightened us. We immediately decided that it was far too complicated to do in our head, and set about mapping out the two-dimensional space needed to do it on paper. Ignoring the decimal points to begin with, this would consist of two vertically aligned PIC 9(7)s to hold the multiplicands, six more PIC 9(8)s to hold the individual sub-multiplications, and a final PIC 9(13) at the bottom to receive the final addition. >>

 

The declaration of data types and field sizes is then done under a number of sectional headings within the Data Division. The first of these is called the "File Section" ..... 

 

The File Section: Business computer files are almost always record-based (that is to say, they consist of bits within bytes within fields within records within files), and in this section the compiler is told how those records are constructed. Here is how it might look. Note (a) the use of the {logical filename} as previously discussed, (b) that the progressive indentation is only there to help the human eye deal with hierarchically numbered data declarations, and (c) that where no Picture Clause is stated, the data name in question is fully specified by the lower level lines between it and the next same-or-higher level entry [an instance of this has been highlighted in GREEN and annotated in RED in the example below].

DATA DIVISION.
FILE SECTION.
FD  {logical filename} RECORD CONTAINS 80 CHARACTERS.
01  IP-CUSTOMER-RECORD.
    03  IP-RECORD-TYPE            PIC 9(2).
    03  IP-CUSTOMER-NAME.         [NO PIC HERE, BECAUSE ON NEXT THREE LINES INSTEAD]
        05  IP-MR-OR-MRS          PIC X(3).
        05  IP-INITIALS           PIC X(3).
        05  IP-FAMILY-NAME        PIC X(20).
    03  IP-CUSTOMER-DETAILS       PIC X(52).

 

The second block of data declarations is called the "Working-Storage Section" ..... 

 

The Working-Storage Section: In this section, the programmer's scratchpad storage requirements are defined down to field level. This allows the compiler to earmark Main Memory as scratchpad memory, thus allowing programmers to manipulate each and any intended fragment of their data by name. Typical working storage applications are for constants, counts, tabular data arrays, loop control, and the dissection and manipulation of fragments of input or output data, and these are further discussed in Section 3.3. Note how the field name prefix is now WS- instead of IP-, in an attempt to make the naming scheme more programmer-friendly. Note also how the IP-CUSTOMER-DETAILS field has now been "opened up" into the two separate subfields of WS-CUSTOMER-DETAILS. The motivation for this is that there is no point naming something until it is needed, and on this occasion the presumption would therefore be that WS-CUSTOMER-PIN is going to be needed in this program, but that the field named WS-NOT-NEEDED-IN-THIS-PROGRAM would contain data inserted by, and of relevance to, other programs, but not this one. 

WORKING-STORAGE SECTION.
01  WS-CONSTANTS.
    03  WS-CONSTANT-TEXTS.
        05  WS-DUMP-AID                    PIC X(27) VALUE IS "WORKING-STORAGE STARTS HERE".
        05  WS-ABORT-TEXT.
            07  WS-ABORT-TEXT-HEADER       PIC X(20) VALUE IS "ABORT WITH ERROR NO:".
            07  WS-ABORT-TEXT-NUMBER       PIC 9(5).
01  WS-COUNTS.
    03  WS-RECORDS-READ                    PIC 9(3) VALUE IS ZERO.
    03  WS-RECORDS-WRITTEN                 PIC 9(3) VALUE IS ZERO.
01  WS-ARRAYS.
    03  WS-MONTHS                          PIC X(3) OCCURS 12 TIMES.
    03  WS-DAYS                            PIC X(3) OCCURS 7 TIMES.
01  WS-LOOPS.
    03  WS-UNTIL                           PIC 9(3) COMP.
01  WS-CUSTOMER-RECORD.
    03  WS-RECORD-TYPE                     PIC 9(2).
    03  WS-CUSTOMER-NAME                   PIC X(26).
    03  WS-CUSTOMER-DETAILS.
        05  WS-CUSTOMER-PIN                PIC 9(4).
        05  WS-NOT-NEEDED-IN-THIS-PROGRAM  PIC X(48).

 

Note how the field naming in both the File Section and the Working-Storage Section is hierarchical, with the Picture Clause added only at the lowest level. This hierarchical structuring is important, because it allows categorical as well as atomic reference to your data. In fact, 49 separate hierarchical levels (numbered 01 to 49) are supported, plus the special-purpose level numbers 66, 77, and 88, although normally only the first half dozen are ever used. The rule is that MOVEs will be executed at and below the specified hierarchical level. Thus if you <MOVE IP-CUSTOMER-RECORD TO WS-CUSTOMER-RECORD>, this is a Level 01 move [see the original declaration lines above], and so every one of the 80 constituent bytes is moved .....

 

ASIDE: We have arranged for the receiving field on this occasion to be exactly the right size. Had it been larger, the leftmost 80 bytes of the receiving field would have been overwritten (and the remainder set to spaces), and had it been smaller, the rightmost end of the incoming field would have been "truncated" (ie. snipped off and lost). Carelessness in field sizing is a major source of bugs early in program development.

 

..... however, if you then follow this with <MOVE "SMITH" TO IP-FAMILY-NAME>, then only the 20 bytes specifically referred to by this name are affected. It is even possible to move bits around within the bytes, although the mechanisms by which this is achieved need not concern us at the pseudocode stage. In other words, no matter how big your Main Memory, the programmer can retrieve, examine, and if necessary amend every single bit of it, by name, or by any of its higher order names.
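
The following sketch (ours, not the original author's, and compilable under GnuCOBOL) pulls the last few points together: a group move that pads its larger receiving field, a group move that truncates its smaller one, and an elementary move that touches only the 20 bytes of the family name.

IDENTIFICATION DIVISION.
PROGRAM-ID. MOVE-DEMO.
DATA DIVISION.
WORKING-STORAGE SECTION.
01  WS-CUSTOMER-NAME.                         *> a 26-byte group field
    03  WS-MR-OR-MRS     PIC X(3)  VALUE "MR ".
    03  WS-INITIALS      PIC X(3)  VALUE "D J".
    03  WS-FAMILY-NAME   PIC X(20) VALUE "SMITH".
01  WS-LONG-NAME         PIC X(40).           *> larger receiving field
01  WS-SHORT-NAME        PIC X(10).           *> smaller receiving field
PROCEDURE DIVISION.
MAIN-PROCESS.
    MOVE WS-CUSTOMER-NAME TO WS-LONG-NAME.    *> all 26 bytes copied, 14 trailing spaces added
    MOVE WS-CUSTOMER-NAME TO WS-SHORT-NAME.   *> only the leftmost 10 bytes survive (truncation)
    MOVE "JONES" TO WS-FAMILY-NAME.           *> elementary move - only this 20-byte field changes
    DISPLAY "LONG:  " WS-LONG-NAME.
    DISPLAY "SHORT: " WS-SHORT-NAME.
    DISPLAY "GROUP: " WS-CUSTOMER-NAME.
    STOP RUN.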

 

The third block of data declarations is called the "Screen Section", and is needed when preformatted input/output screens are required. Whenever this happens, the "templates" - that is to say, the invariable portions thereof - also need to be specified in advance. This section will therefore contain as many screen definitions as the system specification has asked for, each with lots of Picture Clauses with pre-set VALUE IS elements. 

 

The fourth block of data declarations is called the "Linkage Section", and is needed when a call-and-return relationship exists between two or more programs. Whenever this happens, the calling program will usually need to pass data across to the called program when making the call, and receive back some results of the computation upon the return of control. This exchange of data is done by linking the object modules in such a way that they have a number of data fields in common. These commonly addressable data fields need to be separately identified to the compiler and this is done in the Linkage Section provided.
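
Here is a minimal sketch of that call-and-return arrangement: a calling program and a one-job called program compiled together, with the shared fields described in the called program's Linkage Section. The program names, field names, and figures are invented, and the example follows GnuCOBOL conventions rather than any particular mainframe dialect.

IDENTIFICATION DIVISION.
PROGRAM-ID. MAIN-PROGRAM.
DATA DIVISION.
WORKING-STORAGE SECTION.
01  WS-PRICE      PIC 9(4)V99 VALUE 250.00.
01  WS-QUANTITY   PIC 9(3)    VALUE 4.
01  WS-TOTAL      PIC 9(7)V99 VALUE ZERO.
PROCEDURE DIVISION.
    CALL "PRICE-CALC" USING WS-PRICE WS-QUANTITY WS-TOTAL.  *> the three fields are shared with the called program
    DISPLAY "TOTAL IS " WS-TOTAL.    *> shows the nine stored digits 000100000 (the V holds no real decimal point)
    STOP RUN.
END PROGRAM MAIN-PROGRAM.

IDENTIFICATION DIVISION.
PROGRAM-ID. PRICE-CALC.
DATA DIVISION.
LINKAGE SECTION.                     *> describes the caller's fields; no storage of its own is allocated
01  LK-PRICE      PIC 9(4)V99.
01  LK-QUANTITY   PIC 9(3).
01  LK-TOTAL      PIC 9(7)V99.
PROCEDURE DIVISION USING LK-PRICE LK-QUANTITY LK-TOTAL.
    MULTIPLY LK-PRICE BY LK-QUANTITY GIVING LK-TOTAL.
    EXIT PROGRAM.                    *> hand control, and the results, back to the caller
END PROGRAM PRICE-CALC.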

 

For a simple example of what a Data Division might look like on paper, click here; for good examples of Working-Storage Picture Clauses, see Colin Wheeler's "COBOL Bible"; and for a full tutorial, we recommend Michael Coughlan's pages [menu] at the University of Limerick [check it out].

 

2.4 The COBOL Procedure Division

With the environmental information in place and the necessary data declarations made, attention can now turn to the logic of the program. In COBOL this is traditionally expressed as a sequence of paragraphs of detailed source code, which are "performed" in the sequence already decided upon at the analysis and design stage. The key facility under this heading is the "PERFORM" statement .....

 

Key Concept - The COBOL "PERFORM" Statement: The COBOL PERFORM statement is the solution to Dijkstra's complaint about GO TOs [see Section 1.2]. What it does is to allow one or more instructions to be declared as a paragraph, named as a paragraph, and thereafter executed as often as necessary as a unit. The paragraph, in other words, is "performed" when called by name, and, when completed, passes control back to the calling paragraph. What happens, of course, is that the compiler simply replaces each PERFORM with two GO TOs, one at the top of the paragraph and another at the bottom, on a call-and-return basis. However, it does this with 100% accuracy, whereas human GO TO programmers get mightily confused mightily quickly.

 

Paragraphs can also be "nested", that is to say, you can PERFORM a paragraph which itself PERFORMs one or more lesser paragraphs, which themselves PERFORM still other paragraphs, and so on. This is what gave Figure 2 its layered depth, and does much to turn a two-dimensionality in space into what will ultimately be executed as a linearity in time - Section 1's Lashley-Dijkstra problem, of course.
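
As a compact illustration of such nesting (our own sketch, using the PERFORM ..... TIMES form described below), the control paragraph here performs a weekly paragraph, which in turn performs a daily one; the paragraph names and counts are invented.

IDENTIFICATION DIVISION.
PROGRAM-ID. NESTED-PERFORMS.
DATA DIVISION.
WORKING-STORAGE SECTION.
01  WS-WEEK-COUNT   PIC 9(2) VALUE ZERO.
01  WS-DAY-COUNT    PIC 9(3) VALUE ZERO.
PROCEDURE DIVISION.
PROCESS-CONTROL.
    PERFORM EACH-WEEK-PROCESS 4 TIMES.        *> outer paragraph, called four times ...
    DISPLAY "DAYS PROCESSED: " WS-DAY-COUNT.  *> ... so this shows 028
    STOP RUN.
EACH-WEEK-PROCESS.
    ADD 1 TO WS-WEEK-COUNT.
    PERFORM EACH-DAY-PROCESS 7 TIMES.         *> ... and itself performs an inner paragraph
EACH-DAY-PROCESS.
    ADD 1 TO WS-DAY-COUNT.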

 

Armed with this vocabulary, it remains only to control how often, and in what precise circumstances, each PERFORM is to be PERFORMed. Now the concepts of "iterations", "conditionals", and "loop control" were introduced, with examples, in the worked example in Section 1.1 above. In fact the GO TOs highlighted therein created a nest of such loops - the innermost loop repeatedly samples the TIMER PIN while its value is OFF, the next loop repeatedly increments <SECONDS> until a complete minute has built up, the next counts minutes until a complete hour has passed, and so on. However, there are a number of minor variations to loop control, as follows .....

 

Key Concept - DO WHILE and DO UNTIL Loops: The approach to loop control used by BASIC-style interpreters supports two forms of DO loop. In the DO WHILE form, a test expression is stated and the body of the loop is repeated until that test is no longer satisfied [for example, <DO WHILE SECONDS LT 60>]. The DO UNTIL form is similar, save that the test is now for an end condition to be passed rather than failed [for example, <DO UNTIL SECONDS GE 60>] (Ware, 2002/2003 online).

 

Key Concept - FOR NEXT Loops: Another BASIC-style variant of loop control takes the form <FOR {fieldname} EQUALS {start value} TO {end value} [STEP N] {nested instructions} NEXT>. This repeats the nested instructions, incrementing {fieldname} by the STEP value (default 1 if STEP is omitted) each time.

 

Key Concept - The PERFORM N TIMES Loop: This is probably the simplest COBOL repetition, and repeats the specified paragraph the specified number of times. The N can either be a pre-set integer [as in <PERFORM EACH-DAY-PROCESS 7 TIMES>], or else a variable [as in <PERFORM EACH-EMPLOYEE-PROCESS WS-NO-OF-EMPLOYEES TIMES>].

 

Key Concept - The PERFORM UNTIL Loop: This is the COBOL equivalent of the DO UNTIL loop described above, and is commonly used for processing each record from a file [as in <PERFORM EACH-RECORD-PROCESS UNTIL END-OF-FILE>].

 

Key Concept - The PERFORM VARYING Loop: This is the COBOL equivalent of the FOR NEXT loop described above, and takes the form <PERFORM {paragraph name} VARYING {fieldname} FROM {start value} BY {increment} UNTIL {end condition}>. It is often used to loop a process through all the entries in a data array indexed by {fieldname} [see under Data Arrays and Indexing in Section 3.3 below].
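
The following sketch shows the PERFORM UNTIL and PERFORM VARYING forms side by side. It is our own illustration (GnuCOBOL-style): the month table borrows the WS-MONTHS idea from the Working-Storage example in Section 2.3, though the REDEFINES trick used to pre-load it is our own addition, not something discussed in the text.

IDENTIFICATION DIVISION.
PROGRAM-ID. LOOP-FORMS.
DATA DIVISION.
WORKING-STORAGE SECTION.
01  WS-MONTH-TABLE.
    03  WS-MONTH-NAMES  PIC X(36) VALUE "JANFEBMARAPRMAYJUNJULAUGSEPOCTNOVDEC".
    03  WS-MONTHS REDEFINES WS-MONTH-NAMES PIC X(3) OCCURS 12 TIMES.
01  WS-MONTH-NO         PIC 9(2) VALUE ZERO.
01  WS-SECONDS          PIC 9(2) VALUE 55.
PROCEDURE DIVISION.
PROCESS-CONTROL.
*> PERFORM VARYING: step WS-MONTH-NO from 1 to 12, visiting each table entry in turn.
    PERFORM SHOW-ONE-MONTH
        VARYING WS-MONTH-NO FROM 1 BY 1 UNTIL WS-MONTH-NO GREATER THAN 12.
*> PERFORM UNTIL: keep adding seconds until the minute is complete.
    PERFORM COUNT-ONE-SECOND UNTIL WS-SECONDS = 60.
    DISPLAY "MINUTE COMPLETE".
    STOP RUN.
SHOW-ONE-MONTH.
    DISPLAY WS-MONTHS (WS-MONTH-NO).
COUNT-ONE-SECOND.
    ADD 1 TO WS-SECONDS.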

 

Once the skeletal PERFORM structure has been established, it remains only to "flesh out" each paragraph with the logically appropriate detailed instructions. These are selected from a fairly short vocabulary, in which the following instructions are the most common .....

 

ACCEPT and DISPLAY: These two instructions take data from, or send it to, named terminal devices [examples]. They are often used in pairs interactively in the form <DISPLAY "ENTER {parameter name} NOW:"> followed by an <ACCEPT {receiving fieldname}>, thus prompting the user to enter the specified data item for storage in the specified field.

 

OPEN and CLOSE: The OPEN instruction prepares the specified file (as previously identified in the SELECT-ASSIGN and FD statements) for active use [the downside of which is that the operating system now has to manage the filestore very carefully to avoid accidentally corrupting the data, especially on multi-user systems], and the CLOSE instruction signals the operating system that those files are finished with and can be re-catalogued (complete with new file size, placement, and last-amended date) and de-protected.

 

READ and WRITE: These two instructions read/write records in/out of their allocated record areas, that is to say, they momentarily open the data bus for communication between the Record Area and the device concerned [more on this in Section 3.2 below]. Attempts to READ beyond the last record in a file generate an AT END Boolean, which can be tested as part of primary loop control [more on this in Section 3.3 below].

 

MOVE: This instruction takes the form <MOVE {fieldname 1} TO {fieldname 2}>, and copies the contents of the first fieldname to the space available in the second. It is commonly the most heavily used instruction of them all. 

 

ADD: This instruction takes the form <ADD {literal or fieldname 1} TO {literal or fieldname 2} [GIVING {fieldname 3}] [ON SIZE ERROR {error handling instruction}]>, and executes the specified arithmetical operation on the given operands. 

 

SUBTRACT: This instruction takes the form <SUBTRACT {literal or fieldname 1} FROM {literal or fieldname 2} [GIVING {fieldname 3}] [ON SIZE ERROR {error handling instruction}]>, and executes the specified arithmetical operation on the given operands. 

 

MULTIPLY: This instruction takes the form <MULTIPLY {literal or fieldname 1} BY {literal or fieldname 2} GIVING {fieldname 3} [ON SIZE ERROR {error handling instruction}]>, and executes the specified arithmetical operation on the given operands. 

 

DIVIDE: This instruction takes the form <DIVIDE {literal or fieldname 1} BY {literal or fieldname 2} GIVING {fieldname 3} [ON SIZE ERROR {error handling instruction}]>, and executes the specified arithmetical operation on the given operands. 

 

COMPUTE: This instruction takes the form <COMPUTE {fieldname} = {compound algebraic expression of literals, parentheses, operators, and fieldnames}>.
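
To show several of these verbs working together, here is a sketch of the classic "read-ahead" copying loop: it copies an 80-byte customer file record by record, counting as it goes. It is our own illustration rather than anything from the original course (GnuCOBOL-style, using the widely supported LINE SEQUENTIAL extension for simple text files), and the filenames are invented, although the field naming echoes the IP-/WS- prefixes used earlier. Note how the AT END condition raised by READ drives the primary PERFORM UNTIL loop, exactly as anticipated above.

IDENTIFICATION DIVISION.
PROGRAM-ID. CUSTOMER-COPY.
ENVIRONMENT DIVISION.
INPUT-OUTPUT SECTION.
FILE-CONTROL.
    SELECT CUSTOMER-FILE-IN  ASSIGN TO "custin.dat"
        ORGANIZATION IS LINE SEQUENTIAL.
    SELECT CUSTOMER-FILE-OUT ASSIGN TO "custout.dat"
        ORGANIZATION IS LINE SEQUENTIAL.
DATA DIVISION.
FILE SECTION.
FD  CUSTOMER-FILE-IN.
01  IP-CUSTOMER-RECORD   PIC X(80).
FD  CUSTOMER-FILE-OUT.
01  OP-CUSTOMER-RECORD   PIC X(80).
WORKING-STORAGE SECTION.
01  WS-END-OF-FILE       PIC X    VALUE "N".   *> set to "Y" when the input is exhausted
01  WS-RECORDS-READ      PIC 9(3) VALUE ZERO.
01  WS-RECORDS-WRITTEN   PIC 9(3) VALUE ZERO.
PROCEDURE DIVISION.
PROCESS-CONTROL.
    OPEN INPUT CUSTOMER-FILE-IN OUTPUT CUSTOMER-FILE-OUT.
    PERFORM READ-NEXT-RECORD.                  *> the "read-ahead" that primes the loop
    PERFORM EACH-RECORD-PROCESS UNTIL WS-END-OF-FILE = "Y".
    CLOSE CUSTOMER-FILE-IN CUSTOMER-FILE-OUT.
    DISPLAY "READ " WS-RECORDS-READ " WROTE " WS-RECORDS-WRITTEN.
    STOP RUN.
EACH-RECORD-PROCESS.
    MOVE IP-CUSTOMER-RECORD TO OP-CUSTOMER-RECORD.
    WRITE OP-CUSTOMER-RECORD.
    ADD 1 TO WS-RECORDS-WRITTEN.
    PERFORM READ-NEXT-RECORD.
READ-NEXT-RECORD.
    READ CUSTOMER-FILE-IN
        AT END MOVE "Y" TO WS-END-OF-FILE
        NOT AT END ADD 1 TO WS-RECORDS-READ
    END-READ.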

 

2.5 The Story So Far

So to recap, a single sheet of structure diagram [as per Figure 2 body] can be used to give a static visuospatial representation of what will ultimately be a time-sequenced solution to our original real world problem. This visuospatial structure can then be "peeled off" the two-dimensional sheet from top left to bottom right, and mapped onto a nested linear paragraph structure [as per Figure 2 indent], which can then be fleshed out with any missing detail before being compiled into raw machine code. The machine code then executes one instruction at a time [save as pipelined or parallel processed] under the FETCH-and-EXECUTE logic of the CPU provided. Moreover, since the systems analyst's original solution was effectively an exercise of human "right-hemisphere" problem solving skill, we may look upon structured programming as opening up a direct channel of communication and opportunity between the full creative richness of the human mind and the logic gates of the CPU; between human ingenuity in all its glory, and binary in its rasping simplicity. The translation is achieved in discrete steps, of course, but if you had the time and inclination every single one of those steps could nonetheless be traced with total accuracy. The real world can thus routinely - and with full audit trail - be reduced to binary and the Explanatory Gap can be duly closed. Reductionism, in other words, suddenly becomes a viable option!

 

ASIDE - THE EXPLANATORY GAP: The Explanatory Gap is the long-standing philosophical gulf between explanations of human behaviour [nowadays especially consciousness] couched in sociocultural and/or psychological language, and explanations based on the underlying physiological, chemical, and even sub-atomic processes. It is the problem of explaining the macroscopic in terms of the microscopic, and it has yet to be adequately resolved. In Smith (1998b), we blamed the explanatory gap on the fact that "we are notoriously bad at seeing how wholes can ever become greater than the sums of their parts" (p215): we know that wholes routinely are greater than the sums of their parts, of course, but our explanations at one level do not map readily onto explanations at another. Yet compilers do precisely this, routinely reducing source code wholes into object code parts. The Explanatory Gap, if anything, is a compiler gap. 

 

For a full Procedure Division tutorial, we recommend Michael Coughlan's pages [menu] at the University of Limerick [check it out].

 

3 - The Subtypes of Volatile Electronic Memory

In this section, we take the various types of computer STM previously identified, remind ourselves of where they sit physically within the hardware, and then describe how they work in practice. We do this under nine organising headings - the "memory subtypes" of our title - as follows. The remarks against each subtype identify the major areas of similarity between biological and non-biological systems, and may, on occasions, come close to stating the obvious .....

 

STM Subtype (Section): Remarks

STM in the FETCH-and-EXECUTE Cycle (Section 3.1): Science is currently unable to explain the ins-and-outs of biological data processing at the level of the underlying electronics. Certainly, it EXECUTEs instructions of some sort, but whether this requires activation in situ or a physical FETCH is not known.

STM for Input and Output Purposes (Section 3.2): The brain spends most of its time and effort converting input (sensory) data into output data (behaviour). However .....

STM for Scratchpad Purposes (Section 3.3): ..... less than one part in ten million of the input stream is ever stored long term (Frank, 1963). The rest, of necessity, must either be reduced in flight or else scratch-padded momentarily, and then discarded.

STM for Multiprogramming Purposes (Section 3.4): The brain is in some important respects a Multiprogramming system.

STM for Transaction Processing Applications (Section 3.5): The brain is in some important respects a Transaction Processing system.

STM for Database Applications (Section 3.6): The brain is in some important respects a Database system.

STM for Distributed Processing Purposes (Section 3.7): The brain is in some important respects a Distributed Processing system.

STM for Real-Time Processing Purposes (Section 3.8): The brain is in some important respects a Real-Time Processing system.

STM for "Object-Oriented" Processing Purposes (Section 3.9): The brain is in some important respects an Object-Oriented system.

 

So let us begin by donkey-tailing these various memory subtypes onto our standard hardware diagram. For simplicity's sake, we shall follow the context provided by the layout of the typical Eckert-von Neumann machine - the computer as it stood in 1950 - and simply add in whatever needs adding in; there would be little advantage to be gained by overlaying a more recent architecture because the principles remain the same. Here, then, is the last of the diagrams used in Part 3 (Section 6 subfile), with additional highlighting to show the distribution of short- and long-term memory resources .....

 

Figure 3 - STM and LTM Resources in a Typical General Purpose Computer: Here is the basic modular architecture of a typical computer [reminder], rendered feint, and then overlain with additional detail. We have made the operating system's share of Main Memory more visible [yellow underpanel, top left], allowed for optional extension software packages [mauve underpanel, top right], and then surrounded the whole thing with starred captions showing where the main subtypes of electronic STM may be found. The system supervisor resources specifically identified are the Stack Handler, the Paging Tables, the Interrupt Handler, Block Buffering, and the Semaphores. The two most important optional extensions to the system software are the DBMS (and within that the Currency Tables), and the TP Monitor (and within that the Partial Results). There are also two bold blue captions, one to show where the main body of LTM is located [bottom left], and the other to show the LTM read-only memory used to hold the microcode [left, as an adjunct to the Control Unit]. Note how peripheral these LTM resources are to the all-important FETCH-and-EXECUTE and INPUT-OUTPUT cycles of STM processing. In other words, for computers as well as brains, if you want to know the "go" of a given system, then there is no better learning experience than to accompany STM on its journey through that system, carefully observing (a) where it comes from in the first instance, (b) how it circulates within the hardware, (c) how it interacts with and is transformed by the logic gates, and (d) where it then ends up.

STM6-fig3.gif

 

Copyright © 2003, Derek J. Smith.

 

3.1 STM in the FETCH-and-EXECUTE Cycle

The key concepts under this heading are Control Registers and the FETCH-and-EXECUTE Cycle. We first introduced registers in Part 1 (Section 1) as the name given to the "tens carry" cogwheel displays used on the 17th century mechanical calculators, and we then discussed their metamorphosis into electronic components in Part 3 (Section 2), where we defined them in general terms as storage locations closely and permanently attached to the CPU. Two registers in particular were highlighted in our e-paper on "The Eckert-von Neumann Machine". These were the Instruction Register and the Program Counter, and what these combined mechanisms do is drive the basic FETCH-and-EXECUTE computing cycle. The former contains the instruction currently being processed, and the latter contains the address of the next program instruction to be obeyed. Both these STM elements require updating after every instruction, and should either of them fail the computational integrity of the entire system is immediately and fatally compromised.

 

We also need to consider the machine's instruction queuing philosophy. As explained in Part 5 (Section 1.4), most modern processors are designed to operate according to the stack principle. This means that a single queue of cleverly intermixed op codes and operands is maintained at the Control Unit, and then acted upon as time permits. This has the advantage that as you develop more and more powerful instructions you merely add to the length of the stack, not to the complexity of the electronics [costing you only processing time, not cash]. It also makes it easier to introduce "recursion" [literally "re-running"] into your programming toolbox, and that brings even greater savings. Here are some definitions of this important, but unfortunately quite obscure, concept .....

 

"In mathematics, recursion is the name given to the technique of defining a function or process in terms of itself" (Barron, 1968, p1).

 

ASIDE: Never one to ignore a strong historical link, we must mention that our old friend, Christopher Strachey [see Part 4 (Section 2.3)], also played a part in the development of recursive programming, via a 1966 paper co-authored with the same David Barron mentioned above. See Gordon (2000/2003 online) for more on this link.

 

"[Recursion is] a technique of describing something partly in terms of itself" (Rohl, 1984, p1).

 

Recursion allows you to simplify programming by dividing a problem into "subproblems of the same type" (Wikipedia, 2003 online).

 

Barron (1968) suggests that the best known example of mathematical recursion is the factorial expression. Thus we know that .....

 

Factorial 6 = 6 x 5 x 4 x 3 x 2 x 1

 

which is the same thing as .....

 

6 (Factorial 5)

 

and working to this pattern we may recursively define factorial n by using the factorial function on both sides of the equation, as follows .....

 

Factorial n = n (Factorial (n-1))

 

And of course one of the few true pleasures of programming is when the problem solving algorithm at the heart of your program reduces to an elegant one-liner such as this.
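
By way of illustration only, here is that one-liner rendered as a minimal working sketch in Python rather than in the COBOL of Section 2 [the function name and the test value are ours] .....

# Recursive factorial, following Factorial n = n (Factorial (n-1)).
def factorial(n):
    if n <= 1:                     # the innermost condition: nothing left to multiply
        return 1
    return n * factorial(n - 1)    # the function defined partly in terms of itself

print(factorial(6))                # 720, i.e. 6 x 5 x 4 x 3 x 2 x 1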

 

When implemented on a computer, recursion requires that a single block of instructions is there to be repeated; and in this respect it is superficially similar to the PERFORM UNTILs and DO WHILEs of conventionally structured code. Recursion is more than a simple program loop, however, because each pass through the loop is in some way progressive, taking you one step closer to a given logical objective. Simple looping is mere "iteration" (= repetition), and merely achieves another record written or another second counted off on your clock display. Recursion is "similar to a loop because it repeats the same code, but it requires passing in the looping variable and being more careful" (Alegre, 2003 online), and it is the requirement to nest the looping variable in this way which makes stack-based systems the routine choice for recursive programming. This is because the recursions can simply be stored in the stack until the innermost condition is met, whereupon the whole function can be resolved using the characteristic last-in-first-out quality of the stack. Here is how Barron (1968) explains the item handling sequence .....

 

"..... for each call of the subroutine there must be a different set of working registers and a different place to store the return address; on entry to the subroutine a new set of working registers must be allocated, and on exit, before transferring control to the return address, the environment (working registers, etc.) must be restored to the status existing at the call of the subroutine. These operations are carried out automatically in 'built in' recursive systems, but [otherwise] have to be explicitly programmed." (Barron, 1968, p34)

 

"Almost all recursive programming systems are based on the idea of the stack (sometimes called a push-down store or nesting store). [.....] A push-down store is a store that operates on the last-in-first-out principle. Whenever an item is placed in such a store, we speak of it going to the head of the store, pushing down all the items already there. Whenever an item is taken from the store it is taken from the head, and all the other items pop up. Thus it is always the item most recently added which is removed first. The term stack is usually applied to such a store when, in addition to accessing the item at the top, items in the interior may be consulted. More precisely, a stack is a store which works on the push-down principle as far as adding or removing items is concerned, and in which items not at the head can be accessed by quoting their position relative to one of a small number of pointers." (Barron, 1968, p35; italics original)

 

In fact, it is the stack pointer which really makes the stack system so easy to use, because all it ever does is point to the next free cell in the stack. Everything else can then be calculated relative to this.

 

Key Concept - The Stack Pointer: The stack pointer is a CPU register containing the address, S, of the first unused slot in the run of volatile memory declared as the stack. It thus serves to identify the notional head of the stack (which, by definition, will always be at storage position S-1), and it also allows the stack length to be managed logically rather than physically, that is to say, the contents of storage register position S before a push-down are totally irrelevant, because they are about to be overwritten. As the head of the stack is processed, its contents are left in situ, safe in the knowledge that they can do no harm as long as the stack pointer has been simultaneously decremented by one.
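
To make the pointer arithmetic concrete, here is a minimal sketch in Python of a push-down store managed purely via its stack pointer [the names and the fixed stack size are ours; a real stack would of course occupy a run of Main Memory rather than a Python list] .....

STACK_SIZE = 8
stack = [None] * STACK_SIZE   # the run of memory declared as the stack
sp = 0                        # stack pointer: address S of the first unused slot

def push(item):
    global sp
    if sp >= STACK_SIZE:
        raise OverflowError("stack full")
    stack[sp] = item          # write into the first unused slot ...
    sp += 1                   # ... and advance the pointer

def pop():
    global sp
    if sp == 0:
        raise IndexError("stack empty")
    sp -= 1                   # decrement first: the head is always at S - 1
    return stack[sp]          # contents are left in situ; being beyond the
                              # pointer, they will simply be overwritten later

push("return address #1"); push("return address #2")
print(pop())                  # last in, first out: "return address #2"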

 

Other applications of recursive techniques are .....

 

(1) Algebraic Analysis: Following the example of factorial n above, mathematicians regularly reduce long algebraic expressions to recursive functions. 

 

(2) Elegant Programming: Similarly, programmers have a repertoire of recursive functions for such otherwise complex tasks as sorting.

 

(3) Syntax Analysis: Linguists with a computational bent also use recursive approaches to the parsing of sentence structures.

 

(4) Machine Problem-Solving: As already explained in Part 4 (Section 4.7), Newell, Shaw, and Simon's (1957) General Problem Solver (GPS) was driven by a recursive algorithm capable of scoring and re-scoring an internally maintained stack of possible problem solving options in its search for the best. In other words, recursive techniques are not just there for master-mathematicians to show how clever they are. They have also been used in artificial intelligence. Barron, for example, summarises Newell, Shaw, and Simon's General Problem Solver as follows .....

 

"[GPS] endeavours to discover a suitable attack on a problem by trying a variety of methods and assessing their relative success. Broadly, given a goal to be achieved, GPS breaks it down into subgoals and attempts first to achieve these. The process of achieving a subgoal may involve further subdivision, and so we see the essentially recursive nature of the system - the subprocesses are identical with the main process of which they form part." (Barron, 1968, p32; italics original; bold emphasis added)

 

To summarise, recursive programming buys you short sweet algorithms at the (low) combined cost of (a) making Main Memory available for the stack, (b) incorporating the necessary stack handling logic into the operating system, and (c) making register(s) available to the CPU to hold the stack pointer(s). In Part 7, we shall consider whether biological cognition uses directly or metaphorically cognate systems, and, if so, what neural mechanisms might be involved. 

 

3.2 STM for Input and Output Purposes

The key concepts under this heading are record areas and (the various forms of) buffering, as introduced in Part 3 (Section 6 subfile) and Part 4 (Section 1.3) respectively. Now we have already seen [Section 2.3 above] how the COBOL File Section sets aside areas of Main Memory to hold one record for every nominated input and output file. This is where input records are stored while being acted upon and future output records are constructed field by field until ready to be written out to their destination device. However, at least two variations on the buffering principle need to be considered, namely "device buffering" and "block buffering" .....

 

Device Buffering: This is where data can be momentarily accumulated within a peripheral device, awaiting onward transmission. As previously explained, this design feature is forced upon the designers by the inevitable mismatch between the speed at which data can be stored or retrieved within the device and the speed at which it can be transmitted onwards.

 

Block Buffering: This is where a conveniently-sized "block" of contiguous input or output records is held as a unit within the operating system's share of Main Memory, thus allowing the operating system to "read ahead" and "write behind" the program's actual requirements. When the application program needs a record from a file, the operating system will get the block containing it, and if that block contains n records the next n-1 program reads can be serviced without any further physical reads. The program simply reads from the buffer as though it was the file, but at a fraction of the normal delay. Double buffering, etc., sacrifices yet more Main Memory to gain an even bigger reading ahead or writing behind advantage.
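
As an informal sketch of the read-ahead arithmetic [the class name and block size are ours; a real block buffer would sit in the operating system's share of Main Memory], consider the following Python .....

RECORDS_PER_BLOCK = 4    # block size assumed for the example

class BlockBufferedFile:
    """Stands in for the operating system's block buffering of one input file."""
    def __init__(self, records):
        self.records = records        # stands in for the physical file on disc
        self.buffer = []              # the block buffer in Main Memory
        self.next_block = 0
        self.physical_reads = 0

    def read(self):
        if not self.buffer:                                # buffer exhausted
            start = self.next_block * RECORDS_PER_BLOCK
            block = self.records[start:start + RECORDS_PER_BLOCK]
            if not block:
                return None                                # the AT END condition
            self.buffer = block                            # one physical read ...
            self.next_block += 1
            self.physical_reads += 1
        return self.buffer.pop(0)                          # ... services several logical reads

f = BlockBufferedFile([f"record-{i}" for i in range(10)])
while (rec := f.read()) is not None:
    pass
print(f.physical_reads)    # 3 physical reads serviced 10 logical reads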

 

Device buffering is quite commonly seen in cognitive science, in such areas as perception [eg. the "recognition buffer" in Sperling's (1967) model of text perception], biological motor programming [eg. the "motor program buffer" in Sternberg, Knoll, Monsell, and Wright's (1988) model of motor subprogram retrieval], and language processing [eg. the "phonological input buffer" and the two output buffers in Kay, Lesser, and Coltheart's (1992) PALPA transcoding model], and in Part 7 we shall be looking for neuroanatomical evidence for the location of the structures involved. Block buffering appears to lack similar appeal.

 

3.3 STM for Scratchpad Purposes

All complex data processing creates intermediate results, and therefore all systems need to provide somewhere to store them. The issue is easy to demonstrate .....

 

The Nature of Intermediate Results: Use the empty box below to multiply 1437 by 469 .....

 

 

 

 

 

 

We make the answer 673,953, using the "traditional long multiplication" method [there are others - see Biggs (1994/2003 online)]. This method generated three separate vertically aligned subproduct lines, one for each of the three digits of the shorter number (469), and the individual multiplications were supported by nine subscripted "carry digits". These three subproducts were then added downwards to give the final total product (generating three further carry digits, which we handled mentally). The point of this exercise is that every one of the nine subscripted carry digits, the three separate subproducts, and the three further carry digits generated during the final addition are discarded afterwards; and although they were wholly intermediate to the calculation at hand, they were far from negligible in terms of resources used.
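
Here, purely by way of illustration, is the same calculation expressed in Python so that the intermediate results can be inspected programmatically [the function name is ours, and the individual carry digits are subsumed within Python's own multiplication] .....

# The three vertically aligned subproduct lines, one per digit of the
# shorter number, are held as intermediate results and then discarded.
def long_multiply(a, b):
    subproducts = []                                   # the "scrap paper"
    for position, digit in enumerate(reversed(str(b))):
        subproducts.append(a * int(digit) * 10 ** position)
    return subproducts, sum(subproducts)

lines, product = long_multiply(1437, 469)
print(lines)      # [12933, 86220, 574800] - the three subproduct lines
print(product)    # 673953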

 

Perry (1962a,b) called this type of STM "scratch-pad memory" because it is the electronic equivalent of scrap paper for doing complex calculations by hand, but in fact the need for scratchpad storage had been there from the very beginning. For example, the store in Babbage's Analytical Engine catered explicitly for intermediate results, as did Atanasoff and Berry's ABC [reminder], and Zuse's Plankalkül allocated a specific data type "Z" for the purpose. Here, from Wilkes (1956), is perhaps the earliest detailed example available .....

 

Babbage's Solution to Simultaneous Equations: Babbage, it seems, kept few programming notes, but Wilkes (1956) tells how an Italian military engineer Luigi Federico Menabrea (1809-1896) met Babbage in 1840 and was fully briefed on the latter's programming methods. Menabrea faithfully recorded these informal tutorials, and his explanations of what Babbage taught him have survived. Here, from Menabrea (1842), and enhanced to show intermediate results in red and the final answer in green, is how the Analytical Engine would go about solving a simultaneous equation of the general form .....

 

ax + by = c

a'x + b'y = c'

 

The solution used registers numbered V0 to V14, the first six being set to the six known values a to c and a' to c'. Their use in computing the first of the unknowns, x, is shown in the following table. A similar set of instructions solves the same equations for y. [Menabrea (1842, cited in Wilkes, 1956; Table 1.1)]

Step     Operation      Stored In

  1      V2 x V4        V8
  2      V5 x V1        V9
  3      V4 x V0        V10
  4      V1 x V3        V11
  5      V8 - V9        V12
  6      V10 - V11      V13
  7      V12 / V13      V14 (= x)

 

Using the syntax set out in Section 2.4, the same effect would be achieved in COBOL by a combination of DISPLAYs and ACCEPTs to prompt the user to input variables a to c', followed by the appropriate sequence of MULTIPLYs, SUBTRACTs, and DIVIDEs, followed by one final DISPLAY of the solutions for x and y.
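
For readers who would like to see the register schedule actually executed, here is a minimal sketch in Python [the list V stands in for the Engine's registers, and the example values chosen for a to c' are ours] .....

# Menabrea's register schedule for ax + by = c and a'x + b'y = c'.
V = [0.0] * 15
V[0], V[1], V[2] = 2.0, 3.0, 8.0      # a, b, c    (example values only)
V[3], V[4], V[5] = 1.0, -1.0, -1.0    # a', b', c'

V[8]  = V[2] * V[4]                   # step 1
V[9]  = V[5] * V[1]                   # step 2
V[10] = V[4] * V[0]                   # step 3
V[11] = V[1] * V[3]                   # step 4
V[12] = V[8] - V[9]                   # step 5: cb' - c'b
V[13] = V[10] - V[11]                 # step 6: ab' - a'b
V[14] = V[12] / V[13]                 # step 7: x

print(V[14])                          # x = 1.0 for these example values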

 

However, there is a lot more to scratchpad storage than intermediate results. Here are some of the most significant other uses .....

 

Constants: A constant is a data field whose contents are set once and then used repeatedly. This might be as a numerical operand in calculations, such as a current interest or tax rate, or pi, or any other algebraic constant. Alternatively, it might be as a textual element in a repeating display, for an example of which, see the VALUE IS element of the WS-DUMP-AID field in Section 2.3.

 

Counts: A count is a numeric data field used to accumulate useful data over major processing phases. It is common practice, for example, to count how many records have been read from an input file for reconciliation purposes, a simple task requiring the declaration of a suitably sized Working-Storage field, initialising it to zero, and then incrementing it by one for every successful pass through the READ paragraph. [The "right" answer would need to have been previously stored either on a file-end trailer record or somewhere in the job control environment, and if the actual and the expected counts did NOT match, then the run would need to be aborted for investigation.]

 

Loop Control: We have already seen (a) how file processing typically processes every record available, until the AT END condition is reached, (b) how the main process loops conditionally through short sequences of instructions, and (c) how recursive algorithms allow complex processes to be reduced to "subproblems of the same type". Working Storage assists the various looping processes by providing the necessary control fields and counts. Consider the field WS-LOOP in our Section 2.3 example.

 

Key Concept - "Flag" Bytes in Loop Control: A "flag" is a small Working-Storage field, usually only one byte long, and often set either to "Y" or "N". Flags are commonly used in loop control as the <{end condition}> tests described in Section 2.4 above. 

 

Data Arrays and Indexing: This is also a convenient point at which to remind ourselves about "Index Registers". These were introduced in Part 4 (Section 2.3), where they were described as a trick for retrieving a single entry from a long table of entries.

 

Again we must not forget that the Operating System is itself a program, and that it will therefore have Working-Storage resources of its own. Nor, if it comes to that, is Working-Storage the only scratchpad resource actually available. As explained in Part 5 (Section 1.1), the use of microcode requires scratchpad support from the registers directly available to the CPU. The key concept here is the "Accumulator Register", as introduced in our e-paper on "The General Purpose Computer" (see Figure 2, caption step 11). Unlike the two FETCH-and-EXECUTE registers discussed above, accumulators are entirely arithmetical registers, that is to say, they are used by the Arithmetic/Logic Unit for storing the intermediate results of complex single calculations such as DIVIDEs and MULTIPLYs. What we have, therefore, is an important distinction between "user scratchpad" resources and "system scratchpad" resources. The former is the Working-Storage resource controlled by the individual application programmer, and the latter is the CPU-resource controlled by the system programmer.

 

CRITICAL DISTINCTION - GENERAL REGISTERS VS SCRATCHPAD STORAGE: The point about intermediate results is that some live longer than others. Some are intermediate only within the execution of a single complex instruction - a particular DIVIDE, or MULTIPLY, or COMPUTE statement, for example. These resources can be freed up immediately after that statement has been executed. Other results are intermediate within the broader context of the program as a whole. These resources cannot be freed up, for example, until a particular record has been output, or (as would be the case with end-to-end closing totals and reconciliations, for example) until the very last record has been processed.

 

In Part 7, we shall (a) be exploring how the working-storage concept made a major breakthrough into cognitive theory as "working memory", (b) looking for evidence of biological general registers, and (c) considering whether biological cognition might rely on directly or metaphorically cognate systems, and, if so, what neural mechanisms might be involved.

 

3.4 STM for Multiprogramming Purposes

 

The key concepts here are multiprogramming itself, virtual memory, and the System Supervisor, as introduced in Part 5 (Section 1.2), and we should begin by reminding ourselves that the development of multiprogramming was one of computing's quantum leaps back in the late-1950s. Taking the Manchester University Atlas as typical of the leading edge hardware available, then its System Supervisor was equally typical of leading edge operating software. The critical design feature was the ability to conjure effectively unlimited Main Memory out of thin air, by surreptitiously paging all non-active Main Memory out onto drum or disc random access backing store. For the cost of the paging software and the page tables to keep track of where everything was, the system was suddenly able to support a much higher number of concurrent programs. For even greater efficiency, the operating system could also cope with a queue of jobs awaiting execution, and gave its console operators the power to "tweak" the processing workmix here and there in an ongoing attempt to match the overall availability of system resources against the overall demands of all the jobs in the job queue. The name subsequently given to this aspect of system supervision was "execution scheduling" .....

 

Key Concept - Execution Scheduling: An "execution scheduler" is a module within a multiprogramming operating system which manages the job queue. It submits the jobs in priority order, monitors their progress, resolves any contention scheduling problems, and varies how many jobs are allowed to be executing simultaneously. It also allows specific jobs to be "expressed" or suspended at will. The ICL 2900/3900 systems described in Part 5 (Section 3.3) mounted separate Job and TP Execution Schedulers for batch and TP, with TP normally getting the highest priority call on resources.
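
The underlying bookkeeping can be sketched as a simple priority queue [the job names, priorities, and concurrency limit are ours; a real execution scheduler also monitors progress and resolves contention, which we omit] .....

import heapq

MAX_CONCURRENT = 2

job_queue = []                     # (priority, job name); lower number = more urgent
for priority, name in [(5, "payroll batch"), (1, "TP monitor"), (3, "stock report")]:
    heapq.heappush(job_queue, (priority, name))

executing = []
while job_queue and len(executing) < MAX_CONCURRENT:
    priority, name = heapq.heappop(job_queue)
    executing.append(name)         # submitted into execution in priority order

print(executing)                        # ['TP monitor', 'stock report']
print([name for _, name in job_queue])  # ['payroll batch'] still awaiting resources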

 

In Part 7, we shall be considering whether some sort of biological paging and execution scheduling mechanisms might underlie our ability to "multitask" a number of cognitive objectives simultaneously.  If so, then it may or may not turn out to be relevant that biology already has a tried and tested mechanism for medium-term memory sensitisation, in the form of calcium-modulated "second messenger" neurotransmission.

 

3.5 STM for Transaction Processing Applications

The key concepts here are transaction processing itself, as introduced in Part 5 (Section 3.3), and, within that, partial results. The raison d'etre for the partial results facility in transaction processing is that it allows large applications to proceed in stages, often separated by many seconds. The physical requirement, however, is simplicity itself - just a small area of Main Memory under the control of the TP Monitor, which is written to at the end of one pass through a multiphase application, and re-read by the same application-user combination at the beginning of the following pass. In Part 7, we shall be considering whether time-extensive biological cognition resorts to some sort of partial results facility, and, if so, what neural mechanisms might be involved.  Again, it may or may not turn out to be relevant that biology already has a tried and tested mechanism for medium-term memory sensitisation, in the form of calcium-modulated "second messenger" synaptic sensitisation.
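
A minimal sketch in Python of the partial results mechanism [the dictionary, the key structure, and the application names are ours] .....

partial_results = {}    # the TP Monitor's small area of Main Memory

def end_of_pass(user, application, state):
    partial_results[(user, application)] = state          # write partial results

def start_of_pass(user, application):
    return partial_results.get((user, application), {})   # re-read on the next pass

# Pass 1: the user keys in the first screen of a multiphase application
end_of_pass("user01", "ORDER-ENTRY", {"customer": "C123", "lines": 2})

# Pass 2, perhaps many seconds later: the same user-application combination resumes
state = start_of_pass("user01", "ORDER-ENTRY")
print(state)            # {'customer': 'C123', 'lines': 2}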

 

3.6 STM for Database Applications

The key concept here is that of database currencies, as introduced in Part 5 (Section 3.1) (and explicitly located at the top of Figure 3 above). The essence of the currency concept is that the DBMS can instantly relocate the "current" record - that is to say, the record last accessed in a given set or of a given type. It does this by maintaining what is known as a "run time currency indicator" for every set and record type that it knows about, and every time it accesses a record it copies that record's database key into the appropriate "current of set" and "current of record type" currency indicator(s). The device comes as standard with what are known as "Codasyl" databases such as Computer Associates' Integrated Database Management System (IDMS), and is actually nothing more than a small address table held in memory and constantly updated. As to how this works in practice, see our e-paper on "Database Navigation and the IDMS Semantic Net". The importance of currency mechanisms to cognitive science lies in the fact that what they do is uncannily similar to what calcium-modulated synaptic sensitisation does in biological memory systems. We have previously argued this in Smith (1997d) and Smith (1998b).
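
For illustration only, here is a sketch in Python of a run time currency table being updated on each record access [the record types, set names, and database keys are invented for the example; real IDMS currencies are of course maintained by the DBMS itself] .....

currency = {"record": {}, "set": {}}     # the small in-memory address table

def access(record_type, set_name, db_key):
    currency["record"][record_type] = db_key   # current of record type
    currency["set"][set_name] = db_key         # current of set

access("CUSTOMER", "CUSTOMER-ORDER", db_key=0x1A2B)
access("ORDER",    "CUSTOMER-ORDER", db_key=0x3C4D)

print(hex(currency["record"]["CUSTOMER"]))     # 0x1a2b - still current of CUSTOMER
print(hex(currency["set"]["CUSTOMER-ORDER"]))  # 0x3c4d - the record last touched in the set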

 

3.7 STM for Distributed Processing Purposes

The key concepts here are those which support intermodular communication, as introduced in our e-paper on "The Relevance of Shannonian Communication Theory to Biological Communication" [see Section 5 thereof; especially Figure 7]. Indeed, the true value of 21st century computing lies squarely in its "connectivity": without exaggeration, almost the entire electronics industry is currently working on networking developments of one sort or another, with a sizeable proportion of the software industry in there alongside it.

 

The first of our specific concepts is called the "semaphore", after the military signalling system popularised during the Napoleonic age [as already mentioned in Part 1 (Section 4)]. Here is how Raineault (2001/2003 online) summarises its electronic descendants .....

 

"Introduced by Edgar [sic] Dijkstra in the mid-1960s, the semaphore is a system-level abstraction used for interprocess synchronisation. The semaphore provides two atomic operations, wait (P) and signal (V), which are invoked to manipulate a nonnegative integer in the semaphore data structure. The wait operation checks the value of the integer and either decrements it if it's positive or blocks the calling task. The signal operation, in turn, checks for tasks blocked on the semaphore and either unblocks a task waiting for the semaphore or increments the semaphore if no tasks are waiting. A binary semaphore, which has counter values limited to 0 and 1, can be used effectively by an application to guard critical systems. (Raineault, 2001/2003 online, p15).

 

Before saying any more about semaphores, we need for a moment to consider the physical nature of the modern computer. Even with the most modern micro-fabrication techniques, the majority of computers end up with the electronics chip-mounted onto printed circuit boards, the circuit boards slotted into a larger chassis, and the chassis equipped with the necessary communications sockets (or "ports"). A system of internal wiring (the "loom", or "bus") then connects separate mounting boards to each other and/or to the communications sockets as appropriate. The connections between the electronics components and the circuit board, and between the circuit board and the wiring loom, are called "pins" and "pinouts", respectively. Software modules written specifically to initiate output to, or interpret input (or feedback) from, pinouts, are known generically as "device drivers" (or just "drivers" for short).

 

Now if semaphores are memory abstractions as Dijkstra says, and pins are physical things, then the art clearly lies in getting the two to work together. The classic solution for half a century has been for the CPU in question (a) to have the processing state of an associated module wired out to a pin it can itself directly access, (b) to monitor said pin [the way we monitored the TIMER PIN in Section 1.1 above], and (c) to schedule its own processing according to what that monitoring reveals. The signal pins are known generically within electronics as "busy pins", and the timing instructions are of the nature <WAIT ON BUSY PIN HIGH/LOW>. The semaphores - one per unit being monitored - then simply echo that pin status into the software, where it can be tested. To maximise processing speed it is usual to maintain the semaphores at bit level rather than at byte level, and to provide them as a class of purpose-built CPU registers called "status registers". For those interested, Heidenstrom (1998/2003 online) shows several nice tables relating bit values within the status registers to the numbered pins on the communication ports.

 

Key Concept - Semaphores and Busy Pins: A semaphore is a status register bit, addressable in the inward direction by the local CPU, and directly connected in the outward direction to a communications pin of some sort. This allows the local Control Unit to monitor the processing state of a remote module with which it is for the time being associated. Semaphores are typically used to synchronise the execution of logically related software in distributed processing systems. 
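
As a minimal sketch [using Python's standard threading library to stand in for the system-level abstraction; the shared flag standing in for the busy pin is ours], Dijkstra's wait (P) and signal (V) operations might look like this .....

import threading

semaphore = threading.Semaphore(1)    # binary semaphore guarding a critical section
busy_pin = False                      # stands in for the device's busy pin

def device_task(name):
    global busy_pin
    semaphore.acquire()               # wait (P): block if the device is in use
    busy_pin = True                   # raise the busy pin
    print(f"{name} holds the device; busy pin HIGH")
    busy_pin = False                  # lower the pin again ...
    semaphore.release()               # ... and signal (V): unblock any waiting task

threads = [threading.Thread(target=device_task, args=(f"task-{i}",)) for i in range(3)]
for t in threads: t.start()
for t in threads: t.join()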

 

Thus far, we have been concerned with a single CPU within a distributed processing architecture. However, as soon as we consider arrangements of more than one CPU we encounter a whole cluster of new key concepts at once. Rather than be constantly interrupting our core argument, therefore, we strongly recommend pre-reading our e-paper on "The Relevance of Shannonian Communication Theory to Biological Communication". As to what the associated modules might be, the detail depends on the individual model of computer and/or the particular network architecture involved, but here are the usual options .....

 

ONE LOCATION MODULARITY

 

CPU to Coprocessor [eg. a mathematics or graphics chip, case-internal and connected via bus]

CPU to Internal Peripheral [eg. communications card or disk drive, case-internal and connected via bus]

CPU to External Device [eg. printer, zip drive, etc., case-external and connected via port and cable]

 

TWO LOCATION MODULARITY

 

CPU to Hierarchically Subordinate Remote CPU [eg. local server, via port and complex network]

CPU to Hierarchically Equal Remote CPU [eg. e-mail correspondent on own PC, via port and complex network]

 

Fortunately, there is really only one problem, regardless of the architecture, and that is how to keep all the nodes properly synchronised. Less fortunately, perhaps, this is not just a matter of providing sufficient semaphores and busy pins. Instead, it calls for sophisticated networking protocols (for when things are working well), and equally sophisticated exception handling mechanisms (for when they are not).

 

Key Concept - Exception Handling: Exception handling is the systematic (ie. pre-programmed) response to an error condition. In a stand-alone program, such errors can arise on just about any single program instruction, although they are especially common in file handling, data exceptions, and large computations. In a networked program, they can arise on any instruction in any of the associated processors as well, including those making up the carrying data network. Overall, the biggest single risk to any complex system is when accident or breakdown at one point creates a situation which had not - due to corner-cutting, ignorance, or laziness - been catered for elsewhere.

 

Burns and Wellings (1990) describe how exception handling uses the sort of interrupts previously introduced in Part 4 (Section 1.3). In fact, they carefully distinguish between I/O Interrupts (the line speed type) and "exception handling", the ability of cooperating modules to react to problems elsewhere in the processing network. For I/O Interrupts, they explain .....

 

"..... the role of interrupts in controlling input/output is an important one. They allow input/output to be performed asynchronously [glossary] and so avoid the 'busy waiting' or constant status checking that is necessary if a purely status-controlled mechanism is used. In order to support interrupt-driven input and output, the following mechanisms are required [.....] When an interrupt occurs, the current processor state must be preserved and the appropriate service routine activated. Once the interrupt has been serviced, the original process is resumed and execution continues. Alternatively, a new process may be selected by the scheduler as a consequence of the interrupt. [.....] Devices differ in the way they must be controlled; consequently, they will require different interrupt-handling routines." (Burns and Wellings, 1990, pp434-435)

 

Now the reason we are bothering with all this detail is that the similarity between distributed processing in the computer world and the parallel distributed processing approach to cognitive theory is more than just superficial - it could very well be that biological processing networks share the need for busy pins and semaphores. Here is how Philip Johnson-Laird, Stuart Professor of Psychology at Princeton University, puts it .....

 

"But parallel computation has its dangers too. One processor may be waiting for data from a second before it can begin to compute, but the second processor may itself be waiting for data from the first. The two processors will thus be locked in a 'deadly embrace' from which neither can escape. Any simple nervous system with a 'wired in' program that behaved in this way would soon be eliminated by natural selection. Higher organisms, however, can develop new programs - they can learn - and therefore there must be mechanisms, other than those of direct selective pressure, to deal with pathological configurations that may arise between the processors. A sensible design is to promote one processor to monitor the operations of others and to override them in the event of deadlocks and other pathological states of affairs. If this design feature is replicated on a large scale, the resulting architecture is an hierarchical system of parallel processors: a high-level processor that monitors lower level processors, which in turn monitor the processors at a still lower level, and so on down to the lowest level of processors governing sensory and motor interactions with the world. A hierarchical organisation of the nervous system has indeed been urged by neuroscientists from Hughlings Jackson to H.J. Jerison [citation], and Simon (1969) has argued independently that it is an essential feature of intelligent organisms. [.....] In a simple hierarchical computational device, the highest level of processing could consist of an operating system." (Johnson-Laird, 1988, pp361-362; bold emphasis added) << AUTHOR'S NOTE: This clear description of the biological operating system has not been well followed up. A full Ovid PsycINFO database search on the keywords "operating system" returned 138 hits [10th November 2003], but a mere half dozen of these are concerned with the mind as an operating system. Here they are, in date order .....

 

Blackmore (1981); Johnson-Laird (1983); Johnson-Laird (1988) [the paper quoted from above]; Von Cramon and Hebel (1989); Johnson-Laird, Miller, Galanter, Pribram, Nelson, and Narens (1992); Levin (1993); St. Julien (1997); Aleshin (1997); Lalitrojwong (2000)

 

In Part 7, we shall be looking for evidence of biological busy pins, semaphores, pinging, etc., and considering whether biological cognition might rely on directly or metaphorically cognate systems, and, if so, what neural mechanisms might be involved. Similarities with the seven-layered network structures currently used throughout non-biological telecommunications [reminder] will be microscopically explored.

 

3.8 STM for Real-Time Processing Purposes

We first introduced the concept of "real-time computing" in Part 2 (Section 1), where we described it as the sort of computing needed to control any sort of system in motion. This description avoided much of the true complexity of the problem, so here is a fuller definition .....

 

"Practitioners in the field of real-time computer system design often distinguish between hard and soft real-time systems. Hard real-time systems are those where it is absolutely imperative that responses occur within the specified deadline. Soft real-time systems are those where response times are important but the system will still function correctly if deadlines are occasionally missed. Soft systems can themselves be distinguished from interactive ones in which there are no explicit deadlines. For example, a flight control system of a combat aircraft is a hard real-time system because a missed deadline could lead to a catastrophe, whereas a data acquisition system for a process control application is soft as it may be defined to sample an input sensor at regular intervals but to tolerate intermittent delays. [.....] In a hard or soft real-time system the computer is usually interfaced directly to some physical equipment and is dedicated to monitoring or controlling the operation of that equipment. A key feature of all these applications is the role of the computer as an information processing component within a larger engineering system. It is for this reason that such applications have become known as embedded computer systems." (Burns and Wellings, 1990, pp2-3; bold emphasis original)

 

The classic examples of real-time computing are control systems such as those used in the defence avionics industry [touched upon in the inset on guided missiles in Part 4 (Section 4.6)] and aerospace in general [for example, the control surface servomechanisms in fly-by-wire aircraft, which (in the interest of superior combat performance) have been deliberately made so unstable that human pilots literally cannot think fast enough to control them]. Less concerned with infinitesimal response times and umpteen places of decimals are process control systems such as those used in the petrochemical industry [for example, using sensor-processor systems to control automatic flow valves]. They are also common elements in just about every sort of command-centre across the military-industrial-corporate-governmental spectrum. Not only do these three application domains all share the common word "control", but they also have a particular architecture - to borrow the term used in biology, they are designed to enable a motor hierarchy of some sort, and this constrains their design in three very important ways. Firstly, users find themselves sitting [functionally always, and possibly physically into the bargain] at the top of a complex command and control structure - they are pilots of aircraft, say, or the duty controllers in nuclear power stations, and so on. Secondly, the higher processes communicate relatively few guiding instructions and rely on relatively sparse feedback. Instead, they leave the bulk of the dataflow to the lower system to administer, via a host of low level, heavy traffic, automatic control loops and servosystems.

 

ASIDE: It is not always appropriate for cognitive modellers to draw flowlines "to scale" in an attempt to emphasise the relative flow of information at any one point. Nevertheless, Smith (1993) and Smith (2003 online; Figure 2) both show the low level information flows with heavy flowlines, and Frank (1963) solves the same problem a different way by annotating bits per second measures onto his flowlines.

 

And thirdly, this architecture places exceptionally high demands for accuracy and reliability on any intermodular system components.

 

So it turns out that the central issue for real-time systems designers is actually the same as it was in Section 3.7 for the designers of distributed systems - it is the inherent difficulty of managing the necessary modularity. Thus .....

 

"The software which controls the operations of the [typical real-time] system can be written in modules which reflect the physical nature of the environment. Usually there will be a module which contains the algorithms necessary for physically controlling the devices, a module responsible for recording the system's state changes, a module to retrieve and display those changes, and a module to interact with the operator." (Burns and Wellings, 1990, p6)

 

And it is in putting all these modules together that the key concepts we have encountered so far start to fall into place - concepts such as buffers, nodes, interrupts, busy pins, semaphores, and the like. In Part 7, we shall be looking for evidence of biological real-time processing, and considering whether biological cognition might rely on directly or metaphorically cognate systems, and, if so, what neural mechanisms might be involved. Again, similarities with the seven-layered network structures currently used throughout non-biological telecommunications [reminder] will need to be microscopically explored.

 

3.9 STM for "Object-Oriented" Processing Purposes

We turn finally to the difference between conventional and what is known as "object-oriented" computing. Khoshafian and Abnous (1990) introduce this issue as follows .....

 

"..... in the late 1950s, one problem found when developing large FORTRAN programs was that variable names would conflict in different parts of a program. [.....] The designers of the language ALGOL decided to provide barriers to separate variable names within program segments. This gave birth to the Begin ..... End blocks in ALGOL 60 [Citation]. Since the variable names appearing within a block are known only to that block, their use will not conflict with the use of the same variable name in other blocks of the program. This was a first attempt at providing protection or encapsulation within a programming language. Block structures are now widely used in a variety of languages. [//] In the early 1960s, the designers of the language SIMULA-67 [Citation] took the block concept one step further, and introduced the concept of an object [thus laying] the foundation of object-oriented languages and some of the object-oriented terminology."(Khoshafian and Abnous, 1990, p11)

 

"Using conventional techniques, the code generated for a real-world problem consists of first encoding the problem and then transforming the problem into terms of a von Neumann computer language. Object-oriented disciplines and techniques handle the transformation automatically, so the bulk of the code just encodes the problem and the transformation is minimised. In fact, when compared to more conventional (procedural) styles of programming, code reductions ranging from 40% to an order of magnitude have been reported. [Therefore] the object-oriented concepts and tools are enabling technologies that allow real-world problems to be expressed easily and naturally [and] provide better methodologies to construct complex software systems out of modularised reusable software units." (Khoshafian and Abnous, 1990, pp7-8)

 

Within the computer world, therefore, the ideal object is an "encapsulated" (= physically delineated) run of object code within what is otherwise a conventional program, which has been deliberately set up as such by an object-oriented compiler from an equally encapsulated run of source code. The result is a machine instantiation of a real world object, and the fact that it remains encapsulated at object code level gives it a powerful practical appeal as a means of delivering "re-usable software". Indeed,  object orientation "can be loosely described as the software modelling and development (engineering) disciplines that make it easy to construct complex systems from individual components" (Khoshafian and Abnous, 1990, p6; italics original).

 

But that is not all. Gorlen, Orlow, and Plexico (1990) summarise the benefits of the object-oriented approach as follows .....

"When doing object-oriented programming, a programmer specifies what to do with an object rather than focussing on the conventional procedural aspects of how something gets done. Simply stated [.....] an object comprises the data elements or the data structure needed to describe the object, together with the set of permissible operations on the data." (Gorlen, Orlow, and Plexico, 1990, p2; italics original)

 

Needless to say, successful object-orientation needs to begin at the early analysis stage with a search for the real world objects the system is going to be dealing with. We have already described this process in Part 4 (Section 4.2) and Part 5 (Section 3.1), and emphasised the pivotal role of the data model in software development. The process then continues into the detailed design stage, looking for hierarchies within these real world objects, so that "classes" of objects may eventually be declared to the compiler and attributes are not needlessly duplicated. This is what provides an object-oriented system with what is known as "inheritance". Here is a formal definition .....

 

"Inheritance simplifies the task of creating a new class that is similar to an existing class by enabling a programmer to express just the differences between the new class and the existing class, rather than requiring the new class to be written from the beginning" (Gorlen, Orlow, and Plexico, 1990, p101).

 

ASIDE: In fact, the process of data modelling has always involved abstraction of this sort, and in the early 1980s we personally carried out detailed design abstraction for an IDMS network database system to be implemented in conventional COBOL. This was perfectly straightforward, given that the Entity-Relationship Diagram is quintessentially a model of objects and hierarchically inherited attributes, and is designed to be implemented in a variety of ways. Object-oriented products need, therefore, to be approached with caution, for it is sound design which delivers truly reusable software, not this or that breed of compiler. What does make the object-oriented paradigm interesting is the encapsulation it offers .....

 

Yet again, however, the central issue for object-oriented systems designers is actually the same as it was in Section 3.7, namely the inherent problems of modularity. Only now it is called encapsulation, and encapsulation is a very important issue .....

 

"[Encapsulation] is a good programming practice because it enforces the separation between the specification and the implementation of abstract data types ....." (Gorlen, Orlow, and Plexico, 1990, p21).

 

"The design of a large embedded system cannot be undertaken in one exercise. It must be structured in some way [and] two complementary approaches are often used: decomposition and abstraction. Decomposition, as the name suggests, involves the systematic breakdown of the complex system into smaller and smaller parts until components are isolated that can be understood and engineered by individuals or small groups. [.....] Abstraction [allows] a simplified view of the system and of the objects contained within it to be taken, which, nevertheless, still contains its essential properties and features. [.....] The needs of abstraction dictate that [program] subcomponents should have well defined roles, and clear and unambiguous inter-connections and interfaces." (Burns and Wellings, 1990, pp18-19)

 

The guru of modularity in cognitive science is Rutgers University's Jerry Fodor, who first raised the topic in his 1983 book "The Modularity of Mind", and who went on to define a cognitive module as an "'informationally encapsulated' cognitive facility" (Fodor, 1987:25). We should also mention University College, London's Richard Cooper, who is working on an object-oriented language called COGENT for cognitive modelling (Cooper undated/2003 online), and who points out that functional modularity in the brain achieves exactly the sort of encapsulation which prompted the object-oriented movement in the first place. Thus ..... 

 

"[OOP] offers the possibility of directly addressing, within a sound computational framework, the functional modularity implicit in box/arrow diagrams. Specifically, within an object-oriented paradigm, boxes might be directly modelled as objects (of various classes), with arrows being directly modelled as communication channels between those objects." (Cooper undated/2003 online)

 

 << AUTHOR'S NOTE: This clear description of biological encapsulation has not been well followed up. A full Ovid PsycINFO database search on the keywords "encapsulation" returned 141 hits [11th November 2003]. A sizeable proportion of these appear to be in the Fodorian tradition, although others are discussing the impact of domain-specificity on the applied science of expert knowledge, and still others are pursuing knowledge domain explanations of various psychopathological conditions. Only a handful of papers are concerned with encapsulation as an explicit theory of computation. Here they are, in date order .....

 

Losiewicz (1997); Botting and Johnson (1998); Tall, Thomas, Gray, and Simpson (1999); Seok (2000) 

 

Finally, Burns and Wellings (1990) describe how object-orientation has a vital - but not exclusive - role to play in real-time systems design .....

 

"Objects, while providing an abstract interface, cannot protect themselves against parallel (concurrent) use of this interface. Nor can they directly represent parallel objects in the application domain. The process abstraction is, therefore, more applicable to real-time programming. [.....] Both object and process abstractions are important in the design and implementation of reliable embedded systems. Although the process abstraction is probably more applicable to real-time systems, the object construct is still nevertheless important. It has a role in the construction of processes themselves and, moreover, it can be used to define the interface to a process [where] the process is defined within the 'body' of the object." (Burns and Wellings, 1990, p19)

 

In Part 7, we shall be looking for evidence of biological abstraction and encapsulation, and considering whether biological cognition might rely on directly or metaphorically cognate systems, and, if so, what neural mechanisms might be involved.

 

4 - Still to Come

That concludes our review of the various ways computer memory can be used as STM in man-made computing systems. Finally, in Part 7, we undertake a comparative review of how well the full richness of these subtypes has been incorporated into cognitive theory.

 

References

See Main Menu File.
