Course Handout - Short-Term Memory Subtypes in Computing and Artificial Intelligence

Part 2 - A Brief History of Computing Technology, 1925 to 1942

Copyright Notice: This material was written and published in Wales by Derek J. Smith (Chartered Engineer). It forms part of a multifile e-learning resource, and subject only to acknowledging Derek J. Smith's rights under international copyright law to be identified as author may be freely downloaded and printed off in single complete copies solely for the purposes of private study and/or review. Commercial exploitation rights are reserved. The remote hyperlinks have been selected for the academic appropriateness of their contents; they were free of offensive and litigious content when selected, and will be checked periodically to ensure they remain so. Copyright © 2010, High Tower Consultants Limited.

 

First published online 14:33 BST 8th October 2002, Copyright Derek J. Smith (Chartered Engineer). This version [HT.1 - transfer of copyright] dated 18:00 14th January 2010

 

This is the second part of a seven-part review of how successfully the psychological study of biological short-term memory (STM) has incorporated the full range of concepts and metaphors available to it from the computing industry. The seven parts are as follows:

Part 1: An optional introductory and reference resource on the history of computing technology to 1924. This introduced some of the vocabulary necessary for Parts 6 and 7. To go back to Part 1, click here.

Part 2: An optional introductory and reference resource on the history of computing technology from 1925 to 1942. This material follows below and starts to introduce the vocabulary necessary for Parts 6 and 7. The main sections are:

1 - The Data Processing Industry in 1925

2 - Computation in Ballistics

3 - Computing, 1935-1942 - The Digital Breakthrough

4 - Timeline, 1925 to 1942

Part 3: An optional introductory and reference resource on the history of computing technology from 1943 to 1950. This will further introduce the vocabulary necessary for Parts 6 and 7. In so doing, it will also refer out to three large subfiles reviewing the history of codes and ciphers, and another subfile giving the detailed layout of a typical computer of 1950 vintage. To go directly to Part 3, click here.

Part 4: An optional introductory and reference resource on the history of computing technology from 1951 to 1958. This will further introduce the vocabulary necessary for Parts 6 and 7. To go directly to Part 4, click here.

Part 5: An optional introductory and reference resource on the history of computing technology from 1959 to date. This will further introduce the vocabulary necessary for Parts 6 and 7. To go directly to Part 5, click here.

Part 6: A review of the memory subtypes used in computing. To go directly to Part 6, click here.

Part 7: A comparative review of the penetration (or lack thereof) of those memory subtypes into psychological theory. To go directly to Part 7, click here.

To avoid needless duplication, the references for all seven parts are combined in the menu file. To return to the menu file, click here, and to see the author's homepage, click here. We also strongly recommend the computer history sites maintained corporately by the Charles Babbage Institute at the University of Minnesota, the Bletchley Park Museum, the Computer Conservation Society, and the National Archive for the History of Computing at Manchester University, as well as those maintained individually by the University of Essex's Simon Lavington, Clemson University's Mark Smotherman, and the freelance historians George Gray (editor of the Unisys Newsletter) and Ed Thelen.

1 - The Data Processing Industry in 1925

In Part 1 we saw how the data processing industry developed out of the calculating aids of the seventeenth century into the Scheutz Difference Engine of the mid-nineteenth century, and thence into the cash registers, comptometers, and punched card sorters and tabulators of the late nineteenth century. We also saw how the late nineteenth century was a period of exponential economic growth, in which it was not uncommon for companies formed merely to service this or that new invention to find themselves major international corporations a couple of decades later. Indeed, the sheer pace of progress regularly took even the experts by surprise - Burroughs, for example, sold 15,763 machines in 1909 alone, twice what their founder had originally estimated the total worldwide market would ever be (Cortada, 1993).

Let us therefore pause for a moment to summarise the situation as it stood in 1925, the year after Herman Hollerith's Computing, Tabulating, and Recording Company renamed itself IBM. At the household end of the economic spectrum, we find that AT&T and the various European Post Offices had made integrated telephony and telegraphy services an everyday purchase, that the likes of Remington, Oliver, Smith, and Underwood had done the same for typewriters, and that Marconi, RCA, and Westinghouse were well on their way to doing the same for radio. Outside the household, calculators made by the likes of Odhner, Monroe, and Marchant were helping small retailers do their accounts and research academics analyse their data, and at the heavy end of the economy, corporations like IBM, NCR, and Burroughs were helping to mechanise commercial and governmental structures worldwide. It was a world where the man in the street could now communicate by pressing a few keys and turning a few knobs here and there, and where new data was allowing managers "to make better decisions earlier based on more facts" (Cortada, 1993, p35).

Yet the pinnacle of corporate data processing remained nothing more sophisticated than the punched card suite, and if we look at all this activity with a more critical eye, we can see a clear pattern to where data processing technology was flourishing and where it was not. Specifically, we can identify two strong-selling market sectors in the 1920s, as follows:

(1) High Street/Office Batch Processing: In this sector we have small-to-medium sized enterprises (SMEs) whose investment in new technology might run to a bottom-of-the-range NCR (or rival) cash register, a Burroughs (or rival) office comptometer, and a few typewriters. Company accounts were probably drafted manually, computed by comptometer, checked over by the company chief clerk, and then produced in neat copy on a typewriter.

(2) Corporate, Military, and Governmental Batch Processing: Under this heading, we have such applications as the actuarial tables used in insurance, financial analysis, large billing runs, large inventory replenishment runs, etc., military gunnery tables, tide and navigation tables, census data, and so on. 

Key Concept - Batch Processing: These, note, were both "batch" applications. This means that data was time-buffered - it was not processed as it became available, but was allowed instead to accumulate until enough similar inputs could all be processed at once. The justification for this was (and, with the right sort of data, still is) that significant economies of scale will often result, thus reducing unit processing costs (metaphorically speaking, there is only a fractional additional cost in running your dishwasher with a batch of 100 items in it, compared to running it with only one item in it). With an SME, the batch in question might be a day's sales, or a week's overtime claims, or a month's stock replenishment requests, and with a government department, it might be anything from a trayload of Hollerith cards to a lorry load (or potentially several lorry loads) of census returns.
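The economics of the dishwasher metaphor above can be sketched in a few lines of Python (the setup and per-item costs below are invented illustrative figures, not taken from the text):

```python
# Illustrative sketch: the unit cost of a batch run falls as the batch
# grows, because the fixed setup cost is amortised over more items.

def unit_cost(batch_size, setup_cost=100.0, cost_per_item=0.05):
    """Total cost of one run divided by the number of items in it."""
    return (setup_cost + cost_per_item * batch_size) / batch_size

# A run of 1 item carries the whole setup cost; a run of 1000 barely notices it.
print(unit_cost(1))      # -> 100.05 per item
print(unit_cost(1000))   # -> 0.15 per item
```

With these assumed figures, a one-item run is dominated entirely by the setup cost, which is exactly the economy of scale a batch shop exploits.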

The predominance of batch processing systems was, of course, technology-driven: it was all that the available systems - the sorters and tabulators of your average corporate punched card department - could cope with, and it failed to take any account whatsoever of three as-yet-vaguely conceived additional areas, namely real-time computing, on-line computing, and personal computing, as now profiled .....

Key Concept - Real Time Processing: Processing is deemed to occur in "real time" if and when the decision making element of the machine (what Babbage called "the mill") is free to respond to a demand (a) instantaneously, and (b) without interruption. It is the sort of computing needed to control any sort of system in motion. The principal method of real time control in 1900 was to have a dedicated human operator at the system's (real or figurative) helm. By 1945, however, many functions were being carried out automatically by analog computers linked to servomechanisms, and modern real time systems (albeit they are now heavily digitalised) have become the mainstays of the aerospace [to see a typical "fly-by-wire" system, click here], healthcare [to see a typical "treat-by-wire" application, click here], and military cybernetics industries [to see a modern artillery fire control system, click here].

Key Concept - On-Line Processing: Processing is deemed to be "on-line" if and when the decision making elements of the machine are free to respond with only a few seconds delay to a particular enquiry or update. The textbook examples here are bank or stock balance enquiry, funds transfer, hotel, theatre, and travel ticket booking systems, and Internet shopping. In this sort of application, customers (a) have a particularly subjective definition of "now" (they will wait anything from one or two seconds for a bank balance enquiry, to a minute or so for a theatre or airline booking), (b) need to know present availability of the target resource with 100% confidence, and (c) need to have unchallenged access to that target resource during the chosen timeframe. On-line processing is not available with any form of record card system, however, because it is totally impractical to go directly to a particular card in a particular card set. Indeed, this problem was not solved until 1961 [see Part 5 (Section 3.1)]. On-line processing is sometimes referred to as "on-line transaction processing" (OLTP) because of the time-encapsulated nature of the exchange between the user and the machine [for a formal definition, click here].

Key Concept - Personal Computing: Processing is deemed to be "personal" if and when users have access to their own hardware (and do something with it other than play computer games). This sort of computing therefore had to wait until mass production techniques brought equipment prices down. The first desktop personal computers started to appear on the market in the early 1980s, the first laptops about a decade later, and the Internet/multimedia machines in the mid-1990s.

As far as the 1920s were concerned, therefore, it is probably fair to conclude that the data was there to be processed, but the computing machinery available was too slow and too inflexible to service the full richness of the demand. To understand what happened next, we need to consider a detailed case study, and one of the most generally informative of these has to do with the problems of artillery gunlaying in general, and anti-aircraft gunlaying in particular.

2 - Computation in Ballistics

Now the basic problem for artillery gunlaying is gravity. Put simply, your cannon ball wants to fall to earth as quickly as it can, and it is your job as gunlayer to combine the effects of gravity with some forward motion so that the point of impact of the projectile is also the point of maximum destruction to your enemy. To further complicate matters, the effect of gravity is to give any projectile a downward acceleration, whilst its forward motion remains approximately the same throughout the flight. This is what gives the resulting trajectory its characteristic "dipping curve" shape, the correct name for which is a "parabola". The science of gunlaying is therefore to tilt the gun barrel upwards or downwards from the line of direct sight, until it rests at a tangent to the parabolic trajectory which passes through both gun and target. Corrections then need to be made for wind resistance (up a touch), sidewinds (left or right a touch), headwinds (up a touch), tailwinds (down a touch), targets uphill relative to you (up a touch), and so on. The dimensions of the parabola will also vary according to projectile weight, the weight and chemical condition of the propellant charge, the goodness of fit of the projectile into the barrel (up a touch if the gun is starting to wear, or has been firing for some time, and has got hot).
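The parabolic geometry described above can be put numerically. The following Python sketch (the muzzle velocity and range are hypothetical figures, and the wind, wear, and air-resistance corrections listed in the text are deliberately left out) computes the barrel elevation for an idealised vacuum trajectory:

```python
import math

# For a flat range R and muzzle velocity v, the vacuum parabola gives
#   R = v**2 * sin(2*theta) / g,   so   theta = 0.5 * asin(g*R / v**2).

def elevation_degrees(target_range_m, muzzle_velocity_ms, g=9.81):
    """Barrel elevation (low solution) to hit a target at the gun's own height."""
    x = g * target_range_m / muzzle_velocity_ms**2
    if x > 1.0:
        raise ValueError("target beyond maximum range")
    return math.degrees(0.5 * math.asin(x))

# e.g. a (hypothetical) 450 m/s gun engaging a target 10 km away
print(round(elevation_degrees(10_000, 450.0), 1))
```

The real science of gunlaying then consists of nudging this ideal answer "up a touch" or "down a touch" for each of the corrections the paragraph lists.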

Given all this mathematics, it will come as no surprise to learn that these calculations were beyond individual artillerymen, especially in the heat of battle. As a result, the earliest cannons were simply pointed by eye and adjusted as necessary if they missed (the British took three "bombards" with them to Crécy in 1346 just to frighten the enemy's horses). The situation did not improve much until an Italian mathematician named Niccolo Tartaglia (1506-1559) wrote a monograph on the subject, and devised some quick and easy field procedures ..... 

Key Invention - Tartaglia's Quadrant (ca. 1545): In an attempt to introduce some discipline into the setting of an elevation, Tartaglia invented the gunner's quadrant. This consisted of a short plumbline suspended across a graduated 90-degree quadrant. The quadrant itself was fixed to the gun in question by a muzzle plug, so that as the barrel was raised and lowered the plumbline crossed the quadrant at a different point [for a helpful picture of the quadrant in use, click here or here]. The elevation could then be read off as the number of degrees marked at this intersection point, "point blank" being the true horizontal. In the meantime, the gun commander had estimated the range to the target and could set the corresponding required elevation from a look-up table or pre-drawn graph supplied with the gun. We shall refer to these graphical and tabular aids henceforth as "ready reckoners". [To see the "table of fire" ready reckoner for an American Civil War 20-pounder cannon, click here.]

Quadrants were most effective when helping siege artillery destroy fortifications, because they allowed the precise resetting of an elevation after each shot. However, the lighter guns were still largely laid by the experienced eye, because the fact that they were intended to engage more mobile targets interacted unfavourably with the fact that the decision making required by Tartaglia's system itself took time. If you had a moving target, you had to know how long it was likely to take between reading your instruments and the projectile arriving at the end of its travel. Just as in clay [skeet] shooting, you must aim ahead of your enemy by a carefully calculated amount, a cunning little trick which goes by the names "deflection shooting" or "leading the target". This trick does not just take a lot of learning, but needs to be done in the forwards-backwards and leftwards-rightwards dimensions simultaneously. The forwards-backwards deflection is known as the "elevation lead", and the leftwards-rightwards deflection is known as the "azimuth lead". You no longer needed just a gunsight, in other words; you needed a "fire control system", complete with your own computer.

By the Napoleonic era, ready reckoner techniques were at work in both (a) the use of field artillery against infantry or cavalry formations manoeuvring on land (where the ground speed of the target can be anything from 2 to 20 mph, and where relative topographical elevation can change by the second), and (b) the use of shipboard artillery (where the ground speed would normally be 5 to 8 mph, but where your own gun platform would also be pitching, rolling, and yawing into the bargain). [For an example of the effectiveness of shipboard artillery against slow moving infantry formations on the flat, see the role played by the USS Louisiana at the Battle of New Orleans, 1815, on our webpage on military disasters.] However, as with computing and many other things, the period between the Battle of Trafalgar (1805) and the First World War was one of unremitting innovation for the artillery world, and the following major changes may be identified:

(a) Calibre and Range: A typical heavy naval gun at Trafalgar was the carriage-mounted British 32-pounder black powder muzzle loader. This had a 9½-foot barrel, a 32-pound solid projectile, a shot diameter of 6.4 inches, and an effective range of about one mile. By 1915, its equivalent would have been the 1913-designed 15-inch Mark 1 turret-mounted breech loader, with a 55-foot 100-ton barrel, a 1-ton explosive projectile, a shot diameter of 15 inches, and an effective range of 16 miles.

(b) Speed of Firing: The rate of fire of our 1805 32-pounder was around one shot every two minutes; that of our 1915 15-inch Mark 1 was around one shot every minute. The critical inventions here were the integration of shot and cartridge, and the breech mechanism.

(c) Rangefinding: Although it is possible to range monocularly using a telescope, by relying on the amount of focusing required to give a clear image, the critical development under this heading was the binocular optical rangefinder. This was a contraption rather like a pair of binoculars, but with the front lenses anything from a few centimetres to several metres apart [to see a picture of a German 4-metre rangefinder of 1940s vintage, click here]. The user stood at an eyepiece halfway along the telescope tube and saw two half-images from the lens and prism systems - the "objectives" - either side of him. These images would only perfectly superimpose, however, when the two objectives were angled slightly inwards, and, just as with biological binocular convergence, the degree of "nasal" rotation is an analog of the target's range. The adjustment angle was therefore factory-calibrated so that the range could be read off directly in metres. Such instruments were manufactured by the Glastechnische Laboratorium Schott & Genossen (later Carl Zeiss, Jena) from 1894, and by Barr and Stroud in the UK from 1907.
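The convergence principle behind the binocular rangefinder reduces to a single trigonometric relationship. As a rough Python sketch (the baseline and angle below are hypothetical values; real instruments folded this calibration directly into the scale):

```python
import math

# With a baseline b between the two objectives, a total inward
# ("nasal") convergence angle theta corresponds to a target range of
# roughly b / tan(theta) -- which is why longer baselines range further.

def range_from_convergence(baseline_m, convergence_deg):
    """Target range implied by a given convergence angle of the objectives."""
    return baseline_m / math.tan(math.radians(convergence_deg))

# A 4-metre instrument reading a convergence of 0.05 degrees
print(round(range_from_convergence(4.0, 0.05)))   # roughly 4.6 km
```

Note how tiny the angle is at useful ranges: the accuracy of the whole fire control chain hinged on measuring fractions of a degree.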

(d) Naval Fire Control Systems: Knowing your enemy's range is only one factor in successful naval gunnery. For each selected target, your own speed and heading need to be computed against target speed and heading, and due allowance made (a) for elevation and azimuth lead, and (b) for any pitching, rolling, and yawing of the gun platform. The solution was to bring together into a single fire control computer a number of "follow the pointer" analog systems, each coping with one of the sighting or stabilisation parameters. Such systems were developed during, and fairly effective by the end of, the First World War. The US effort in this area was led by the Sperry Gyroscope Company (see under Elmer Ambrose Sperry in Part 1) and the Ford Instrument Company (see under Hannibal C. Ford in Part 1). Ford's Range Keeper Mark 1 [picture and specification] was installed in 1917 on USS Texas, and heralded an era of greatly improved naval gunnery (Clymer, 1993/2002 online).

The arrival of the aeroplane as a weapon of war ended any residual reliance upon a gunner's experience and good judgement. Not that the principles themselves had changed, for in his monograph on the history of anti-aircraft (henceforth "flak") technology, Müller (1998) was still explaining that  "in order to hit a target moving in a free space, [estimations] have to be fed to the gun for the point in space which would be occupied by the target at the end of the artillery shell's flight time" (p3). But there are estimations and there are estimations: it is one thing to shoot at a formation of cavalry two hundred yards across and only two hundred yards away, and quite another to shoot at an aeroplane a few yards across and several thousand yards away. The point is that deflection shooting becomes progressively more difficult as the target's range goes up, for the simple and mathematically inescapable reason that the projectile's time of travel also goes up. This both magnifies any sighting errors already made, and affords the target a chance to take evasive action. For example, it took around 20 seconds for a Second World War 88mm shell to reach its practical operational ceiling of 20,000 feet (roughly four miles), within which time an aircraft moving at 250 mph would itself have travelled well over a mile.
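The closing arithmetic of that paragraph is easy to check (the speed and flight time are the text's own figures; the unit conversion is standard):

```python
# How far does a 250 mph aircraft travel during a 20-second shell flight?

MPH_TO_MPS = 1609.344 / 3600.0   # metres per second in one mile per hour

def lead_distance_m(target_speed_mph, time_of_flight_s):
    """Distance the target covers while the shell is in the air."""
    return target_speed_mph * MPH_TO_MPS * time_of_flight_s

d = lead_distance_m(250.0, 20.0)
print(round(d))                 # -> 2235 metres
print(round(d / 1609.344, 2))   # -> 1.39 miles: "well over a mile"
```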

The experience of two world wars brought two popular solutions to the deflection shooting problem in air defence: one was to shoot off lots of small cheap bullets and rely on sheer weight of numbers to give you the occasional direct hit, and the other was to shoot fewer, but bigger and more expensive, explosive shells, engineered to go off near the target and inflict blast or fragmentation damage. In other words, if a direct hit was going to be a practical impossibility, you needed to capitalise on your near misses as well. The first option was the method of choice against low-flying aircraft, the second the method of choice against high-flying, BUT it relied upon an accurate time fusing system, as now described .....

Key Invention - Time Delay Fusing: A time fuse is a mechanism for the timed detonation of an explosive projectile while still in the air, and in the absence of direct contact with the target. Because the resulting blast comes vertically downwards, this has always been the weapon of choice against defensive trench systems. Time fuses were in common use for large calibre mortars as far back as the late 16th century, and were then introduced into the lighter field artillery when an artilleryman named Henry Shrapnel invented his eponymous case shot in 1784. The early time fuses were simply lengths of paper or cloth rolled around a trickle of black powder (much like an Old Holborn roll-up, or spliff). You knew from past experience how fast this sort of fuse burned, so you cut off just enough of it to provide the intended delay, inserted one end into the main charge, lit the other, and kept your fingers tightly crossed it did not detonate prematurely. The delay was known as the "fuse burn time". World War One time fuses used much the same basic technology, save that the fuse was now a piece of clockwork, factory-machined into the shell, and its "burn time" was externally adjustable by the guncrew immediately prior to loading. The mechanism had to be extremely robust to withstand both the momentary linear acceleration when fired (typically well in excess of 10,000 "g") and the centrifugal forces of spinning at (typically) well in excess of 20,000 rpm while in flight. The systems invented to cope with determining the required fuse burn time from sighting data were known as "predictors". Time fusing was largely replaced in flak systems by the introduction of the radio proximity fuse towards the end of the Second World War - see next Key Invention panel but two.

Now we can actually learn a lot about computing by considering the prediction problem in some detail. If you were ever put in charge of an anti-aircraft gun, for example, then ideally you would need to know the following things about an enemy aircraft travelling in a straight line .....

Example: If your gun is facing east, and the target is in front of it, then that target's bearing is +90° (because it is to the east of you), but its azimuth is zero (because your gun is already pointing at it); its heading could, momentarily, be any value between 1° and 360°.

Unfortunately, some of the most important values cannot accurately be observed, so either you have to guess at them, or you have to compute them from what you can observe, according to the rules and equations of three-dimensional geometry, algebra, and trigonometry. Moreover, velocity and heading can only be computed if the rate of change of other variables is known, so you will have to repeat some sighting values after a fixed time delay, using a stopwatch. So basically you are dealing with the following four points in three-dimensional Cartesian space:

Coordinates #1: Where you are, and what direction the axis of your gun is aligned to.

Coordinates #2: The location of the target at T1, the first click of the stopwatch. To ascertain this, you have to focus a combined theodolite-rangefinder on the target, and read off the scale values for the bearing (horizontal scale), angle of elevation (vertical scale), and, assuming you have a binocular rangefinder, range (binocular convergence scale).

Coordinates #3: The location of the target at T2, the second click of the stopwatch. To ascertain this, you need to repeat your readings after a standard stopwatch delay. The difference between the two sets of values then allows the target's three velocity components to be calculated, and from that a reasonably accurate heading and ground speed.

Coordinates #4: The (future) location of the target at T3 (because this is where you are going to aim your gun). Setting T3 needs to take into account both fuse setting and fuse burn time. It might take three or four seconds to load the shell into the gun, and the shell might then spend of the order of 20 seconds in flight.
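The two-click procedure above amounts to a few lines of coordinate geometry. A minimal Python sketch (all sighting figures are hypothetical):

```python
import math

# Each sighting gives bearing, elevation, and range; converting both to
# Cartesian coordinates lets us difference them for velocity and then
# extrapolate a straight-line target forward to the moment of burst, T3.

def to_cartesian(bearing_deg, elevation_deg, range_m):
    """East (x), north (y), up (z) from theodolite-rangefinder readings."""
    b, e = math.radians(bearing_deg), math.radians(elevation_deg)
    horizontal = range_m * math.cos(e)
    return (horizontal * math.sin(b), horizontal * math.cos(b),
            range_m * math.sin(e))

def predict(sighting1, sighting2, dt, time_to_burst):
    """Extrapolate from two sightings taken dt seconds apart."""
    p1, p2 = to_cartesian(*sighting1), to_cartesian(*sighting2)
    velocity = tuple((b - a) / dt for a, b in zip(p1, p2))
    return tuple(p + v * time_to_burst for p, v in zip(p2, velocity))

# First click, second click 10 s later, then aim 24 s ahead
# (loading time plus fuse burn time, as in the text).
aim = predict((45.0, 30.0, 8000.0), (47.0, 30.0, 7800.0), 10.0, 24.0)
print([round(c) for c in aim])
```

The flak crews described later in this section were, in effect, evaluating exactly these expressions by gears and handwheels rather than by arithmetic.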

To make matters worse, the two-click system is only theoretically useful while the target aircraft remains travelling in a straight line. A "three click" system, in which three timings are taken, offers the additional ability to deal with curved flight, but requires quadratic curve fitting in all three key dimensions, and for a long time that was beyond the available technology and all but professional mathematicians. The calculations also rely heavily on a process known as "integration", one of the major subskills of the calculus. Müller (1998) describes how German air defence technologists started to deal with these problems during the First World War. The early systems, not surprisingly, were quite basic .....
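The quadratic curve fitting that made the three-click system so demanding is, with hindsight, a small finite-difference calculation. A Python sketch, one coordinate at a time (all figures invented):

```python
# Fit an exact quadratic through three equally spaced sightings of one
# coordinate, then extrapolate it -- the step the text says was beyond
# the field technology of the day.

def fit_quadratic(x0, x1, x2, dt):
    """Coefficients (a, b, c) of x(t) = a*t**2 + b*t + c through the
    sightings at t = 0, dt and 2*dt (standard finite differences)."""
    a = (x0 - 2.0 * x1 + x2) / (2.0 * dt * dt)
    b = (-3.0 * x0 + 4.0 * x1 - x2) / (2.0 * dt)
    return a, b, x0

def extrapolate(x0, x1, x2, dt, t):
    a, b, c = fit_quadratic(x0, x1, x2, dt)
    return a * t * t + b * t + c

# A curving target: positions 0 m, 120 m, 260 m along one axis at 5 s
# intervals; where will it be 24 s after the first click?
print(extrapolate(0.0, 120.0, 260.0, 5.0, 24.0))   # -> 758.4
```

The same three lines of algebra have to be evaluated independently for each of the three Cartesian dimensions, which is why the workload defeated manual methods.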

Key Invention - The Peres Predictor (1915): This was basically a Zeiss telescope-theodolite on a tripod. By taking a direct sighting on the target, it could give angles of elevation and azimuth, and by doing the same after a set number of seconds measured on a stopwatch the operators could extrapolate a suitable burn time and the necessary firing angles of elevation and azimuth. The apparatus was complemented in 1916 by a set of command charts (ie. ready reckoners) from which burn time and elevation could be read directly.

Key Invention - The JAKOB Command Table (1917): The JAKOB command table was introduced in 1917, and was again basically a Zeiss telescope-theodolite. Here, however, the ready reckoner element was integrated into the main carcase of the instrument as an output display panel, rather than as a separate set of ready reckoners. This therefore involved internal gearing according to the principles of analog computation already described in Part 1.

Further attempts at mechanising flak gunlaying took place during the 1920s, and the solution in all cases was to rely on an analog computer - "a system to represent the problem whose solution is required [and whose behaviour] then yields the solution in terms of the measured values of various output quantities" (Williams, 1961, p9). Known generically as "follow-the-pointer" systems, analog computers consisted typically of a large box of interconnecting shafts, cranks and cams, differential gears, torque amplifiers, clutches, pulleys and cords, potentiometers, and/or spring-loaded mechanisms, which were used to estimate the solutions to complex equations.

Key Concept - Follow-the-Pointer Servocontrol: The idea of mechanisms capable of following the movements of a pointer at a distance emerged in the mid-nineteenth century. For example, in the 1847 Siemens automatic dial telegraph [picture], the remote-station telegraph pointer automatically rotates to the same angle as that to which the home-station pointer has been set, thus allowing the home operator to "spell out" a message letter by letter. The principle is also seen at work in the "steam steering engines" of the 1860s. Such a system was developed by McFarlane Gray in 1866 for the SS Great Eastern. Firstly, a rudder deviation was chosen by the helmsman and set as a pointer deflection on the control apparatus. This then released steam to a rudder servomechanism to do the heavy work, and, as the rudder itself moved, a second pointer was gradually rotated until it matched the first, whereupon the steam supply to the servomechanism was shut off (Bennett, 1979). [For more on the history of analog systems, see Mindell (1995), and for more on the basics of cybernetics, see Smith (1997).]
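The shut-off-when-matched behaviour of such servomechanisms is essentially proportional feedback. A toy Python sketch (the gain, tolerance, and rudder figure below are invented for illustration):

```python
# The servo drives its own pointer towards the commanded one and shuts
# off when the two agree, like the steam steering engine described above.

def follow(setpoint, position=0.0, gain=0.5, tolerance=0.01, max_steps=200):
    """Proportional servo: close a fraction of the remaining error each step."""
    for step in range(max_steps):
        error = setpoint - position
        if abs(error) < tolerance:      # pointers match: shut the steam off
            return position, step
        position += gain * error        # the servomechanism does the heavy work
    return position, max_steps

final, steps = follow(15.0)   # helmsman sets 15 degrees of rudder
print(round(final, 2), steps)   # -> 14.99 11
```

Because each step closes only a fraction of the remaining error, the pointer converges smoothly on the setpoint instead of overshooting, which is the essential virtue of the arrangement.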

The US effort in this area was again led by Sperry Gyroscope and the Ford Instrument Company. Drawing on their experience with World War One naval fire control systems, Sperry and Ford developed equivalents of the German equipment mentioned above, and in the period between 1927 and 1935 successfully developed a workable integrated flak control system. The culmination of these eight years of experimentation was the T-6 AA Director System. The 1935 German system was generally similar to the US system. Each flak battery consisted of a ranging station equipped with a four-metre binocular rangefinder, a command unit equipped with an analog predictor, and the guns themselves. The following is from Müller (1998), and describes how the four-man crew of the rangefinder (numbered E, for Entfernungsmesser, 1 to 4) interacted with another nine men on the command post analog computer (numbered B, for Befehlstelle, 5 to 13) to come up with the required gun and fuse setting parameters:

"After the E2 [lateral tracker] and E3 [elevation tracker] sight operators determined a target's height and bearing, the E1 [sergeant] could plot its range [] B4 passed data via voice radio to the B7, B5 controlled the computer laterally [by turning an adjusting handwheel], B6 controlled it vertically [by turning a second adjusting handwheel]. Speed of revolution [of the internal mechanism], controlled by the adjusting gears, corresponded to the azimuth and elevation lead [read from output dials at the rear and right of the apparatus respectively]. The azimuth lead was passed by the B8, who kept the pointer on the azimuth tachometer aligned with the null marker; the B9 did the same for the elevation tachometer. The B10 passed the range lead using the range fluctuation speed tachometer. The B11 gave the firing azimuth, the B12 the barrel elevation, and the B13 the fuse setting by voice radio to the guns." (Müller, 1998, p13.)

Each gun in the typically four (later eight) gun battery was manned by a further nine gunners (numbered K, for Kanonier, 1 upwards). K1 (the height man) hand-cranked the gun up to the angle of elevation specified by B12, K2 (the azimuth man) hand-cranked it round to the azimuth setting specified by B11, and K6 set the fuse to the value given by B13. Finally, the guns were loaded and fired in as short order as possible. The other K-roles were manual handling. Warning bells from the command post synchronised the gunners from one end of the battery to the other, and, typically, all the guns in the battery fired to the same coordinates at the same command (as, indeed, could the guns of neighbouring batteries, if linked by telephone). It thus took a cooperative processing network of at least 25 highly trained and totally focused human minds, aided by the best optics and mechanics money could buy, to aim what was effectively a single gun, and it still took an average 4000 shells to down each Allied plane!

ASIDE: It was a military secret at the time, of course, but the allied bombing fleets had been instructed to fly straight and level on their final approach to their target. This had been judged the lesser of two evils, because rapid manoeuvring of large formations of aircraft invited collisions. The relatively poor flak performance was therefore not influenced by any significant changes in altitude or heading: it was all down to inaccuracies in the basic rangefinding and prediction equipment, and poor operating procedures.

Similarly, by 1939, the British Royal Artillery's Air Defence Command was also relying on optical tracking and altitude finding, supported by analog computation of azimuth, elevation, and fuse delay, followed by manual gunlaying to those parameters (Williams, 1961). Again, about a dozen highly skilled technician-operators (frequently women) were needed per battery.

The Second World War brought a further wave of improvements. An advanced US system, the "M9 Electrical Gun Director", was proposed in June 1940 by David B. Parkinson and Clarence A. Lovell of Bell Labs' Murray Hill facility [for the story of Parkinson's dream, click here]. The proposal was accepted in December 1940, and the resulting development project occupied some 400 Bell engineers, with assistance from the Ford Instrument Company and the Teletype Corporation on the altitude converter. Colonel H.B. Ely liaised. A prototype was delivered for evaluation in November 1941, and production approval given in the autumn of 1942. Full production was contracted to the Western Electric Company. Here is Bell's own description of the hardware configuration: 

"The apparatus [.....] is a combination of devices for obtaining the necessary information and making predictions for controlling the gun [.....] The electrical director is associated with a battery of four guns and an optical height finder [.....] It consists of four separate units which are transported in a trailer, namely the tracker, the computor [sic], the range converter, and the power equipment [.....] The two men on the seats of the tracker look through the telescopes and by means of controls, keep the cross hairs in the eye pieces constantly on the airplane. For example, one man is now rotating the tracker about a vertical axis and this rotation determines an angle called the azimuth. The other man raises or lowers his telescope about a horizontal axis and thus determines an angle called the elevation angle. [These] two actions are simultaneous [and] determine the direction of the airplane from the tracker. The two telescopes are driven by electrical motors, so that the rates of these motors can be set when the telescopes are following the plane smoothly, and they will continue to follow it for short periods without any aid from the men, as when the plane is flying through a cloud. However, even when the plane is flying in a straight line at constant speed the angular rates at the tracker are not constant, so the two men at the tracker must slowly shift the rates so as to keep the target on the cross hairs. If they are skilful in doing this, then, by electrical transmission systems, data, which give the direction of the airplane from the tracker at each instant of time, are continuously being fed from the tracker to the computor [sic] of the director. [New paragraph.] To determine the position of the plane in space we must add to these two direction angles another quantity called the range, that is, the air-line distance from the tracker to the target. This is furnished by either electrical or optical means [.....]. 
These quantities are known as present polar coordinates and all of them vary from instant to instant as the plane moves. [New paragraph] The first operation which the computor performs is to transform these polar coordinates into rectangular coordinates, that is, values are produced in the computor which tell how far the plane is north or south, how far it is east or west, and how far it is above the ground. [The] computor then calculates by means of electrical and mechanical mechanisms the magnitudes of these rates, which are [then] indicated on meters. [..... New paragraph] Before the computor can use these data to make predictions as to where the gun must be pointed [.....] the information concerning the power of the gun and the kind of shell that it is firing must be taken into account. [.....] These [and many other corrections] may be set in the computor by means of hand-controlled dials [.....] In addition, the electrical computor also takes into consideration the fact that the guns are located some distance from the tracker. [In fact, the] computor may be put into a place of safety underground, hundreds of yards from the guns, and still make accurate predictions. The adjustment necessary when the gun and tracker are separated is called the parallax correction." (Dr. Harvey Fletcher, Director of Physical Research, Bell Telephone Laboratories; quoted in an anonymous editorial in Bell Laboratories Record, 1943, 22(4):157-167, pp163-165; all emphases added.) 
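The polar-to-rectangular transformation Fletcher describes is ordinary trigonometry, and can be sketched in modern terms as follows. (This is an illustrative reconstruction only: the variable names and the azimuth-measured-clockwise-from-north convention are assumptions, and the M9 itself performed the calculation with electrical networks, not software.)

```python
import math

def polar_to_rectangular(azimuth_deg, elevation_deg, slant_range):
    """Convert the tracker's polar fix (azimuth, elevation, range) into
    the rectangular coordinates the M9 computor worked in: how far the
    target is north/south, east/west, and above the ground.
    Azimuth is assumed measured clockwise from north."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    ground_range = slant_range * math.cos(el)   # projection onto the ground
    north = ground_range * math.cos(az)         # north (+) or south (-)
    east = ground_range * math.sin(az)          # east (+) or west (-)
    height = slant_range * math.sin(el)         # height above the ground
    return north, east, height

# A target 10,000 yards away, due north-east, at 30 degrees elevation:
n, e, h = polar_to_rectangular(45.0, 30.0, 10000.0)
```

Differentiating these three coordinates over successive fixes gives the rates Fletcher mentions, from which the prediction of the future target position follows.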

Few pre-war gunners would have recognised the situation a mere six years later, for by 1945 the tracking was by locking-on radar, the guns themselves were self-laying by servomechanism, and progress was being made at automating the manual tasks of loading and fuse-setting. Indeed, the Bell system, impressive enough in 1942, became even more potent when the projectiles were fitted with radio proximity fuses a few months later ......

Key Invention - The Radio Proximity Fuse: Live burning or clockwork time delay fuses were rendered largely obsolete when the "VT" type radio proximity fuse (described by some as second only to the atom bomb in World War Two military significance) was combat tested in January 1943. These shells relied on a miniaturised (and exceptionally robust) onboard radio transmitter, coupled to a Doppler shift amplifier. After a short initial delay to allow the shell to clear the gun, the radio echoes from any solid objects in the vicinity could be detected, amplified, and used to detonate the main charge. [For a cut-away diagram of one of these fuses, click here, and for fuller technical background, click here, or here, or here.]
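The Doppler principle behind the fuse can be put in numbers. The beat between the transmitted signal and the echo from an approaching target has frequency f_d = 2vf/c, which for plausible values falls in the audio range and is easy to amplify. (The carrier frequency and closing speed below are illustrative assumptions, not the classified VT specifications.)

```python
def doppler_beat_hz(carrier_hz, closing_speed_m_s, c=3.0e8):
    """Beat frequency between the fuse's transmitted signal and the
    echo reflected from an approaching target: f_d = 2 * v * f / c."""
    return 2.0 * closing_speed_m_s * carrier_hz / c

# e.g. an assumed 200 MHz carrier and a 300 m/s closing speed
beat = doppler_beat_hz(200e6, 300.0)   # a few hundred hertz
```

When the amplified beat signal exceeded a preset amplitude - i.e. when the echo was strong enough to indicate a nearby solid object - the fuse detonated the main charge.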

ASIDE: Kenneth D. Smith (1905-1990) was one of many unsung heroes at Bell Labs during World War Two. He contributed significantly to proximity fuse and radar development before 1945, and then moved on to the team responsible for transistor development (see Part 3). Readers interested in this aspect of the history of computing are strongly recommended to visit the website of the Southwest Museum of Engineering, Communications, and Computation, Glendale, AZ, and consult their "K.D. Smith Collection". There are also two helpful sites devoted to wartime experiences with the British 3.7" Anti-Aircraft gun, one belonging to the British 36th Heavy AA Regiment, and the other to the New Zealand Permanent Force Old Comrades' Association, both of which carry some informative pictures.

Of course, it was not just the ballistics industry which had computation needs. Analog computing technology was also being applied to the large systems of differential equations generated by nuclear physicists, electronics engineers, and astronomers. One of the most sophisticated of the early analog devices was an electromechanical differential analyser built at MIT in the late 1920s by an electrical engineer named Vannevar Bush (1890-1974). This was completed in 1931 [picture], but the experience was dogged by difficulties with all the moving parts. Bush therefore started to draw on the radio industry's 30 years' experience with the thermionic valve .....

Key Invention - The Thermionic Valve: In 1904, John (eventually Sir John) Ambrose Fleming (1849-1945), of University College London and the Marconi Wireless Telegraph Company, invented the thermionic valve (alternatively "valve", or "vacuum diode", or "rectifying vacuum tube", or just "tube"). In its simplest form, a valve consists of an evacuated glass globe similar in appearance to a small electric light bulb, containing two electrodes, one connected to a positive supply (the anode) and the other acting as a hot filament cathode. Because only the heated cathode can emit electrons, this arrangement allows current to pass in one direction only - from cathode to anode, in the form of a beam of electrons usually referred to as a "cathode ray". Properly incorporated into an electrical circuit, therefore, a vacuum diode could rectify alternating current. Subsequent development by the American Lee de Forest (1873-1961) introduced a third electrode in between the cathode and the anode (thus turning the diode into a "triode"). This additional electrode could block the cathode ray, and thus determine whether any current made it across to the anode. This allowed one part of a complex circuit to act as an electronic switch in another part. [To see some of the circuit diagrams involved, click here; see also the separate entry for the Thyratron Valve immediately below.]

Key Invention - The Thyratron Valve: In 1913-1914, the Harvard physicist George Washington Pierce (1872-1956) filed patents for thermionic valves filled with mercury vapour rather than evacuated. This significantly changes the unit's performance when incorporated into an electrical circuit. For one thing, a thyratron circuit can deliver a higher output current, and for another the switching profile is different. In vacuum diode switching, electrons flow from the cathode to the anode whenever the electrode-electrode voltage exceeds a certain critical value, and this flow ceases as soon as that voltage falls back below it. In a gas-filled valve, however, the gas provides an ionisable medium. This means that electrons from the hot cathode can, if the anode voltage is high enough, collide with gas molecules on their way from cathode to anode, ionising them. The dislodged electrons go on to collide with further molecules, and so on, in an avalanche of increasing intensity [further technicalities]. Again this only happens at a critical voltage, known as the "ionisation potential", but this time the ion flow is self-sustaining until the potential drops below a second, markedly lower, value known as the "de-ionisation potential". Properly incorporated into a circuit, this allows a thyratron to "stay on" until "switched off". Thyratrons are thus good candidates for "remembering" pulses, and it was this memory-like property which led the Welsh physicist Charles E. Wynn-Williams to use them in his prototype logic circuit in 1931 (see below). Meanwhile, in 1913, AT&T had purchased de Forest's valve patents, and set up a research laboratory - the beginnings of "Bell Telephone Laboratories" - to develop radio technology alongside its existing telephony and telegraphy interests.
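The "stay on until switched off" behaviour is what engineers now call hysteresis, and it can be sketched as a toy software model (the two voltage thresholds below are arbitrary illustrative values, not the figures for any real tube):

```python
class ThyratronModel:
    """Toy model of thyratron switching: conduction starts when the
    anode voltage exceeds the ionisation potential, and then persists
    until the voltage drops below the (markedly lower) de-ionisation
    potential."""
    def __init__(self, ionisation_v=15.0, deionisation_v=5.0):
        self.on_v = ionisation_v
        self.off_v = deionisation_v
        self.conducting = False

    def apply(self, volts):
        if not self.conducting and volts >= self.on_v:
            self.conducting = True    # avalanche ionisation begins
        elif self.conducting and volts < self.off_v:
            self.conducting = False   # ion flow ceases
        return self.conducting

tube = ThyratronModel()
states = [tube.apply(v) for v in (0, 20, 10, 10, 4, 10)]
# the tube "remembers" the 20 V pulse through the later 10 V samples,
# and only resets when the voltage dips to 4 V
```

It is exactly this one-bit memory that made the thyratron attractive as a pulse counter.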

Bush progressively replaced electromechanical units of his machine with valve-based electronic equivalents, with no loss of performance, until the basic design prevented him going any further. He then carried out a major redesign exercise, and spent the period 1935-1942 preparing a smaller, faster, and ultimately more powerful Differential Analyser Mark 2. Nevertheless, despite all the improvements, the fundamental representation in these differential analysers remained analog, whilst the computers we are familiar with today act digitally, even when dealing with continuous variables (see Part 1). Digital calculation was known to be possible, of course, but if you had a lot of numbers to crunch and wanted the answer in a hurry it was just too slow. If digital techniques were to compete with the follow-the-pointer analog systems they needed to get a whole lot faster. Another invention was sorely needed .....

3 - Computing, 1935-1942 - The Digital Breakthrough

The eventual challenge to the analog computing industry began to take shape in the late 1930s with three independent discoveries. The first of these took place in Britain, when the Welsh physicist Charles E. Wynn-Williams (1903-1979) used thyratron valves to make both binary (1931, published 1932) and decimal (1935) automatic event counters. This was followed between 1934 and 1938, in Germany, by Konrad Zuse (1910-1995), a Henschel and Company aircraft engineer, who wired up some home-made relays to make a binary "logic circuit", and in America by George Stibitz (1904-1995), a Bell Labs engineer .....

Key Invention - Electronic Logic Circuits: These are circuits capable of using the switching ability of valves to reflect simple mathematical processes. In 1931, at the Cavendish Laboratory in Cambridge, Wynn-Williams devised valve-based circuitry as a data capture device for researching nuclear particles. This circuit was capable of electronically simulating the process whereby <01> plus <01> gives <10> (the binary version of "one plus one equals two"), complete with an electronic twos-carry mechanism, and was known as a "scale of two counter" (Wynn-Williams, 1932). By 1935, he had converted the final result back into decimal for greater user-friendliness. Wynn-Williams' original apparatus is preserved in Cambridge University's Whipple Museum.
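The behaviour of a scale-of-two counter chain can be mimicked in software: each stage halves the pulse rate of the one before it, and the chain as a whole holds a binary count. (This is a conceptual sketch only; Wynn-Williams did it with pairs of thyratrons, not code.)

```python
def count_pulses(num_pulses, num_stages=4):
    """Simulate a chain of scale-of-two counter stages. Each incoming
    pulse toggles stage 0; a stage toggling from 1 back to 0 carries a
    pulse into the next stage - exactly the binary twos-carry."""
    stages = [0] * num_stages
    for _ in range(num_pulses):
        for i in range(num_stages):
            stages[i] ^= 1          # toggle this stage
            if stages[i] == 1:      # it went 0 -> 1: no carry, stop
                break               # (1 -> 0 ripples into the next stage)
    # stage 0 is the least significant bit
    return sum(bit << i for i, bit in enumerate(stages))

# <01> plus <01> gives <10>: after two pulses the counter reads 2
```

Reading the final decimal value off the stages corresponds to Wynn-Williams' 1935 decimal conversion stage.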

Wynn-Williams, for his part, was a research physicist merely trying to count atomic particles, and took the idea no further. Accordingly, only Zuse and Stibitz proceeded with computer development, and Zuse was already ahead by several years. In 1934, as a newly qualified engineer, he began to spend his spare time designing computing circuits, chancing eventually upon much the same technique as Stibitz in the US. Where Stibitz built his binary adding circuits out of surplus telephone relays, Zuse used hand-built ones. Zuse patented his design on 9th April 1936, began construction of a working prototype on his parents' living room floor, and had his first operational machine, the Z1, ready in 1938 [more detailed account]. A facsimile of the Z1 is on show in the Berlin Technology Museum.

The Z1 was not a particularly reliable piece of hardware, but it was binary computation nonetheless, and it was programmable in the sense that problems could be presented to it as punched tape instructions. Zuse was very clear on how his machine should be used, writing that "the prerequisite for every kind of calculation is the construction of a program". He was thereby resurrecting Babbage's strategic dream of a century beforehand: a basically quite simple machine, able to do complicated things by varying the command sequence. Note that this is the diametric opposite of the analog machines, where the complexity was built into the hardware itself and the command sequence was minimal.
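Zuse's point - fixed, simple hardware doing complicated things by varying the command sequence - can be caricatured in a few lines. (The three-instruction "tape" format below is entirely invented for illustration and bears no relation to the Z1's actual instruction set.)

```python
def run_tape(tape, accumulator=0.0):
    """Execute a punched-tape-style program: the 'hardware' (the ops
    table) stays fixed, while the instruction sequence on the tape
    determines what calculation is performed."""
    ops = {
        "ADD": lambda acc, x: acc + x,
        "MUL": lambda acc, x: acc * x,
        "SUB": lambda acc, x: acc - x,
    }
    for opcode, operand in tape:
        accumulator = ops[opcode](accumulator, operand)
    return accumulator

# The same machine computes different things from different tapes:
result = run_tape([("ADD", 4.0), ("MUL", 3.0), ("SUB", 2.0)])  # (0+4)*3-2
```

Contrast this with the analog predictors above, where changing the calculation meant changing the gears.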

In fact, Zuse's only mistake was to have avoided valves. A friend of his, Helmut Schreyer (1912?-1985), suggested using them in 1936 because they, too, worked to yes/no logic and could flip from state to state much more quickly than relays, but Zuse chose not to take the risk because their burn-out rate was too high. Instead, he decided to replace the scratch-built relays in the Z1's arithmetic module with proprietary ones. He tested this redesign in the Z2 (1938-1939), and then in the Z3 (1939-1941). The Z3 consisted of a large cabinet housing 1400 memory relays (giving 64 22-bit words of storage), another full of 600 arithmetic relays, a 22-bit parallel bus, and an array of microsequencers in a control unit. It was demonstrated to the German military on 12th May 1941, but was actually too small and too slow to impress them. It was destroyed in an air raid in 1944, but a facsimile is on show in the Deutsches Museum in Munich, and many commentators believe that, because it came in a year ahead of the ABC, it rates as the world's first computer. Here is an analysis of the Z3's strengths and weaknesses, measured against the criteria of modernity discussed so far .....

Electronic rather than Electromechanical? No - electromechanical.

Digital rather than Analog? Yes.

Binary rather than Decimal? Yes - as with modern computers, input and output was in decimal, but this was converted to pass through a central binary logic unit.

General Purpose rather than Specialised? Yes, within the limitations of its rudimentary instruction set.

Stibitz, too, was short of resources, because although Bell Labs immediately set him to work on a full-sized machine (see the entry for the Bell Complex Number Computer in the data table at the end of this paper), they had a lot of other defence contracts on the go, and could not give the project their undivided attention [see the Southwest Museum of Engineering, Communications, and Computation's e-exhibition on Bell Labs at war]. As a result, the honour of being the first to put an electronic digital computer together has since been adjudged, under US patent law, to go to John Vincent Atanasoff (1903-1995) and Clifford Berry (1918-1963) of Iowa State University (ISU). After his childhood adventures with his father's slide rule (see Part 1), Atanasoff graduated from the University of Florida at Gainesville in 1925, and joined ISU as a masters student. His studies completed, he then moved to Madison, WI, to complete his PhD, and his ISU biographers explain how, while completing his doctoral thesis, he spent many hours crunching numbers on a Marchant calculator, a thoroughly tiresome experience which resolved him to develop a better and faster way of doing things. Atanasoff received his PhD in theoretical physics in 1930, and returned to ISU as a lecturer. This exposed him to the work of Bush and Stibitz, and through 1938-1939 he researched some novel design ideas of his own. When he asked for an assistant, he was assigned Berry, a well-recommended young graduate who had hand-built his own radio at the age of 11 years, and in 1939 they began work on a machine to put Atanasoff's circuit designs into operation. The resulting Atanasoff-Berry Computer (ABC) used binary arithmetic logic circuits, and was completed in 1942 [for further details visit the Ames Lab, Iowa State University website]. Here is an analysis of the ABC's strengths and weaknesses, measured against the main criteria of modernity discussed so far .....

Electronic rather than Electromechanical? Yes (valve-based).

Digital rather than Analog? Yes.

Binary rather than Decimal? Yes.

General Purpose rather than Specialised? No.

4 - Timeline, 1925-1942

And finally, here is a timeline for the period from 1925 to 1942. The material contained is from many sources, including Berkeley (1949/1961), Wilkes (1956), Hollingdale and Tootill (1965), and Evans (1983):

Name: JAKOB
Claim to fame: First generation analog flak predictor
Project leader(s): German military
Purpose: Ballistics calculations
Date: Operational 1917
Sponsor: Carl Zeiss-Jena

Name: IBM 600 series Tabulating Machines
Purpose: Commercial and scientific data handling
Date: 1930s
Sponsor: IBM
Remarks: The latest "sort and tabulate" punched card systems.

Name: Differential Analyser Mark 1
Claim to fame: Largest analog computer to date
Project leader(s): Vannevar Bush
Date: Started mid-1920s; operational 1931; in development until 1935, when replaced by the Mark 2
Sponsor: MIT
Remarks: Initially electromechanical, but later modified to include some electronic sub-assemblies.

Name: Flakkommandohilfsgerät 35
Claim to fame: Second generation analog flak predictor
Project leader(s): German military
Purpose: Ballistics calculations
Date: 1935
Sponsor: Carl Zeiss-Jena

Name: Model K
Claim to fame: First US prototype relay-based binary logic circuit calculating machine
Project leader(s): George Stibitz
Date: 1937
Sponsor: Bell Telephone Laboratories
Remarks: Prototype only - the "K" stands for "kitchen table", which is where Stibitz did his basic experimentation.

Name: Z1
Claim to fame: First German prototype electromechanical binary logic circuit calculating machine/computer
Project leader(s): Konrad Zuse and Helmut Schreyer
Date: 1936-1938
Sponsor: No major sponsor
Remarks: The memory and arithmetic unit components were entirely hand made.

Name: Z2
Project leader(s): Konrad Zuse and Helmut Schreyer
Date: 1938-1939
Sponsor: No major sponsor
Remarks: An improved version of the Z1, in which the hand made units of the arithmetic unit were replaced with relays.

Name: Complex Number Computer
Claim to fame: First binary computer to be operated over a telephone line
Project leader(s): George Stibitz and Samuel Williams
Date: Started 1939; operational January 1940
Sponsor: Bell Telephone Laboratories
Remarks: Also known as the Bell Labs "Model 1 Relay Computer". Not surprisingly, given its pedigree, the machine ended up being built largely out of scavenged telephone exchange components.

Name: Atanasoff-Berry Computer (ABC)
Claim to fame: First valve-based binary digital computer
Project leader(s): John Atanasoff and Clifford Berry
Purpose: Scientific computing
Date: Started 1939; operational 1942
Sponsor: Iowa State University

Name: Z3
Claim to fame: First German electromechanical digital computer
Project leader(s): Konrad Zuse
Date: Started 1939; demonstrated 12th May 1941, Berlin, Germany
Remarks: The full potential of this machine was not developed, since it did not get development sponsorship from the German military.

Name: Differential Analyser Mark 2
Project leader(s): Vannevar Bush
Date: 1935-1942
Sponsor: MIT

5 - References

See Main File.


[Codes and Ciphers in History, Part 1 - to 1852]

[Codes and Ciphers in History, Part 2 - 1853 to 1917]

[Codes and Ciphers in History, Part 3 - 1918 to 1945]

Recommended Reading

 "Between Human and Machine: Feedback, Control, and Computing Before Cybernetics"

Mindell, David A. (2002)

To see an abstract, or to order this book, click here

[Mindell's jacket]