Course Handout - Hebbian Theory
Copyright Notice: This material was written and published in Wales by Derek J. Smith (Chartered Engineer). It forms part of a multifile e-learning resource, and, subject only to acknowledging Derek J. Smith's rights under international copyright law to be identified as author, may be freely downloaded and printed off in single complete copies solely for the purposes of private study and/or review. Commercial exploitation rights are reserved. The remote hyperlinks have been selected for the academic appropriacy of their contents; they were free of offensive and litigious content when selected, and will be periodically checked to confirm that they have remained so. Copyright © 2004-2018, Derek J. Smith.
First published online 08:00 GMT 19th February 2004, Copyright Derek J. Smith (Chartered Engineer). This version [2.0 - copyright] 09:00 BST 5th July 2018. Earlier versions of this material appeared in Smith (1996; Chapters 4 and 6). It is repeated here with minor amendments and supported with hyperlinks.
1 - Introduction
The search for the biological engram [glossary] is perhaps the most persistent of all the persistent issues in the study of memory. And yet at one level of analysis, the fact of its existence is blatantly obvious: all but the most primitive of organisms show signs of learning, and this learning has to be recorded somewhere. There simply has to be a structural trace of some sort within the nervous system; it is just that these structural changes have never actually been seen. In other words, while the engram's existence is beyond question its precise nature is still poorly understood. In this handout we introduce one of the two main focuses of enquiry, namely cell assembly theory (also known as neuronal net theory), which looks at how neurons might connect themselves up to become engrams. A separate handout on "The Neurobiology of Memory" then takes a more reductionist approach and looks at the subcellular dynamics of neurotransmission and neural growth (that is to say, at the processes which allow the networks to connect themselves up in the first place).
We begin this part of the story in 1950, when the American neuropsychologist Karl Lashley published a paper entitled "In Search of the Engram". In it, he confessed to 30 years of "inconsistent and often mutually contradictory" results from experimentation into the localisation of memory traces (Lashley, 1950, p455). Lashley's experimental paradigm had been developed just after World War I, and involved surgical ablation (in rats) of varying amounts of brain tissue, followed by painstaking experimentation on the animals' residual learning and problem solving abilities. He found, for example, that if you correlated maze learning error scores against extent of physical injury, a broadly linear relationship emerged - the greater the ablation, the greater the error score. What was really challenging about these data, however, was that it did not seem to matter whereabouts the lesion had been created: a 30% frontal lesion typically had the same deleterious effect as a 30% parietal lesion, and so on. This was tantamount to saying that there was no localisation of ability within the nervous system - a surprising conclusion, given that in humans several quite pronounced functional areas had by then already been identified [as shown in Kleist (1934), for example].
It was precisely this sort of observation which had led Lashley twenty years earlier to formulate his joint doctrines of mass action and equipotentiality. In the first of these - his Law of Mass Action - he argued that it was cortical volume - of itself - which was the greatest single determiner of mental ability. Thus .....
"efficiency of performance is conditioned by the quantity of
nervous tissue available and is independent of any particular area or
association tract" (Lashley, 1929, p88).
".....
for some problems, a retardation results from injury
to any part of the cortex, and for equal amounts of destruction the retardation
is approximately the same. The magnitude of the injury is important; the
locus is not." (Ibid, p60; emphasis added.)
His second conclusion - his principle of equipotentiality - simply extends this argument, and states that for the law of mass action to be true every part of the brain had to be equally capable (hence equi-potential) of doing a particular job. If one part was damaged, then other parts simply took over.
2 - Hebbian Cell Assembly Theory
Naturally, there have been many theories of biological memory put forward over the years apart from Lashley's, but (fortunately) these boil down to only one real proposal, namely that memorising involves connecting up numbers of individual neurons into circuits, such that the pattern of those circuits somehow encodes and defines that which was to be memorised. Bain (1872) put it this way .....
"Actions,
sensations, and states of feeling, occurring together, or in close succession,
tend to grow together, or cohere, in such a way that when any of them is afterwards
presented to the mind, the others are apt to be brought up in idea." (Bain, 1872, p85.)
These circuits go by a variety of names, such as neuronal nets or cell assemblies. However, the idea that neurons can be combined together into networks cannot be meaningfully discussed without first considering the point at which neurons communicate with each other, namely the synapse (Greek synapsis = connection). Modern techniques have taught us a lot about the microstructure of the synapse, and yet the original concept goes back to the closing years of the nineteenth century. The concept itself has been credited to the German physiologist Dubois-Reymond (1875), but the precise term was not coined until later (Foster and Sherrington, 1897). Sherrington then made much of synaptic mechanisms in his now classic discussions of the nature of the reflex arc (Sherrington, 1906).
One of the first to suggest that learning was accompanied by the formation of new synaptic connections was Santiago Ramon y Cajal (eg. 1911). The general idea was that synapses made it possible for neural pathways to connect themselves up on demand - that is to say, as learning took place. Pathways could thus appear where previously just disconnected neurons had existed. Moreover, the biological act of creating those pathways underpinned the psychological act of memorising the experience in question. In other words, there was no shortage of neurons in the brain of an inexperienced organism, merely of the appropriate connections between them, and because he concentrated so much on neural connections, Kandel (1976) describes Ramon y Cajal's notions as "cellular connectionism" (p212). Others have described his concept as the "long-term potentiation" (LTP) of a neural circuit, and LTP - not unreasonably - "has long gained the attention of students of memory, since it is experience dependent and enduring", the two primary properties of the engram (Dudai, 1989, p89). You will also encounter the term "neural plasticity", by which is meant the ability of brain tissue to respond automatically to an experience, thus retaining a "deformation" in much the same way that the "memory" of a key can be taken by pressing it into a piece of soft wax.
Some years after Ramon y Cajal, Lorente de No (1938) extended the debate by microscopically analysing the layout of the synaptic "buttons" on neural cell bodies. Working at the limits of magnification of the microscopes then available, he found many hundreds of synaptic buttons dotted about the neural membrane, thus reinforcing suspicions as to their role in neural circuitry. Subsequent studies have confirmed this view, and with today's very powerful microscopes veritable forests of synaptic buttons can be seen, and not just on the neural soma but out on the dendritic trees as well. Alkon (1989) puts it this way .....
"Many
of the molecular transformations involved in memory formation appear to take
place in the neuronal branches called dendritic trees, which receive incoming
signals. The trees are amazing for their complexity as well as for their
enormous surface area. A single neuron can receive from 100,000 to 200,000
signals from the separate input fibres [and so] an almost endless number of
patterns can be stored without saturating the system's capacity." (Alkon, 1989, p27.)
Another of Lorente de No's contributions was to show how interneurons could be used to feed excitation back into a given neural circuit, thus keeping that circuit active long after the original source of excitation had been removed. This arrangement is commonly referred to as a reverberating circuit.
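The logic of a reverberating circuit is easy to see in a toy discrete-time simulation. The sketch below (Python) is illustrative only - the gain values and the single-loop architecture are assumptions made for brevity - but it captures the essential point: once the feedback compensates for losses, activity persists long after the stimulus is gone.

    def reverberate(feedback_gain, steps=8):
        """Toy reverberating loop: at each step an interneuron feeds a
        fraction of the circuit's activity back into the circuit."""
        activity = 1.0              # initial burst from an external stimulus
        trace = []
        for _ in range(steps):      # the stimulus has now been removed
            activity = feedback_gain * activity
            trace.append(round(activity, 3))
        return trace

    print(reverberate(0.5))   # weak feedback: activity dies away
    print(reverberate(1.0))   # feedback offsets the loss: the circuit reverberates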
That said, the most influential exposition of neuronal net theory was in Donald Hebb's 1949 book "The Organization of Behavior". Hebb saw the interlinking of neurons as creating what he called a "cell assembly", which he described thus .....
".....
a diffuse structure comprising cells in the cortex and
diencephalon (and also, perhaps, in the basal ganglia of the cerebrum), capable
of acting briefly as a closed system." (Hebb, 1949, p. xix.)
Hebb's ideas of how cell assemblies form and function can be gathered from the following .....
"The
general idea is an old one, that any two cells or
systems of cells that are repeatedly active at the same time will tend to
become 'associated', so that activity in one facilitates activity in the
other." (Ibid, p70.)
"When
one cell repeatedly assists in firing another, the axon of the first cell
develops synaptic knobs (or enlarges them if they already exist) in contact
with the soma of the second cell." (Ibid, p63.)
The idea of cells repeatedly assisting each other's firing is particularly well brought out in what has since come to be known as "Hebb's Rule".....
Key Concept - "Hebb's Rule": "Let us assume then that the persistence or repetition of a reverberatory activity (or 'trace') tends to induce lasting cellular changes that add to its stability. The assumption can be precisely stated as follows: When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased." (Hebb, 1949, p62; italics original.)
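In modern computational notation, Hebb's Rule is usually written as a weight-update equation: the strength of the connection from A to B grows in proportion to the product of the two cells' activities. The following sketch is a minimal illustration only, assuming rate-coded units and an invented learning-rate parameter eta; it is not drawn from Hebb's own text.

    def hebbian_update(w, pre, post, eta=0.1):
        """Hebb's Rule as a weight update: the connection from cell A
        (activity 'pre') to cell B (activity 'post') is strengthened in
        proportion to their coincident activity."""
        return w + eta * pre * post

    # Cell A repeatedly "takes part in firing" cell B .....
    w = 0.0
    for _ in range(10):
        w = hebbian_update(w, pre=1.0, post=1.0)
    print(w)   # the weight has grown: A's "efficiency" in firing B is increased

Note that this basic form only ever increases weights; the need for a term which can also decrease them is precisely the amendment discussed in Section 3 below.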
Hebb was thus the major exponent of the nowadays conventional view that engrams - at the holistic level of analysis, at least - are neuronal nets.
3 - Post-Hebbian Cell Assembly Theory
The biochemical investigation of memory (the subject of the companion handout on "The Neurobiology of Memory") follows a variety of different lines of enquiry, all converging on the processes of protein synthesis. The mechanisms emerging are amazingly complex and still largely under investigation; nevertheless, they gradually support the older concepts of reverberation and synaptic growth, thus broadly confirming the cell assembly approach to memory.
We begin by revisiting the notion of the cell assembly, to see how the concept has evolved since Hebb's days. What we find is that there have been regular minor upgrades to the concept, but that it still remains by far the best candidate for the job of engram. Moreover - and the significance of this cannot be overstated - the concept is now irretrievably merged with an area of study known as neural networks - part of the science of artificial intelligence. In other words, psychology, biology, computing, electronics, and robotics now share a single common goal, namely unravelling the processes of memory and cognition; it is just that some workers prefer to study men and women directly, whilst others prefer things they can more easily dissect, such as goldfish, worms, and printed circuit boards.
The developments of note are:
Milner (1957): The first refinement of Hebbian ideas came from Peter Milner in 1957. Milner, a student of Hebb's, published his suggestions in a paper entitled "The Cell Assembly, Mark Two". This was a largely Hebbian approach, although for its underlying mechanism it relied more on the concept of opening up pre-existing-but-inactive synapses than on new axon growth. It is therefore somewhat more compatible with the physiological studies reviewed at the end of Chapter 5 than was Hebb's original formulation.
Pleasure-Pain Signalling: The next improvement came when it was asked how a given cell assembly ever comes to know the significance of what it is representing. How does it know that a particular external occurrence is worth remembering, and how does it know whether to remember it favourably or unfavourably? What is it, in other words, which gives memory its adaptive value? One of the main workers here has been the British anatomist John Young, whose research into memory phenomena in cephalopods - octopus and squid - goes back to the late 'thirties. During the 'fifties and 'sixties, Young was based at the Stazione Zoologica in Naples, leading a team of biologists including Stuart Sutherland and Brian Boycott. Together, they analysed the relationship between octopus vision, octopus problem solving, octopus hunting behaviour, and the various lobes of that species' brain. In short, the results from many separate experiments indicated that Lashley's law of mass action held for invertebrates as well as vertebrates. One member of the team put it this way:

"[Lashley] concluded that, in the organisation of a [mammal] memory, the involvement of specific groups of nerve cells is not as important as the total number of nerve cells available for organisation. A similar situation appears to hold true in the functioning of the vertical lobe of the octopus brain; there is a definite relation between the amount of vertical lobe left intact and the accuracy with which a learned response is performed []. This seems to suggest that, at least in the octopus's vertical lobe and the mammalian cerebral cortex, memory is both everywhere and nowhere in particular." (Boycott, 1965; emphasis added.)
It is the words "everywhere and nowhere" which are the most significant, because they drew the Naples team strongly towards the idea of neuronal net memory mechanisms. Engrams laid down in widely distributed networks of neurons would behave in precisely the required fashion. But such widely distributed networks would only function at all if each part of the network could somehow be kept informed as to the good-bad nature of the current input. That is to say, it was not enough merely to recognise a stimulus: engrams needed also to be coded either as to-be-approached or as to-be-avoided. Memory was only worth having if it was biologically adaptive; if it was going to help you survive. And the necessary coding could only take place, Young argued, if there existed results indicator pathways in the CNS capable of tagging each engram with some sort of pleasure-pain evaluation.
In fact, in Young's analysis at least five distinct types of neuron were needed for such a memory system to work effectively, and these, and the circuitry linking them, are shown in Figure 6.1.
Young's team also tried to lay out its findings in circuit diagram form. Young (1964) cites work by Maldonado (1963) which showed how an array of many "memory units" could be controlled by a single receptor system that Young termed "noci-hedono". This latter system served to tell all the others whether what they were doing was a good idea or not, that is to say, whether it was nasty and to be avoided, or nice and to be repeated. And much the same idea has resurfaced more recently in the work of Gerald Edelman, of the Neurosciences Institute, New York, who postulates (eg. Edelman, 1994) what he calls a value system, the role of which he describes as follows:
"What
the value system does is it sends a chemical signal to the rest of the brain
such that those connections that were just being used to produce [an] action
which was valuable will become strengthened." (Edelman,
1994, p11.)
Young's results indicators and Edelman's value system add considerable value to the Hebbian cell assembly concept, albeit they force a slight amendment to the wording of Hebb's famous Rule, which should now read:
"When
an axon of cell A is near enough to excite a cell B and
repeatedly or persistently takes part in firing it, some growth process or
metabolic change takes place in one or both cells such that A's efficiency,
as one of the cells firing B, is increased [if the results are
simultaneously pleasure-tagged or decreased if the results are simultaneously
pain-tagged]." (Hebb, 1949, p62; italics original;
revised as shown [].)
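In computational terms, Young's results indicators and Edelman's value signal convert the two-factor Hebbian update into what is nowadays called a three-factor rule. The sketch below is one possible way of writing the amended Rule, not a definitive implementation; the value argument (+1 for a pleasure-tagged outcome, -1 for a pain-tagged one) is an invented stand-in for the noci-hedono or value-system signal.

    def tagged_hebbian_update(w, pre, post, value, eta=0.1):
        """Three-factor update: coincident pre/post activity strengthens
        the weight when the outcome is pleasure-tagged (value = +1) and
        weakens it when the outcome is pain-tagged (value = -1)."""
        return w + eta * value * pre * post

    w = 0.5
    w = tagged_hebbian_update(w, pre=1.0, post=1.0, value=+1)  # strengthened: approach
    w = tagged_hebbian_update(w, pre=1.0, post=1.0, value=-1)  # weakened: avoid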
Allport (1985) provides another recent statement of cell assembly theory and its role in forming engrams. He begins with the concept of auto-association, which he describes as follows:
"If
the inputs to [a] system cause the same pattern of activity to occur
repeatedly, the set of active elements constituting that pattern will become
increasingly strongly interassociated. That is, each
element will tend to turn on every other element [] and (with negative weights)
to turn off the elements that do not form part of the pattern. To put it
another way, the pattern as a whole will become 'auto-associated'. [] We
may call a learned (auto-associated) pattern an engram." (Op cit, p44; italics original, but bold added.)
Auto-association is thus Hebb's Rule in yet another disguise. Allport's "auto-associated pattern" is the same concept as Hebb's "closed system" of neurons: auto-association is simply what needs to happen to make a given cell assembly stand out from its background matrix.
Allport then points out that as far as memory retrieval is concerned, the task is essentially one of "reactivating" the engram in situ. This allows us to view retrieval as more or less the opposite process of consolidation. Instead of a reverberating trace dying slowly away and leaving a permanent patterning in the neurons it had been stimulating, we now start with the permanent trace, switch it on somehow, and watch the electrical activity simply reappear as if by magic.
Allport then identifies five "interesting consequences" of an auto-associated memory trace, namely (a) that the engram would be "stable" and capable of maintaining itself over time, (b) that it would demonstrate "part-to-whole" retrieval, such that the whole engram could be activated by stimulating only one part of it, (c) that it would take longer to recognise a given input the more similar it was to other stored engrams (especially if there were several potential matches to be sorted out), (d) that it would be able to recognise previously unknown inputs by automatically treating them as examples of the most similar known input (a process known as "generalisation", or "categorical capture"), and (e) that many engrams could be superimposed on the same matrix without mutual interference (albeit only one of these might be active at a time).
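Consequences (a), (b), and (e) can all be seen in a miniature auto-associative network of the kind Allport describes. The sketch below uses a standard Hopfield-style matrix memory with +1/-1 units (the two six-element patterns are invented for illustration, not taken from Allport): it stores two engrams in a single weight matrix and then retrieves one of them whole from a damaged fragment.

    import numpy as np

    patterns = np.array([[1, -1, 1, -1, 1, -1],     # engram 1
                         [1, 1, -1, -1, 1, 1]])     # engram 2

    # Auto-association: each element learns to turn every other element
    # on (positive weights) or off (negative weights) - a Hebbian outer
    # product, summed over both patterns and superimposed on one matrix.
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0)

    # Part-to-whole retrieval: probe with a fragment of engram 1
    # (the last three elements are missing, coded as 0).
    probe = np.array([1, -1, 1, 0, 0, 0], dtype=float)
    for _ in range(5):                  # let the network settle
        probe = np.sign(W @ probe)
    print(probe)                        # the complete first engram reappears

Retrieval here is exactly Allport's "reactivation" of the engram in situ: the permanent patterning in the weights is switched on by a partial cue, and the full pattern of activity reappears.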
The issue of categorical capture is further explored in the following extract:
".....
matrix memory systems automatically respond to the common elements, or prototypes,
from a set of related, learned instances, where the 'prototype' is the pattern
having the highest correlation with [the] entire set of instances, even though
the prototype pattern itself was never previously encountered[.] To put the
same point in a slightly different way, matrix memory systems extract
'semantic' memory [] as an automatic by-product of the encoding of particular,
related, 'episodic' instances." (Op cit,
p49; italics original; bold emphasis added.)
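The prototype effect described in this extract is equally easy to demonstrate. In the sketch below (illustrative assumptions throughout: a 32-element invented prototype, four random element-flips per learned instance), a matrix memory is trained only on noisy variants of a pattern it never sees in pure form, yet the pure pattern produces the strongest response.

    import numpy as np

    rng = np.random.default_rng(0)
    prototype = rng.choice([-1, 1], size=32)     # never itself presented

    # Learn twenty noisy instances, each with four elements flipped.
    W = np.zeros((32, 32))
    for _ in range(20):
        instance = prototype.copy()
        instance[rng.choice(32, size=4, replace=False)] *= -1
        W += np.outer(instance, instance)

    def response(pattern):
        """Strength of the matrix memory's response to a pattern."""
        return pattern @ W @ pattern

    print(response(prototype))    # strongest response of all .....
    print(response(instance))     # ..... beating even the last learned instance

This is the "semantic from episodic" extraction the quotation describes: the common elements of the learned instances reinforce one another in the matrix, while the uncorrelated noise averages away.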
4 - References
See the Master References List