Course Handout - Experiential Learning: The Knowledge Structures and the Cognitive Processes

Copyright Notice: This material was written and published in Wales by Derek J. Smith (Chartered Engineer). It forms part of a multifile e-learning resource, and subject only to acknowledging Derek J. Smith's rights under international copyright law to be identified as author may be freely downloaded and printed off in single complete copies solely for the purposes of private study and/or review. Commercial exploitation rights are reserved. The remote hyperlinks have been selected for the academic appropriacy of their contents; they were free of offensive and litigious content when selected, and will be periodically checked to have remained so. Copyright © 2010, High Tower Consultants Limited.

 

First published online 08:00 BST 17th June 2004, Copyright Derek J. Smith (Chartered Engineer). This version [HT.1 - transfer of copyright] dated 18:00 14th January 2010

The material on Piaget previously appeared in Smith (1996b; Chapter 8), and is repeated here with minor amendments and supported with hyperlinks. The paper as a whole is intended primarily for our psychology and speech and language therapy students, as conceptual underpinning for their respective professional practicums.

 

1 - Introduction - A Little Piaget

This paper is about "Experiential Learning" (EL), a system of educational practice designed according to the principle that knowledge cannot properly be acquired in isolation from actual and sustained practical experience. To understand EL, therefore, we have firstly to understand (a) the psychology of knowledge itself, and (b) the cognitive science of its transmission from one generation to the next; and these - to give fair warning - are massive, sometimes decidedly obscure, and as-yet-incomplete disciplines. So let us begin with perhaps the most accessible psychological description of knowledge, namely that put forward during a lifetime's penetrating observation and analysis by the Swiss "epistemologist", Jean Piaget (1896-1980). This is, of course, the theory which gave us the now-famous "Piagetian stages" of human intellectual development, as set out below .....

(a) Sensorimotor Period: This is the simplest form of human intelligence, because it is based on the most primitive forms of knowledge. The period lasts from birth to roughly 2 years. At the outset, the infant relies solely on an inborn set of reflexes, but by its completion these have been massively modified by subsequent experience. Nonetheless, the characteristic quality of this level of intelligence is that it involves the perceptual and motor systems more or less in isolation. The main memory and problem solving systems are not yet in place, and so can play little part in modulating what goes on "beneath" them. The sensorimotor period consists of six stages (abbreviated SM1 through SM6), as now described .....

i) Sensorimotor Stage 1 - Reflexes: SM1 is the simplest form of sensorimotor intelligence. It begins as the ability of sensory input to trigger and deliver simple reflexes (which Piaget regards as innate schemas). These are gradually modified and improved with practice. This stage lasts roughly from age 0 - 1 month.

ii) Sensorimotor Stage 2 - Primary Circular Reactions: SM2 is an improvement on SM1 intelligence, and comes when behaviours can be (a) repeated, and (b) inter-coordinated. The objects of these behaviours remain parts of the body and close objects. What is important is that circular reactions (see panel) emerge between the actions of one part of the body and subsequent input from another, thus: "When the hand can grasp what is held in the mouth or bring to the mouth what it has grasped, and when the hand can grasp an object that the eyes see, or carry in front of the eyes an object that the mouth is holding, then the hitherto separate images that the eye, mouth and hand have of the object unite into one ....." (Spinozzi and Natale, 1989, p28). This stage lasts roughly from age 1 - 4 months.

iii) Sensorimotor Stage 3 - Secondary Circular Reactions: SM3 is an improvement on SM2 intelligence, and comes when primary behaviours start to be applied to more distant objects. The typical opportunity for this sort of improvement to occur is when the subject accidentally acts upon an object and comes to notice what happens as a result (a noise, or movement, etc). This is another instance of a circular reaction, because the subject then repeats the original action specifically to make the original outcome happen again. (Piaget describes this as "making interesting sights last".) This, of course, rapidly leads to cause-and-effect associations being made. This stage lasts roughly from age 4 - 8 months.

iv) Sensorimotor Stage 4 - Coordination: SM4 is an improvement on SM3 intelligence, and comes when SM3 sequences develop more complex organisation. The typical ability here is that of coordinating a series of individually simple sequences so as to achieve a desired goal. In other words, this is the stage when planning and intentionality are first seen. This rapidly leads to means-end relationships being formed. This is also the stage where the phenomenon of object permanence starts to become apparent. This stage lasts roughly from age 8 - 12 months.

v) Sensorimotor Stage 5 - Tertiary Circular Reactions: SM5 is an improvement on SM4 intelligence, and comes when deliberate trial and error exploration of objects takes place. This is the most advanced type of circular reaction so far. It is the stage of what Miller calls the "infant scientist" (p49). This enables the SM4 means-end relationships to be achieved in progressively larger numbers of ways. Object permanence behaviours continue to improve. Onomatopoeic language begins, with words like "bow-wow" and "choo-choo". This stage lasts roughly from age 12 - 18 months.

vi) Sensorimotor Stage 6 - Invention: SM6 is an improvement on SM5 intelligence, and comes when thought processes become internal. This is because the child can now process mental symbols of the objects in the real world, rather than the objects themselves. This shows itself in the ability for deferred imitative behaviour, that is to say the imitation of a piece of behaviour some time after exposure to it. This makes for a quantum leap in cognitive ability because it allows problems to be taken away from the scene and solved at leisure at another time and place. Object permanence behaviours are now fully established, with hidden objects being persistently looked for because their continued existence somewhere is known. This stage lasts roughly from age 18 - 24 months.

Readers unfamiliar with the psychological usage of the term "concept" should read one of the several Internet introductions to the theory of knowledge [click for example] before proceeding. Alternatively, use the [glossary] links as and when you come to them.

(b) Representational Period - Preconceptual Phase: What Piaget called "preconceptual" representation constitutes a quantum-leap improvement in intelligence, and involves a more advanced form of knowledge. One of the main advances derives from the existing (SM6) ability to use symbols. These symbols are internal representations ("preconcepts") of the world, and are then processed by an emerging repertoire of "cognitive operations", such as enclosure, proximity, and line and angle. However, there remain significant weaknesses in these thought processes, distinguishing them from the more fully mature "concepts" [glossary] which come next. For example, preconcepts still retain large sensorimotor elements. Also, the cognitive operations are not reversible, and are characterised by being self-referenced (i.e. egocentric) and rigid. This stage lasts roughly from age 2 - 4 years.

(c) Representational Period - Intuitive Phase: This level of intelligence is characterised by the gradual replacement of preconcepts with full (and therefore far more intellectually efficient) concepts. This stage lasts roughly from age 4 - 7.5 years.

(d) Operational Period - Concrete Operational Phase: Having invested in a suitable repertoire of conceptual knowledge, the child now starts to use it in ever more advanced ways. The main advances here come from the still growing repertoire of cognitive operations, and from the decreasing egocentricity of thought. Now, for example, the child acquires the mathematical operations of addition, subtraction, multiplication, and division, as well as those of class inclusion and conservation. However, there still remain weaknesses in these thought processes, especially in such areas as the hypothetical. In short, concrete thinkers can only successfully process the actual world. They are far poorer at seeing how things might be, if only circumstances were different. This stage lasts roughly from age 7.5 - 11 years.

(e) Operational Period - Formal Operational Phase: This is a major qualitative improvement upon the concrete operational phase. The main advances here come from the ability to generate and test hypotheses. This in turn allows operations to be carried out upon operations, and allows thought to become abstract as well as logical. The development of this stage lasts roughly from 11 - 15 years, although it has been suggested that not everybody actually gets through it .....

ASIDE: This latter line of argument became popular in the late 1970s, thanks to papers by Long, McCrary, and Ackerman (1979) and Shute (1979). Estimates of the proportion of normal adults lacking the ability for formal operational thought come out at 40% to 50% or so, increasing to a shortfall of 80% or worse in old age. The implications of this rather dismal statistic for the future of human life on Earth have yet to be worked through.

The formal operational period consists of two stages, as now described .....

i) Early Formal Stage (11 - 12 yr): Ability to follow an argument regardless of its concrete content. Conservation of volume as well as quantity.

ii) Late Formal Stage (13 - 15 yr): Efficient use of the rule of one variable. Hypothetico-deductive thought.

2 - Introduction - Educational History to 1949

"Exercise is the beste instrument of learnynge" (Recorde, 1557, "The Whetstone of Witte").

It is not always immediately apparent, but several of the most influential ideas within modern education came initially from philosophy. The study area in question was Epistemology, the theory of knowledge [which is why Piaget, an educational theorist, described himself as an epistemologist]. The word is made up as follows .....

Epistem- [Greek episteme = knowledge] + -ology [Greek logia = discoursing]

Interest in epistemology goes back to ancient times, to classical works such as Plato's Republic (still arguably the best inspirational sourcework for modern education [more history and background]). Here is how Castle (1961) explains the classical position ..... 

"The cultural problem facing Athens in the fifth century BC was not unlike that of nineteenth century England. [.....] As we have seen, the craftsman, the trader, and the newly rich were equally enfranchised [but] they were not natural heirs to the ancient traditions of leadership which reposed in the old landed aristocracy whose areté [more on the meaning of this word later], expressed in martial courage, was deemed to be inseparable from noble birth. [.....] The Athenians solved their problem in a characteristically Greek way [with] the development of education, largely free from state interference, whose aim was to make the good life available to all. And the good life, it was assumed, depended on the development of the whole personality in a balanced relationship of its physical, intellectual, aesthetic, and moral aspects." (Castle, 1961, pp42-43.)

We now jump forward more than two full millennia to three of the 19th century's educational visionaries, namely the Swiss Johann Heinrich Pestalozzi (1746-1827), the German Friedrich Froebel (1782-1852), and the British philosopher-psychologist Herbert Spencer (1820-1903) [readers preferring to see a more complete history of the missing years may click here]. The historical significance of these three pioneer educators lies in the fact that they brought the Athenian concern with developing the whole personality down into the very early years of a child's education. Pestalozzi's work with children gave us activity learning and an emphasis on self-expression, whilst Froebel's novel kindergarten inspired the modern nursery school, and Spencer (1861) urged that children should be led to make their own investigations. "They should be told as little as possible," he wrote, "and induced to discover as much as possible" (Spencer, 1861, cited in Birchenough, 1914, p290; emphasis original). Spencer was an influential exponent of science both within the curriculum and in the manner of its teaching, and saw education as a key influence on nothing less than the future of humankind. His system also followed Froebel in recognising the primacy of personal experience, and he was thus one of the first to realise the importance of what would today be referred to as "discovery learning".

And then came perhaps the most famous innovator of them all, namely Maria Montessori (1870-1952), creator of what is still marketed as the "Montessori Method". The method itself was derived from the earlier experimental work of Itard and Séguin [timeline], and emphasised practical skills, progressively taught. Montessori's first school was a one-room affair in a run-down tenement, and was opened in January 1907 in the San Lorenzo district of Rome. It was designed to serve the families who lived in the block, was referred to as the casa dei bambini [Italian = "children's house"], and was based on giving even the smallest children practical skills and then fostering their freedom to enjoy and work beyond them. Thus .....

"Any pedagogical action, if it is to be efficacious in the training of little children, must tend to help the children to advance upon [the] road of independence. We must help them to learn to walk without assistance, to run [etc., etc.]. We must give such help as shall make it possible for children to achieve the satisfaction of their own individual aims and desires. All this is a part of education for independence. We habitually serve children; and this is not only an act of servility toward them, but it is dangerous, since it tends to suffocate their useful, spontaneous activity. We are inclined to believe that children are like puppets, and we wash them and feed them as if they were dolls. We do not stop to think that the child who does not do, does not know what to do." (Montessori, 1912, p97; italics original, bold emphasis added.)

Basically, the Montessori method requires the teacher to create an atmosphere of calm, order, and joy, and to be there to encourage children in all their efforts, allowing them to develop self-confidence, concentration, and "joyful self-discipline" (Association Montessori Internationale website, 3rd August 2001). Practical activities are emphasised at all ages. At ages 3-6 years, for example, children will use simple hand tools and kitchen utensils, and during ages 6-12 years this will be extended to include planning and cooking meals, public speaking, sewing, woodworking, animal care, gardening, etc. Another of Montessori's secrets was to ensure contact between children of different ages, so that they could learn from each other. [For fuller details of the Montessori Association's approach, click here.]

A similar set of conclusions had also been reached at university level by those responsible for satisfying 19th century society's rapidly expanding demand for chemists, physicists, doctors, and engineers. Under this heading we offer the Glasgow philosopher George Jardine (1742-1827) and the London chemist Henry E. Armstrong (1848-1937). Jardine was professor of philosophy at the University of Glasgow from 1774 to 1826, and reflected at length upon how best to remove the tedium from the university learning process, gradually rejecting a curriculum framed only to support "the disputes and wranglings of divines, and of little use to the lawyer or physician" in favour of one which cultivated "all the powers of intellect" and which gave them "appropriate subjects for their exercise" (Jardine, 1818, cited in Gaillet, 1998). Half a century later, Armstrong took up a lecturing post at St Bartholomew's Hospital, and was immediately concerned at how dull his first year medical students were. He therefore visited the schools which provided him with his undergraduates, and so appalled was he with the shallowness of that preparation that he began a 20-year crusade to improve things. With the co-operation of the British Association for the Advancement of Science and the Chemical Society he lobbied examination boards across the country to have greater emphasis placed on practical experimentation, problem solving, and intelligent questioning, and his legacy can be seen in the laboratory and project work on the modern science and technology curriculum.

The progressive movement was continued in the early 20th century by the American educational philosopher, John Dewey (1859-1952). He wrote a number of books on the subject of thought, and in "How We Think" (Dewey, 1910/1997) he followed Herbart in emphasising the importance of mental reflection in allowing present knowledge to suggest new ideas. Anticipating later theorists like Piaget, he described in particular detail how an accumulation of simple concrete ideas is needed before a student becomes capable of abstract theoretical analysis. He was also very clear as to the duties of the good educator in facilitating this process: "in some educational dogmas and practices," he wrote, "the very idea of training mind seems to be hopelessly confused with that of a drill which hardly touches mind at all - or touches it for the worse" (Op. cit., p52; italics original). For Dewey, indeed, rote learning reduced the skill of the teacher to nothing better than the level of animal training! Echoing Locke, Jardine, Spencer, Huxley, and Armstrong, he argued vehemently throughout his life that wisdom was what students needed, not information.

3 - Introduction - A Little Ryle

"Knowing that, knowing how, and being able to are different, though closely connected." (Skemp, 1979, p167; italics original)

So what, then, is knowledge, if it is more than rote-learned fact? Well the most enduring modern analysis is to be found in the work of the Oxford philosopher Gilbert Ryle (1900-1976), in his 1949 book "The Concept of Mind" (Ryle, 1949). Here is the key passage in Ryle's argument .....

"But it would be quite possible for a boy to learn chess without even hearing or reading the rules at all. By watching the moves made by others and by noticing which of his own moves were conceded and which were rejected, he could pick up the art of playing correctly while still quite unable to propound the regulations [.....]. We all learned the rules of hunt-the-thimble and hide-and-seek and the elementary rules of grammar and logic in this way. We learn how by practice, schooled indeed by criticism and example, but often quite unaided by any lessons in the theory. It should be noticed that the boy is not said to know how to play, if all he can do is recite the rules accurately. He must be able to make the required moves. [.....] His knowledge how is exercised primarily in the moves that he makes, or concedes, and in the moves that he avoids or vetoes." (Ryle, 1949, p41.) "Learning how or improving in ability is not like learning that or acquiring information" (Ibid., p58; italics original).

In other words (and to cut a long story mercifully short), the epistemologists' most important distinction was between .....

Knowing How, Knowing That, and Being Able To

You know how, if you have grasped the sequence of events involved in doing something well. You know that, if you can define in encyclopaedic terms the key concepts in an area. And you are able to, if you can physically act on the how and the that.

A Simple Example: If David Beckham suffered a spinal injury [God forbid] and lost the use of his legs, he would still be able to coach other players how to take a good free kick, and he would still know that no opposing player should be within ten yards of the ball. He would not, of course, be able to take said kick personally. If, on the other hand, he suffered a brain injury he would probably lose the know how and the know that, but might nevertheless retain the physical ability.

Epistemologists call knowing how "procedural knowledge", knowing that "declarative knowledge" (or "propositional knowledge"), and being able to "skill".
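
For readers who find a more formal notation helpful, the three-way distinction can be captured in a few lines of Python. The sketch below is purely illustrative - the class and field names are our own, not Ryle's - but it makes the point of the Beckham example explicit, namely that the three components are separable and can be selectively lost .....

from dataclasses import dataclass

# Illustrative sketch only: Ryle's three components of competence, with our
# own field names, applied to the hypothetical free-kick dissociations above.
@dataclass
class Competence:
    knowing_how: bool    # procedural knowledge, e.g. how a good free kick is taken
    knowing_that: bool   # declarative knowledge, e.g. the ten-yard rule
    being_able_to: bool  # the physical skill of actually taking the kick

after_spinal_injury = Competence(knowing_how=True, knowing_that=True, being_able_to=False)
after_brain_injury = Competence(knowing_how=False, knowing_that=False, being_able_to=True)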

4 - The Tyler Rationale

"Don't look at a text-book; avoid most of them as you would poison" (Armstrong, 1896).

So why is epistemology so important to the caring professions in general, and to psychologists and speech and language therapists in particular? Well one reason is that knowing that - declarative knowledge - has been familiar territory to them all along, because it is at the heart of what the various branches of neuroscience have long referred to as "higher (mental) functions" [glossary]. Declarative knowledge is the very stuff of everyday higher cognition, and is accordingly one of the first things to suffer following brain insults or disease, being the single most apparent clinical sign in any form of confusion, "confabulation" [glossary], "agnosia" [glossary] or "dementia" [glossary]. And a second reason is that knowing how - procedural knowledge - is at the heart of our ability to organise conceptual knowledge in the interest of adaptively appropriate behaviour. This shows itself in everyday cognition as the abilities (a) to put things into sequence, (b) to follow complex rules, and (c) to balance a number of competing objectives. It is therefore no exaggeration to claim that the science of remediation is - to a greater or lesser extent - the science of the remediation of knowledge, witness the fact that the whole science of frontal assessment is dedicated precisely to this [see the parade of frontal assessments detailed in our e-paper on "From Frontal Lobe Syndrome to Dysexecutive Syndrome"].

It is also worth reminding ourselves that professional practice is itself unavoidably knowledge-based, because the essence of "evidence-based practice" is the deployment of what can only be classified as declarative knowledge. Whether you are a clinical neuropsychologist doing a frontal assessment, or a criminological psychologist trying to cure a repeat offender, or an educational psychologist planning the management of a developmental dyslexic, or a paediatric therapist presented with a semantic-pragmatic impairment, or an adult therapist working on an acquired language disorder of some sort, you need to know what you are about. You need the same three bodies of knowledge and skill identified by the educational philosophers. Or to put it another way, you are deploying the knowledge and skills you yourself acquired during training in an attempt to salvage the knowledge and skills of your patients. Thus you know that client X is presenting in a certain way, and you know that the approved management in those circumstances is to apply treatment such-and-such. You also know how to deliver said treatment, and possess the necessary hand-eye skills to be able to carry it out.

So the only thing educationalists needed to do to comply with Ryle was to expand his three-way classification of knowledge into a detailed educational delivery system, and by a strange coincidence of dates the idea that the curriculum could itself be the subject of academic enquiry hit the streets in the same year that Ryle published that classification. The work in question came from the American Ralph W. Tyler (1902-1994), and was entitled "Basic Principles of Curriculum and Instruction" (Tyler, 1949). Tyler summarised his arguments into four principles of curriculum development, often referred to as "the Tyler Rationale" .....

Tyler's First Principle: The curriculum development process should begin by defining appropriate objectives.

Tyler's Second Principle: Corresponding educational experiences should be developed.

Tyler's Third Principle: These experiences would then need organising into a programme.

Tyler's Fourth Principle: The programme would need to be complemented by systems to evaluate and improve upon the end result.

Tyler's approach, and especially its emphasis on objectives, went on to become the backbone of the modern educational model, onto which all subsequent modifications up to and including the 1997 Dearing Report [detail] have been grafted, and it earns the epithet "experiential" from the explicit emphasis provided by the second and third principles, and from the fact that three out of Tyler's five chapters directly concern learning experiences.

5 - The Bloom Domains

"Everything depends upon the quality of the experience" (Dewey, 1938, p27).

Tyler's arguments were generally well received by the educational establishment, but were then given an even greater boost in 1956, when one of his students, the University of Chicago's Benjamin S. Bloom (1913-1999), published the equally influential and largely complementary "Taxonomy of Educational Objectives" (Bloom et al, 1956; Bloom, Krathwohl, and Masia, 1964). Heading a team of 34 experts (including Tyler himself), Bloom drew up a detailed classification of what knowledge was, and therefore of the fundamentally different ways human beings could improve under instruction. Three fundamentally different types, or "domains", of knowledge were identified, namely "psychomotor knowledge", "cognitive knowledge", and "affective knowledge", all dependent upon experience. Here is how Bloom wove experience into the fabric of objectives .....

"Knowledge as defined here includes those behaviours and test situations which emphasise the remembering, either by recognition or recall, of ideas, material, or phenomena. The behaviour expected of a student in the recall situation is very similar to the behaviour he was expected to have during the original learning situation. [.....] The process of relating and judging is also involved to the extent that the student is expected to answer questions or problems which are posed in a different form in the test situation than in the original learning situation. In the classification of the knowledge objectives, the arrangement is from the specific and relatively concrete types of behaviours to the more complex and abstract ones." (Bloom et al, 1956, p62.) <<AUTHOR'S NOTE: Note the deliberate avoidance of any non-observable aspects of cognition - and especially the mysteries of experience itself - in what was, after all, still a very behaviourist era of psychological history. For further background detail, we recommend Clark (2000/2004 online).>>

The resulting combination of objectives and activities gave us experiential learning [see next Section], GCSE project-work, the A-level practical paper, student-centred learning, and - most recently of all - problem-based learning [see Section 7]; and the point of the story so far is that educational designers - and their advisers in the corresponding professional bodies - have to devise curricula which manage both the objectives and the activities simultaneously. Unfortunately, though this may sound easy enough in theory, it can be surprisingly hard to do in practice, because specifying activities in the pursuit of abilities means attacking the full mysteries of the experiencing mind. Put bluntly, you have to become a cognitive scientist in miniature: you need to know your Ryle, your Piaget, your Tyler, and your Bloom, all at once, and you need to appreciate how that final complex of knowledge and meta-knowledge [see the Key Concept panel below] reconciles with what cognitive science can tell you about the optimal rates at which new conceptual, procedural, and psychomotor learning can take place.

Key Concept - "Meta-Knowledge" and "Metacognition": "Meta-" is a very versatile English morpheme. Coming originally from the Greek meta (= "with/after"), it signifies something over and above or new. Thus metaphor implies "additional significance" and metamorphosis implies "additional form". "Meta-" is also used as a prefix for areas of scientific enquiry, where - as with metaphysics - it carries the sense of a higher science, of the same nature as the root science, "but dealing with ulterior and more fundamental problems" (OED). More recently, the prefix has been used to turn a variety of technical concepts back onto themselves. Thus computing coined the term metadata to refer to data about data, while in psychology we see terms like metacognition (knowing about knowing) and meta-analysis (research into research). Similarly, metalinguistic awareness is the ability to use words to describe and comment upon other words. Thus, if you know only that the word "sun" refers to the sun, and that the word "bun" is a type of cake, then you have linguistic but not metalinguistic awareness. If, on the other hand, you know also that "they rhyme", then you have commented upon the words themselves (rather than their referents), and you have begun to develop metalinguistic awareness as well. Meta-knowledge thus includes such professionally vital abilities as being able to reflect upon and strategically control one's learning.

6 - Experiential Learning

"Tell me and I shall soon forget. Show me and I might remember. Involve me and I shall understand" (Confucius).

Although it is convenient to date formal interest in the relationship between learning and experience to the works of John Dewey, the search for a final coherent theory of experience has continued ever since. The classic reference on this topic is Dewey (1938), whose views may be summarised by the following two extracts .....

"The belief that all genuine education comes about through experience does not mean that all experiences are genuinely or equally educative. Experience and education cannot be directly equated to each other. For some experiences are mis-educative. Any experience is mis-educative that has the effect of arresting or distorting the growth of further experience. An experience may be such as to engender callousness; it may produce lack of sensitivity and of responsiveness. Then the possibilities of having richer experience in the future are restricted. [.....] Again, experiences may be so disconnected from one another that, while each is agreeable or even exciting in itself, they are not linked cumulatively to one another." (Dewey, 1938, pp25-26; from the chapter entitled "The Need of a Theory of Experience".)

"An experience is always what it is because of a transaction taking place between an individual and what, at the time, constitutes his environment, whether the latter consists of persons with whom he is talking about some topic or event, the subject talked about being also a part of the situation; or the toys with which he is playing; the book he is reading [.....]; or the materials of an experiment he is performing." (Dewey, 1938, pp43-44.)

So it is important to use the phrase learning experience advisedly, recognising that its status as one of educational theory's major theoretical constructs should not be taken as implying that we know a great deal about it, because we do not. In educational terms, learning experiences go far beyond the confines of the classroom, and refer to experiences of any sort, provided only that you can draw conclusions of some sort from them. EL is thus learning by doing, rather than by being lectured at, and what is important about Bloom's taxonomy is that it steered subsequent theory and research in this particular direction, carefully retaining the behaviourist emphasis on observability whilst recognising the complexity of the experiencing mind. Robert Gagné, for example, incorporated both observables and experience into his discussions of a "learning hierarchy" of outcomes (Gagné, 1975), and Marton and Saljo (1976a,b) distinguished what they called "surface learning" (the recall of simple facts) from "deep learning" (the recall of issues and principles).

Readers unfamiliar with the distinction between "semantic memory" and "episodic memory" should read one of the many Internet introductions to the topic [click for example] before proceeding. Alternatively, use the [glossary] links as and when you come to them.

So when did Dewey's "transactions", Tyler's "experiences", and Bloom's "learning situations" evolve into modern EL, what, if anything, was gained or lost along the way, and how does declarative knowledge map across onto separate episodic memory [glossary] and semantic memory [glossary] resources? Well the earliest PsycINFO reference to EL is by Wilmer (1967), and the author usually credited with the method's fame is David A. Kolb (1939-) (although Grundy, 1987, gives honourable mention to the Brazilian Paulo Freire (1921-1997) for his emphasis on "informal" learning). Wilmer's conclusions arose out of work using videotaped clinical exchanges as learning material. He taped psychiatric interviews and then played them back to both physician and patient alike for a frank discussion, arguing that "inductive or experiential learning especially lends itself to videotaping procedures". However, Wilmer was merely reporting his own experiences; he was not laying claim to anything so grand as a theory. Kolb, on the other hand, was far-sighted enough to take the diverse practice of experiential learning (in small letters), and to elevate it to EL-as-system by using it as the title of a book. In its Kolbian incarnation (Kolb and Fry, 1975; Kolb, 1976, 1981, 1984), EL consists of consecutive periods of (1) having the initial experience, (2) reflecting upon that experience, (3) the gradual emergence of an abstract understanding of that experience, and (4) a period of exploration in which new experiences are deliberately sought in order to check out and further build upon that understanding; and much of the critical acclaim directed at Kolb's "experiential learning theory" derives from the fact that it integrates the different components of knowledge exceptionally well. Indeed, Kolb's disciples Dennison and Kirk (1990) use the simple motto .....

Do - Review - Learn - Apply

These four stages then translate into knowledge types as follows .....

Do (store experiences in episodic memory) [= episodic declarative knowledge]

Review (identify and practise key procedural memory sequences) [= procedural knowledge]

Learn (conceptualise at unit level) [= semantic declarative knowledge]

Apply (conceptualise at higher level, and use to direct behaviour) [= semantic declarative meta-knowledge]
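
For readers who like to see such mappings written out formally, here is a minimal sketch in Python. It is purely illustrative - the ordering follows Dennison and Kirk's motto, and the memory-system labels simply restate the bracketed terms above - and it is not part of Kolb's own presentation .....

# Illustrative sketch only: the Kolbian cycle mapped onto knowledge types,
# following the Do - Review - Learn - Apply motto of Dennison and Kirk (1990).
KOLB_CYCLE = [
    ("Do", "store experiences in episodic memory", "episodic declarative knowledge"),
    ("Review", "identify and practise key procedural sequences", "procedural knowledge"),
    ("Learn", "conceptualise at unit level", "semantic declarative knowledge"),
    ("Apply", "conceptualise at higher level and direct behaviour", "semantic declarative meta-knowledge"),
]

for stage, activity, knowledge_type in KOLB_CYCLE:
    print(f"{stage:6s} -> {knowledge_type}: {activity}")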

EL's subsequent fame derives partly from Kolb's and others' work on "learning styles", that is to say, on the matching of the educational experience to the personality and cognitive make-up of the individual student. Typical of this genre are Kolb's own Learning Style Inventory (Kolb, 1984) and Honey and Mumford's (1982) Learning Styles Questionnaire. The rest of EL's fame derives from the popularity of "problem-based learning" (PBL) as the chosen implementation of EL in medicine [see next section]. Students needing a more detailed evaluation of EL are referred to Greenaway (2004 online).

7 - From DL to PBL

"Eureka! [Greek = "I have found it!"]" (Archimedes).

The relationship between an experience and a problem is also a subtle one, for you can have an experience without there being a problem to provoke it, and you can have a problem without it provoking an experience. An example of the former might be the conscious appreciation of a (non-problematic) work of art, and an example of the latter might be having to divert as usual around a known obstacle in order to get to a destination behind it. By and large, however, "growth depends upon the presence of difficulty to be overcome" (Dewey, 1938, p79; bold emphasis added).

The first distinct system was from (then) Harvard University's Jerome S. Bruner (1915-), and became known as "discovery learning" (DL). As profiled by Bruner (1960, 1966), effective education requires students to participate in deciding what is important in a topic area, and they can only do this competently if they work out for themselves how that topic is put together. Bruner (1960) explained the role of discovery as follows .....

"Mastery of the fundamental ideas of a field involves not only the grasping of general principles, but also the development of an attitude toward learning and inquiry, toward guesses and hunches, toward the possibility of solving problems on one's own. [,.....] To instill such attitudes by teaching requires something more than the mere presentation of fundamental ideas. Just what it takes to bring off such teaching is something on which a great deal of research is needed, but it would seem that an important ingredient is a sense of excitement about discovery - discovery of regularities of previously unrecognised relations and similarities between ideas, with a resulting sense of self-confidence in one's abilities. Various people who have worked on curricula in science and mathematics have urged that it is possible to present the fundamental structure of a discipline in such a way as to preserve some of the exciting sequences that lead a student to discover for himself." (Bruner, 1960, p20.)

Bruner (1966) summarised the educator's role as "the appropriate direction of exploration" (Bruner, 1966, p44), and insisted that educators needed to know the "optimal structure" of the body of knowledge they were endeavouring to get across, thus .....

"..... a theory of instruction must specify the ways in which a body of knowledge should be structured so that it can be most readily grasped by the learner. 'Optimal structure' refers to a set of propositions from which a larger body of knowledge can be generated [and] the optimal structure of a body of knowledge is not absolute but relative." (Bruner, 1966, p41.)

"A body of knowledge, enshrined in a university faculty and embodied in a series of authoritative volumes, is the result of much prior intellectual activity. To instruct someone in these disciplines is not a matter of getting him to commit results to mind. Rather it is to teach him to participate in the process that makes possible the establishment of knowledge. We teach a subject not to produce little living libraries on that subject, but rather to get a student to think mathematically for himself, to consider matters as an historian does, to take part in the process of knowledge-getting." (Bruner, 1966, p72.)

Given Bruner's start, our liking for catchy names soon resulted in "problem-based learning" (PBL), the earliest MEDLINE reference to which is in a paper by Barrows and Mitchell (1975). Again the concept goes back much further than that, being inspired ultimately by "aha!" problem solving successes like that of Archimedes. In its purest modern form, PBL is a group problem-solving method which simulates the processes of medical diagnosis by presenting case information in an often inconsistent fashion (to make it more lifelike), and often not until directly asked for it (to make it even more lifelike). Groups are then expected to pick up the background knowledge as and when it becomes relevant, rather than through a formal lecture programme. In practice, however, it is difficult to explain how PBL differs from EL, and how both differ from old-fashioned bedside learning. In fact, no fewer than 54(!) types of activity-based learning have been identified (University of Lethbridge website, 9th October 2000), so it will not come as any surprise that there is no standard definition of the "P" in "PBL" (Silver, 1998; David et al, 1999; Maudsley, 1999).

ASIDE: Some idea of the range of activity types to choose from can be gained from examples of a NASA "classroom activity", a San Diego State University "webquest", and a North Carolina State University "scenario" (this last a truly excellent piece of work, although one shudders at what it must have cost in development time).

PBL can therefore be conceptually extremely messy. On the one hand, it is clearly EL, the way Armstrong and Dewey would have had it. On the other hand, it is invariably extremely expensive to set up, and then surprisingly difficult to keep up-to-date. Moreover, its success is also technically difficult to evaluate, (a) because its proponents are usually trying to justify and defend significant financial and academic investment, and (b) because it suffers a severe Hawthorne Effect [glossary]. Among the method's critics, Norman and Schmidt (1992) have failed to find any evidence that PBL improves generic problem solving skills, and Colliver (2000), in a thorough review of the literature, found no PBL improvement in either knowledge base or clinical competence. Colliver also raised serious doubts over the ability of PBL to produce improvement in the all-important affective domain (the third - and murkiest - of Bloom's three knowledge domains), having located only one study into its effects on self-directed learning in practice, and that falling short of statistical significance! So if PBL is "a recycled idea with an identity crisis" (Maudsley, 1999, p179), we need to ask whether the problem lies with the basic EL philosophy, or merely with PBL as the method of delivery. Do our students need experiences, or don't they? And if they do, just what sort of activities do we need to provide, and how are they best packaged and supported?

Well as it happens there is near consensus on the first of these questions. The following list merges the recommendations of Dennison and Kirk (1990) and Duch (1996), but similar lists are not difficult to find .....

a good activity must ..... engage the students' interest, motivate them to probe for deeper understanding, relate the subject to the real world, require decisions or judgements based on facts, call for decisions or judgements to be justified when challenged, encourage students to define any assumptions they are making, require cooperation, integrate new concepts with prior knowledge within the topic area, integrate new concepts with prior knowledge in other topic areas, require organising, and require critical thinking rather than recall.

So in the final analysis it all depends on what you mean by "discovery" or "problem" or "task" or "activity" or "exercise". If there is a trend at all, it is that medical education is currently moving away from monolithic PBL and reducing the size and scope of its individual activities. For example, Harden et al (2000) have reported on the problems with multidisciplinary clinical teaching teams in the Medical School at the University of Dundee in the early 1980s. They conclude that if PBL timetabling is not integrated with other clinical and theoretical sessions it will fail to exploit the true value of clinical attachment as a learning opportunity. Their "1995 curriculum", on the other hand, was a more integrated method, with logically interlocked lectures, small group work, and independent learning, and is reported to have "flourished" (Harden, Davis, and Crosby, 1997). Harden et al (2000) prefer the term "task" for this new, more manageable, unit of student activity, and regard "task-based learning" (TBL) simply as an effective way to implement PBL as a higher-order strategy. PBL is the general concept, in other words, whilst TBL is merely a candidate substantive system. And the secret of cost-effective TBL is that the tasks themselves should be designed with a deeper message in mind, because the most effective learning comes from uncovering that hidden meaning (Harden et al, 2000).

ASIDE: Indiana University's Charles M. Reigeluth is another who stresses the importance of elaborating unitary entries in perceptual-episodic, procedural and semantic memory into ever more sophisticated knowledge webs. His aptly named "elaboration theory" (Reigeluth, 1979; Reigeluth and Rodgers, 1980) is based upon the need for instruction to become progressively more complex and reflective, without inducing the sort of student confusion which can so easily result. This means that for instructional material to be deemed truly well-organised it must allow students to zoom the focus of their current study in and out between the "subparts" of a topic and its broader "context" "until the whole picture has been seen" (Reigeluth, 1979, p9).

To recapitulate, EL is the generic philosophy, DL is a particular philosophy, and PBL is a method-cluster within the DL philosophy. We may then look at TBL either as a competing method-cluster within DL, or simply as a particular instance of PBL, but either way as a neatly integrated way of doing things - as a system ready to run. We shall be looking at what implications these various methods have had for educational management in Section 9. In the meantime, we need a little more detail regarding Bloom's affective domain, because it is when we come to the "directing of behaviour" that EL really shows its promise .....

8 - The Return of Virtue

"When lecturing to the first-year medical students he [the aforementioned Henry Armstrong] was disappointed to find how dull were their responses. They had no critical spirit, they did not challenge his statements or require proofs of assertions; they were unable to interpret simple experimental results and could not make satisfactory notes of their laboratory work. Their sole aim was to learn facts, definitions, and whatever could be stored in the memory in order to pass examinations." (Van Praagh, 1973, p2; bold emphasis added.)

Let us return for a moment to Bloom's taxonomy of educational objectives. This, as we have already seen, was organised under three domain headings, the first two of which were pure Ryle. Bloom's "psychomotor" domain may be regarded as supporting Ryle's being able to skills, and the "cognitive" domain contains Ryle's knowing how and knowing that; and workers such as Bruner, Gagné, and Kolb had already established that EL could deliver on both of these types of education. However, there were much deeper problems with the third of Bloom's domains, for Bloom had deliberately used the term "affective" development as a "slop bucket" [our term] for the many troublesome, but desperately important, aspects of education which did not fit easily into the first two domains. The root citation on the topic of the affective domain is Bloom, Krathwohl, and Masia (1964), so the first thing to ask is why there was an eight-year delay following the 1956 cognitive and psychomotor analyses. The authors' explanation is disarmingly frank, and gives a clear indication of the true scale of the problem they were tackling .....

"At least six working meetings were devoted to the task [but] several difficulties beset this work. First, there was a lack of clarity in the statements of affective objectives that we found in the literature. Second, it was difficult to find an ordering principle as simple and pervasive as that of complexity, which worked so satisfactorily in the cognitive domain. Third, few of the examiners at the college level were convinced that the development of the affective domain would make much difference in their work, or that they would find great use for it, when completed. There was no doubt that the affective domain represented a more difficult classification problem than the cognitive domain. [We present the resulting classification scheme] with some trepidation and full expectation of severe criticism from many quarters." (Bloom, Krathwohl, and Masia, 1964, p13-14; bold emphasis added.)

Needless to say, the authors persevered with their task, and by the end of their deliberations had listed five categories of affective involvement with a topic area. These categories contained 13 named subcategories, and the entire series was sequenced as far as possible on increasing complexity, as follows .....

Category 1.0 - Receiving: This is the shallowest form of engagement with a topic area, and involves merely a willingness to listen. It was subdivided into awareness (1.1), willingness to receive (1.2), and controlled or selective attention (1.3).

Category 2.0 - Responding: The next level of engagement is to respond, perhaps by participating in class discussions, etc. It was subdivided into acquiescence in responding (2.1), willingness to respond (2.2), and satisfaction in responding (2.3).

Category 3.0 - Valuing: The third level of engagement is to internalise a set of specified values related to the topic area (Clark, 2000). This is a major qualitative step upwards, because it marks the first point at which initially fragmentary knowledge and skills start to be seen as part of a higher goal. It was subdivided into acceptance of a value (3.1), preference for a value (3.2), and commitment to a value (3.3).

Category 4.0 - Organising: The fourth level of engagement is to compare, contrast, and generally organise the available values whenever more than one of them is relevant. Clark (2000) gives examples of this type of behaviour as accepting responsibility for one's own behaviour, accepting professional ethics, and resolving conflicts between the demands of professional and private life. It was subdivided into conceptualisation of a value (4.1) and organisation of a value system (4.2).

Category 5.0 - Characterising: The deepest level of engagement is to synthesise a complete value system, that is to say, a "view of the world" capable of guiding future behaviour at the highest level. It was subdivided into generalised set (5.1) and characterisation (5.2), in the sense of this is what I am and do. Bloom et al interpret the latter as having "a consistent philosophy of life". Thus for the student it would mean being a good student, whilst for the graduate it would mean displaying "the right stuff" professionally.
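
Because the numbering scheme makes the structure explicit, the whole classification can be restated compactly. The following Python sketch is ours, not Bloom's - it simply re-expresses the five categories and thirteen subcategories listed above in a form that could, for instance, be used to tag the affective objectives of a module .....

# Illustrative sketch only: the affective domain of Bloom, Krathwohl, and
# Masia (1964), restated as a nested dictionary keyed by the original numbering.
AFFECTIVE_DOMAIN = {
    "1.0 Receiving": {"1.1": "awareness", "1.2": "willingness to receive", "1.3": "controlled or selective attention"},
    "2.0 Responding": {"2.1": "acquiescence in responding", "2.2": "willingness to respond", "2.3": "satisfaction in responding"},
    "3.0 Valuing": {"3.1": "acceptance of a value", "3.2": "preference for a value", "3.3": "commitment to a value"},
    "4.0 Organising": {"4.1": "conceptualisation of a value", "4.2": "organisation of a value system"},
    "5.0 Characterising": {"5.1": "generalised set", "5.2": "characterisation"},
}

# Sanity check against the figures quoted earlier: five categories, thirteen subcategories.
assert len(AFFECTIVE_DOMAIN) == 5
assert sum(len(subs) for subs in AFFECTIVE_DOMAIN.values()) == 13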

Now we mention all this because what Categories #3 to #5 are in effect trying to do is formalise the Platonic ideal of education for perfection (which is why we took you back to the Athenian hillsides at the start of Section 2). The key classical concept is areté [Greek = "virtue with excellence"], and this is how the parts then fit together in modern educational management .....

PRIMARY AND SECONDARY EDUCATION leads to SKILL and KNOWLEDGE

HIGHER EDUCATION helps to develop UNDERSTANDING and "GRADUATENESS"

POSTGRADUATE EDUCATION helps to develop REFLECTIVE PROFESSIONALISM

REFLECTIVE PRACTICE then helps improve that PROFESSIONALISM

CONTINUOUS PROFESSIONAL DEVELOPMENT helps to direct one's PROFESSIONAL CAREER

Note the term "graduateness" in this overarching scheme of things, for this - at least as far as higher education is concerned - is modern areté. It comprises a complex of cognitive and behavioural skills and qualities which higher education institutions (HEIs) are currently feverishly incorporating into degree programmes in order to make their graduates more immediately employable. The Higher Education Quality Council began this particular debate in the mid-1990s, defining graduateness as "the attributes that a graduate may be expected to have acquired primarily through the experience of degree-level study" (HEQC, 1995, p5; italics original), and the Dearing Report calls for the same qualities under the name "key skills" (Recommendation 21). However, as to precisely what these attributes are, there is no final consensus. For our part, we are particularly impressed with the list produced by a Quality in Higher Education study (Harvey and Green, 1994, cited in Harvey, 1996). This asked employers to rank 62 separate graduate skills and abilities according to their practical value within commerce and industry. The top scorers were such factors as willingness to learn, reliability, self-motivation, teamwork, oral and written communication, problem solving ability, literacy, and numeracy. Specialist factual knowledge, on the other hand - the thing most examinations set out to measure (and just about the only graduate quality you can acquire by rote) - was rated a paltry 59th out of the 62 attributes! There are therefore four basic issues under debate, (a) what graduateness is, (b) whether UK graduates have enough of it, (c) what to do about it if they do not, and (d) what the relationship is between professionalism and graduateness.

Note also the term "reflective practice", perhaps the highest of the metacognitive functions. The root source here is Schön's (1983) book, "The Reflective Practitioner", in which the author described professional practice as depending less on factual knowledge than on the capacity to reflect before taking action. For Schön, the best reflective practitioners were those who then reflected upon their ability to reflect, that is to say, those who constantly questioned the effectiveness of the methods they used to question their methods of assessment and management! Reflective practice is thus the embodiment of the state of chronic scepticism called for by Skrabanek and McCormick (1989), and is a key skill for clinical professionalism. Reflective practitioners are seen as preventers who constantly question their means of prevention, as assessors who constantly question their methods of assessment, as interveners who constantly question their proposed point of intervention, and so on. And the practical problem for those who have to create such clinicians is how best to shatter the initially naive faith in textbook-crammers which students, almost without exception, will have acquired at school.

ASIDE: The Wisdom Centre at the University of Sheffield has recently been set up to promote professional development within primary health care, and its material on reflective practice is a good example of what can be achieved. One of its contributors, Bolton (1998), argues that clinicians must learn to embrace the uncertainty which reflective practice will surely bring; trust it and enjoy it, she says, but do not expect too many answers in the short term, merely more fundamental questions!

What we have, therefore, is a situation in which - whether you choose to call it areté, affective domain characterisation, or graduateness - you are talking about competence beyond the textbook, and it is both difficult and expensive to teach. The General Medical Council's solution to the problem has been to focus rather cleverly on attitudes. They have made attitudes (a) the principal affective entity, and (b) reasonably demonstrable. In their summary of the goals of undergraduate medical education, they put it this way: "The student should acquire and demonstrate ATTITUDES necessary for the achievement of high standards of medical practice, both in relation to the provision of care of individuals and populations and to his or her own personal development" (GMC, 1993 and current website, p12; upper case original). They then list 12 specific attitudinal achievements, including "ability to cope with uncertainty" (Para 40.3d) and "ability to adapt to change" (Para 40.3j). The General Medical Council conclude .....

"Whilst it is recognised that the core curriculum should be presented in a way that encourages student-centred learning, the greatest educational opportunities will be afforded by that part of the course which goes beyond the limits of the core, that allows students to study in depth in areas of particular interest to them, that provides them with insights into scientific method and the discipline of research and that engenders an approach to medicine that is constantly questioning and self-critical. [Such modules] are no less important than the core curriculum but they focus not on the immediate requirements of the pre-registration year but on the long term intellectual and attitudinal demands of a professional life....." (GMC, Tomorrow's Doctors, 1993, Recommendation #24.)

More recently still, Cheetham and Chivers (1998) have introduced the term "meta-competencies" - competencies at managing one's competencies - in an attempt to produce an all-embracing model of reflective competent practice. These meta-competencies include all the old "common skills" chestnuts, namely communication, self-development, creativity, analysis, and problem solving. Cheetham and Chivers then propose reflection as a "super-meta-competence", capable of overseeing all the rest.

9 - Putting it all Together

"Education is suffering from narration sickness" (Freire, 1972, cited in Grundy, 1987, p101).

Even when the fundamental philosophy of an EL curriculum has been agreed upon, there remain the practical problems of managing it. Two of the standard solutions here are the use of "study guides" at the beginning of a course unit, and the provision of tutorial support during it.

The science of study guides is relatively new. Nevertheless, it is already possible to instruct their authors as to broad content (Laidlaw and Harden, 1990; Harden, Laidlaw, and Hesketh, 1999). Here are the key points as laid down by Laidlaw and Harden (1990) .....

(1) Aims and Objectives: A study guide should state "what knowledge or competencies" is/are expected of the student upon completion of the study unit.

(2) Content: A study guide should act as an "advance organiser" for the material being taught. [For more on the science of advance organisers, see Section 5.3 of our e-handout on "Dyslexia and the Cognitive Science of Reading and Writing".]

(3) Relationships: A study guide should explain how the topic in question relates to the curriculum as a whole, and to the educational strategies adopted in the HEI in question.

(4) Prerequisites: A study guide should begin by identifying required previous learning, so that new learning can extend the old. [Remember Bruner's point about "optimal structures" and Reigeluth's insistence on elaboration; see Section 7.]

(5) Resources: A study guide should carefully list the resources available to the student, including handouts, tutorials, demonstrations, videos, core textbooks, etc.

(6) Sequence of Study: A study guide should carefully sequence tasks, tutorials, and supporting reading.

(7) Self-Assessment: A study guide should contain exercises "that allow students to test their mastery of the subject and whether they have attained the objectives for the programme" (p11).

(8) Further Assistance: A study guide should include sources of advice to students seeking further assistance.

(9) Guidance: A study guide should include guidance on best practice in note-keeping and use.

(10) Further Study: A study guide should include suggestions for keeping the target competencies up to date after the unit has finished.

(11) Glossary: Finally, a study guide should include a glossary [the preparation of which is no mean feat in itself - to see one of our bigger glossaries, click here].
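
Since Laidlaw and Harden's eleven points amount to a checklist, they are easily captured in a reusable template. The Python sketch below is our own restatement of their headings, not something taken from their paper; a module team might use something of this sort to audit a draft guide for missing sections .....

# Illustrative sketch only: Laidlaw and Harden's (1990) study guide headings
# held as a checklist, so that a draft guide can be audited for missing sections.
STUDY_GUIDE_HEADINGS = [
    "Aims and Objectives", "Content", "Relationships", "Prerequisites",
    "Resources", "Sequence of Study", "Self-Assessment", "Further Assistance",
    "Guidance", "Further Study", "Glossary",
]

def missing_sections(draft_sections):
    """Return the headings absent from a draft study guide."""
    return [h for h in STUDY_GUIDE_HEADINGS if h not in draft_sections]

# Hypothetical usage with a partially drafted guide:
print(missing_sections(["Aims and Objectives", "Content", "Resources"]))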

Our own experience with study guides goes back to 1998, and in their latest version we use them in hard copy to bring together objectives, advice, and general support, and to introduce the student to the electronic set of lesson plans which follow. We have also found study guides useful in the academic quality assurance process, having used them (a) to help external examiners evaluate specific modules, and (b) as evidence of compliance during professional body re-accreditation inspections. In both respects they serve as immediately available reference documents, and have been well received. [To see the 2004 Study Guide for our Applied Cognitive Psychology module, click here.]

As far as tutorial support is concerned, the problem is that we know that it works, although not always why [again, fundamentally, because cognitive science has no definitive theory of experience]. The notion of guided learning seems to derive from one Robert C. Craig, who included it in a book title in the early 'fifties (Craig, 1953). He then continued to publish on that subject throughout the 1950s, before being joined by one Ralph H. Ojemann in the 1960s. Some curious relationships emerged from this early research. In one study, for example, students who had directed discovery were compared with students who had non-directed (or "independent") discovery. The directed group were better if tested straight away, no better if tested after 3 or 17 days, and better again if tested after 31 days (Craig, 1957).

Other explanations for the effectiveness of tutorials are (a) that they "ensure an optimum environment so that students can learn from and teach each other" [University of British Columbia website, 26th September 2000], and (b) that they help students locate "the limits of knowledge" as they currently exist, so that something can be done to broaden them. Tutorials also help integrate knowledge across the various Bloomian domains. Hmelo and Ferrari (1997), for example, believe that one of the main benefits of PBL is that with the right sort of tutorial support it can help build higher-order thinking skills. Firstly, tutors need to be involved from the very outset of the experience, even at the point that students are being introduced to each other. This is followed by a phase where they help students realise what they do not yet understand (and thus develop their plan of action), and this is followed by a period of guided reflection, when key metacognitive lessons can be teased out, and barriers to progress identified. Sackett et al (2000) have described how a tutor's clinical questioning skills can help bring about this new level of awareness. 

It is also necessary to embed each module's planned experiences into the broader context of the programme as a whole. Typical of this approach is the Dundee-style "spiral curriculum" (Harden, Davis, and Crosby, 1997), a state-of-the-art approach to medical curriculum design. It is task-based, works to a detailed study guide, comes complete with tutorial support, and is intended to "rotate upwards" from prior learning in a range of related areas, to support subsequent learning in the corresponding advanced areas. This allows content to be both horizontally and vertically integrated, horizontally by pointing out cross-relevances within a given study year, and vertically by pointing out cross-relevances and developmental progressions from one year of study to the next.

Finally, there is also a significant role to be played by the student's future professional body, for they are one of the few voices of continuity in an otherwise faddy and overly politicised world. We have already heard from the General Medical Council on behalf of mainstream medicine, but we should not forget the many distinct "professions allied", such as midwifery, nursing, and the therapies. With speech and language therapists, for example, this falls to the Royal College of Speech and Language Therapists (RCSLT), who not only maintain the detailed taxonomy of relevant competencies, but also have to advise HEIs on the structures and methods appropriate for the delivery programmes. Cheetham and Chivers (1998) have provided a detailed breakdown of what this sort of "professionalism" entails, cleverly blending the Tyler-Bloom tradition with the Schön reflective practitioner model.

10 - References

See the Master References List

[Home]