Course Handout - Resource Allocation Theory
Copyright Notice: This material was written and published in Wales by Derek J. Smith (Chartered Engineer). It forms part of a multifile e-learning resource, and subject only to acknowledging Derek J. Smith's rights under international copyright law to be identified as author may be freely downloaded and printed off in single complete copies solely for the purposes of private study and/or review. Commercial exploitation rights are reserved. The remote hyperlinks have been selected for the academic appropriacy of their contents; they were free of offensive and litigious content when selected, and will be periodically checked to have remained so. Copyright © 2003-2018, Derek J. Smith.

First published online 16:12 GMT 18th March 2003, Copyright Derek J. Smith (Chartered Engineer). This version [2.0 - copyright] 09:00 BST 5th July 2018.
Norman and Bobrow (1975)
This is the paper usually cited as the origin of "Resource Allocation Theory". It was written by Donald A. Norman, of the University of California, San Diego, and Daniel G. Bobrow, of the Xerox Palo Alto Research Center. The basic propositions of this paper are as follows .....
1. That when considering biological information processing systems there is a lot to be learned from what goes on in man-made information systems.
2. That cognitive functioning involves a whole group of independent processes, constantly exchanging information. These may be referred to as "programs".
ASIDE: There are some very basic problems touched on here, because it is actually far from established which brain structures do the processing, which do the memorising, and which do the communicating.
Given these basic propositions, it follows that there are two ways a given process can run into trouble. The first is where performance depends on the quality or amount of the data available, so that devoting further processing to the task brings no improvement. This is known as "data-limited" processing. The other form of limitation is where performance depends on the processing resources the system can devote to the task, and improves as more resources are allocated. This is known as "resource-limited" processing.
Some clever theory is then provided to explain how different types of task will perform. The general rule is as follows:
"Most tasks will be resource-limited up to the point where all the processing that can be done has been done, and data-limited from there on" (p46).
The standard shape of this relationship is known as the "performance-resource function", and will normally have an RMIN, the minimum amount of resources required to do the task, and an RDL, the level of resources beyond which performance becomes data-limited. The thrust of this argument is summarised diagrammatically, and the shape of the resulting curve shown, in Figure 1.
Figure 1 - The Idealised Performance-Resource Function: Here is a plot of performance against resources for an idealised cognitive task. As we increase the available resources from zero (a state of sleep, perhaps), nothing happens at all to start with. Once we reach RMIN (Point A), performance jumps suddenly to a base rate (Point B). It then increases linearly with increasing resources until it reaches Point C, where it might level off again, awaiting another minimum amount of resources (Point D). When Point D is reached, performance starts to increase again, initially quickly, but tailing off until at Point E no further improvement is possible because the process has reached the point where data-limitation takes over. Up to Point E, the process has been resource-limited, but from Point E to Point F it is data-limited, that is to say, it is now data, not resources, which limits performance. Finally, resources cannot exceed Point F, because they are themselves at an absolute limit. Different real-life processes consume resources at different rates. In fact, "most processes have both data-limited and resource-limited regions" (p49).
Enhanced from a black-and-white original in Norman and Bobrow (1975; Figure 1). Red annotation ours. This version Copyright © 2003, Derek J. Smith.
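The shape of an idealised performance-resource function can be sketched numerically. The following is a minimal illustration only, not Norman and Bobrow's own formulation: the parameter values (r_min, r_dl, base rate, ceiling) are hypothetical, and for simplicity the sketch uses a single linear resource-limited region rather than the two-stage curve (Points C and D) described in the figure caption.

```python
def performance(r, r_min=1.0, r_dl=4.0, p_base=0.2, p_max=1.0):
    """Idealised performance-resource function (hypothetical parameter values).

    Below r_min no processing occurs; between r_min and r_dl the task is
    resource-limited, so performance rises as resources are added; beyond
    r_dl the task is data-limited, and extra resources yield no improvement.
    """
    if r < r_min:
        return 0.0                      # not enough resources to begin the task
    if r >= r_dl:
        return p_max                    # data-limited region: flat ceiling
    # resource-limited region: linear rise from the base rate to the ceiling
    frac = (r - r_min) / (r_dl - r_min)
    return p_base + (p_max - p_base) * frac

for r in [0.5, 1.0, 2.5, 4.0, 6.0]:
    print(f"resources={r:.1f} -> performance={performance(r):.2f}")
```

The printed values trace the caption's overall shape: zero output below RMIN, a jump to the base rate at RMIN, steady improvement while resource-limited, and a plateau once data-limitation takes over.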
Norman and Shallice (1980/1986)
Norman and Bobrow (1975) made no theoretical assertion as to what mechanism(s) decided what resources were needed to complete a task. The term "supervisory system" came along slightly later, when the concept of limited resources was incorporated into attention theory by Norman and Shallice (1980/1986) as a "supervisory attentional system". As such, this concept is effectively synonymous with the concept of "central executive" as used within Baddeley's Working Memory Theory.
Shallice (1988)
Shallice (1988) summarises the role of the supervisory system as follows:
".....
the Supervisory System [has] access to a
representation of the environment and of the organism's intentions and
cognitive capacities. It is held to operate not by directly controlling
behaviour, but by modulating the lower level [resources] by activating or
inhibiting particular schemata. It would be involved in the genesis of willed
actions and required in situations where the routine selection of actions was
unsatisfactory - for instance, in dealing with novelty, in decision making, in
overcoming temptation, or in dealing with danger." (Shallice,
1988, p335.)
It therefore follows that the supervisory system concept is central (a) to human problem solving, and (b) to the deterioration of problem solving following brain injury.
Clinical Implications
Two papers are typical of how Resource Allocation Theory can be used in a clinical setting, namely Selinger, Walker, Prescott, and Davis (1993), which interprets the problem-solving performance of CVA patients from this theory's point of view, and Van der Linden, Coyette, and Seron (1992), which focuses on the topic of central executive function and includes details of a research paradigm claiming to factor apart processing and storage functions.
References
Norman, D.A. and Bobrow, D.G. (1975). On data-limited and resource-limited processes. Cognitive Psychology, 7:44-64.
Norman, D.A. and Shallice, T. (1980/1986). Attention to action: Willed and automatic control of behaviour. Centre for Human Information Processing (Technical Report #99). Reprinted in revised form in Davidson, R.J., Schwartz, G.E., and Shapiro, D. (Eds.) (1986), Consciousness and Self-Regulation (Volume 4), New York: Plenum.
Shallice, T. (1988). From Neuropsychology to Mental Structure. Cambridge: Cambridge University Press.