Paper: Skills, Rules, and Knowledge
Following last week's paper by Jens Rasmussen, I decided to revisit another key component of his legacy and go through Skills, Rules, and Knowledge; Signals, Signs, and Symbols, and Other Distinctions in Human Performance Models. The paper dates from 1983 (it is now 40 years old), is cited more than 5,000 times, and at a cursory glance it doesn't seem to draw much criticism, though the few spot checks I ran on citing papers suggest they take some liberties with its interpretation. Either way, this paper introduces the fundamentals of the Skills, Rules and Knowledge (SRK) model.
The context for this paper is one of accelerating automation with computers. The tradition until then was for each sensor to have its own indicator, which operators then had to interpret to run the system. But these systems kept getting more and more complex (and that hasn't really stopped today), and the risk of major accidents grew with them. The industrial sector felt the need for a better model of human performance, one that designers of large systems could rely on to properly describe processes and adapt them to human requirements: you can't wait to discover failure modes in decision-making, you have to predict them as much as possible.
Rasmussen made the point that what you need is not one big quantitative model that predicts everything (measuring and specifying all human behaviors), but a set of distinct types of human performance for which various qualitative models can be developed and combined, so that designers can properly support the right type of behavior in the right context and optimize their detailed designs. Control tasks ought to be described in terms of human mental functions rather than system requirements.
To make that point, he proposed his own model, which is split into the following categories:
- performance levels: skills, rules, knowledge
- information observed from the environment: signals, signs, symbols
- causal properties: causes, reasons
Altogether, they fit the following diagram (which I have annotated because my copy of the paper had a more modern graphic that introduced a mistake compared to the one in the original paper):
Keep that one in mind because we're going to dive into most categories, but first, the author wants to warn us. Humans shouldn't be described as input-output machines; we are goal-oriented, but also we can select our own goals and seek the information we need:
Human activity in a familiar environment will not be goal-controlled; rather, it will be oriented towards the goal and controlled by a set of rules which has proven successful previously. In unfamiliar situations when proven rules are not available, behavior may be goal-controlled in the sense that different attempts are made to reach the goal, and a successful sequence is then selected. Typically, however, the attempts to reach the goal are not performed in reality, but internally as a problem-solving exercise, i.e., the successful sequence is selected from experiments with an internal representation or model of the properties and behavior of the environment. The efficiency of humans in coping with complexity is largely due to the availability of a large repertoire of different mental representations of the environment from which rules to control behavior can be generated ad hoc.
Basically, human behavior depends not so much on hard rules of the world as on the internal representation we make of those constraints in our mind when simulating things. The way these constraints are represented is what gives rise to the 3 performance levels.
Skills, Rules, and Knowledge
At the lowest level, we have skill-based behaviors, which are smooth, automated, and highly integrated. This level does not require conscious attention or control: the senses are directed at bits of the environment subconsciously, to update and orient an "internal map" guiding action. It's hard to break down into parts (try explaining how to grab a glass, or listing all the adjustments required to ride a bike), but it can be consciously modulated in general terms. The examples given are "be careful, the road is slippery" or "here comes the hard part", which show that while you can't necessarily decompose the parts, there's a way to focus and control them. Most human activities can be considered compositions of various skill-based actions into larger routines.
That composition calls on rule-based behaviors. A rule is a procedure you might have "stored", derived empirically, communicated by others, or planned. Goal-orientation starts to show up here, but may only be implicit in the rules. Feedback isn't always available, since long sequences of acts would be required to obtain it, but rules are nevertheless selected based on past success. While skills are generally unconscious, rules are explainable and based on explicit knowledge. The boundary between skills and rules is fuzzy, however, and depends a lot on a person's training and attention. My understanding is that with time and practice, rule-based behaviors can become skills.
In unfamiliar situations, or when you have no good rules to draw from, you have to switch to a higher conceptual level: knowledge-based behavior. This is where you explicitly come up with a goal, formulate a plan, and test it. This test can either be conceptual (understanding and predicting) or physical (by trial and error), both of which rely on an explicit mental model of the system.
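The three levels above can be caricatured as a dispatch process. This is a toy sketch of my own, not Rasmussen's formulation; names like `sensorimotor_patterns` and `stored_rules` are illustrative assumptions.

```python
# Toy sketch (not from the paper): which SRK level handles a situation.
# `sensorimotor_patterns` and `stored_rules` are invented names.

def respond(situation, sensorimotor_patterns, stored_rules):
    # Skill-based: a familiar pattern triggers smooth, automated action
    # without conscious attention.
    if situation in sensorimotor_patterns:
        return ("skill", sensorimotor_patterns[situation])
    # Rule-based: a stored procedure, selected on past success, applies.
    if situation in stored_rules:
        return ("rule", stored_rules[situation])
    # Knowledge-based: no rule fits, so formulate a goal and plan against
    # an explicit mental model (a placeholder string here).
    return ("knowledge", f"form goal, plan, and test for {situation!r}")

patterns = {"glass in reach": "grasp it"}
rules = {"alarm A sounds": "open valve 3"}

print(respond("glass in reach", patterns, rules))  # handled at the skill level
print(respond("alarm A sounds", patterns, rules))  # handled at the rule level
print(respond("novel leak", patterns, rules))      # escalates to knowledge level
```

The fall-through ordering mirrors the paper's claim that knowledge-based behavior is the costly last resort, entered only when no proven pattern or rule is available.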
The same information in the environment can have multiple levels of representation, which match the performance levels.
At the sensory level (which aligns with skills), we receive signals. They are continuously processed by the organism, and vary in time and space. They have no meaning on their own, though reactions to them can be guided by higher-level processes.
If the observed phenomena can be tied to stored patterns that they activate, they are considered signs. Signs are used to select or modify the rules that guide skilled behavior, but cannot be used for functional reasoning that generates new rules or helps you predict responses. They are mostly tied to actions and their states.
Symbols are what you would use for reasoning and computation. They refer to internal conceptual representations, relationships, and properties.
Rasmussen is careful to point out that the distinction between the three types does not depend on the form of the information itself, but on the context in which it is perceived. The following image is given as a brilliant example of this:
The same flowmeter can be read as a signal (keep the needle close to the set point by tracking it), as a sign (at position B in state X, do Y), or as a symbol (why is the needle not moving the way I expect it to?).
(note: I am thinking about his Ecological Interfaces paper, and I wonder if this isn't one of its unifying mechanisms: a good interface supports that switching between levels of abstraction and types of performance and perception. I would imagine that an interface that is just a text description forces you to function at a given level, though I'm sure at some point you'd end up finding patterns in the "shape" of the text and shifting levels regardless.)
Let's introduce the two terms right away: causes are behind physical events, and reasons are behind physical functions. A "reason" is like a "final cause" for choosing an approach, based on a kind of purpose, whereas "causes" tend to control function through the structure of the system itself.
This section is a bit different from the rest: rather than establishing these as distinct categories, it presents them as ways we organize information and between which we can shift. Basically, attention is limited and tracking the net of all causal relationships is far too complex, so relationships get clumped into simplified causal chains that map onto simple operations and offer ways to reason and backtrack.
The efficiency of human cognitive processes seems to depend upon an extensive use of model transformations together with a simultaneous updating of the mental models in all categories with new input information, an updating which may be performed below the level of conscious attention and control.
Three strategies are pointed out:
- Aggregation: elements get chunked together as familiarity with the context improves
- Abstraction: elements get transferred to a model category at a higher level
- Analogies and ready-made solutions: the representation is transferred to a category of model for which solutions or rules are already known
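The first strategy, aggregation, can be sketched in a few lines: a detailed causal chain gets chunked under a single name as familiarity grows, and can be re-expanded when needed. The step names are invented for the example.

```python
# Illustrative sketch of "aggregation": with experience, a detailed causal
# chain is handled as one chunk, freeing limited attention for reasoning
# at a higher level. All step names are made up.

detailed_chain = [
    "close breaker", "check voltage", "start pump", "open inlet valve",
]

# The whole familiar sequence collapses into one named operation.
chunks = {"start cooling loop": detailed_chain}

def execute(plan, chunks):
    # Expand any chunked action back into its detailed steps when needed,
    # e.g. when backtracking through a causal chain after a fault.
    steps = []
    for action in plan:
        steps.extend(chunks.get(action, [action]))
    return steps

print(execute(["start cooling loop", "log status"], chunks))
```

The dictionary lookup is doing the work of "familiarity with the context" here; the chunk hides detail during planning but keeps it recoverable, which is what makes backtracking through the chain possible.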
These imply an abstraction hierarchy, which is represented as follows:
So moving from the lowest levels to the highest is not simply a removal of detail; it is the addition of new information based on higher-level principles. The questions you ask about the same environment will differ based on the internal representation.
For example, events can only be defined as errors or failures in reference to intended states (higher-level invariants). The causes of these are often explained bottom-up, while the reasons for things working right can be explained top-down. The propagation of both, up and down the hierarchy, plays a role in the correction of errors and faults.
Similarly, system design can rely on working on multiple of these levels at once, and on finding ways to jump from one representation to the next. Inventions and new solutions can be described as going up a level of abstraction and then back down into a different implementation, one connected by functional meaning but previously disconnected. Shifting your level when modelling can sometimes yield better, simpler solutions, and leverage analogies.
The author concludes:
In order to switch from the one-sensor-one-indication technology to effective use of modern information technology for interface design, we have to consider in an integrated way the human performance, which is normally studied by separate paradigms. [...We] will be able to obtain some of the results needed more readily by conceptual analysis before experiments than by data analysis afterwards.