Paper: Seeing the Invisible
This week's paper is a book chapter by Gary Klein called Seeing the Invisible: Perceptual--Cognitive Aspects of Expertise, from 1992. As is the pattern, a lot of the work I go through is by Woods or Klein because they're just titans of that stuff, and in this chapter, Klein tries to define what makes the difference between an expert, an adept, or a novice.
The first sentence is sort of the whole thesis: Novices see only what is there; experts can see what is not there. The question is why, or rather how? The paper first covers the difference between an expert and a novice, the development of experts, ways of framing expertise, and then the implications for their training.
So what's the difference between a novice and an expert? In physics problems, both students and experts were able to pick up the critical cues; the observed difference was that the experts could see how they all interacted. In tank battalions, novices can also name all the critical cues and things to look out for without getting overwhelmed. In medicine, the observation is that diagnoses are not really related to how thorough the practitioner is in cue acquisition, and higher levels of performance are generally not the consequence of better strategies for acquiring directly perceivable information.
The difference noted is that rather than being able to pick up more contextual cues, experts are able to pick up when some expected cues are missing. They're able to see things unfold, make more accurate predictions about what is about to happen, and form expectations accordingly.
There's also a difference between expertise and experience. A rural volunteer firefighter getting 10 years of experience may learn less than a professional firefighter spending 1 year in a dense, decaying city, although some minimum amount of time is required. We expect experts to make harder decisions more effectively, even in non-routine cases that would stymie others. You can spot experts because:
- variable, awkward performance becomes relatively fast, consistent, accurate, and complete
- individual acts and judgments are integrated into an overall strategy
- learning shifts from a focus on individual variables to the perception of complex patterns
- more self-reliance
There is a mention of the Dreyfus model already seen in designing for expertise, so I'm skipping it here, even if there's a sizable chunk of the chapter dedicated to it (it's the one going novice, advanced beginner, competent, proficient, then expert). There's also a mention that while we can expect experts to be pretty good at all things under their area of expertise, we shouldn't expect them to show mastery at all of them.
The chapter covers a bit of the literature about what makes experts different from novices, and settles on the idea that experts and novices don't use different strategies: they just have different knowledge bases to work with. Experts have more schemata, but both experts and novices reason by divide and conquer, use top-down and bottom-up reasoning, think in analogies, and hold multiple mental models. The richness of the knowledge base seems to be the difference.
There are however more subtle differences: novices tend to encode their models based on surface features, whereas experts tend to think in terms of deep knowledge (functional and physical relationships) and can better gauge the conditions and importance of information. The issue is: how can we train people? How do you teach that? Generally this means you just train people by giving them more and more information, which the authors don't dispute, but they want to look at the cognitive angle and how things change.
The first thing they mention is the ability to see typicality. To know what is normal and what is an exception requires having seen lots of cases. Identifying a situation as typical then triggers a lot of responses and patterns about courses of actions (what is feasible, promising, etc). This was observed in firefighters, tank platoons, design engineers, and in chess. In fact, at higher levels of expertise, this becomes sort of automated—it's not an analytical choice, more like a reflex, or automated heuristics. Particularly, this also comes with an ability to see what situations are atypical because expected patterns are missing. It has been found that for some physicians, the absence of symptoms is often as useful as their presence in making a diagnosis.
They also noticed that experts with this ability don't show a lot of skill degradation under time pressure, whereas journeymen do (blitz chess observations were behind this). Physicians don't really use an inductive process in diagnosis. Even if they're trained not to, they can't help but form early impressions. The idea there is that these early hypotheses, which are also found in software troubleshooting, could direct the search for more evidence, rather than just gathering facts over and over again.
How is this developed? Well, not by analogies. Analogies are used a lot by novices and journeymen, and rarely by experts. Though when experts use analogies, they're on point. One explanation is that as you gain more experience, things blend together and let you more easily reason about typicality. Another possible explanation is pattern matching (which can't be sufficient on its own, or experts would also struggle with novel situations). There's no great theory underpinning how this happens.
Experts can just see more things. The example is simple: watch Olympic gymnastics or diving, where you just go "well the splash was small so that had to be good" or "gosh that was a fast flip, amazing," and then the analyst points out 40 things that were imperfect but you'd never see unless it was in slow motion. This skill mostly forms when you get accurate, timely feedback on your judgments (and can validate your hit rate).
Seeing antecedents and consequences
This is essentially mental simulation to let you know how you got here, and where you're likely going. Doing this lets you evaluate a course of action without necessarily having others to compare it with; you just know if it's likely to be good or bad, regardless of alternatives. The more expertise you have, the further ahead you're likely to reliably project things, or the more likely you are to project backward in time and imagine how things got to where they are now.
Implications for training
For chess, the idea is that you need 10k-100k patterns, which takes ~10 years to acquire. It takes 5-10 years in many other disciplines as well. There is no reason to think you can train experts by showing novices how experts think. The only thing they tracked that could reliably help is metacognition (thinking about how you think about things, assessing your performance, framing yourself as a learner). They point out 4 strategies to improve perceptual skills:
- personal experiences: spend time doing things, but with a lot of variation in challenge and difficulty (e.g. 10 years of experience, not 1 year of experience 10 times)
- directed experiences: this is on-the-job training and tutoring. The challenge is in making sure your tutors know how to train people and pass their own experience on to others.
- manufactured experiences: this is a fancy way of talking about simulations and simulators. If expertise requires you to go through rare events, then you can make experts faster by making the rare events happen more often.
- vicarious experiences: storytelling and accounts from others such that the listener can learn the important lessons and signals from the person who lived them. War stories are a good way to get compressed experience.
One perspective in the chapter is to treat expertise or knowledge as a resource, which you then want to locate and develop.
So to do that, you have to be able to spot who the experts are. They define three criteria:
- performance: variability, consistency, accuracy, completeness, and speed. They don't actually point to great ways of evaluating this, and just use chess ratings as an example of a score that should predict how often games are won.
- content knowledge: the things you know. They mention approaches like building conceptual graphs, multidimensional scaling, or semantic nets. In short, lay out the information you know and organize it.
- developmental milestones: the Dreyfus & Dreyfus model mentioned earlier or Piaget's model are examples of this.
The paper concludes by reiterating that expertise is seeing what is not there, what is missing. The idea that experts have special strategies tends not to hold up to scrutiny; a broader knowledge base is instead what seems to be the differentiating factor. This is however disappointing (their words, not mine) because it doesn't tell us much about how to make more experts, so they suggest once again looking at how experts perceive things instead, and at ways to better transfer the experiences.
I'd probably like to see a more modern version of this that could build on the last 30 years or so of progress in cognitive science; I'm not quite sure where I'd find it though.