Paper: Designing for Expertise
I'm back at posting older notes on some David D. Woods stuff: Designing for Expertise. It's a book chapter, but it's nevertheless interesting and cited here and there. It also has some of the most, uh, memorable graphics I recall seeing in serious literature.
First, the expert relies on a conceptual model, which is essentially a mental model: the things the expert knows about the domain that can be used to simulate what will happen. Designers essentially end up shaping how experts can form and augment these models. A basic thing they suggest in line with that is to replace the term "user" with the term "practitioner", because the people using the tech are not passive people having a product imposed on them; they're people doing shit, with objectives and challenges, who are sometimes relying on your product to get something done. Practitioners will modify unsatisfactory designs, devise workarounds, or simply abandon things that do not let them meet their goals.
So to predict how your tech is going to impact your experts, you gotta know what the hell expertise is, and have an idea what their expertise is. But you can't expect someone who designs a surgeon's tools to also be a surgeon on top of being a designer. This is something dubbed the Ethnographer's Challenge:
in order to make interesting observations, they have to be, in part, insiders in the setting they are observing, while remaining, in part, outside the domain in order to have insights about how practice works, how practice fails and how it could work better given future change. Design observations in the field of practice, where designers watch experts doing cognitive work, relies on being prepared to be surprised in order to distinguish unexpected behaviors that reveal how expertise works and how these experts work
You tend to end up with multidisciplinary teams where designers consult with experts to design for other experts. This can create clashes, because designers tend to look for simple solutions to problems whereas systems engineers assume that only complexity can cancel out complexity. Both the design-for-simplicity approach and the more analysis-based one are necessary, but neither is sufficient on its own. The members of this cross-disciplinary team end up having to gain some of each other's expertise to work well together. So this starts the chapter's long detour on defining expertise. There's a big section that contains a tour of the history of the study of expertise, which I'm eliding here, after which they conclude:
One of the key results is that expertise uses external artifacts to support the processes that contribute to expert performance – expertise is not all in the head, rather it is distributed over a person and the artifacts they use and over the other agents they interact with as they carry out activities, avoid failure, cope with complexity, and adapt to disruptions and change.
If you've read Don Norman's The Design of Everyday Things, this is a sort of reference to "knowledge in the head" vs. "knowledge in the world," but told in an academic manner.
Anyway, past that section, we get to initial definitions of expertise. The first perspective is one where expertise is definable in terms of how much domain-specific knowledge you have and how well organized it is. The more you know, the better you are. This perspective can be expanded by saying "hey, sometimes knowledge is social too," which changes things a bit to say that expertise is having a rich repository of strategies for applying knowledge based on context. This further means that a) expertise is domain-specific, b) experts adapt to changes, and c) they rarely act as solo individuals.
This gives them a list of 5 key attributes of experts:
- They are willing to re-adjust initial decisions
- They get help from others when uncertain and can identify experts in sub-domains
- They make use of formal and informal external decision aids
- They may make small errors but tend to avoid making big ones; they focus on not being wrong rather than being right
- They decompose complex situations into manageable chunks that can then be re-constructed
This is accompanied by a kite diagram I can't ignore. I get what they're going for here, but that sure is a choice of visual analogy:
The next question is how you identify people with the knowledge of experts. They mention:
- They perceive more stuff; they can extract information that non-experts will miss
- They have a good idea of what is relevant and when; they are less likely to get side-tracked
- They can simplify complex problems effectively; novices, by contrast, tend to oversimplify
- They can communicate information they are experts about.
- They can deal with more diversity in terms of situations encountered
- They can identify and adapt to exceptions
- They can identify changing conditions to know when to shift their strategies
- They're self-confident and trust their decisions
- They have a strong sense of responsibility
A lot of these characteristics make experts difficult to work with, but they also let experts identify other experts in their domain. So how do you acquire expertise? There are a couple of models.
The first one is: Novice (slow performers who follow rules), Advanced Beginner (they see patterns that support rules), Competent (lots of patterns known, hierarchical reasoning sequences, can deal with more situations, but still slow), Proficient (intuition takes over reasoning, decision structures are adapted to the context, and the knowledge flows naturally), and Expert (they know what needs to be done and can do it; immediate response to identified situations).
A second one is 10+ years of deliberate practice, going through 4 phases: 1. playful activity with the domain, where those with potential are selected; 2. extended preparation with trainers/coaches; 3. full-time engagement in practice and performance, making a living off of it; 4. making an original contribution to the domain by going beyond one's teachers and innovating.
That requirement for innovation is one of the tricky ones when trying to design for experts: the time spent by the designer in the domain can never match that of the domain expert. The observations can't easily be linked to practice, so there is a need for a very iterative process of trial and evaluation to anchor the design. This gives us that image, which isn't even as complex as it's gonna get:
The legend sort of explains what they mean. They're messy diagrams, and they try to load a lot of meaning into each one. The top one sort of puts you in a given role (the dotted circles with floating labels), and moving clockwise or counter-clockwise represents the activities required for design synthesis or analysis. The second diagram tries to put normal project labels on the map when used counterclockwise, for design creation (synthesis), to show how it would translate to practice.
The paper spends a couple of pages explaining the map, then introduces an even more confusing one, which tracks the development of the designer's domain-specific expertise as they interact with the domain, and the places where you may want an expert to compensate for your own lack of expertise there:
It took me a while to get it, but this is the first model of expertise development in black (flowing counterclockwise from 'novice' to 'advanced beginner' all the way to 'eminent expert'), along with significant activities in light grey (implementing change, directing observations, etc.), overlaid on top of Figure 8.2A. The big dashed lines are essentially "regressions", where an eminent expert put in contact with a new device or technology suddenly reverts to simply being "competent" and needs to gain new knowledge and mastery again, and the cycle partially starts over.
Anyway that's what I think it means, and it would have been better served by 2-3 different images in my opinion. It makes me feel like this is a screen grab of a powerpoint slide that has had 5 minutes of animation and explanations collapsed into one unfathomably complex still image.
What happens when you introduce new technology or solutions is that your assessment of the expert has also changed: since they've had to adjust to the new added complexity (building fancier conceptual models along the way), and since this happens in a broader system (where there may be other changing pieces of equipment, teammates, other experts, and unrelated people or interferences in play), each new thing you design becomes part of the environment and must now be accounted for. So as you understand expertise, you're able to better design for it, but as you do so, your understanding of expertise also melts away, because you changed what it means to be an expert!
This leads to once again reminding people that you can only design for experts with an ongoing collaboration between designers and experts.
The authors then summarize what expertise is once more, with extra factors that were added over the course of a few pages:
- Experts have learned, observed, and practiced for a long time. Their expertise is domain-specific, driven by context, and part of the social structure of the domain
- Expertise is both knowledge and skill in the understanding of observations in the context of a situation
- Expert practitioners have a model of their domain and strategies that they keep refining
- Expertise changes and evolves towards various improvements, and so do social standards in the domain
- Expertise is a form of contextual understanding that helps form new strategies to make sense of observations in those contexts
- Expertise is limited by the perspective of the expert. When needing to go broader, there is a need for collaboration
- At an eminent level, experts innovate and generate original contributions to the domain
Finally, this innovation factor means that anything you do that changes the field triggers ripple effects: more experts need to adapt, which in turn creates new design demands. Assessing the expertise of practitioners is therefore both a requirement and a consequence of design work.