Paper: MABA-MABA or Abracadabra?
This week I'm re-reading Sidney Dekker's MABA-MABA or Abracadabra? Progress on Human–Automation Co-ordination. MABA-MABA stands for "Men-Are-Better-At/Machines-Are-Better-At": lists that categorize humans as good at some things and machines as good at others, implying that good automation design consists of letting humans do some tasks and machines do the rest.
A classic one is "Fitts' List" for function allocation, from 1951, which (paraphrasing) looks like this:

Humans appear to surpass present-day machines in the ability to:

- detect small amounts of visual or acoustic energy
- perceive patterns of light or sound
- improvise and use flexible procedures
- store large amounts of information for long periods, and recall relevant facts at the appropriate time
- reason inductively
- exercise judgment

Present-day machines appear to surpass humans in the ability to:

- respond quickly to control signals, and apply great force smoothly and precisely
- perform repetitive, routine tasks
- store information briefly and then erase it completely
- reason deductively, including computational ability
- handle highly complex operations, i.e. do many different things at once
These lists have resurfaced many times; Dekker names examples from 1965, 1972, 1974, 1980, 1987, and 1990, and states that even in the 2000s, people kept suggesting quantitative divisions of work based on these supposedly fixed strengths and weaknesses.
Usually, such a list is followed by a framework in which you assign a level of sophistication to your automation and are promised good results. The accompanying discussion of challenges leans on constructs like "operator complacency" or "situation awareness", which have little or no consensus behind them in the human factors community:
[These constructs] give the psychologically uninitiated (indeed the engineer who uses the method) an illusion of understanding; a mere fallacy of deeper access to human performance issues associated with his or her design.
This leads to something called the substitution myth. In short, MABA-MABA lists are an oversimplification that treats humans and machines in a system as interchangeable components that can be composed together. In practice, the functions are often defined by what the machines can do, and the humans have to fit in around them. Classic human strengths (reallocating activities to meet current constraints, anticipating events, learning from past experience, collaborating) are not part of the lists and fall by the wayside, so the lists end up misguiding the people who rely on them.
This concept is named "function allocation by substitution", and has a few issues:
[MABA-MABA lists] foster the idea that new technology can be introduced as a simple substitution of machines for people – preserving the basic system while improving it on some output measures (lower workload, better economy, fewer errors, higher accuracy, etc.)
Most of these lists, or frameworks that specify various levels of automation (think of autonomous driving levels, for example), tend to indicate how involved a supervisor might be, but they tend not to account for the cognitive work demanded in deciding how and when to intervene, and when to switch between levels. You end up with a list that defines what humans should or shouldn't do, but one that does not specify when or whether they should take over, interact, or back off.
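As an illustration (my own sketch, not from the paper; the level names and rows are hypothetical), imagine encoding such a levels-of-automation framework as data. The structure can express *who does what*, but notice what it has no place for:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AutomationLevel:
    """One row of a hypothetical 'levels of automation' ladder."""
    level: int
    machine_does: str
    human_does: str

# A MABA-MABA-style ladder: each row allocates functions...
LEVELS = [
    AutomationLevel(0, "nothing", "everything"),
    AutomationLevel(1, "suggests actions", "chooses and executes"),
    AutomationLevel(2, "executes if approved", "approves or vetoes"),
    AutomationLevel(3, "executes, then informs human", "monitors"),
    AutomationLevel(4, "everything", "nothing"),
]

# ...but nothing in the schema captures the hard part Dekker points at:
# when to switch levels, how a handover is negotiated, or how much
# cognitive work the "monitoring" human is actually doing.
def handover_cost(from_level: int, to_level: int) -> float:
    raise NotImplementedError("the framework is silent on this")
```

The schema is a faithful rendering of what these frameworks specify, and the unimplementable `handover_cost` is exactly the part they leave out.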
Dekker states that this substitution myth is propped up by the false idea that the strengths of people and computers are fixed, and that all you have to do as an engineer is capitalize on strengths while compensating for or removing weaknesses:
Capitalising on some strength of automation does not replace a human weakness. It creates new human strengths and weaknesses – often in unanticipated ways. For instance, the automation strength to carry out long sequences of action in predetermined ways without performance degradation (because of fatigue) amplifies classic and well-documented human vigilance problems. It also exacerbates the system’s reliance on the human strength to deal with the parametrisation problem (automation does not have access to all relevant world parameters for accurate problem solving in all possible contexts), but systems may be hard to direct even if the human knows what he/she wants it to do.
And allocating a function does not mean it comes without consequences. Letting the machine do something creates new demands on the human (say, typing, searching, or having to invoke the functionality).
Dekker warns that engineers who follow the substitution method can become "spectacularly ill calibrated with respect to the real consequences of technology change in their domain of work." Specifically, you predict a set of benefits and drawbacks for the automation, and then assume those are the only consequences it will have. For example, you might follow a list and expect more productivity, a lower workload, and more accuracy, along with predicted drawbacks such as complacency:
But none of these folk claims have a strong empirical basis, and in fact the inverse may be true. [A]utomation hardly ever ‘fails’ in a binary sense. In fact, manufacturers consistently point out, in the wake of accidents, how their automation behaved as designed.
He adds that rather than showing the complacency the lists predict, accidents such as those in aviation often involve operators who were highly active, trying to create safety and coordinate their activities and intentions; not people taking it too easy.
Part of the challenge is that automation does not just change measurable outputs; it also creates qualitative changes. People's roles are transformed, and they have to adapt and change everyday practices. This, in turn, creates further unanticipated consequences, such as demanding high levels of expertise from operators when the automation was intended to make their job easier.
(as a personal note, I'm now thinking of this article stating Airbus wants fully automated jets that can land and taxi themselves: "The functionality frees up the crew to focus on other crucial actions"—to which I ask, what would be more crucial than landing the plane?)
The problematic pattern is that:
it is not the technology that gets transformed and the people who adapt. Rather, people’s practice gets transformed and they in turn adapt the technology to fit their local demands and constraints.
Dekker states that designers have to recognize and accept that:
- design concepts represent hypotheses or beliefs about the relationship between human cognition/collaboration and technology
- these beliefs need to be tested and challenged; you have to actively look for evidence that they might be wrong
- they have to be open to revision as more is learned in the field of practice about how automation gets used
The substitution myth assumes that the humans and machines do not need to cooperate much, and reduces the whole relationship to some "you do this and I do that" barter. The real question shouldn't be who controls what, but "how do we get along together?"
That cooperative mindset is the core objective. How do you turn automated systems into effective team players? Generally this requires that each party's activities are observable to the others, and that all participants can be directed and influenced. You have to provide historical information, but also future-oriented projections so people can anticipate what the others are going to do. And all of this needs to come at a low cognitive cost that doesn't consume all your attention. At the same time, turning the automation into a passive component that needs to be micro-managed isn't helpful either, and wastes resources.
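One way to read that paragraph is as a design checklist. Here is a minimal sketch (my own hypothetical interface, not from Dekker; all names are made up) of what a "team player" contract for automation might expose, as opposed to a bare on/off actuator:

```python
from abc import ABC, abstractmethod

class TeamPlayerAutomation(ABC):
    """Hypothetical contract for automation-as-team-player:
    observable, anticipatable, and directable at low cognitive cost."""

    @abstractmethod
    def current_state(self) -> str:
        """Make the automation's own activity observable."""

    @abstractmethod
    def recent_history(self) -> list[str]:
        """Historical information: what it just did and why."""

    @abstractmethod
    def projected_actions(self) -> list[str]:
        """Future-oriented projection so humans can anticipate it."""

    @abstractmethod
    def accept_direction(self, instruction: str) -> bool:
        """Be directable: let the human influence behaviour mid-task."""

class Autopilot(TeamPlayerAutomation):
    """Toy implementation for illustration only."""
    def __init__(self) -> None:
        self._log: list[str] = []

    def current_state(self) -> str:
        return "holding altitude 3000 ft"

    def recent_history(self) -> list[str]:
        return self._log[-5:]

    def projected_actions(self) -> list[str]:
        return ["begin descent in 2 min", "reduce speed at waypoint"]

    def accept_direction(self, instruction: str) -> bool:
        self._log.append(f"accepted: {instruction}")
        return True
```

The point of the sketch is that observability and directability become explicit obligations of the design, rather than qualities the human is left to extract from an opaque system.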
He concludes, simply, that the engineer’s dream of a fully mechanized world is illusory:
Questions about fully automated systems are misguided as they reframe the debate about the human–machine relationship in the language of a gradual marginalisation of human input [...] What matters is the extent to which powerful automation allows team play with its human operators. What matters is how observable the automation makes its behaviour for its human counterparts, and how easily and efficiently it allows itself to be directed, even (or especially) during busy, novel episodes.
[S]ystem developers should abandon the traditional ‘who does what’ question of function allocation. Instead, the more pressing question today is how to make humans and automation get along together.