Paper: How in the World Did We Ever Get into That Mode?
There's a concept I often refer to called mode error, and many months ago I learned more about it from a paper by Nadine B. Sarter and David D. Woods titled How in the World Did We Ever Get into That Mode? Mode Error and Awareness in Supervisory Control.
The paper's core argument, particularly in the context of the cockpit, is that automation requiring the operator to select and monitor modes in the name of flexibility actually tends to increase the cognitive burden rather than decrease it, and that designers should be aware of this effect.
For example, an automated cockpit system such as the Flight Management System (FMS) is flexible in the sense that it provides pilots with a large number of functions and options for carrying out a given flight task under different circumstances. Pilots can choose from at least five different methods at different levels of automation to change altitude. This flexibility is usually portrayed as a benefit that allows the pilot to select the mode best suited to a particular flight situation. However, this flexibility has a price: The pilot must know about the functions of the different modes, which mode to use when, how to "bumplessly" switch from one mode to another, and how each mode is set up to fly the aircraft as well as keep track of which mode is active.
A basic way to create mode error is to change the rules from one mode to the other, such that in the wrong context the same action has counter-intuitive results. Specifically, to properly select a mode, you must already know and keep awareness of everything from the current situation to the desired one, and then know enough about how the automation works to also know what to monitor and how. The assertion is that a lot of that cognitive cost is a function of the automation's design.
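As a software analogy (my own, not the paper's), here's a minimal sketch of a modal interface where the very same keystroke means different things depending on a mode the user has to track in their head:

```python
# Hypothetical vi-like editor, for illustration only: the same key does
# different things depending on an invisible mode, which is the seed of
# mode error.

class ModalEditor:
    def __init__(self):
        self.mode = "insert"   # current mode is state the user must track
        self.text = []

    def press(self, key):
        if self.mode == "insert":
            if key == "ESC":
                self.mode = "command"
            else:
                self.text.append(key)       # 'd' types a letter...
        elif self.mode == "command":
            if key == "i":
                self.mode = "insert"
            elif key == "d" and self.text:
                self.text.pop()             # ...or deletes one, per mode

ed = ModalEditor()
for key in "hi":
    ed.press(key)
ed.press("ESC")
ed.press("d")            # meant to type 'd', but the mode changed its meaning
print("".join(ed.text))  # → "h"
```

The action is identical; only the hidden context decides whether it creates or destroys data, and the user carries the burden of knowing which.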
As automation got more complex over the years, it went from small, simple feedback loops that did little with few modes, to advanced automation with tons of functions, configuration options, and sub-modes. The richness of automation tends to map to a proliferation of modes, and therefore to an ever-increasing cognitive burden. The longer the feedback loop, the worse it gets and the less reactive the human can be. To help with that, a lot of automation also started gaining ways to change its own modes by itself, which in turn makes it more and more likely that the operator loses track of which mode is in use, particularly when operator input can inadvertently change the mode.
They refer to an airplane crash (Indian Airlines Flight 605) where the pilot put the automation into an open descent mode, in which the flight is controlled by pitch with the throttles going to idle, unlike the speed mode that was desired for that phase of flight:
As a consequence of going into OPEN DESCENT, the aircraft could not sustain the glide path and maintain the pilot-selected target speed at the same time. The flight director bars commanded the pilot to fly the aircraft well below the required profile to try to maintain airspeed. It was not until 10s before impact that the crew discovered what had happened—too late for them to recover with engines at idle. How could this happen?
One contributing factor in this accident may have been that there are at least five different ways of activating the OPEN DESCENT mode. The first two options involve the explicit manual selection of the OPEN DESCENT mode. In one of these cases, activation of OPEN DESCENT is dependent on the automation being in a particular state. It can be selected by pulling the ALTITUDE knob after selecting a lower altitude, or it can be activated by pulling the SPEED knob, provided the aircraft is in the EXPEDITE mode at that time.
The other three methods of activating the OPEN DESCENT mode are indirect in that they do not require the explicit manual selection of a mode. Rather, they are related to the selection of a new target altitude in a specific context or to protections that prevent the aircraft from exceeding a safe airspeed. In the case of the Bangalore accident, for example, the fact that the automation was in the ALTITUDE ACQUISITION phase resulted in the activation of the OPEN DESCENT mode when the pilot selected a lower altitude.
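To caricature that indirect activation in code (a made-up sketch of my own, not actual FMS logic), the operator changes a target value and the automation reinterprets that input as a request for a different mode:

```python
# Made-up sketch, not real FMS logic: selecting a new target altitude
# indirectly activates a different mode, with no explicit mode command.

class AutoFlight:
    def __init__(self):
        self.mode = "SPEED"        # thrust manages the selected speed
        self.target_alt = 10000

    def set_target_altitude(self, alt):
        # Side effect: a lower target altitude flips the active mode.
        if alt < self.target_alt:
            self.mode = "OPEN_DESCENT"   # throttles idle, pitch for speed
        self.target_alt = alt

af = AutoFlight()
af.set_target_altitude(4600)   # the pilot only meant to change a number...
print(af.mode)                 # ...but the active mode changed along with it
```

Nothing in the operator's action said "change modes"; the transition is a side effect they must know to anticipate and monitor for.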
They mention how the operator's mental model may differ from what the automation actually does, so they aren't properly primed and ready to even look for signals about a mode change. Such signals may look like this:
Spotted it? That's the top-right (V/S +25 vs. FPA +2.5, standing for a vertical speed of 2500 feet per minute versus a flight path angle of 2.5 degrees). Easy, and not confusing at all. The gotcha? Most of this confusion happens during high-pace events:
The problems in coordination between pilot and automation (e.g., automation surprises) occurred primarily in the context of nonnormal, time-critical situations: for example, aborted takeoff, disengagement from an automatic mode during approach for collision avoidance, and loss of the glide slope signal during final approach.
Overall, only 4 of 20 participants responded completely correctly in managing the automation during the aborted takeoff, and 1 of these 4 pilots explained that he did so because he was trying to comply with standard procedures, not because he understood what was going on within the automation.
Most of the pilots knew only one of the several methods to disengage the mode, and 14 pilots also "knew" at least one inappropriate method that could lead to a delayed response to the ATC request. In the case of the glide slope loss during final approach, about half of the pilots were not aware of the consequences of this event in terms of FMS behavior. They could not explain the effects in the debriefing, and some even had difficulty detecting the occurrence of the problem during the ongoing simulation.
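Going back to the V/S +25 vs. FPA +2.5 display: the ambiguity can be sketched in code too. This is my own illustration with made-up numbers and a back-of-the-envelope conversion, not from the paper, but it shows how the same dialed digits can produce very different descent profiles depending on the active mode:

```python
import math

# Illustrative sketch: the same dialed digits ("25") mean a 2500 ft/min
# descent in V/S mode, but only a 2.5 degree path in FPA mode.

def descent_rate_fpm(mode, dialed, ground_speed_kts=150):
    if mode == "V/S":
        return dialed * 100                    # dialed in hundreds of ft/min
    # FPA: dialed value is tenths of a degree; convert the angle to ft/min
    angle = math.radians(dialed / 10)
    speed_fpm = ground_speed_kts * 6076 / 60   # knots to feet per minute
    return round(speed_fpm * math.tan(angle))

print(descent_rate_fpm("V/S", 25))   # 2500 ft/min
print(descent_rate_fpm("FPA", 25))   # far gentler, for the same digits
```

One small shared display field, two interpretations several times apart; the mode annunciation is the only thing disambiguating them.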
The paper also describes cases where the pilot enters a new flight path and is then surprised that the plane doesn't react to it, because they had forgotten to also change the mode to follow the new path. The guess here is that pilots enter their data the way they would hand instructions to a human: if I'm giving you a new path, it's because I expect you to act on it.
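That trap can be sketched as well (again my own illustration, not the paper's): the route data is accepted silently, but nothing acts on it until a separate mode switch arms it.

```python
# Illustrative sketch: entered data is stored but inert until a distinct
# mode change makes the automation act on it.

class FlightManager:
    def __init__(self):
        self.pending_route = None
        self.route = ["WPT_A", "WPT_B"]
        self.mode = "HEADING_HOLD"       # not following the programmed route

    def enter_route(self, waypoints):
        self.pending_route = waypoints   # accepted, but not acted upon

    def engage_nav_mode(self):
        self.mode = "NAV"
        if self.pending_route:
            self.route = self.pending_route

    def active_guidance(self):
        return self.route if self.mode == "NAV" else "fly current heading"

fm = FlightManager()
fm.enter_route(["WPT_C", "WPT_D"])   # pilot expects the plane to turn...
print(fm.active_guidance())          # ...but nothing happens without NAV mode
```

The human-to-human convention ("I gave you instructions, so follow them") silently fails against an automation that separates data entry from intent.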
They highlight many problems:
- Designers don't/can't anticipate how their automation will transform the work people do
- Training should give more room for experimentation, which would allow better mental model formation
- Interfaces tend to be opaque and have poor support for observability (they can give a state, but not the context around it, and the pilot is left to guess)
This results in common questions: "what is it doing?", "why is it doing that?", "what will it do next?", and finally: "how in the world did we get into that mode?"
The things to do to help:
- reduce the number of modes and their complexity, even if the market keeps asking for more and more of them when buying from a checklist
- new training approaches that are better tailored to dealing with automation (knowledge activation in context, to make sure it does not become inert)
- increasing training for rare but critical situations—and consider this a necessary trade-off when adding automation
- better interface design for mode awareness. The authors caution that operators often have their visual field already busy, so this may not work well, and suggest kinetic or auditory cues, or displaying recent data to let people shift the cognitive burden over time.
- adding forcing functions (the user can't do a thing until they clear or acknowledge a thing), but this only works in well-defined situations where few accepted behaviours exist.
- have the system obtain the human's consent before switching modes; this is seen as a good cooperative approach, but may create bottlenecks.
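The last two items could be combined roughly like this (my own sketch, assuming a simple propose/acknowledge protocol): a mode change becomes a pending request that has no effect until the operator consents, which is both a forcing function and a consent step, and also shows where the bottleneck risk comes from.

```python
# Illustrative sketch of a consent-based mode switch: the automation can
# only propose a new mode; nothing changes until the operator acknowledges.

class ConsentingAutomation:
    def __init__(self):
        self.mode = "SPEED"
        self.pending = None

    def propose_mode(self, mode, reason):
        # Forcing function: the new mode cannot activate on its own.
        self.pending = (mode, reason)

    def acknowledge(self):
        if self.pending:
            self.mode, _reason = self.pending
            self.pending = None

auto = ConsentingAutomation()
auto.propose_mode("OPEN_DESCENT", "lower altitude selected")
print(auto.mode)       # still "SPEED": nothing changed without consent
auto.acknowledge()
print(auto.mode)       # now "OPEN_DESCENT", with the operator aware of it
```

The trade-off is visible in the structure itself: every transition now waits on a human, which keeps awareness intact but queues up work in time-critical moments.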