My notes and other stuff

2023/03/02

Paper: Ten Challenges for Making Automation a "Team Player"

Here are my notes on one of the most useful texts around Resilience Engineering: Ten Challenges for Making Automation a "Team Player" in Joint Human-Agent Activity by Gary Klein, David Woods, Jeffrey Bradshaw, Robert Hoffman, and Paul Feltovich. I've referred to this one many times, and it's about the idea that automation should be considered a team player. The text lays out the challenges automation has to meet to be a good teammate, using human teamwork as the benchmark.

They first introduce something called the basic compact, which is a sort of [unspoken] agreement that people will work together. The idea is that if some of your goals are aligned, individuals on the team will be willing to trade off some of their own immediate objectives so that larger, longer-term objectives (either individual or shared) can also be met. It comes with the expectation that this compact will be continuously renewed and maintained, and that when faulty assumptions or misalignments are found, they'll be corrected. There's an element of reciprocity: each party invests in the compact expecting the others to do the same. Usually, someone abandoning the compact signals it clearly, whether willingly or not.

There's also a list of things required for proper coordination:

  1. Mutual predictability: you have to be able to guess/know what others will do. In high-pace activities, this is often done implicitly and through very short signals. In large bureaucracies, you end up with extended procedures to cope with the difficulty of staying predictable.
  2. Directability: when priorities change, it must be possible for an agent to tell another one to behave differently, and for the other to be able to adjust accordingly (a minimal sketch of this follows the list).
  3. Common ground: this includes the pertinent knowledge, beliefs, and assumptions that the involved parties share. The common ground is what lets you understand each other's messages, signals, and intents. It can erode fast and requires constant maintenance.
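
None of this code is from the paper, but to make directability a bit more concrete, here's a minimal sketch of what it could look like for a software agent; all the names here are hypothetical and mine. The point is that priorities aren't baked in: a teammate can redirect the agent at runtime, and the agent adjusts on its next decision:

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """A worker whose priorities can be redirected at runtime."""
    name: str
    priorities: list[str] = field(default_factory=list)

    def redirect(self, new_priorities: list[str]) -> None:
        # A teammate (human or machine) tells this agent to behave
        # differently; the agent acknowledges and adjusts itself.
        print(f"{self.name}: acknowledging new priorities {new_priorities}")
        self.priorities = new_priorities

    def next_task(self) -> str | None:
        # Work is always picked according to the *current* priorities,
        # so a redirection takes effect on the very next decision.
        return self.priorities[0] if self.priorities else None

agent = Agent("scanner", ["index-backlog", "reindex-nightly"])
agent.redirect(["handle-incident"])  # priorities changed mid-flight
assert agent.next_task() == "handle-incident"
```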

With this in place, the 10 challenges are defined. It's important to point out here that a lot of technology work aims at increasing the autonomy of computer systems, whereas the authors state that the real core need is to make them good collaborators in a joint cognitive system (a setting where multiple agents interact on a cognitive level):

  1. an intelligent agent must fulfill the requirements of a Basic Compact to engage in common-grounding activities: The agent must be able to know when it's struggling to meet demands, and let others know. It must be able to understand and accept joint goals, and know its role in the collaboration.
  2. must be able to adequately model the other participants’ intentions and actions: Are others having trouble? Are they accomplishing their tasks well? Has everyone adjusted to changes in the plan?
  3. team members must be mutually predictable: you have to be predictable, and able to predict the actions of others. Ironically, the more adaptable an automated agent is, the less predictable it becomes; operators may then become reluctant to give it more responsibility, since they find it harder to predict accurately and therefore won't trust it much.
  4. Agents must be directable: The authors believe that policy-based automation is possibly one of the most interesting approaches here, because you can change and adjust policies and expect a bunch of otherwise unrelated agents to adjust their behaviour accordingly. Policies also allow clear bounds on automation (a rough sketch of this follows the list).
  5. must be able to make pertinent aspects of their status and intentions obvious to their teammates: To make their actions sufficiently predictable, agents must make their own targets, states, capacities, intentions, changes, and upcoming actions obvious to the people and other agents that supervise and coordinate with them. You don't want a system you don't notice; you want one where people know what's going on.
  6. must be able to observe and interpret pertinent signals of status and intentions: that's the counterpart of the previous point; it's one thing to send signals, it's another to receive and act upon the signals of others. Every participant in a complex sociotechnical system will form a model of the other participating agents, as well as a model of the controlled process and its environment (a toy version of both sides of this channel is sketched after the list).
  7. Agents must be able to engage in goal negotiation: If the situation changes and requires adaptation, people must share their goals, understand those of others, be ready to adjust, and negotiate. Approaches centred purely on autonomy or on algorithms are incompatible with this kind of negotiation.
  8. Support technologies for planning and autonomy must enable a collaborative approach: the processes of understanding, problem solving, and task execution are necessarily incremental, subject to negotiation, and forever tentative. There is an assumption here that nothing is definitive and fully understood ahead of time, and there is a need for give-and-take across participants.
  9. must be able to participate in managing attention: Knowing when someone's busy or overloaded, and adequately adjusting your behaviour to omit less important information when others clearly don't have the bandwidth for it; a small sketch of this follows the list. (note: I'm looking at you, crappy blaring alarms.)
  10. team members must help control the costs of coordinated activity: All of the points above require energy and time. Investing in the basic compact helps lower these costs, and this implies agents investing in becoming more understandable to each other, and more aware of each other's knowledge and needs.
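
The paper doesn't prescribe implementations, but here's my own rough sketch of what the policy-based approach from challenge 4 could look like; all names and rules here are hypothetical. Agents consult a shared policy before acting, so swapping the policy redirects all of them at once and keeps automation within clear bounds:

```python
from dataclasses import dataclass

@dataclass
class Policy:
    """Shared rules bounding what any agent may do on its own."""
    allowed_actions: set[str]
    max_concurrent_restarts: int

class Supervisor:
    """An automated agent that checks the policy before acting."""
    def __init__(self, policy: Policy):
        self.policy = policy
        self.active_restarts = 0

    def may(self, action: str) -> bool:
        # Behaviour isn't hardcoded in each agent: they all defer to
        # the policy, so one policy change adjusts every agent.
        if action not in self.policy.allowed_actions:
            return False  # out of bounds: escalate to a human instead
        if action == "restart":
            return self.active_restarts < self.policy.max_concurrent_restarts
        return True

conservative = Policy(allowed_actions={"alert"}, max_concurrent_restarts=0)
permissive = Policy(allowed_actions={"alert", "restart"}, max_concurrent_restarts=2)

sup = Supervisor(conservative)
print(sup.may("restart"))  # False: humans stay in the loop
sup.policy = permissive    # one change redirects the agent's behaviour
print(sup.may("restart"))  # True, within the new bounds
```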
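
Challenges 5 and 6 are two sides of the same channel: publishing your own status and reading everyone else's. A toy version (again mine, not the paper's) could be a shared blackboard of status reports that every agent both writes to and reads from:

```python
import time

# A shared "blackboard" of status reports: each agent publishes its own
# state and intent (challenge 5) and reads its teammates' (challenge 6).
blackboard: dict[str, dict] = {}

def publish(agent: str, state: str, intent: str) -> None:
    blackboard[agent] = {"state": state, "intent": intent, "at": time.time()}

def teammates_needing_help(max_age: float = 30.0) -> list[str]:
    # Interpreting signals: a "struggling" report, or a stale one, is
    # the cue that a teammate may need help or a lighter load.
    now = time.time()
    return [
        name for name, report in blackboard.items()
        if report["state"] == "struggling" or now - report["at"] > max_age
    ]

publish("ingester", "ok", "draining queue")
publish("indexer", "struggling", "rebuilding shard 3")
print(teammates_needing_help())  # ['indexer']
```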
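
And for challenge 9, the alarm complaint above suggests an easy sketch: a notifier that checks the operator's load before interrupting, and quietly defers low-importance messages when there's no bandwidth for them. The thresholds and scales here are made up for illustration:

```python
def notify(message: str, importance: int, operator_load: float) -> str:
    """Decide how (and whether) to interrupt, given the operator's load.

    importance: 0 (chatter) up to 10 (wake someone up)
    operator_load: 0.0 (idle) up to 1.0 (fully saturated)
    """
    if importance >= 8:
        return f"ALERT now: {message}"  # always worth the interruption
    if operator_load > 0.7:
        return f"deferred: {message}"   # no bandwidth: queue it quietly
    return f"notify: {message}"

print(notify("disk 90% full", importance=5, operator_load=0.9))    # deferred
print(notify("primary DB down", importance=9, operator_load=0.9))  # ALERT now
```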

The paper concludes (in more polite words than mine) that most current automation is not very good. Mostly, these criteria highlight ways in which systems that try to be more independent and autonomous (to avoid requiring our attention) can actually make things worse (by acting like chaotic agents), and they suggest there could be potential in approaches that require less independent intelligence but are more legible and directable by others.

Their hope is that with enough progress, automation could be considered a teammate the way a novice or a child could be: subject to brittle and risky literal interpretations of language, events, nuance, and so on, but still better than the status quo.