Paper: Imaginaries of Omniscience
The last paper I annotated led people I was chatting with to make a statement along the lines of "it's funny how when we aim mainly for facts so it feels more objective, it ends up obscuring a much richer and more useful picture you'd get from more subjective descriptions of experience." Someone then nerd-sniped the conversation with a paper from Lucy Suchman titled Imaginaries of omniscience: Automating intelligence in the US Department of Defense, which I'm covering here.
It's a bit different from most of the papers I read, since it advances a political point of view by tying together US foreign policy and its approaches to unconventional warfare, the military's desire for AI as an approach to signal processing, concepts from cybernetics and the OODA loop, drone killings, and the importance of the press. That's a tall order, but it makes for a very interesting paper, albeit a tricky one to annotate.
The paper weaves in the history of US military policy throughout. That history is not my forte, but it's also impossible to separate from the paper's more cognitive aspects, so I'll try to give a quick rundown of the various points made by the author:
- Starting with WWII, US foreign policy has become a mandate for global military supremacy
- A metaphor for this stance is one of a "closed world" or "dome of global technological oversight" that started with Truman in 1946 and reinforced itself through Vietnam in the 1960s and the Cold War arms race in the 1980s
- That vision turned into "building weapons, systems, and strategies whose components could function in a seamless web", which the author describes as a "fantasy of total surveillance and complete control over the battlefield from the safety of a distant, high-tech command center"
- This caused further centralization of operations, and widened a gap between official discourse of success and pessimistic assessments of independent observers and soldiers on the ground
- The battlefield progressively turned into a "hunting ground"—special forces, pop-up bases, no lengthy occupation, and precision strikes as a favoured approach
- A shift to counter-terrorism, where resources are geared towards eliminating "imagined but potentially catastrophic" futures, an effort that itself creates the conditions for those futures to happen
the closed world and its theaters of operation rest upon an objectivist onto-epistemology that takes as self-evident the independent existence of a world ‘out there’ to which military action is a necessary response.
I had to look up "onto-epistemology" and it wasn't the clearest of things, but I understood it to mean what we know, how we get to know it, and how things come to be, specifically in this case a view based on the observation of demonstrable facts.
This puts a lot of the burden on the actual data gathering as an approach, coined under the broader term situational awareness ("the perception of environmental elements and events with respect to time or space, the comprehension of their meaning, and the projection of their future status"). In this view, the stimulus is considered external to the actor observing it, and responses that can be seen are considered to be an effect of having observed the stimulus.
In traditional cybernetics views, this is the "human in the loop" of weapon systems, which brings us to the cycle known as the "Observe, Orient, Decide, Act" (OODA) loop. A simple view of it looks like this:
This view, however, over-represents the decision-making aspect, and the more classical cybernetics loop looks more like this:
A lot of decisions are more like "automated" pattern matches based on the "orientation" part, which contains people's mental models of reality, and those models impact the predictions made about the effect of actions:
Within the context of war fighting, effective operations under the OODA model require that ‘our’ side have a shared Orientation, [...] consistent ‘overall mind time-space scheme’ or a ‘common outlook [that] represents a unifying theme that can be used to simultaneously encourage subordinate initiative yet realize superior intent’. At its imagined ideal, this shared mental model obviates the need for explicit command and control, as the force operates as a single body.
Situational Awareness can be framed as covering both the "Observe" and "Orient" steps. Being able to observe and know what is going on is what sets the US toward the goal of "information dominance." However, this leads to a spiral where more sensors are needed, then more processing, and therefore more automation.
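To make the "automated pattern match" point concrete, here is a toy sketch of an OODA-style loop (entirely my own illustration; none of the names or structure come from the paper): the orientation is a fixed mental model that both filters what gets observed and supplies pre-patterned responses, so anything outside the model is simply never seen.

```python
# Toy illustration (not from the paper): "Orient" as a mental model that
# filters observations and short-circuits decisions into pattern matches.

class OODA:
    def __init__(self, mental_model):
        # mental_model maps recognized stimuli to pre-learned responses
        self.mental_model = mental_model

    def observe(self, raw_signals):
        # Observation is already shaped by orientation: signals the model
        # has no category for are effectively invisible
        return [s for s in raw_signals if s in self.mental_model]

    def decide(self, observation):
        # Not deliberation: a direct pattern match against the orientation
        return self.mental_model[observation]

    def act(self, raw_signals):
        return [self.decide(s) for s in self.observe(raw_signals)]

agent = OODA({"blip": "track", "silhouette": "flag"})
actions = agent.act(["blip", "anomaly", "silhouette"])
# "anomaly" never reaches the Decide step: the model has no category
# for it, so the loop can only reinforce the patterns it started with.
```

The design choice worth noticing is that the filtering happens before any "decision": a richer stream of raw signals does nothing to widen the loop's view if the orientation itself never changes.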
This, in turn, made the DoD buy into AI as a solution. Still paraphrasing a lot of content here:
- A Defense Innovation Advisory Board (DIB) was created, staffed with Silicon Valley people, to fund "start-up" military R&D projects
- They started pushing for AI, harder and harder, as the promise of better-than-human capabilities is taken as definitively possible, both in terms of accuracy and in properly identifying who is or isn't a target
- Any risk or demonstrated weakness in AI is seen as a need for even further investment in AI
- Data sharing across departments and historical standards is excessively difficult
- They believed the commercial sector was best equipped to lead AI development with an enterprise cloud solution
- Most sources are really vague about where the training data would come from
- By making the future of security rely on big data, big data becomes its own weak target, and data centers must be housed in bunkers with various security mechanisms to prevent sabotage
In the end, the author states:
Utopian futures of profit and a conjured specter of disaster are conjoined. The disaster anticipated redoubles itself, as the promised solution to an insufficiency of data becomes a new site of vulnerability.
Circling back to the OODA loop: because orientation is a crucial step, it follows that orienting the now-unavoidable AI is also a key element if we want it to make decisions rapidly and effectively. A few issues exist there, such as trying to gather so much noisy data that you can extract adequate signal (which tends to never pan out well), but also because the labelling of that data and the orientation of the AI tend to be a re-encoding of existing "imperial impulses", which come with their share of violent patterns, often aligned on racial lines.
Technology plays an important role in legitimizing some of these views by putting a sharp focus on some elements and placing unwieldy ones outside of its scope. For example, target identification demands making life and injury decisions based on sensor data, and fundamentally demands a classification on a civilian/combatant axis, an ultimately binary choice. As conflict shifted toward civilian areas, however, the space for someone to be considered a "civilian" has consistently been shrinking, legitimizing more and more "extra-legal state violence".
Keeping the idealized image of AI requires suppressing inconvenient truths, and good discrimination of patterns within signal and noise demands having defined those patterns in the first place, which relies on existing ideologies. The author lists various military incidents, drone killings, the tendency of the US military to blow up the evidence (and bodies) of those who could let them know whether a decision was actually adequate, and so on.
Or, to make it short: the OODA loop approach with AI in a military context, in its chase for objective facts, tends to ignore its own initial framing and creates a poorer, narrower view of the world that reinforces its own existing patterns. To counter that effect, the author concludes that rather than improving the effort at data gathering and analysis, situational awareness could be improved by broadening the frame of reference used, if not by outright reversing it:
Expanding situational awareness would require an inversion of current practices so that all of those killed in an operation would be assumed innocent until an administration was able to prove otherwise.
The aspiration to closure, integral to the logics of an international order based in military dominance, propels the destructiveness of a US foreign policy that regenerates the insecurities that it ostensibly eradicates. The closed world relies, moreover, on forms of systemic ignorance required to maintain the premise that war fighting can be conducted rationally through a seamless web of technologically generated situational awareness.
This premise rests upon the conflation of signals with information, through erasure of the situated knowledges through which information is produced.
I have suggested that the most powerful alternative to closed-world knowledge making is investigative journalism and other modes of on-the-ground research and reporting. These accounts convey the radical openness of war, foregrounding its associated injuries, challenging the military’s attempt to make clean demarcations where there are none to be made, and demonstrating knowledge-producing practices that do not fit the military’s imaginaries of omniscience.
Once again, this was a challenging paper for me to review, very much outside my wheelhouse, but given my recent reviews and the current AI context, its self-reinforcing dynamics were difficult to ignore as a source of generalizable lessons.