My notes and other stuff

2022/12/17

Paper: Sex and Death in the Rational World of Defense Intellectuals

This week's paper is an old favourite of mine, Sex and Death in the Rational World of Defense Intellectuals by Carol Cohn, published in Signs: Journal of Women in Culture and Society (1987). This is very much outside my wheelhouse, and one of the first papers I read coming from the humanities, probably close to a decade ago. In it, she tells the story of attending a summer workshop on nuclear weapons, nuclear strategic doctrine, and arms control taught by defense intellectuals in 1984, the people who defined the concepts of nuclear deterrence and strategy.

They formulate what they call "rational" systems for dealing with the problems created by nuclear weapons: how to manage the arms race; how to deter the use of nuclear weapons; how to fight a nuclear war if deterrence fails. It is their calculations that are used to explain the necessity of having nuclear destructive capability at what George Kennan has called "levels of such grotesque dimensions as to defy rational understanding." At the same time, it is their reasoning that is used to explain why it is not safe to live without nuclear weapons. In short, they create the theory that informs and legitimates American nuclear strategic practice.

She came at it with a skeptical eye, from a feminist standpoint, and spent weeks with them, wondering "how can they think that way?" After the summer program, she was offered a role at the center, and stayed with them for an extra year. She attended lectures, listened to and talked with defense analysts, learned their specialized language, and analyzed it. This specialized language is a huge focal point of the paper: it shapes and reflects how strategists think, and she asserts that it is in fact a central part of why analysts can think the way they do.

Listening

She points out that specialists have "clean" words to talk about strategy: "first strikes", "counterforce exchanges", "limited nuclear war", or "minimum deterrent posture". These abstractions and euphemisms, used by very normal men doing their job, create a distance between speakers and listeners and the reality of an actual nuclear holocaust:

Defense analysts talk about "countervalue attacks" rather than about incinerating cities. Human death, in nuclear parlance, is most often referred to as "collateral damage"; for, as one defense analyst said wryly, "The Air Force doesn't target people, it targets shoe factories."

She points out the ironies of using "clean bombs" when these bombs are 1,000 times more powerful than those that destroyed Nagasaki and Hiroshima and how they are discussed almost as if they were humanitarian efforts. As she states, there is an "astounding chasm between image and reality that characterizes technostrategic language."

nuclear bombs are not referred to as bombs or even warheads; they are referred to as "reentry vehicles," a term far more bland and benign, which is then shortened to "RVs," a term not only totally abstract and removed from the reality of a bomb but also resonant with the image of the recreational vehicles of the ideal family vacation.

[...]

Calling the pattern in which bombs fall a "footprint" almost seems a willful distorting process, a playful, perverse refusal of accountability—because to be accountable to reality is to be unable to do this work.

She also has a critique of the phallocentric language around rockets and missiles (visitors and allies are invited to "pat the missiles"), but points out that the imagery in the language wasn't chosen by these men; it was inherited from a broader cultural context. Part of its role is to render the weapons cute and harmless, rather than terrifyingly destructive. She posits that this sort of behavior and naming exists to minimize the deadly consequences of their work, to make the incredibly powerful bombs feel easily controllable.

One of the strategists she met described deterrence as "threatening to break the arm of his son" whose tv-watching habits he disapproves of. She points out the analogy sucks because deterrence is usually framed as a standoff between opponents of roughly equal force, which a father and son are not, but also that:

it is nonetheless extremely revealing about U.S. nuclear deterrence as an operational, rather than rhetorical or declaratory policy. What it suggests is the speciousness of the defensive rhetoric that surrounds deterrence—of the idea that we face an implacable enemy and that we stockpile nuclear weapons only in an attempt to defend ourselves. Instead, what we see is the drive to superior power as a means to exercise one's will and a readiness to threaten the disproportionate use of force in order to achieve one's own ends. There is no question here of recognizing competing but legitimate needs, no desire to negotiate, discuss, or compromise, and most important, no necessity for that recognition or desire.

She points out plenty of other terms like this. She also mentions that in all discussions about "vulnerability" or "survivability", it is the weapons systems that are vulnerable or must survive, never the humans involved.

Learning to Speak the Language

The author was expecting a lot of technical terms (she had spent years reading on the topic) but was not ready for how many acronyms and how much technical language she would encounter. She had thought of acronyms as mostly utilitarian: a way to speak and write faster, to abstract what you're talking about, and, in a way, to keep others from participating in the conversation. But she found another dimension to them.

She adds:

speaking about it with that edge of derision is exactly what allows it to be spoken about and seriously discussed at all. It is the very ability to make fun of a concept that makes it possible to work with it rather than reject it outright.

[...]

[The words] are quick, clean, light; they trip off the tongue. You can reel off dozens of them in seconds, forgetting about how one might just interfere with the next, not to mention with the lives beneath them. [...] Some of us may have spoken with a self-consciously ironic edge, but the pleasure was there nonetheless.

She says the words, in a way, let you have "cognitive mastery": even if you're working with technology that is fundamentally hard (if not impossible) to control and fully comprehend, you nevertheless get a feeling of mastery over the domain.

What Cohn mentions next is very interesting:

The more conversations I participated in using this language, the less frightened I was of nuclear war. How can learning to speak a language have such a powerful effect? One answer, I believe, is that the process of learning the language is itself a part of what removes you from the reality of nuclear war.

The act of learning the acronyms and the overall foreign language of experts became its own challenge, but it also created two different perspectives to work from, and placed you directly into the new one. She gives the following example of how she would previously have thought of nuclear war, based on survivor testimony from Hiroshima after the atomic bomb was dropped:

Everything was black, had vanished into the black dust, was destroyed. Only the flames that were beginning to lick their way up had any color. From the dust that was like a fog, figures began to loom up, black, hairless, faceless. They screamed with voices that were no longer human. Their screams drowned out the groans rising everywhere from the rubble, groans that seemed to rise from the very earth itself

Now here's the sort of lens used by the experts for a similar event:

[You have to have ways to maintain communications in a] nuclear environment, a situation bound to include EMP blackout, brute force damage to systems, a heavy jamming environment, and so on.

She points out there's no way to use the language from the second snippet to explain the events of the first one despite both covering the same type of "nuclear environment":

Learning to speak the language of defense analysts is not a conscious, cold-blooded decision to ignore the effects of nuclear weapons on real live human beings, to ignore the sensory, the emotional experience, the human impact. It is simply learning a new language, but by the time you are through, the content of what you can talk about is monumentally different, as is the perspective from which you speak.

[...]

Technostrategic language can be used only to articulate the perspective of the users of nuclear weapons, not that of the victims.

She states that speaking as an expert not only provides distance, control, and focus, but also a way to escape having to think about the consequences for the victims. This removal happens at a structural level, almost necessarily moving you out of the victim's seat and into the planner's or user's role. It changes your stance from passive and powerless to active and powerful:

effects of nuclear weapons systems become extensions of the self, rather than threats to it.

Dialogue

This is where, rather than just observing and learning the language, the author wanted to question and challenge a few assumptions: if submarines were so invulnerable, why would you need a strategic triad? Why plan the US military according to the Soviet Union's capabilities rather than really trying to gauge its intentions? On this latter point, she mentions that when you conflate what an opponent can do with what they intend to do, you end up committing vast resources to preventing scenarios even when they are absolutely unlikely.

She points out that it was nearly impossible to engage with the experts without using their technical language. Not doing so made them see her as ignorant or simpleminded, and it seemed not to occur to them that she willingly chose not to use the jargon. So she started adapting her language and noticed:

the better I got at engaging in this discourse, the more impossible it became for me to express my own ideas, my own values. I could adopt the language and gain a wealth of new concepts and reasoning strategies—but at the same time as the language gave me access to things I had been unable to speak about before, it radically excluded others. I could not use the language to express my concerns because it was physically impossible. This language does not allow certain questions to be asked or certain values to be expressed.

One example she gives is the word "peace": it is not available for use without looking like an activist outsider; the closest term she could use was "strategic stability", but that term refers to a balance in weaponry. As she tried to adjust, she found herself thinking less and less of people who would be incinerated by nuclear weapons. But the problem goes further than not thinking of them:

The problem, however, is not only that defense intellectuals use abstract terminology that removes them from the realities of which they speak. There is no reality of which they speak. Or, rather, the "reality" of which they speak is itself a world of abstractions. Deterrence theory, and much of strategic doctrine altogether, was invented largely by mathematicians, economists, and a few political scientists. It was invented to hold together abstractly, its validity judged by its internal logic. Questions of the correspondence to observable reality were not the issue. These abstract systems were developed as a way to make it possible to "think about the unthinkable"—not as a way to describe or codify relations on the ground.

In their abstract world, where estimates of the other side's capacity are well known and nuclear use can be "limited", there is no room for a field commander using tactical "mini-nukes" to turn around a losing battle, no EMP-generated failures, no "human errors" that disrupt comms networks. Actors are rational, free from the pressures of a population that just got bombed, from despair, or from most of the other factors that make decision-making messy. Instead, you get a calculus of megatonnage mathematically informing decisions.

What counts, she says, is the internal logic of the system, and changing the language wouldn't necessarily help: the abstract model they work with makes some concepts inherently inapplicable. But even granting that the abstract model can be useful to decision-making, some things are still off:

How is it possible, for example, to make sense of the following paragraph? It is taken from a discussion of a scenario in which the United States and the USSR have revised their offensive weaponry, banned MIRVs, and gone to a regime of single warhead (Midgetman) missiles, with no "defensive shield":

The strategic stability of regime A is based on the fact that both sides are deprived of any incentive ever to strike first. Since it takes roughly two warheads to destroy one enemy silo, an attacker must expend two of his missiles to destroy one of the enemy's. A first strike disarms the attacker. The aggressor ends up worse off than the aggressed.

"The aggressor ends up worse off than the aggressed"? The homeland of "the aggressed" has just been devastated by the explosions of, say, a thousand nuclear bombs, each likely to be ten to one hundred times more powerful than the bomb dropped on Hiroshima, and the aggressor, whose homeland is still untouched, "ends up worse off"? How is it possible to think this? Even abstract language and abstract thinking do not seem to be a sufficient explanation.

What she finds is that the whole field's reference point is centred on the weapons, not on any human being. Human factors are irrelevant. The attacker is worse off because they have fewer weapons left; starting a war is framed as making sure you're the one with the most weapons left at the end. So a MIRV with 10 warheads wins you the war because that single missile can destroy five enemy silos. It's a numbers game, and the language is complete to its practitioners.

The experts are not asking whether they should, or even could, use nukes this way; while they may consider these valid questions, they would mostly consider them out of scope.

The Terror

After a few weeks with experts, whom she found likeable and admirable, she realized that her surprise at their language and perspective slowly vanished, and it felt normal: "I had not only learned to speak a language: I had started to think in it."

Once she suspended disbelief and accepted the premises of the discipline, all of its consequences made sense, and were even interesting. She mentions at some points finding major insights, new approaches within the framework that made her really excited about having accomplished something, only to step back and realize that her new perspective was something she had already known for a long time outside the framework. As she puts it, "I began to feel that I had fallen down the rabbit hole", and it was "a struggle to climb back out."

She concludes:

The activity of trying to out-reason defense intellectuals in their own games gets you thinking inside their rules, tacitly accepting all the unspoken assumptions of their paradigms. You become subject to the tyranny of concepts.

[...]

Most often, the act of learning technostrategic language is conceived of as an additive process: you add a new set of vocabulary words; you add the reflex ability to decode and use endless numbers of acronyms; you add some new information that the specialized language contains.

[...]

However, I have been arguing throughout this paper that learning the language is a transformative, rather than an additive, process. When you choose to learn it you enter a new mode of thinking. [...] If we refuse to learn the language, we are virtually guaranteed that our voices will remain outside the "politically relevant" spectrum of opinion. Yet, if we do learn and speak it, we not only severely limit what we can say but we also invite the transformation, the militarization, of our own thinking.

Assuming that the new language lets you express your viewpoint and join the conversation can itself be misguided. At times, rather than informing decisions, the role of such technocratic language becomes one of legitimizing outcomes that have entirely different reasons behind them.