My bad opinions

2022/11/01

The Demanding Work of Analyzing Incidents

A few weeks ago, a coworker of mine was running an incident analysis in Jeli and pointed out that the overall process was a big drag on their energy level, and that it was hard to do, even if the final result was useful. They were wondering whether this was a sign that they were still learning what is or isn't significant when constructing a narrative out of the analysis.

The process we go through is a simplified version of the Howie guide, trading off thoroughness for time, the same way you'd cook frozen veggies for dinner on a weeknight instead of locally farmed organic produce, even if the latter would be nicer. In this post, I want to specifically address that feeling of tiredness. I had written my coworker a long response, which is now the backbone of this text, but I also shared the ideas with folks of the LFI Community, whose points I have added here.

First of all, I agree with my coworker that it's tedious. I also think their guess is a good one: learning what is useful or not takes a bit of time, and it's really hard to do an in-depth analysis of everything when what you're looking for is unexpected stuff.

You do tend to get a feel for it over time, but the other thing I'd mention is that the technique used in incident analysis (reading, labeling, and tagging data many times over) is something called Qualitative Coding Analysis. In actual papers and theses, you'd also calibrate your coding via inter-rater reliability measures. Essentially, the researchers look at all the data, wait for patterns to emerge, label them, and then ask other scientists to look at the labels and apply them to the source material. If the hit rate is high, confidence in the labels is higher, because it means different people interpret the events and themes in the same way.

This process ensures their thematic analysis is solid and unbiased enough to meet the standards of scientific peer review. Academics tend to pick their methodology, methods, interviewing, and tagging mechanisms very carefully because they have to be able to defend the whole research. When we tag our incidents through a tool like Jeli, we do an informal version of this. Our version is less rigorous (and therefore risks more bias and less thoroughness) but can still surface interesting insights in a fairly short amount of time, just not in a way that would survive peer review.
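
If you want a concrete picture of that calibration step, here is a minimal sketch of one common inter-rater reliability measure, Cohen's kappa, applied to two hypothetical analysts tagging the same ten incident messages. The tag names and the data are made up for illustration, and kappa is only one of several measures researchers might pick.

    from collections import Counter

    def cohens_kappa(rater_a, rater_b):
        """Agreement between two raters, corrected for chance agreement."""
        n = len(rater_a)

        # Observed agreement: fraction of items both raters tagged identically.
        p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

        # Expected chance agreement, based on each rater's label frequencies.
        freq_a, freq_b = Counter(rater_a), Counter(rater_b)
        labels = set(freq_a) | set(freq_b)
        p_expected = sum((freq_a[label] / n) * (freq_b[label] / n) for label in labels)

        if p_expected == 1.0:  # both raters used a single identical label everywhere
            return 1.0
        return (p_observed - p_expected) / (1 - p_expected)

    # Hypothetical tags two analysts applied to the same ten incident messages.
    analyst_1 = ["alerting", "handoff", "alerting", "tooling", "handoff",
                 "alerting", "tooling", "handoff", "alerting", "tooling"]
    analyst_2 = ["alerting", "handoff", "tooling", "tooling", "handoff",
                 "alerting", "tooling", "alerting", "alerting", "tooling"]

    print(f"kappa = {cohens_kappa(analyst_1, analyst_2):.2f}")  # about 0.70 here

A kappa close to 1.0 means the raters interpret the labels the same way; a value near 0 means agreement is no better than chance, which is a sign the labels need tighter definitions before anyone should trust the analysis built on them.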

Still, that more superficial analysis is demanding. It's part of something called the hermeneutic circle, which Ryan Kitchens described as looping over the same information continually with compounding 'lenses'. This is cognitively taxing, but useful for gaining insights that wouldn't have been visible from your own initial perspective.

Ryan also pointed out that incident analysts should recognize that, when doing the analysis, they are taking on an additional, distinct burden that no one else in the incident has, and that this affects how much of a toll the incident takes on them.

Eric Dobbs, for his part, states:

So many times I feel myself get lost in one forest looking for specific trees, then distracted by all the fascinating flora and fauna—then something snaps me out of it and I can’t remember what tree I was originally looking for. Finding my way back… It’s so exhausting.

All these efforts are done to surface themes. Themes are what let you extract an interesting narrative out of all the noise of things that happen in an incident. I like to compare it to writing someone's biography. Lots of things happen in someone's life, and if you want to make a book about it worth reading, you're going to have to pick some elements to focus on, and events to ignore or describe in less detail. That's an editorial decision that can remain truthful or faithful to the experiences you want to convey, while choosing to shine a light on the more significant elements.

This whole analysis serves the objective of learning from incidents. But learning isn't something you control or dictate. People will draw the lessons they'll draw, regardless of what you had planned. All you can hope for is to provide the best environment possible for learning to take place. In environments like tech, a lot hinges on people's mental models. We can't implant or extract mental models, so challenging them through experience or discussion is the next best thing. How people were making decisions, the various factors and priorities they were juggling, and the challenges they were encountering are all key parts of their experience that you want to unveil.

In short: the analysis is demanding because it requires repeated passes over the same data for themes to emerge, and those themes are what create the conditions for people to learn.

A final note on the editorial stance of the written review that follows your investigation: focus on the themes you think were interesting, and be descriptive rather than prescriptive. It may make sense to note insights or patterns people highlighted or felt were noticeable, but don't pretend to have the answers or the essence of what people should remember. I feel I do a better job of writing a report when I consider the task to be an extension of incident review facilitation: set the proper tone and present information so people can draw whatever lessons they can, from what is hopefully a richer set of perspectives with varied points of view.

You're not there to tell them what was important or worth thinking about, but to give the best context for them to figure it out.

Thanks to Chad Todd for reviewing this text.