My notes and other stuff

2023/05/31

Paper: Accident Report Interpretation

This week's paper is titled Accident Report Interpretation by Derek Heraghty. In it, the author takes a real incident review from a construction site, writes 2 other variations of it using the same factual information that went into the original, sends each version to one of 3 distinct groups of people asking for recommendations, and compares the results. I particularly like this one because I once tried something similar in a far less scientific manner, and it's nice to see someone run that experiment for real.

I've covered other papers on language and interpretation before, and this one cites more of them, such as a study where describing crime as a "beast" led readers to propose enforcement-based solutions, while describing it as a "virus" led to proposed solutions focused on social reform. This paper looks specifically at incident reports, and posits that reports that evoke blame and fault in operators can have dire long-term consequences for organizations:

This leads organisations to believe that it is the operator who is the main contributor to accidents and it is the operator who is the problem within their system which requires rectification.

[...]

Workforces are much less likely to report mistakes for fear of retribution, creating an organisational culture where a chief executive officer (CEO) only learns of the problems within their organisation after a person is seriously injured. Using punishment to deal with error is likely to create a culture where the workforce resent the very system that was supposed to enable them to work safely because of it using them as the sacrificial lamb to appease society when something goes wrong. The greatest impediment to learning from mistakes is the use of punishment as you cannot have a system which punishes those who make mistakes while also trying to maintain a learning culture which needs people to be open and honest when an error is made.

The stance safety folks aim for nowadays is one where people ask how to fix the issues rather than the people, and where those involved in incidents get help to recover rather than being vilified.

Specifically, when writing a report, the idea goes that for accident investigators "what you look for is what you find": seek blame and you'll find blame; look for a systemic framing and you'll find systemic recommendations. Readers depend on the report to form their own views, and this paper tries to demonstrate that empirically by sending the 3 reports to 3 groups of 31 people, asking each reader for 3 recommendations, and analyzing the results.

I won't show the reports—most of the paper is all 3 of them put in appendices—but the broad strokes are:

  1. Variant 1: the original report written for an actual incident, which tries to be fact-based and produce an objective, linear telling of the events. In doing so, it ends up centered on the decisions and actions of workers, and tends to ignore the underlying conditions that influenced their decision-making.
  2. Variant 2: a report that takes a strongly system-based approach. Many frameworks exist (SCAD, FRAM, STAMP, Accimaps, ...), and the authors settled on a SWOT analysis focused on briefings, personnel, tools and equipment, the work environment, and task execution. The goal of this report is to show how different parts of the system dealt with both managed and unmanaged risks.
  3. Variant 3: follows the multiple-stories approach: rather than attempting a more objective re-telling, each participant's verbatim account of the event is made accessible, so that they are heard and can explain what was going on and what felt significant to each of them. Front-line operatives are seen as victims rather than perpetrators.

Specifically, all 3 reports are entirely factual (the authors did not invent anything that did not come up in the investigation), but the approach taken for each means some facts are omitted or emphasized differently.

The recommendations were then coded for analysis and the effect is quite noticeable:

A chart showing the results for each of the 3 variants across two categories: human/blame focus and system focus. Approximately, the first variant shows a roughly 28%/72% split, the second variant an 8%/92% split, and the third variant a 5%/95% split between human- vs. system-focused recommendations

A table showing all recommendations divided into 9 sub-categories for each report

Some interesting effects pointed out:

The author notes that reports like Variant 1 are often preferred because those in the style of Variants 2 and 3 are seen as opinion-based or hearsay rather than factual. In going that route, however, people "prove" errors in hindsight, often because that is easier than proving a system is "broken" when each individual part functions as designed. Variants 2 and 3 leave more room for the background information and individual factors that otherwise end up omitted.

Once again, these three reports start from the same data sources, but depending on how they are framed, they construct entirely different perceptions in their readers, who in turn propose different types of corrective actions, which impact different parts of the organization going forward.

In the end, Variant 1 often ends up focusing on "who" did something. Variant 2 shifts the focus to the "what" rather than the "who", and Variant 3 creates a focus on the constraints people were facing, along with a deeper understanding of the world they were operating in. The author suspects these framings are what drive people to make different recommendations and changes to the work environment.

The author concludes:

Our results suggest that the pursuit of a linear report based only on facts determined as important by the author may increase the potential for recommended actions to be blame-focussed and impede the organisation from dealing with more serious issues which have been deemed irrelevant by the author.

[...]

As with accident reports themselves, readers should take care in under or over interpreting the results from this study. The results are strongly suggestive that accident report style influences accident analysis outcomes—but alternate interpretations can be drawn from the results.