
Learning by Training

By Arend van Campen

SULLY & THE HUMAN FACTOR

Recently I watched the movie Sully, with Tom Hanks and Aaron Eckhart. Perhaps you have seen it too? It made me ask this question: who do incident investigators trust more - people or computers? For those who haven’t seen the movie or don’t know the story of US Airways flight 1549, I will briefly tell you what happened. Right after take-off from La Guardia Airport in New York, the plane encountered a flock of birds, several of which flew straight into both jet engines, causing them to fail. With no thrust available, the Airbus A320 lost altitude fast and could only glide towards destruction, because it could not return to La Guardia nor reach Teterboro airport. The only option was to land on the water of the Hudson River.

That decision saved the lives of all 155 people on board. Captain Sullenberger became a hero overnight. But it is what happened after this remarkable landing that I want to write about today. Airline and government authorities claimed that the pilot could have, and therefore should have, landed at either airport, because a computer simulation said so. Sully and his co-pilot Jeff Skiles were immediately accused of recklessness, of jeopardising the passengers and crew, and of destroying the airplane. Of course, Sully and his crew could not believe what they heard. Their handling of the aircraft, their split-second decision making and their successful landing were all called into question, because a computer had concluded otherwise. Their very humanity seemed irrelevant.

In an official hearing, it was this ‘Human Factor’ that Sully proposed as the decisive element. His 40-odd years of experience, the help of his co-pilot, his numerous landings and his overall command of this airplane type enabled him to decide, within 35 seconds, that the only chance lay in landing on water. His contemplation, thinking, empathy, deliberation, brain functioning, imagination, concern and friendship - to name just a few ‘human-only’ abilities - led him to the correct conclusion.

The question you may ask is: would a computer simulation have been able to do the same? Modern times may provide us with computer-generated algorithmic probability patterns, statistical chance calculations, even simulations of real-life situations, but these miss one crucial, indispensable factor: the human one. Complexity science teaches us that technological systems can only predict linear behaviour; they cannot foresee nonlinearity, which is simply too complex. Computer simulation does not include human decision making, because it can’t.

In our world of transport there is talk of unmanned ships, unmanned trucks and unmanned airplanes to move our goods from A to B. Imagine what would have happened if an autonomous airplane had been flying that day. Would it have landed on water, or would it have been programmed always to land on an airstrip? If the latter, it would have destroyed a large part of New York, because the human factor was not programmed into the software flying the Airbus. Perhaps it would be a good idea to think about re-humanisation rather than de-humanisation, because the latter is what is happening. Uncertainty cannot be programmed; only people can maximise safety.

This is the latest in a series of articles by Arend van Campen, founder of TankTerminalTraining. More information on the company’s activities can be found at www.tankterminaltraining.com. Those interested in responding personally can contact him directly at arendvc@tankterminaltraining.com.
