
Man v machine: overcoming human error in the workplace

In 2015, businesses across the world spent an estimated $75 billion on cyber security. But what do you imagine was the leading cause of data breaches? Perhaps hacking, or malware? Maybe it was even a raft of insecure networks? No, it was human error.

September 27, 2016

Errare humanum est

A study by the technology analysis firm CompTIA, conducted last year, found that 52% of data breaches in the companies surveyed were the result of human error. Breaking this figure down, roughly half of those breaches occurred because the “end user [failed] to follow policies and procedures”, with almost as many attributed to “general carelessness”. Most troubling, however, was that fewer than a third of the companies surveyed considered human error to be a serious concern within their firm, citing malware and hacking as graver risks.

The Romans were wise enough to realise that errare humanum est – to err is human – and this hasn’t changed since Seneca the Younger wrote the phrase around 30AD. Proper academic study of human error, however, only began in the late 1970s, following the Three Mile Island disaster, a partial nuclear meltdown at a US power plant. The event is still recognised as the worst nuclear accident in the country’s history and is thought to have been caused by a string of human errors. Over the following decade, industries from aeronautics to automobiles began applying the latest techniques and theories of human error to their operations. Complex computerised systems were introduced in the belief that they would guard against operator error, and new positions, such as risk assessment manager, emerged. It wasn’t long before people started describing human error as the cause of most disasters and malfunctions.

The Three Mile Island disaster is still recognised as the worst nuclear accident in US history. Photo: EPA/CHRIS GARDNER

From the famous collapse of Barings Bank (the former British merchant bank that once counted the Queen among its clients) to the Deepwater Horizon oil spill in the Gulf of Mexico, a whopping nine out of ten workplace accidents are the result of human error, according to research conducted by the global risk advisory Willis. Arguably one of the most catastrophic instances of human error in the business world, however, was the recent global financial crisis. The Financial Crisis Inquiry Commission, set up in the US, found that it was “an avoidable disaster caused by widespread failures in government regulation, corporate mismanagement and heedless risk-taking by Wall Street,” as the New York Times reported in 2011.

The commission put it more bluntly: “The crisis was the result of human action and inaction, not of Mother Nature or computer models gone haywire. The captains of finance and the public stewards of our financial system ignored warnings and failed to question, understand and manage evolving risks within a system essential to the wellbeing of the American public. Theirs was a big miss, not a stumble.”

It is not only in the world of business that human error can prove fatal. A study published by the medical journal BMJ in May reported that “medical error” – or human error – was the third leading cause of death in the US, accounting for an estimated 251,454 hospital deaths per year.

“Humans will always make mistakes, and we shouldn’t expect them not to,” the study’s lead author, Martin Makary, told the New York Times. “But we can engineer safe medical care to create the safety nets and protocols to address the human factor. Measuring the magnitude of the problem is the first step.”

Outsourcing human error

This, indeed, may be the first step, but solving the problem of human error is proving far from simple. One common suggestion is that, since to err is human, we should put our faith in things that supposedly do not make mistakes: computers.

Yet studies have found that we are not all enamoured, as some believe we should be, with the possibility of computers limiting our own inbuilt defects. In 2014, three University of Pennsylvania researchers coined the term “algorithm aversion” to describe people’s tendency to place their trust in other humans rather than in unbiased and, apparently, infallible digital hardware and software.

“Aside from their systematic failings, people get sick, tired, distracted and bored. We get emotional. We can retain and recall a limited amount of information under the very best of circumstances. Most of these quirks we cherish, but in a growing number of domains we no longer need to tolerate the limitations they entail. Nor do we have much to gain from doing so. Yet we seem determined to persevere, tending to forgive ‘human error’ while demanding infallibility from algorithms,” wrote David Siegel, co-chair of Two Sigma, an algorithmic investment management firm, in the Financial Times.

The driverless car is a case in point. Even though human error is said to be the major contributing factor in most car accidents, a great deal of scepticism remains about the future introduction of driverless cars – almost three-quarters of Americans surveyed by the motoring association AAA in March said they would feel unsafe in a self-driving car.

“The sooner we learn to place our faith in algorithms to perform the tasks at which they demonstrably excel, the better off we humans will be,” Siegel added.

However, if humans are prone to errors, then aren’t the most complex algorithms and computerised machines also capable of making mistakes? In 1997, the chess grandmaster Garry Kasparov sat down to play an IBM-designed supercomputer named Deep Blue. It was to be the ultimate battle of man against machine – the survival of human thought against the march of technology – and few doubted that the machine would triumph. They were correct. Towards the middle of the contest, however, a software glitch forced Deep Blue to make a random move. Kasparov, believing the machine to be perfect, responded as if the move were intentional and, making a strategic error of his own as a result, handed the game to the computer.

In their recently published book Only Humans Need Apply, authors Thomas H. Davenport and Julia Kirby attempt to reframe the debate about automation by arguing that, just like Kasparov, humans may be too quick to concede computers’ superiority. This stands in stark contrast to the more dystopian paperbacks on the shelves, such as Martin Ford’s Rise of the Robots, which predicts a future of human obsolescence and redundancy.

Risk homeostasis

The insinuation is that as we surround ourselves with technology that promises to be error-free and to limit our own human error, we respond by becoming more careless – again, like Kasparov. Take, for example, the argument that as we increasingly rely on a word processor’s spell-checker, there is less incentive for us to learn how to spell correctly ourselves. This theory has a name: risk homeostasis.

In 1982, Gerald J. S. Wilde, a professor at Queen’s University in Canada, proposed the controversial risk homeostasis hypothesis – also commonly known as risk compensation theory – after observing that people typically adjust their behaviour according to the level of risk they perceive. When Sweden switched from driving on the left-hand side of the road to the right in 1967, for example, there was a demonstrable reduction in traffic accidents over the following 18 months, before rates returned to normal.

Using this as his basis, Wilde suggested that drivers drove more carefully during those 18 months because the perceived risks were greater, and hypothesised that the reverse was also true: people drive less carefully when the perceived risks are reduced. A better-known study, conducted in 1994 by Wiel Janssen, contended that the introduction of the seatbelt actually led to a rise in traffic accidents: because drivers felt safer, they were more inclined to drive recklessly. There has even been the suggestion that the best way to ensure people drive safely would be to fix a large spike to the steering wheel, its sharp end pointing directly at the driver.

A man travels in a self-driving car model of the German multinational corporation Thyssenkrupp AG. Photo: EPA/JANOS MARJAI

There is, however, much debate about the merits of risk homeostasis, with many academics dismissing the theory as incorrect. Our trust in technology was also explored by the journalist Malcolm Gladwell in a 1996 New Yorker article titled ‘Blowup’.

“In the technological age, there is a ritual to disaster,” he wrote. “When planes crash or chemical plants explode, each piece of physical evidence – of twisted metal or fractured concrete – becomes a kind of fetish object, painstakingly located, mapped, tagged, and analysed, with findings submitted to boards of inquiry that then probe and interview and soberly draw conclusions. It is a ritual of reassurance, based on the principle that what we learn from one accident can help us prevent another.”

He took as an example the Challenger space shuttle, which exploded shortly after take-off in 1986. One of the largest salvage operations in American history was carried out in the immediate aftermath, with each piece of recovered debris studied and tested. It was discovered that the space shuttle had exploded because of a faulty seal in one of its rocket boosters. As Gladwell wrote, a special presidential investigative commission concluded that the disaster was the fault of “shoddy engineering” and bad management at NASA – in other words, of a string of human errors. The ritual of disaster was complete, and NASA set to work redesigning its systems to learn from past mistakes and prevent future ones.

Gladwell questioned, though, whether the assumptions that underlie our disaster rituals are correct: “What if these public post mortems don’t help us avoid future accidents?” He drew on a study by the Boston College sociologist Diane Vaughan, which asserted that the Challenger disaster wasn’t caused by people at NASA failing to do what they were supposed to; rather, it was caused by the exact opposite – a series of seemingly harmless decisions that, step by step, moved the space agency towards a catastrophic outcome. What seems obvious in hindsight wasn’t obvious at the time. What’s more, as Vaughan noted, “at NASA, problems were the norm…The whole shuttle system operated on the assumption that deviation could be controlled but not eliminated.”

One might justifiably extend this to most aspects of our modern world, from the breakneck speed of the financial industry to the intricacy of global economics.

“We have surrounded ourselves in the modern age with things like power plants and nuclear-weapons systems and airports that handle hundreds of planes an hour, on the understanding that the risks they represent are, at the very least, manageable,” Gladwell wrote. “But if the potential for catastrophe is actually found in the normal functioning of complex systems, this assumption is false. Risks are not easily manageable, accidents are not easily preventable, and the rituals of disaster have no meaning.”

He added: “We have constructed a world in which the potential for high-tech catastrophe is embedded in the fabric of day-to-day life.”

If this is correct, then our increasing reliance on technology in everyday life, and in the business world, is no guarantee of averting disaster or eliminating human error. As man and machine become ever more dependent on one another, we must appreciate the fallibility of both.

Don’t panic

However, lest all hope be given up, there are a number of steps that can be taken to mitigate the impact of human error. In a study titled “Can Technology Eliminate Human Error”, the authors suggested that a holistic approach must be taken to reducing human error: as much as worker behaviour and working methods must be improved, so too must the design of the organisational and computerised systems those workers operate. Fundamentally, vigilance is key; one can never assume a computer system to be infallible, nor a worker to be failsafe.

By extension, it is also essential that employees properly understand the key aspects of their jobs. A 2008 study of 400 businesses in the UK and US found that 23% of employees did not understand at least one essential aspect of their job – a failing whose combined economic cost was estimated at $27.4bn per year for the companies studied. Such misunderstandings are especially common among new employees and after changes to a company’s working methods.

“Drawing attention to incidents helps to raise awareness and enables employees to change their behaviour, so that the probability of an incident recurring is reduced,” David Hancock, head of risk management at Transport for London, was quoted as saying in a Willis report.

“Talking about risk helps to reduce it,” he added. “If a colleague tells you there’s an ice patch on which you almost fell, you’ll avoid it.”

And yet the onus must not fall solely on the employee. From the top down, companies must put in place policies that not only recognise human error as natural and a very real possibility, but also accept that it can be minimised – though never eliminated completely – only with the correct procedures in place, adequate training and constant interaction between workers with different skill sets.


