Learning from High-Reliability Organizations

By Tara Hulen


There are many things that can go wrong during any given second on a nuclear-powered aircraft carrier: Planes can crash, radiation can leak, missiles can explode. Yet minutes, days, and years can go by without a single incident.

"This is an environment that should have a lot more accidents than it actually does," says W. Jack Duncan, Ph.D., who teaches classes in management at the UAB School of Business and in health-care organization and policy at the School of Public Health. Despite the potential for disaster that comes when jet fighters, nuclear reactors, and thousands of human beings are packed into a confined space, aircraft carriers are literally a textbook example of a "high-reliability organization."

Carrier crews, much like air traffic controllers and nuclear power plant operators, are able to consistently perform risky tasks with minimal errors. These pressure-filled environments offer valuable lessons for other groups who face life-and-death decisions on a daily basis, says Duncan, who is studying ways to bring those lessons to government agencies charged with disaster response.

The way that organizations become highly reliable, he emphasizes, is by making sure nothing becomes routine. "It's not just public safety, it's not just putting out a fire, it's not just triaging patients—they're constantly saying to themselves, 'What is the big picture? What are we really trying to do?'"

Talking It Out

Nursing researcher Jacqueline Moss, Ph.D., is working in a similar vein. She is studying ways to reduce errors in chaotic health-care settings and agrees with Duncan that high-reliability organizations share several traits. Primarily, they disdain ignorance. They are always working to determine the causes of common errors and to identify solutions before it is too late. They also recognize how expensive it is to fail, offering incentives to encourage their employees to think the same way, including rewards for those who point out problems. And they try to get all workers to focus on the big picture instead of their own immediate goals.

A key factor is a "culture of safety" that values the prevention of accidents more than the preservation of hierarchies, Moss explains. In other words, "if you think something's going on that would impair safety, your position is not jeopardized by going against the head of the organization and pointing that out," she says, noting that hospitals are learning to create an environment where even a housekeeper can question a physician's actions.

Most often, however, the problem is one of information, not intention. A landmark report from the Institute of Medicine in 1999 concluded that as many as 98,000 people die each year from medical errors. And "the top seven reasons for error were all related to lack of access to information," Moss says.

To help avoid errors, she notes, health-care teams need "the right information, in a timely fashion, to the right person, in the right amount, in a format that is accessible." Moss is now investigating technological solutions to support that process, such as "information system decision support" software that helps ensure patients get the right medication.

Knowing the Unknowable

Like health-care administrators, disaster-response planners have to manage groups who are widely dispersed, working under high pressure with limited information and limited means of communication.

Emergencies are "high-impact, low-probability events," says Duncan. But even though many situations can't be anticipated, agencies can learn to ask more questions before jumping in and possibly putting themselves and others in danger, he argues. "What you try to build are organizations that aggressively try to find out the kinds of things that they really don't know instead of blindly following the protocol."