As the Gulf oil spill continues, I find myself thinking about technology, risk and probability.
At the origins of human technology, the risk posed by a single tool was very limited. A badly thrown spear could kill an innocent person. Harmful global effects could only be produced by the concerted use of tools by numerous humans across generations and centuries. For example, the hunting to extinction of mammoths and sabre-toothed cats can be viewed as a consequence of the invention of the spear.
Today, a single tool can go wrong in a few minutes in a way that harms millions of humans, or the globe itself. The Gulf oil rig, spewing millions of gallons of oil relentlessly 5,000 feet under water, is an example. While many of us are at least vaguely aware of the cumulative risks of widespread drilling, I don't know that many of us understood the possibility that we could lose a body of water as large as the Gulf, for months or years, solely as the result of the failure of a single well. Yet that possibility is grimly before us now.
There is a doctrine of "technological determinism" which says that any possible technology will be invented and developed to its maximum capacity, almost as if humans had nothing to do with it, no ability to make choices or put on the brakes. This is a grimly naive or dishonest concept. Humans do have the ability to make effective technological choices together, but rarely exercise it.
Once we reach the point of building single tools which can harm or even end the world, we enter a realm of fairyland thinking in which the question of what happens when mistakes are made is largely begged. We put black boxes in airplanes because we anticipate they will crash, want to understand why they do, and hope to avert future crashes. I doubt there are black boxes in H-bombs (could any kind of data collection device survive the explosion?). Once technology gets powerful enough, we start relying on a totally unfounded premise that it is so well designed, that we are so good, that nothing can ever go wrong. The proliferation of nuclear weapons (the US just admitted it has more than 5,000) relies on the idea that we can go for centuries, forever, without ever detonating one by accident.
One of the most influential books I ever read was Charles Perrow's "Normal Accidents" (Princeton 1999). Perrow analyzed a wide variety of technology disasters and found some common themes. In most cases, two problems arose simultaneously, either of which would have been relatively trivial in itself, but which were fatal together. For example, a sensor which monitors temperature fails just as the unit it is meant to monitor overheats. The sensor would have been easily fixable if anyone had known it was broken; the overheating would have been quickly remedied if the sensor had been working. Together, the two failures lead to massive destruction and death.
Doubtless when we know the details of why the rig's failsafes failed, it will be an explanation of this type. A corollary to the creation, and failure, of complex systems is that no single human knows everything about them, the way a single craftsman once knew everything there was to know about the design, materials and aerodynamics of a spear. But the division of human knowledge inevitably leads to gaps both in information (how do we know what we don't know?) and responsibility (it's not my fault, it's that guy's).
Before anybody builds a tool which can go wrong and harm millions, whether the builder is a private company or a government, there should be a real process of deciding whether the game is worth the candle. As a lawyer, whenever anyone asked me the question "If I destroy a key piece of evidence instead of producing it in litigation, what are my odds of getting caught?", my unchanging answer was: "I am not in the business of calculating odds." My approach was always to assume you will be caught: can you then live with the consequences? People rarely could.
I think the same approach is warranted with technology. Assume at least one rig will gush uncontrollably for months every decade, or one nuke will go off unexpectedly every sixty years. Can we live with that? Is the benefit granted by the technology worth the cost in human life?
In fact, companies and governments make these kinds of decisions every day, but rarely admit they are doing so. Every bill passed by Congress will harm someone, and a remarkable number of bills will kill someone, but there is rarely any recognition of this fact. One of my favorite examples was the passage of legislation in New York which banned talking on cell phones while driving unless you had your hands free. On the first day of the law, a driver who pulled over to the shoulder of a highway so he could take a call without breaking the law was hit by an oncoming vehicle and killed. A law intended to save life took one on its first day.
There is a public, and naive, perception that every human life is precious, that we will routinely spend a billion dollars to rescue a child who fell down a well. The truth is the opposite: we trade a certain number of lives for anything we want. This math is clearest in wartime--more than 5,000 Americans and untold Iraqis and Afghans so far--but it exists in almost every field of public policy. Decisions about immigration cost lives--in Arizona people will now die rather than call 911 or seek medical assistance, for fear of deportation. Decisions about public health care clearly have cost a large number of lives. Decisions not to regulate mortgages and mortgage instruments have forced people into poverty and ill health and certainly brought about some premature deaths.
For every scandal about the math--think of the Ford Pinto, or thalidomide--a million other transactions in lives take place without attracting any public attention. But a single Ford Pinto, or thalidomide dose, is like one spear. Only cumulatively can these technologies do terrible damage. The decision to operate a single oil well, or nuclear power plant, is on another level altogether.
It follows that these decisions cannot be made privately, but must be made collectively. Only in ideological Libertarian-world would companies say, "If we kill people by accident, we will have to pay unacceptably large costs, or go out of business." In the real world, companies say, "It will never happen," and fail, in the fog created by ego, missing information and greed, to see that they have a problem. Clearly, we can't let Union Carbide make the decision about what is safe for the residents of Bhopal. It is worth noting that the company's then CEO, charged with culpable homicide in India and never extradited, lives in freedom and comfort in the United States.
In a world where we are again swinging in the direction of distrust of government, of smaller government, how do we act collectively to protect ourselves? It seems to me that democratic governments were invented for us to exert our collective will. Governments are (or should be) "us", not "them". When democratic government becomes the "other", it is usually because it has been captured by the very people the Libertarians would leave unfettered, the businesspeople.