Men have an indistinct notion that if they keep up this activity of joint stocks and spades long enough all will at length ride somewhere, in next to no time, and for nothing; but though a crowd rushes to the depot, and the conductor shouts "All aboard!" when the smoke is blown away and the vapor condensed, it will be perceived that a few are riding, but the rest are run over,--and it will be called, and will be, "A melancholy accident."
These words, of course, are Thoreau's, from Walden. They apply to all of 19th century industrialization and to 20th century technology policy. Thoreau asks and answers three questions: Whom does it benefit? What does it do to us? And what are the implications for democracy?
His imaginary railroad benefits the "few [who] are riding." It destroys the "rest [who] are run over"; they have apparently been a means to an end; their lives paid for the building of the railroad. Where is democracy in all this? It is the process by which men acquired their "indistinct notion". In Thoreau's parable, no-one ordered the workers to build the railroad; to put it bluntly, they were suckered, by an ideal of opportunity, of "life, liberty and the pursuit of happiness", into believing that, if they voluntarily participated in the work, they would "ride somewhere, in next to no time, and for nothing." But it was a scam; in the end they were to constitute the raw material, the building blocks, of someone else's happiness.
Thoreau's three questions, simple as they seem, are not frequently asked at the introduction of new technologies; and when they are, they are not clearly answered.
Democracy and capitalism
I have said elsewhere that democracy and capitalism are like a predator and its prey, say a lion and a bull, yoked to a sled together. The sled may move forward temporarily, but sooner or later the lion may be expected to try to eat the bull.
In our society, a weak democracy in which elected politicians are more responsive to their contributors than to the people who elect them, capitalism introduces technology, and government, beholden to capitalists, rarely acts strongly or consistently to examine the results of these introductions.
In a democratic capitalism, decisions might be made at the intersection of the interests of all constituencies: corporate CEO's, in making technology decisions, might ask what is best not for shareholders alone but for employees, customers and the public as well.
The idea of a democratic capitalism implies a certain amount of financial renunciation by the shareholders; they must be forward-thinking individuals who perceive more benefit in earning ten while bettering everybody's interests than in earning twenty while doing others harm. In other words, our democratic shareholders are businesspeople capable of avoiding the tragedy of the commons--they will make the decision not to add the additional sheep that will destroy the village green.
Contrast the reality of our capitalist democracy, where shareholder value is everything, and decisions that put people out of work or poison them for the benefit of a few are routinely made by business, then endorsed, defended or ignored by government. Running companies so as to maximize shareholder value, taken to its extreme (and why shouldn't it be, in a society that allows and encourages it?), means the creation of a few large personal fortunes, built upon the destruction of the hopes and health of others.
If you think this sounds extreme, just read your newspaper. When a company announces that it is laying off four thousand people, its stock rises in the marketplace; when the government announces an increase in employment, the market gets depressed, for fear of wage claims and inflation. The interests of the shareholders are routinely conceived, and taught in American business schools, to be adversarial to everyone else's. "Life, liberty and the pursuit of happiness" are routinely conceived to mean "I'm all right, Jack, so fuck you," and "let me take mine now, while I can, no matter the consequences."
If businesses--powerful ones--are the shapers of our physical and social landscape, determining where buildings and factories will rise, how we will transport ourselves from one place to another, and (most importantly for democracy) how we will obtain our information about the world, then a threshold question must be answered: if democracy, as the Founders conceived it, is the expression of a people's will to shape their own affairs, what is the proper way for their voices to be heard where technology decisions are concerned?
No-one has said outright that it is our job to shut up and take our medicine, to provide our bodies to build the railroad regardless of the consequences. Anyone who made such an argument would be undercutting the foundations of a democratic system. So, instead of attacking democracy at its sources, those opposed to any intervention of the public will in technology decisions attack the tools by which the public protects its interests.
I have argued elsewhere that government, at least in a strong democracy, is an edifice of trust. The people may act through their government to oppose a decision that harms them, like the village council telling a sheep magnate that he may not add the extra sheep which will overwhelm and destroy the commons. But adversaries, who dare not strike directly at democracy, strike at "big government" instead, arguing that "life, liberty and the pursuit of happiness" means--indeed guarantees--their right to cause a tragedy of the commons. Government has no right to intervene in their pursuit of a dollar, though the consequences of the winning of that dollar may affect us all, and for a long time to come.
Our grasp exceeds our reach
Just one of the problems that business and government have collaborated to create is that of the storage of nuclear waste from reactors. These wastes contain many different radioactive materials, including plutonium isotopes with half-lives of 6,000 and 24,000 years. Once they are no longer needed in nuclear reactors, they must be disposed of somewhere. H.W. Lewis says in Technological Risk:
The disposal plan... is to pack the waste with its plutonium untouched, and bury it in deep caverns in a mountain in Nevada.... Standards for the repository have been established by EPA, and require that it remain intact for ten thousand years, by which time the radioactive materials will be relatively innocuous. EPA requires that the waste be stored in such a way that future people, presumed to be ignorant savages, will not be able to hurt themselves if they accidentally dig the stuff up. (Emphasis added)
Now, try a thought experiment. Pick any portion of the Earth and imagine you have stared at it for the last ten thousand years. There is constant motion--migrations, cities being built and declining, while other areas become agricultural or turn to desert. There is also constant violence--wars and murders. Humans more confidently manipulate the environment, pulling down hills and building pyramids. Every once in a while, a natural cataclysm occurs, such as an earthquake or flood.
All right. Now imagine that in the patch of ground you are staring at is a mass of plutonium, with a half-life of even six thousand years (let alone twenty-four thousand). The plutonium is embedded in glass, then placed in durable canisters. Now stare at it for six thousand years while the world goes on around you. How confident are you that the glass will not break?
How many jars has the human race ever created that remained unbroken for 6,000 years? Even the Greek amphorae and kraters you admire in museums are much younger than this--and most of those are reconstituted from shards.
Why, for that matter, do we feel that we have avoided the problem of contamination from an isotope with a half-life of 24,000 years if we specify that it be stored safely for ten thousand years?
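The arithmetic is worth doing explicitly. Here is a minimal sketch, using the standard radioactive-decay law; the only inputs are the 24,000-year half-life quoted above and the EPA's ten-thousand-year requirement:

    # Fraction of a radioactive isotope remaining after t years, from the
    # standard decay law: N(t)/N0 = (1/2) ** (t / half_life).
    def fraction_remaining(t_years: float, half_life_years: float) -> float:
        return 0.5 ** (t_years / half_life_years)

    # Plutonium with a 24,000-year half-life, at the end of the full
    # ten-thousand-year period the repository is required to survive:
    print(fraction_remaining(10_000, 24_000))  # ~0.75

When the mandated ten thousand years are up, roughly three quarters of the plutonium is still there.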
The real question is: What kind of decision-making process leads to a decision to distill a material so potent that, after being used, it must be guaranteed against accidents for ten thousand years? If I can't drive my car for five years without a dent, how can I trust unknown future generations to avoid nicking into some tons of buried nuclear waste for ten thousand years?
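To put a number on that intuition: the following sketch assumes a purely hypothetical one-in-a-thousand annual chance of an accidental intrusion; any such small yearly risk compounds toward certainty over ten millennia.

    # Chance of at least one accident over a long horizon, assuming a small,
    # independent annual accident probability. The 0.1% figure is a purely
    # hypothetical assumption for illustration, not a measured risk.
    p_per_year = 0.001
    years = 10_000
    p_no_accident = (1 - p_per_year) ** years  # about 4.5e-05
    print(1 - p_no_accident)                   # about 0.99995: nearly certain

On those assumptions, the ten-thousand-year guarantee amounts to a promise to beat odds of roughly twenty thousand to one.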
The truth of the matter: there was no decision-making process. There is only the tragedy of the commons. Anyone who supported nuclear power for his financial benefit could reasonably be accused of criminal recklessness towards his own grandchildren.
Our grasp has exceeded our reach: we can do things that we don't have the moral faculties to analyze and therefore to avoid. There is no better example than the ten thousand year stewardship of nuclear waste we have brought upon ourselves.
Tool use as a prisoner's dilemma
For those of you who haven't heard me talk endlessly about the prisoner's dilemma (P.D.), it is a metaphor from game theory, in which two separately caged prisoners must each decide whether to betray the other. Although the long-term consequences of cooperation are better, betrayal creates a tangible short-term benefit. The player who cooperates while his "partner" betrays him receives a "sucker's payoff", the worst result in the game.
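For the concrete-minded, here is a minimal sketch of the game; the numeric payoffs are the standard textbook convention, not anything in the essay itself, but they show exactly why betrayal tempts:

    # Canonical prisoner's dilemma payoffs (higher is better for the player).
    # The numbers follow the usual textbook convention:
    # temptation > reward > punishment > sucker's payoff.
    PAYOFFS = {
        ("cooperate", "cooperate"): (3, 3),  # reward for mutual cooperation
        ("cooperate", "betray"):    (0, 5),  # sucker's payoff vs. temptation
        ("betray",    "cooperate"): (5, 0),
        ("betray",    "betray"):    (1, 1),  # punishment for mutual betrayal
    }

    # Whatever the other player does, betraying pays me more in a single
    # round, even though mutual cooperation (3, 3) beats mutual betrayal (1, 1).
    for other_move in ("cooperate", "betray"):
        print(other_move,
              PAYOFFS[("betray", other_move)][0] > PAYOFFS[("cooperate", other_move)][0])

Within a single round, the logic is inescapable; it is only over repeated rounds that cooperation shows its value.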
We frequently think of the introduction of new technology as a matter far removed from any government consideration, in fact not subject to any intervention. Why? Because if we don't use it, someone else will. We must use it because someone else will--so let's shortcut any other decision-making process. Because if we are nonviolent and rational and choose not to produce poison gas or biological warfare weapons, someone else will, and as we are dying of gas or disease we will have the shame of the ultimate sucker's payoff.
This style of thinking puts us in the full cry of the P.D. We must play the betrayal card before the other player does. In fact, when the tool in question is a nuclear weapon, this logic leads inexorably to the idea of pre-emptive strikes:
If you say why not bomb them tomorrow, I say why not today? If you say today at 5 o'clock, I say why not one o'clock?
The speaker is John von Neumann, Princeton professor and the father of game theory.
Prisoners and commons
In the abstract--in a classroom--we can play an endless series of rounds of the P.D. without any consequences. In the real world, we can play a very limited number of rounds. When the particular game we are playing involves tools that affect our health or the environment, we can betray each other only a few times, perhaps only once, before we have obliterated life or our surroundings.
Thus the tragedy of the commons is really just a subset of the P.D. The decision to add the extra sheep and overwhelm the commons is the playing of the betrayal card in the P.D.
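The arithmetic of the commons makes the equivalence plain. A sketch with illustrative numbers of my own choosing: the herder captures the whole gain of an extra sheep, while the grazing loss is shared by everyone.

    # Illustrative commons arithmetic; the payoff numbers are assumptions
    # chosen only to show the shape of the trap.
    N_HERDERS = 10
    GAIN_TO_OWNER = 1.0  # the extra sheep's full value goes to its owner
    SHARED_COST = 1.5    # the overgrazing loss is split among all herders

    # To the individual, adding the sheep looks rational...
    net_to_owner = GAIN_TO_OWNER - SHARED_COST / N_HERDERS      # +0.85
    # ...but if every herder reasons the same way, the village loses.
    net_to_village = N_HERDERS * (GAIN_TO_OWNER - SHARED_COST)  # -5.0
    print(net_to_owner, net_to_village)

Each herder's +0.85 is the betrayal card; the village's -5.0 is the ruined green.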
Therefore, a first and necessary step in the formation of a morality of tool use is to abstract ourselves from the game. That's right: refuse to play. Decline the gambit; plead fatigue.
Unfortunately, when you are playing the P.D., there is no such thing as a non-move: a non-move is in fact a move. Depending on the circumstances, it may be cooperation or betrayal. If someone is undertaking a risky action in the understanding that you will act as a safety net, and you fail to act, you have played the betrayal card. On the other hand, when it comes to technological questions, a failure to act, or at least a refusal to act hurriedly, is more often an act of cooperation: I will not adopt nuclear energy until I understand its consequences; therefore I have cooperated with, not betrayed, humanity.
Playing the cooperation card, and disregarding the greater, more tangible, immediate pay-off of a betrayal, requires a leap of faith. All this means is that we should conduct technology the same way we would be well advised to conduct democracy: with humility and tolerance, and also optimism.
Technology is good or evil
Technologists and others advance a common fallacy: all technologies are morally neutral; it is the applications to which we put them which are moral or immoral.
This is not true. Every technology leans in some direction: each one changes us, and every change to a human being has consequences. Cars called for roads, which knit us together but slice through neighborhoods. In Democracy and Technology, Richard Sclove begins with the story of a Spanish town where the introduction of the washing machine ended the old community tradition of gathering in the town square to wash clothes in the fountain.
The NRA's oft-repeated statement that guns don't kill people is a variation on the theme that technology is morally neutral. The NRA is wrong. Adopting the gun means that we choose to live in a society of the gun, and endorse a philosophy of violence as an appropriate solution to human problems. Television, the Internet, nuclear weapons, nuclear plants, genetic engineering, inoculations against disease are all technologies that profoundly changed our moral landscape, or have the potential to do so.
A technology may be inherently evil, like the gun or nuclear weapons, if the changes it will work upon us are likely to promote violence or selfishness. It may be inherently good, like the Internet, if it is likely to promote learning, communication and ties between humans. Or it may be good only when used in accord with other technologies and measures: inoculations and antibiotics, introduced alone, contribute to population surges in countries where there is not enough food or employment.
The belief that technologies are morally neutral forces us into a kind of blindness, where we do not examine their impact before they are introduced. After all, if there is nothing about the technology itself which will lead us to conclude it is bad, we must allow it to be used and then examine how it has been used. Even then, because it is morally neutral, we must not change our minds about allowing it--we may only punish the people who use it badly. Of course, the more powerful the technology, the more absurd this philosophy seems--how does one punish the people who destroyed the world?
In reality, once a technology has been released into the world, it is very hard to return it to the bottle. We have a responsibility to ourselves and to our children to make intelligent and moral decisions about new technologies before they change our world.
Confusing technology with morality
While we are clearing cobwebs from our eyes, there is one other important problem to examine: we have a tendency to confuse technological differences with moral ones.
For example, as Colonel Dave Grossman points out in his book On Killing, soldiers who kill in hand-to-hand combat tend to suffer post-traumatic stress disorder, while pilots who drop bombs from thousands of feet in the air enjoy excellent mental health. Since killing at a distance is still killing, there is no moral distinction to be drawn; we have fooled ourselves into thinking that a technological distinction makes a difference. The lawyers have a saying, "a distinction without a difference," that is exactly suitable here.
Nazi Einsatzgruppen drove into cities, rounded up Jews and others, and shot them by the thousands, burying them in mass graves. British and American air forces dropped incendiary bombs on cities like Dresden and Tokyo, intentionally creating firestorms which killed 70,000 civilians at a clip. While the actions of the Einsatzgruppen are clearly war crimes to us, incendiary bombing of civilians is not. Why? The only difference is technological; it would be impossible to draw any moral distinction between shooting 70,000 civilians in pits and burning them up from the air.
Rather than being the way we avoid violence, civilization is the means by which we organize ourselves to commit violence. Technology aids us by allowing us to act powerfully and at a distance. Civilization and its technological tools allow us to avoid taking responsibility, by diffusing the decision-making across an organization while allowing the effects of violence to happen at a distance rather than under our eyes.
A moral methodology for tool use
Before adopting any new technology, we should begin by interrogating it and ourselves. We should ask, "Are we ready for you? What are your effects? Are you inherently good or bad for us? Are you consistent with the values we already hold?"
This interrogation should be rigorous, in fact merciless. Since each new technology changes the topography of our world, we must start by trying to foresee the way in which our world will change, and determine if this is desirable. It is not enough to ask who will make money--frequently the only question we ask today.
Writing, by enabling the storage of information outside the human brain and its unchanged transmission across generations, ended the oral world of tribal expertise. Even a development so seemingly minor, and completely forgotten today, as the replacement of tablets with papyrus, revised the map of the world; according to communications scholar Harold Innis, a world of heavy stone tablets is heavily centralized, while papyrus, which could easily be carried long distances by road, created a more distributed system of government. The printing press freed us from the tyranny of a small intellectual elite, but also allowed many stable authoritarian structures to be overthrown. The car transformed us from a nation of separate rural areas with strong personalities into a nation which is largely covered at both coasts by an endless urban sprawl without identity. The gun guarantees that we will continue killing each other, motivated by a belief (called forth by the technology and calling for it as well) that killing is a legitimate solution to our problems. Television pacifies us and binds us, guaranteeing that we will not do anything visionary or dangerous in our spare time. On the other hand, the Internet promotes the spread of dissension, intellectual alertness and unrest, as did the printing press five centuries ago.
If we tried to sum up our world in a few words, we could say that it is a society of the gun, the highway and the television broadcast.
The suggestion that we interrogate a new technology may sound dangerously reactionary. After all, society did "interrogate" the printing press; witness King Francois I's ban on printed books, various Papal bulls, the Index of prohibited books, and the burning of a few printers. Today, society is "interrogating" the Internet; witness the Communications Decency Act and the media hysteria about pornography and hackers on the Net.
The interrogation of a new technology will be reactionary if the society conducting the interrogation is itself reactionary. The Catholic church at the introduction of the printing press and the fundamentalist forces fueling the Internet hysteria today have a lot in common, and a lot to lose. But the interrogation we conduct of a new technology is precisely intended to determine whether it is consistent or inconsistent with the values we already hold. If we are committed in the United States to equality and strong democracy, for example, then we must ask whether a new technology will promote these goals. These are not the same questions asked by King Francois I.
History (embodied in printed books!) shows us that a beneficial technology such as the printing press triumphs despite strong opposition. Harmful technologies, such as nuclear power, may eventually die of their own accord (it is too soon to tell but there is certainly no rush to build new plants). That said, there is a virtue in going slow. The kings and bishops of the sixteenth century were not wrong to consider what the book might do to them. We rush more rapidly and foolishly into new things than they did.
The deterministic argument that good technologies will triumph and bad ones die without human intervention is false, however. We all may die proving that a bad thing is bad. How many more Chernobyls would you find acceptable--and how many worse Chernobyls--before you acknowledged that the cost of waiting for nuclear power to die had been too great?
We must try on a technology before the mirror, like clothing. If we live in a democracy today, we must ask what our democracy will look like with this technology.
Assuming that a new technology passes the initial interrogation, the next step is to pursue a dialog with it. Friendship is a good analogy: once you have determined that someone can be a friend, you never stop learning what she expects or offers and communicating your reactions to these stimuli.
We merge with our tools. It is quite natural, because they are extensions of ourselves. Our language betrays it: think how many nouns we have created by merging man and tool--gunman, busman, trainman, wingman. While it is idle to blame the tool and leave the human out of it (the NRA does have a point here), the human you are holding responsible has likely merged with a tool. And we want to use what we have. Horses never have a desire to use hands. Humans, like other animals, experience the world by manipulating it with the tools they have available. Dogs manipulate by biting; gunmen shoot. The gun induces the brain to divide the world into people who may be shot and people who may not be. Possession of nuclear weapons induces us to want to use them, even to create harbors in Alaska.
First the human calls the tool (this is the phase requiring interrogation). But later, the tool calls the human; this is the phase requiring dialog.
Here, the questioning is gentler: what will I become, merged with you? The answer, in the interrogation, may be not to adopt the tool at all; the result of the dialog may be, not to use it in certain ways. For example, an interrogation may have led us not to build nuclear plants at all, though it would certainly have led us to conclude that we wanted cars. Once we adopted the automobile, however, a more careful dialog might have resulted in a decision not to blanket the country with interstate highways as much as we have. There is a difference between the use and abuse of a technology. Some technologies may be used, while others, such as nuclear power and possibly television, can only be abused.
We must still look in the mirror and ask ourselves: do I like myself as the car-man, the Net-man, the gene-man?
We should never impose technologies on communities that do not desire them; we should find volunteers to try them, the way that software companies find beta testers for their products.
As a significant step in taking responsibility for our world, we should make sure, wherever possible, that new technologies are championed by the people who will be affected by their failure. As Charles Perrow points out in Normal Accidents, airlines are engineered more safely than oil tankers, because important people are killed when airliners crash, while no-one important is killed when tankers sink or explode.
Thus, rather than introducing a new technology everywhere at once, we might spend some years or even decades observing its effects in a community that offered, or even lobbied very hard, to host it. Consistent with democracy, we would need to monitor the situation to make sure that desperately poor communities that could best be helped in other ways were not making guinea pigs of themselves for money, rather than out of a desire for the technology. Because so much money is awash around new technologies, this would be a significant danger.
We have become acutely aware of the need to monitor our food and drugs (despite the Contract Republican theory that we don't really need the FDA). If we took a similar approach to new technologies, we would avoid the end result--unplanned and self-destructive growth--achieved by nuclear weapons, automobiles and television in this century.
If you and I went, with 10,000 others, to found a Mars colony tomorrow, I would suggest as part of our new constitution that we create a Technology Court. Its job would be to conduct the dialog with new tools described here.
Humans can make mistakes, and always do. A dialog conducted with television before its introduction might have led to the misapprehension that it was a democratic technology--informing the masses and educating our children. Experience would then have informed us that the volatile combination of spectrum scarcity and commercial sponsorship had in fact created a very anti-democratic medium, shutting out most voices on the political spectrum.
In our ongoing dialog with new technology, we must have the wisdom and the freedom to change our minds. Today, in our capitalist democracy, only market forces can quickly achieve this. If people aren't buying it, it goes away. If people are buying it for short term gratification but will be harmed by it in the long term, we have only government--nothing requires businessmen to make decisions for the long term, their own or anyone else's. Limited introduction of new technologies, in communities that seek them, and with a process for reviewing them at intervals rather than making a permanent commitment, would be consistent with a strong democracy.
To return to the example of broadcast television, a decision to separate ownership of the signal from ownership of the content, and to make the owner of the signal a common carrier who cannot deny anyone else the right to purchase broadcast time, would be a mid-course correction that might make broadcast television a democratic technology again.
Where money becomes entrenched, we are not free to make such decisions. The dominant role of soft money and PAC contributions in our political system is the strongest guarantee that inappropriate technologies will be introduced, will proliferate destructively, and can never be pruned back.
In the process Teilhard de Chardin called "hominisation"--the process of realizing our full capabilities as humans--a key step is acquiring the will to say no.
Today, we do things--like creating waste that must be sealed in jars for ten thousand years--because we can. Scientists work endlessly on the "can" and ignore the "should". Not everything that can be done should be done. Technological determinism elides the can and the should, claiming that anything that can be, will be. Here again we hear the curious resonance of the Second Law of Thermodynamics: the roar of entropy, as everything passes away; tout casse, tout passe, tout lasse (everything breaks, everything passes, everything wearies). But, just as our lives and the existence of all life oppose the Second Law, building higher orders from lower and fighting decay, nothing is inevitable where technology is concerned.
Once we have put aside the idea that we must develop new technology without any moral examination, we find other arguments, of which the most notorious is that if we do not develop it, someone else will. This engages us in a race, but we have already, though so young, become so powerful that it is a race to destruction. A new technology may be introduced in a godawful hurry, or in a measured way, but there is no such thing as a measured hurry. All races are unbalanced; if we proliferate bombs, we are proliferating the chance of accidents; technology is not inevitable but accidents are. Once we see clearly that death is at the end of the path, we must make the choice not to go down the path. The best way to break out of a race to destruction is for someone in the race to have a leap of faith, to take a chance on the humanity of fellow humans. The one taking the leap may die, but he will have died for life; the others in the race are living for death.
If we have done so poorly up until now, it is because, as de Chardin knew, we are in our infancy. We will not be fully human until we can speak with pride of something we did not do, and say, "We thought about it, but never built it, because we knew it would be bad."