
History of nuclear accidents illustrates absurdity & irresponsibility of nuclear weapons

7 April 2014


London Review of Books, 23 January 2014

HOW WORRIED SHOULD WE BE?

by Steven Shapin

[Review of Eric Schlosser, Command and Control: Nuclear Weapons, the Damascus Accident, and the Illusion of Safety (Penguin, September 2013), 632 pp., $36.00, 978-1594202278.]

‘Anything that can go wrong, will go wrong.’ That’s known as Murphy’s Law. It’s invoked in all sorts of settings, but its natural modern home is in engineering, where it is generally attributed to a remark made around 1950 by an aeronautical engineer called Ed Murphy, who was working on the design of rocket sleds at Edwards Air Force Base in California. In the mid-1950s, when Murphy’s Law wasn’t yet widely known under that name, Admiral Lewis L. Strauss, reflecting on the political and administrative troubles afflicting him, suggested that ‘a new law of knowledge’ be recognized and called Strauss’s Law after him: ‘If anything bad can happen, it probably will.’ At the time, Strauss was chairman of the Atomic Energy Commission, which had the responsibility for producing and maintaining America’s nuclear weapons, and the things that can go wrong with the control of such weapons are as bad as it gets.

Nuclear weapons are designed to detonate as the result of specific types of human intention. Explosions are the sharp end of elaborate, and constantly evolving, ‘command and control’ systems put in place to ensure that these weapons are used only as and when intended by legitimate political authority. Although there were concerns at one time that a high proportion of nuclear weapons would turn out to be damp squibs, or miss their targets by a wide margin, their designers, at least in the original nuclear states, are now confident that they will for the most part work as they are meant to.

But nuclear weapons can, in theory, go off accidentally. There have long been arguments about the chances of accidental explosions — failures of command and control in which weapons are detonated when no one intends they should be or when control is seized by an illegitimate party. Some people believe that the risk of accidental detonation has always been oversold. First, the novels Red Alert (1958) and Fail-Safe (1962), and then, based on Red Alert, the 1964 film Dr. Strangelove, put in play the idea that all-out nuclear war could happen as a result of technical flaws or through the actions of one or a few madmen, but, despite all the Cold War cold sweats, we’re still here. After 9 August 1945, there has never been either an accidental nuclear detonation, or, several thousand tests excepted, an intentional explosion of any sort. The U.S. has built some 70,000 nuclear weapons since the end of the Second World War, and currently possesses about 4,650, none of which has yet detonated accidentally or without authorization. So how worried should we have been about nuclear explosions, intentional or accidental? How worried should we be now? What has been the relationship between the possibility of accidents and the command and control systems meant to prevent them and to guarantee intentional outcomes, or between nuclear risk and the political structures in which nuclear weapons are embedded?

Since 1947, the Bulletin of the Atomic Scientists has had a Doomsday Clock icon on its cover, set to indicate how close to Armageddon we’re reckoned to be. In the beginning, when the U.S. was the world’s only nuclear power, holding only a few atomic bombs, the clock was set at seven minutes to midnight. In 1953, with both the Soviet Union and Britain joining America as nuclear states, and with the introduction of thermonuclear weapons, it was advanced several minutes. Since then, the time on the clock has varied between 11:58 and 11:43, reflecting test-ban treaties, arms races, and nuclear proliferation — but, again, we’re still here. Immense stockpiles of weapons vastly more devastating than the Hiroshima and Nagasaki bombs have been accumulating for almost seventy years, guarded by fallible human beings, loaded on ready-to-go bombers and mounted on missiles primed to fly at a moment’s notice, but the world hasn’t ended, and over time it’s become more difficult to work up collective hysteria, or even serious concern, about the possibility of nuclear annihilation, intended or accidental. It’s a state of affairs sometimes offered as solid proof that the use of nuclear weapons solely as a deterrent is highly effective and that the systems for keeping them safe against accident or theft work flawlessly. If you really need a bout of apocalyptic anxiety, then worry about climate change, or pandemic influenza, or drug-resistant bacteria, or meteorites doing to us what they did long ago to the dinosaurs.

That’s one way to steady nuclear nerves: learn to stop worrying and accept the bomb, even if you can’t bring yourself to love it. It’s a position that has its advocates. A few years ago, John Mueller’s Atomic Obsession: Nuclear Alarmism from Hiroshima to al-Qaida urged a relaxed attitude: far more has been spent on nuclear weapons than can be justified by any sensible political strategy; they aren’t of much military use; their proliferation presents little danger; fears of nuclear accidents aren’t justified. Historically, wars just don’t begin by accident and there’s no reason to think that nuclear war would be an exception. A sober effort to reduce the chances of an accidental nuclear explosion might be worthwhile, but ‘the myopic hype and hysteria that have so routinely accompanied such efforts are not.’ Writing in the Daily Telegraph recently, Gerard DeGroot declared that the system for preventing nuclear accidents ‘works,’ and that, even if an accidental explosion did occur, it would be unlikely to mean the end of the world. There is a market, he said, for books that frighten us and little or none for those that reassure: ‘Apparently we prefer hysteria to soothing logic in matters atomic.’

The book DeGroot had in mind is Eric Schlosser’s brilliant Command and Control, a gripping, joined-up history of American nuclear strategy and nuclear accidents over the past sixty years or so. The broader story is intercut with a minute-by-minute reconstruction of an accident that took place on 18 September 1980 in Damascus, Arkansas, when the socket from a socket-wrench carelessly dropped by a low-level Air Force maintenance man caused the explosion of a liquid-fuelled Titan II missile in its underground silo, pitching its nine-megaton W-53 thermonuclear warhead — then the most powerful in the American nuclear arsenal — into a ditch hundreds of yards away. Accidents happen in the best regulated families, nuclear families included, and Schlosser assembles information about how many U.S. accidents have happened, how close we’ve come on many occasions to unintended explosions, and the consequences that might plausibly have followed. There have been numerous so-called ‘Broken Arrow’ accidents, in which nuclear weapons were lost, released from aircraft, or launched without authorization, or in which fires or conventional explosions occurred or radioactive materials were released. The Pentagon has admitted to 32 Broken Arrows between 1950 and 1980, while other government experts estimate a much larger number of ‘significant’ incidents — 1,200 between 1950 and 1968. Using the Freedom of Information Act, Schlosser obtained a Pentagon document titled ‘Accidents and Incidents Involving Nuclear Weapons’ and dealing with the period from 1957 to 1967: it was 245 pages long and had been kept secret from some of the government’s own nuclear safety experts.

Nuclear weapon accidents differ in kind over time, reflecting changes partly in the design of the weapons and their delivery systems and partly in systems of political and military control. Every technological system, as Charles Perrow has observed, has its characteristic ‘normal accident.’ The things that Murphy’s Law says will (or can) go wrong depend on the design of technical and human systems that are intended to ensure the weapons work. In the early atomic age, bombs were meant to be delivered by aircraft and, for safety reasons, their fissile cores (or ‘pits’) were kept apart from the cases containing wiring systems and conventional explosives which implode the core and cause the detonation. The pits might be secured in separate buildings at air bases, and then loaded onto planes, where they were inserted before the bomb was armed and released. In July 1956, an American B-47 bomber stationed at Lakenheath in Suffolk took off on a training flight to practice touch-and-go landings. It was not carrying a nuclear weapon but, on one of its touchdowns, it veered off the runway and smashed into a shed containing a cache of Mark 6 atomic bombs. The crew was killed, but the outcome was nevertheless a great piece of luck: the weapons destroyed by the subsequent fire did not contain their fissile pits, which were stored in a nearby ‘igloo.’ If the bomber had crashed into that igloo, Schlosser writes, ‘a cloud of plutonium could have floated across the English countryside.’ In March 1958, a Mark 6 bomb lacking its nuclear pit was accidentally released from another B-47 and landed in the backyard of Walter Gregg’s family home in Mars Bluff, South Carolina. Its high-explosive casing detonated, digging a 35-foot crater, destroying the family Chevrolet, killing six chickens and deeply embarrassing the Pentagon, which had led the American public to believe that such things couldn’t happen.

Two months earlier, the rear tyres of another B-47, based in Morocco, had blown out while it was practicing runway maneuvers. The plane was carrying a ten-megaton-yield Mark 36 hydrogen bomb; its plutonium pit was onboard but hadn’t yet been inserted into the conventional explosive casing. The resulting fire melted the bomber and its weapon into a radioactive slag heap, spreading plutonium over the emergency crews and over the surrounding area, though, since proper monitoring equipment wasn’t available, we don’t know how much of the country was contaminated. The Pentagon did tell the king of Morocco, but otherwise it was thought best to keep the accident secret.

‘Sealed-pit’ designs were first developed in the mid-1950s and later became standard, so cutting out the time-consuming process of bringing pit and casing together. These designs met the military’s need for the weapons to be as ready for use and reliable as possible, but, as Schlosser points out, there was always a tension between the demands of safety and those of readiness and reliability, between a weapon ‘always’ exploding when it was meant to and ‘never’ detonating when it was not meant to. Civilian designers wanted to err, if a choice had to be made, on the side of ‘never’, while the military, preferring ‘always’, won the medium-term historical argument. If a bomb with a sealed-pit design were accidentally jettisoned from a plane, anyone living where it landed might not be as lucky as the Gregg family.

On 17 January 1966, a B-52 carrying four Mark 28 hydrogen bombs and a crew of seven collided with its refuelling tanker over the village of Palomares in southern Spain. Four of the crew ejected and survived, and a search was mounted for the bombs. Three of them were located on the ground; one was intact but the high explosives in the other two had partially detonated, dispersing plutonium across the countryside. The fourth bomb was found intact months later half a mile deep in the Mediterranean, and was eventually retrieved with the help of a manned submersible. The government declared that there was ‘not the slightest risk’ in eating fish, meat, and vegetables from the impact zone, but that was more than they knew at the time, and the U.S. military collected and burned 4,000 truckloads of vegetables from the area, then dug up heaps of Spanish soil and transported it in steel drums to South Carolina for burial.

From the mid-1950s, the military became concerned enough about accidental explosions to calculate the probabilities. Initial estimates put the chances of a hydrogen bomb accidentally exploding in the U.S. at one in 100,000 during any given year, and of an atomic bomb at one in 125. Subsequent calculations — done on different assumptions — weren’t so reassuring. They put the chance of an accidental domestic hydrogen bomb explosion at one in five over a decade, and the chance of an accidental atomic explosion over the same period at 100 per cent. The Pentagon saw this as an acceptable state of affairs and its sense was that it would be all right with the American people too, though the public’s view on the matter was never sought.
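Compounding shows how much the framing matters. The sketch below is only a back-of-the-envelope illustration, assuming a constant annual probability and independence between years, not a reconstruction of the military’s actual models; on those naive assumptions, a one-in-125 annual chance amounts to roughly an eight per cent chance of at least one accident over a decade, which is why the later, far bleaker decade figures must rest on quite different assumptions about how many weapons were at risk and how often they were handled.

```python
# Back-of-the-envelope illustration only: assumes a constant annual accident
# probability and independent years. It is NOT the Pentagon's actual model.

def prob_at_least_one(annual_p: float, years: int = 10) -> float:
    """Chance of at least one accident over `years` at a constant annual probability."""
    return 1 - (1 - annual_p) ** years

# The review's initial estimates: 1 in 100,000 per year (hydrogen bomb),
# 1 in 125 per year (atomic bomb).
for label, p in [("hydrogen bomb", 1 / 100_000), ("atomic bomb", 1 / 125)]:
    print(f"{label}: {prob_at_least_one(p):.1%} chance of at least one accident per decade")

# Prints roughly 0.0% and 7.7%: well short of the later 'one in five' and
# '100 per cent' decade figures, which therefore rested on different assumptions.
```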

* * *

New types of accident became possible when, from the late 1950s, intercontinental ballistic missiles tipped with nuclear warheads came into play. With the two superpowers both armed with ICBMs, submarine-launched missiles, land-based intermediate-range missiles and various other ‘battlefield’ nuclear devices, the timespan for responding to real or imagined attacks had shrunk radically. Missiles fired from the Soviet Union could reach their targets in the U.S. in thirty minutes; from Cuba, in three or four minutes; and for missiles launched from submarines off the American coast, there would be effectively no warning at all. The result was that nuclear weapons were put on a more sensitive hair-trigger than they had been in the bomber era. Schlosser’s chilling account of the 1962 Cuban Missile Crisis contrasts the highest historical state of American nuclear readiness, and the minuscule missile flight-time, with the means used by the Soviet ambassador in Washington to send urgent messages: a telegraph boy would come to the embassy on his bicycle and the ambassador ‘could only pray that he would take it to the Western Union office without delay and not stop to chat on the way with some girl.’ American strategic bombers had been flying for years on airborne alert, loaded with sealed-pit nuclear weapons, patrolling close to their Soviet targets. But with the emergence of the ICBM and the submarine-launched Polaris missile, there would be next to no time, once a missile attack was detected, to consider whether it was an all-out ‘first-strike’ attack, a stray ‘accidental’ launch, or indeed a technology-induced illusion caused by a false ‘early warning’ signal.

In October 1960, America’s ballistic missile early warning system at Thule, Greenland, detected, with an estimated 99.9 per cent certainty, a massive missile attack coming from Siberia. The Joint Chiefs of Staff were phoned and urgently asked for instructions on how to respond. At that point, with only minutes to go before the first U.S. cities were incinerated, the Canadian vice commander of the North American Air Defense Command thought to ask his officers where Khrushchev then was. When the answer came back that he was in New York, there was reason to pause — which was a good thing, because it soon emerged that the computer-detected all-out missile attack was in fact the moon rising over Norway.

On 26 September 1983, a 44-year-old Soviet colonel called Stanislav Yevgrafovich Petrov, on duty in an early warning installation near Moscow, unwittingly returned the favor. During a period of particularly tense superpower relations — on the first of the month, a civilian Korean 747 had strayed into Soviet airspace and been shot down, and American Pershing II missiles were scheduled soon to arrive in West Germany — a Soviet satellite picked up first one, then five, Minuteman missiles coming in from the U.S. Petrov had minutes to decide whether to inform his superiors that an attack was on its way, thus almost certainly triggering massive retaliation, or to notify them of a likely false alarm. Fortunately, Petrov decided this could not be a real attack, as indeed it wasn’t: the approaching U.S. ‘missiles’ turned out to be an odd configuration of cloud-reflected sunlight. For his services to humanity, Petrov received an official reprimand a few days later; he now lives in poverty on a pension of $200 a month.

The Petrov episode was possibly in Gorbachev’s mind when he first met Margaret Thatcher at Chequers on 16 December 1984. Thatcher’s papers minute what he told her: ‘Mr Gorbachev argued that if both sides continued to pile up weapons this could lead to accidents or unforeseen circumstances and with the present generation of weapons the time for decision-making could be counted in minutes. As he put it, in one of the more obscure Russian proverbs, “once a year even an unloaded gun can go off.”’ On 25 January 1995, an atmospheric research rocket launched by Norway — about which Russia had been given prior notification — triggered alarm bells in the Kremlin. Boris Yeltsin, Schlosser notes, ‘retrieved his launch codes, and prepared to retaliate,’ and it was some minutes before a false alarm was declared.

In the ICBM era, what counted as a nuclear accident could be a lot more serious than the fate of the Greggs’ chickens or the Palomares tomatoes, not least because American and Soviet policy came to include ‘launch on warning’ — retaliation as soon as an attack was detected but before the missiles had arrived — and a strategic plan according to which the response to any nuclear attack would be to let all one’s own missiles go. Since each side feared a first strike that would take out its capacity to mount an effective response, all ground-based missiles would have to be launched at once or their obliteration presumed. U.S. policy before 1960 was an incoherent mash-up of inter-service rivalries, with the result that the Army, the Air Force, and the Navy maintained proprietary targeting plans, each jealously guarding its own nuclear technologies and strategic prerogatives. As a result, a single Soviet target might be hit by three different types of missile as well as by H-bombs dropped from three B-52s. Some co-ordination was achieved in 1960 through the top-secret Single Integrated Operational Plan (SIOP), which prescribed an all-out strike with the entire American arsenal of 3,423 warheads against the Soviet Union, China, and allied socialist states. The implementation of SIOP was meant to be ‘automated’ — as soon as a Soviet attack was detected and confirmed, the whole plan was to be set in motion, after which ‘it could not be altered, slowed, or stopped.’ It was in this context that some types of ‘accident’ would effectively mean the end of civilization. The sheer overkill of SIOP made Eisenhower despair — though he was ultimately argued into approving it towards the end of his presidency. The all-or-nothing character of SIOP appalled members of the new Kennedy administration. Discussion of ‘limited’ nuclear war during that administration was understood by the American left as war-mongering, but it was motivated in part by revulsion at what was then the institutionalized national strategic policy.

Fears of a massive enemy first strike, decapitating the civilian chain of command, spurred the American military from early on to urge what was called ‘predelegation’: ceding to military commanders the civilian authority to use nuclear weapons when the president or his legitimate successors could not be contacted and were presumed dead. Eisenhower ultimately agreed to predelegation, despite his fears that it might allow some officer or group of officers to do ‘something foolish down the chain of command,’ and it was eventually extended to Air Force fighter pilots and NATO officers in Europe. Eisenhower had some basis for his worries. General Curtis LeMay, who led the Strategic Air Command (SAC), urged military ownership of America’s nuclear weapons in the early 1950s, and later recalled that he intended to take matters into his own hands in a nuclear emergency if he couldn’t rouse the president: ‘If I were on my own and half the country was destroyed and I could get no orders and so forth, I wasn’t going to sit there fat, dumb, and happy and do nothing.’ Predelegation, Schlosser writes, ‘made sense as a military tactic,’ but the decentralization of control also introduced further uncertainties concerning human judgment. There were now more people whose states of mind could figure in the making of nuclear ‘accidents.’

In 1958, anxious about SAC’s airborne alerts, the Soviets circulated a Pentagon report establishing that 67.3 per cent of U.S. Air Force personnel were ‘psychoneurotic’ and that alcoholism and drug use were rife. That document was a forgery, but the Rand Corporation was commissioned to study the risk of accidental nuclear detonation, and concluded there was a real danger that technical safeguards could be circumvented by ‘someone who knew the workings of the fusing and firing mechanisms,’ and that psychiatric screening of military personnel was perfunctory or non-existent. The Rand report estimated that ten or twenty Air Force personnel working with nuclear weapons, in Schlosser’s words, ‘could be expected to have a severe mental breakdown every year.’ By 1980, the Pentagon knew that drug use was pervasive in the military. At an air base in Florida, 35 soldiers were arrested for selling drugs, including LSD: they belonged to a unit which controlled anti-aircraft missiles and their nuclear warheads.

There are heroes in Schlosser’s story, including civilian designers who fought long and hard with the military to build more effective safeguards into weapon systems, ensuring that bombs and warheads were ‘one-point safe’: incapable of nuclear detonation if any of their conventional high explosives went off at a single point or fired asymmetrically. Designs were improved in several ways: preventing the arming of weapons by accidental electrical signals, such as those generated by a short-circuit or faulty wire; ensuring that the conventional explosives surrounding the nuclear pits could not go off as a result of fire or single-point or asymmetric impact; including a ‘weak link’ that was bound to fail under abnormal conditions; and installing ‘permissive action links’ to prevent weapons being armed unless activated by a specific externally supplied code or combination. The military tended vigorously to push back against a wide range of recommended safeguards, bridling at their expense, and, in some cases, fretting that they would sap the morale of personnel by displaying a lack of confidence in their reliability and discipline. By the late 1970s, a coded control switch was installed in all SAC ballistic missiles, but it locked the missile rather than the warhead. ‘As a final act of defiance’ towards engineers concerned about safety, Schlosser says, the Air Force showed its attitude towards such security measures by its management of the code: ‘The combination necessary to launch missiles was the same at every Minuteman site: 00000000.’

* * *

If Murphy’s Law is to be taken seriously and acted on in any practical way, it is necessary to make several modifications. The first is a specification of time-span: over what period of time can we expect the unexpectedly bad thing to happen and how should we deal with different time-spans? The second is some probabilistic qualification: over any period of time, the various things that could go wrong have different chances of actually happening. The third is the scope of the ‘anything’ that might go wrong, a practical sorting of things into those that can conceivably go wrong and those that one can’t, at the moment, imagine going wrong. And the last — pertinent to any practical, accident-avoiding activity — is some sense of the wrongness of any given thing that may go wrong: the tolerability of its consequences, the trade-off between the costs of an accident and the costs of avoiding the accident.

The lessons Schlosser means us to draw from Command and Control address some, but not all, of these qualifications. The first is that we’re not out of the woods just yet: old risks have been better dealt with over time, but new ones are emerging. Civilian engineers have indeed reduced the risk of nuclear accidents, but that risk was for a long time unconscionably high and the current conditions of risk are worth consideration. In The Limits of Safety: Organizations, Accidents and Nuclear Weapons (1993), Scott Sagan reckoned that the absence so far of a disastrous nuclear accident was down less to ‘good design’ than to ‘good fortune.’ By 1999, this was an opinion one could hear expressed even by SAC’s last commander, General George Lee Butler, who said that SIOP was, ‘with the possible exception of the Soviet nuclear war plan . . . the single most absurd and irresponsible document I had ever reviewed in my life . . . We escaped the Cold War without a nuclear holocaust by some combination of skill, luck and divine intervention, and I suspect the latter in greatest proportion.’

Criticising Schlosser as a hysteric misses the point. He has no problem acknowledging that if even one of the seventy thousand nuclear weapons constructed by the U.S. had been accidentally detonated or stolen, the command and control system would have proved 99.99857 per cent safe, and that is a very impressive degree of security. But the precautionary principle more familiar from environmentalist arguments relates the quality of damage inversely to the acceptability of risk, and nuclear weapons are, as Schlosser notes, ‘the most dangerous technology ever invented.’ This is so even if one puts aside worries about the detonation of weapons and considers (more than Schlosser actually does) the damage that could be brought about by dumping some stolen plutonium over London or Manhattan, or even by ‘dirty bombs,’ which disperse radioactive materials — including non-fissile isotopes widely used in industry and medicine — by combining them with conventional explosives. With these technologies, any degree of complacency is dangerous.
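That percentage is simple arithmetic: one failure among roughly seventy thousand weapons leaves a ‘safety rate’ of about 99.99857 per cent. The sketch below merely checks the review’s own figure; it is an illustration, not anything taken from the book.

```python
# Verify the '99.99857 per cent safe' figure quoted in the review: one hypothetical
# accidental detonation or theft among the roughly 70,000 weapons built since 1945.
weapons_built = 70_000
failures = 1
print(f"{1 - failures / weapons_built:.5%}")  # prints 99.99857%
```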

The improved security of the American nuclear stockpile was achieved largely through the prompting of people worried that it wasn’t nearly secure enough: by civilian weapons designers and by technical experts, politicians, and activists alarmed by nuclear accidents real and imagined, and by those suspicious that the Pentagon was lying about safety. Yet the military tendency has always been to control not just the weapons but information about their security, and the result of that institutionalized secrecy has been not more security but less. That’s why Schlosser thinks all the worry about accidents was justified in the past and why, even with greatly improved security, we should carry on worrying. The rational attitude sometimes looks like irrational anxiety.

Apart from a few asides about Soviet nuclear systems, Command and Control is about America, but it closes with gestures towards present-day nuclear proliferation and future risks. India and Pakistan are historical enemies and now considerable nuclear powers; missile flight times between them are four or five minutes; and they have already come close to nuclear exchanges, Schlosser says, half a dozen times since the early 1990s. Pakistan is ‘the only nuclear power whose weapons are entirely controlled by the military.’ Neither side’s command and control facilities are hardened against attack, so there is great pressure to launch first if an enemy strike is feared. Worries about an attack from India probably explain why Pakistan has dispersed its weapons to many different sites, including in the North-West Frontier Province, making it much easier for non-state terrorists to steal one.

Superpower command and control systems have got better; superpower stockpiles have been reduced, and there is a chance of further reductions. Former Cold Warriors including Henry Kissinger and George Shultz put their names to an op-ed piece in the Wall Street Journal imagining and urging ‘A World Free of Nuclear Weapons.’ Not so long ago, Barack Obama promised an effort to realize that goal, and the U.N. Security Council endorsed the idea. That’s the good news. The bad news is that while the risk of a world-ending nuclear exchange has probably diminished, the risk of some form of proliferation-related hostile nuclear explosion, accidental or intentional, has probably increased. There are still nuclear command and control lessons to be learned from the close calls of the Cold War.

— Steven Shapin teaches the history of science at Harvard. He is writing a short history of ideas about eating and drinking.

P.S.

The above article from the London Review of Books is reproduced here for educational and non-commercial use.