The Unstable Path to Plutonium’s Atomic Seduction of the Manhattan Project

By Fred Pearce

Leslie Groves (left), military head of the Manhattan Project, with J. Robert Oppenheimer (right), 1942. Photo credit: United States Department of Energy

Whether you’re taking Christopher Nolan’s three-hour epic biopic Oppenheimer straight or chasing it with Greta Gerwig’s fantasy-comedy romp Barbie, pregaming with the origin story of the Manhattan Project is in order. Environmental journalist Fred Pearce spells out the atomic stakes of the World War II global arena and what brought the (infamous?) theoretical physicist to the New Mexico desert in this excerpt from Fallout: Disasters, Lies, and the Legacy of the Nuclear Age.


Whatever its moral pitfalls, the production of the two atomic bombs dropped on Japan was a triumph of twentieth-century science. In the aftermath of Hiroshima and Nagasaki, the steam-powered industrial revolution suddenly seemed quaint. But the arrival of the new atomic age had been very sudden. It was the result of a tidal wave of new science about the structure of atoms, and how unstable these supposed building blocks of matter actually were.

This began with the discoveries early in the twentieth century that the apparently distinct atoms of each element—oxygen or uranium, copper or carbon—could take different forms, known as isotopes, and that these isotopes contained different numbers of neutrons, one of the building blocks of the atoms themselves. Most unsettling was the revelation that many isotopes were unstable. An isotope of one element might turn into an isotope of a different element, giving off radiation and energy as it did so.

What transformed this from fascinating science to a discovery that could change warfare was the insight that some atoms could be split to order, by bombarding them with neutrons released by another radioactive element. The atom was first artificially split in the lab by New Zealand physicist Ernest Rutherford in 1917, though true fission, the splitting of heavy nuclei that releases further neutrons, was only demonstrated in 1938. As early as 1933, the Hungarian Leo Szilard had suggested that it would be possible to set off an explosive chain reaction in which every split atom released neutrons that went on to smash into, and split, yet more atoms. At every step in this nuclear chain, huge amounts of energy would be produced.

Szilard said the best chain reaction would come from splitting uranium, a metal with a large nucleus that would produce the most neutrons at each step. The uranium would need to be packed very tightly, so that a large proportion of the neutrons released by fission hit other uranium atoms. But if a “critical mass” of uranium could be brought together inside a bomb, it would explode with a force as great as setting off thousands of tons of TNT.
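The arithmetic behind that claim is simple doubling. As a purely illustrative back-of-envelope sketch (the energy figures below are standard textbook values, not numbers from this excerpt), a chain in which each fission frees two atom-splitting neutrons reaches the energy of thousands of tons of TNT within about eighty generations:

```python
# Illustrative only: exponential growth of a fission chain reaction,
# assuming each split atom triggers exactly two further fissions.

ENERGY_PER_FISSION_J = 3.2e-11   # roughly 200 MeV released per uranium-235 fission
TNT_J_PER_TON = 4.184e9          # energy content of one ton of TNT

fissions = 1                     # one atom starts the chain
total_energy = 0.0
for generation in range(81):     # ~80 doublings take well under a microsecond
    total_energy += fissions * ENERGY_PER_FISSION_J
    fissions *= 2                # each fission sets off two more

print(f"tons of TNT equivalent: {total_energy / TNT_J_PER_TON:,.0f}")
```

Run as written, the sketch lands in the tens of kilotons, the same order of magnitude as the bombs the Manhattan Project eventually built.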

By the outbreak of the Second World War, most of the world’s small band of nuclear scientists had fled continental Europe and were working on these ideas in Britain and the US. In the summer of 1939, Szilard, who was by now in America, met up with Albert Einstein, then the world’s most famous physicist. They wrote a letter to President Franklin Roosevelt suggesting that even though America was then neutral in the war, it should develop such a bomb—not least in case the Germans were doing the same.

Roosevelt initially didn’t seem too interested. But in Britain, which faced the prospect of invasion by Germany, two other émigré physicists got a different response. A few months after Szilard’s rebuff by Roosevelt, in March 1940, the Austrian physicist Otto Frisch and his German collaborator, Rudolf Peierls, sent the British government a memorandum making the same case. They added an important new detail. They had calculated that the critical mass of uranium needed to make a fission bomb was only twenty-two pounds, much less than most physicists had expected. There was a proviso: the bomb had to be made of one particular isotope of uranium, known as uranium-235, which made up only a small proportion of natural uranium. But if it was, they promised, it would create an explosion that “would destroy life in a wide area . . . probably the centre of a big city.”

This was the spring of 1940. The Battle of Britain would soon be in full swing, with the Germans bombing London daily and an invasion seemingly possible at any moment. Within weeks, the British government set up a secret committee, known as the MAUD committee (a name taken, it later emerged, from a cryptic phrase in a telegram sent by Niels Bohr), to see how practical this proposal was, and how soon a bomb could be delivered. Thus began the political process that ultimately delivered the Manhattan Project on the other side of the Atlantic, and the fateful dropping of two atomic weapons on an already enfeebled Japan just five years later.

The MAUD committee quickly heard from another pair of émigré scientists, the Austrian Hans von Halban and the Russian Lew Kowarski. At their lab in Cambridge, they had been investigating how to generate useable electricity from the energy released by chain reactions. They figured that, rather than setting off a runaway fission explosion, it should be possible to control the chain reactions inside what they called a nuclear reactor. This could generate energy useable for something other than destruction. But while investigating this, they realized that one of the by-products of bombarding uranium with neutrons would be a new element, element 94, soon named plutonium by Glenn Seaborg’s team at Berkeley, which first produced it. Plutonium did not exist in nature, but calculations suggested that one of its isotopes, plutonium-239, might be even more fissile than uranium-235. So even smaller amounts might make a bomb. In wartime, of course, nobody was interested in splitting atoms to generate electricity, but the idea of a plutonium bomb did grab the MAUD committee’s attention.

Making the ingredients for an atomic bomb would require finding supplies of uranium ore and separating out uranium-235, or constructing reactors to make plutonium. Both were huge industrial projects and Britain didn’t have the money. Only the Americans had the capacity to do the job.

Things briefly stalled until the US joined the war, following the attack on Pearl Harbor at the end of 1941. Then, Churchill ordered his scientists to share the conclusions of the MAUD committee with the American atomic elite. Within weeks President Roosevelt gave the green light to what became known as the Manhattan Project. Soon, America was throwing hundreds of millions of dollars into turning the work of European university labs into the bombs to win the war.

The US government decided that in case one design didn’t work out they would go full tilt to produce both a uranium and a plutonium bomb. By the end of 1942, a secret project was buying uranium from what was then virtually the world’s only source, the Shinkolobwe mine in the remote far south of the Belgian Congo, and work was under way to extract uranium-235 from that ore. Meanwhile, a nuclear chain reaction had been produced in a reactor in Chicago, and plutonium-239 had been extracted from it.

The Manhattan Project scientists developed a strange love-hate relationship with plutonium. Yes, it could destroy worlds, but it was also rather seductive. Thanks to its emission of radiation, it “feels warm, like a live rabbit,” said Leona Marshall Libby, one of the few women scientists involved in the project. Others reported that it had a metallic taste.

By mid-1943, a large expanse of remote sagebrush desert beside the Columbia River in Washington State had been commandeered for manufacturing plutonium-239, an isotope so fissile that a couple of pounds was thought capable of producing an explosion equivalent to twenty thousand tons of TNT. A huge construction enterprise employing thousands of workers at the Hanford reservation erected giant atomic reactors, eventually nine in all, that bombarded uranium fuel with neutrons to create small amounts of plutonium. The irradiated “spent fuel” was then removed and dissolved in nitric acid to extract the plutonium for turning into bombs, a chemical process called reprocessing.

The intellectual center of the Manhattan Project was far to the south at Los Alamos, a former boarding school in the New Mexico desert. Here hundreds of scientists, whose average age was twenty-five, spent their days drawing up blueprints for the bombs and working out how to maximize their impact. Almost all the British scientists who had contributed to the MAUD committee’s report joined Robert Oppenheimer and other young US luminaries there. They included Peierls and Frisch, as well as their close colleague, the German-born theoretical physicist Klaus Fuchs. Besides his day-to-day work, Fuchs kept up with everything. He had a photographic memory and, it later transpired, was sending all the secrets he learned to Joseph Stalin’s chief nuclear scientist, Igor Kurchatov. During his decade-long journey through the British and American atomic science establishments, this diffident but sociable and amenable émigré gathered up a massive amount of information.

Kurchatov soon knew that the Los Alamos scientists were designing both a uranium bomb and a plutonium bomb. In both bombs, the neutrons to start the reactions came from an “initiator” inside the bomb made of isotopes of polonium and beryllium. But otherwise the designs were very different. The uranium bomb used conventional explosives to slam together two small packs of uranium-235, creating the critical mass for a chain reaction. For the plutonium bomb, Oppenheimer and his colleagues decided they needed a more complex “implosion” bomb. There would be a single ball of plutonium about the size of a tennis ball. The critical mass would be created by detonating a shell of explosives around the ball to squeeze it. Calculating the physics of the implosion, and deciding exactly how to configure the explosives in the shell, were Fuchs’s specialties.

The uranium bomb was never tested before being dropped on Hiroshima. But with more to go wrong, there was a test firing of a plutonium bomb in the New Mexico desert near Los Alamos in July 1945. It proved a dramatic success, with four times the anticipated explosive power. Just three weeks later, a duplicate bomb was dropped on Nagasaki. Days later, the Japanese emperor, Hirohito, surrendered. The job was done. Spookily, the workforce involved in the Manhattan Project was officially put at 175,000 people, almost exactly matching the death toll from the two bombs.

Many Manhattan Project scientists were fearful of what they had created. Robert Oppenheimer, their chief, invoked the words of the Hindu deity Krishna: “I am become death, the destroyer of worlds.” There was anger, too, especially about the decision of the military to drop two bombs on Japanese cities. Szilard, the man who had originally conceived of the bomb, had argued for a demonstration of the new bomb’s power in some remote location. But he was overruled by politicians and generals who wanted to see what would happen in a real city.

With their deed accomplished, the scientists knew well that others could repeat it. To forestall a nuclear arms race, some called publicly for nuclear weapons to be put under international control. The generals didn’t think much of that idea either. They rather liked the idea of being able to “destroy worlds.” For a while after World War II, America hoped to keep the technology to itself. To that end, even their British scientific wartime collaborators were sent home—and rather ridiculously instructed not to use what they had learned should Britain decide to develop its own bomb.


About the Author 

Fred Pearce has reported on environmental, science, and development issues from eighty-five countries over the past twenty years. Environment consultant at New Scientist from 1992 to 2018, he also writes regularly for the Guardian newspaper and Yale University’s prestigious e360 website. His many books include The New Wild, When the Rivers Run Dry, With Speed and Violence, Confessions of an Eco-Sinner, The Coming Population Crash, and The Land Grabbers.