On August 6, 1945, the United States detonated the first nuclear weapon ever used against another country, dropping a fission device on Hiroshima; just three days later it dropped a second bomb on Nagasaki. The first destroyed roughly 90% of the city and caused approximately 80,000 immediate deaths; the second, approximately 40,000. (Tens of thousands of others would succumb in subsequent decades to the effects of radiation received from the bombs’ detonation.) After such indescribable devastation in Hiroshima and Nagasaki, Japan’s leaders, with no idea how many more of these doomsday weapons the Americans held, or how many more cities might be targeted… Tokyo, Osaka, Kyoto… were left shocked and horrified. On August 15, nine days after the nightmare had begun, Emperor Hirohito, whose voice the Japanese people had never before heard, announced over the radio that Japan had to accept the unacceptable and surrender unconditionally, ending the Second World War.
In the decades since, many Americans have justified the dropping of the two bombs. “After all, they had attacked us, and it was necessary to bring an end to the war. The dead were collateral damage,” runs the conventional argument. But a handful of scholars have claimed that the end of the war was imminent and that the Japanese were on the verge of surrender anyway. My article will probably not change many minds, but I will write about the science and technology that created the bomb, and about the human side of the decision to drop it on an enemy.
In my career in physics I have known a number of the central players in the development of the bomb. I listened to their first-hand accounts of that history and of the anguish they felt personally, although their pain could never compare with that of the victims. Some of the following material is familiar to historians; other parts have never been published. It differs in some instances from the published accounts, including Richard Rhodes’s excellent books, The Making of the Atomic Bomb (1986) and Dark Sun (1995).
Physics, the most fundamental of all natural sciences (it underlies chemistry, which underlies the life sciences, which in turn underlie the social sciences), studies the nature of matter. Providing the underpinnings, the logic and language, for this stratified structure of the sciences is mathematics, which is not itself a natural science. Isaac Newton demonstrated incontrovertibly in his monumental book Principia (1687) that the laws of nature are mathematical, that we live in a clockwork universe. After Newton’s time, mathematics and physics developed hand in hand, mathematics becoming irreversibly entrenched as the most effective weapon in the physicist’s arsenal for explaining phenomena already observed and phenomena yet expected. Whereas in earlier times science and technology could progress separately, in the 20th century it became virtually impossible to probe deeper into the nature of matter without elegant and increasingly expensive equipment, and impossible to produce cutting-edge technology without applying basic science. The discovery a few weeks ago of the Higgs boson is one of those long-anticipated pieces of the puzzle: the existence of the particle had been predicted in the early 1960s, but its detection awaited the construction of the Large Hadron Collider, a multi-billion-dollar particle accelerator at CERN, near Geneva, Switzerland. The landing of the car-size rover “Curiosity” on the Martian surface on August 5 is an exquisite demonstration of Newtonian physics working in concert with cutting-edge NASA engineering, produced in close association with JPL and Caltech. This program, it is hoped, will help determine whether life ever existed on the Red Planet.
A MICROSCOPIC COURSE IN AN ULTRAMICROSCOPIC SUBJECT
In 1896, Henri Becquerel discovered radioactivity: certain ores, among them pitchblende, gave off radiation without having been exposed to sunlight (as phosphorescent materials must be). Within two years Pierre and Marie Curie had isolated traces of two new radioactive elements, polonium and radium. Meanwhile, in 1897, J. J. Thomson at Cambridge University discovered the negatively charged electron, the first of the fundamental particles physicists would identify. He went on to hypothesize a picture of the atom with electrons interspersed and embedded in a positive wad, creating the neutral “atoms” we experience in bulk in everyday life. This was a naive model, a “terrible model,” as the late Nobel Laureate Norman Ramsey characterized it (see the National Geographic Newswatch blog of Nov. 8, 2011).
In 1905 Einstein published his Special Theory of Relativity. Among its mind-boggling “relativistic effects,” such as time dilation and length contraction, was the equivalence of energy and mass (E = mc²), with the “c” in the equation denoting the speed of light (a constant with the unimaginably large value of 186,000 mi/sec, or 300,000 km/sec). Then, just a few years later, J. J. Thomson’s “plum pudding” model of the atom was overthrown when Ernest Rutherford performed his celebrated experiment, bombarding wafer-thin gold foil with massive alpha particles (the nuclei of helium atoms). Rutherford explained the scattering of the alpha particles at drastic angles by theorizing that matter was composed of “nuclear atoms”: massive, positively charged nuclei orbited by swarms of light electrons. It also became likely that the radiation observed by Becquerel, the Curies, and others was being emitted by the atomic nucleus. A typical atomic nucleus is thousands of times “heavier” (more massive) than the electrons orbiting it. In view of Einstein’s expression for the equivalence of energy and mass, far more energy resided in the atomic nucleus than in the electrons; and that nuclear energy, if it could be harnessed, would yield millions of times more energy than the chemical reactions that involve the dynamics of electrons. (The energy released in burning one ton of wood or gasoline is of the order of millions of BTUs; for one ton of nuclear material it is hundreds of trillions of BTUs.) At a conference on energy held in Cambridge, England, in 1921, the New Zealand-born Rutherford foresaw the depletion of fossil fuels within a century or two. Mankind’s salvation, he explained, might depend on harnessing nuclear energy, but he then expressed his doubt that nuclear energy could be tapped for many centuries, perhaps many millennia. He could not have been more wrong!
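The energy comparison above can be checked with a back-of-the-envelope calculation from E = mc². The figures used here, roughly 20 megajoules per kilogram for chemical combustion and a mass-to-energy conversion fraction of about 0.1% for fission, are illustrative order-of-magnitude assumptions, not measured values for any particular fuel or weapon:

```python
# Rough energy-scale comparison: chemical burning vs. nuclear fission,
# using Einstein's E = m * c**2. All inputs are order-of-magnitude
# assumptions for illustration.

C = 3.0e8            # speed of light, meters per second
J_PER_BTU = 1055.0   # joules in one BTU

ton_kg = 1000.0      # one metric ton of fuel, in kilograms

# Chemical: burning wood or gasoline releases very roughly 2e7 J per kg.
chemical_J = ton_kg * 2.0e7

# Nuclear: fission converts roughly 0.1% of the fuel's mass into energy.
nuclear_J = ton_kg * 0.001 * C**2

print(f"chemical: {chemical_J / J_PER_BTU:.1e} BTU")
print(f"nuclear:  {nuclear_J / J_PER_BTU:.1e} BTU")
print(f"ratio:    {nuclear_J / chemical_J:.0e}")
```

Under these assumptions the chemical ton yields on the order of tens of millions of BTUs and the nuclear ton on the order of a hundred trillion BTUs, a ratio of several million, consistent with the “millions of times more energy” stated above.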