In the previous post, I tried to make sense of what Compton (in a 1959 interview with Pearl Buck) and Bethe (in 2000, reported here) said about the analyses of the possibility that the first nuclear tests might ignite the atmosphere. Independently, on the same day, John Horgan wrote an excellent blog post on the same topic with many more interesting details. I’ll quote from it below, but strongly recommend reading it in full.
Horgan mentions Compton’s 1959 interview with Pearl Buck, alongside an embarrassingly weird 1975 article in the Bulletin of the Atomic Scientists by J.C. Dudley, who speculates about a purported ether-like “neutrino sea” altering fusion rates, suggests that a supercritical nuclear reactor might melt down to the centre of the Earth and out the other side, and just about stops short of hypothesising mutant giant goldfish taking over the planet and eating everyone. He also puts on stilts Compton’s strange claim that calculations showed the risk of igniting the atmosphere from a nuclear bomb to be slightly less than three in a million, nonsensically taking this fictional risk figure to be an independent probability for each bomb detonated, so that catastrophe becomes inevitable given enough nuclear tests.
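The arithmetic behind Dudley’s compounding is at least easy to reproduce. Here is a minimal sketch (illustrative only: it takes Compton’s made-up figure at face value and assumes, as Dudley wrongly did, that it applies independently to every detonation):

    # Dudley's fallacious compounding: treat Compton's "slightly less than
    # three in a million" as a real, independent per-test ignition
    # probability (it was neither).
    p = 3e-6
    for n in (1, 1_000, 1_000_000, 10_000_000):
        risk = 1 - (1 - p) ** n  # probability of at least one catastrophe in n tests
        print(f"{n:>10} tests -> cumulative risk {risk:.6f}")

By a million tests the cumulative figure is about 0.95; by ten million it is effectively 1. The arithmetic is valid; the premise (a real, fixed, independent per-test probability) is what makes the conclusion nonsense.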
Bethe in 1975
The main point of Horgan’s post is to give what he says is a lightly edited transcript of a 1991 interview with Bethe, in which Bethe repeats his 1975 rebuttal of Dudley, where he reviews the arguments of Konopinski et al. and suggests that Buck must have completely misunderstood Compton. Bethe, in 1975, says “There was never any possibility of causing a thermodynamic chain reaction in the atmosphere […]; it is simply impossible”.
Bethe in 1991
According to Horgan’s transcript, in 1991 Bethe described the suggestion as “such absolute nonsense”. But, interestingly, he then took a very different line on Compton’s interview:
“Just to relieve the tension [at the Trinity test on July 16, 1945, Enrico] Fermi said, ‘Now, let’s make a bet whether the atmosphere will be set on fire by this test.’ [laughter] And I think maybe a few people took that bet. For instance, in Compton’s mind [the doomsday question] was not set to rest. He didn’t see my calculations [or] Konopinski’s much better calculations. So it was still spooking [Compton] when he gave [the interview to Pearl Buck in 1959].”
So, no misunderstanding after all: Pearl Buck had accurately reported Compton, who was spooked in 1945, and still spooked even in 1959.
Bethe in 2000
By 2000, as I mentioned earlier, Bethe had a subtly different take again: “[T]he 1 in 300000 figure was made up by Compton ‘off the top of his head’, and is ‘far, far too large’.” So, now not “simply impossible”, merely “much, much more improbable than 1 in 300000” (whatever that was intended to mean).
Horgan’s blog has a very interesting commentary, which deserves quoting in full:
“Clarification from Alex Wellerstein, an Actual Expert: My friend and Stevens colleague Alex, an historian specializing in nuclear weapons, posted the following info-packed comment. Alex alludes to the possibility, also raised by Teller, that a nuclear explosion could ignite deuterium in the ocean. The term “Super” refers to a hydrogen fusion bomb.—John Horgan
John, I suspect that by the time you talked to him, especially after the (very silly) Dudley exchange, Bethe was pretty sick of this issue, and probably was downplaying the amount of effort and lingering concerns that existed in 1945. Dan Ellsberg, in The Doomsday Machine, makes an argument that they were more concerned than they let on later, and while I don’t totally think he is correct in his whole argument, I think he does a good job of showing that it wasn’t quite as dismissible as nonsense at the time, whatever Bethe thought of it.”
This sounds very plausible, with the downplay getting ever softer over the years. Bethe shifts from categorical denial in 1975 that anyone was worried, to accepting in 1991 that Compton was worried, not only at the time but even many years later, to a hint of a suggestion in 2000 that even he (Bethe) might have accepted that, even after the Konopinski-Teller calculations, the worries were not absolutely eliminated. What was Bethe’s true position? We may never know. If we can’t trust his 1975 account, and can’t quite trust his 1991 account either, then there’s certainly no reason to take his 2000 statement (by far the least considered of the three, a response to an email enquiry) as definitive. Perhaps – who knows? – the Bethe of 1945 wasn’t actually so far (in his subjective risk estimate) from the Compton of 1959?
But wait, back up! There’s something else extraordinary in Horgan’s interview. According to Bethe, Compton decided whether or not the 1945 Trinity test should go ahead without seeing any of the relevant calculations (Bethe’s or Konopinski’s). Of all the extraordinary claims around this issue, this is by far the strangest. The Konopinski-Marvin-Teller paper was written in 1946, but the substance of their arguments must have circulated before Trinity: that’s why everyone involved was aware both of the concern and that there were pretty strong reasons to think it wasn’t realistic. It’s not that hard to get at least the main ideas of the paper: Compton was unlikely to be deterred by the discussion of Compton scattering, for example. Compton was “spooked”, according to Bethe in that very interview. So what on Earth would have stopped him from looking at the calculations, or getting Bethe or Konopinski to explain them to him? And whose calculations, if not these, was Compton referring to when he said calculations showed the risk of igniting the atmosphere from a nuclear bomb to be slightly less than three in a million?
Bethe’s 1991 account makes absolutely no sense. Compton’s 1959 statement also makes absolutely no sense. Bethe pressed heavily and then more lightly on the downplay button over the years. Compton, so far as I’m aware, never elaborated further. Getting the history of catastrophic risk analyses straight seems peculiarly difficult.
Christopher Nolan’s film, Oppenheimer, gives another version of its protagonist’s life, previously treated by a 1980 BBC series, by Bird and Sherwin’s 2005 biography, American Prometheus (credited in the film), and more recently by Tom Morton-Smith’s 2015 play. There’s no doubt great interest in comparing these, but I feel under-qualified for several reasons, not the least of which is not yet having seen the film.
I feel a little more confident, though, in commenting on the discussion during the Manhattan Project of whether a nuclear bomb would ignite the atmosphere, portrayed in the film (with, I understand, the strange fictional addition of Oppenheimer visiting Einstein to grapple with the dilemma).
This was the first serious technical discussion of a hypothetical human-created existential risk. It’s interesting to try to reconstruct the arguments, as far as we now can, because they highlight problems in existential risk analysis and policy that keep recurring — most obviously, of course, the risks of anthropogenic climate change and the now widespread concerns about how AI will affect and may threaten the future of humanity.
Digression on collider risk
I tried to dig into this in a 2000 paper, motivated by concerns about another hypothetical risk: that collider experiments then proposed (now carried out) could destroy the Earth, with, as Busza et al. put it, considerable implications for health and safety. Those concerns were based on hypotheses about unknown nuclear physics that experts agreed were unlikely. But, as Glashow and Wilson commented, “The word ‘unlikely’, however many times it is repeated, just isn’t enough to assuage our fears of this total disaster.” So people tried to give quantitative risk bounds, based on the fact that collisions analogous to those in the experiments occur in Nature and haven’t led to catastrophe. For example, heavy ion cosmic rays have been hitting heavy nuclei in the Moon for billions of years, and the Moon survives.
The problem with those analyses is that, while obviously written with the intention to reassure, they were neither carefully presented nor thought through. They ended up effectively arguing that risk bounds of 1 in 100000 or 1 in 500000 meant the risk was negligible. That’s plausible if we’re talking about the risk of death for a medical operation on an individual, but pretty ridiculous for the risk of ending life on Earth. The scale of the disaster means that a genuinely negligible risk should be much, much smaller. Also, the bounds themselves are based on assumptions, and any reasonable estimate of the chance of those assumptions being incorrect is very likely higher than the bounds. To be clear, I don’t think the experiments should have gone ahead on the basis of the arguments made at the time. They did, and the Earth survived, as always seemed overwhelmingly likely. But, as poker players say, it’s a mistake to be results-oriented.
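To spell out that last point: suppose an empirical bound b holds only when some set of theoretical assumptions A holds. Then, by the law of total probability,

    \[
    P(\text{catastrophe}) \;\le\; b \,+\, P(\neg A)\, P(\text{catastrophe} \mid \neg A).
    \]

With purely illustrative numbers: a bound b of 2 in a million is swamped if one’s honest credence that the assumptions fail is, say, 1 in 1000 and the conditional risk given failure is anything non-negligible. The overall risk estimate cannot meaningfully be smaller than that second term, however impressive the bound itself looks.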
The worry about igniting the atmosphere
Anyway, back to the Manhattan project. The concern about the first atomic bomb tests was that they might initiate an unstoppable fusion chain reaction. The principal worry was that atmospheric nitrogen nuclei would become hot enough to overcome the electrostatic repulsion barrier and fuse together, liberating more energy, heating more nuclei, and so on. Another possibility was that nitrogen nuclei might fuse with hydrogen nuclei in the steam created if a bomb exploded over an ocean. An analysis by Konopinski et al., currently available here, tried to rule out these and more exotic possibilities, essentially by arguing that even if some such fusion reaction started, it would quickly reach a point where it radiated away too much energy to self-sustain.
Konopinski et al. wrote quite cautiously and self-critically. Even their generally bullish abstract mentions a “disquieting feature”, namely that, on their calculations, their so-called “safety factor” can reach as low as 1.6 (where any value below 1 allows catastrophe), albeit for much larger bombs than those being developed in Los Alamos. They add (p. 12) that for this reason “it seems desirable to obtain a better experimental knowledge of the reaction cross section”, even though it seems “hardly possible” (they write “hardly impossible”, but it’s clear from context this is a typo) that sufficiently high temperatures can be reached.
They go on (p. 16) to say that, although the safety factors in various scenarios seem adequate, “it is not inconceivable that our estimates are greatly in error and thermonuclear reaction may actually start to propagate”. Although they give reasons why any such reaction should eventually be quenched, and suggest this should contain the reaction to within ~100m of the ignition point, they add (p. 18) that their “numbers may be somewhat in error” and also that “[t]here remains the distant possibility that some other less simple mode of burning [than the one they analyse] may maintain itself in the atmosphere”. Moreover, “[e]ven if the reaction is stopped within a sphere of a few hundred meters radius, the resultant earth-shock and the radioactive contamination of the atmosphere might become catastrophic on a world-wide scale.” They conclude that “the complexity of the argument and the absence of satisfactory experimental foundations makes further work on the subject highly desirable”.
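For readers who want the shape of the argument: as I read the report, the “safety factor” is essentially an energy-balance ratio, schematically

    \[
    S \;\approx\; \frac{\text{rate at which a heated region loses energy (radiation, scattering)}}{\text{rate at which fusion deposits energy in it}},
    \]

so that S above 1 means any incipient thermonuclear reaction cools faster than it heats and must die out, while S below 1 would allow it to propagate. (This is my schematic paraphrase, not the report’s precise definition; the point is that their most pessimistic value, 1.6, sits uncomfortably close to the boundary.)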
To precis, they give pretty strong reasons why a catastrophe seemed unlikely, but also some reasons for lingering concern. They gave no risk bound and didn’t suggest any way to derive one: they didn’t and couldn’t quantify “unlikely”. Their paper seems written primarily for physicists, and leaves it to them to decide if it gives good enough arguments to ignore any hypothetical catastrophe risk and go ahead with an atomic bomb test. Evidently, for the Manhattan team, it did.
Compton’s strange account
The public version of this became confused. Compton was later reported, in a published interview with Pearl Buck, as saying that he had decided not to proceed with the bomb tests if it were proved that the chances of global catastrophe were greater than three in a million, but that in the event calculation showed the figure to be slightly less. As we’ve seen, this estimate certainly doesn’t come from Konopinski et al., and it’s hard to see how any meaningful calculation could have produced it.
I don’t think I can improve on my earlier discussion of Compton’s comment:
“Yet, so far as I know, Compton never made an attempt to correct Buck’s account. Had she simply misunderstood, it would have been easy for Compton to disclaim the statement. And, had it not reflected his views, he would surely have wanted both to set the historical record straight and to defend his reputation against the charge of unwisely gambling with the future of humanity. The natural inference seems to be that Compton did indeed make the statement reported. If so, although the risk figure itself appears unjustifiable, Compton presumably genuinely believed that an actual risk (not just a risk bound) of 1 in 300000 of global catastrophe was roughly at the borderline of acceptability, in the cause of the American and allied effort to develop atomic weapons during World War Two. Apparently the figure did not worry 1959 American Weekly readers greatly, since no controversy ensued. It would be interesting to compare current opinion on the acceptability of a risk of global catastrophe, in the circumstances of the Los Alamos project or otherwise.”
“In April 2000, in an attempt to understand this puzzling statement of Compton’s, I contacted Hans Bethe, a key figure in both the Los Alamos project and the theoretical work which led to the conclusion that the possibility of an atomic bomb explosion leading to global catastrophe was negligible. His view, relayed by an intermediary (Kurt Gottfried), was that the analysis of Konopinski et al. was definitive and does not allow one to make any meaningful statement about probabilities since the conditions that must be met cannot be reached in any plausible manner. Bethe suggested that the 1 in 300000 figure was made up by Compton ‘off the top of his head’, and is ‘far, far too large’.”
At the time, I was more focussed on the flawed collider risk analyses and their policy implications. But I think there is more to be said about the Los Alamos risk analyses. Konopinski et al. wrote as scientists, for scientists, listing caveats and niggling concerns. They didn’t ask crucial questions: how sure do we need to be, given the catastrophic consequences of being wrong, and are we sure enough? But they also didn’t offer any false reassurances.
Compton, in 1959, was speaking as a scientist to the public. He seemed to want to emphasize the awareness of catastrophic risk, the enormity of the decision, the care with which it was made, and, implicitly, the confidence that everyone should have that the decision-makers were wise and the right choices were made. In those deferential and politically fraught times, this seems to have worked. The Manhattan project was by then widely seen as a national (and, to a much lesser extent, Allied) triumph. Its scientists were widely seen as heroes – unless perhaps they gave some hint of independent thought about the wisdom of aspects of US military and foreign policy in the atomic era. (Oppenheimer’s security clearance was, famously, revoked in 1954.) Questioning whether the Los Alamos tests might have been a bit reckless with the future of humanity, or even whether Compton’s justification made any sense, might have been, shall we say, career-limiting. Wise and distinguished people had thought everything through carefully. Questioning them would likely be portrayed as at best perverse, at worst intentionally subversive.
It was also a more risk-tolerant and less risk-reflective era. One in 300000 sounds pretty small, a lot was at stake in WWII, and there were no existential risk research centres then.
It didn’t occur to me then to ask: and what about Bethe in 2000? Partly that was because the collider risk was my focus. But also, I felt awe and admiration for this extraordinary scientist and human being, who had testified for Oppenheimer in 1954, campaigned for the partial test ban treaty and against Reagan’s Strategic Defense Initiative, and whom I’d seen at Princeton – already in his 90s – giving a lucid and intriguing seminar on supernovae. I was grateful he took the time to respond, and his answer fitted well into the discussion: I wasn’t inclined to query it.
What did Bethe mean?
But I now think we should ask: do even Bethe’s comments on the hypothetical catastrophic risk quite make sense? That Compton made up the 1 in 300000 figure “off the top of his head” is, indeed, easy to believe. I don’t know how it could possibly be justified objectively by calculation. As far as I’m aware, no one has ever suggested a better explanation than Bethe’s. But that the figure is “far, far too large”? What does that mean? Surely not that there is an objectively correct, much, much smaller figure. Then we would need an explanation of how that figure could be justified, and there isn’t one.
How about this: that Bethe’s understanding of the science and the arguments (including those of Konopinski et al.) led him to a subjective probability estimate that is much smaller? On a Bayesian view of probability, that at least makes sense as a statement, though it would have been better stated subjectively – “I thought the risk was much, much smaller”, or something like that.
Is Bethe’s take plausible?
But would it be a reasonably justifiable statement? Take another read, if you will, of Konopinski et al., including all their caveats. They give strong reasons not to be concerned, but they don’t completely eliminate all concern – and 1 in 300000 really is a very small probability.
If you need further reasons to query Bethe’s statement, and want to entertain further unlikely speculations, consider the (at that point not falsified) possibility that something relevant could have been wrong in the laws of physics they took for granted. For example, the quantum tunnelling rates through the Coulomb barrier in N-N fusion could have been higher than expected, for some then unforeseeable reason. It would have seemed very unlikely (and of course we now know it’s not true). Quantum theory might well, like every theory before it, be empirically refuted sometime, in some regime, but there was absolutely no theoretical or empirical reason to expect any breakdown to materially affect relevant fusion reactions. It would have been incredibly unlucky. But it’s only one example and, again, 1 in 300000 is a very small probability.
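To make the tunnelling point slightly more concrete: fusion rates depend on the Coulomb-barrier penetration probability, which for nuclei of charges Z1 and Z2 approaching with relative velocity v is governed by the Gamow factor

    \[
    P_{\text{tunnel}} \;\propto\; \exp\!\left(-\frac{2\pi Z_1 Z_2 e^2}{\hbar v}\right).
    \]

For nitrogen on nitrogen (Z1 = Z2 = 7) the exponent is huge at any remotely attainable temperature, which is part of why the reaction was judged safely out of reach. But the same exponential sensitivity cuts the other way: any unforeseen correction to the barrier physics would have shifted the rate by orders of magnitude, not by a few per cent.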
It’s all arguable. We’ll never know for sure how Compton or Bethe would have justified themselves. But although I’m pretty sure Compton’s statement that 1 in 300000 was a calculated probability was absurd, I’m not so sure that it’s a ridiculous subjective estimate of the probability of catastrophe given what was known. You could certainly argue for higher or lower, on the basis of Konopinski et al. Perhaps, given all expert knowledge at the time, you could argue for much, much lower, as Bethe (on this last reading) suggested – but I wonder if those arguments would really hold up.
What do we think about the policy dilemma?
Which leaves the question: if 1 in 300000 was a vaguely reasonable-ish guess, was it right to go ahead with the first atomic bomb test? That’s a topic for another post, but any thoughts are welcome.