Quanta and Qualia: A blog about things and perceptions

Category: Risks and Futures

More on Oppenheimer and the risk of global catastrophe: Bethe’s shifting stories.

In the previous post, I tried to make sense of what Compton (in a 1959 interview with Pearl Buck) and Bethe (in 2000, reported here) said about the analyses of the possibility that the first nuclear tests might ignite the atmosphere. Independently, on the same day, John Horgan wrote an excellent blog post on the same topic with many more interesting details. I’ll quote from it below, but strongly recommend reading it in full.

Horgan mentions Compton’s 1959 interview with Pearl Buck, alongside an embarrassingly weird 1975 article in the Bulletin of the Atomic Scientists by J.C. Dudley, who speculates about a purported ether-like “neutrino sea” altering fusion rates, suggests a supercritical nuclear reactor might melt down to the centre of the Earth and out the other side, and just about stops short of hypothesising mutant giant goldfish taking over the planet and eating everyone. He also puts on stilts Compton’s strange claim that calculations showed the risk of igniting the atmosphere from a nuclear bomb to be slightly less than three in a million, nonsensically taking this fictional risk figure to be independent for each bomb detonated, making the catastrophe inevitable given enough nuclear tests.
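
For what it’s worth, the arithmetic behind Dudley’s “inevitability” move is just the compounding of independent trials. A minimal sketch, using the reported three-in-a-million figure purely for illustration (the fallacy lies in treating a made-up per-test number as a real, independent risk, not in the compounding itself):

```python
# Dudley's (fallacious) compounding: treat Compton's made-up per-test
# figure as an independent catastrophe risk for every detonation.
p_per_test = 3e-6   # the "slightly less than three in a million" figure

def p_catastrophe(n_tests):
    """Probability of at least one ignition in n independent tests."""
    return 1 - (1 - p_per_test) ** n_tests

print(p_catastrophe(1))           # 3e-6
print(p_catastrophe(1_000_000))   # ~0.95: "inevitable" given enough tests
```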

Bethe in 1975

The main point of Horgan’s post is to give what he says is a lightly edited transcript of a 1991 interview with Bethe, in which Bethe repeats his 1975 rebuttal of Dudley, where he reviews the arguments of Konopinski et al. and suggests that Buck must have completely misunderstood Compton. Bethe, in 1975, says “There was never any possibility of causing a thermodynamic chain reaction in the atmosphere […]; it is simply impossible”.

Bethe in 1991

According to Horgan’s transcript, in 1991 he described the suggestion as “such absolute nonsense”. But, interestingly, he then took a very different line on Compton’s interview:

“Just to relieve the tension [at the Trinity test on July 16, 1945, Enrico] Fermi said, ‘Now, let’s make a bet whether the atmosphere will be set on fire by this test.’ [laughter] And I think maybe a few people took that bet. For instance, in Compton’s mind [the doomsday question] was not set to rest. He didn’t see my calculations [or] Konopinski’s much better calculations. So it was still spooking [Compton] when he gave [the interview to Pearl Buck in 1959].”

So, no misunderstanding after all: Pearl Buck had accurately reported Compton, who was spooked in 1945, and still spooked even in 1959.

Bethe in 2000

By 2000, as I mentioned earlier, Bethe had a subtly different take again: “[T]he 1 in 300000 figure was made up by Compton ‘off the top of his head’, and is ‘far, far too large’.” So, now not “simply impossible”, merely “much, much more improbable than 1 in 300000” (whatever that was intended to mean).

Horgan’s blog has a very interesting commentary, which deserves quoting in full:

Clarification from Alex Wellerstein, an Actual Expert: My friend and Stevens colleague Alex, an historian specializing in nuclear weapons, posted the following info-packed comment. Alex alludes to the possibility, also raised by Teller, that a nuclear explosion could ignite deuterium in the ocean. The term “Super” refers to a hydrogen fusion bomb.—John Horgan

“John, I suspect that by the time you talked to him, especially after the (very silly) Dudley exchange, Bethe was pretty sick of this issue, and probably was downplaying the amount of effort and lingering concerns that existed in 1945. Dan Ellsberg, in The Doomsday Machine, makes an argument that they were more concerned than they let on later, and while I don’t totally think he is correct in his whole argument, I think he does a good job of showing that it wasn’t quite as dismissible as nonsense at the time, whatever Bethe thought of it.”

This sounds very plausible, with the downplay getting ever softer over the years. Bethe shifts from categorical denial in 1975 that anyone was worried, to accepting in 1991 that Compton was worried, not only at the time but even many years later, to a hint of a suggestion in 2000 that even he (Bethe) might have accepted that, even after the Konopinski-Teller calculations, the worries were not absolutely eliminated. What was Bethe’s true position? We may never know. If we can’t trust his 1975 account, and can’t quite trust his 1991 account either, then there’s certainly no reason to take his 2000 statement (by far the least considered of the three, a response to an email enquiry) as definitive. Perhaps – who knows? – the Bethe of 1945 wasn’t actually so far (in his subjective risk estimate) from the Compton of 1959?

But wait, back up! There’s something else extraordinary in Horgan’s interview. According to Bethe, Compton decided whether or not the 1945 Trinity test should go ahead without seeing any of the relevant calculations (Bethe’s or Konopinski’s). Of all the extraordinary claims around this issue, this is by far the strangest. The Konopinski-Marvin-Teller paper was written in 1946, but the substance of their arguments must have circulated before Trinity: that’s why everyone involved was aware both of the concern and that there were pretty strong reasons to think it wasn’t realistic. It’s not that hard to get at least the main ideas of the paper: Compton was unlikely to be deterred by the discussion of Compton scattering, for example. Compton was “spooked”, according to Bethe in that very interview. So what on Earth would have stopped him from looking at the calculations, or getting Bethe or Konopinski to explain them to him? And whose calculations, if not these, was Compton referring to when he said calculations showed the risk of igniting the atmosphere from a nuclear bomb to be slightly less than three in a million?

Bethe’s 1991 account makes absolutely no sense. Compton’s 1959 statement also makes absolutely no sense. Bethe pressed heavily and then more lightly on the downplay button over the years. Compton, so far as I’m aware, never elaborated further. Getting the history of catastrophic risk analyses straight seems peculiarly difficult.

Oppenheimer and the risk of global catastrophe

Christopher Nolan’s film, Oppenheimer, gives another version of its protagonist’s life, previously treated by a 1980 BBC series, by Bird and Sherwin’s 2005 biography, American Prometheus (credited in the film), and more recently by Tom Morton-Smith’s 2015 play. There’s no doubt great interest in comparing these, but I feel under-qualified for several reasons, not the least of which is not yet having seen the film.

I feel a little more confident, though, in commenting on the discussion during the Manhattan Project of whether a nuclear bomb would ignite the atmosphere, portrayed in the film (with, I understand, the strange fictional addition of Oppenheimer visiting Einstein to grapple with the dilemma).

This was the first serious technical discussion of a hypothetical human-created existential risk. It’s interesting to try to reconstruct the arguments, as far as we now can, because they highlight problems in existential risk analysis and policy that keep recurring — most obviously, of course, the risks of anthropogenic climate change and the now widespread concerns about how AI will affect and may threaten the future of humanity.

Digression on collider risk

I tried to dig into this in a 2000 paper, motivated by concerns about another hypothetical risk: that collider experiments then proposed (now carried out) could destroy the Earth, with, as Busza et al. put it, considerable implications for health and safety. Those concerns were based on hypotheses about unknown nuclear physics that experts agreed were unlikely. But, as Glashow and Wilson commented, “The word ‘unlikely’, however many times it is repeated, just isn’t enough to assuage our fears of this total disaster.” So people tried to give quantitative risk bounds, based on the fact that collisions analogous to those in the experiments occur in Nature and haven’t led to catastrophe. For example, heavy ion cosmic rays have been hitting heavy nuclei in the Moon for billions of years, and the Moon survives.

The problem with those analyses is that, while obviously written with the intention to reassure, they were neither carefully presented nor thought through. They ended up effectively arguing that risk bounds of 1 in 100000 or 1 in 500000 meant the risk was negligible. That’s plausible if we’re talking about the risk of death for a medical operation on an individual, but pretty ridiculous for the risk of ending life on Earth. The scale of the disaster means that a genuinely negligible risk should be much, much smaller. Also, the bounds themselves are based on assumptions, and any reasonable estimate of the chance of those assumptions being incorrect is very likely higher than the bounds. To be clear, I don’t think the experiments should have gone ahead, on the basis of the arguments made at the time. They did, and the Earth survived, as always seemed overwhelmingly likely. But, as poker players say, it’s a mistake to be result-oriented.

The worry about igniting the atmosphere

Anyway, back to the Manhattan project. The concern about the first atomic bomb tests was that they might initiate an unstoppable fusion chain reaction. The principal worry was that atmospheric nitrogen nuclei would become hot enough to overcome the electrostatic repulsion barrier and fuse together, liberating more energy, heating more nuclei, and so on. Another possibility was that nitrogen nuclei might fuse with hydrogen nuclei in the steam created if a bomb exploded over an ocean. An analysis by Konopinski et al., currently available here, tried to rule out these and more exotic possibilities, essentially by arguing that even if some such fusion reaction started, it would quickly reach a point where it radiated away too much energy to self-sustain.

Konopinski et al. wrote quite cautiously and self-critically. Even their generally bullish abstract mentions a “disquieting feature”, namely that, on their calculations, their so-called “safety factor” can reach as low as 1.6 (where any value below 1 allows catastrophe), albeit for much larger bombs than those being developed in Los Alamos. They add (p. 12) that for this reason “it seems desirable to obtain a better experimental knowledge of the reaction cross section”, even though it seems “hardly possible” (they write “hardly impossible”, but it’s clear from context this is a typo) that sufficiently high temperatures can be reached.

They go on (p. 16) to say that, although the safety factors in various scenarios seem adequate, “it is not inconceivable that our estimates are greatly in error and thermonuclear reaction may actually start to propagate”. Although they give reasons why any such reaction should eventually be quenched, and suggest this should contain the reaction to within ~100m of the ignition point, they add (p. 18) that their “numbers may be somewhat in error” and also that “[t]here remains the distant possibility that some other less simple mode of burning [than the one they analyse] may maintain itself in the atmosphere”. Moreover, “[e]ven if the reaction is stopped within a sphere of a few hundred meters radius, the resultant earth-shock and the radioactive contamination of the atmosphere might become catastrophic on a world-wide scale.” They conclude that “the complexity of the argument and the absence of satisfactory experimental foundations makes further work on the subject highly desirable”.

To précis, they give pretty strong reasons why a catastrophe seemed unlikely, but also some reasons for lingering concern. They gave no risk bound and didn’t suggest any way to derive one: they didn’t and couldn’t quantify “unlikely”. Their paper seems written primarily for physicists, and leaves it to them to decide if it gives good enough arguments to ignore any hypothetical catastrophe risk and go ahead with an atomic bomb test. Evidently, for the Manhattan team, it did.

Compton’s strange account

The public version of this became confused. Compton was later reported, in a published interview with Pearl Buck, as saying that he had decided not to proceed with the bomb tests if it were proved that the chances of global catastrophe were greater than three in a million, but that in the event the calculations proved the figure slightly less. As we’ve seen, this estimate certainly doesn’t come from Konopinski et al., and it’s hard to see how any meaningful calculation could have produced it.

I don’t think I can improve on my earlier discussion of Compton’s comment:

“Yet, so far as I know, Compton never made an attempt to correct Buck’s account. Had she simply misunderstood, it would have been easy for Compton to disclaim the statement. And, had it not reflected his views, he would surely have wanted both to set the historical record straight and to defend his reputation against the charge of unwisely gambling with the future of humanity. The natural inference seems to be that Compton did indeed make the statement reported.
If so, although the risk figure itself appears unjustifiable, Compton presumably genuinely believed that an actual risk (not just a risk bound) of 1 in 300000 of global catastrophe was roughly at the borderline of acceptability, in the cause of the American and allied effort to develop atomic weapons during World War Two. Apparently the figure did not worry 1959 American Weekly readers greatly, since no controversy ensued. It would be interesting to compare current opinion on the acceptability of a risk of global catastrophe, in the circumstances of the Los Alamos project or otherwise.”

“In April 2000, in an attempt to understand this puzzling statement of Compton’s, I contacted Hans Bethe, a key figure in both the Los Alamos project and the theoretical work which led to the conclusion that the possibility of an atomic bomb explosion leading to global catastrophe was negligible. His view, relayed by an intermediary (Kurt Gottfried), was that the analysis of Konopinski et al. was definitive and does not allow one to make any meaningful statement about probabilities since the conditions that must be met cannot be reached in any plausible manner. Bethe suggested that the 1 in 300000 figure was made up by Compton ‘off the top of his head’, and is ‘far, far too large’.”

At the time, I was more focussed on the flawed collider risk analyses and their policy implications. But I think there is more to be said about the Los Alamos risk analyses. Konopinski et al. wrote as scientists, for scientists, listing caveats and niggling concerns. They didn’t ask crucial questions: how sure do we need to be, given the catastrophic consequences of being wrong, and are we sure enough? But they also didn’t offer any false reassurances.

Compton, in 1959, was speaking as a scientist to the public. He seemed to want to emphasize the awareness of catastrophic risk, the enormity of the decision, the care with which it was made, and, implicitly, the confidence that everyone should have that the decision-makers were wise and the right choices were made. In those deferential and politically fraught times, this seems to have worked. The Manhattan project was by then widely seen as a national (and, to a much lesser extent, Allied) triumph. Its scientists were widely seen as heroes – unless perhaps they gave some hint of independent thought about the wisdom of aspects of US military and foreign policy in the atomic era. (Oppenheimer’s security clearance was, famously, revoked in 1954.) Questioning whether the Los Alamos tests might have been a bit reckless with the future of humanity, or even whether Compton’s justification made any sense, might have been, shall we say, career-limiting. Wise and distinguished people had thought everything through carefully. Questioning them would likely be portrayed as at best perverse, at worst intentionally subversive.

It was also a more risk-tolerant and less risk-reflective era. One in 300000 sounds pretty small, a lot was at stake in WWII, and there were no existential risk research centres then.

It didn’t occur to me then to ask: and what about Bethe in 2000? Partly that was because the collider risk was my focus. But also, I felt awe and admiration for this extraordinary scientist and human being, who had testified for Oppenheimer in 1954, campaigned for the partial test ban treaty and against Reagan’s Strategic Defense Initiative, and whom I’d seen at Princeton – already in his 90s – giving a lucid and intriguing seminar on supernovae. I was grateful he took the time to respond, and his answer fitted well into the discussion: I wasn’t inclined to query it.

What did Bethe mean?

But I now think we should ask: do even Bethe’s comments on the hypothetical catastrophic risk quite make sense? That Compton made up the 1 in 300000 figure “off the top of his head” is, indeed, easy to believe. I don’t know how it could possibly be justified objectively by calculation. As far as I’m aware, no one has ever suggested a better explanation than Bethe’s. But that the figure is “far, far too large”? What does that mean? Surely not that there is an objectively correct, much, much smaller figure. Then we would need an explanation of how that figure could be justified, and there isn’t one.

How about this: that Bethe’s understanding of the science and the arguments (including those of Konopinski et al.) led him to a subjective probability estimate that is much smaller? On a Bayesian view of probability, that at least makes sense as a statement, though it would have been better stated subjectively – “I thought the risk was much, much smaller”, or something like that.

Is Bethe’s take plausible?

But would it be a reasonably justifiable statement? Take another read, if you will, of Konopinski et al., including all their caveats. They give strong reasons not to be concerned, but they don’t completely eliminate all concern – and 1 in 300000 really is a very small probability.

If you need further reasons to query Bethe’s statement, and want to entertain further unlikely speculations, consider the (at that point not falsified) possibility that something relevant could have been wrong in the laws of physics they took for granted. For example, the quantum tunnelling rates through the Coulomb barrier in N-N fusion could have been higher than expected, for some then unforeseeable reason. It would have seemed very unlikely (and of course we now know it’s not true). Quantum theory might well, like every theory before it, be empirically refuted sometime, in some regime, but there was absolutely no theoretical or empirical reason to expect any breakdown to materially affect relevant fusion reactions. It would have been incredibly unlucky. But it’s only one example and, again, 1 in 300000 is a very small probability.

It’s all arguable. We’ll never know for sure how Compton or Bethe would have justified themselves. But although I’m pretty sure Compton’s statement that 1 in 300000 was a calculated probability was absurd, I’m not so sure that it’s a ridiculous subjective estimate of the probability of catastrophe given what was known. You could certainly argue for higher or lower, on the basis of Konopinski et al. Perhaps, given all expert knowledge at the time, you could argue for much, much lower, as Bethe (on this last reading) suggested – but I wonder if those arguments would really hold up.

What do we think about the policy dilemma?

Which leaves the question: if 1 in 300000 was a vaguely reasonable-ish guess, was it right to go ahead with the first atomic bomb test? That’s a topic for another post, but any thoughts are welcome.

Covid-19 and wild swimming

In the UK, about 4 million people swim outdoors in pools, seas, lakes and rivers. I can’t find a figure specifically for rivers, but swimming in the River Cam between Grantchester and Cambridge is certainly popular — the Newnham Riverbank Club has several hundred members, and many more people swim from other points in Grantchester Meadows — and I believe this is true at many other sites around the UK, and in many other countries.

It’s a little surprising that there has been almost no discussion of possible covid-19 transmission risks from wild swimming. Here is one interesting but inconclusive article, focussing mainly on possible risks from sewage. This also contributed to cautious advice during peak lockdown. Nonetheless, government guidance since mid-May has been that wild swimming per se is low risk so long as social distancing is maintained.

It may well be; I hope so. But we’ve learned during the pandemic that there seems to be a high transmission risk in settings ranging from choir practice to meat-packing factories. Afterwards, very plausible explanations have been given, but no one seems to have identified the risks in advance. Might river swimming at popular sites be another potentially risky setting?

The River Cam on a calm day

I have some concerns, which may not stand up to empirical test but which I haven’t seen considered carefully. My worry isn’t about transmission from sewage: I’m going to assume that swimmers will avoid sewage-contaminated waters. The worry is that person-to-person transmission while swimming in a river might be much more effective, at much longer distances, than is generally true outdoors, or even perhaps indoors.

First, let’s look at the survival of coronaviruses in water. This study looks at other coronaviruses, but I’ll take it as applicable to the covid-19 virus in the absence of contrary evidence. In the most hostile aqueous environment studied, primary filtered effluent at 23C, 1% of coronaviruses remained active after 1.5 days. The deactivation rate seems to be roughly time-independent, which gives us 10% still active after 18 hours, and ~90% still active after 1 hour. If someone sneezed into your river an hour ago, any coronaviruses they expelled are likely still active.
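
The extrapolation here assumes first-order (exponential) inactivation, anchored to the study’s worst case of 1% still active after 1.5 days; a quick sketch:

```python
import math

# Assume first-order (exponential) decay of viral activity, anchored to
# the worst case reported: 1% still active after 1.5 days (36 hours).
k = math.log(100) / 36.0        # decay constant per hour

def fraction_active(hours):
    """Fraction of viruses still active after the given time in water."""
    return math.exp(-k * hours)

print(fraction_active(18))   # 0.10: 10% still active after 18 hours
print(fraction_active(1))    # ~0.88: roughly 90% still active after 1 hour
```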

Ok, but we’re also not sure how long coronaviruses stay active in air or on surfaces. You can keep a social distance — let’s say the recommended 2m — while swimming, just as you would elsewhere, so what’s the problem?

Here’s one possible concern. In the air, droplets fall to the ground, and aerosols disperse in all directions. If someone coughs or sneezes while swimming in a river, droplets go into the water surface. What about aerosols? I don’t know — we need a fluid dynamicist and probably a range of experiments. But rivers have banks, and a direction of flow. Maybe aerosols mostly stay in a mist not far above the river, and over time a fair fraction may end up in the river.

And here’s another concern. Almost no one wears a mask while swimming (and it’s not obvious any standard mask would be effective). Few people wear goggles while swimming in the wild. Your eyes, nose and mouth are all close to the water surface — right in the zone where viruses are potentially concentrated. You’re often breathing hard, splashing water into your face, probably inhaling and swallowing some, and of course inhaling any aerosols or droplets above the surface.

Still, rivers are big, sneezed droplets are small. Surely they quickly dilute to irrelevance? Well, let’s try to estimate. A sneeze might emit 200 million viruses; an infectious dose might be 1000 viruses. So you need to inhale just 1/200000 of the viruses from a single sneeze to be infected. Let’s first think about the aerial route. Take a 10m river with 1m high banks; suppose the cough/sneeze spreads over 1m along the river, 1m above the river, and 10m across the river. Crudely idealizing, suppose that for some while it moves downriver as a 10×1×1 m³ box. Your breath has volume ~6 litres; 1 m³ is 1000 litres, so the box holds 10000 litres. If you breathe in while going through the box, you might inhale 6×10⁻⁴ of the 200 million viruses, i.e. about 120 infectious doses. For how long is the rectangle model good enough to give roughly right answers? It’s very hard to say without empirical modelling. At a guess, if someone sneezes 10m upstream from you, the situation is worse than the model suggests: the viruses won’t have spread out across the river by the time they reach you. At another guess, if they sneeze 1km upstream from you, the situation is better — maybe much better — than the model says. But I don’t know; I wonder if anyone does.
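
As a sanity check on the box model, here is the arithmetic run through once, using the stated inputs (a 200-million-virus sneeze, a 1000-virus dose, a 10 m³ box, a 6-litre breath, all rough guesses, not measurements):

```python
# Box-model check: one breath taken while passing through the sneeze box.
viruses_per_sneeze = 200e6
infectious_dose = 1000                  # viruses
box_volume_litres = 10 * 1 * 1 * 1000   # a 10x1x1 m^3 box of air, in litres
breath_litres = 6

fraction_inhaled = breath_litres / box_volume_litres   # 6e-4 of the load
doses = fraction_inhaled * viruses_per_sneeze / infectious_dose

print(fraction_inhaled)   # 0.0006
print(doses)              # ~120 infectious doses in a single breath
```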

Now suppose most of the viral load ends up in the water. The River Cam is ~6m×2m ≈ 1.2×10⁵ cm² in cross-section. For the full sneeze load of 200 million viruses to dilute to 1 virus per cc, assuming equal dilution at all depths and across the river, it would have to spread out uniformly over a length of about 17m. If it stays in the top 20cm — maybe more reasonable, for a long while, for a slow river on a calm day — it has to spread over about 170m. If it stays in the top 2cm, the length is about 1.7km. For 1000 viruses per cc (roughly one infectious dose per cc), staying in a 2m × 10cm cross-section near the surface, the length is about 1m. Take a look at the photo above, and ask how confident you are that the river flow will quickly stretch out sneeze particles over that volume. Now 1cc is about a quarter of a teaspoon. How confident are you about not exposing your eyes, nose and mouth to ¼ teaspoon of water while swimming?
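
The dilution lengths follow directly from the cross-sections; this sketch takes the full 200-million-virus sneeze load from the aerial estimate and computes the river length over which it must spread to hit a target concentration:

```python
# Dilution lengths for the in-water route (all inputs are rough guesses).
viruses = 200e6
cross_full = 600 * 200   # ~6m x 2m river cross-section, in cm^2
cross_20cm = 600 * 20    # top 20cm of the water column only
cross_2cm = 600 * 2      # top 2cm only

def dilution_length_m(target_per_cc, cross_cm2):
    """River length (m) over which the load must spread to reach target_per_cc."""
    return viruses / target_per_cc / cross_cm2 / 100   # cm -> m

print(dilution_length_m(1, cross_full))   # ~17 m
print(dilution_length_m(1, cross_20cm))   # ~170 m
print(dilution_length_m(1, cross_2cm))    # ~1.7 km
print(dilution_length_m(1000, 200 * 10))  # ~1 m at 1000 viruses/cc
```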

My conclusions? Swimming in an uncrowded unconfined area of sea seems a much better bet: there’s much more turbulence and no confinement except the beach boundary. My guess is that dilution is effective enough in the sea if you stay well away from others. I’m not completely confident about this, but personally, I’d take the risk.

If you’re going to swim in a river, I’d try to be upstream of, well, ideally everyone. All else being equal (don’t let a real drowning risk replace a hypothetical infection risk!), you should maybe prefer large faster-flowing rivers to small slow ones. Keep well away from anyone not in your household if they’re upstream of you. I’ve no idea whether 100m might be a safeish distance in a small river; I don’t see any reason to think 2m is.

For me, regretfully, the unknowns deter. I love Cam swimming, but haven’t indulged this summer. The risks clearly aren’t huge — no clusters of cases have been reported among wild swimmers in Cambridge, or anywhere else as far as I’m aware. But the general infection rates in Cambridge, and most of the UK, have, thankfully, been low over the summer. There may thus have been few or no asymptomatic but infected people swimming in the Cam, just as there may be few or none in any given pub or gym in Cambridge. This leaves a niggling worry that wild swimming, like bar-hopping or gym-going, may nonetheless be a relatively risky activity.

If you can produce more reassuring data or better arguments, I and (at least) one or two other cautious swimmers would be very grateful.

Estimating your COVID-19 risk

[Disclaimer: these are my own informal calculations based on my inexpert impression of the science and data, which themselves seem still quite uncertain. They’re meant to encourage you to look at the current data and do your own.

Updates September 14, 2020.

  1. After writing the original post, I found a nice article by Tim Harford that spells out the basic risk estimate calculation.
  2. The figures below have been updated with the most recent ONS and ZOE estimates (as of 14th September). ]

It’s difficult to know what’s worth doing, or not doing, to reduce the risk of covid-19 infection. Here’s a way of cutting through all the uncertainties and getting an estimate of your actual risk levels. I’ll give current figures for England; obviously the method works in any region where reasonable infection rate estimates are available.

First, find the current estimated daily infection rate for covid-19. Early September estimates for England from the ONS are ~3200 per day for people living in households (i.e. not care homes or hospitals). The ZOE Covid-19 Symptom Study gives ~4220 per day; this is for the entire UK and appears to be for all settings (including care homes and hospitals). All of the figures come with error bars; for example, the ONS 95% confidence upper bound is ~4600 per day. The figures currently seem to be increasing; the highest current estimate (that I’m aware of) is that they’re doubling every 7-8 days at present; other estimates suggest a lower rate of increase.

Assuming you’re not in a hospital or care home, you might thus conservatively go with twice the latest figures, i.e. roughly 8000 per day, for today. The population of England is 55 million, and the numbers in care homes (<500000) and hospitals (smaller) are small fractions of that. Call the residential population 50 million, rounding down.

So, if you’re a typical resident of England exposed to typical covid-19 risk, your risk of infection is ~ 8000/50000000 = 1/6250 per day. Annualized — if the risk were the same every day for the next year — this gives a risk of about 6% of infection.

Your risk of dying from covid-19 if you’re infected depends on your age, sex, and health. For most people it’s not that high. Estimates of the overall infection fatality rate vary; if we take it to be 0.6% then the average English resident’s annualized risk of death from covid-19 is the product, 6% × 0.6%, about 1 in 2700. That’s about 1/25 of the overall mortality rate. Your covid-19 death risk might be higher, if you’re old or have existing conditions — but so will your overall death risk, and they roughly scale together. If you die in the next month, unless you’re very atypically at risk, it’s pretty unlikely it’ll be of a covid-19 infection you contracted today.
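
The chain of estimates in the last two paragraphs can be checked in a few lines. Every input is one of the post’s rough assumptions; note that annualizing exactly gives ~5.7% rather than the rounded 6%, so the death-risk product comes out nearer 1 in 2900 than 1 in 2700, the same ballpark:

```python
# Risk chain from the post's assumptions (England, September 2020).
daily_infections = 8000       # conservative doubling of the ONS/ZOE estimates
population = 50e6             # residential population, rounded down
ifr = 0.006                   # assumed overall infection fatality rate

p_day = daily_infections / population          # 1/6250 per day
p_year = 1 - (1 - p_day) ** 365                # ~5.7% if sustained all year
p_death = p_year * ifr                         # ~1 in 2900

print(1 / p_day)      # 6250
print(p_year)
print(1 / p_death)
```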

Of course, the risk won’t be the same every day for the next year. There may be a very serious second wave. The current annualized risk is only a good guide for decisions right now. If you’re behaving like a typical English resident, in a typical environment, it looks as if your risk today is quite low. If you’re following government guidance, keeping social distance, wearing masks if you go shopping or on public transport, and don’t have a job that exposes you to atypical risk, your risk is very likely lower than the figures above. Most non-mandated risk avoidance measures — such as disinfecting groceries or wearing masks outdoors when not socializing and not in crowded areas — will probably not greatly reduce your already low risk. Most experts suggest that the large majority of infections come from indoor exposure to airborne droplets or aerosols. The data aren’t solid, but my sense is that it’d be surprising if as many as 10% of infections, even now that we’re aware of the indoor risks and taking countermeasures, came from other sources. So it’d be surprising if disinfecting groceries and non-social outdoor mask-wearing reduced the average risk of infection by more than an annualized 0.6%, or the average annualized risk of death by more than 1 in 25000.

Are these extra countermeasures nonetheless worthwhile today? The emerging consensus on mask-wearing seems to be that it’s more for social good than for individual benefit: my mask protects you much more than me. If a large enough proportion of people wear masks when near others, then, models suggest, the transmission rate can be significantly reduced — plausibly by enough to mitigate a second wave. The inconvenience isn’t that great, and it’s probably a habit we should get used to when we’re anywhere around people. Wear-masks-in-public is an easy rule to communicate, follow and enforce; wear-masks-in-shops-transport-sufficiently-dense-crowds-and-less-than-fleeting-conversations, not so much.

The case is much less clear for grocery disinfection and the like. Rationally, we should put finite prices on our lives and on our time. A quick short-cut (ignoring future discounting and quality of life weighting) is to convert everything into time. A minute a week disinfecting groceries means investing about 1/10000 of the year to avoid at most a 1/25000 risk of death. If your remaining life expectancy is more than 2.5 years, that’s perhaps worth it — but unless you expect to live for 75 more years, a more realistic 30 minutes a week perhaps isn’t. (If you’re very young, you might expect to live for 75 more years, but your death risk will be much lower. So you need to be optimistic, not just young, to make it worthwhile on this estimate.) But there are other costs: covid-19 also carries risks of serious and prolonged illness and perhaps lifelong loss of quality of life and lower life expectancy. Maybe the true cost of those risks is several times that of the death risk: my impression is that we just don’t know at present. Still, that might tip the balance towards grocery disinfection. There’s also some social benefit in personal risk minimisation: if you lower your risk of infection, you lower your risk of spreading. On the other hand, assigning 10% of total infections to infected groceries may well be far too high. If grocery disinfection brings you peace of mind, perhaps it’s worth doing, but I’d try not to worry about occasional lapses.
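
The minute-a-week arithmetic can be made explicit. This sketch converts everything to time, as in the paragraph above, and takes the at-most-1-in-25000 annualized death-risk reduction as given:

```python
# Time-for-risk trade, ignoring discounting and quality-of-life weights.
minutes_per_year = 365.25 * 24 * 60
max_death_risk_avoided = 1 / 25000   # upper bound from the estimate above

def breakeven_years(minutes_per_week):
    """Remaining life expectancy above which the time spent pays for itself."""
    fraction_of_year_spent = minutes_per_week * 52 / minutes_per_year
    return fraction_of_year_spent / max_death_risk_avoided

print(breakeven_years(1))    # ~2.5 years: 1 min/week is worth it for most
print(breakeven_years(30))   # ~74 years: 30 min/week rarely pays for itself
```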

Tl;dr: right now, in England (and the rest of the UK), unless you’re especially vulnerable or exposed, or live in a hotspot region where additional lockdown measures are in force, I think your personal covid-19 risk is still low today. Protect others with masks by all means; follow government guidelines, and you’ll be very unlucky to get ill today. But everything depends on the infection rate; I would follow it and reevaluate weekly.

