The Brilliant Inventor Who Made Two of History’s Biggest Mistakes

Since the surgeon general’s hearing in 1926, we have invented a vast array of tools and institutions to explore precisely these kinds of questions before a new compound goes on the market. We have produced remarkably sophisticated systems to model and anticipate the long-term consequences of chemical compounds on both the environment and individual health. We have devised analytic and statistical tools — like randomized controlled trials — that can detect subtle causal connections between a potential pollutant or toxic chemical and adverse health outcomes. We have created institutions, like the Environmental Protection Agency, that try to keep 21st-century Ethyls out of the marketplace. We have laws like the Toxic Substances Control Act of 1976 that are supposed to ensure that new compounds undergo testing and risk assessment before they can be brought to market. Despite their limitations, all of these things — the regulatory institutions, the risk-management tools — should be understood as innovations in their own right, ones that are rarely celebrated the way consumer breakthroughs like Ethyl or Freon are. There are no ad campaigns promising “better living through deliberation and oversight,” even though that is precisely what better laws and institutions can bring us.

The story of Freon offers a more troubling lesson, though. Scientists had observed by the late 19th century that there seemed to be a puzzling cutoff in the spectrum of radiation hitting Earth’s surface, and soon they suspected that ozone gas was somehow responsible for that “missing” radiation. The British meteorologist G.M.B. Dobson undertook the first large-scale measurements of the ozone layer in 1926, just a few years before Kettering and Midgley started exploring the problem of stable refrigerants. Dobson’s investigations took decades to evolve into a comprehensive understanding. (Dobson did all his work from ground-level observations. No human had even visited the upper atmosphere before the Swiss scientist and balloonist Auguste Piccard and his assistant ascended to 52,000 feet in a sealed gondola in 1931.) The full scientific understanding of the ozone layer itself wouldn’t emerge until the 1970s. Unlike with Ethyl, where there was a clear adverse relationship on the table between lead and human health, no one even considered that there might be a link between what was happening in the coils of your kitchen fridge and what was happening 100,000 feet above the South Pole. CFCs began inflicting their harm almost immediately after Freon hit the market, but the science capable of understanding the subtle atmospheric chain reactions behind that harm was still 40 years in the future.

Is it possible that we are doing something today whose long-term unintended consequences will not be understandable to science until 2063? That there are far fewer blank spots on the map of understanding is unquestionable. But the blank spots that remain are the ones capturing all the attention. We have already made some daring bets at the edges of our understanding. While building particle accelerators like the Large Hadron Collider, scientists seriously debated the possibility that activating the accelerator would trigger the creation of tiny black holes that would engulf the entire planet in seconds. It didn’t happen, and there was substantial evidence that it would not happen before they flipped the switch. But still.

As the scenario planners put it, the question of leaded gasoline’s health risks to the general public was a known unknown. We knew there was a legitimate question that needed answering, but big industry just steamrollered over the whole investigation for almost half a century. The health risk posed by Freon was a more mercurial beast: an unknown unknown. There was no way of answering the question — are CFCs bad for the health of the planet? — in 1928, and no real hint that it was even a question worth asking. Have we gotten better at imagining those unimaginable threats? It seems possible, maybe even likely, that we have, thanks to a loose network of developments: science fiction, scenario planning, environmental movements and, recently, the so-called longtermists, among them Toby Ord. But blank spots on the map of understanding are blank spots. It’s hard to see past them.

This is where the time-horizon question becomes essential. The longtermists get a lot of grief for focusing on distant sci-fi futures — and ignoring our present-day suffering — but from a certain angle, you can interpret the Midgley story as a rebuttal to those critics. Saturating our inner cities with toxic levels of ambient lead for more than half a century was a terrible idea, and if we had been thinking about that decades-long time horizon back in 1923, we might have been able to make another choice — perhaps embracing ethanol instead of Ethyl. And the results of that longtermism would have had a clear progressive bias. The positive impact on low-income, marginalized communities would have been far greater than the impact on affluent entrepreneurs tending to their lawns in the suburbs. If you gave a present-day environmental activist a time machine and granted them one change to the 20th century, it’s hard to imagine a more consequential intervention than shutting down Thomas Midgley’s lab in 1920.

But the Freon story suggests a different argument. There was no use expanding our time horizon in evaluating the potential impact of CFCs, because we simply didn’t have the conceptual tools to do those calculations. Given the acceleration of technology since Midgley’s day, it’s a waste of resources to try to imagine where we will be 50 years from now, much less 100. The future is simply too unpredictable, or it involves variables that are not yet visible to us. You can have the best of intentions, running your long-term scenarios, trying to imagine all the unintended secondary effects. But on some level, you’ve doomed yourself to chasing ghosts.

The acceleration of technology casts another ominous shadow on Midgley’s legacy. Much has been made of his status as a “one-man environmental disaster,” as New Scientist has called him. But in actuality, his ideas needed an enormous support system — industrial corporations, the United States military — to amplify them into world-changing forces. Kettering and Midgley were operating in a world governed by linear processes. You had to do a lot of work to produce your innovation at scale, if you were lucky enough to invent something worth scaling. But much of the industrial science now exploring the boundaries of those blank spots — synthetic biology, nanotech, gene editing — involves a different kind of technology: things that make copies of themselves. Today the cutting-edge science of fighting malaria is not aerosol spray cans; it’s “gene drive” technology that uses CRISPR to alter the genetics of mosquitoes, allowing human-engineered gene sequences to spread through the population — either reducing the insects’ ability to spread malaria or driving them into extinction. The giant industrial plants of Midgley’s age are giving way to nanofactories and biotech labs where the new breakthroughs are not so much manufactured as they are grown. A recent essay in the Bulletin of the Atomic Scientists estimated that there are probably more than 100 people now with the skills and technology to single-handedly reconstruct an organism like the smallpox virus, Variola major, perhaps the greatest killer in human history.
