Uncertain models predicting end of Covid are still useful

When the 1918 influenza pandemic swept the globe, American and European epidemiologists reeled. They hadn’t predicted the devastation. The data was shaky, but something like 50 million people died worldwide, an incomprehensible number even for a period when average life expectancy at birth was 45 to 50 years. How, epidemiologists asked, could they make their reactive science predictive?

Prediction was becoming a vital aspect of science, and those fields that aspired to predictive success looked to physics. What they found, more important than predictive power, were techniques for measuring and managing uncertainty. Examining the historical relationships between uncertainty and prediction that defined their efforts helps us to understand the challenges Covid-19 modellers face today and reminds us that models are useful, even if imperfect.

As deaths from the influenza pandemic declined, Albert Einstein became famous — on the back of a prediction. In November 1919, measurements taken during a solar eclipse confirmed that the sun’s gravitational field deflected the path of starlight according to the predictions of Einstein’s general theory of relativity. Newspapers worldwide trumpeted the results and Einstein catapulted to international stardom, unseating Isaac Newton’s view of the universe in the process.

Predicting the effect of gravity on starlight might seem like an unlikely route to fame, but arriving as it did amid the ravages of the flu and the First World War, Einstein’s predictive success offered a salve to a world gripped by uncertainty, as historian Matthew Stanley has argued. But the data did not speak for itself. The conclusion that Einstein was right required the judicious selection and analysis of data produced by finicky instruments deployed in challenging field conditions. Although recent historical work has defended the analysis, Einstein’s prediction was vindicated only by wrestling with uncertainty.

After the dual catastrophes of the First World War and the influenza pandemic, epidemiology also sought predictive tools. Traditional public health-oriented fieldwork techniques led to a decline in infectious diseases such as typhoid fever, cholera, smallpox, scarlet fever, diphtheria and tuberculosis in the first decades of the 20th century, but they were no match for the 1918 flu.

Epidemiologists turned to a pluralistic model, embracing the mathematical tools of physicists to conduct epidemic modelling. They looked at the laboratory work of bacteriologists and, later, virologists to understand the changing virulence of organisms. They undertook traditional field practice to unravel community infection. The goal was to make predictions about the recurrence and seasonal appearance of diseases such as measles and summer diarrhoea that were not declining, and about looming epidemic threats such as influenza and plague.

Spanning the Atlantic, a group coalesced around the idea that prediction was the most valuable part of the science. Leading the charge in Britain was Major Greenwood at the Ministry of Health. Why were some diseases seasonal? How did diseases increase or decrease in virulence? What role did weather, and later host immunity, play in forecasting disease events? How did an endemic disease, specific to a particular region, explode into an epidemic or pandemic? How could experimental laboratory studies of outbreaks in mice hold the key to predicting outbreaks in humans? To answer these questions, Greenwood emphasised the “need of a systematic plan of forecasting epidemiological events”, and argued that the present state of epidemiology was one that “only gives warning of rain when unfurled umbrellas pass along the street”.

By the late 1920s and into the 1930s, new mathematical models, informed by techniques developed in physics, began to dominate epidemiological forecasting. But discussions of epidemic waves, endemicity and herd immunity were predicated on mathematical modelling, as well as on deep recognition that public-health decisions needed to be implemented in light of the models’ uncertainties. Over subsequent decades, epidemiologists would struggle to tame that uncertainty and produce a prediction such as the one that made Einstein famous.
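The best-known of these new mathematical tools is the SIR model of Kermack and McKendrick (1927), which divides a population into susceptible, infected and recovered compartments. The following sketch illustrates the kind of "epidemic wave" such models produce; the parameter values here are purely illustrative, not drawn from any historical dataset.

```python
# A minimal SIR (susceptible-infected-recovered) epidemic model, integrated
# with a simple forward-Euler step. Illustrative parameters only.

def sir_step(s, i, r, beta, gamma, dt):
    """Advance the SIR equations by one time step of length dt."""
    ds = -beta * s * i * dt            # new infections leave S
    di = (beta * s * i - gamma * i) * dt  # infections arrive, recoveries leave
    dr = gamma * i * dt                # recoveries accumulate in R
    return s + ds, i + di, r + dr

def simulate(beta=0.3, gamma=0.1, days=200, dt=1.0):
    """Run the model and report the final state and the epidemic's peak."""
    s, i, r = 0.999, 0.001, 0.0        # fractions of the population
    peak = i
    for _ in range(int(days / dt)):
        s, i, r = sir_step(s, i, r, beta, gamma, dt)
        peak = max(peak, i)
    return s, i, r, peak

if __name__ == "__main__":
    s, i, r, peak = simulate()
    print(f"final susceptible fraction: {s:.3f}, epidemic peak: {peak:.3f}")
```

With beta/gamma = 3 (a basic reproduction number of 3), the infected fraction rises, peaks, and falls away in a wave; the model's predictions are only as good as those two inputs, which is precisely where the uncertainty the text describes enters.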

In the autumn of 1957, a new strain of influenza — H2N2 — emerged in China and quickly spread across the world, killing about one million people globally. Based on forecasting models, many epidemiologists believed this was a variant of the 1918-19 virus and not a “new” disease. Having predicted another epidemic wave, some epidemiologists such as Maurice Hilleman, at the Walter Reed Army Institute of Research, warned of the impending crisis and helped to rush forward a new vaccine.

But Western Europe and North America took little preventive action by way of quarantines, lockdowns or school closings, and the media paid the pandemic little mind. Much was the same during a subsequent influenza pandemic — H3N2 — in 1968. Flu pandemics had become normalised, and pandemic modelling for the flu was regarded as so uncertain as to be unreliable.

By the late 1990s, some epidemic models projected catastrophic pandemic events from zoonotic spillovers, first in 1997 with H5N1, in 2003 with Sars and again in 2009 with swine flu. Yet none blossomed into globally devastating pandemics. Many epidemiologists predicted that Sars-CoV-1, a biological cousin to the present coronavirus, for example, would explode throughout the world, but it remained relatively isolated to China and other parts of Asia, which implemented strict quarantines. In July 2003, just as the world braced for the disease to propagate globally, the World Health Organisation declared the pandemic over.

Epidemiologists had accurately predicted the emergence of these illnesses, but the global devastation their models forecast did not come to pass, leading to further scepticism of the value of predictive epidemiology.

In the early months of the coronavirus pandemic, we inherited this rich and complicated history of epidemiology, as well as the central question it raises: why trust an uncertain model? That much early epidemic modelling failed to correctly predict the course of the pandemic was no fault of the epidemiologists modelling a real-time disaster, who lacked basic data points for case fatality rates, infection rates, virus reproduction and the role of asymptomatic carriers. Poor data leads to high uncertainty.
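How poor data translates into high uncertainty can be made concrete with the standard final-size relation for a simple homogeneous-mixing epidemic model, z = 1 − exp(−R0·z). This is a hedged illustration, not a reconstruction of any specific Covid model: it shows how modest uncertainty in the reproduction number R0 produces a wide spread in the projected attack rate.

```python
import math

def final_size(r0, iters=200):
    """Fraction of the population ultimately infected, from the classical
    final-size relation z = 1 - exp(-r0 * z), solved by fixed-point iteration."""
    z = 0.5  # initial guess; the iteration converges for r0 > 1
    for _ in range(iters):
        z = 1.0 - math.exp(-r0 * z)
    return z

if __name__ == "__main__":
    # A plausible range of early R0 estimates yields very different projections.
    for r0 in (1.5, 2.0, 2.5, 3.0):
        print(f"R0 = {r0:.1f} -> projected attack rate {final_size(r0):.0%}")
```

Doubling R0 from 1.5 to 3.0 raises the projected attack rate from roughly 60 per cent to above 90 per cent, so when early case data cannot pin R0 down, the model's forecasts necessarily span a wide range.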

Early model predictions motivated the “flatten the curve” slogan, stoking hopes that the pandemic’s imagined long tail would soon arrive. Both the models and the dictum failed. Some early models and self-styled experts greatly exaggerated death projections in 2020, while others underestimated the impact of the virus. More than anything else, models oversaturated public discourse. Predictions that the 2020 summer heat would lower transmission rates were not borne out.

By the time safe and effective vaccines were rolled out in Europe and the United States in early 2021, public trust in epidemic models had already deeply eroded, in part because of early failures. When a third wave struck in 2021, model mania morphed into discussions of vaccine uptake and efficacy. We had other types of data to visualise. The models turned moribund.

Models also lost their sheen because some commentators weaponised them to manufacture doubt, particularly around the question of herd immunity. Predictions that normal life would swiftly resume did not materialise. Vaccine hesitancy became a stubborn roadblock to effective pandemic prevention, but so, too, did the spurious use of models. Even after experiencing new waves, Delta in summer 2021 and then Omicron in late 2021, few have listened to modellers sounding the alarms about the present sharp rise in cases in Europe and Asia. Most states and municipalities have relaxed mask policies and stopped pushing vaccination mandates.

But the history of Covid-19 may still be written largely as a story of the success of epidemiology — of determining the role of asymptomatic carriers, infection rates and the value of preventive public-health strategies, including mask-wearing, air filtration and vaccines. Future epidemiological modellers will also use the messy but unprecedentedly rich data of the coronavirus pandemic to improve their tools so that we might better face down the next pandemic.

The failure more worrying than that of the early models is the widespread attitude that sees uncertainty itself as a failure of science. As it was in 1919, the promise of perfect prediction is a seductive but empty one. A model succeeds only insofar as it manages uncertainty.

Our pandemic is far from over, even with spurious, unfounded claims of its endemicity. Now, more than ever, we should be following Covid epidemiology, even the latest models. But not for their certainty. Rather, we should appreciate that uncertainty bedevils all predictive methods. To use an epidemiological aphorism attributed to George E.P. Box: “All models are wrong, but some are useful.”

Joseph D. Martin is associate professor at Durham University, and author of Solid State Insurrection: How the Science of Substance Made American Physics Matter (2018)

Jacob Steere-Williams is associate professor at the College of Charleston, and author of The Filth Disease: Typhoid Fever and the Practices of Epidemiology in Victorian England (2020)

Published April 21, 2022 at 7:58 am (Updated April 21, 2022 at 7:48 am)
