How computer models get it wrong while seducing their creators

A review of Escape from Model Land by Erica Thompson

By David A. Shaywitz, Wall Street Journal, Dec. 28, 2022

We live in an information age, as the cliché has it—really an age of information overload. But “measured quantities do not speak for themselves,” observes Erica Thompson, a statistician and a fellow at the London School of Economics. Data, she notes, are given meaning “only through the context and framing provided by models.”

When we want to know how rapidly a new infectious virus is likely to spread, we turn to mathematical models. Models are used by climate scientists to project global warming; by options traders to price contracts; by the Congressional Budget Office to forecast the economic effects of legislation; by meteorologists to warn of approaching storms. Without models, Ms. Thompson says, data “would be only a meaningless stream of numbers.”

Ubiquitous and persuasive, models also drive decisions—one reason why, in Ms. Thompson’s view, they require our urgent attention. She tells us that, as a graduate student studying North Atlantic storms, she noticed how different models predicted different overall effects and produced contradictory results. She started to reflect on the role of models—as metaphors, as tools for understanding, as expressions of sociopolitical power. “Escape From Model Land” offers a contemplative, densely packed summary of her reflections and research.

Models seek to represent the real world, but they live outside it. Indeed, they exist in their own “wonderful place,” what Ms. Thompson dubs “Model Land.” In Model Land, the assumptions of a model are considered “literally true,” enabling expansive exploration and ambitious predictions.

The problem is that Model Land is easy to enter but difficult to escape. Having built “a beautiful internally consistent model,” Ms. Thompson writes, it can be “emotionally difficult to acknowledge that the initial assumptions on which the whole thing is built are literally not true.”

There are all sorts of ways that models can lead us astray. A small measurement error on an input can lead to wildly inaccurate forecasts—a phenomenon known as the Butterfly Effect. Fortunately, this type of uncertainty is often manageable.
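The sensitivity described here can be seen in even a one-line model. The sketch below (my illustration, not from the book) iterates the logistic map, a textbook example of chaotic dynamics: an input error of one part in a billion eventually swamps the forecast entirely.

```python
# Illustration of the butterfly effect using the logistic map
# x -> r * x * (1 - x), which behaves chaotically at r = 4.

def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map from x0 and return the full trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.2)          # the "true" initial condition
b = logistic_trajectory(0.2 + 1e-9)   # the same input with a tiny measurement error

# After a few dozen steps the two forecasts bear no resemblance
# to each other, despite inputs that differ by one part in a billion.
print(abs(a[-1] - b[-1]))
```

The error roughly doubles at every step, so even perfect model equations cannot rescue a forecast from imperfect measurement—which is why, as the review notes, this kind of uncertainty must be managed rather than eliminated.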

Far more problematic are what Ms. Thompson calls “unquantifiable unknowns”—things that are left out of a model’s calculation because they can’t be anticipated, such as the unexpected arrival of a transformative technology or the abrupt collapse of a robust market. It is not always true, she observes, that the data we have now will be relevant to the future—as traders discovered in the stock-market crash of 1987, when their models catastrophically failed.

Beyond the inherent inability of models to account for the unaccountable, models also reflect the biases of their creators.

We may be inclined to regard models as objective expressions of truth, yet they are deliberately constructed interpretations, imbued with the values and viewpoints of the modelers—primarily, as Ms. Thompson notes, well-educated, middle-class individuals.

During the pandemic, models “took more account of harms to some groups of people than others,” resulting in a “moral case” for lockdowns that was “partial and biased.” Modelers who worked from home—while others maintained the supply chain—often overlooked “all of the possible harms” of the actions their models were suggesting.

And even when models try to describe the effects of different courses of action, it’s human beings who must ultimately weigh the benefits and harms. “Science cannot tell us how to value things,” Ms. Thompson says. “The idea of ‘following the science’ is meaningless.”

The promise and peril of models, Ms. Thompson recognizes, have deep resonance in biomedicine, where so-called model organisms, like yeast and zebrafish, have led to foundational insights and accelerated the development of therapeutics. At the same time, treatments that work brilliantly in Model Land often fail in people, devastating patients and disappointing drug developers.

The search for improved disease models can be complicated when proponents of one model suppress research into alternative approaches, as the late journalist Sharon Begley documented in a powerful 2019 report. Ms. Thompson perceptively critiques the adoption of singular “gold standard” models, noting that the “solidification” of one set of assumptions can lock us into one way of thinking and close off other important avenues of inquiry.

The statistician George Box once observed that “all models are wrong, but some are useful.” For Ms. Thompson, the real utility of models is as a tool for exploration rather than a mechanism to divine the truth or predict the future.

“The process of generating a model changes the way that we think about a situation,” she writes; it “strengthens some concepts and weakens others.” Recalling President Eisenhower’s legendary maxim—that “plans are useless, but planning is indispensable”—she argues that relying on models solely for their output misses the indispensable value of the process of model development: a deeper understanding of trade-offs, and the agility to adapt if foundational assumptions unexpectedly change.

While acknowledging our “overenthusiasm for mathematical solutions,” Ms. Thompson emphatically counsels not abstinence but discipline and humility. Clarity about the purpose of the model matters, she says: An epidemiological model may inform us about viral transmission and hospital pressure but not about the economic effects of closing businesses.

Modelers should acknowledge the value judgments implicit in their models, explain what makes a model “good” and describe relevant limitations. But it’s up to us to learn from models without being drawn in by their seductive elegance, and to ensure that the lessons from Model Land find substantive expression where it actually matters: in our messy, material, magnificent world.

Dr. Shaywitz is a physician-scientist at Takeda Pharmaceuticals, a lecturer at Harvard Medical School and an adjunct fellow at the American Enterprise Institute.