Nassim Nicholas Taleb saw the financial crisis of 2008 coming. As pundits like Jim Cramer confidently declared that Bear Stearns was a safe investment, Taleb stood behind his warning that consolidation of banks would lead to global financial collapse – and he invested his own money accordingly. By the time the housing bubble had burst, the stock market had collapsed, and the dust had settled, Taleb had augmented his personal wealth by tens of millions of dollars – and cemented his reputation as an astute observer of financial markets.
But Taleb doesn’t just see himself as an expert on finance. He would prefer that we call him an expert on risk more broadly. As he opined in his 2012 bestseller Antifragile, “everything entailing risk–everything–can be seen with a lot more rigor and clarity from the vantage point of an option professional.”
It is apparently in this spirit that Taleb has co-authored the article “The Precautionary Principle (with Application to the Genetic Modification of Organisms)” with Rupert Read, Raphael Douady, Joseph Norman, and Yaneer Bar-Yam. This preprint makes the case for a ban on the cultivation of genetically modified organisms (GMOs), organisms whose genomes have been altered using genetic engineering methods, which usually involve the insertion of genes originating in other species. Taleb and his co-authors argue for a general “non-naive precautionary principle” which compels us to avoid taking actions that hold even a remote chance of catastrophic global harm, and they assert that the ban on GMOs should follow from there.
Taleb focuses on the possibility of unforeseen, high-impact events, known to readers of his books as “black swan” events. He argues that a harmful black swan on a global scale would be so catastrophic that we cannot allow any possibility of such an occurrence. He assures us that more traditional methods of plant breeding have already been proven not to present a risk of catastrophic harm, by virtue of their having been tested over millennia in nature. GMOs, he asserts, haven’t been proven safe in the same way, and we should therefore hold off on their cultivation until we can rule out the possibility of global catastrophe. Even lacking evidence to suggest that GMOs will cause global catastrophe, he warns, we need to be careful, because the nature of black swan events is such that we often don’t see evidence until it is too late.
This stance pits Taleb against the many plant scientists, regulatory agencies, and scientific societies that have considered the environmental and health effects of genetic engineering without finding cause for concern. This is of little concern to Taleb, who explained on Twitter that “Since Newton we have operated under the principle that a single mathematical derivation cancels millions of ‘scientific’ opinions(consensus).”
Certainly, mathematics is a powerful tool for gaining insight into the natural world, but Taleb’s “Precautionary Principle” is no Principia Mathematica. It would be overly generous to call it a mathematical argument; “The Precautionary Principle” would be more accurately described as mathiness.[1] Rather than set out a rigorous argument, it uses things that feel like mathematics – graphs, equations, and technical definitions – to provide a veneer of rigor to an ideological agenda. Critically, it lacks the precision of language and the careful attention to assumptions that lie at the foundation of mathematical reasoning.
Precision – to a mathematician – is not simply about computing figures to as many decimal places as possible. Instead, it means using language (sometimes including numbers) in a way that leaves no room for interpretation. If I told you that I lived near a fire station, that would be an imprecise claim. Whether it is true or false depends on what I mean by “near.” On the other hand, if I said that I lived within ten miles of the nearest fire station, that would be a precise claim – even if there were a fire station half a mile away from my home. The key is that the precise statement uses words with enough meaning that the statement can uncontroversially be classified as either true or false.
Even in mathematics, imprecise language does have its place. It can be useful in the process of hammering out a more detailed argument as a step along the way to something more precise, or it can serve to communicate general ideas when the details might get in the way of the big picture. However, an imprecise argument only has mathematical value if it can be expressed precisely.
While the “Precautionary Principle” pays some lip service to mathematical precision, the bulk of the argument is made only on the colloquial level. Moreover, this fuzziness is essential to the argument. It serves to hide mathematical errors and dubious unstated assumptions.
Exhibit A is the paper’s discussion of “fat tails”, the probability theoretic notion underpinning Taleb’s so-called black swans. A “fat tail” is a property of certain probability distributions, which specify the chance of getting different values when we measure some phenomenon. The word “tail” here refers to the role of extreme values, and a fat-tailed distribution is one in which these extreme values are relatively important.
A classic example of a phenomenon with a fat-tailed distribution is human income. Within a population, say the United States, a few individuals often earn orders of magnitude more than the vast majority of people. The average income is disproportionately influenced by these outliers (which is why it’s usually more enlightening to look at median income instead of mean income). By contrast, human weight follows a thin-tailed distribution. Nobody weighs 100 times as much as the average adult.
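To see the contrast concretely, here is a minimal simulation sketch of my own (the Pareto and normal parameters are arbitrary illustrative choices, not data): it compares a heavy-tailed “income-like” sample with a thin-tailed “weight-like” sample and shows how outliers pull the mean away from the median in the first case but not the second.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Heavy-tailed stand-in for income: a Pareto distribution with a low tail index,
# so a handful of draws land orders of magnitude above the rest.
incomes = 30_000 * (1 + rng.pareto(1.5, size=n))

# Thin-tailed stand-in for adult weight (in kilograms): a normal distribution.
weights = rng.normal(loc=75, scale=12, size=n)

for name, sample in (("income", incomes), ("weight", weights)):
    print(f"{name:>6}: mean={sample.mean():>12,.1f}  median={np.median(sample):>10,.1f}  "
          f"max/mean={sample.max() / sample.mean():>8,.1f}")

# Typical result: the income mean sits well above the median, and the largest draw
# is hundreds of times the mean; for weight, mean and median nearly coincide and
# the largest draw is only about twice the mean.
```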
But none of that is precise enough for mathematical argument. To give the discussion an appearance of mathematical rigor, Taleb devotes an appendix to an allegedly mathematical discussion of fat tails. Here he defines fat tails in terms of what is known in probability theory as a subexponential distribution. After giving a few equivalent mathematical definitions, Taleb offers a colloquial translation: if we get a bunch of values from a subexponential distribution, “the sum…has the same magnitude as the largest sample…which is another way of saying that tails play the most important role.”
In the main paper, Taleb argues that due to fat tails, “[i]n human made variations the tightly connected global system implies a single deviation will eventually dominate the sum of their effects.” Even if we accept that “human made variations” produce fat tails – and this claim does deserve scrutiny – it should be noted that a fat tail, as Taleb has defined it, does not imply that “a single deviation will eventually dominate.”
The problem is that Taleb’s colloquial explanation of subexponential distributions is, at best, a gross oversimplification. To be fair, it would be difficult to give an accurate non-technical definition of a subexponential distribution in a single sentence, but that does not make Taleb’s effort correct. Some concepts simply require a bit more detail.
Here’s a more accurate explanation of a subexponential distribution. Suppose that we have some way to generate random numbers according to a subexponential distribution, and we use it to generate a fixed number of values. The subexponential property tells us that there’s some threshold such that if the sum of all of our numbers exceeds that threshold, then it’s probably[2] because a single one of the values was bigger than the threshold. It does not tell us how big the threshold is or the chance that the sum actually exceeds that threshold! Nor does it allow us to say anything when the sum is less than the threshold (which may be the vast majority of the time).
But if we generate enough numbers, won’t the sum eventually exceed our threshold? Well, there’s a big catch: the threshold depends on how many numbers we’re generating! The threshold for a trillion numbers will be bigger than the threshold for 100 numbers, which will in turn be bigger than the threshold for 10 numbers. That means that each time we generate a new number and add it to the list, there is a new, bigger threshold that the sum has to exceed before we can conclude that a single value dominates. The bottom line is that there are many examples of subexponential distributions for which a single dominant event is extremely unlikely, even with a massive number of variations.
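To see the point in action, here is a small simulation sketch of my own (not from the paper) using the lognormal distribution, a textbook example of a subexponential distribution; the 50% “dominance” cutoff and the parameter sigma = 2 are arbitrary illustrative choices. As the sample size grows, a single value almost never accounts for most of the sum.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 2.0      # a fairly heavy-tailed lognormal; still subexponential
trials = 1_000

for n in (10, 100, 10_000):
    samples = rng.lognormal(mean=0.0, sigma=sigma, size=(trials, n))
    sums = samples.sum(axis=1)
    maxes = samples.max(axis=1)
    # Fraction of trials in which one value accounts for more than half of the sum.
    dominated = np.mean(maxes / sums > 0.5)
    print(f"n = {n:>6}: share of trials with a single dominant value = {dominated:.3f}")

# The share shrinks toward zero as n grows: the law of large numbers takes over,
# even though the lognormal distribution is subexponential.
```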
On the surface, this error might not seem like a big deal. It might even be repaired by replacing subexponential distributions with a more appropriate class of probability distributions – though there would be other holes to fill in this case. However, the example is instructive because it displays the importance of precision in mathematical argument. By straying from the precise mathematical formalism, Taleb arrives at a conclusion that doesn’t follow from his assumptions. And this kind of imprecision pervades all of “The Precautionary Principle.”
Consider, for instance, a section dedicated to fragility. Most readers will recognize the word “fragile” as meaning “easily broken”, but that’s not precise enough for mathematical use. Taleb offers a more formal definition, writing that an object is fragile if it has “a certain type on [sic] nonlinear response to random events.”
As an example, we’re asked to imagine a coffee cup on a table and consider how it responds to earthquakes of various magnitudes.[3] Perhaps an earthquake of magnitude 6 will knock over the table, causing the coffee cup to shatter into hundreds of pieces. But what if, instead of one earthquake of magnitude 6, the cup experiences six earthquakes of magnitude 1? A human won’t even notice an earthquake of magnitude 1, and a coffee cup won’t show the slightest sign of damage from six of these tiny quakes. Even though the sum of the magnitudes of the smaller earthquakes is equal to the magnitude of the larger one, the coffee cup is damaged much more by the big quake than by the small ones. This is what Taleb means when he describes the coffee cup as fragile.
According to Taleb, this fragility isn’t unique to the coffee cup. Rather, he explains that “[t]his nonlinear response is central for everything on planet earth.” That is, similar ideas will hold for other objects if we just replace the earthquakes with the appropriate “stressors.”
We can reframe this property in slightly different terms. Suppose that we have a fixed amount of stress to dole out to an object and we can choose to divide that stress into however many increments we want. If the object is fragile, it will be damaged more by one big lump sum than if we spread the stress among a number of smaller installments. Taleb explains that this property follows from the fact that “small variations are much, much more frequent than large ones.” In the case of the coffee cup, for example, earthquakes of magnitude 1 happen all the time. If the coffee cup were harmed as much by six small earthquakes as by one big one, it wouldn’t have lasted a week.
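A toy calculation makes the idea concrete. The cubic damage function below is my own illustrative assumption, not Taleb’s model; the point is only that when damage grows faster than linearly with the size of a stressor, one big shock does far more harm than the same total stress split into small pieces.

```python
# Toy convex damage function: damage grows with the cube of the stressor's size.
# The cubic form is an illustrative assumption, not a model from the paper.
def damage(stress: float) -> float:
    return stress ** 3

one_big_shock = damage(6)          # one stressor of "size" 6 -> 216 units of damage
many_small_shocks = 6 * damage(1)  # six stressors of "size" 1 -> 6 units of damage

print(one_big_shock, many_small_shocks)
# 216 vs. 6: with a convex (nonlinear) response, the lump sum hurts far more than
# the same total stress delivered in small installments, which is the informal
# notion of fragility described above.
```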
That reasoning might make some sense for the coffee cup, but can we really extend it to a theory of everything? Such a generalization, at the very least, lies beyond the limits of what mathematics can accomplish. Mathematics concerns itself only with structures and abstractions. Alone, it cannot tell us how the world works. Newton’s laws of motion, while mathematically elegant, were physically useful because they dealt with specific measurable quantities like mass and distance, and because they were consistent with empirical observations.
Measurement bridges the gap between the observable world and mathematical abstractions. It allows us to give precise meaning to claims such as “small variations are much, much more frequent than large ones.” If we do not establish a system of measurement, there’s little that mathematics can say about the real world.
In the financial world, which has been the proving ground for Taleb’s brand of risk analysis, the measurement problem is easy: just measure everything in the local currency. But if we want to consider something like the health of the planet in this kind of framework, there are dozens of interconnected factors that we might care about, ranging from the molecular composition of the atmosphere to the number of living species. It’s not obvious how all of this information should be reduced to a single number.
The fragility argument, as it happens, is very much dependent on how things are measured. For instance, in the coffee cup example, Taleb measures the intensity of an earthquake by its magnitude. What happens if we measure an earthquake instead by the amount of energy it releases? Each 1-point increase in magnitude corresponds to roughly a 32-fold increase in energy released, which means that a magnitude 6 quake gives off about 32×32×32×32×32=33,554,432 times as much energy as a quake of magnitude 1.
But how many earthquakes of magnitude 1 will the coffee cup experience for each magnitude 6 quake? In seismology, a principle called the Gutenberg-Richter Law predicts that a seismically active region (such as California) will experience about 100,000 times more earthquakes of magnitude 1 than of magnitude 6. That’s a lot of earthquakes, but because these earthquakes are so small, they collectively release only about 0.003 times as much energy as the one big quake of magnitude 6. It would take hundreds of years for the coffee cup on the table to experience enough earthquakes of magnitude 1 to match the energy given off by a single magnitude 6 quake (which strikes California less than once a year).
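The arithmetic behind these figures is easy to check. Here is a short sketch that takes the roughly 32-fold-per-magnitude energy rule and the 100,000-to-1 Gutenberg-Richter frequency ratio quoted above at face value.

```python
# Energy released grows roughly 32-fold for each unit increase in magnitude.
energy_m6_over_m1 = 32 ** 5          # = 33,554,432

# Gutenberg-Richter: roughly 100,000 magnitude-1 quakes for every magnitude-6 quake.
count_ratio = 100_000

# Total energy of all those magnitude-1 quakes, relative to one magnitude-6 quake.
relative_energy = count_ratio / energy_m6_over_m1
print(f"energy ratio: {energy_m6_over_m1:,}; the small quakes together release "
      f"{relative_energy:.4f} of one magnitude-6 quake's energy")   # about 0.0030
```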
This means that Taleb’s argument for fragility falls apart if we’re measuring earthquakes by the amount of energy released instead of the magnitude. The property that Taleb calls fragility is not, as he puts it, a simple consequence of “the statistical structure of stressors.” Instead, it is inextricably dependent on the way in which we choose to measure stressors. Put another way, the definition of fragility in terms of “nonlinear response” isn’t really precise enough for use in mathematical arguments until we’ve decided how we’re measuring the relevant quantities. The assertion that this nonlinear response is “central for everything on planet earth” should therefore be greeted with skepticism in the absence of precise definitions and empirical support.
A corollary of this fragility argument – if we believe it – is, we’re told, that “it is preferable to diversify our effect on the planet, e.g. distinct types of pollutants, across the broadest number of uncorrelated sources of harm.” The idea is that fragility implies that the damage from a collection of small harms should be less than the damage from a few big harms. That makes sense – if we assume that all of the choices under consideration impose the same amount of total stress.
That’s a huge assumption, and Taleb doesn’t bother to acknowledge it or explain why it should be true. Moreover, like the fragility argument itself, it depends on how we’re measuring stress. Dividing the “magnitude stress” of a magnitude 6 earthquake into 6 smaller quakes gives us 6 quakes of magnitude 1, but spreading the “energy stress” of a magnitude 6 earthquake among 6 smaller quakes gives us 6 quakes of magnitude 5.5. Without establishing what we’re talking about and how we’re measuring it, the idea of spreading out stress among smaller events isn’t even meaningful.
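The magnitude-5.5 figure follows from the standard scaling in which seismic energy grows as about 10^(1.5·M) (equivalently, the roughly 32-fold-per-magnitude rule above, since 10^1.5 is about 31.6). A one-line calculation, sketched below, recovers it.

```python
import math

def equal_split_magnitude(base_magnitude: float, pieces: int) -> float:
    """Magnitude of each of `pieces` equal quakes that together release the same
    energy as one quake of `base_magnitude`, using energy proportional to
    10 ** (1.5 * magnitude)."""
    return base_magnitude - math.log10(pieces) / 1.5

print(equal_split_magnitude(6, 6))   # about 5.48, i.e. roughly magnitude 5.5
```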
That problem of how to measure stress is particularly vexing when dealing with stressors that differ qualitatively. How, for instance, should a pound of anhydrous ammonium nitrate compare to a pound of herbicide or a day’s operation of a combine harvester? One can’t even begin to make sense of the discussion of fragility for real-world decision-making without addressing questions of this type. Yet “The Precautionary Principle” offers no insight into how such comparisons should be made.
This fuzziness also afflicts the section of “The Precautionary Principle” dedicated to establishing that genetic engineering is riskier than traditional plant breeding methods. Taleb asserts that genetic engineering is an example of “top-down engineering” and that it is therefore doomed to fail in complex environments. We never see a formal definition of “top-down engineering” or any explanation of why the term accurately describes the practice of inserting a gene into an existing plant. Instead of a proof that top-down strategies don’t work, we’re given a few examples – including a failed attempt to overhaul the air traffic control system.
For all the discussion of risk of ruin, Taleb is rarely careful to clarify: risk of ruining what? He’ll argue, for instance, that because the earth has survived for billions of years, “natural variations” pose negligible risk of ruin. And that might make sense if we’re talking about ruin for the earth itself, but if our interest is in the continuation of Homo sapiens, we ought not ignore the billions of species that have gone extinct from natural causes. To deduce that nature poses negligible risk to humanity based on humanity’s survival to the present day would be to succumb to survivorship bias.
The argument is relentlessly muddled throughout, and there are many more criticisms to be made along similar lines. The whole screed calls to mind an anecdote recorded in Antifragile, which began when colleagues reacted unfavorably to one of Taleb’s ideas. He recalls, “According to the wonderful principle that one should use people’s stupidity to have fun, I invited my friend Raphael Douady to collaborate in expressing this simple idea using the most opaque mathematical derivations, with incomprehensible theorems that would take half a day (for a professional) to understand.” He reports, “We got nothing but positive reactions.” The lesson to be learned from this is that “if you can say something straightforward in a complicated manner with complex theorems, even if there is no large gain in rigor from these complicated equations, people take the idea very seriously.”
With “The Precautionary Principle,” Taleb has given us another argument in which the math does not contribute any perceptible degree of rigor. In this case, the math, while not particularly complex, serves only to intimidate and to confuse, rather than to advance the argument. A professional could easily spend half a day on the paper, even if most of the time would be spent leaving red ink in the margins. Yet, by all indications, this time Taleb is completely serious.
“The Precautionary Principle” nonetheless raises a couple of philosophical points that are worth considering. What if the experts haven’t looked for harm in all the right places? How do we deal with the possibility of “black swans”? These are difficult questions, and they won’t be resolved with mathematical rigor alone, much less mathiness.
Taleb is also half right on another meta-issue. In defending his foray into the GMO debate from critics who dismiss his work because he lacks a background in biology, he notes that “no amount of expertise in the details of biological processes can be a substitute for probabilistic rigor.” I say this is “half right” because although rigorous quantitative reasoning deserves a place in risk analysis, subject-matter expertise should guide us toward mathematical formalisms that describe the real-world phenomena at hand. In the case of the GMO debate, this means that both biological knowledge and probabilistic rigor are important. Unfortunately, “The Precautionary Principle” doesn’t contain much of either.
[1] The term “mathiness” was originally coined by the economist Paul Romer and of course takes inspiration from Stephen Colbert’s “truthiness.”
[2] What does “probably” mean here? This is one of those times where I’m being less precise in order to keep the details from getting in the way of the main idea. But if we want to be more precise, “probably” means that if we choose any level of certainty, say 75% or 98% or 99.99%, in advance, then there’s a threshold that guarantees that things work out to that degree of certainty.
[3] After publishing this post, I realized that “The Precautionary Principle” did not say how the intensity of an earthquake should be measured, and that I had decided that magnitude was intended based on a similar analogy in a different paper by Taleb and Douady.