Have you seen great ideas or apparently solid organizations fail because of some random event or unexpected shock? Does your organization spend significant resources trying to avoid volatility or uncertainty? What if shocks, volatility, and uncertainty were actually what your ideas or organization need in order to really take off? This summary of Antifragile shows how the key to thriving is not avoiding stress but embracing the concept of "antifragility."

The antifragile is the opposite of fragile; it is something that loves randomness and uncertainty and is strengthened by a shock. Antifragility is inherent in all the natural and complex systems that survived over time. Our modern civilization is intent on damping down volatility and randomness and avoiding stressors; but once we grasp the importance of antifragility, we realize that our modern approach actually causes harm.

Suppressing volatility and randomness in our economy, our health, our education, or our political life makes those systems more fragile. Without stressors, complex systems become weak and even die.

Black Swans—large, irregular, and unpredictable events—are what make history. We cannot predict them, but we can determine which object or system is more fragile to Black Swans than another. The fragile systems are the ones that do not like volatility, randomness, errors or stressors.

Modern society assumes that anything can be 'fixed,' but most of the time it is better to leave well alone. Socioeconomic life and the human body can actually be harmed by intervention, leaving the whole more fragile to shocks and uncertainty. Often, the best course of action is to ignore the noise from too much data and let time take care of the problem.

One of the worst aspects of modern society is the way that fragility and antifragility get transferred from one group to another, usually with one side getting all the benefits and the other getting all the harm.


1. Introducing antifragility

Most people assume that the opposite of "fragile" is "robust," or some similar word implying resilient; but resilient items do not change or improve. In fact, there is no word for the opposite of fragile—the best we can come up with is something like "unharmed." Instead, let's use the word "antifragile." The resilient resists shocks but stays essentially the same, while the antifragile gets better after a shock; it loves randomness and uncertainty. Anything that has more upside than downside from a random event or certain type of shock is antifragile; fragile is the opposite.

Consider the Hydra, a monster in Greek mythology with numerous heads. Whenever one head was chopped off, two would grow back. That is the essence of antifragility: something that likes a certain type of harm and even thrives because of it. Another example that gets us part of the way to this notion of antifragile is the ancient king Mithridates, who protected himself from assassination by ingesting ever-stronger doses of poison. It's the principle that underlies vaccination. The idea is that sometimes systems need stressors in order to grow and thrive.

Antifragility is an inherent aspect of all those natural and complex systems that have survived; so, if we deprive such systems of volatility, randomness, and stressors, we will actually harm them. Suppressing volatility and randomness in our economy, our health, our education, or our political life makes those systems more fragile. Without stressors, complex systems become weak and even die.

The triad

The fragile craves tranquility, the robust doesn't care too much, while the antifragile grows from disorder.

Get into trouble

To innovate, first get into trouble. The excess energy released when you overreact to a setback, the over-compensation, is precisely what triggers innovation. It's like encouraging a horse to run faster by pitting him against a strong rival. In any system, this over-compensation builds extra capacity that helps the system to survive. Unlike risk analysis, which prepares a system to survive the kind of bad experience that has happened in the past, overcompensation is nature's way of helping a system cope with the worst that could possibly happen—not just the bad thing that has already happened.

Information is antifragile. It feeds on attempts to harm it. Consider a large corporation or government attempting to "re-instill confidence" after a crisis. When you hear that phrase, you know they are fragile, hence doomed. The information is out there and it is merciless.

Life is antifragile—up to a certain point, all living things are strengthened by stressors (beyond that point, too much stress can kill). Non-living, inanimate things, however, break when stressed. Humans seem to do best with acute stressors that act as messengers, particularly if given time to recover. They do not do well with chronic stressors like a burdensome boss, tax problems, or a long daily commute—i.e., the pressures brought on by civilization. Complex systems also benefit from a certain level of stress—it's how information is communicated to the system's component parts.

Yet, our modern life tries to eradicate stressors as much as possible, thinking it will strengthen us or strengthen society. But people and societies are not washing machines that can be tinkered with and made better. In fact, the opposite is the case: without periodic acute stressors, we will be weaker, not stronger. Living things need a certain measure of randomness and disorder in order to thrive.

Evolution and antifragility

Evolution can be thought of as getting stronger under harm—it loves stressors, randomness, and uncertainty. One individual may be relatively fragile, but its suffering strengthens the gene pool. If nature ran the economy and its institutions, it would not waste effort constantly bailing out every individual to help it to live forever. For evolution, the more noise and disturbances in the system, the more that reproduction of the fittest and random mutations will help to define the next generation. Of course, this is true only up to a point: a calamity that wiped out all life on earth wouldn't help the fittest to survive. Nevertheless, evolution happens when harm to an individual organism helps the species as a whole to survive.

The downside of this is that the mistakes or disasters suffered by some individuals may help the rest of us, but certainly don't help them. The sinking of the Titanic was catastrophic for its passengers, but without that disaster we would have kept building ever-larger ships and the next sinking would have been even more tragic.

Nature works in layers: smaller organisms contribute to evolution; our bodies' cells compete to survive; each cell contains proteins that compete; and so on. There is a similar kind of layering in the economy, with individuals, small firms, departments within large corporations, industries, regional economies, and the global economy. For the economy to be antifragile and evolve, individual businesses must be fragile and exposed to breaking. We can mitigate the harm that may ensue for the very weak, shielding individuals from starvation, providing social protection and above all respect. But, the fact is that government bailouts are the opposite of healthy risk-taking, by protecting the unfit. The better solution is to have a system where no one company's failure can drag others down.

Resisting black swans

Most of history comes from Black Swans—large-scale, irregular and unpredictable events that have huge consequences. Such events are necessary for history, technology, and knowledge. We may think, with hindsight, that we almost predicted these Black Swans, as we try to make history appear more linear, but in doing so we underestimate the role of randomness. In our modern world, with all its technological complexity, the role of Black Swans is actually increasing, even as we insist that we are better at predicting and even avoiding such events. Understanding antifragility will make us less fearful of Black Swans.

We cannot say for certain that a particular shock or event will happen, but we can determine which object or structure is more fragile than another, if the event were to occur. Instead of talking about risks, we should be talking about this notion of fragility: that which does not like volatility, randomness, errors or stressors.

We must also talk about 'fragilistas,' that category of people who mistake the unknown for the nonexistent, who overestimate the reach of scientific knowledge, and who pride themselves on being 'rational.' A medical fragilista is someone who underestimates the body's ability to heal itself; a policy fragilista thinks the economy is a machine that can be tinkered with and 'fixed'; and a financial fragilista is someone who forces onto the banking system risk models that end up destroying that system.

2. Denial and modernity

Artisans, taxi drivers, and dentists have volatility in their income but are relatively robust to the kind of Black Swan that would stop all of their income. Employees, on the other hand, have no such volatility but their income can disappear after one phone call from HR. A small mistake gives a self-employed person valuable information that helps her to adapt; an employee's small mistake becomes part of her permanent employment record.

The word "volatility" comes from the Latin volare, meaning "to fly." When we deprive political and other systems of volatility, through some misguided attempt at intervention, we harm them and prevent them from really taking off.

The Levant and the nation state

The northern Levant, roughly the area that today comprises northern Syria and Lebanon, was for thousands of years a prosperous region, dominated by traders. Then came the two World Wars, which split the region between two nation states, Syria and Lebanon. A few decades later the Baath Party arose in Syria, embarking on a 'modernization' program that meant centralization and statist laws, replacing the ancient trading hubs, the souks, with modern office buildings. The trading families left (for places like New York or California) and cities like Aleppo plunged into decline. Meanwhile, Lebanon spiraled into civil war as rival factions armed themselves and the state did nothing.

Nation states are not new (there was a brief example in ancient Egypt) but for much of history empires such as the Roman or the Ottoman dominated. Empires thrive by allowing local elites to prosper and to conserve some effective local autonomy. Even Europe, before the creation of the states of Germany and Italy, was in a constant state of tension, with shifting alliances and conflicts between small states and city-states. The nineteenth-century creation of nation states led directly to the two wars of the twentieth century. The modern system of nation states may mean fewer acts of violent conflict, but when wars do erupt they are far more destructive. In the long run, a messy locally-led confederation will be more stable than a centralized system.

Randomness (up to a point) rules

Variation can act as a purge: periodic small forest fires help to cleanse the system of flammable material. Similarly, the longer a financial market goes without any trauma, the worse it will be once one finally hits. A bit of confusion now and then helps to make the overall system stronger.

When volatility and randomness are artificially suppressed, not only does the system become more fragile, it exhibits no visible risks—so when the inevitable happens, the shock is even worse. Think about the government of Egypt before the riots of 2011: it had been supported for four decades by the United States "to avoid chaos." Preventing the noise made the problem worse in the long run.

The problem with modernity

At the core of our notion of modernity is the idea that more intervention is a good thing, that anything can and should be "fixed." Instead, consider the notion of iatrogenics, literally that which is "caused by the healer" (iatros being healer in Greek). A classic example is the old practice of bloodletting as a way of healing people, something that certainly caused more harm than good in most cases. Whenever we have unnecessary intervention, we have iatrogenics—causing harm while trying to help.

In the modern world, this happens most often in socioeconomic life and in the human body. State intervention ends up causing fragility and a deeper crisis once it happens. Think of Alan Greenspan and his pre-2007 interventionist attempts to "smooth out the boom-bust cycle," which hid problems under the carpet and caused a bigger economic shock.

Sometimes, intervention is necessary and desirable: the problem is with naïve interventionism that thinks it is making things better and does not recognize the harm it is causing. It's usually good to intervene to control things like size or speed. But often, the best course is not to intervene at all. Unfortunately, the modern world does not recognize this. Managers don't get rewarded for all the times they did not unnecessarily intervene.

The modern world introduces a lot of noisy data into our lives—which can cause too much intervention in a classic case of iatrogenics. Often, the best course of action is to ignore the noise and let time take care of the problem.

Prediction, schmediction

We assume that more data will help us predict events—but large events, particularly major political upheavals, cannot be predicted. Prediction and forecasting are modern phenomena, and they are not neutral. In fact, forecasting can be downright injurious, as bad as bleeding people who are ill. We can predict the movement of the planets, but we cannot accurately predict revolutions, crises, or the size of budget deficits.

Social, economic, and cultural life all lie in the Black Swan domain, where there is a limit to knowledge that can never be reached.

3. A different world view

Consider the case of Tony DiBenedetto, or "Fat Tony," who grew up in Brooklyn, moved to New Jersey, and is a smeller of fragility. He believes that nerds, administrators, and especially bankers, are suckers. Before 2007 he knew that some kind of crisis was coming, and he bet against it. As a result, he made a bundle. He did not believe in predictions, but he made big bucks predicting that some people—the professional predictors—would go bust. Fat Tony identifies fragilities, makes a bet on the collapse of that fragile system, collects big, then has lunch.

Seneca and the stoics

The Stoic philosopher Seneca lived a couple of thousand years before Fat Tony. The Stoics advocated a certain indifference to fate. After a setback of any kind, Seneca would say, "I lost nothing." He was not fragile to decisions made by fate. Seneca knew that success increases fragility; having possessions makes you worry about losing them. So, he would go through a mental exercise in which he would write off all his possessions, so that if he did lose them, he would not feel it. He still preferred wealth; he just sought to make himself antifragile to its loss.

The first step to increasing antifragility is to decrease your exposure to negative Black Swans and to let natural antifragility work by itself. This brings us to the notion of the barbell, something with two extremes that are kept separate, with avoidance of the middle. In other words, aim for a combination of maximally safe at one end and maximally speculative at the other, while avoiding the fragile middle. A good illustration of this is the tradition in French literature of writers looking for a not-very-challenging sinecure to pay the bills, while freeing up time to pursue their creative calling. Einstein working at the Patent Office while developing his theories is another.
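The barbell can be made concrete with a small numerical sketch. The 90/10 split and the tenfold payoff below are hypothetical figures, not from the book; the point is only that the worst case is capped by construction while the upside stays open.

```python
def barbell_outcomes(wealth, safe_frac=0.90):
    """Worst and best case for a barbell: most wealth in a maximally
    safe asset, a small sleeve in a speculative bet that either goes
    to zero or pays off tenfold (the 10x figure is illustrative)."""
    safe = wealth * safe_frac
    speculative = wealth - safe        # remainder goes to the speculative sleeve
    worst = safe                       # speculative sleeve wiped out entirely
    best = safe + speculative * 10     # speculative sleeve pays off 10x
    return worst, best

print(barbell_outcomes(100.0))  # (90.0, 190.0)
```

However the speculative bet turns out, the loss can never exceed the 10% sleeve; contrast a "medium-risk" portfolio, whose true downside depends on risk estimates that a Black Swan can invalidate.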

4. Education vs. innovation

One of our greatest errors as humans is thinking we know exactly where we are going, and assuming that others know this, too. In fact, from architecture to medicine to engineering, it is trial and error that leads to innovation. America's great asset is that it engages in trial and error; in Japan, by contrast, to fail is shameful, so people hide their errors.

Natural selection

Nature knows how to select; about half of all embryos undergo spontaneous abortion. Nature knows it is easier to try and fail than to aim for always creating something perfect. Silicon Valley gets this, with its mantra of "fail early." Keep what is good, ditch what is bad, and know when to take the profits. The antifragile takes the best option; the fragile has no option.

A lot of the time, outcomes that we assign to skill were really the result of exercising an option. We do need some intelligence to recognize that an option exists; that's how trial and error works, using intelligence to recognize what to keep and what to discard.

The limits of academia

Academics like to downplay the role of trial and error in human society and knowledge. They like to push the idea that universities generate wealth and the growth of useful knowledge. But, just because rich countries are educated does not mean that education causes wealth. It's certainly useful for the individual, and it does play some role at the national level, but educating people is not an automatic path to national wealth as many assume.

There is a big difference between doing and thinking. Thinkers hate uncertainty; become trapped in the story; and always try to understand the logic of things. Antifragile doers embrace uncertainty; use narrative for motivation but are not trapped by the story; and know that the rational thing to do is to compare two outcomes and choose the better option.

Practitioners don't write, they do. We don't put theories into practice, we create theories out of practice. That's not to say that academic science is not behind some practical technologies, but we don't have to be suckers about what academia can and cannot do.

The Industrial Revolution was not triggered by science. Rather, the main sources of knowledge and technical innovation in Britain during the eighteenth and nineteenth centuries were the hobbyist and the rector, in other words, enlightened amateurs. The Revolution was the result of technologists building technology, like John Kay inventing the flying shuttle in 1733: an empirical development based on the trial, error, and experimentation of a skilled craftsman.

All of this implies that the role of government should be to fund tinkerers and innovators, and to encourage collaboration; because who knows where the unpredictability of collaboration might lead?

5. Detecting fragility

For something that is fragile, the more intense the shock, the more harm it delivers. If you drive your car into a wall at 50 mph, it will cause more harm than if you drive it into the same wall at 5 mph ten times. The cumulative effect of small shocks on something fragile is not as bad as the effect of a single, large shock. And, the more fragile something is, the more harm it will get from that single, large shock.
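The car-and-wall comparison is really a claim about nonlinearity: for the fragile, harm grows faster than linearly with the size of the shock, so one large shock outweighs many small ones of the same total dose. A minimal sketch, assuming a quadratic harm curve purely for illustration:

```python
def harm(speed_mph):
    # Hypothetical convex damage curve; the square is illustrative.
    # The point is only that harm grows faster than linearly.
    return speed_mph ** 2

one_big_shock = harm(50)          # one crash at 50 mph
many_small_shocks = 10 * harm(5)  # ten crashes at 5 mph
print(one_big_shock, many_small_shocks)  # 2500 250
```

Under any convex harm curve, the single large shock dominates: here it does ten times the damage of ten small crashes combined, even though the total "speed delivered" is the same.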

Small is less fragile, particularly in the world of business. Being large during difficult times is a real problem. The bigger a project, the poorer the final outcome; the bigger the project, the higher the cost of a delay as a proportion of the final budget. However, some projects can be divided into smaller pieces, such that the size of the segment matters more than the size of the overall project.

In our complex, interdependent, globalized world, Black Swan effects are increasing. A problem at one point can derail an entire venture; projects are as weak as the weakest link in their chain. The world is getting less predictable even as we rely more and more on technologies that have hard-to-estimate errors. We may think our economy is getting more efficient, but fragility means the cost of errors is much higher. On the stock exchanges, we swapped traders for computers, thinking this made for a (very small) increase in efficiency. But, when a trader makes a mistake the result is confined and distributed; when a computer makes a mistake, the impact can be wild. Remember the May 2010 "flash crash"? The whole thing was triggered by one small computer error. And, thanks to globalization, the effect of contagion is planet-wide.

Who will go bust?

Fannie Mae was a giant government-sponsored lending facility that collapsed, sticking US taxpayers with hundreds of billions of dollars in losses. Back in 2003, a journalist showed me a secret report from a Fannie Mae employee that revealed how the corporation calculated risk: a move upward in an economic variable would lead to massive losses, while a move in the opposite direction would lead to small profits. Further upward moves meant even larger additional losses, while further downward moves meant ever-smaller increments of profit. This was a clear case of acceleration of harm, on a massive scale.

I told anyone who would listen that Fannie Mae was "sitting on a barrel of dynamite"—and, by inference, so were many other institutions, such as banks.

The lesson from Fannie Mae here is to look for acceleration of harm. When every deviation from an assumed norm makes the harm significantly worse, e.g., when every increase in the unemployment rate boosts the fiscal deficit, or a firm has to borrow more and more money to stay afloat, then the situation is inherently fragile.

We can use this approach to test whether the math in an economic model is bogus: make a small change in the model's assumptions and see how big the effect is and whether it accelerates. If it does, then whoever is relying on this model is going to blow up from a Black Swan, Fannie Mae-style.
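That perturbation test can be sketched mechanically: nudge an input assumption in equal steps and check whether the loss from each step merely grows or gets worse at an increasing rate. The payoff function below is hypothetical, standing in for any economic model:

```python
def toy_model(rate_move):
    # Hypothetical concave payoff: losses grow with the square of the
    # move in the assumed variable (Fannie Mae-style acceleration).
    return -100 * rate_move ** 2

def harm_accelerates(model, steps):
    """True if each equal-sized step in the assumption loses more
    than the previous one, i.e. if the harm accelerates."""
    outcomes = [model(s) for s in steps]
    # Loss added by each successive step.
    step_losses = [prev - nxt for prev, nxt in zip(outcomes, outcomes[1:])]
    return all(b > a for a, b in zip(step_losses, step_losses[1:]))

print(harm_accelerates(toy_model, [0.0, 0.01, 0.02, 0.03]))  # True
```

A model with merely linear losses would pass this test; it is the acceleration, each deviation hurting more than the last, that flags the hidden fragility.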

6. Some stuff should be left out

Sometimes it is easier to describe something by saying what it is NOT rather than what it IS. Similarly, sometimes it is better to leave things out. As Steve Jobs said, "People think focus means saying yes to the thing you've got to focus on. But that's not what it means at all. It means saying no to the 100 other good things that there are…I'm actually as proud of the things we haven't done as the things I have done. Innovation is saying no to 1,000 things."

We constantly look for more data to solve problems, but the reality is that in the social sciences, and especially in economics, tons of statistics can be instantly invalidated by one Black Swan event—an event that cannot be predicted.

The role of time

Something that has survived for a long time probably serves a good purpose that our eyes and our logic may not see. (Even something antifragile will eventually break, but it should take far longer.) When we try to imagine what the future may look like we often speculate about new technologies and products but forget that we should start with the past and respect for the historical record. The notion of heuristics is important: the rules of thumb (often unwritten) that really determine survival.

There are things that perish, like humans and most objects, and there are things that are nonperishable and potentially perennial. These usually have an informational component. Technology is great, particularly when it displaces something fragile, but it is at its best when it is invisible. The most invisible technologies are usually those that have been around for the longest—and every year that passes without them being eliminated increases the likelihood that such technologies will continue to endure.

Unfortunately, information hides failures. We notice change but not things that are static. We are far more dependent on water than on cell phones, but we assume cell phones are critical because of how often they change. So, how to tell if a new idea or invention is really important? Time. Time is the one factor that will turn unnecessary noise into dust and will preserve that which is really worth preserving. A book that has been around for a decade will likely last another decade; a book that has been around for 2,000 years will continue to have staying power. Mathematics is perhaps the exception here; in math, it is immediately obvious if results have meaning. For everything else, especially in the fields of literature, finance, and economics, time is the surefire way to get rid of the overhype of the new.
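The rule of thumb about books is what Taleb elsewhere calls the Lindy effect: for the nonperishable, expected remaining life is proportional to age already survived. A minimal sketch, with the proportionality constant set to one purely for illustration:

```python
def lindy_expected_remaining(age_years, k=1.0):
    """Lindy heuristic for nonperishables (ideas, books, technologies):
    expected further survival is proportional to the age already
    survived. The constant k is illustrative, not an empirical estimate."""
    return k * age_years

print(lindy_expected_remaining(10))    # 10.0 (a decade-old book: expect another decade)
print(lindy_expected_remaining(2000))  # 2000.0 (an ancient classic has staying power)
```

Note that this applies only to the nonperishable; a human's expected remaining life shrinks with age, which is exactly the dividing line the text draws.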

Fragility and healthcare

Denying antifragility obscures the hidden costs of modern healthcare. We shouldn't need evidence of harm to assert that a particular drug or procedure is harmful. Take trans fat, a human creation: at its invention, everyone assumed that this scientifically created fat was superior to its natural counterparts. It stayed soft in the fridge, it was cheap to make, and it was thought to be healthier than butter or lard. We were suckers: it turns out the stuff actually kills people.

Trans fat is a classic case of iatrogenics at work, causing harm while trying to help. In such a situation the benefits are usually small but visible, while the damage is large, delayed, and hidden. Every so often science comes up with a drug that "enhances performance," only to find out what any market trader can tell you: there is no such thing as a free lunch. If there were a drug that could make us feel better without any negative long-term effects, nature would probably have figured it out already.

Above all, iatrogenics leads to the conclusion that medicine and the pharma industry should focus on the really severe, high-symptom conditions and diseases. For everything else, where the patient is more-or-less healthy, pharma should leave Mother Nature to be the doctor.

Evolution proceeds thanks to undirected tinkering. Top-down science, however, does the opposite. There is often a logic to natural things that we just may not understand. After all, whatever Nature does, it has been tested over time and is rigorous until proven otherwise. Even so, we are suckers for theories and information, forgetting that human explanations change all the time. We often mistake random variability for information and act on it, intervening when we should leave well alone.

Human life expectancy has increased thanks to a combination of many factors: sanitation, antibiotics, medical intervention in life-threatening situations, and so on. On the other hand, there are a lot of diseases that have been caused by civilization: dental cavities, the impact of smoking, and most cardiovascular disease. We can add to this second list unnecessary medical interventions. We have a tendency to intervene, especially when we think we have a lot of information that we should act upon.

However, a lot of the time people would benefit more from having things taken away than from adding yet more products and procedures: if we take away sugar and refined carbohydrates, sodas, food supplements, etc., many people would benefit. Over-the-counter painkillers are another product whose absence would benefit many: relying on these painkillers can stop someone from addressing, say, the cause of their headaches.

Interestingly, periodic food deprivation or fasting can also have a positive impact on the human body. We assume that because there is such a thing as a balanced diet, we should eat that balance at every meal, and that frequent meals are a good thing. But, in more primitive societies, the food supply was both varied and vulnerable. We know that the periodic stress of exercise is good for the body; why do we not realize that periodic fasting can have the same effect? There are plenty of religions that incorporate ritual fasts; maybe they're onto something.

7. Fragility and ethics

The worst thing about modern society is the way that fragility and antifragility get transferred from one party to another, with one side getting all the benefits and the other side unwittingly getting all the harm. In traditional societies, a person's worth depended on how much they were willing to sacrifice for others. In our society, power accrues to those who have no skin in the game. It's the difference between saints, knights, innovators, dissidents, and artists, all people with skin in the game for the sake of others; and bureaucrats, corporate suits, politicians, and businesses, who have zero skin in the game yet exercise a great deal of power. The antifragility of society depends firmly on the first set of individuals (although it must be said, they may not always be right, and some may have had a messianic fervor that ended up causing huge amounts of harm).

No skin in the game

Some people have options in our society at the expense of others. Every so-called opinion-maker should have skin in the game; every forecaster and analyst should stand to lose out if people rely on their forecasts and those forecasts turn out to be wrong. It is profoundly unethical to talk without doing; if you express an opinion that may harm others, you should incur some liability for your words. In traditional societies, it is better to try and fail, with some skin in the game, than to never be exposed to the consequences of your own words or actions.

Don't ask someone for their opinion or recommendation; just ask them what they actually do or don't have in their own portfolio.

The stock market is the biggest transfer of antifragility in human history. Investments are packaged into shares, managers get to make big decisions that game the system, and they get more prestige than the entrepreneurs who are the real risk takers. The system leaves corporate managers, like fund managers, with plenty of incentives but no disincentives. Consider that the banking industry lost hundreds of billions of dollars, likely more than it ever made, yet individual managers still reaped billions in compensation, with the bill footed by the taxpayer. Adam Smith, in his famous Wealth of Nations, understood this problem, warning that managers of other people's money cannot be expected to watch over it "with the same anxious vigilance with which the partners in a private copartnery frequently watch over their own."

Perhaps the biggest tyranny of all, the one that really allows people to practice ethical optionality, is the large organization. Ethical optionality is where people are encouraged to fit their beliefs to their actions, rather than their actions to their beliefs. There's nothing wrong with practicing a profession, but when a practitioner goes on to try to influence others, they become suspect; because, unlike in an ancient city-state, we no longer have the penalty of shame, the ultimate punishment in ancient times for a violation of ethics. In large, complex organizations shame cannot exert discipline. The hallmark of organizational knowledge is that mistakes are made collectively, not individually; "everyone is doing it" becomes the mantra.

8. Concluding thoughts

To sum up this book in one phrase:

Everything gains or loses from volatility. Fragility is what loses from volatility and uncertainty.

Time is volatility. True education, the kind that forms character, likes disorder; label-driven education doesn't. Error causes some things to break, but not others; some theories fall apart but others don't. We can detect which things like volatility and which do not.

Innovation gains from uncertainty. Distributed randomness is a necessity. Everything big or fast, the hallmarks of modernity, is short volatility. A glass is also short volatility. But, living things are long volatility.

You know you are alive because you like variation—hunger makes things taste good, effort gives meaning to results, sadness and uncertainty also come with joy and conviction. No life is ethical if it is stripped of personal risks.