Summary
This book has greatly impacted my thinking. It’s all about asymmetries in life and how to make them work for you. “Just worry about Black Swan exposures, and life is easy” sums it up perfectly. A book to reread.
Key Takeaways
- Antifragility is beyond resilience or robustness. The resilient resists shocks and stays the same; the antifragile gets better.
- We can detect fragility, see it, even in many cases measure it, or at least measure comparative fragility with a small error while comparisons of risk have been (so far) unreliable.
- The fragile breaks with time.
- The idea is to focus on fragility rather than predicting and calculating future probabilities.
- Depriving systems of stressors, vital stressors, is not necessarily a good thing, and can be downright harmful.
- Machines are harmed by low-level stressors (material fatigue), organisms are harmed by the absence of low-level stressors (hormesis).
- We hurt systems with the very best of intentions by playing conductor. We are fragilizing social and economic systems by denying them stressors and randomness.
- Mediocristan has a lot of variations, not a single one of which is extreme; Extremistan has few variations, but those that take place are extreme.
- Our mission in life becomes simply “how not to be a turkey,” or, if possible, how to be a turkey in reverse—antifragile, that is. “Not being a turkey” starts with figuring out the difference between true and manufactured stability.
- When we look at risks in Extremistan, we don’t look at evidence, we look at potential damage.
- As a rule, intervening to limit size (of companies, airports, or sources of pollution), concentration, and speed are beneficial in reducing Black Swan risks.
- One should lead a life in which procrastination is good, as a naturalistic-risk-based form of decision making.
- Wisdom in decision making is vastly more important—not just practically, but philosophically—than knowledge.
- Seneca’s practical method to counter such fragility was to go through mental exercises to write off possessions.
- The first step toward antifragility consists in first decreasing downside, rather than increasing upside.
- The barbell is meant to illustrate the idea of a combination of extremes kept separate, with avoidance of the middle.
- If you “have optionality,” you don’t have much need for what is commonly called intelligence, knowledge, insight, skills, and these complicated things that take place in our brain cells. All you need is the wisdom to not do unintelligent things to hurt yourself (some acts of omission) and recognize favorable outcomes when they occur.
- When you are fragile you need to know a lot more than when you are antifragile. Conversely, when you think you know more than you do, you are fragile (to error).
- To repeat, absence of evidence is not evidence of absence, a simple point that has the following implications: for the antifragile, good news tends to be absent from past data, and for the fragile it is the bad news that doesn’t show easily.
- Some rules to be more antifragile:
- Look for optionality; in fact, rank things according to optionality,
- Preferably with open-ended, not closed-ended, payoffs;
- Do not invest in business plans but in people, so look for someone capable of changing six or seven times over his career; one gets immunity from the backfit narratives of the business plan by investing in people;
- Make sure you are barbelled, whatever that means in your business.
- Avoidance of boredom is the only worthy mode of action. Life otherwise is not worth living.
- Sucker or nonsucker: exposure is more important than knowledge; decision effects supersede logic.
- The payoff, what happens to you (the benefits or harm from it), is always the most important thing, not the event itself.
- Scaling property: if you double the exposure to something, do you more than double the harm it will cause? If so, then this is a situation of fragility. Otherwise, you are robust.
- It is completely wrong to use the calculus of benefits without including the probability of failure.
- A measure for fragility, hence antifragility: figuring out if our miscalculations or misforecasts are on balance more harmful than they are beneficial, and how accelerating the damage is.
- If you have favorable asymmetries, then in the long run you will do reasonably well, outperforming the average in the presence of uncertainty. The more uncertainty, the more role for optionality to kick in, and the more you will outperform.
- Charlatans hit you with only positive advice; in practice it is the negative that’s used by the pros, those selected by evolution.
- We know a lot more what is wrong than what is right, so knowledge grows by subtraction much more than by addition.
- What is wrong is quite robust, what you don’t know is fragile and speculative, but you do not take it seriously so you make sure it does not harm you in case it turns out to be false.
- Just worry about Black Swan exposures, and life is easy.
- If you have more than one reason to do something, just don’t do it. Obvious decisions (robust to error) require no more than a single reason.
- Antifragility implies—contrary to initial instinct—that the old is superior to the new, and much more than you think.
- Building on the so-called Lindy effect: for the perishable, every additional day in its life translates into a shorter additional life expectancy. For the nonperishable, every additional day may imply a longer life expectancy.
- Follow the Lindy effect as a guide in selecting what to read.
- How we could teach children skills for the twenty-first century: make them read the classics. The future is in the past.
- We notice what varies and changes more than what plays a large role but doesn’t change.
- The problem in deciding whether a scientific result or a new “innovation” is a breakthrough, the opposite of noise, is that one needs to see all aspects of the idea—and there is always some opacity that only time can dissipate.
- If something makes no sense to you, yet has been around for a very, very long time, then, irrational or not, you can expect it to stick around much longer, and outlive those who call for its demise.
- If there is something in nature you don’t understand, odds are it makes sense in a deeper way that is beyond your understanding.
- What Mother Nature does is rigorous until proven otherwise; what humans and science do is flawed until proven otherwise.
- Theories come and go; experience stays.
- Reason backward, starting from the iatrogenics to the cure, rather than the other way around.
- Happiness is best dealt with as a negative concept; the same nonlinearity applies.
- The antifragility of a system comes from the mortality of its components.
- People fit their beliefs to actions rather than fit their actions to their beliefs.
- Data can only truly deliver via negativa–style knowledge—it can be effectively used to debunk, not confirm.
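One takeaway above lends itself to a quick numerical sketch: the scaling test for fragility. If doubling the exposure more than doubles the harm, the harm function is convex in the exposure and the position is fragile. The harm functions below are my own hypothetical examples, not from the book.

```python
def is_fragile(harm, exposure):
    """Return True if doubling the exposure more than doubles the harm."""
    return harm(2 * exposure) > 2 * harm(exposure)

# Hypothetical harm functions (illustrative only):
linear_harm = lambda x: 3.0 * x   # robust: harm scales proportionally
convex_harm = lambda x: x ** 2    # fragile: harm accelerates with exposure

print(is_fragile(linear_harm, 10.0))  # False -> robust
print(is_fragile(convex_harm, 10.0))  # True  -> fragile
```

This is the same test Taleb applies verbally: a traffic jam, for instance, gets disproportionately worse as you add cars, which marks traffic as fragile to volume.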
What I got out of it
I read Nassim Taleb’s Antifragile immediately after his Fooled by Randomness and The Black Swan, and it’s my longest book note for good reason.
He continues right where he left off: exploring asymmetries in the world.
Two things make this the one Nassim Taleb book to read:
- His writing in this book is more mellow and succinct compared to his previous work, making it more pleasant to read.
- He covers the same topics in more depth but fewer words, and provides actionable life advice – something largely missing from his previous two books.
Some concepts that changed my thinking:
- “Just worry about Black Swan exposures, and life is easy.” Focus on downside risk and staying in the game as long as possible. This reminds me of Charlie Munger and other value investors’ approach to investing: an emphasis on margin of safety. A short, great read on the topic is The Dhandho Investor.
- The above statement is, in my opinion, the most important line in the book.
- Lindy effect: for the perishable, every additional day in its life translates into a shorter additional life expectancy. For the nonperishable, every additional day may imply a longer life expectancy.
- Taleb’s notion of applying the Lindy effect to what you read was an eye-opener. My book notes show I’m quite well-read; I’ve also been moving away from recently published books in favour of pre-2010 books. Life is short, and time has proven a great judge of what to read. I hadn’t taken it as far as Taleb yet, but his philosophy makes sense. This has made me very interested in the Great Books movement, which I’ll explore in more depth over the next few weeks.
- “Theories come and go; experience stays.” This statement is in line with Taleb’s view of the Lindy effect:
- Time-tested heuristics over recent science.
- Via negativa:
- Disconfirmation is more rigorous than confirmation.
- Focus on the process of elimination.
- Data can only debunk, not confirm.
- “The antifragility of a system comes from the mortality of its components.” Another eye-opener. This made me realize that the cycle of life and death is not only natural, but also desirable.
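The Lindy effect also lends itself to a small simulation. Assuming (my assumption, not the book’s) that the lifetimes of nonperishables follow a fat-tailed Pareto distribution, the expected *remaining* life grows with observed age:

```python
import random

# Sketch (my own, not from the book): with Pareto-distributed lifetimes,
# expected remaining life grows with age -- the Lindy effect.
random.seed(0)
alpha = 3.0  # hypothetical tail exponent; any alpha > 1 shows the effect
lifetimes = [random.paretovariate(alpha) for _ in range(200_000)]

def mean_remaining_life(age):
    """Average remaining lifetime among items that survived past `age`."""
    survivors = [t for t in lifetimes if t > age]
    return sum(t - age for t in survivors) / len(survivors)

# The older the item, the longer its expected remaining life:
print(mean_remaining_life(2), mean_remaining_life(10))
```

For a perishable (say, exponentially distributed lifetimes), the same function would stay flat or shrink with age; the growth with age is specific to fat tails.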
Summary Notes
Prologue
Some things benefit from shocks; they thrive and grow when exposed to volatility, randomness, disorder, and stressors and love adventure, risk, and uncertainty. Yet, in spite of the ubiquity of the phenomenon, there is no word for the exact opposite of fragile. Let us call it antifragile.
Antifragility is beyond resilience or robustness. The resilient resists shocks and stays the same; the antifragile gets better.
Antifragility has a singular property of allowing us to deal with the unknown, to do things without understanding them—and do them well.
Anything that has more upside than downside from random events (or certain shocks) is antifragile; the reverse is fragile.
Fragility is quite measurable, risk not so at all, particularly risk associated with rare events.
Risk management as practiced is the study of an event taking place in the future, and only some economists and other lunatics can claim—against experience—to “measure” the future incidence of these rare events, with suckers listening to them—against experience and the track record of such claims. But fragility and antifragility are part of the current property of an object, a coffee table, a company, an industry, a country, a political system.
We can detect fragility, see it, even in many cases measure it, or at least measure comparative fragility with a small error while comparisons of risk have been (so far) unreliable.
A complex system, contrary to what people believe, does not require complicated systems and regulations and intricate policies. The simpler, the better.
Steve Jobs figured out that “you have to work hard to get your thinking clean to make it simple.”
Time is functionally similar to volatility: the more time, the more events, the more disorder. Consider that if you can suffer limited harm and are antifragile to small errors, time brings the kind of errors or reverse errors that end up benefiting you. This is simply what your grandmother calls experience. The fragile breaks with time.
The robust or resilient is neither harmed nor helped by volatility and disorder, while the antifragile benefits from them.
Appendix: The Triad, Or A Map Of The World And Things Along The Three Properties
The idea is to focus on fragility rather than predicting and calculating future probabilities, and that fragility and antifragility come on a spectrum of varying degrees. The task here is to build a map of exposures.
The Triad classifies items in three columns:
- Fragile
- Robust
- Antifragile
Recall that the fragile wants tranquility, the antifragile grows from disorder, and the robust doesn’t care too much.
Antifragility is desirable in general, but not always, as there are cases in which antifragility will be costly, extremely so.
Book 1 – The Antifragile: An Introduction
Between Damocles And Hydra
Hormesis, a word coined by pharmacologists, is when a small dose of a harmful substance is actually beneficial for the organism, acting as medicine. Hormesis is the norm, and its absence is what hurts us.
Depriving systems of stressors, vital stressors, is not necessarily a good thing, and can be downright harmful.
Overcompensation And Overreaction Everywhere
How do you innovate? First, try to get in serious, but not terminal, trouble.
Undercompensation from the absence of a stressor, inverse hormesis, absence of challenge, degrades the best of the best.
The Lucretius problem: the so-called worst-case event, when it happened, exceeded the worst case at the time.
I have called this mental defect, after the Latin poetic philosopher who wrote that the fool believes that the tallest mountain in the world will be equal to the tallest one he has observed.
If humans fight the last war, nature fights the next one. Your body is more imaginative about the future than you are. Consider how people train in weightlifting: the body overshoots in response to exposures and overprepares. This is how bodies get stronger.
Political movements and rebellions can be highly antifragile, and the sucker game is to try to repress them using brute force rather than manipulate them, give in, or find more astute ruses.
Information is antifragile; it feeds more on attempts to harm it than it does on efforts to promote it. For instance, many wreck their reputations merely by trying to defend them.
The Cat And The Washing Machine
Inanimate material, when subjected to stress, either undergoes material fatigue or breaks. One of the rare exceptions I’ve seen is in the report of a 2011 experiment by Brent Carey, a graduate student, in which he shows that composite material of carbon nanotubes arranged in a certain manner produces a self-strengthening response previously unseen in synthetic materials, “similar to the localized self-strengthening that occurs in biological structures.”
We can generalize our distinction beyond the biological-nonbiological. More effective is the distinction between noncomplex and complex systems.
Causal opacity: it is hard to see the arrow from cause to consequence.
Another forgotten property of stressors is in language acquisition. You pick up a language best thanks to situational difficulty, from error to error, when you need to communicate under more or less straining circumstances, particularly to express urgent needs.
If I could predict what my day would exactly look like, I would feel a little bit dead.
Another way to see it: machines are harmed by low-level stressors (material fatigue), organisms are harmed by the absence of low-level stressors (hormesis).
What Kills Me Makes Others Stronger
Some parts on the inside of a system may be required to be fragile in order to make the system antifragile as a result. Or the organism itself might be fragile, but the information encoded in the genes reproducing it will be antifragile.
Black Swan Management 101: nature (and nature-like systems) likes diversity between organisms rather than diversity within an immortal organism, unless you consider nature itself the immortal organism.
While hormesis corresponds to situations by which the individual organism benefits from direct harm to itself, evolution occurs when harm makes the individual organism perish and the benefits are transferred to others, the surviving ones, and future generations.
If every plane crash makes the next one less likely, every bank crash makes the next one more likely. We need to eliminate the second type of error—the one that produces contagion—in our construction of an ideal socioeconomic system.
Someone who has made plenty of errors—though never the same error more than once—is more reliable than someone who has never made any.
We saw that antifragility in biology works thanks to layers. This rivalry between suborganisms contributes to evolution: cells within our bodies compete; within the cells, proteins compete, all the way through. Let us translate the point into human endeavors. The economy has an equivalent layering: individuals, artisans, small firms, departments within corporations, corporations, industries, the regional economy, and, finally, on top, the general economy—one can even have thinner slicing with a larger number of layers.
For the economy to be antifragile and undergo what is called evolution, every single individual business must necessarily be fragile, exposed to breaking—evolution needs organisms (or their genes) to die when supplanted by others, in order to achieve improvement, or to avoid reproduction when they are not as fit as someone else. Accordingly, the antifragility of the higher level may require the fragility—and sacrifice—of the lower one.
By disrupting the model with bailouts, governments typically favor a certain class of firms that are large enough to require being saved in order to avoid contagion to other business. This is the opposite of healthy risk-taking; it is transferring fragility from the collective to the unfit.
Book 2 – Modernity And The Denial Of Antifragility
We hurt systems with the very best of intentions by playing conductor. We are fragilizing social and economic systems by denying them stressors and randomness.
The Souk And The Office Building
The more variability you observe in a system, the less Black Swan–prone it is.
Switzerland is the last major country that is not a nation-state, but rather a collection of small municipalities left to their own devices.
Another element of Switzerland: it is perhaps the most successful country in history, yet it has traditionally had a very low level of university education compared to the rest of the rich nations. Its system, even in banking in my day, was based on apprenticeship models, nearly vocational rather than theoretical ones. In other words, on techne (crafts and know-how), not episteme (book knowledge, know-what).
Mediocristan has a lot of variations, not a single one of which is extreme;
Extremistan has few variations, but those that take place are extreme.
In Extremistan, one is prone to be fooled by the properties of the past and get the story exactly backwards.
Our mission in life becomes simply “how not to be a turkey,” or, if possible, how to be a turkey in reverse—antifragile, that is. “Not being a turkey” starts with figuring out the difference between true and manufactured stability.
When we look at risks in Extremistan, we don’t look at evidence (evidence comes too late), we look at potential damage: never has the world been more prone to more damage; never. It is hard to explain to naive data-driven people that risk is in the future, not in the past.
Tell Them I Love (Some) Randomness
Absence of fluctuations in the market causes hidden risks to accumulate with impunity. The longer one goes without a market trauma, the worse the damage when commotion occurs.
Buridan’s Donkey: a donkey equally famished and thirsty caught at an equal distance between food and water would unavoidably die of hunger or thirst. But he can be saved thanks to a random nudge one way or the other.
When some systems are stuck in a dangerous impasse, randomness and only randomness can unlock them and set them free. You can see here that absence of randomness equals guaranteed death.
The idea of injecting random noise into a system to improve its functioning has been applied across fields. By a mechanism called stochastic resonance, adding random noise to the background makes you hear the sounds (say, music) with more accuracy. We saw earlier that the psychological effect of overcompensation helps us get signals in the midst of noise; here it is not psychological but a physical property of the system.
Consider the method of annealing in metallurgy, a technique used to make metal stronger and more homogeneous. It involves the heating and controlled cooling of a material, to increase the size of the crystals and reduce their defects.
Just as with Buridan’s donkey, the heat causes the atoms to become unstuck from their initial positions and wander randomly through states of higher energy; the cooling gives them more chances of finding new, better configurations.
Inspired by the metallurgical technique, mathematicians use a method of computer simulation called simulated annealing to bring more general optimal solutions to problems and situations, solutions that only randomness can deliver.
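As a toy sketch of the simulated annealing idea (my own illustration, not from the book): a search that sometimes accepts *worse* moves, with a gradually falling “temperature,” can escape local traps that a purely greedy, randomness-free search would stay stuck in, exactly like Buridan’s donkey saved by a random nudge.

```python
import math
import random

# Toy simulated annealing: minimize a bumpy function whose local minima
# would trap a purely greedy search.
def f(x):
    return x**2 + 10 * math.sin(3 * x)  # many local minima

random.seed(1)
x = 8.0        # start far from the good region
temp = 10.0
while temp > 1e-3:
    candidate = x + random.uniform(-1, 1)
    delta = f(candidate) - f(x)
    # Accept improvements always; accept worse moves with probability
    # exp(-delta/temp) -- the injected randomness that unsticks the search.
    if delta < 0 or random.random() < math.exp(-delta / temp):
        x = candidate
    temp *= 0.999  # controlled cooling, as in metallurgical annealing

print(x, f(x))  # far better than the starting point f(8.0)
```

Early on, the high temperature lets the search wander freely (the “heating”); as the temperature drops, it settles into a low-energy configuration (the “cooling”).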
Naive Intervention
Analyze other trade-offs: probabilistic benefits minus probabilistic costs.
The argument is not against the notion of intervention; in fact I showed above that I am equally worried about underintervention when it is truly necessary. I am just warning against naive intervention and lack of awareness and acceptance of harm done by it.
We need to avoid being blind to the natural antifragility of systems, their ability to take care of themselves, and fight our tendency to harm and fragilize them by not giving them a chance to do so.
As a rule, intervening to limit size (of companies, airports, or sources of pollution), concentration, and speed are beneficial in reducing Black Swan risks.
The true hero in the Black Swan world is someone who prevents a calamity and, naturally, because the calamity did not take place, does not get recognition—or a bonus—for it.
The Romans revered someone who, at the least, resisted and delayed intervention.
A very intelligent group of revolutionary fellows in the United Kingdom created a political movement called the Fabian Society, named after the Cunctator, based on opportunistically delaying the revolution.
Procrastination turned out to be a way to let events take their course and give the activists the chance to change their minds before committing to irreversible policies.
Few understand that procrastination is our natural defense, letting things take care of themselves and exercise their antifragility; it results from some ecological or naturalistic wisdom, and is not always bad—at an existential level, it is my body rebelling against its entrapment. Granted, in the modern world, my tax return is not going to take care of itself—but by delaying a non-vital visit to a doctor, or deferring the writing of a passage until my body tells me that I am ready for it, I may be using a very potent naturalistic filter. I write only if I feel like it and only on a subject I feel like writing about. So I use procrastination as a message from my inner self and my deep evolutionary past to resist interventionism in my writing.
Since procrastination is a message from our natural willpower via low motivation, the cure is changing the environment, or one’s profession, by selecting one in which one does not have to fight one’s impulses. Few can grasp the logical consequence that, instead, one should lead a life in which procrastination is good, as a naturalistic-risk-based form of decision making.
We overreact emotionally to noise. The best solution is to only look at very large changes in data or conditions, never at small ones.
The best way to mitigate interventionism is to ration the supply of information, as naturalistically as possible.
The illusion of local causal chains—that is, confusing catalysts for causes and assuming that one can know which catalyst will produce which effect.
Prediction As A Child Of Modernity
There are ample empirical findings to the effect that providing someone with a random numerical forecast increases his risk taking, even if the person knows the projections are random.
After the occurrence of an event, we need to switch the blame from the inability to see an event coming to the failure to understand (anti)fragility, namely, “why did we build something so fragile to these types of events?” Not seeing a tsunami or an economic event coming is excusable; building something fragile to them is not.
The more intelligent (and practical) action is to make the world greed-proof, or even hopefully make society benefit from the greed and other perceived defects of the human race.
Book 3 – A Nonpredictive View Of The World
Fat Tony And The Fragilistas
You can’t predict in general, but you can predict that those who rely on predictions are taking more risks, will have some trouble, perhaps even go bust.
Why? Someone who predicts will be fragile to prediction errors.
Fat Tony’s model is quite simple:
- He identifies fragilities
- Makes a bet on the collapse of the fragile unit
- Lectures Nero and trades insults with him about sociocultural matters
- Reacts to Nero’s jabs at New Jersey life
- Collects big after the collapse
Seneca’s Upside And Downside
Wisdom in decision making is vastly more important—not just practically, but philosophically—than knowledge.
To become a successful philosopher king, it is much better to start as a king than as a philosopher.
The key phrase reverberating in Seneca’s oeuvre is nihil perditi, “I lost nothing,” after an adverse event. Stoicism makes you desire the challenge of a calamity.
Stoics look down on luxury: about a fellow who led a lavish life, Seneca wrote: “He is in debt, whether he borrowed from another person or from fortune.”
Stoicism, seen this way, becomes pure robustness—for the attainment of a state of immunity from one’s external circumstances, good or bad, and an absence of fragility to decisions made by fate, is robustness. Random events won’t affect us either way (we are too strong to lose, and not greedy to enjoy the upside), so we stay in the middle column of the Triad.
Seneca’s version of that Stoicism is antifragility from fate. No downside from Lady Fortuna, plenty of upside.
Success brings an asymmetry: you now have a lot more to lose than to gain. You are hence fragile.
Seneca’s practical method to counter such fragility was to go through mental exercises to write off possessions, so when losses occurred he would not feel the sting—a way to wrest one’s freedom from circumstances.
For instance, Seneca often started his journeys with almost the same belongings he would have if he were shipwrecked, which included a blanket to sleep on the ground, as inns were sparse at the time.
When I was a trader, I would go through the mental exercise of assuming every morning that the worst possible thing had actually happened—the rest of the day would be a bonus. Actually the method of mentally adjusting “to the worst” had advantages way beyond the therapeutic, as it made me take a certain class of risks for which the worst case is clear and unambiguous, with limited and known downside.
An intelligent life is all about such emotional positioning to eliminate the sting of harm, which as we saw is done by mentally writing off belongings so one does not feel any pain from losses. The volatility of the world no longer affects you negatively.
Stoicism is about the domestication, not necessarily the elimination, of emotions.
The modern Stoic sage is someone who transforms fear into prudence, pain into information, mistakes into initiation, and desire into undertaking.
“The bookkeeping of benefits is simple: it is all expenditure; if any one returns it, that is clear gain; if he does not return it, it is not lost, I gave it for the sake of giving.”
Fragility implies more to lose than to gain, equals more downside than upside, equals (unfavorable) asymmetry.
vs
Antifragility implies more to gain than to lose, equals more upside than downside, equals (favorable) asymmetry.
Never Marry The Rock Star
The first step toward antifragility consists in first decreasing downside, rather than increasing upside; that is, by lowering exposure to negative Black Swans and letting natural antifragility work by itself.
Mitigating fragility is not an option but a requirement.
To make profits and buy a BMW, it would be a good idea to, first, survive.
Notions such as speed and growth—anything related to movement—are empty and meaningless when presented without accounting for fragility.
The barbell is meant to illustrate the idea of a combination of extremes kept separate, with avoidance of the middle. In our context it is not necessarily symmetric: it is just composed of two extremes, with nothing in the center.
Antifragility is the combination aggressiveness plus paranoia—clip your downside, protect yourself from extreme harm, and let the upside, the positive Black Swans, take care of itself.
The legendary investor Ray Dalio has a rule for someone making speculative bets: “Make sure that the probability of the unacceptable (i.e., the risk of ruin) is nil.”
The barbell is simply an idea of insurance of survival; it is a necessity, not an option.
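A minimal numerical sketch of a financial barbell, with hypothetical numbers of my own (the book describes the idea, not this allocation): put, say, 90% in a near-riskless asset and 10% in highly speculative bets. The worst case is capped; the upside is open-ended.

```python
# Hypothetical barbell: 90% near-riskless, 10% highly speculative.
safe, speculative = 0.90, 0.10

def barbell_return(speculative_outcome, safe_rate=0.01):
    """Portfolio return; speculative_outcome >= -1.0 (total loss)."""
    return safe * safe_rate + speculative * speculative_outcome

worst = barbell_return(-1.0)   # speculative leg wiped out entirely
boom = barbell_return(20.0)    # a positive Black Swan (a 20x payoff)

print(f"worst case: {worst:+.1%}")  # about -9.1%: bounded downside
print(f"boom case:  {boom:+.1%}")   # about +200.9%: open upside
```

The point is the shape of the exposure, not the specific numbers: no single event can ruin you, while rare favorable events pay off disproportionately. This is the “aggressiveness plus paranoia” combination in one position.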
Book 4 – Optionality, Technology, And The Intelligence Of Antifragility
The teleological fallacy: the illusion that you know exactly where you are going, and that you knew exactly where you were going in the past, and that others have succeeded in the past by knowing where they were going.
Never ask people what they want, or where they want to go, or where they think they should go, or, worse, what they think they will desire tomorrow. The strength of the computer entrepreneur Steve Jobs was precisely in distrusting market research and focus groups and following his own imagination. His modus was that people don’t know what they want until you provide them with it.
An option is what makes you antifragile and allows you to benefit from the positive side of uncertainty, without a corresponding serious harm from the negative side.
Thales’ Sweet Grapes
Thales put himself in a position to take advantage of his lack of knowledge—and the secret property of the asymmetry. The key to our message about this upside-downside asymmetry is that he did not need to understand too much the messages from the stars.
Simply, he had a contract that is the archetype of what an asymmetry is, perhaps the only explicit asymmetry you can find in its purest form. It is an option, “the right but not the obligation” for the buyer and, of course, “the obligation but not the right” for the other party, called the seller.
The option is an agent of antifragility.
We just don’t need to know what’s going on when we buy cheaply—when we have the asymmetry working for us. But this property goes beyond buying cheaply: we do not need to understand things when we have some edge.
And the edge from optionality is in the larger payoff when you are right, which makes it unnecessary to be right too often.
If you “have optionality,” you don’t have much need for what is commonly called intelligence, knowledge, insight, skills, and these complicated things that take place in our brain cells. For you don’t have to be right that often. All you need is the wisdom to not do unintelligent things to hurt yourself (some acts of omission) and recognize favorable outcomes when they occur. (The key is that your assessment doesn’t need to be made beforehand, only after the outcome.) This property allowing us to be stupid, or, alternatively, allowing us to get more results than the knowledge may warrant, I will call the “philosopher’s stone” for now, or “convexity bias,” the result of a mathematical property called Jensen’s inequality.
Options benefit from variability, but also from situations in which errors carry small costs. So these errors are like options—in the long run, happy errors bring gains, unhappy errors bring losses.
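The “convexity bias” from Jensen’s inequality can be shown in a few lines. With a convex (option-like) payoff, the average outcome under variability beats the outcome at the average, so variability itself adds value. This is my own toy example, not one from the book.

```python
import random
import statistics

# Jensen's inequality sketch: for a convex payoff f,
# E[f(X)] > f(E[X]) whenever X actually varies.
random.seed(42)

def payoff(x):
    return max(x - 100.0, 0.0)  # option-like convex payoff, "strike" at 100

# X varies around a mean of 100:
samples = [random.gauss(100.0, 20.0) for _ in range(100_000)]

mean_of_payoffs = statistics.fmean(payoff(x) for x in samples)  # E[f(X)]
payoff_of_mean = payoff(statistics.fmean(samples))              # f(E[X])

print(mean_of_payoffs > payoff_of_mean)  # -> True: the convexity bias
```

At the average outcome (x near 100) the option pays roughly nothing, yet the average *of* the payoffs is clearly positive, because the downside is clipped at zero while the upside is unbounded. That gap is the philosopher’s stone.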
Lecturing Birds On How To Fly
Randomness plays a role at two levels: the invention and the implementation. The first point is not overly surprising, though we play down the role of chance, especially when it comes to our own discoveries. But it took me a lifetime to figure out the second point: implementation does not necessarily proceed from invention. It, too, requires luck and circumstances.
Creative destruction: some things need to break for the system to improve.
Cherry-picking has optionality: the one telling the story (and publishing it) has the advantage of being able to show the confirmatory examples and completely ignore the rest—and the more volatility and dispersion, the rosier the best story will be (and the darker the worst story). Someone with optionality—the right to pick and choose his story—is only reporting on what suits his purpose. You take the upside of your story and hide the downside, so only the sensational seems to count.
When Two Things Are Not The “Same Thing”
Serious empirical investigation shows no evidence that raising the general level of education raises income at the level of a country. But we know the opposite is true, that wealth leads to the rise of education.
The fooled by randomness effect: mistaking the merely associative for the causal. If rich countries are educated, immediately inferring that education makes a country rich, without even checking.
I am not saying that for an individual, education is useless: it builds helpful credentials for one’s own career—but such effect washes out at the country level.
Education has benefits aside from stabilizing family incomes. Education makes individuals more polished dinner partners, for instance, something non-negligible.
In ancient times, learning was for learning’s sake, to make someone a good person, worth talking to, not to increase the stock of gold in the city’s heavily guarded coffers.
The green lumber fallacy: the situation in which one mistakes a source of necessary knowledge—the greenness of lumber—for another, less visible from the outside, less tractable, less narratable.
When you are fragile you need to know a lot more than when you are antifragile. Conversely, when you think you know more than you do, you are fragile (to error).
History Written By The Losers
We don’t put theories into practice. We create theories out of practice.
In Extremistan, it is more important to be in something in a small amount than to miss it.
A management scholar, William Starbuck, has published a few papers debunking the effectiveness of planning—it makes the corporation option-blind, as it gets locked into a non-opportunistic course of action.
To repeat, absence of evidence is not evidence of absence, a simple point that has the following implications: for the antifragile, good news tends to be absent from past data, and for the fragile it is the bad news that doesn’t show easily.
When engaging in tinkering, you incur a lot of small losses, then once in a while you find something rather significant.
We will return to these two distinct payoffs, with “bounded left” (limited losses, like Thales’ bet) and “bounded right” (limited gains, like insurance or banking). The distinction is crucial, as most payoffs in life fall in either one or the other category.
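The bounded-left versus bounded-right distinction can be sketched in a few lines of code. This is a toy model with made-up numbers (the premium, strike, and shock values are all illustrative, not from the book): the same rare extreme event that rewards the option holder ruins the insurance-like seller.

```python
# Illustrative sketch: two payoff profiles facing the same shocks.
# "Bounded left" (Thales-like option): losses capped at the premium, gains open-ended.
# "Bounded right" (insurance-like): gains capped at the premium, losses open-ended.
# All numbers are made up for illustration.

def option_payoff(shock, premium=1.0):
    """Pay a fixed premium; collect the upside only when the shock is large."""
    return max(shock - 10.0, 0.0) - premium   # loss bounded below at -premium

def insurance_payoff(shock, premium=1.0):
    """Collect a fixed premium; pay out when the shock is large."""
    return premium - max(shock - 10.0, 0.0)   # gain bounded above at +premium

shocks = [0, 1, 2, 0, 1, 50, 0, 2, 1, 0]      # mostly quiet, one extreme event

option_total = sum(option_payoff(s) for s in shocks)
insurance_total = sum(insurance_payoff(s) for s in shocks)

print(option_total)      # many small premium losses, one large gain: positive overall
print(insurance_total)   # many small premium gains erased by the one extreme: negative
```

The point of the sketch is only the shape of the two payoffs: the option holder bleeds small, known amounts and captures the rare extreme; the seller does the reverse.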
Let me stop to issue some rules:
- Look for optionality; in fact, rank things according to optionality,
- Preferably with open-ended, not closed-ended, payoffs;
- Do not invest in business plans but in people, so look for someone capable of changing six or seven times over his career, or more; one gets immunity from the backfit narratives of the business plan by investing in people. It is simply more robust to do so;
- Make sure you are barbelled, whatever that means in your business.
A Lesson In Disorder
There are two domains, the ludic, which is set up like a game, with its rules supplied in advance in an explicit way, and the ecological, where we don’t know the rules and cannot isolate variables, as in real life. Seeing the nontransferability of skills from one domain to the other led me to skepticism in general about whatever skills are acquired in a classroom, anything in a non-ecological way, as compared to street fights and real-life situations.
But I read voraciously, wholesale, initially in the humanities, later in mathematics and science, and now in history.
The trick is to be bored with a specific book, rather than with the act of reading.
Trial and error is freedom.
Avoidance of boredom is the only worthy mode of action. Life otherwise is not worth living.
I started, around the age of thirteen, to keep a log of my reading hours, shooting for between thirty and sixty a week, a practice I’ve kept up for a long time. I read the likes of Dostoyevsky, Turgenev, Chekhov, Bishop Bossuet, Stendhal, Dante, Proust, Borges, Calvino, Céline, Schultz, Zweig (didn’t like), Henry Miller, Max Brod, Kafka, Ionesco, the surrealists, Faulkner, Malraux (along with other wild adventurers such as Conrad and Melville; the first book I read in English was Moby-Dick) and similar authors in literature, many of them obscure, and Hegel, Schopenhauer, Nietzsche, Marx, Jaspers, Husserl, Lévi-Strauss, Levinas, Scholem, Benjamin, and similar ones in philosophy because they had the golden status of not being on the school program.
When I decided to come to the United States, I repeated, around the age of eighteen, the marathon exercise by buying a few hundred books in English (by authors ranging from Trollope to Burke, Macaulay, and Gibbon, with Anaïs Nin and other then fashionable authors de scandale), not showing up to class, and keeping the thirty- to sixty-hour discipline.
There is such a thing as nonnerdy applied mathematics: find a problem first, and figure out the math that works for it (just as one acquires language), rather than study in a vacuum through theorems and artificial examples, then change reality to make it look like these examples.
There is something central in following one’s own direction in the selection of readings: what I was given to study in school I have forgotten; what I decided to read on my own, I still remember.
Fat Tony Debates Socrates
Socrates’ technique was to make his interlocutor, who started with a thesis, agree to a series of statements, then proceed to show him how the statements he agreed to are inconsistent with the original thesis, thus establishing that he has no clue as to what he was talking about. Socrates used it mostly to show people how lacking in clarity they were in their thoughts, how little they knew about the concepts they used routinely—and the need for philosophy to elucidate these concepts.
Fat Tony’s power in life is that he never lets the other person frame the question. He taught Nero that an answer is planted in every question; never respond with a straight answer to a question that makes no sense to you.
In defense of Socrates, his questions lead to a major result: if they could not allow him to define what something was, at least they allowed him to be certain about what a thing was not.
The sucker-rationalistic fallacy: “What is not intelligible to me is not necessarily unintelligent” is perhaps the most potent sentence in all of Nietzsche’s century—and we used a version of it in the very definition of the fragilista who mistakes what he does not understand for nonsense.
For Tony, the distinction in life isn’t True or False, but rather sucker or nonsucker. Things are always simpler with him. In real life, as we saw with the ideas of Seneca and the bets of Thales, exposure is more important than knowledge; decision effects supersede logic. Textbook “knowledge” misses a dimension, the hidden asymmetry of benefits—just like the notion of average.
The payoff, what happens to you (the benefits or harm from it), is always the most important thing, not the event itself.
You decide principally based on fragility, not so much on True/False.
Book 5 – The Nonlinear And The Nonlinear
On The Difference Between A Large Stone And A Thousand Pebbles
Fragility was simply vulnerability to the volatility of the things that affect it. For the fragile:
- Shocks bring higher harm as their intensity increases (up to a certain level).
- The cumulative effect of small shocks is smaller than the single effect of an equivalent single large shock.
For the antifragile, shocks bring more benefits (equivalently, less harm) as their intensity increases (up to a point).
Inject redundancy into people’s lives.
Another intuitive way to look at convexity effects: consider the scaling property.
If you double the exposure to something, do you more than double the harm it will cause? If so, then this is a situation of fragility. Otherwise, you are robust.
Squeezes are exacerbated by size. When one is large, one becomes vulnerable to some errors, particularly horrendous squeezes. The squeezes become nonlinearly costlier as size increases.
In spite of what is studied in business schools concerning “economies of scale,” size hurts you at times of stress; it is not a good idea to be large during difficult times.
Always keep in mind the difference between a stone and its weight in pebbles.
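The stone-versus-pebbles point follows directly from convexity of harm. A minimal sketch, assuming harm grows as the square of the shock (any convex harm function gives the same ordering):

```python
# Sketch of the stone-versus-pebbles point, with harm modeled as the
# square of the shock. The choice of the square is an assumption for
# illustration; any convex function preserves the conclusion.

def harm(shock):
    return shock ** 2

one_stone = harm(1000)                                 # one large shock
thousand_pebbles = sum(harm(1) for _ in range(1000))   # same total weight, spread out

print(one_stone, thousand_pebbles)  # the single large shock is vastly more harmful
```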
It is completely wrong to use the calculus of benefits without including the probability of failure.
Bottlenecks are the mothers of all squeezes.
No psychologist who has discussed the “planning fallacy” has realized that, at the core, it is not essentially a psychological problem, not an issue with human errors; it is inherent to the nonlinear structure of the projects. Just as time cannot be negative, a three-month project cannot be completed in zero or negative time.
So, on a timeline going left to right, errors add to the right end, not the left end of it.
Governments do not need wars at all to get us in trouble with deficits: the underestimation of the costs of their projects is chronic for the very same reason 98 percent of contemporary projects have overruns. They just end up spending more than they tell us. This has led me to install a governmental golden rule: no borrowing allowed, forced fiscal balance.
A simple ecological policy and risk management rule for pollution: simply, just as with size, split your sources of pollution among many natural sources. The harm from polluting with ten different sources is smaller than the equivalent pollution from a single source.
Consider the Aleuts, a North American native tribe for which we have ample data covering five millennia. They exhibit a remarkable lack of concentration in their predatorial behavior, with a strategy of prey switching. They were not as sticky and rigid as we are in our habits. Whenever they got low on a resource, they switched to another one, as if to preserve the ecosystem.
The Philosopher’s Stone And Its Inverse
A measure for fragility, hence antifragility: figuring out if our miscalculations or misforecasts are on balance more harmful than they are beneficial, and how accelerating the damage is.
A simple heuristic called the fragility (and antifragility) detection heuristic works as follows. Let’s say you want to check whether a town is overoptimized. Say you measure that when traffic increases by ten thousand cars, travel time grows by ten minutes. But if traffic increases by ten thousand more cars, travel time now extends by an extra thirty minutes. Such acceleration of traffic time shows that traffic is fragile and you have too many cars and need to reduce traffic until the acceleration becomes mild (acceleration, I repeat, is acute concavity, or negative convexity effect).
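The traffic example reduces to a one-line check: do successive, equal-sized shocks cause accelerating harm? A minimal sketch of that check (the function name and the threshold-free formulation are mine, not Taleb's):

```python
# Sketch of the fragility-detection heuristic from the traffic example:
# compare the harm caused by successive equal-sized shocks; strictly
# accelerating harm signals concavity, i.e., fragility.

def is_fragile(marginal_harms):
    """marginal_harms: extra harm caused by each successive equal increment."""
    pairs = zip(marginal_harms, marginal_harms[1:])
    return all(later > earlier for earlier, later in pairs)

# From the text: the first +10,000 cars add 10 minutes of travel time,
# the next +10,000 cars add 30 minutes.
print(is_fragile([10, 30]))        # accelerating harm: flagged as fragile
print(is_fragile([10, 10, 10]))    # linear harm: not flagged
```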
Likewise, government deficits are particularly concave to changes in economic conditions. Every additional deviation in, say, the unemployment rate—particularly when the government has debt—makes deficits incrementally worse. And financial leverage for a company has the same effect: you need to borrow more and more to get the same effect. Just as in a Ponzi scheme.
The same with operational leverage on the part of a fragile company. Should sales increase 10 percent, then profits would increase less than they would decrease should sales drop 10 percent.
We can see here that the function of something becomes different from the something under nonlinearities:
- The more nonlinear, the more the function of something divorces itself from the something
- The more volatile the something—the more uncertainty—the more the function divorces itself from the something
- If the function is convex (antifragile), then the average of the function of something is going to be higher than the function of the average of something. And the reverse when the function is concave (fragile).
Take a conventional die (six sides) and consider a payoff equal to the number it lands on, that is, you get paid a number equivalent to what the die shows—1 if it lands on 1, 2 if it lands on 2, up to 6 if it lands on 6. The square of the expected (average) payoff is then ((1+2+3+4+5+6)/6)² = 3.5² = 12.25. So the function of the average equals 12.25.
But the average of the function is as follows. Take the square of every payoff, (1²+2²+3²+4²+5²+6²)/6, that is, the average square payoff, and you can see that the average of the function equals 15.17.
So, since squaring is a convex function, the average of the square payoff is higher than the square of the average payoff. The difference here between 15.17 and 12.25 is what I call the hidden benefit of antifragility—here, a 24 percent “edge.”
There are two biases: one elementary convexity effect, leading to mistaking the properties of the average of something (here 3.5) and those of a (convex) function of something (here 15.17), and the second, more involved, in mistaking an average of a function for the function of an average, here 15.17 for 12.25. The latter represents optionality.
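The die arithmetic above can be checked in a few lines. This reproduces the numbers from the text, with squaring as the convex function:

```python
# The die example: with a convex function (squaring), the average of the
# function exceeds the function of the average (Jensen's inequality).

faces = [1, 2, 3, 4, 5, 6]

function_of_average = (sum(faces) / len(faces)) ** 2           # 3.5² = 12.25
average_of_function = sum(x ** 2 for x in faces) / len(faces)  # 91/6 ≈ 15.17

edge = average_of_function / function_of_average - 1           # the "convexity bias"
print(function_of_average, average_of_function, round(edge, 2))
```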
Someone with a linear payoff needs to be right more than 50 percent of the time. Someone with a convex payoff, much less. The hidden benefit of antifragility is that you can guess worse than random and still end up outperforming. Here lies the power of optionality—your function of something is very convex, so you can be wrong and still do fine—the more uncertainty, the better.
This explains my statement that you can be dumb and antifragile and still do very well.
This hidden “convexity bias” comes from a mathematical property called Jensen’s inequality.
The hidden harm of fragility is that you need to be much, much better than random in your prediction and knowing where you are going, just to offset the negative effect.
If you have favorable asymmetries, or positive convexity, options being a special case, then in the long run you will do reasonably well, outperforming the average in the presence of uncertainty. The more uncertainty, the more role for optionality to kick in, and the more you will outperform. This property is very central to life.
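The claim that a convex payoff lets you guess worse than random and still come out ahead can be made concrete with an expected-value comparison. The numbers below (40 percent hit rate, a 10-to-1 upside) are my illustrative assumptions, not figures from the book:

```python
# Illustrative numbers only: expected value per bet of a linear payoff
# versus a convex (option-like) payoff when you are right just 40% of
# the time, i.e., worse than a coin flip.

p_right = 0.4

# Linear payoff: win 1 when right, lose 1 when wrong.
linear_ev = p_right * 1 + (1 - p_right) * (-1)

# Convex payoff: win 10 when right, lose only the stake of 1 when wrong.
convex_ev = p_right * 10 + (1 - p_right) * (-1)

print(linear_ev)   # negative: the linear bettor needs >50% accuracy
print(convex_ev)   # positive despite sub-random accuracy
```

The asymmetry of the payoff, not the accuracy of the guess, carries the result.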
Book 6 – Via Negativa
If we cannot express what something is exactly, we can say something about what it is not—the indirect rather than the direct expression.
The method began as an avoidance of direct description, leading to a focus on negative description, what is called in Latin via negativa, the negative way. It proceeds by the process of elimination.
Remember from the logic of the barbell that it is necessary to first remove fragilities.
I have used all my life a wonderfully simple heuristic: charlatans are recognizable in that they will give you positive advice, and only positive advice, exploiting our gullibility and sucker-proneness for recipes that hit you in a flash as just obvious, then evaporate later as you forget them. Just look at the “how to” books with, in their title, “Ten Steps for—” (fill in: enrichment, weight loss, making friends, innovation, getting elected, building muscles, finding a husband, running an orphanage, etc.). Yet in practice it is the negative that’s used by the pros, those selected by evolution: chess grandmasters usually win by not losing;
people become rich by not going bust (particularly when others do); religions are mostly about interdicts; the learning of life is about what to avoid. You reduce most of your personal risks of accident thanks to a small number of measures.
I advocate as follows: we know a lot more what is wrong than what is right, or, phrased according to the fragile/robust classification, negative knowledge (what is wrong, what does not work) is more robust to error than positive knowledge (what is right, what works). So knowledge grows by subtraction much more than by addition—given that what we know today might turn out to be wrong but what we know to be wrong cannot turn out to be right, at least not easily. If I spot a black swan, I can be quite certain that the statement “all swans are white” is wrong. But even if I have never seen a black swan, I can never hold such a statement to be true. Rephrasing it again: since one small observation can disprove a statement, while millions can hardly confirm it, disconfirmation is more rigorous than confirmation.
Negative knowledge is more robust. But it is not perfect. It is not clear-cut: it is impossible to figure out whether an experiment failed to produce the intended results—hence “falsifying” the theory—because of the failure of the tools, because of bad luck, or because of fraud by the scientist.
“People think focus means saying yes to the thing you’ve got to focus on. But that’s not what it means at all. It means saying no to the hundred other good ideas that there are. You have to pick carefully. I’m actually as proud of the things we haven’t done as the things I have done. Innovation is saying no to 1,000 things.” – Steve Jobs
Subtractive knowledge is a form of barbell. Critically, it is convex. What is wrong is quite robust, what you don’t know is fragile and speculative, but you do not take it seriously so you make sure it does not harm you in case it turns out to be false.
The pair of Goldstein and Gigerenzer coined the notion of “fast and frugal” heuristics that make good decisions despite limited time, knowledge, and computing power.
I realized that the less-is-more heuristic fell squarely into my work in two places.
- Extreme effects: there are domains in which the rare event (I repeat, good or bad) plays a disproportionate share and we tend to be blind to it, so focusing on the exploitation of such a rare event, or protection against it, changes a lot, a lot of the risky exposure. Just worry about Black Swan exposures, and life is easy.
Less is more has proved to be shockingly easy to find and apply—and “robust” to mistakes and change of minds. There may not be an easily identifiable cause for a large share of the problems, but often there is an easy solution (not to all problems, but good enough; I mean really good enough), and such a solution is immediately identifiable, sometimes with the naked eye rather than the use of complicated analyses and highly fragile, error-prone, cause-ferreting nerdiness.
If you have more than one reason to do something, just don’t do it. It does not mean that one reason is better than two, just that by invoking more than one reason you are trying to convince yourself to do something. Obvious decisions (robust to error) require no more than a single reason.
I have often followed what I call Bergson’s razor: “A philosopher should be known for one single idea, not more” (I can’t source it to Bergson, but the rule is good enough).
Time And Fragility
Antifragility implies—contrary to initial instinct—that the old is superior to the new, and much more than you think. No matter how something looks to your intellectual machinery, or how well or poorly it narrates, time will know more about its fragilities and break it when necessary.
What survives must be good at serving some (mostly hidden) purpose that time can see but our eyes and logical faculties can’t capture.
The way to do it rigorously, according to the notions of fragility and antifragility, is to take away from the future, reduce from it, simply, things that do not belong to the coming times. Via negativa. What is fragile will eventually break; and, luckily, we can easily tell what is fragile. Positive Black Swans are more unpredictable than negative ones.
The problem is that almost everything that was imagined never took place. The prime error is as follows. When asked to imagine the future, we have the tendency to take the present as a baseline, then produce a speculative destiny by adding new technologies and products to it and what sort of makes sense, given an interpolation of past developments. We also represent society according to our utopia of the moment, largely driven by our wishes—except for a few people called doomsayers, the future will be largely inhabited by our desires. So we will tend to over-technologize it and underestimate the might of the equivalent of these small wheels on suitcases that will be staring at us for the next millennia.
Technology can cancel the effect of bad technologies, by self-subtraction.
Let us separate the perishable (humans, single items) from the nonperishable, the potentially perennial. The nonperishable is anything that does not have an organic unavoidable expiration date. The perishable is typically an object, the nonperishable has an informational nature to it.
When you see a young and an old human, you can be confident that the younger will survive the elder. With something nonperishable, say a technology, that is not the case. We have two possibilities: either both are expected to have the same additional life expectancy (the case in which the probability distribution is called exponential), or the old is expected to have a longer expectancy than the young, in proportion to their relative age.
Building on the so-called Lindy effect: for the perishable, every additional day in its life translates into a shorter additional life expectancy. For the nonperishable, every additional day may imply a longer life expectancy.
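The contrast between the two regimes can be sketched with two toy life-expectancy functions. These are my simplified models for illustration (a hard expiration date for the perishable; expected remaining life proportional to age, with proportionality constant 1, for the Lindy-type nonperishable), not formulas from the book:

```python
# Toy models of the Lindy contrast: a perishable item loses expected
# remaining life as it ages; a Lindy-type nonperishable gains expected
# remaining life in proportion to its age. Both models are illustrative
# assumptions, not Taleb's.

def perishable_remaining(age, max_life=100):
    """E.g. an organism with an unavoidable expiration date."""
    return max(max_life - age, 0)

def lindy_remaining(age):
    """Expected additional life proportional to age (constant of 1 here)."""
    return age

print(perishable_remaining(10), perishable_remaining(80))  # shrinks with age
print(lindy_remaining(10), lindy_remaining(2000))          # grows with age
```

Under the second model, a book in print for two millennia is expected to outlast one in print for ten years, which is exactly the reading heuristic described below.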
Remember the following principle: I am not saying that all technologies do not age, only that those technologies that were prone to aging are already dead.
The second mistake is to believe that one would be acting “young” by adopting a “young” technology, revealing both a logical error and mental bias.
This is a mistake similar to believing that one would turn into a cow by eating cow meat. It is actually a worse fallacy than the inference from eating: a technology, being informational rather than physical, does not age organically, like humans, at least not necessarily so.
This idea of “young” and “old” attached to certain crowd behavior is even more dangerous.
Much progress comes from the young because of their relative freedom from the system and courage to take action that older people lose as they become trapped in life. But it is precisely the young who propose ideas that are fragile, not because they are young, but because most unseasoned ideas are fragile. And, of course, someone who sells “futuristic” ideas will not make a lot of money selling the value of the past! New technology is easier to hype up.
How we could teach children skills for the twenty-first century: make them read the classics. The future is in the past.
Fooled by randomness effect: information has a nasty property – it hides failures.
Another mental bias causing the overhyping of technology comes from the fact that we notice change, not statics.
If you announce to someone “you lost $10,000,” he will be much more upset than if you tell him “your portfolio value, which was $785,000, is now $775,000.” Our brains have a predilection for shortcuts, and the variation is easier to notice (and store) than the entire record. It requires less memory storage. This psychological heuristic, the error of variation in place of total, is quite pervasive, even with matters that are visual.
We notice what varies and changes more than what plays a large role but doesn’t change. We rely more on water than on cell phones but because water does not change and cell phones do, we are prone to thinking that cell phones play a larger role than they do. Second, because the new generations are more aggressive with technology, we notice that they try more things, but we ignore that these implementations don’t usually stick. Most “innovations” are failures, just as most books are flops, which should not discourage anyone from trying.
These impulses to buy new things that will eventually lose their novelty, particularly when compared to newer things, are called treadmill effects. But it looks as though we don’t incur the same treadmilling techno-dissatisfaction with classical art, older furniture—whatever we do not put in the category of the technological.
I follow the Lindy effect as a guide in selecting what to read: books that have been around for ten years will be around for ten more; books that have been around for two millennia should be around for quite a bit of time, and so forth.
Academic work, because of its attention-seeking orientation, can be easily subjected to Lindy effects: think of the hundreds of thousands of papers that are just noise, in spite of how hyped they were at the time of publication.
The problem in deciding whether a scientific result or a new “innovation” is a breakthrough, that is, the opposite of noise, is that one needs to see all aspects of the idea—and there is always some opacity that time, and only time, can dissipate.
Time can act as a cleanser of noise by confining to its dustbins all these overhyped works.
A rule on what to read: “As little as feasible from the last twenty years, except history books that are not about the last fifty years.”
What is fragile? The large, optimized, overreliant on technology, overreliant on the so-called scientific method instead of age-tested heuristics.
If something makes no sense to you but has been around for a very, very long time, then, irrational or not, you can expect it to stick around much longer and to outlive those who call for its demise.
Medicine, Convexity, And Opacity
The first principle of iatrogenics is as follows: we do not need evidence of harm to claim that a drug or an unnatural via positiva procedure is dangerous. Recall my comment earlier with the turkey problem that harm is in the future, not in the narrowly defined past. In other words, empiricism is not naive empiricism.
Second principle of iatrogenics: it is not linear. We should not take risks with near-healthy people; but we should take a lot, a lot more risks with those deemed in danger.
The iatrogenics is in the patient, not in the treatment. If the patient is close to death, all speculative treatments should be encouraged—no holds barred. Conversely, if the patient is near healthy, then Mother Nature should be the doctor.
The philosopher’s stone explained that the volatility of an exposure can matter more than its average—the difference is the “convexity bias.” If you are antifragile (i.e., convex) to a given substance, then you are better off having it randomly distributed, rather than provided steadily.
Every time you take an antibiotic, you help, to some degree, the mutation of germs into antibiotic-resistant strains. Add to that the toying with your immune system. You transfer the antifragility from your body to the germ.
The solution, of course, is to do it only when the benefits are large. Hygiene, or excessive hygiene, has the same effect, particularly when people clean their hands with chemicals after every social exposure.
The ingestion of food combined with one’s activity brings about hormonal cascades, causing cravings (hence consumption of other foods) or changes in the way your body burns the energy, whether it needs to conserve fat and burn muscle, or vice versa. Complex systems have feedback loops, so what you “burn” depends on what you consume, and how you consume it.
If there is something in nature you don’t understand, odds are it makes sense in a deeper way that is beyond your understanding. So there is a logic to natural things that is much superior to our own. Just as there is a dichotomy in law: innocent until proven guilty as opposed to guilty until proven innocent, let me express my rule as follows: what Mother Nature does is rigorous until proven otherwise; what humans and science do is flawed until proven otherwise.
Everything nonstable or breakable has had ample chance to break over time. Further, the interactions between components of Mother Nature had to modulate in such a way as to keep the overall system alive. What emerges over millions of years is a wonderful combination of solidity, antifragility, and local fragility, sacrifices in one area made in order for nature to function better. We sacrifice ourselves in favor of our genes, trading our fragility for their survival. We age, but they stay young and get fitter and fitter outside us. Things break on a small scale all the time, in order to avoid large-scale generalized catastrophes.
We are built to be dupes for theories. But theories come and go; experience stays. Explanations change all the time, and have changed all the time in history (because of causal opacity, the invisibility of causes) with people involved in the incremental development of ideas thinking they always had a definitive theory; experience remains constant.
I just want to understand as little as possible to be able to look at regularities of experience.
So the modus operandi in every venture is to remain as robust as possible to changes in theories (let me repeat that my deference to Mother Nature is entirely statistical and risk-management-based, i.e., again, grounded in the notion of fragility).
To Live Long, But Not Too Long
Reason backward, starting from the iatrogenics to the cure, rather than the other way around. Whenever possible, replace the doctor with human antifragility. But otherwise don’t be shy with aggressive treatments.
Another application of via negativa: spend less, live longer is a subtractive strategy. We saw that iatrogenics comes from the intervention bias, via positiva, the propensity to want to do something, causing all the problems we’ve discussed. But let’s do some via negativa here: removing things can be quite a potent (and, empirically, a more rigorous) action.
Why? Subtraction of a substance not seasoned by our evolutionary history reduces the possibility of Black Swans while leaving one open to improvements.
Should the improvements occur, we can be pretty comfortable that they are as free of unseen side effects as one can get.
Ennius wrote, “The good is mostly in the absence of bad”
Happiness is best dealt with as a negative concept; the same nonlinearity applies.
The “pursuit of happiness” is not equivalent to the “avoidance of unhappiness.” Each of us certainly knows not only what makes us unhappy (for instance, copy editors, commuting, bad odors, pain, the sight of a certain magazine in a waiting room, etc.), but what to do about it.
The regimen of the Salerno School of Medicine: joyful mood, rest, and scant nourishment.
From a scientific perspective, it seems that the only way we may manage to extend people’s lives is through caloric restriction—which seems to cure many ailments in humans and extend lives in laboratory animals. But, such restriction does not need to be permanent—just an occasional (but painful) fast might do.
It has been shown that many people benefit from the removal of products that did not exist in their ancestral habitat.
A considerable jump in my personal health has been achieved by removing offensive irritants: the morning newspapers, the boss, the daily commute, air-conditioning (though not heating), television, emails from documentary filmmakers, economic forecasts, news about the stock market, gym “strength training” machines, and many more.
Being poorer might not be completely devoid of benefits if one does it right.
This practical consequence of Jensen’s inequality: irregularity has its benefits in some areas; regularity has its detriments. Where Jensen’s inequality applies, irregularity might be medicine.
Perhaps what we mostly need to remove is a few meals at random, or at least avoid steadiness in food consumption. The error of missing nonlinearities is found in two places, in the mixture and in the frequency of food intake.
Take the following principle derived from the random structure of the environment: when we are herbivores, we eat steadily; but when we are predators we eat more randomly. Hence our proteins need to be consumed randomly for statistical reasons.
Deprivation is a stressor—and we know what stressors do when allowed adequate recovery.
I wonder how people can accept that the stressors of exercise are good for you, but do not transfer to the point that food deprivation can have the same effect.
We get sharper and fitter in response to the stress of the constraint.
Biological mechanisms are activated by food deprivation.
Giving people food before they expend energy would certainly confuse their signaling process. And we have ample evidence that intermittently (and only intermittently) depriving organisms of food engenders beneficial effects on many functions.
The antifragility of a system comes from the mortality of its components.
The principal disease of abundance can be seen in habituation and jadedness (what biologists currently call dulling of receptors); Seneca: “To a sick person, honey tastes better.”
Book 7 – The Ethics Of Fragility And Antifragility
Skin In The Game: Antifragility And Optionality At The Expense Of Others
The robustness—even antifragility —of society depends on them; if we are here today, it is because someone, at some stage, took some risks for us. But courage and heroism do not mean blind risk taking—it is not necessarily recklessness.
Megalopsychon (a term from Aristotle's ethics) is a sense of grandeur that was superseded by the Christian value of "humility."
If you take risks and face your fate with dignity, there is nothing you can do that makes you small; if you don’t take risks, there is nothing you can do that makes you grand, nothing.
And when you take risks, insults by half-men (small men, those who don’t risk anything) are similar to barks by nonhuman animals: you can’t feel insulted by a dog.
Hammurabi’s code—now about 3,800 years old—identifies the need to reestablish a symmetry of fragility, spelled out as follows:
If a builder builds a house and the house collapses and causes the death of the owner of the house—the builder shall be put to death. If it causes the death of the son of the owner of the house, a son of that builder shall be put to death. If it causes the death of a slave of the owner of the house—he shall give to the owner of the house a slave of equal value.
It looks like they were much more advanced 3,800 years ago than we are today.
The entire idea is that the builder knows more, a lot more, than any safety inspector, particularly about what lies hidden in the foundations—making it the best risk management rule ever, as the foundation, with delayed collapse, is the best place to hide risk. Hammurabi and his advisors understood small probabilities.
Now, clearly the object here is not to punish retrospectively, but to save lives by providing up-front disincentive in case of harm to others during the fulfillment of one’s profession.
Fat Tony has two heuristics:
- First, never get on a plane if the pilot is not on board. This heuristic addresses the asymmetry in rewards and punishment, or transfer of fragility between individuals. Predicting—any prediction—without skin in the game can be as dangerous for others as unmanned nuclear plants without the engineer sleeping on the premises.
- Second, make sure there is also a copilot. This heuristic says we need to build redundancy, a margin of safety, avoiding optimization, mitigating (even removing) asymmetries in our sensitivity to risk.
In traditional societies even those who fail—but have taken risks— have a higher status than those who are not exposed.
A writer with arguments can harm more people than any serial criminal.
The asymmetry (antifragility of postdictors): postdictors can cherry-pick and produce instances in which their opinions played out and discard mispredictions into the bowels of history. It is like a free option—to them; we pay for it.
Since they have the option, the fragilistas are personally antifragile: volatility tends to benefit them: the more volatility, the higher the illusion of intelligence.
But evidence of whether one has been a sucker or a nonsucker is easy to ferret out by looking at actual records, actions. Actions are symmetric, do not allow cherry-picking, remove the free option.
Stiglitz Syndrome = fragilista (with good intentions) + ex post cherry-picking
The cure to many ethical problems maps to the exact cure for the Stiglitz effect: never ask anyone for their opinion, forecast, or recommendation. Just ask them what they have—or don’t have—in their portfolio.
Never ask the doctor what you should do. Ask him what he would do if he were in your place. You would be surprised at the difference.
Suckers try to win arguments, nonsuckers try to win.
A wrong idea that is harmless can survive. Those who have wrong heuristics—but with a small harm in the event of error—will survive. Behavior called “irrational” can be good if it is harmless.
The Romans had even more powerful heuristics for situations few today have thought about, solving potent game-theoretic problems. Roman soldiers were forced to sign a sacramentum accepting punishment in the event of failure—a kind of pact between the soldier and the army spelling out commitment for upside and downside.
The Romans removed the soldiers' incentive to be cowards and hurt others thanks to a process called decimation. If a legion lost a battle and there was suspicion of cowardice, 10 percent of the soldiers and commanders were put to death, usually by random lottery.
Playing on one’s inner agency problem can go beyond symmetry: give soldiers no options and see how antifragile they can get.
Forcing researchers to eat their own cooking whenever possible solves a serious problem in science. Take this simple heuristic—does the scientific researcher whose ideas are applicable to the real world apply his ideas to his daily life? If so, take him seriously. Otherwise, ignore him.
Only he who has true beliefs will avoid eventually contradicting himself and falling into the errors of postdicting.
Word of mouth is a potent naturalistic filter. Anything one needs to market heavily is necessarily either an inferior product or an evil one.
One may make others aware of the existence of a product, say a new belly dancing belt, but I wonder why people don’t realize that, by definition, what is being marketed is necessarily inferior, otherwise it would not be advertised.
A good lesson: never trust the words of a man who is not free.
Fitting Ethics To A Profession
Greed is antifragile—though not its victims.
The more complex the regulation, the more bureaucratic the network, the more a regulator who knows the loops and glitches would benefit from it later, as his regulator edge would be a convex function of his differential knowledge.
- The more complicated the regulation, the more prone to arbitrages by insiders. This is another argument in favor of heuristics. The incentive of a regulator is to have complex regulation. Again, the insiders are the enemies of the less-is-more rule.
- The difference between the letter and the spirit of regulation is harder to detect in a complex system. Complex environments with nonlinearities are easier to game than linear ones with a small number of variables. The same applies to the gap between the legal and the ethical.
- In African countries, government officials get explicit bribes. In the United States they have the implicit, never mentioned, promise to go work for a bank at a later date with a sinecure offering, say $5 million a year, if they are seen favorably by the industry. And the “regulations” of such activities are easily skirted.
If someone has an opinion—say, that the banking system is fragile and should collapse—I want him invested in it so he is harmed if the audience for his opinion is harmed, as a token that he is not an empty suit. But when general statements about the collective welfare are made, instead, absence of investment is what is required.
People fit their beliefs to actions rather than fit their actions to their beliefs.
There exists an inverse Alan Blinder problem, called “evidence against one’s interest.” One should give more weight to witnesses and opinions when they present the opposite of a conflict of interest. A pharmacist or an executive of Big Pharma who advocates starvation and via negativa methods to cure diabetes would be more credible than another one who favors the ingestion of drugs.
Experiments can be marred with bias: the researcher has the incentive to select the experiment that corresponds to what he was looking for, hiding the failed attempts. He can also formulate a hypothesis after the results of the experiment—thus fitting the hypothesis to the experiment.
Data can only truly deliver via negativa–style knowledge—it can be effectively used to debunk, not confirm.
Conclusion
Everything in religious law comes down to the refinements, applications, and interpretations of the Golden Rule, “Don’t do unto others what you don’t want them to do to you.”
Everything gains or loses from volatility. Fragility is what loses from volatility and uncertainty.
We can detect what likes volatility thanks to convexity or acceleration and higher orders, since convexity is the response by a thing that likes disorder. We can build Black Swan–protected systems thanks to detection of concavity. We can take medical decisions by understanding the convexity of harm and the logic of Mother Nature’s tinkering, on which side we face opacity, which error we should risk.
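The detection idea above has a simple mechanical form, sketched here under my own assumptions (the function names and payoff curves are illustrative, not from the book): probe a payoff around a point with a symmetric perturbation and look at the sign of the second difference. A positive value means the payoff is locally convex (gains from disorder); a negative value means it is locally concave (harmed by disorder).

```python
def second_difference(f, x, dx):
    """Fragility probe: f(x+dx) + f(x-dx) - 2*f(x).
    Positive  -> locally convex  (likes volatility).
    Negative  -> locally concave (breaks under volatility)."""
    return f(x + dx) + f(x - dx) - 2 * f(x)

# Hypothetical payoff curves for illustration:
convex_payoff = lambda x: x ** 2       # accelerating gains from variation
concave_payoff = lambda x: -(x ** 2)   # accelerating harm from variation

print(second_difference(convex_payoff, 10, 1))   # 2  (> 0: convex)
print(second_difference(concave_payoff, 10, 1))  # -2 (< 0: concave)
```

This is why fragility is detectable where risk is not: the probe needs only the local shape of the response to a shock, not a forecast of the shock's probability.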
Ethics is largely about stolen convexities and optionality.