I’ve spent most of today reading.
The modern world appears divided into two camps (no — not those who believe the world can be divided into two camps, and those who don’t): those who believe that man has mastered nature, and those who know better.
The first camp share two chief delusions:
- That human beings — through the use of simulations, models, mathematics and other predictive techniques — can routinely make accurate predictions about the future.
- That these predictions should be deployed — usually via the power of government — to advance society.
The first supposition has been shown to be largely wrong, both empirically (predictions from models and simulations routinely miss, sometimes by wild margins, as they did during the 2008 meltdown) and rationally (economies are nonlinear systems in which output is not proportional to input, so small errors in measurement compound and chaos will always skew predictions). The second supposition is problematic because it depends upon the first one being true.
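The nonlinearity point can be illustrated with the logistic map, a standard toy model of chaotic dynamics (the function and parameter values below are my own illustration, not anything from the post): a measurement error of one part in ten million swamps the forecast within a few dozen steps.

```python
# Sensitive dependence on initial conditions in the logistic map
# x_{n+1} = r * x_n * (1 - x_n), iterated at r = 4 (the chaotic regime).

def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map from initial condition x0."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2000001)  # the same "economy", measured with a 1e-7 error

# The tiny initial discrepancy roughly doubles each step until the
# two trajectories become effectively unrelated:
divergence = [abs(x - y) for x, y in zip(a, b)]
print(f"error at step 0:  {divergence[0]:.1e}")
print(f"error at step 50: {divergence[50]:.3f}")
```

This is the mechanism behind "output is not proportional to input": no refinement of the model helps once measurement error in the initial state exists, because that error grows exponentially.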
The problem is that human beings want to predict the future — and accurately. Businesses want to be able to know what products will be selling in six months, six years or sixty years, so they can make money from it. Children want to know what field to study at college, so that they can get a paying job. Governments want to know what interest rates will be in three months, six months or 10 years, so they can decide how much to borrow. Scientists want to know what amounts of greenhouse gases will be emitted over the next half century, so they can attempt to model climate change.
And most significantly, those in power want the (often-undeserved) authority granted by a “window on the future”.
The best way to come to terms with this problem is to treat it as an advantage rather than a disadvantage. It is inevitable that there will be some forms of mathematical, statistical and predictive modelling, just as it is inevitable that philosophers and historians will write predictive literary works. Some will be wrong, and some will be right. The key is that it must be safe to be wrong. Societies, communities, individuals and organisations should plan for the future based not on the idea that the future is predictable, but on the fact that the future is uncertain and fundamentally difficult to predict. This means that everything needs leeway to break.
Some crucial examples:
- An international financial system which can be torn down by a default cascade triggered by the failure of one bank or one nation is not robust to bad predictions.
- A banking system which can be torn down by debt-deflation during a credit contraction is not robust to bad predictions.
- A lifestyle in which one bad prediction leads to serious illness or injury is not robust to bad predictions.
- An organisation or family whose wealth (or health) can be destroyed by uncontrollable externalities is not robust to bad predictions.
- A nation dependent on the import of credit, resource and goods from hostile nations is not robust to bad predictions.
Robustness is becoming a crucial “Theory” to implement.
Taleb needs to buy a bus and megaphone and hit the pavement.
meh, businesses do this every day, and business leaders regularly “deploy” their visions (supported by forecasts and pro formas). The “usually via the power of the government” quip is simply wrong; it is usually via the power of the corporation/market. The Treas. didn’t go to the I.Banks and say “we want you to increase your leverage,” the I.Banks went to the Treas. and said “we want to increase our leverage.” When the market was crashing, who do you think approached whom first? I bet the I.Banks contacted the Treas. The point about cost of failure is spot on, but business leaders aren’t good at knowing how uncertain their forecasts really are, and they can’t simply assume that everything is so uncertain that no decisions can be made.
The “power of government” quip is inspired by people like Paul Krugman, etc, who want the gov’t to centrally plan more based on the “strength” of models (he’s making a big deal right now over IS/LM predicting that rates would stay low, when everything he had at his disposal in ’06 and ’07 failed to predict the subprime problems). I know businesses/the market use predictive models too, but the key difference here is that corporations and businesses who screw up get punished by the market (so are naturally more sceptical and cautious) whereas governments who screw up can ignore the market (look at the USSR) and keep on blundering ’til they collapse completely.
The dangerous new development is that “too big to fail” has transferred the government’s “we can screw up as much as we want and it doesn’t matter because it’s not our money” mentality into anything that can get itself cast as “systemically important”, e.g. AIG, GM, etc.
The key is that if the government life-support is turned off, businesses have to deploy more robust strategies (including less leverage) in order to survive in the marketplace. If they don’t, they fail, and open a hole in the marketplace for the next kid who comes along.
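The leverage point can be made concrete with a toy Monte Carlo sketch (entirely my own illustration, not from the discussion above; return distribution, parameters and the neglect of funding costs are all simplifying assumptions): under uncertain returns, higher leverage sharply raises the probability that a single bad year wipes a firm out.

```python
# Toy Monte Carlo: probability of ruin at different leverage ratios,
# assuming i.i.d. normally distributed asset returns and ignoring
# funding costs. All parameter values are illustrative only.
import random

def ruin_probability(leverage, trials=10_000, periods=40,
                     mean=0.05, vol=0.20, seed=1):
    """Fraction of simulated firms whose equity is wiped out."""
    rng = random.Random(seed)
    ruined = 0
    for _ in range(trials):
        equity = 1.0
        for _ in range(periods):
            r = rng.gauss(mean, vol) * leverage  # levered return on equity
            equity *= (1 + r)
            if equity <= 0:  # one sufficiently bad year ends the firm
                ruined += 1
                break
    return ruined / trials

for lev in (1, 3, 10):
    print(f"leverage {lev:2d}x -> ruin probability {ruin_probability(lev):.2%}")
```

An unlevered firm here needs a worse-than-minus-100% asset return to die, which essentially never happens; a 10x-levered firm needs only a minus-10% year, so ruin becomes near-certain over a few decades. That asymmetry is what “leeway to break” buys.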