How to Predict the Future

I’ve spent most of today reading.

The modern world appears divided into two camps (no — not those who believe the world can be divided into two camps, and those who don’t): those who believe that man has mastered nature, and those who know better.

The first camp share two chief delusions:

  1. That human beings — through the use of simulations, models, mathematics and other predictive techniques — can routinely make accurate predictions about the future.
  2. That these predictions should be deployed — usually via the power of government — to advance society.

The first supposition has been shown to be largely wrong, both empirically (predictions from models and simulations routinely miss, sometimes by wild margins, as they did during the 2008 meltdown) and rationally (economies are nonlinear systems in which outputs are not proportional to inputs, so small errors in a model's starting data can grow into large errors in its forecasts). The second supposition is problematic because it depends on the first being true.
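The nonlinearity point can be made concrete with a textbook toy model, the logistic map. This is a hypothetical illustration, not a model of any real economy: two trajectories started one part in a million apart soon become completely unrelated, which is why tiny measurement errors wreck long-range forecasts of nonlinear systems.

```python
def logistic_map(x0, r=4.0, steps=50):
    """Iterate x -> r * x * (1 - x), a simple nonlinear map,
    and return the full trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_map(0.400000)
b = logistic_map(0.400001)  # initial condition off by one part in a million

# Early on the two paths agree almost exactly...
print(abs(a[5] - b[5]))
# ...but over the later steps they diverge to the scale of the whole range.
print(max(abs(x - y) for x, y in zip(a[30:], b[30:])))
```

The same arithmetic that makes the map easy to iterate forward makes it practically impossible to predict far ahead from imperfect data.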

The problem is that human beings want to predict the future — and accurately. Businesses want to know what products will be selling in six months, six years or sixty years, so they can make money from them. Children want to know what field to study at college, so that they can get a paying job. Governments want to know what interest rates will be in three months, six months or 10 years, so they can decide how much to borrow. Scientists want to know what amounts of greenhouse gases will be emitted over the next half century, so they can attempt to model climate change.

And most significantly, those in power want the (often-undeserved) authority granted by a “window on the future”.

The best way to come to terms with this problem is to treat it as an advantage, not a disadvantage. It is inevitable that there will be some forms of mathematical, statistical and predictive modelling, just as it is inevitable that philosophers and historians will write predictive literary works. Some will be wrong, and some will be right. The key is that it must be safe to be wrong. Societies, communities, individuals and organisations should plan for the future not on the idea that the future is predictable, but on the fact that the future is uncertain and fundamentally difficult to predict. This means that everything needs leeway to break.

Some crucial examples:

  1. An international financial system that can be torn down by a default cascade triggered by the failure of one bank or one nation is not robust to bad predictions.
  2. A banking system which is torn down through debt-deflation during a credit contraction is not robust to bad predictions.
  3. A lifestyle in which one bad prediction leads to serious illness or injury is not robust to bad predictions.
  4. An organisation or family whose wealth (or health) can be destroyed by uncontrollable externalities is not robust to bad predictions.
  5. A nation dependent on the import of credit, resources and goods from hostile nations is not robust to bad predictions.

This just means that economists, writers, historians, bureaucrats and just about anyone who claims that they need to predict the future (that’s all of us, occasionally) needs to frequently and honestly ask themselves the question: what happens if I am wrong?
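That question can even be asked mechanically. Here is a minimal sketch, with invented numbers, of the difference between planning around a single interest-rate forecast and stressing the same plan across a range of outcomes:

```python
def annual_interest(debt, rate):
    """Yearly interest cost on a given debt at a given rate."""
    return debt * rate

def plan_survives(debt, income, rates):
    """A plan is robust only if it stays affordable under every
    scenario considered, not just under the central forecast."""
    return all(annual_interest(debt, r) <= income for r in rates)

income = 50.0                # yearly amount available to service debt
forecast = [0.03]            # the single "predicted" rate
stress = [0.03, 0.06, 0.12]  # what happens if I am wrong?

print(plan_survives(1500, income, forecast))  # True: fine if the forecast holds
print(plan_survives(1500, income, stress))    # False: the plan breaks at 12%
```

A plan that only works when the prediction comes true is exactly the kind of fragility the examples above describe.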