The Trouble With Shadowstats

Often, when I talk about inflation being low, people who disagree tend to cite John Williams’ Shadowstats as evidence that price inflation is not low at all.

Now, I don’t disagree with the idea that some people have experienced a higher level of price inflation than the CPI. Everyone experiences a different rate of inflation based on their purchasing habits, so by definition everyone’s individual rate will diverge from the official rate to some degree; some will be higher, and some will be lower. And I don’t disagree that rising food and fuel prices have been a problem for welfare recipients and seniors on a fixed income, etc, who spend a higher proportion of their income on food and fuel than, say, young professionals with a lot of disposable income.

What I do disagree with is bad statistical methodology. Shadowstats is built on the belief that the Bureau of Labor Statistics changed its methodology in the 1980s and 1990s, and that if we were still using the original methodology the measured level of inflation would be much higher. Shadowstats presents what they claim to be the original methodology. But Shadowstats is not calculating inflation any differently. They are not using the 1980s or 1990s methodology that they believe would produce higher numbers. All Shadowstats is doing is taking the CPI data and adding on an arbitrary constant to make it look like inflation is higher!

This should be obvious from their data, which has the exact same curve as the CPI data at a higher level:

[Chart: Shadowstats alternate inflation rate vs official CPI]

In fact, according to James Hamilton of Econbrowser, John Williams admitted in 2008 that his numbers are just inflated CPI data:

Last month I called attention to an analysis by BLS researchers John Greenlees and Robert McClelland of some of the claims by John Williams of Shadowstats about the consequences for reported inflation of assorted technical decisions made by the BLS. Williams asked me to update with a link to his response to the BLS study. I am happy to do so, along with offering some further observations of my own.

You can follow the link to Shadowstats’ response to Greenlees and McClelland and judge for yourself, but my impression is that the response is more philosophical than quantitative. In a separate phone conversation, Williams further clarified the Shadowstats methodology. Here’s what John said to me: “I’m not going back and recalculating the CPI. All I’m doing is going back to the government’s estimates of what the effect would be and using that as an ad factor to the reported statistics.”
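The "ad factor" approach Williams describes can be sketched in a few lines. This is a minimal illustration with made-up numbers (not real CPI data, and the ad factor here is arbitrary): adding a constant to the reported year-over-year rates preserves every change in the series exactly, so the "alternate" curve is just the official curve shifted upward.

```python
# Hypothetical year-over-year CPI inflation rates (%) -- illustrative only.
cpi_yoy = [3.8, 2.1, -0.4, 1.6, 3.2, 2.1, 1.5]

AD_FACTOR = 4.5  # an arbitrary constant, not an estimate of anything

# The "alternate" series: official rates plus the ad factor.
shadow_yoy = [r + AD_FACTOR for r in cpi_yoy]

# Year-to-year changes in each series are identical (to floating-point
# precision): the shape of the curve carries no new information.
cpi_changes = [b - a for a, b in zip(cpi_yoy, cpi_yoy[1:])]
shadow_changes = [b - a for a, b in zip(shadow_yoy, shadow_yoy[1:])]
assert all(abs(c - s) < 1e-9 for c, s in zip(cpi_changes, shadow_changes))
```

Which is exactly why the Shadowstats chart has the same curve as the CPI chart, only higher.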

Price changes and inflation are important topics, and constructing alternate measures of inflation is a worthwhile activity. Researchers at MIT have tried to do this with their Billion Prices Project, which measures price trends across a much, much larger range of products and locations than CPI:

[Chart: Billion Prices Project index vs CPI]

What the Billion Prices Project implies for Shadowstats is that the CPI is roughly correct, and there is no vast divergence between real-world price trends and the CPI number. Of course, maybe the 1980s and 1990s methodology would be different from the current numbers. It would be very interesting to compare the current CPI methodology with the older CPI methodologies and with the BPP data! But assessing this empirically would require someone to mine through the raw CPI data since the 1980s and recalculate the outputs with the real earlier methodology — a far longer, more difficult and sophisticated process than taking the CPI outputs and adding an arbitrary constant!


How to Predict the Future

I’ve spent most of today reading.

The modern world appears divided into two camps (no — not those who believe the world can be divided into two camps, and those who don’t): those who believe that man has mastered nature, and those who know better.

The first camp share two chief delusions:

  1. That human beings — through the use of simulations, models, mathematics and other predictive techniques — can routinely make accurate predictions about the future.
  2. That these predictions should be deployed — usually via the power of government — to advance society.

The first supposition has been shown to be largely wrong, both empirically (predictions from models and simulations routinely miss — sometimes by wild margins, as they did during the 2008 meltdown) and rationally (economies are nonlinear systems where the output is not proportional to the input, and therefore chaos will always skew predictions). The second supposition is problematic, because it is dependent upon the first one being true.
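The nonlinearity point can be made concrete with a toy example. The sketch below uses the logistic map, a textbook chaotic system (chosen for illustration; it is not a model of any particular economy): two forecasts whose starting points differ by one part in a million agree at first, then diverge to completely uncorrelated trajectories within a few dozen steps.

```python
# The logistic map x -> r*x*(1-x) with r=4.0 is a standard example of
# deterministic chaos: output is not proportional to input, and tiny
# measurement errors compound exponentially.

def trajectory(x, r=4.0, steps=50):
    """Iterate the logistic map from x, returning the whole path."""
    path = []
    for _ in range(steps):
        x = r * x * (1 - x)
        path.append(x)
    return path

a = trajectory(0.300000)
b = trajectory(0.300001)  # initial condition off by one part in a million

# Early steps agree closely; later steps bear no resemblance to each other.
max_gap = max(abs(p - q) for p, q in zip(a, b))
print(f"first-step gap: {abs(a[0] - b[0]):.2e}, max gap: {max_gap:.2f}")
```

The exact numbers do not matter; the point is that in a nonlinear system a negligible error in the inputs destroys predictive power beyond a short horizon.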

The problem is that human beings want to predict the future — and accurately. Businesses want to be able to know what products will be selling in six months, six years or sixty years, so they can make money from it. Children want to know what field to study at college, so that they can get a paying job. Governments want to know what interest rates will be in three months, six months or 10 years, so they can decide how much to borrow. Scientists want to know what amounts of greenhouse gases will be emitted over the next half century, so they can attempt to model climate change.

And most significantly, those in power want the (often-undeserved) authority granted by a “window on the future”.

The best way to come to terms with this problem is to treat it as an advantage, not a disadvantage. It is inevitable that there will be some forms of mathematical, statistical and predictive modelling, just as it is inevitable that philosophers and historians will write predictive literary works. Some will be wrong, and some will be right. The key is that it must be safe to be wrong. Societies, communities, individuals and organisations should plan for the future based not on the idea that the future is predictable, but on the fact that the future is uncertain and fundamentally difficult to predict. This means that everything needs leeway to break.

Some crucial examples:

  1. An international financial system that can be torn down by a default cascade triggered by the failure of a single bank or nation is not robust to bad predictions.
  2. A banking system that can be torn down by debt-deflation during a credit contraction is not robust to bad predictions.
  3. A lifestyle in which one bad prediction leads to serious illness or injury is not robust to bad predictions.
  4. An organisation or family whose wealth (or health) can be destroyed by uncontrollable externalities is not robust to bad predictions.
  5. A nation dependent on the import of credit, resources and goods from hostile nations is not robust to bad predictions.

This just means that economists, writers, historians, bureaucrats and just about anyone who claims that they need to predict the future (that’s all of us, occasionally) needs to frequently and honestly ask themselves the question: what happens if I am wrong?