How To Euthanize Rentiers (Wonkish)

In my last post, I established that the “rentier’s share” of interest — resulting from, as Keynes put it, the “power of the capitalist to exploit the scarcity-value of capital” — can be calculated as the real interest rate on lending to the monetary sovereign, typically known as the real risk-free interest rate. That is because it is the rate that is left over after deducting for credit risk and inflation risk.

However, I have been convinced that my conclusion — that euthanizing rentiers should be an objective of monetary policy — is either wrong or impractical.

It would at the very least require a dramatic shift in monetary policy orthodoxy. My initial thought was this: the real risk-free interest rate (r) can be expressed as the nominal risk-free interest rate minus the rate of inflation (r = n − i). To eliminate the rentier’s share, simply substitute 0 for r so that 0 = n − i and n = i. In other words, have the central bank target a rate of inflation that offsets the expected future nominal risk-free interest rate, resulting in a future real risk-free interest rate as close to zero as possible.
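That arithmetic is simple enough to sketch in a few lines of code. Here is a minimal illustration (the 3 percent figure is purely hypothetical, not a forecast):

```python
def real_rate(nominal: float, inflation: float) -> float:
    """Real risk-free rate via the simple Fisher approximation: r = n - i."""
    return nominal - inflation

def implied_inflation_target(expected_nominal: float) -> float:
    """To drive r to zero, set 0 = n - i, so the inflation target i
    equals the expected nominal risk-free rate n."""
    return expected_nominal

# Hypothetical example: a 3% expected nominal risk-free rate implies
# a 3% inflation target to zero out the real rate.
n = 0.03
i_target = implied_inflation_target(n)
r = real_rate(n, i_target)  # 0.0
```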

There are some major problems with this. Presently, most major central banks target inflation. But they target a fixed rate of inflation of around 2 percent. The Fed explains its rationale:

Over time, a higher inflation rate would reduce the public’s ability to make accurate longer-term economic and financial decisions. On the other hand, a lower inflation rate would be associated with an elevated probability of falling into deflation, which means prices and perhaps wages, on average, are falling — a phenomenon associated with very weak economic conditions. Having at least a small level of inflation makes it less likely that the economy will experience harmful deflation if economic conditions weaken. The FOMC implements monetary policy to help maintain an inflation rate of 2 percent over the medium term.

Now, it is possible to argue that inflation targets should vary with macroeconomic conditions. For example, if you’re having a problem with deflation and getting stuck in a liquidity trap, a higher inflation target might be appropriate, as Jared Bernstein and Larry Ball argue. And on the other side of the coin, if you’re having a problem with excessive inflation — as occurred in the 1970s — it is arguable that an inflation target lower than 2 percent may be appropriate.

But shifting to a variable rate targeting regime would be a very major policy shift, likely to be heavily resisted simply because the evidence shows that a fixed rate target results in more predictability, and therefore enhances “the public’s ability to make accurate longer-term economic and financial decisions”.

A second sticking point is the argument that such a regime would be trying to target a real variable (the real risk-free interest rate), which central banks have at best a very limited ability to do.

A third sticking point is Goodhart’s Law: “when a measure becomes a target, it ceases to be a good measure.” By making the future spread between the nominal risk free interest rate and inflation a target, the spread would lose any meaning as a measure.

A fourth sticking point is the possibility that such a severe regime change might create a regime susceptible to severe accelerative macroeconomic problems like inflationary and deflationary spirals.

And in this age of soaring inequality, the euthanasia of the rentier is simply too important an issue to hinge on being able to formulate a new workable policy regime and convince the central banking establishment to adopt it. Even if variable-rate inflation targeting or some alternative were actually viable, I don’t have the time, or the energy, or the inclination, or the expertise to try to do what Scott Sumner has spent over half a decade trying to do — change the way central banks work.

Plus, there is a much better option: make the euthanasia of the rentier a matter for fiscal policy and specifically taxation and redistribution. So here’s a different proposal: a new capital gains tax at a variable rate equal to the real risk-free interest rate, with the proceeds going toward business grants for poor people to start new businesses.
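As a rough sketch of how such a levy might be computed, assuming (my assumption, not a worked-out proposal) that the tax falls on realized gains at a rate equal to the prevailing real risk-free rate, and that no tax is owed when that rate is negative:

```python
def rentier_tax(realized_gain: float, real_risk_free_rate: float) -> float:
    """Tax a realized capital gain at a variable rate equal to the real
    risk-free interest rate; assume no tax when the real rate is negative."""
    rate = max(real_risk_free_rate, 0.0)
    return realized_gain * rate

# Hypothetical example: a $10,000 realized gain while the real
# risk-free rate stands at 2%.
tax = rentier_tax(10_000, 0.02)
```

On this sketch, the levy automatically shrinks to zero whenever the rentier’s share itself disappears.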

The Subtle Tyranny of Interest Rates

Interest rates are the price of credit. They are the price of access to capital.

Now, it is obvious that pricing credit is not tyrannical in and of itself. Interest compensates a lender for default risk and the risk of inflation eroding the purchasing power of the money that they lend.

The tyranny I am getting at is subtle. It is the tyranny that Keynes pointed to when he proposed a euthanasia of the rentier. Keynes proposed that low interest rates would:

mean the euthanasia of the rentier, and, consequently, the euthanasia of the cumulative oppressive power of the capitalist to exploit the scarcity-value of capital.

Keynes pointed to an important feature of interest rates: the fact that capital has a cost is not just the result of default risk and the risk of inflation. It is also a result of the scarcity of capital.

Now, that is inevitable in a world where financial capital consists of metal that you dig up out of the ground.

But in our brave new state-backed fiat monetary system, why should capital be so scarce that those who have it can profit from its scarcity?

Obviously, central banks should not print money to the extent that it becomes worthless. But capital availability is absolutely critical to the advancement of society: the investment of capital is how societies become productive. It is how technology improves, and it is the key to wealth accumulation.

What Keynes didn’t specify was what exactly in the interest rate paid was the part that represented the “scarcity value” of capital.

Obviously, it doesn’t include the part that compensates for inflation, which is why we need to look at inflation-adjusted interest rates. And it isn’t the part that compensates for default risk. That part is easily calculable too: it is the excess paid over the rate on lending to the monetary sovereign.

In the U.S. and Britain, that would be the American and British governments. In the eurozone — for complicated political reasons — there is no monetary sovereign exactly, but we might approximate one by looking at the spread against German government borrowing, because Germany seems to be the nation calling the lion’s share of the shots.
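The decomposition described above can be made concrete with a toy calculation (all figures hypothetical, using the simple additive approximation):

```python
def rentier_share(nominal_yield: float, expected_inflation: float,
                  sovereign_yield: float) -> float:
    """Strip a nominal lending rate down to the scarcity value of capital.
    The credit-risk premium is the excess over the sovereign yield; the
    inflation compensation is expected inflation; what remains is the
    real yield on lending to the monetary sovereign."""
    credit_premium = nominal_yield - sovereign_yield       # compensates default risk
    real_sovereign = sovereign_yield - expected_inflation  # the rentier's share
    # Check the decomposition adds back up to the nominal yield:
    assert abs(nominal_yield -
               (real_sovereign + expected_inflation + credit_premium)) < 1e-12
    return real_sovereign

# Hypothetical example: a corporate bond yields 5%, the sovereign 3%,
# expected inflation is 2% -- leaving a rentier's share of about 1%.
share = rentier_share(0.05, 0.02, 0.03)
```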

Here’s the real interest rate on U.S. 10-year government borrowing (I chose the 10-year because it is a benchmark, although I would have preferred to use a harmonized rate from across the yield curve):

[Chart: real interest rate on U.S. 10-year government borrowing, via FRED]

So what are we really seeing? The general trend is that real interest rates on U.S. government borrowing are overwhelmingly positive, with a few periodic exceptions where real rates went slightly negative. This bias toward positive real interest rates on lending to the monetary sovereign, I would argue, is the rentier’s profit resulting from the scarcity of financial capital.

Year over year, that is going to compound heavily. It is these rentiers, I would argue, who should be euthanized. Not because they should be resented for doing well out of the system. No. They should be euthanized because of the opportunity cost of devoting resources to enriching rentiers — resources that could be deployed productively elsewhere.
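To see how heavily even a modest positive real rate compounds, a quick back-of-envelope sketch (a constant rate is assumed purely for illustration):

```python
def compound_rentier_gain(real_rate: float, years: int) -> float:
    """Cumulative real return from rolling over sovereign lending
    at a constant real rate."""
    return (1 + real_rate) ** years - 1

# A steady 2% real rate compounds to roughly a 49% real gain over
# 20 years, and roughly 170% over 50 years.
gain_20y = compound_rentier_gain(0.02, 20)
gain_50y = compound_rentier_gain(0.02, 50)
```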

And how to euthanize the rentiers? Because we have identified what the rentier’s share is, the answer is very simple: make a real interest rate of zero on lending to the monetary sovereign an objective of monetary policy.

Update: After much debate, I have decided that euthanizing rentiers is not a matter for monetary policy, but a matter for fiscal policy. I have written another post discussing this.

On the Race to Mars and the Economics of Colonizing Space

It is at least a little exciting to see that NASA wants to put a human on an asteroid by 2025 and on Mars by 2030.

Why not very exciting? Because NASA — unlike with the Moon — is not on track to be the first one there. Yes, governments — the largest of which are capable of borrowing, spending and taxing trillions — still have more economic power at their disposal than any other agency. Yet if their plans for space exploration are anything to go by, they no longer have anything like the most ambition.

Elon Musk, by contrast, plans to be on Mars by 2020, and establish a colony he wants to grow to a million humans by 2100. Mars One plans to establish a colony by 2024. Overambitious? Nobody really knows. Sending flesh-and-bone humans to Mars is a pioneering act, not just on par with but considerably more ambitious than the pioneering explorations of Magellan and Columbus. Scurvy, foreign diseases, unpredictable weather at sea and geomagnetic anomalies are one thing. But nobody knows how the human body will respond to multi-year space travel trips across the vast void of space that separates Earth and Mars, nor to life in a metal box on the Martian surface.

Of course, if the private colonists fail — as many are expecting them to do — it is nice to know that the U.S. government will try to get the job done instead. After all, as Stephen Hawking has argued, space colonization is absolutely central to humanity’s future. In our current state as a one-planet species, one stray asteroid, one nearby gamma ray burst, one large-scale industrial accident, one explosive supervolcano, or one stray genetic mutation — not to mention climate change, and all the cataclysmic risks we don’t know about — could send us to the edge of extinction. As a two-planet or ten-planet or two-solar-system species, those risks progressively diminish.

Simply, we face a choice as a species. Turn inward and remain an earthbound species and face inevitable extinction in the next few thousand years, or possibly even the next few hundred years. Or turn upward, colonize other worlds and human beings — like us, and descended from us — have a chance of still being around one million or even one billion years from now.

Of course, the ultimate viability of all this really comes down to economics. If Musk, Bezos, Branson and the other stargazing private space interests can make space technology profitable, they can fund their way (and our way) to the stars. If not, then humanity’s hopes of colonizing space are tied up with the inward-looking, climate change-denying, and stupefied reality of scientifically and economically illiterate politicians who care more about their 19th century ideologies, election campaigns, and parliamentary champagne than the state of humanity 10 or 1,000 generations from now.

In theory, the resources floating up in space may be the economic fuel necessary to take us to the stars. As I noted last year: “An asteroid less than a mile in diameter could hold more than $20 trillion in industrial and precious metals” at 1997 prices. And that’s in addition to the massive potential of tapping into the sun’s rays as a self-perpetuating energy source. And while abundance may bring down the price of such commodities (including energy), early asteroid miners may reap massive enough rewards to turn themselves into the next Google, Apple, or Facebook, capable of pumping billions or trillions of dollars into research into further technologies.

As I argued last year, those who believe that the global economy may be entering an era of so-called “secular stagnation” clearly have either not thought very much about the potential economic growth possible from growing into space, or they think it a very unlikely possibility. Do you know how much one interstellar spacecraft or large-scale space station could add to GDP? Not just in its construction, but in the huge amount of research and development needed to develop and deploy such a thing? This is a whole new economy.

And while robots may mean that this spending does not create many jobs, and while off-planet tax havens are likely to become a thing, at the very least the technologies will trickle down to the wider public. Already, the widespread availability of the internet is creating a widely accessible and levelled playing field in the dissemination of information, news and ideas. Distributed solar energy and 3-D printers have the potential to create similar effects in energy markets and manufacturing, and to lift billions out of poverty.

But none of this is guaranteed. Even with the recent upsurge in interest in private space industry from titans of industry like Musk and Bezos, uncleared technical hurdles may stymie the development of large scale space industry for decades to come. NASA may still beat the privateers to Mars. But NASA is no longer the tip of the spear. Hopefully, NASA’s exploits will begin to look like afterthoughts.

In defense of economic thinking

My colleague Damon Linker recently wrote a piece entitled “How economic thinking is ruining America,” arguing that political considerations such as community, loyalty, citizenship, and the common good have been “sacrificed on the altar of economic profit-seeking.”

As an economic thinker myself, I was bound to find some disagreement with Linker’s view. But there is also a fair amount of common ground. As Linker argues, the years since the 2008 recession have been rough: “Inequality is up, while growth, job creation, and middle class wages are running far below historic norms. That’s enough to drive even the cheeriest American to despair.”

One economic measure, of course, that is not down is corporate profits, which are at all-time highs relative to the size of the economy. The same thing is true for the incomes of the top 1 percent. So Linker is absolutely correct to argue that corporate profit-seeking has been allowed to override political and cultural loyalties and restraints. The middle class has been trampled into the dirt.

But is that really a product of economic thinking? Or is it a product of a broken political system that funnels insider access, tax cuts, and bailouts to the well-connected, while largely ignoring the concerns of the middle class?

Read More At TheWeek.com

What does the Big Mac Index really measure?

The Economist is out with the latest Big Mac Index measurement. The Big Mac Index — which compares the price of the famous McDonald’s hamburger in various countries around the world — was started as a joke in 1986, and purports to act as a light-hearted proxy for measuring the purchasing power of currencies around the world. The Big Mac is a homogeneous good that is sold worldwide, which makes it a plausible candidate for such a comparison.
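The arithmetic behind the index is straightforward purchasing-power parity, and can be sketched in a few lines (all prices and rates below are hypothetical):

```python
def implied_ppp_rate(price_local: float, price_us: float) -> float:
    """Local-currency units per dollar that would equalize Big Mac prices."""
    return price_local / price_us

def valuation_vs_dollar(implied_rate: float, market_rate: float) -> float:
    """Positive means the local currency looks overvalued against the
    dollar at market rates; negative means undervalued."""
    return implied_rate / market_rate - 1

# Hypothetical example: a Big Mac costs 6.50 local units, $4.00 in the
# U.S., and the market exchange rate is 2.00 local units per dollar.
implied = implied_ppp_rate(6.50, 4.00)         # 1.625 per dollar
valuation = valuation_vs_dollar(implied, 2.0)  # about -19%: undervalued
```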

Read More At TheWeek.com

Will Robots Drive Us To Socialism?


I read recently that Rio Tinto has already replaced 30% of their mining machinery operators and drivers with robots.

The future of work in an age of increasing automation is a topic that a lot of economic thinkers have considered. Frances Coppola ponders the question of how the future may look in a world in which automation is driving people out of the labour market:

Automation only happens when machines are cheaper to run than people, and it is probably fair to say that in the last few decades automation has not happened quite as fast as one might have anticipated because companies have discovered that labour in emerging markets is cheaper than the cost of investing in machinery. But as the standard of living rises in emerging markets, and the cost of technology falls, that will not remain the case. Hazlitt, writing in 1952, pointed out that it was automation of production that enabled families to survive without children’s labour, because the price of goods produced with the new machinery was so much lower than those produced in a more labour-intensive way. In the short term automation caused hardship, as people whose livelihoods depended on the old way of doing things lost their jobs: but in the longer term there was benefit to society in the reduced cost of goods that enabled many people to work less, and in the development of new industries to employ those people no longer needed in the old ones. The change we are seeing today is every bit as great, and the short-term consequences are the same – high unemployment, particularly among those with poor or irrelevant skills.

Automation should both require fewer people to work AND enable people to work less, since the whole point of automation is to reduce the cost of production, which in a competitive system would result in falling prices. Unfortunately this isn’t always the case: the owners of automated industry may use reduced production cost as an opportunity to take more profit, and they may use political influence to create barriers to entry and trade tariffs to prevent competition driving down prices. But assuming that governments don’t use subsidies and protections to keep inefficient companies alive and prices artificially high, where does that leave us in terms of employment and incomes in the future?

Where it leaves us is with increasing inequality — which, of course, is something that we have seen in recent years — alongside increased unemployment, which is another thing we have seen. And in a world where a minority — historically known as the capitalists — own the physical means of production like factories, robots and patents, this will result in greater inequality as labour becomes less and less important as an economic factor. The owners of capital will be able to produce to satisfy market demand with little labour input. This will result in more former labourers leaving that field and either becoming entrepreneurs themselves, or becoming dependent on welfare. I think this transformational phenomenon is already well under way in the West — and I only expect it to become more pronounced globally as automation becomes ubiquitous.

In theory, physical labour may become totally obsolete. If every house has a decentralised energy source like solar panels and reliable energy storage, as well as an advanced 3-D printer or molecular assembler that can produce almost any physical object imaginable from a few basic recyclable chemicals, then human poverty will essentially have been abolished. We can just spend the vast majority of our time doing things that we enjoy, while spending only a few minutes or at most hours a day programming our machines to fulfil our material desires.

That is the more optimistic vision. In a less optimistic vision, only a small minority of people will have access to such technologies, because while the technology may exist, the costs of mass distribution remain too high (at least for a time). The vast masses will be stuck in impoverished material conditions — dependent on welfare and charity — without any real prospect of being able to climb the ladder through selling their labour. Only a lucky few — those with an inimitably good idea, or a creative skill that cannot be replicated by a robot — will have a prospect of joining the capital-owning upper class. And for the others who are left out in the cold, political action may look attractive. Simply have the government take a larger chunk of the capital-owning class’s income or wealth, and redistribute it to the poor. Ideally, this would be done with the intent of abolishing poverty by making cheap electricity, internet access and molecular assemblers available to all. Less ideally, rather than giving the poor the means to fish (so to speak), it might instead take the shape of a giant welfare net, keeping the means of production in limited hands and simply confiscating and redistributing some wealth. Left unresolved, these issues could create a lot of tension between the two classes. In a worst-case scenario, that could lead to social breakdown or even revolution.

Fortunately, I think that this can be avoided through voluntary charity. The billionaire class today is split between those on one hand determined to give it all away with the intent of improving the material conditions of the very poor — Bill Gates, Warren Buffett — and those on the other determined to create new futuristic technologies and systems — Elon Musk — that can improve the material conditions of the masses. As we proceed through the 21st century, and as the technologies of superabundance — solar energy, nuclear energy, wind energy, cybernetics, genomics, the internet, 3-D printing, molecular manufacturing, desalination, etc. — create more and more wealth and more and more billionaires, this trend may accelerate. Simply, the wealthy may have so much wealth that eliminating material poverty through voluntary charity may in the long run be an obvious and inevitable move.

As Bryan Caplan notes:

At first glance, I admit, a vision of a superabundant world where people who own only their labor eke out a meager existence seems frightening. But put your fears aside. In an ultra-productive world, a relatively tiny amount of non-labor resources would make you rich by current standards. Labor + zero non-labor assets = poverty; labor + token non-labor assets = riches. In any case, a slight charitable impulse in the better-off is all you need to ensure fabulous riches for every human on earth.

Once you’ve got a world this wonderful, the last thing you’d want to do is start down a potentially slippery slope with a high tech Russian Civil War at the bottom. Indeed, a more sensible reaction would be to abolish the welfare state as obsolete. If half of us were billionaires, mopping up any residual human poverty with voluntary charity would be child’s play.

Ironically, this kind of world could be strangely like the decentralised and classless society that Marx originally envisaged. The route by which we appear to be travelling toward it, though, is totally different to the one Marx envisaged. Instead of violent revolution, the road to superabundance may be paved by technological progress made by the capital-owners.

Incomprehensible Bullshit

The physics professor Alan Sokal famously shamed much of the humanities profession by publishing ‘Transgressing the Boundaries: Towards a Transformative Hermeneutics of Quantum Gravity’ — a paper intended as ambiguous gobbledegook — in the peer-reviewed postmodern cultural studies journal Social Text in 1996.

Sokal’s paper was a cleverly and artfully constructed piece of trolling. Sokal did it by conforming to the stylistic trappings of postmodernists like Jacques Derrida, Jean Baudrillard and Luce Irigaray — mimicking their dense and obscure verbiage, misusing concepts from science like quantum gravity (of which there exists no widely accepted scientific theory), and shrouding his argument in a great deal of ambiguity. The piece described the notion of a “postmodern science”, one that discards objective truth.

The fallout from this paper underlined the divide between science (which seeks objective truth), and postmodernism (which does not seek objective truth). But more than that, it exposed postmodernism and cultural studies as being ambiguous, self-absorbed, and incomprehensible, to the extent that its own journals were tricked into publishing an article intended as nonsense.

Yet this issue — of baffling with incomprehensible bullshit — is not just a problem in postmodernism. Mathematics recently had a Sokal moment (and frankly, it is surprising that it took this long). Via the LRB:

Last month That’s Mathematics! reported another landmark event in the history of academic publishing. A paper by Marcie Rathke of the University of Southern North Dakota at Hoople had been provisionally accepted for publication in Advances in Pure Mathematics. ‘Independent, Negative, Canonically Turing Arrows of Equations and Problems in Applied Formal PDE’ concludes:

Now unfortunately, we cannot assume that

It is difficult, as a non-specialist, to judge the weight of that ‘unfortunately’. Thankfully, the abstract is a model of concision:

Let ρ = A. Is it possible to extend isomorphisms? We show that D′ is stochastically orthogonal and trivially affine. In [10], the main result was the construction of p-Cardano, compactly Erdős, Weyl functions. This could shed important light on a conjecture of Conway–d’Alembert.

Baffled? You should be. Each of these sentences contains mathematical nouns linked by the verbs mathematicians use, but the sentences scarcely connect with each other. The paper was created using Mathgen, an online random maths paper generator. Mathgen has a set of rules that define how papers are arranged in sections and what kinds of sentence make up a section and how those sentences are made up from different categories of technical and non-technical words. It creates beautifully formatted papers with the conventional structure, complete with equations and citations but, alas, totally devoid of meaning.

So mathematicians and mathematics journals are also susceptible to being trolled by their own bullshit, their own conventions, syntax and “rigour”. If a mathematics journal and the peer-review process can be fooled by a meaningless paper spat out by a computer program, how much well-intentioned but bad or meaningless mathematics has also slipped through the peer review process?

And what about the other subjects that have adopted mathematical symbols as their lexicon, like economics?

I have written at length about some of the problems connected to the very great increase in mathematical terminology in economics — and remain highly sceptical of the use of assumptive models in economics. The social sciences are particularly unsuited to simplified mathematical modelling — unlike the physical sciences, the phenomena they seek to explain tend to be far less linear in observable causation, and so far more susceptible to wildness. No model or theory less than reality itself can fully represent human behaviour and human action; each transaction in an economy is unique, and arises from a different set of circumstances, representing a constantly varying order of human preferences. This tendency toward nonlinear causality is why transparency is critical to bullshit detection in the social sciences. Just as a sheen of ambiguous, obscure and poorly-defined English can make theories incomprehensible and closed off from scrutiny and understanding, so too can a sheen of obscure and specialised mathematics.

Well-written work — whether in plain language or mathematics — requires comprehensible explanations and definitions, so that a non-specialist with a moderate interest in the subject can quickly and easily grasp the gist of the concepts, the theory, the reasoning, and the predictions. Researchers can use methods as complex as they like — but if they cannot explain them clearly in plain language then there is a transparency problem. Without transparency, academia — whether cultural studies, or mathematics, or economics — has sometimes produced self-serving ambiguous sludge. Bad models and theories produce bad predictions that can inform bad policy and bad investment decisions. It is crucial that ideas are expressed in a comprehensible way, and that theories and the thought-process behind them are not hidden behind opaque or poorly-defined words or mathematics.

Explaining Hyperinflation

This is a post in three sections. First I want to outline my conception of the price-level phenomena of inflation and deflation. Second, I want to outline my conception of the specific inflationary case of hyperinflation. And third, I want to consider the predictive implications of this.

Inflation & Deflation

What is inflation? There is a vast debate on the matter. Neoclassicists and Keynesians tend to define inflation as a rise in the general level of prices of goods and services in an economy over a period of time.

Prices are reached by voluntary agreement between individuals engaged in exchange. Every transaction is unique, because the circumstance of each transaction is unique. Humans choose to engage in exchange based on the desire to fulfil their own subjective needs and wants. Each individual’s supply of, and demand for goods is different, and continuously changing based on their continuously varying circumstances. This means that the measured phenomena of price level changes are ripples on the pond of human needs and wants. Nonetheless price levels convey extremely significant information — the level at which individuals are prepared to exchange the goods in question. When price levels change, it conveys that the underlying economic fundamentals encoded in human action have changed.

Economists today generally measure inflation in terms of price indices, consisting of the measured price levels of various goods throughout the economy. Price indices are useful, but as I have demonstrated before they can often leave out important categories like housing or equities. Any price index that does not take into account prices across the entire economy is not representing the fuller price structure.

Austrians tend to define inflation as any growth in the money supply. This is a useful measure too, but money supply growth tells us only about money supply growth; it does not relate that growth to underlying productivity (or indeed to the price level, which is what price indices purport, and often fail, to measure). Each transaction is two-way, meaning that two goods are exchanged. Money is merely one of two goods involved in a transaction. If the money supply increases, but the level of productivity (and thus, supply) increases faster than the money supply, this would place a downward pressure on prices. This effect is visible in many sectors today — for instance in housing, where a glut in supply has kept prices lower than their pre-2008 peak, even in spite of huge money supply growth.

So my definition of inflation is a little different to current schools. I define inflation (and deflation) as growth (or shrinkage) in the money supply disproportionate to the economy’s productivity. If money grows faster than productivity, there is inflation. If productivity grows faster than money there is deflation. If money shrinks faster than productivity, there is deflation. If productivity shrinks faster than money, there is inflation.

This is given by the following equation where R is relative inflation, ΔQ is change in productivity, and ΔM is change in the money supply:

R = ΔM − ΔQ
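In code, the definition is one line; a minimal sketch (growth rates expressed as decimals per period, with the choice of money and productivity series left open):

```python
def relative_inflation(money_growth: float, productivity_growth: float) -> float:
    """R = ΔM - ΔQ: money-supply growth minus productivity growth."""
    return money_growth - productivity_growth

# Example: money grows 6% while productivity grows 2%, giving relative
# inflation of about 4%; reverse the figures and R turns negative
# (relative deflation).
R = relative_inflation(0.06, 0.02)
```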

This chart shows relative inflation over the past fifty years. I am using M2 to denote the money supply, and GDP to denote productivity (GDP and M2 are imperfect estimations of both the true money supply and the true level of productivity; it is possible to use MZM for the money supply and industrial output for productivity to produce different estimates of the true level of relative inflation):

Inflation and deflation are, in my view, multivariate phenomena with four variables: the supply of and demand for money, and the supply of and demand for other goods. This is an important distinction, because it means that I am rejecting Milton Friedman’s definition that inflation is always and only a monetary phenomenon.

Friedman’s definition is based on Irving Fisher’s equation MV = PQ, where M is the money supply, P is the price level, Q is the level of production and V is the velocity of money. To me, this is a tenuous relationship, because V is not directly observed but instead inferred from the other three variables. Yet to Friedman, this equation stipulates that changes in the money supply will necessarily lead to changes in the price level, because Friedman assumes the relative stability of velocity and of productivity. Yet the instability of money velocity in recent years demonstrates empirically that velocity is not a stable figure:

And additionally, changes in the money supply can lead to changes in productivity — and that is true even under a gold or silver standard where a new discovery of gold can lead to a mining-driven boom. MV=PQ is a four-variable equation, and using a four-variable equation to establish causal linear relationships between two variables is tenuous at best.
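The point that V is inferred rather than observed is easy to make concrete (the figures below are hypothetical):

```python
def implied_velocity(price_level: float, real_output: float,
                     money_supply: float) -> float:
    """From MV = PQ: velocity is never measured directly, only backed
    out as V = PQ / M (nominal output divided by the money supply)."""
    return (price_level * real_output) / money_supply

# Hypothetical: nominal output (P*Q) of 16,000 against a money stock
# of 10,000 implies V = 1.6. Any shock to P, Q or M shows up as a
# change in the inferred V.
V = implied_velocity(1.0, 16_000, 10_000)
```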

Through the multivariate lens of relative inflation, we can grasp the underlying dynamics of hyperinflation more fully.

Hyperinflation

I define hyperinflation as an increase in relative inflation of above 50% month-on-month. This can theoretically arise from either a dramatic fall in ΔQ or a dramatic rise in ΔM.
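The 50% month-on-month threshold (the same cutoff used in Phillip Cagan’s classic study) compounds to a staggering annual figure:

```python
# Compounding a 50% month-on-month rise over a year shows why the
# threshold marks a qualitative break from ordinary inflation.

monthly_rate = 0.50
annual_factor = (1 + monthly_rate) ** 12
annual_rate_pct = (annual_factor - 1) * 100

print(round(annual_factor, 2))    # 129.75 -> prices multiply ~130x in a year
print(round(annual_rate_pct, 1))  # 12874.6 -> roughly 12,875% annual inflation
```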

There are zero cases of gold-denominated hyperinflation in history; gold is naturally scarce. Yet there have been plenty of cases of fiat-denominated hyperinflation:

This disparity between naturally-scarce gold, which has never been hyperinflated, and artificially-scarce fiat currencies, which have been hyperinflated multiple times, suggests very strongly that hyperinflation is a function of governments running the printing presses. Of course, no government is in the business of intentionally destroying its own credibility. So why would a government end up running the printing presses (ΔM) to oblivion?

Well, the majority of these hyperinflationary episodes were associated with the end of World War II or the breakup of the Soviet Union. Every single case in the list occurred in a time of severe physical shock: countries were not producing enough food, manufacturing and energy generation were shut down by political and social turmoil, or countries were denied access to import markets, as in the present Iranian hyperinflation. Increases in the money supply occurred without a corresponding increase in productivity — leading to astronomical relative inflation as productivity fell off a cliff while the money supply soared.

Steve Hanke and Nicholas Krus of the Cato Institute note:

Hyperinflation is an economic malady that arises under extreme conditions: war, political mismanagement, and the transition from a command to market-based economy—to name a few.

So in many cases, the reason may be political expediency. It may seem easier to pay workers, lenders, and clients of the welfare state in heavily devalued currency than to default on such liabilities — as was the case in the Weimar Republic. Declining to engage in money printing does not make the underlying problems — like a collapse of agriculture, the loss of a war, or a natural disaster — disappear, so avoiding hyperinflation is no panacea. Money printing may be a last roll of the dice: the final, failed attempt at stabilising a fundamentally rotten situation.

The fact that naturally scarce currencies like gold do not hyperinflate — even in times of extreme economic stress — suggests that the underlying mechanism is an extreme exogenous event causing a severe drop in productivity, with governments then running the printing presses in an attempt to smooth over the problem. In the Weimar Republic, for instance, workers in the occupied Ruhr region went on a general strike, and the Weimar government continued to print money in order to pay them. While hyperinflation can in theory arise out of either ΔQ or ΔM, a government has no reason to inject a hyper-inflationary volume of money into an economy that still has access to global markets, that still produces sufficient energy and food to support its population, and that still has functional infrastructure.

This means that the indicators of imminent hyperinflation are not so much economic as geopolitical — wars, trade breakdowns, energy crises, socio-political collapse, collapse in production, collapse in agriculture. While all such catastrophes have preexisting economic causes, a bad economic situation will not deteriorate into full collapse and hyperinflation without a severe intervening physical breakdown.

Predicting Hyperinflation

Hyperinflation is notoriously difficult to predict, because physical breakdowns like an invasion, or the breakup of a currency union, or a trade breakdown are political in nature, and human action is anything but timely or predictable.

However, it is possible to provide a list of factors which can make a nation or community fragile to unexpected collapses in productivity:

  1. Rising Public and/or Private Debt — risks currency crisis, especially if denominated in foreign currency.
  2. Import Dependency — supplies can be cut off, leading to bottlenecks and shortages.
  3. Energy Dependency — supplies can be cut off, leading to transport and power issues.
  4. Fragile Transport Infrastructure — transport can be disrupted by war, terrorism, shortages or natural disasters.
  5. Overstretched Military — high cost, harder to respond to unexpected disasters.
  6. Natural Disaster-Prone — e.g. volcanoes, hurricanes, tornadoes, drought, floods.
  7. Civil Disorder — may cause severe civil and economic disruption.

Readers are free to speculate as to which nation is currently most fragile to hyperinflation.
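For illustration only (and very much against the spirit of the criticism of false precision later in this piece), the seven factors above can be sketched as a crude checklist. The equal weighting and the example flags are entirely hypothetical assumptions:

```python
# A toy fragility checklist built from the seven factors listed above.
# Equal weighting and the example flags are illustrative assumptions,
# not a calibrated model.

FACTORS = [
    "rising public and/or private debt",
    "import dependency",
    "energy dependency",
    "fragile transport infrastructure",
    "overstretched military",
    "natural disaster-prone",
    "civil disorder",
]

def fragility_score(flags):
    """Count how many of the seven risk factors apply (0 to 7)."""
    return sum(1 for factor in FACTORS if flags.get(factor, False))

# A hypothetical nation flagged on three of the seven factors:
example = {
    "import dependency": True,
    "energy dependency": True,
    "civil disorder": True,
}
print(fragility_score(example))  # 3
```

A higher count suggests greater fragility to the kind of physical shock described above, but no score can guarantee (or rule out) a collapse.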

However, none of these factors — alone or together, however severe — is guaranteed to precipitate a shock that leads to the collapse of production or imports.

But if an incident or series of incidents leads to a severe and prolonged drop in productivity, and the government accelerates the printing of money to paper over the cracks, hyperinflation is a mathematical inevitability.

Penis Length, LIBOR & Soviet Growth

Healthy markets require solid data based on reality.

It is hard enough to determine what, when and how to invest even with solid data. We live in an unpredictable and chaotic world, and the last thing that investors need is misinformation and distortion. That is why the LIBOR manipulation scandal is so infuriating: as banks skewed the figures, they skewed entire marketplaces. The level of economic distortion is incalculable — as LIBOR is used to price hundreds of trillions of dollars of assets, the effects cascaded across the entire financial system and the wider world. An unquantifiable number of good trades were made bad, and vice versa. Yet in truth we should not expect anything else from a self-reported system like LIBOR. Without real checks and balances to make sure the data is sturdy, it should be treated as completely unreliable.

Unsurprisingly, it is emerging that many more self-reported figures may have been skewed by self-reporting bullshittery.

The Telegraph noted:

The Libor scandal could be repeated in a number of other “self-certifying” markets where prices are determined, he said

“Self-certification is clearly open to abuse, so this could occur elsewhere,” he said.

A Financial Services Authority inquiry into Libor should be extended to other self-certifying markets, he said. The Treasury said last night that the review, led by Martin Wheatley, was free to examine markets other than Libor.

An expansion of the FSA review could take in a number of other interest-rate-related data as well as some complex financial instruments measuring the difference between banks’ borrowing costs and that of the US government [i.e. the TED spread]. Some markets in gold and oil are also based on self-certification.

This all reminds me of the gulf between self-reported and independently-measured penis length:

When humans have an incentive to exaggerate or lie — either to bolster their ego by lying about penis size, or to cream an easy profit by rigging rates — it seems they have a propensity to do so.

Hopefully there will be one beneficial side-effect of the LIBOR rigging: self-reporting will die. It seems inevitable that market participants will pay a premium for solid, independent data. But sadly, any auditor can be bribed. And in a generation’s time, the LIBOR-rigging scandal of 2008 (and probably much earlier) may be just an antique detail known only to a savvy few. Scepticism, caution and portfolio robustification will always remain essential tools for savvy investors who don’t want to lose their shirt and shoes.

It was scepticism that separated the economists who refused to buy into the notion of Soviet prosperity, in spite of the impressive (and entirely self-reported) figures emerging from the Soviet Union, from Western economists like Paul Samuelson (perhaps spurred on by ideological fervour), who predicted again and again, in textbooks spanning thirty years, that the USSR would overtake the USA in GDP:

Alex Tabarrok notes:

In the 1961 edition of his famous textbook of economic principles, Paul Samuelson wrote that GNP in the Soviet Union was about half that in the United States but the Soviet Union was growing faster.  As a result, one could comfortably forecast that Soviet GNP would exceed that of the United States by as early as 1984 or perhaps by as late as 1997 and in any event Soviet GNP would greatly catch-up to U.S. GNP.  A poor forecast — but it gets worse because in subsequent editions Samuelson presented the same analysis again and again except the overtaking time was always pushed further into the future so by 1980 the dates were 2002 to 2012.  In subsequent editions, Samuelson provided no acknowledgment of his past failure to predict and little commentary beyond remarks about “bad weather” in the Soviet Union.
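The overtaking dates themselves are simple compound-growth arithmetic. A sketch with hypothetical growth rates (Samuelson’s actual inputs varied from edition to edition): a country starting at half the leader’s GNP and growing 6% a year against the leader’s 3% closes the gap in roughly 24 years, which is how a 1961 forecast lands on a date in the mid-1980s:

```python
import math

# Overtaking time for compound growth: if the follower starts at
# `ratio` times the leader's GNP and both grow at constant rates,
#     ratio * (1 + g_f)**t = (1 + g_l)**t
# solves to t = ln(1/ratio) / ln((1 + g_f) / (1 + g_l)).
# The 6% vs 3% growth rates here are hypothetical illustrations.

def overtaking_years(ratio, g_follower, g_leader):
    return math.log(1 / ratio) / math.log((1 + g_follower) / (1 + g_leader))

t = overtaking_years(0.5, 0.06, 0.03)
print(round(t, 1))  # 24.1
```

The arithmetic is impeccable; the forecast failed because the self-reported growth figures fed into it were fiction, which is precisely the point.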

The reason for his prediction? Apparently, bad data.

“No incentive to amend data to show strong Russian proletarian outperforms weak American capitalist, Comrade!”

Matthew Ashton writes:

To his credit Samuelson was always fairly open about it when his predictions failed to come true, stating that he was using the best data available at the time and he changed his mind as the evidence changed. I’d argue that in some cases, especially concerning evidence coming out of the Soviet Union, he possibly should have been a bit more sceptical as to its accuracy, however almost everyone in economics is guilty of that.

One can only wonder how bad the state of misreporting, fraud and delusion is in the various economies where central planning plays an even larger role than here in the West.

The Pseudoscience of Economics

Modern economics is obsessed with modelling. An overwhelming majority of academic papers in the subject work like this: they take data, and use it to construct formal mathematical models of economic processes. Models mostly describe a situation, and describe how that situation would be changed by a given set of events; a very simple example is that as the supply of a good diminishes, its price will increase. Another is that deficit spending increases the national income. A mathematical model is a predictive tool, created to demonstrate the outcome of events in a massively simplified alternate universe.
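The supply-and-price example above, in its simplest model form: linear demand and supply curves with arbitrary, purely illustrative coefficients.

```python
# A minimal market model: demand Qd = a - b*P, supply Qs = c + d*P.
# Setting Qd = Qs and solving for P gives the equilibrium price.
# All coefficients are arbitrary illustrations.

def equilibrium_price(a, b, c, d):
    """Solve a - b*P = c + d*P for P."""
    return (a - c) / (b + d)

# Baseline: demand Qd = 100 - 2P, supply Qs = 10 + 3P
print(equilibrium_price(100, 2, 10, 3))  # 18.0

# Supply diminishes (intercept falls from 10 to 0): price rises
print(equilibrium_price(100, 2, 0, 3))   # 20.0
```

Neat and tractable, and exactly the kind of "massively simplified alternate universe" described above: the real economy does not present itself as two tidy linear curves.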

As someone who rather enjoys voyages of the imagination, I find the use of mathematical models in economics intriguing. The pretension that through formal mathematical techniques and processes we can not only accurately understand, but accurately predict, the result of changes in the economy is highly seductive. After all, we can accurately predict the future, right?

Wrong. The wonderful and terrible and confounding thing about our world is that it is a deeply unpredictable place, at least in the economic sphere where each number (for instance “aggregate demand” or “aggregate supply”) in an equation may loosely refer to millions of huge, complex and dynamic events. When you’re using huge simplifications to describe reality, those simplifications may miss the important details, and your projections may go askew.

Not all modelling is equal. Newton’s model of gravitation (since superseded by Einstein’s relativity) makes relatively accurate predictions about how gravitation works, and about what would happen to an object dropped 500 metres above the Earth. NASA used Newton’s equations to fly to the Moon. Of course, even in physics there are occasional divergences and oddities (which is why there are quite often unrepeatable or anomalous experimental results, for instance the recent experiment that seemed to show neutrinos travelling faster than the speed of light). So economics — with its fixation on creating models of situations, and using these models to attempt to predict the future — mimics physics, chemistry and biology, where data is collected and used to produce theories of physical processes which allow a modestly accurate representation of the future.

The key qualitative difference, though, is that mathematical economic theories don’t accurately predict the future. Ben Bernanke — the chairman of the Federal Reserve, and one of the most-cited academic economists in the world — told the world that subprime was contained. That is the economic equivalent of Stephen Hawking telling the world that a meteorite is going to miss the Earth when it is really going to hit. Physicists can very accurately model the trajectories of rocks in space. But economists cannot accurately model the trajectories of prices, employment and interest rates down on the rocky ground.

What I believe modern economists are most useful for is pointing out the glaring flaws in everyone else’s theories. Steve Keen has made a public name for himself by publishing a book entitled Debunking Economics, in which he explains the various glaring flaws in modern economic modelling (DSGE, New Classical, etc.).

Economics is a complex and multi-faceted subject. Economists must be, in some measure, philosophers, historians, linguists, mathematicians, statisticians, political scientists, sociologists, psychologists and many other things. The trouble is that at some stage in the last century the multi-faceted, multi-dimensional economics (like that of Xenophon) was hijacked by mathematicians who tried to turn this huge and delicate subject into an equation. Yet economics — and economic decisions, from the macro to the micro level — is a human subject. It is subtle and psychological and sporadic. A human subject requires human language, human emotion, human intuition.

The grand theoretical-mathematical approach to economics is fundamentally flawed. Trying to smudge the human reality of economics and politics into cold mathematical shackles is degenerative.

So what to do if you want to understand the economy?

Follow the data, consider the history (similarities and differences between the past and the present) and explain your conclusions simply, as you would to a child. Consider philosophical definitions: what is money? What is demand? What is supply? What is value? How does demand affect supply? What are the global patterns of trade? Why have they emerged this way and not an alternative way? Consider possibilities. Admit the limitations of your knowledge and explore the boundaries. Stop forcing the construction of absolutes, grand frameworks, grand theories. No theory will ever be robust to everything nature will throw at it, but simple microeconomic heuristics (opportunity cost, cost-benefit analysis) combined with data-focussed historical analysis may be more robust than cold, dead mathematics.

As Heraclitus noted:

No man ever steps in the same river twice

No two situations are identical. And in this universe even tiny differences can have huge effects on the outcome of a situation. This is the butterfly effect, a term coined by Edward Lorenz, and derived from the theoretical example of a hurricane’s formation being contingent on whether or not a distant butterfly had flapped its wings several weeks before.
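Lorenz’s own equations make the point concrete. In this sketch (plain fixed-step Euler integration; step size and parameters are the commonly quoted ones), two trajectories that start one part in a hundred million apart end up wildly different:

```python
# Sensitive dependence on initial conditions, using the Lorenz system
# (sigma=10, rho=28, beta=8/3, Lorenz's classic parameters), integrated
# with a crude fixed-step Euler method.

def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz system by one Euler step."""
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dx * dt, y + dy * dt, z + dz * dt

a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-8, 1.0, 1.0)  # the "butterfly flap": a 1e-8 nudge

max_gap = 0.0
for _ in range(4000):  # 40 time units
    a = lorenz_step(*a)
    b = lorenz_step(*b)
    max_gap = max(max_gap, abs(a[0] - b[0]))

# The gap grows from 1e-8 to the size of the attractor itself:
print(max_gap > 1.0)  # True
```

The same nudge applied to a linear model would stay a nudge; in a chaotic system it swallows the forecast entirely.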

The pseudo-scientific school of mathematical economics hungers and craves for a perfect world, where each river is the same, where there is no butterfly effect, where human preferences are expressed in equation form, where there is no subtlety or ambiguity or uncertainty.

It is a dreamworld constructed by and for people with Asperger’s Syndrome.