The “Unemployment Is Voluntary” Myth

Loyd S. Pettegrew and Carol A. Vance of the Ludwig von Mises Institute ask and answer a question:

Why does a large portion of the population choose not to work when there are many jobs available? The answer is simple. If you can receive 2-3 times as much money from unemployment, disability, and/or welfare benefits (subsidized housing, food stamps, free cellphones, etc.) as you can from a temporary or part-time job, and live a life of leisure, why work? In 2011, the U.S. government spent over $800 billion on this “welfare,” exceeding expenditures on Social Security or Medicare.

So, is it true? Is unemployment elevated because millions of Americans are choosing not to work, lured by cushy government welfare provisions?

After all, welfare payments as a percentage of GDP and unemployment have risen in tandem:


However, in this case it is clear that correlation is not causation. Why?

Well, if workers were truly choosing welfare over work, we would expect to see a shortage of labour. Instead, we see an elevated number of applicants per job opening:


This means that there are not enough job openings in the economy even for the number of current jobseekers, let alone the discouraged workers and disabled individuals who are claiming welfare. If the Federal government were to throw them all off welfare, the number of jobseekers per opening — already elevated — would soar. This means that the issue causing unemployment is not individuals dropping out of the labour force, but an economy that isn’t creating jobs very rapidly. So welfare is not acting as a disincentive to work, in this case. It is acting as supplementary income for those who cannot otherwise find an opening in the economy due to factors like job migration and automation reducing the level of labour desired by employers.

Under other conditions, it is possible that welfare payments could act as a disincentive to work. If there were a low number of applicants per opening, then welfare that paid better than the lowest-paid jobs available could be seen as a disincentive to work. But now, with job openings at a very low level? Don’t be ridiculous.


The Trouble With Shadowstats

Often, when I talk about inflation being low, people who disagree tend to cite John Williams’ Shadowstats as evidence that price inflation is not low at all.

Now, I don’t disagree with the idea that some people have experienced a higher level of price inflation than the CPI. Everyone experiences a different rate of inflation based on their purchasing habits, so by definition everyone’s individual rate will diverge from the official rate to some degree; some will be higher, and some will be lower. And I don’t disagree that rising food and fuel prices have been a problem for welfare recipients and seniors on a fixed income, etc, who spend a higher proportion of their income on food and fuel than, say, young professionals with a lot of disposable income.

What I do disagree with is bad statistical methodology. Shadowstats is built on the belief that the Bureau of Labor Statistics changed its methodology in the 1980s and 1990s, and that if the original methodology were still in use, the measured level of inflation would be much higher. Shadowstats presents what it claims to be the original methodology. But Shadowstats is not calculating inflation any differently. It is not using the 1980s or 1990s methodology that it believes would yield higher numbers. All Shadowstats is doing is taking the CPI data and adding on an arbitrary constant to make it look like inflation is higher!

This should be obvious from their data, which has the exact same curve as the CPI data at a higher level:

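The point is easy to demonstrate with a toy example (the numbers below are synthetic, chosen purely for illustration, not actual CPI figures): add a constant to any series and every period-to-period movement is preserved, so the "alternate" curve is just the original curve sitting at a higher level.

```python
# Toy illustration: adding a constant "ad factor" to a data series
# shifts it upward but leaves the shape of the curve unchanged.

official_cpi = [2.1, 1.9, 2.4, 3.0, 1.5, 1.1, 2.8]  # hypothetical YoY % figures
AD_FACTOR = 7.0  # an arbitrary constant added on top

shadow_style = [x + AD_FACTOR for x in official_cpi]

# The period-to-period changes of the two series are identical, so the
# two curves have exactly the same shape at different levels.
official_changes = [b - a for a, b in zip(official_cpi, official_cpi[1:])]
shadow_changes = [b - a for a, b in zip(shadow_style, shadow_style[1:])]

assert all(abs(a - b) < 1e-9 for a, b in zip(official_changes, shadow_changes))
```

Since the differences between consecutive points are unchanged, any plot of the two series shows parallel curves, which is exactly what the Shadowstats chart looks like.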

In fact, according to James Hamilton of Econbrowser, John Williams admitted in 2008 that his numbers are just inflated CPI data:

Last month I called attention to an analysis by BLS researchers John Greenlees and Robert McClelland of some of the claims by John Williams of Shadowstats about the consequences for reported inflation of assorted technical decisions made by the BLS. Williams asked me to update with a link to his response to the BLS study. I am happy to do so, along with offering some further observations of my own.

You can follow the link to Shadowstats’ response to Greenlees and McClelland and judge for yourself, but my impression is that the response is more philosophical than quantitative. In a separate phone conversation, Williams further clarified the Shadowstats methodology. Here’s what John said to me: “I’m not going back and recalculating the CPI. All I’m doing is going back to the government’s estimates of what the effect would be and using that as an ad factor to the reported statistics.”

Price changes and inflation are important topics, and constructing alternate measures of inflation is a worthwhile activity. Researchers at MIT have tried to do this with their Billion Prices Project, which measures price trends across a much, much larger range of products and locations than CPI:


What the Billion Prices Project implies for Shadowstats is that the CPI is roughly correct, and there is no vast divergence between real-world price trends and the CPI number. Of course, maybe the 1980s and 1990s methodology would be different from the current numbers. It would be very interesting to compare the current CPI methodology with the older CPI methodologies and with the BPP data! But assessing this empirically would require someone to mine through the raw CPI data since the 1980s and recalculate the outputs with the real earlier methodology — a far longer, more difficult and sophisticated process than taking the CPI outputs and adding an arbitrary constant!

What Are Interest Rates And Can They Be Artificially Low Or High?

Many economic commentators believe that interest rates in America and around the world are “artificially low”. Indeed, I too have used the term in the past to refer to the condition in Europe that saw interest rates across the member states converge to a uniformly low level at the introduction of the Euro, only to diverge and soar in the periphery during the ongoing crisis.

So what is an interest rate? An interest rate is the cost of money now. As Eugen von Böhm-Bawerk noted, interest rates result from people valuing money in the present more highly than money in the future. If a business is starting out and has insufficient capital to carry out its plans, it will seek investment, either through selling equity in the ownership of the business, or through credit from lenders. For a lender, an interest rate is their profit for giving up the spending power of their capital to another who desires it now, attached to the risk that the borrower will default.

In monetary economies, money tends to be relatively scarce. In a commodity-based monetary system, the level of scarcity is determined by the physical limits of how much of a commodity can be pulled out of the ground. In a fiat-based monetary system, there is no such natural scarcity, but money’s relative scarcity is controlled by the banking system and central bank that lends it into the economy. If money were distributed infinitely widely and freely, there would be no such thing as an interest rate, as there would be no cost to obtaining money now, just as there is no cost to obtaining a widely-distributed and freely-available commodity like air (at least on the face of the Earth!). Without scarcity money would lose its usability as a currency, as there is no incentive to trade for a substance which is uniformly and effectively infinitely available to everyone. So an interest rate is not only the cost of money, but also a symptom of its scarcity (and, as Keynes pointed out, a key mechanism through which rentiers profit).

So, where does the idea that interest rates can be made artificially low or artificially high arise from?

The notion of an artificially low or high interest rate implies the existence of a natural interest rate, from which the market rate diverges. It is a widely-held notion, and indeed, Ron Paul made reference to the notion of a natural rate of interest in his debate with Paul Krugman last year. A widely-used definition of the “natural rate of interest” appears in Wicksell (1898):

There is a certain rate of interest on loans which is neutral in respect to commodity prices, and tends neither to raise nor to lower them.

This is easy to define and hard to calculate: it is whatever rate of interest yields zero inflation. Because interest rates have a nonlinear relationship with inflation, it is difficult to say precisely what the natural interest rate is at any given time, but under Wicksell’s definition a positive inflation rate implies that the market rate is below the natural rate, and a negative inflation rate implies that the market rate is above it. (Interestingly, the historical Federal Funds Rate comes pretty close to loosely tracking the historical CPI inflation rate, despite questions of whether the CPI really reflects the true price level, given that it excludes housing and equity markets, which often record much greater gains or losses than consumer prices.)
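Wicksell's criterion can be restated as a simple sign rule. Under his logic, a market rate below the natural rate expands credit and raises prices, while a market rate above it depresses them, so the sign of inflation tells you which side of the (unobservable) natural rate the market rate sits on. A minimal sketch, purely illustrative:

```python
def market_vs_natural(inflation_rate):
    """Classify the market rate of interest relative to Wicksell's
    natural rate, using only the sign of observed inflation."""
    if inflation_rate > 0:
        return "market rate below natural rate"  # cheap credit, rising prices
    if inflation_rate < 0:
        return "market rate above natural rate"  # dear credit, falling prices
    return "market rate equals natural rate"     # price stability

print(market_vs_natural(0.02))   # persistent 2% inflation
print(market_vs_natural(-0.01))  # deflation
```

The hard part, of course, is that the natural rate itself is never directly observed; this rule only infers its position after the fact from price behaviour.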

The notion of a natural rate of interest is interesting and helpful — certainly, high levels of inflation can be challenged by raising interest rates (or more generally restricting credit availability), and deflation can be challenged by lowering interest rates (or more generally increasing credit availability). If the goal of monetary policy is price stability, then the notion of a “natural interest rate” as a guide for monetary policy is useful.

But policies of macrostabilisation have been strongly questioned by the work of Hyman Minsky, which posited the idea that stability is itself destabilising, because it leads to overconfidence which itself results in malinvestment and credit and price bubbles.

Austrian Business Cycle Theory (ABCT), developed by Ludwig von Mises and Friedrich Hayek, most influentially in Mises’ 1912 work The Theory of Money and Credit, theorises that the business cycle is caused by credit expansion (often fuelled by excessively low interest rates) which pours into unsustainable projects. The end of this credit expansion (as a result of excessive leverage, the failure of unsustainable projects, general overproduction, or some other cause) results in a panic and bust. According to ABCT, the underlying issue is that the banking system makes money cheaply available, and the market rate of interest falls beneath the natural rate of interest, manifesting as price inflation.

I do not dispute the idea that bubbles tend to coincide with credit expansion and easy lending. But it is tough to say whether credit expansion is a consequence or a cause of the bubble. What is the necessary precursor of an unsustainable credit expansion? Overconfidence: the belief that prices will just keep going up, when sooner or later the credit expansion will run out of steam. This could be the overconfidence of central bankers, who believe that macrostabilisation policies have produced a “Great Moderation”, or the overconfidence of traders who hope to get rich quick, or the overconfidence of homeowners who see rising home prices as an easy opportunity to remortgage and consume more, or the overconfidence of private banks who hope to make bumper gains on loans or loan-related securities (Carl Menger noted that fractional reserve banking and credit-fuelled bubbles originated in economies with no central bank, in contradiction of those ABCT-advocates who go so far as to say that without central banking there would be no business cycle at all).

And is price stability really “natural”? Wicksell (and other advocates of a “natural rate of interest” like RBCT and certain Austrians) seem to imply so. But why should it be the norm that prices are stable? In competitive markets — like modern day high-tech markets — the tendency may be toward deflation rather than stability, as improving technology lowers manufacturing costs, and firms lower prices to stay competitive with each other. Or in markets for scarce goods — like commodities of which there exists a limited quantity — the tendency may be toward inflation, as producers may have to spend more to extract difficult-to-extract resources from the ground. Ultimately, human action in market activity is unpredictable and determined by the subjective preferences of all market participants, and this applies as much to the market for money as it does for any market. There is no reason to believe that prices tend toward stability, and the empirical record shows a significant level of variation in price levels under both the gold standard and the modern fiat system.

Ultimately, if interest rates are the cost of money, and in a fiat monetary system the quantity and availability of money is determined by lending institutions and the central bank, how can any interest rate not be artificial (i.e. an expression of the subjective opinions, forecasts and plans of those involved in determining the availability of credit and money including governments and central bankers)? Even under a commodity-money system, the availability of money is still determined by the lending system, as well as the miners who pull the monetary commodity or commodities out of the ground (and any legal tender laws that define money, for example monetising gold and demonetising silver).

And if all interest rates in contemporary markets are to some degree artificial this raises some difficult questions, because it means that the availability of capital, and thus the profitability (or unprofitability) of rentiers are effectively policy choices of the state (or the central bank).

Are Markets Informationally Efficient?

A key assumption in many mainstream macroeconomic models (both formal and informal) is the Efficient Market Hypothesis. Very simply, this is the belief that markets are informationally efficient — that they reflect information with little (or no) delay, leaving few (or no) arbitrage opportunities.

So the real question here is what information do markets (and by markets, I mean free markets where market participants are free to pay and receive any negotiated value for an asset) really reflect? When we see the price of an asset or asset class fluctuating, what does this movement signify? Is it a fluctuation in the fundamental value of the asset? Is it just a fluctuation in the market’s perception of an asset? Is it some combination of these two factors? Or is it just random noise? Fundamentally, markets are composed of a series of transactions, each between a bidder and a seller. Each transaction in itself reflects a discrete set of information — specifically, what the bidder is willing to pay, and the seller willing to accept, for that specific asset. This in turn is typically (although not always!) influenced by some or all of the following: what others are willing to pay and accept for an asset presently, the use-value of an asset, notions of fundamental value (price-over-earnings, stock-to-flow, EBITDA, cashflow, etc), notions of momentum and what others may be willing to pay and accept for an asset in the future (trendlines, gut feelings, “hot stock tips”, etc).

An intriguing addendum to this is that the automation of trading (high-frequency trading) has created bidders and sellers who are acting on the instructions of algorithms. As these instructions are programmed by humans — usually automating some form of technical analysis — the only real difference is that of (extreme) speed. The beliefs reflected in high-frequency trading reflect the underlying algorithmic instructions programmed by the humans who created the algorithm.

Ultimately, whenever we purchase an asset for the purpose of speculation or investment (and even use-value — prices can change, and the price we paid last week or last year could end up looking very expensive, or very cheap) we are taking a guess as to whether the current bid or sell value is worth it. Each agent makes their guess based on a different set of data and expectations. What the prices in markets signify is the operation of this mechanism — different agents evaluating information and making guesses about the future.

Let’s consider the example of Bitcoin, the price of which is currently soaring. Some choose to buy bitcoins based on momentum, a liking of the cryptography, Bitcoin’s inherent deflationary bias, or some other positive belief; this is a speculation that the price may continue to climb. Some may choose to buy bitcoins based on their use-value as an anonymous, decentralised currency that can be used to buy a wide array of things. Holders of bitcoins may be motivated to sell by the fact that the price has risen since they bought or mined their coins, by the belief that Bitcoin is “in a bubble”, or by some other negative belief.

What the market reflects is the net weight of different opinions and resultant human actions. If those who are motivated to buy outweigh those who are motivated to sell, the price rises, and vice versa. This means that the beliefs of big players in a particular market can have strange and disproportionate effects. Consider the effect of the Hunt Brothers’ attempt to corner the silver market in 1980. The price of silver rose from $11 an ounce in September 1979 to almost $50 an ounce in January 1980, as the Hunt Brothers bought more and more. The market was very efficient at reflecting the fact that the Hunt Brothers were willing to buy more than the market could supply at lower prices. And once the Hunt Brothers faced margin calls, the market quickly adjusted to reflect the fact that they were now selling instead of buying, and prices fell.
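The mechanism can be sketched as a toy price-impact model (purely illustrative; the volumes, impact coefficient, and the number of periods are all invented): each period the price moves in proportion to the net weight of buying versus selling pressure.

```python
# Toy model: price moves with normalised net order flow.

def update_price(price, buy_volume, sell_volume, impact=0.001):
    """Nudge the price by a small fraction of net buying pressure."""
    net_flow = buy_volume - sell_volume
    total = max(buy_volume + sell_volume, 1)
    return price * (1 + impact * net_flow / total)

# A single large buyer (think the Hunt Brothers) persistently
# outweighing the sellers pushes the price up period after period.
price = 11.0  # silver, September 1979
for _ in range(50):
    price = update_price(price, buy_volume=900, sell_volume=100)

assert price > 11.0  # sustained net buying has driven the price higher
```

Flip the volumes (forced selling into thin demand, as after the margin calls) and the same mechanism drives the price back down just as mechanically.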

That’s what (transparent) markets are guaranteed to reflect — bidders and sellers, supply and demand. This information is still useful to firms trying to gauge what, and how much, to produce. Everything else — the information that bidders and sellers are acting upon — is not necessarily reflected in market activity. Very often, bidders and sellers are brought to the market by new information regarding a large number of things — price changes, earnings, business decisions, technologies and inventions, macroeconomic data, etc — but there is no systematic or reliable way to predict what humans will respond to, or how they will respond. Human psychology and human action in this sense is totally unstable and nonlinear — consider the recent contrast in market reaction to earnings data from Apple and Google. This instability is an alternative explanation for why consistently beating the market is indeed very difficult, as the Efficient Market Hypothesis implies.

And prices do not even reflect an aggregation of sentiment toward an asset or asset class — they only reflect the sentiment of those who are involved in the market, in proportion to their level of buying and selling activity. This means that the opinions of big players who buy or sell a lot, are reflected many times more than those of small players who buy or sell a little. And irrationality can create a feedback loop — if stock prices are rising, and macroeconomic fundamentals are weak, many market participants may initially be sceptical. Yet as more participants pile into the stock market purely for reasons of sustained upward momentum, more and more participants may begin to suspend their disbelief, if only to not miss out on a profit opportunity. This is one mechanism (of infinitely many) through which price bubbles can form.

Yet accurately reflecting supply and demand is not the same thing as informational efficiency. Empirical data show that arbitrage opportunities are widely exploitable and exploited even in modern markets; one of the largest forms of high-frequency trading is, of course, statistical arbitrage. This reality should probably be a final nail in the coffin of the idea that markets reflect anything more than the actions of bidders and sellers. Unfortunately, very many models rest on the assumption of informational efficiency in markets, meaning that this approach is very unlikely to die out any time soon.
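To make the statistical arbitrage point concrete, here is a minimal sketch of the idea (illustrative only, not a trading strategy; the prices and threshold are invented): track the spread between two historically linked assets and signal a mean-reversion trade when the spread strays far from its historical average.

```python
# Minimal statistical-arbitrage sketch: trade on the z-score of a spread.

from statistics import mean, stdev

def stat_arb_signal(spread_history, current_spread, threshold=2.0):
    """Return a trade signal from the z-score of the current spread."""
    mu, sigma = mean(spread_history), stdev(spread_history)
    z = (current_spread - mu) / sigma
    if z > threshold:
        return "sell A / buy B"  # spread unusually wide: bet it narrows
    if z < -threshold:
        return "buy A / sell B"  # spread unusually narrow: bet it widens
    return "no trade"

history = [1.00, 1.10, 0.90, 1.05, 0.95, 1.00, 1.02, 0.98]
print(stat_arb_signal(history, 1.60))  # → sell A / buy B
print(stat_arb_signal(history, 1.01))  # → no trade
```

If markets were perfectly informationally efficient, such deviations would be arbitraged away before any slower participant could detect them; the fact that whole firms profit from this systematically is the point being made above.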

What is Profit?

In neoclassical macroeconomic models that assume perfect competition, there can in the long run be no such thing as profit — defined as revenue left over after all costs have been subtracted.

Clearly, in the real world where many businesses have lived and died profitably there is no such thing as perfect competition, and therefore the neoclassical models that treat profit as a short-run anomaly are working from an unrealistic assumption.

My definition is that profit is what happens when a business’s input transactions are priced lower than its output transactions: the total the business pays to those it buys its inputs from is less than the total it receives from those it sells its goods and services to. Because transactions are assumed to be voluntary — and when they are not voluntary, any residual gain is theft, not profit — and businesses are assumed to negotiate the best price for both inputs and outputs, any profit arises because those who purchase the business’s output value it more highly than those who sold the business its inputs valued them. A transactor might accept a profit-creating price for any number of perceived reasons: convenience, expertise, prestige, necessity, or even outright trickery. The decision to accept the price is subject to each party’s own subjective valuation, and it is the difference between prices that creates the profit.
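This definition reduces to simple arithmetic over a business's ledger of transactions. A minimal sketch (the numbers are hypothetical):

```python
# Profit as the sum of output transactions minus the sum of input
# transactions, per the definition above.

input_transactions = {"raw materials": 400, "labour": 900, "rent": 200}
output_transactions = [650, 700, 480]  # prices paid by customers

total_inputs = sum(input_transactions.values())   # 1500
total_outputs = sum(output_transactions)          # 1830

profit = total_outputs - total_inputs
print(profit)  # → 330
```

Notably, labour enters here as just one input transaction among several, which is exactly the point made against the surplus-value framing below: the profit emerges from the pricing of all the transactions together, not from any single one.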

Marx and Lenin represented this idea as surplus value: that businesses make a profit by extracting uncompensated labour value out of their workers. But why single out labour among the transactors? In my view, profit is derived from the sum of the business’s transactions with all of its transactors: consumers, suppliers, labour and so on. Workers cannot extract a greater share of the firm’s revenue than they can negotiate, and at various points in history (including the present day) the working class seems to have had little real leverage for negotiation.

In my view, any model that attempts to represent real world markets should begin from the historical fact of profit and loss, and the historical fact of a disequilibrium between input transactions and output transactions.

A Critique of the Methodology of Mises & Rothbard

I find myself in the middle of a huge blowup between Max Keiser and Tom Woods over Mises, Menger and Austrian economics and feel that this is an opportune moment to express some doubts I have regarding contemporary Austrian methodology.

I am to some extent an Austrian, on three counts.

First, I subscribe to the notion that value is subjective; that the value of goods and services differs between individuals, because they serve various uses to various users, and that value is entirely in the eye of the beholder.

Second, I subscribe to the notion that free markets succeed because of the sensitive price feedback mechanism that allocates resources according to the real underlying shape of supply and demand, and that, conversely, the successful long-term allocation of labour, capital and resources by a central planner is impossible (or extremely unlikely) because of the lack of a market feedback mechanism.

Third, I subscribe to the notion that human thought is neither linear nor rational, and the sphere of human behaviour is complicated and multi-dimensional, and that attempts to model it using linear, mechanistic methods will in the long run tend to fail.

It is not, then, the overall drift of Misesean-Rothbardian economics that I find problematic — indeed, I often find myself drawing similar conclusions by different means — but rather the methodology.

I reached my views — some of which new evidence will eventually wash away — through a lot of theorising mixed with much careful observation and consideration of case studies, historical examples and all sorts of real world data. I love data; and one of the things that attracted me toward thinking and writing about economics is the beautiful superabundant growth of new data opened up to the world by computers and the internet. No, it is not universal or complete, and therefore building a perfect predictive model is not possible, but that is not the point. If I want to know how the corn price in the USA moved during the first half of the twentieth century, the data is accessible. If I want to know the rate of GDP growth in Ghana in 2009, the data is accessible. If I want to know the crime rate in France, the data is accessible.

Miseseans choose to reach their conclusions not from data, but from praxeology: pure deduction and logic.

This is quite unlike the early Austrians like Menger who mainly used a mixture of deductionism and data.

According to Rothbard:

Praxeology rests on the fundamental axiom that individual human beings act, that is, on the primordial fact that individuals engage in conscious actions toward chosen goals. This concept of action contrasts to purely reflexive, or knee-jerk, behavior, which is not directed toward goals. The praxeological method spins out by verbal deduction the logical implications of that primordial fact. In short, praxeological economics is the structure of logical implications of the fact that individuals act.

And Mises:

Our statements and propositions are not derived from experience. They are not subject to verification or falsification on the ground of experience and facts.

This is completely wrongheaded. All human thought and action is derived from experience; Mises’ ideas were filtered from his life, filtered from his experience. That is an empirical fact: Mises lived, Mises breathed, Mises experienced, Mises thought. Nothing Mises or his fellow praxeologists have written can be independent of that — it was all ultimately derived from human experience. And considering the Austrian focus on subjectivity, it is bizarre that Mises and his followers’ economic paradigm is wrapped around the elimination of experience and subjectivity from economic thought.

If, as I often do, I produce a deductive hypothesis — for instance, that the end of Bretton Woods might produce soaring income inequality — it is essential that I refer to data to show whether or not my hypothesis is accurate. If I make a deductive prediction about the future, it is essential that I refer to data to determine whether or not my prediction has been correct.

Exposing a hypothesis to the light of evidence augments its strong parts and washes away its weaker ones. When the evidence changes, I change my opinion irrespective of what my deductions led me to believe or what axioms those deductions were based upon. Why reach the conclusion that central planning can induce civilisational failure through pure logic when the historical examples of Mao’s China and Stalin’s Russia and Diocletian’s Rome illustrate this in gory detail?

This is elementary stuff. Deduction is important — indeed, it is a critical part of forming a hypothesis — but deductions are confirmed and denied not by logic, but by the shape of the evidence. In rejecting modelling — which has produced fallacious work like DSGE and RBCT, but also some relatively successful models like those of Minsky and Keen — praxeologists have made the mistake of rejecting empiricism entirely. This has confined their methods to a grainier simulation: that of their own verbal logic.

It is not necessary to define a framework through mathematical models in order to practice empirical economics. Keynes was cited by Rothbard in support of the notion that economics should not be fixated on mathematical models:

It is a great fault of symbolic pseudo-mathematical methods of formalizing a system of economic analysis, that they expressly assume strict independence between the factors involved and lose all their cogency and authority if this hypothesis is disallowed: whereas, in ordinary discourse, where we are not blindly manipulating but know all the time what we are doing and what the words mean, we can keep “at the back of our heads” the necessary reserves and qualifications and the adjustments which we have to make later on, in a way in which we cannot keep complicated partial differentials “at the back” of several pages of algebra which assume that they all vanish. Too large a proportion of recent “mathematical” economics are mere concoctions, as imprecise as the initial assumptions they rest on, which allow the author to lose sight of the complexities and interdependencies of the real world in a maze of pretentious and unhelpful symbols.

And I agree. But nowhere did any of the figures cited by Rothbard (not Keynes, nor Wild, nor Frola, nor Menger) endorse a wholly deductionist framework. All of these theorists wanted to work with reality, not play with logic. Create a theory; test; refine; test; refine; etc.

Praxeologists claim that praxeology does not make predictions about the future, and that any predictions made by praxeologists are not praxeological predictions, but instead are being made in a praxeologist’s capacity as an economic historian. But this is a moot point; all predictions about the future are deductive. Unless predictions are being made using an alien framework (e.g. a neoclassical or Keynesian model) what else is the praxeologist using but the verbal and deductive methodology of praxeology?

It has been the predictive success of contemporary Austrian economists — at least in identifying general trends often ignored by the mainstream — that has drawn young minds toward Misesean-Rothbardian economics.

Of those economists who predicted the 2008 crisis, a significant number were Austrians:

Yet Miseseans, including Peter Schiff, damaged their hard-earned credibility with a series of failed predictions of imminent interest-rate spikes and hyperinflation of the dollar by 2010.

That is not to say that interest rate spikes and high inflation cannot emerge further down the line. But these predictive failures were symptomatic of deduction-oriented reasoning; Miseseans who forewarned of imminent hyperinflation over-focused on their deduction that a tripling of the monetary base would produce huge inflation, while ignoring the empirical reality of Japan, where a huge post-housing-bubble expansion of the monetary base produced no such huge inflation. Reality is often far, far, far more complex than either mathematical models or verbal logic anticipates.

Like all sciences, economics should be driven by data. For if we are not driven by data, then we are just daydreaming.

As Menger — the Father of Austrianism, who favoured a mixture of deductive and empirical methods — noted:

The merits of a theory always depends on the extent to which it succeeds in determining the true factors (those that correspond to real life) constituting the economic phenomena and the laws according to which the complex phenomena of political economy result from the simple elements.

Praxeology is leading Austrian economics down a dead end.

Austrianism would do well to return to its root — Menger, not Mises.

The Origin of Money

Markets are true democracies. The allocation of resources, capital and labour is achieved through the mechanism of spending, and so based on spending preferences. As money flows through the economy, the popular grows and the unpopular shrinks. Producers receive a signal to produce more or less based on spending preferences. Markets distribute power according to demand and productivity; the more you earn, the more power you accumulate to allocate resources, capital and labour. As the power to allocate resources (i.e. money) is widely desired, markets encourage the development of skills, talents and ideas.

Planned economies have a track record of failure, in my view because they do not have this democratic dimension. The state may claim to be “scientific”, but as Hayek conclusively illustrated, the lack of any real feedback mechanism has always led planned economies into hideous misallocations of resources, the most egregious example being the collectivisation of agriculture in both Maoist China and Soviet Russia that led to mass starvation and millions of deaths. The market’s resource allocation system is a complex, multi-dimensional process that blends together the skills, knowledge, and ideas of society, and for which there is no substitute. Socialism might claim to represent the wider interests of society, but in adopting a system based on economic planning, the wider interests and desires of society and the democratic market process are ignored.

This complex process begins with the designation of money, which is why the choice of the monetary medium is critical.

Like all democracies, markets can be corrupted.

Whoever creates the money holds a position of great power — the choice of how to allocate resources is in their hands. They choose who gets the money, and for what, and when. And they do this again and again and again.

Who should create the monetary medium? Today, money is designated by a central bank and allocated through the financial system via credit creation. Historically, in the days of commodity-money, money was initially allocated by digging it up out of the ground. Anyone with a shovel or a gold pan could create money. In the days of barter, a monetary medium was created even more simply, through producing things others were happy to swap or credit.

While central banks might claim that they have the nation’s best democratic interests at heart, evidence shows that since the world exited the gold exchange standard in 1971 (thus giving banks a monopoly over the allocation of money and credit), bank assets as a percentage of GDP have exploded (this data is from the United Kingdom, but there is a similar pattern around the world).

Clearly, some pigs are more equal than others:

Giving banks a monopoly over the allocation of capital has dramatically enriched banking interests. It is also correlated with a dramatic fall in total factor productivity, and a dramatic increase in income inequality.

Very simply, I believe that the present system is inherently undemocratic. Giving banks a monopoly over the initial allocation of credit and money enriches the banks at the expense of society. Banks and bankers — who produce nothing — allocate resources to their interests. The rest of society — including all the productive sectors — get crumbs from the table. The market mechanism is perverted, and bent in favour of the financial system. The financial system can subsidise incompetence and ineptitude through bailouts and helicopter drops.

Such a system is unsustainable. The subsidisation of incompetence breeds more incompetence, and weakens the system, whether it is government handing off corporate welfare to inept corporations, or whether it is the central bank bailing out inept financial institutions. The financial system never learned the lessons of 2008; MF Global and the London Whale illustrate that. Printing money to save broken systems just makes these systems more fragile and prone to collapse. Ignoring the market mechanism, and the interests of the wider society to subsidise the financial sector and well-connected corporations just makes society angry and disaffected.

This monopoly will eventually discredit itself through the subsidisation of graft and incompetence. It is just a matter of time.