Deflation is Here — And The Government is Poised to Make it Worse

Consumer prices may not be deflating as quickly as Labour’s electoral chances did earlier this month, but — even after £300 billion of quantitative easing — price deflation for the first time in more than half a century is finally here. The Bank of England continues to throw everything at keeping prices rising at close to their 2 percent target. Yet it’s not working. And this is not just about cheaper oil. Core inflation has also been dropping like a rock.

I argued that “deflation was looming” for Britain last year, and feel a little vindicated that it has come to pass. But I don’t feel at all gratified about the thing itself.

In a highly indebted economy such as Britain’s — where private debt dwarfs government debt — deflation is a dangerous thing. Past debts — and the interest rates paid on those debts — are nominally rigid. Unless specifically stipulated as inflation-adjusted (like TIPS or index-linked gilts), they don’t scale with price changes in the broader economy.

Under positive inflation, the real burden of debt shrinks over time: a fixed nominal sum of currency commands a steadily smaller amount of goods, services and labour. Under deflation, the opposite occurs: the real value of currency — and of historical debt — rises, making the debt harder to service and pay down, especially as interest continues to accumulate.
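To make the arithmetic concrete, here is a toy sketch; the debt level, interest rate and inflation figures are illustrative assumptions, not data for any real borrower:

```python
# Toy illustration: the real burden of a fixed nominal debt after ten years
# of mild inflation versus mild deflation. All figures are hypothetical.

def real_debt_burden(nominal_debt, nominal_rate, inflation, years):
    """Real value of the debt (in year-0 money), assuming interest accrues
    at a fixed nominal rate and no repayments are made."""
    nominal_value = nominal_debt * (1 + nominal_rate) ** years
    price_level = (1 + inflation) ** years
    return nominal_value / price_level

debt = 100_000   # £100,000 owed today
rate = 0.03      # 3% nominal interest, fixed in the loan contract

for inflation in (0.02, -0.02):
    label = "inflation" if inflation > 0 else "deflation"
    burden = real_debt_burden(debt, rate, inflation, years=10)
    print(f"After 10 years of 2% {label}: real burden of about £{burden:,.0f}")

# 2% inflation: about £110,000 in real terms (inflation eats away at the debt).
# 2% deflation: about £164,000 in real terms; the same contract weighs far more.
```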

On the face of it, that is good news for net savers and bad news for net debtors. But raising the difficulty of deleveraging and debt service can often be bad for both, because debtors who cannot pay default, bankrupting themselves and injuring their creditors. It can also depress the economy, as individuals and firms are forced to stop spending and investing and start devoting more and more of their income to the rising real cost of deleveraging.

With growth last quarter dropping to 0.3 percent from 0.6 percent, this process might very well already be under way, raising the prospect of the nightmarish debt-deflationary spiral described above.

The last thing the economy needs in those circumstances is more money being sucked out of it through slashing public spending. Sucking money out of the economy will make deleveraging even more difficult for debtors, and slow growth further as individuals and firms adjust their spending plans to lower levels of national and individual income. Yet that is the manifesto the country elected to power earlier this month. And although Osborne and Cameron can get out of it — by offsetting spending cuts with tax cuts — if they go through with their election promises, the prospect of recession, continued deflation and rising unemployment looms large.

What the economy really needed in 2010 was a deep and long commitment to public stimulus to provide the economic growth needed to let the private sector deleverage. Unlike the public sector, which is a sovereign borrower issuing debt in its own currency, the private sector is far from a secure debtor. Private borrowers can — unlike the central government — “become the next Greece” and run out of money.

With interest rates in the last parliament having sunk to new historic lows, such a thing was affordable and achievable. Instead, by trying to do public deleveraging at the same time as the private sector was deleveraging, Osborne, Cameron and Clegg chose a much rockier path, one in which private deleveraging and public deleveraging alike are slow and grinding. With private debt levels still very high, the country remains vulnerable to another deleveraging-driven recession.

Confessions of a 21st Century Lionel Robbins

It is often said that history does not repeat itself.

But sometimes, history does. This may be particularly true in the domain of economics and economic knowledge.

Economics is a complex subject. It is the study of a series of abstractions (markets, firms, governments) built upon abstractions (money) built upon abstractions (value), a great metaphysical muddle stacked up to the sky like a wobbly Jenga tower.

At a simpler level, economics is the study of human behaviour, or as von Mises put it, human action. Business cycles, from boom to bust, are a great cavalcade of individual human actions and decisions. The sum of those individual decisions manifests itself as the market. And the behaviour of the market informs future individual decisions, in a great self-reinforcing spiral of feedback.

The question of what the government should do in a bust is one particular realm where little progress has been made since the Great Depression. Indeed, perhaps future historians will refer to the current economic malaise as Great Depression II, given that in various polities, including Britain and the eurozone, the slump in gross domestic product has been deeper and longer-lasting than that of the 1930s.

Why? Well, the urge to purge is strong. And I say that as a former austerian — someone who has done a good deal of urging for purging.

The logic goes that a crashed market is a sick market. A corrupted market. So let the sickness kill off the weakness. Let bad firms fail. Let the bad investments fuelled by false expectations fail. Let resources be reallocated from the unproductive to the productive. Then — through harsh market discipline — market participants will learn lessons that set the stage for abundant new economic growth. As Paul Krugman notes, Friedrich Hayek called for: “the most speedy and complete adaptation possible of the structure of production to the proportion between the demand for consumers’ goods and the demand for producers’ goods as determined by voluntary saving and spending.”

It’s a simple call: no bailouts, no stimulus spending. In fact, cut government spending to balance the budget. Stop subsidising the unproductive. Let the voluntary market of savers, investors and spenders sort itself out.

The trouble is that it doesn’t really sort itself out. And we have a good deal of empirical evidence to back up that idea. The more austerity, the deeper the slump.

Lionel Robbins — who went on a similar journey away from austerianism, only eighty years earlier — reached this conclusion:

Whatever the genetic factors of the pre-1929 boom, their sequelae, in the sense of inappropriate investments fostered by wrong expectations, were completely swamped by vast deflationary forces sweeping away all those elements of constancy in the situation which otherwise might have provided a framework for an explanation in my terms. The theory was inadequate to the facts. Nor was this approach any more adequate as a guide to policy. Confronted with the freezing deflation of those days, the idea that the prime essential was the writing down of mistaken investments and the easing of capital markets by fostering the disposition to save and reducing the pressure on consumption was completely inappropriate.

To treat what developed subsequently in the way which I then thought valid was as unsuitable as denying blankets and stimulants to a drunk who has fallen into an icy pond, on the ground that his original trouble was overheating.

This is a powerful debunking of David Cameron’s confidence-fairy logic. The idea that the answer to an economic malaise is to slash spending, slash welfare and let the market sort itself out is a bit ridiculous. It is very much denying blankets and stimulants to people who have been thrown into an icy pond. The great irony is that the brunt of the cuts is falling not upon the drunk market speculators who caused the financial crisis, but upon the vast swathes of disabled and poor people who had nothing to do with it.

In the face of Cameron’s austerity policy, the UK had the slowest recovery since the South Sea bubble in the 18th century. And the UK’s recovery only really began when Cameron paused the austerity. With more austerity now, we risk another recession.

The trouble, I think, is that government austerity looks superficially plausible. It looks like the responsible thing to do.

For an individual or a firm, austerity in a slump can be a very sensible choice: taking on an unsustainable level of debt risks bankruptcy. And while it is not incoherent to suggest that someone in debt trouble take on more debt as a means of dramatically increasing their income — for instance, through entrepreneurship — it is undoubtedly a major gamble.

But for government, it’s very much the other way round. As Keynes noted, government spending makes up an awful lot of people’s incomes. Even if some businesses are cheered by the “fiscal responsibility” shown by a government willing to slash spending, many others will be less cheered as their revenue falls along with the incomes of the individuals and firms who rely on government spending for a portion of their income.

That’s not to say it is impossible for slashing spending in the middle of a recession to be consistent, under some theoretical circumstances, with a fast-recovering economy. But in the overwhelming majority of cases, it is a very, very major gamble that does not pay off. It hasn’t paid off in the eurozone since 2008, it didn’t pay off during the Brüning years in Germany, and it didn’t pay off for Cameron and Osborne, who were forced to pause austerity in 2013 with the economy flatlining. And in a very major vindication of Keynesian thinking, not only did that pause precede a recovery, it was also the turning point for the government deficit as a percentage of GDP. Once the slashing stopped, the deficit began to fall fast. So the responsible thing to do was to rack up government debt, let the government channel resources back into the economy, and let the economy recover.
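To see why cutting spending can fail to cut the deficit ratio, here is a stylized sketch; the multiplier and tax share below are illustrative assumptions, not estimates for the UK or any real economy:

```python
# Stylized sketch: a spending cut shrinks GDP via the multiplier, and the
# lost output drags down tax revenue, so the deficit-to-GDP ratio improves
# far less than the headline cut suggests. The multiplier and tax share
# are illustrative assumptions only.

def deficit_ratio_after_cut(gdp, deficit, cut, multiplier=1.5, tax_share=0.35):
    """Deficit/GDP after a spending cut, under a simple Keynesian multiplier."""
    lost_output = cut * multiplier           # GDP falls by the cut times the multiplier
    lost_revenue = tax_share * lost_output   # a weaker economy yields less tax
    return (deficit - cut + lost_revenue) / (gdp - lost_output)

gdp, deficit = 1800.0, 90.0                  # £bn: a deficit of 5% of GDP
print(f"Before the cut:    {deficit / gdp:.2%}")
print(f"After a £20bn cut: {deficit_ratio_after_cut(gdp, deficit, 20.0):.2%}")

# Before: 5.00%. After: about 4.55%. A cut worth over 1% of GDP shaves less
# than half a point off the ratio; with a larger multiplier or tax share,
# the ratio can fail to fall at all.
```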

This is all very counterintuitive stuff. It is a hard lesson to spread, which is why it has to keep being rediscovered again and again and again. I am sure there will be people in the 22nd Century who have to learn this lesson, too.

Tesla & The New Economics Of The Coming Renewable Energy Boom

I don’t need to tell anyone of the importance of Tesla’s expansion into home battery technology. A home battery lets you store solar energy to use when the sun isn’t shining, which is a really, really major thing in terms of power distribution. As I’ve been pointing out for years, this is the crucial missing link between photovoltaic cells being a rapidly, rapidly cheapening technology with a lot of rollout potential, and photovoltaic cells being the major source for the world’s power. As I predicted in The Week in 2013:

The promising trends in technology and cost suggest much more than renewable energy becoming the fastest growing energy source in the next 30 years. They suggest that renewables will grow to be the number one energy source in the United States and the world in the next 30 or 40 years.

I’d say that that was actually an overly conservative projection. I now expect solar to be number one within the next twenty years, if not the next ten.

It’s nice to live in the knowledge that renewable energy will overcome the problems posed by diminishing oil reserves and (at least) mitigate anthropogenic climate change. It’s nice to know that, as solar efficiencies continue to increase and solar manufacturing costs continue to fall, the long-term trend for energy costs is down.

And you can do a heck of a lot of cool things with cheap, decentralized energy, like heating and lighting your home, manufacturing goods and technology and food and tools, and powering computers and artificial intelligence.

This, in my view, is the furnace to power the next fifty or a hundred years of soaring mid-20th century style economic growth. This is the beginning of an energy-driven economic supercycle — which takes us from the era of handheld computing to the era of building asteroid mining space stations and extraterrestrial colonies and maybe even interstellar spacecraft. It’s the main reason why I switched from bearish to bullish in 2013.

But what I really want to know is how to make money out of this trend. If photovoltaic cells and batteries are the new crude oil, coal, gasoline and natural gas (etc), does that mean Musk’s firms (Tesla, SolarCity, SpaceX, etc) are going to be the next Exxon-Mobil or Shell or Gazprom?

Maybe. But I’d tend to see renewable energy and emerging tech index funds as a slightly smarter bet. The trouble is that we’re at a very early stage in the supercycle.

An imperfect analogy: Xerox built a windowed, graphical operating system years before Microsoft and Apple did, but Microsoft and Apple were the ones who reaped the bigger rewards. There are a whole load of factors that could dramatically affect which renewable energy systems come to dominate the market: interface, battery-photovoltaic integration, price per unit of energy, price per unit of storage, durability and probably some others. And there is also a slew of more superficial factors, such as marketing. If this is going to be as big as I think it is, there will be a lot of competition from outside the renewables sector, not least from firms like Google, Apple, Facebook and Samsung, as well as from older energy giants like BP, Shell and Exxon-Mobil.

For now, of course, Musk does seem to be establishing himself as the market leader and trendsetter, in much the way Steve Jobs once did. But that could all change. It’s not even just a matter of competing firms. Just as the internet decentralized information distribution, and solar is on the cusp of decentralizing energy production, the whole manufacturing and (I’d argue) product design paradigm is edging closer to being transformed by another set of emergent technologies: 3-D printers and home manufacturing. Maybe, as home manufacturing becomes more prominent, open-source collaborative product and component design will beat out the current proprietary model.

The main takeaway here seems to be that this is an incredibly exciting time to be alive. We’re all set to get a lot richer from this, whether or not we bought Tesla at an early stage, just as people in the early 20th century didn’t have to buy Standard Oil shares to do well from that other energy revolution.

I Don’t Understand the Apple Watch

In 2006, I was telling anyone who would listen — which, given that I was a nerdy 19-year-old, wasn’t many people — to buy Apple stock. Back then Apple seemed to be on the verge of something amazing. I had had an iPod since 2003, had just bought a MacBook Pro, and was blown away by OS X. The operating system and interface had a crispness and an attention to detail that made Windows PCs seem like a muddled mess.

Turns out I was right about Apple. The past decade has seen Apple blow up bigger than I dreamed they might. The iPhone and iPad have been stunning successes that have allowed Apple to redefine what personal computing is. And now Apple is the biggest company in the world.

And the Apple Watch — their first new product line since the iPad — seems like a step in the wrong direction. Admittedly, I haven’t used an Apple Watch yet. But why bring out a watch when other elements of your product line have made watches obsolete?

I’m happy to have my wrists free. A smartphone already does what a watch used to do — tell the time — plus so, so, so much more. I don’t want another screen, especially not a tiny and hard-to-click one strapped to my wrist that actually requires tethering to a smartphone to work. Interface design has been the key difference between Apple and Apple’s competitors over the past 10 years, and to hear complaints about the Watch’s interface seems pretty damning.

Yes, I can see some point to a biometric data-collection band, especially for athletes and fitness junkies and for hospital patients. Yes, I understand that sooner or later a new model of the Apple Watch will not require a tethered iPhone to work.

But at this point this is a niche product, functionally akin to the Apple Newton in the early ’90s. It does some cool stuff. But it’s not going to change the world.

The trouble is that I think Apple is barking up the wrong tree. This is like a successful band’s dodgy fourth album, where they rehash earlier ideas in pursuit of that indefinable thing that made them great in the first place. But that thing — the bleeding edge — has moved on. Trying to recapture it by rehashing old ideas in a slightly different form might sell some records. And Apple will sell some watches. But it’s not going to change the world.

The bleeding edge technologies that will change the world immeasurably in the next 20 years are self-driving cars, artificial intelligence, 3-D printers, ultra-efficient solar cells that can produce energy more cheaply than fossil fuels, and battery and energy distribution technology to allow the ultra-efficient solar cells to power things when the sun isn’t out. Apple are actually working on some of these things.

The Watch is — at best — an unworthy distraction. Of course, like most ageing rockstars, Apple has the time and money for unworthy distractions. And that’s why younger, leaner competitors may be the ones to bring the truly revolutionary products to market.

On Trade Unions & Inequality

This chart is pretty wow:

[Chart: union density and income inequality]

Florence Jaumotte and Carolina Osorio Buitron of the International Monetary Fund have some ideas about what might be driving the correlation:

The main channels through which labor market institutions affect income inequality are the following:

Wage dispersion: Unionization and minimum wages are usually thought to reduce inequality by helping equalize the distribution of wages, and economic research confirms this.

Unemployment: Some economists argue that while stronger unions and a higher minimum wage reduce wage inequality, they may also increase unemployment by maintaining wages above “market-clearing” levels, leading to higher gross income inequality. But the empirical support for this hypothesis is not very strong, at least within the range of institutional arrangements observed in advanced economies (see Betcherman, 2012; Baker and others, 2004; Freeman, 2000; Howell and others, 2007; OECD, 2006). For instance, in an Organisation for Economic Co-operation and Development review of 17 studies, only 3 found a robust association between union density (or bargaining coverage) and higher overall unemployment.

Redistribution: Strong unions can induce policymakers to engage in more redistribution by mobilizing workers to vote for parties that promise to redistribute income or by leading all political parties to do so. Historically, unions have played an important role in the introduction of fundamental social and labor rights. Conversely, the weakening of unions can lead to less redistribution and higher net income inequality (that is, inequality of income after taxes and transfers).

I have spent a lot of time thinking about what has caused the major upswing in inequality since the 1980s.

Back in 2011 and 2012, my analysis tended to emphasize financialization, and specifically the massive growth in credit creation that has taken place since the 1980s. I think this was a rather naive view to take.

I don’t think I was wrong to look at financialization. Obviously, unchecked credit creation is a plausible pathway for the rich to make themselves and their friends richer. I just think it was naive not to see financialization — like deunionization, like globalization, and like trends in housing wealth — as one piece of a broader picture.

My hypothesis is that what changed is that politicians decided that greed was good and that “industrial policy” was a dirty phrase. The political structures that emerged in the wake of the Great Depression and World War 2 — welfare states, nationalized industries, unionized workforces, constrictive financial regulations like Glass-Steagall — which together greatly limited inequality, were severely rolled back. This created an opening for the rich to get much richer very fast, which they did.

If I’m right, it would take a major political shift in the other direction to start reducing inequality.

How To Euthanize Rentiers (Wonkish)

In my last post, I established that the “rentier’s share” of interest — resulting from, as Keynes put it, the “power of the capitalist to exploit the scarcity-value of capital” — can be calculated as the real interest rate on lending to the monetary sovereign, typically known as the real risk-free interest rate. That is because it is the rate that is left over after deducting for credit risk and inflation risk.

However, I have been convinced that my conclusion — that euthanizing rentiers should be an objective of monetary policy — is either wrong or impractical.

It would at the very least require a dramatic shift in monetary policy orthodoxy. My initial thought was thus: the real risk-free interest rate (r) can be expressed as the nominal risk-free interest rate minus the rate of inflation (r=n-i). To eliminate the rentier’s share, simply substitute 0 for r, so that 0=n-i and n=i. In other words, have the central bank target a rate of inflation that offsets the expected future nominal risk-free interest rate, resulting in a future real risk-free interest rate as close to zero as possible.
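Written out in symbols, with an expectations operator to make the “expected future” part of the rule explicit (this is just the substitution above, restated):

```latex
% r = real risk-free rate, n = nominal risk-free rate, i = inflation,
% as in the text. The proposed rule sets next period's inflation target
% equal to the expected nominal risk-free rate:
\begin{align*}
  r_{t} &= n_{t} - i_{t} \\
  i^{*}_{t+1} &= \mathbb{E}_t\left[n_{t+1}\right]
  \quad\Longrightarrow\quad
  \mathbb{E}_t\left[r_{t+1}\right] = \mathbb{E}_t\left[n_{t+1}\right] - i^{*}_{t+1} = 0
\end{align*}
```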

There are some major problems with this. Presently, most major central banks target inflation. But they target a fixed rate of inflation of around 2 percent. The Fed explains its rationale:

Over time, a higher inflation rate would reduce the public’s ability to make accurate longer-term economic and financial decisions. On the other hand, a lower inflation rate would be associated with an elevated probability of falling into deflation, which means prices and perhaps wages, on average, are falling — a phenomenon associated with very weak economic conditions. Having at least a small level of inflation makes it less likely that the economy will experience harmful deflation if economic conditions weaken. The FOMC implements monetary policy to help maintain an inflation rate of 2 percent over the medium term.

Now, it is possible to argue that inflation targets should vary with macroeconomic conditions. For example, if you’re having a problem with deflation and getting stuck in a liquidity trap, a higher inflation target might be appropriate, as Jared Bernstein and Larry Ball argue. And on the other side of the coin, if you’re having a problem with excessive inflation — as occurred in the 1970s — it is arguable that a lower inflation target than 2 percent may be appropriate.

But shifting to a variable rate targeting regime would be a very major policy shift, likely to be heavily resisted simply because the evidence shows that a fixed rate target results in more predictability, and therefore enhances “the public’s ability to make accurate longer-term economic and financial decisions”.

A second sticking point is the argument that such a regime would be trying to target a real variable (the real risk-free interest rate), which central banks have at best a very limited ability to do.

A third sticking point is Goodhart’s Law: “when a measure becomes a target, it ceases to be a good measure.” By making the future spread between the nominal risk-free interest rate and inflation a target, the spread would lose any meaning as a measure.

A fourth sticking point is the possibility that such a severe regime change might create a regime susceptible to severe accelerative macroeconomic problems like inflationary and deflationary spirals.

And in this age of soaring inequality, the euthanasia of the rentier is simply too important an issue to hinge on being able to formulate a new workable policy regime and convince the central banking establishment to adopt it. Even if variable-rate inflation targeting or some alternative was actually viable, I don’t have the time, or the energy, or the inclination, or the expertise to try to do what Scott Sumner has spent over half a decade trying to do — change the way central banks work.

Plus, there is a much better option: make the euthanasia of the rentier a matter for fiscal policy, and specifically for taxation and redistribution. So here’s a different proposal: a new capital gains tax at a variable rate equal to the real risk-free interest rate, with the proceeds going toward business grants for poor people to start new businesses.
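A minimal sketch of that levy, taking the proposal literally; the figures, the gilt-yield proxy and the decision to floor the rate at zero are all my assumptions, not part of the proposal:

```python
# Minimal sketch of the proposed levy, read literally: realized capital gains
# are taxed at a variable rate equal to the real risk-free rate (r = n - i).
# Flooring the rate at zero in periods of negative real rates is an assumed
# design choice, not something the proposal specifies.

def rentier_levy(capital_gain, nominal_risk_free_rate, inflation):
    """Tax owed on a realized capital gain at a rate equal to r = n - i."""
    real_risk_free_rate = nominal_risk_free_rate - inflation
    rate = max(real_risk_free_rate, 0.0)   # assumption: no negative tax
    return capital_gain * rate

# Hypothetical example: a £50,000 gain, a 2% ten-year government yield used
# as the nominal risk-free rate, and 0.5% inflation.
print(f"Levy owed: £{rentier_levy(50_000, 0.02, 0.005):,.2f}")   # £750.00
```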

The Subtle Tyranny of Interest Rates

Interest rates are the price of credit. They are the price of access to capital.

Now, it is obvious that pricing credit is not tyrannical in and of itself. Interest compensates a lender for default risk and the risk of inflation eroding the purchasing power of the money that they lend.

The tyranny I am getting at is subtle. It is the tyranny that Keynes pointed to when he proposed the euthanasia of the rentier. Keynes proposed that low interest rates would:

mean the euthanasia of the rentier, and, consequently, the euthanasia of the cumulative oppressive power of the capitalist to exploit the scarcity-value of capital.

Keynes pointed to an important feature of interest rates: the fact that capital has a cost is not just the result of default risk and the risk of inflation. It is also a result of the scarcity of capital.

Now, that is inevitable in a world where financial capital consists of metal that you dig up out of the ground.

But in our brave new state-backed fiat monetary system, why should capital be so scarce that those who have it can profit from its scarcity?

Obviously, central banks should not print money to the extent that it becomes worthless. But capital availability is absolutely critical to the advancement of society: the investment of capital is how societies become productive. It is how technology improves, and it is the key to wealth accumulation.

What Keynes didn’t specify was exactly which part of the interest rate paid represents the “scarcity value” of capital.

Obviously, it doesn’t include the part that compensates for inflation, which is why we need to look at inflation-adjusted interest rates. And it isn’t the part that compensates for default risk. That part is easily calculable too: it is the excess paid over the rate on lending to the monetary sovereign.

In the U.S. and Britain, the monetary sovereign would be the American and British governments respectively. In the eurozone — for complicated political reasons — there is no monetary sovereign exactly, but we might approximate one by measuring spreads against German government borrowing, because Germany seems to be the nation calling the lion’s share of the shots.
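As a back-of-the-envelope decomposition, here is a sketch; the lending rate, sovereign yield and inflation figure are made-up inputs, not market data:

```python
# Back-of-the-envelope decomposition of a nominal lending rate into the three
# parts discussed above. All input figures below are hypothetical.

def decompose(lending_rate, sovereign_rate, inflation):
    """Split a nominal lending rate into inflation compensation, the
    default-risk premium (the excess over the monetary sovereign's rate),
    and the residual real risk-free rate: the rentier's 'scarcity value'."""
    inflation_compensation = inflation
    default_premium = lending_rate - sovereign_rate
    scarcity_value = sovereign_rate - inflation   # real risk-free rate
    return inflation_compensation, default_premium, scarcity_value

# Hypothetical inputs: a 5% corporate borrowing rate, a 2.2% ten-year
# government yield, and 1.5% expected inflation.
infl, default, scarcity = decompose(0.05, 0.022, 0.015)
print(f"inflation compensation: {infl:.1%}")      # 1.5%
print(f"default-risk premium:   {default:.1%}")   # 2.8%
print(f"rentier's share:        {scarcity:.1%}")  # 0.7%
```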

Here’s the real interest rate on U.S. 10-year government borrowing (I chose the 10-year because it is a benchmark, although I would have preferred to use a harmonized rate from across the yield curve):

[FRED chart: real interest rate on U.S. 10-year government borrowing]

So what are we really seeing? The general trend is that real interest rates on U.S. government borrowing are overwhelmingly positive, with a few periodic exceptions where real rates went slightly negative. This bias toward positive real interest rates on lending to the monetary sovereign, I would argue, is the rentier’s profit resulting from the scarcity of financial capital.

Year over year, that is going to compound heavily. It is these rentiers, I would argue, who should be euthanized. Not because they should be resented for doing well out of the system. No. They should be euthanized because of the opportunity cost: the resources devoted to enriching rentiers could be deployed productively elsewhere.

And how to euthanize the rentiers? Because we have identified what the rentier’s share is, the answer is very simple: make a real interest rate of zero on lending to the monetary sovereign an objective of monetary policy.

Update: After much debate, I have decided that euthanizing rentiers is not a matter for monetary policy, but a matter for fiscal policy. I have written another post discussing this.