What The UK’s Low Productivity Is Really Telling Us

This, I would argue, is one of the scariest charts in the world today. The green line is output per hour worked, and the dotted green line is the pre-crisis trend:

[Chart: UK output per hour worked, with dotted pre-crisis trend]

It’s what the Bank of England calls the “UK productivity puzzle.” As the BBC’s Linda Yueh notes: “output per hour is around 16 percentage points lower than it should be if productivity had grown at its pre-crisis pace.”
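
To make the arithmetic behind that 16-point figure concrete, here is a minimal sketch of how such a trend gap is computed. The growth rates and time span are my own illustrative assumptions, not the Bank of England's actual series:

```python
# Illustrative trend-gap calculation: extrapolate the pre-crisis
# growth rate of output per hour, and compare with the actual path.
# The rates below are assumptions for illustration, not ONS data.

pre_crisis_growth = 0.022   # assumed ~2.2% annual growth in output per hour
post_crisis_growth = 0.004  # assumed ~0.4% annual growth after the crisis
years = 8                   # e.g. 2008 to 2016

trend_index = 100 * (1 + pre_crisis_growth) ** years
actual_index = 100 * (1 + post_crisis_growth) ** years

gap = (trend_index - actual_index) / trend_index * 100
print(f"Trend: {trend_index:.1f}  Actual: {actual_index:.1f}  Gap: {gap:.1f}%")
# With these assumed rates the gap comes out around 13 percentage
# points; the Bank's published estimate for the UK is around 16.
```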

I don’t think it should be called a “productivity puzzle”. That would imply that we don’t really understand the phenomenon, that it is genuinely puzzling. But it’s really a simple phenomenon: people are producing less output per hour than they were before the financial crisis. Work is getting done, but each hour of work is yielding less than the pre-crisis trend implies it should.

The Bank of England points to “reduced investment in both physical and intangible capital, such as innovation, and impaired resource allocation from low to high productive uses” as a cause. In other words, the work is crap because firms aren’t deploying the resources needed to do good work. And this is a trend that predates the election of the Coalition government in 2010. As the Bank of England notes, the UK has lagged behind its fellow G8 economies in investment as a percentage of GDP since as far back as the 1990s.

But things got really bad under the Coalition. And that shouldn’t really be news. There was a recession resulting from the financial crisis. The recession — as recessions tend to do — resulted in a severe drop in business investment. In the wake of the recession, what did the newly elected government decide to do? It decided to enact sweeping austerity programs — to slash investment even more.

So the story is that the government decided to compound the after-effects of the financial crisis with an austerity program, depriving the economy of even more of the resources needed for productivity, growth and prosperity. And — in truly, truly shocking news — UK investment is currently languishing at a pathetic 15 percent of GDP, behind Belgium, Gambia, Jordan, Equatorial Guinea and Costa Rica, and barely ahead of Greece!

The austerian view, of course, is that the austerity was necessary because otherwise the bond vigilantes would have sold UK public debt, and we would have turned into Greece, or something.

The so-called “productivity puzzle” and the related low-investment puzzle categorically prove this claim wrong. If austerity were imbuing the market with the confidence necessary for growth, we would expect to see productivity and investment rising.

That has not been the case. What has occurred is a zombie recovery caused by zombified economic policies. Yes, there has been substantial job growth, and GDP is now above its pre-crisis peak — albeit in the slowest recovery since the South Sea bubble 300 years ago. But the weakness in productivity continues to illustrate the rottenness.

You can’t starve yourself to strength. You can’t beat yourself to growth.

Australia just scrapped its debt ceiling. America should, too.

Debt ceiling fights, it seems, have become a permanent fixture in American politics. Twice in the last couple of years, the United States has been days away from potentially irrevocable economic damage because Congress refused to raise the debt ceiling and let the Treasury issue more debt. The next debt ceiling fight is slated for March 2014.

But isn’t there a better way to increase a borrowing limit — and one that doesn’t freak out markets, investors, and, well, just about everyone every few months?

Read More At TheWeek.com

Do Wages Benefit From A Shrinking Labour Force?

Dean Baker says yes:

The retirement of the baby boom cohorts means that the country’s labor force is likely to be growing far more slowly in the decades ahead than it did in prior decades. The United States is not alone in facing this situation. The rate of growth of the workforce has slowed or even turned negative in almost every wealthy country. Japan leads the way, with a workforce that has been shrinking in size for more than a decade.

Baker concludes:

With a stagnant or declining labor force, workers will have their choice of jobs. It is unlikely that they will want to work as custodians or dishwashers for $7.25 an hour. They will either take jobs that offer higher pay or these jobs will have to substantially increase their pay in order to compete.

This means that the people who hire low-paid workers to clean their houses, serve their meals, or tend their lawns and gardens will likely have to pay higher wages. That prospect may sound like a disaster scenario for this small group of affluent people, but it sounds like great news for the tens of millions of people who hold these sorts of jobs. It should mean rapidly rising living standards for those who have been left behind over the last three decades.

Of course, Baker could just look at the data from Japan. Real wages there have been depressed in recent years, even while the labour force has shrunk:

[Chart: Japanese real wages]

Even more damningly, labour’s share of income has declined even more steeply in Japan than in the United States and other nations with growing working-age populations:

[Chart: labour’s share of income, Japan vs. other nations]

Matthew C. Klein makes an important point:

Perhaps Mr Baker was thinking of an older example: the Black Death, which killed about half the people in Europe. Many (including me until I looked it up) believe that the resulting shortage in agricultural labour led to soaring real wages for peasants and a redistribution of economic power away from landowners. Recent evidence, however, casts doubt on this hypothesis. While nominal peasant wages did indeed increase in the aftermath of the Black Death, real wages may have actually fallen for decades. That may have helped heavily indebted peasants, but everyone else had to endure punishing declines in their standard of living, not to mention the psychological trauma of surviving such a devastating plague.

And the evidence on the Black Death seems conclusive:

In southern England, real wages of building craftsmen (rural and urban), having plummeted with the natural disaster of the Great Famine (1315-21), thereafter rose to a new peak in 1336-40. But then their real wages fell during the 1340s, and continued their decline after the onslaught of the Black Death, indeed into the 1360s. Not until the later 1370s – almost thirty years after the Black Death – did real wages finally recover and then rapidly surpass the peak achieved in the late 1330s.

And if we look at China — a country which has seen stunning real wage growth in recent years — it is clear that that growth has come in the context of a growth in the working-age population. China’s working-age population hit one billion for the first time in 2011.

To me at least, this seems to suggest that while, all else being equal, a shrinking working-age population might lead to a more competitive labour market, all else is not equal. Employers invest in more capital-intensive processes like automation and robotics to compensate for a lack of workers, or, in our globalised world, they shift operations somewhere with a larger labour force (like China today, or perhaps Africa further into the future). Even more simply, a falling population, whether the result of a natural disaster like the Black Death or simply of demographic trends as in Japan, may lead to an economic depression due to falling demand.

This suggests that Baker’s conclusions are extremely optimistic for labour, and that shrinking populations may be bad news for wages.

Of Wages and Robots

There is a popular meme going around, popularised by the likes of Tyler Cowen, Paul Krugman and Noah Smith, that suggests that the recent fall in worker compensation as a percentage of GDP is mostly due to the so-called “rise of the robots”:

For most of modern history, two-thirds of the income of most rich nations has gone to pay salaries and wages for people who work, while one-third has gone to pay dividends, capital gains, interest, rent, etc. to the people who own capital. This two-thirds/one-third division was so stable that people began to believe it would last forever. But in the past ten years, something has changed. Labor’s share of income has steadily declined, falling by several percentage points since 2000. It now sits at around 60% or lower. The fall of labor income, and the rise of capital income, has contributed to America’s growing inequality.

In past times, technological change always augmented the abilities of human beings. A worker with a machine saw was much more productive than a worker with a hand saw. The fears of “Luddites,” who tried to prevent the spread of technology out of fear of losing their jobs, proved unfounded. But that was then, and this is now. Recent technological advances in the area of computers and automation have begun to do some higher cognitive tasks – think of robots building cars, stocking groceries, doing your taxes.

Once human cognition is replaced, what else have we got? For the ultimate extreme example, imagine a robot that costs $5 to manufacture and can do everything you do, only better. You would be as obsolete as a horse.

Now, humans will never be completely replaced, like horses were. Horses have no property rights or reproductive rights, nor the intelligence to enter into contracts. There will always be something for humans to do for money. But it is quite possible that workers’ share of what society produces will continue to go down and down, as our economy becomes more and more capital-intensive.

So, does the rise of the robots really explain the stagnation of wages?

This is the picture for American workers, representing wages and salaries as a percentage of GDP:

[Chart: US wages and salaries as a percentage of GDP]

It is certainly true that wages have fallen as a percentage of economic activity (and that corporate profits as a percentage of economic activity have risen — a favourite topic of mine).

But there are two variables in the wages-to-GDP ratio. Nominal wages have actually risen, and continue to rise on a moderately steep trajectory:

[Chart: US nominal wages and salaries]

And average wages continue to climb nominally, too. What has actually happened to the wages-to-GDP ratio is not that America’s wage bill has fallen, but that wages have not risen as fast as the other components of GDP (rents, interest payments, capital gains, dividends, etc.). It is not as if wages are collapsing while robots and automation (as well as other factors like job migration to the Far East) ravage the American workforce.
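
A toy calculation makes the decomposition clear. The dollar figures below are made up purely for illustration, not actual national accounts data:

```python
# Toy decomposition: the wage share of GDP can fall even while
# nominal wages rise, so long as GDP grows faster than wages.
# All figures are illustrative assumptions, not actual data.

wages_start, gdp_start = 3.0, 6.0   # $ trillions, assumed starting point
wages_end, gdp_end = 4.5, 10.0      # wages up 50%, GDP up ~67%

share_start = wages_start / gdp_start   # 0.50
share_end = wages_end / gdp_end         # 0.45

print(f"Nominal wages grew {wages_end / wages_start - 1:.0%}, "
      f"yet the wage share fell from {share_start:.0%} to {share_end:.0%}.")
```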

It is more accurate to say that, beginning around the turn of the millennium, there has been an outgrowth of economic activity that is not yielding wages. This outgrowth coincided with the new post-Gramm-Leach-Bliley landscape of mass financialisation and the derivatives and shadow banking megabubbles, as well as the multi-trillion dollar military-industrial spending spree that accompanied the advent of the War on Terror. Perhaps, if we want to understand why the overwhelming majority of the new economic activity is not trickling down into wages, we should look less at robots, and more at the financial and regulatory landscape, where Wall Street megabanks pay million-dollar fines for billion-dollar crimes? Perhaps we should look at a monetary policy that dumps new money solely into the financial sector and which has been shown empirically to enrich the richest few far faster than everyone else?

But let’s focus specifically on jobs. The problem with the view that this is mostly a technology shock is summed up beautifully in this tweet I received from Saifedean Ammous:

The Luddite notion that technology might render humans obsolete is as old as the wheel. And again and again, humans have found new ways to employ themselves in spite of the new technology making old professions obsolete. Agriculture was once the overwhelming mainstay of US employment. It is no more:

[Chart: US farm employment]

This did not lead to permanent depression and permanent mass unemployment. True, it led to a difficult transition period, the Great Depression of the 1930s (similar in many ways, as Joe Stiglitz has pointed out, to the present day). But eventually (after a long and difficult depression) humans retrained and re-employed themselves in new avenues.

It is certainly possible that we are in a similar transition period today — manufacturing has largely been shipped overseas, and service jobs are being eliminated by improvements in efficiency and greater automation. Indeed, it may prove to be an even more difficult transition than that of the 1930s. Employment remains far below its pre-crisis peak:

[Chart: US employment-population ratio]

But that doesn’t mean that human beings (and their labour) are being rendered obsolete — they just need to find new employment niches in the economic landscape. As an early example, millions of people have begun to make a living online — creating content, writing code, building platforms, endorsing and advertising products, etc. As the information universe continues to grow and develop, such employment and business opportunities will probably continue to flower — just as new work opportunities (thankfully) replaced mass agriculture. Humans still have a vast array of useful attributes that cannot be automated — creativity, lateral thinking & innovation, interpersonal communication, opinions, emotions, and so on. Noah Smith’s example of a robot that “can do everything you can do” won’t exist in the foreseeable future (let alone at a cost of $5) — and any society that could master the level of technology necessary to produce such a thing would probably not need to work (at least in the sense we use the word today) at all. Until then, luckily, finding new niches is something that humans have proven very, very good at.

Why Nate Silver is Wrong

Famed statistician and sabermetrician Nate Silver is calling the US Presidential race for Obama, in a big way:

Silver’s mathematical model gives Obama an 85% chance of winning. The Presidential election is based on an electoral college system, so Silver’s model rightly looks at state-level polls. And in swing state polls, Obama is mostly winning:

This is slightly jarring, because in national polls, the two candidates are locked together:

So who’s right? Is the election on a knife-edge like the national polls suggest, or is Obama strongly likely to win as Silver’s model suggests?

While the election could easily go either way depending on turnout, I think Silver’s model is predicting the wrong result. In order for that to be the case, the state polling data has to be wrong.

There are a number of factors that lead me to believe that this is the case.

First, Republicans tend to outperform their poll numbers. In 2008, the national average got the national race just about right:

In the end, Obama won the election with 52.9% of the vote, against McCain, who came out with 45.7%.

However, polls have historically underestimated Republican support. Except in 2000 (when a November Surprise revelation of a George W. Bush drunk-driving charge pushed Gore 3.2% higher than the final round of polling), Republican Presidential candidates since 1992 have outperformed their final polls by a mean of 1.8 points. Such an outcome for Romney would put him around 1.5% ahead in the national polls, and imperil Obama’s grip on the swing states.

Second, the Bradley Effect. The interesting thing about the swing states is that many of them are disproportionately white. The United States is 72% white, but Iowa is 89% white, Indiana is 81% white, Ohio is 81% white, Minnesota is 83% white, Pennsylvania is 79% white, New Hampshire is 92% white, Maine is 94% white and Wisconsin is 83% white. This means that they are particularly susceptible to the Bradley Effect — where white voters tell a pollster they will vote for a black candidate, but in reality vote for a white alternative. In a state in which Obama holds a small lead in state-level polling, only a small Bradley Effect would be necessary to turn it red.

This effect may have already hurt Barack Obama in the past — in the 2008 primaries, the polls showed Obama leading in New Hampshire, but in reality Hillary Clinton ran out the winner. And many national polls in October 2008 showed Obama with much bigger leads than he actually achieved at the ballot box — Gallup showed Obama 11% ahead; Pew showed him 16% ahead.

A small Bradley Effect will not hurt Obama where he is 7% or 11% or 16% ahead in the polls. But when polls are closer — as they mostly are in the swing states — it becomes more plausible that such an effect could change the course of the race.

And the Bradley Effect in 2012 may be bigger than in 2008. A recent poll by the Associated Press concluded:

A majority of Americans (51 percent) now hold “explicit anti-black attitudes” — up from 49 percent in 2008 — and 56 percent showed prejudice on an implicit racism test.

Finally, polls have tended to overestimate the popularity of incumbent Presidents, especially Democrats. In 1980, polls put Jimmy Carter 3% ahead of his final tally, and in 1996 polls put Bill Clinton 2.8% ahead of his final tally:

Taken together, these difficult-to-quantify factors pose a serious challenge to Silver’s model. While it is fine to build a predictive model on polling data, if the polling data fed into the model is skewed, then any predictions will be skewed. Garbage in, garbage out.
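
To illustrate the point, here is a minimal Monte Carlo sketch of a state-poll aggregator. The states, margins and electoral-vote counts are hypothetical, and this is emphatically not Silver’s actual model; it simply shows how a small systematic polling bias, of the kind described above, can swing the headline probability far more than it swings any individual poll:

```python
import random

# Minimal Monte Carlo sketch of a state-poll aggregator. All states,
# margins and electoral-vote counts are hypothetical; this illustrates
# systematic polling bias, and is not Nate Silver's actual model.

SWING_STATES = {  # name: (incumbent's poll margin in points, electoral votes)
    "A": (2.0, 18), "B": (1.5, 13), "C": (3.0, 10),
    "D": (0.5, 29), "E": (2.5, 6),
}
SAFE_INCUMBENT, SAFE_CHALLENGER = 217, 245  # assumed locked-in electoral votes
STATE_NOISE = 3.0    # independent per-state polling error, in points
SHARED_NOISE = 1.5   # polling error correlated across all states, in points

def win_probability(systematic_bias, trials=100_000):
    """P(incumbent reaches 270 EVs) when polls overstate him by systematic_bias."""
    wins = 0
    for _ in range(trials):
        shared = random.gauss(0, SHARED_NOISE)
        ev = SAFE_INCUMBENT
        for margin, votes in SWING_STATES.values():
            outcome = margin - systematic_bias + shared + random.gauss(0, STATE_NOISE)
            if outcome > 0:
                ev += votes
        wins += ev >= 270
    return wins / trials

print(f"Unbiased polls:       {win_probability(0.0):.0%}")
print(f"1.8-point poll skew:  {win_probability(1.8):.0%}")
```

Even a modest skew, applied across every state at once, moves the aggregate probability dramatically; that is the sense in which skewed inputs produce skewed predictions.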

I rate Obama’s chance of being re-elected as no better than 50:50. If Silver really rates his chances as 85:15, perhaps he should consider taking bets at those odds.

UPDATE:

Obviously, Silver’s predictive model (and, far more importantly, the state-level polling data) proved even more accurate than in 2008. However, the 2010 British General Election (in which the polls, and therefore Silver, vastly overestimated the Liberal Democrat support level, leading to an electoral projection that was way off the mark) illustrates that there remain enough issues with the reliability of polling data for Silver’s model (and similar models) to continue to suffer from the problem of fat tails. With solid, transparent and plentiful data (as Taleb puts it, in “Mediocristan”) such models work very, very well. But there remains plenty of scope (as Britain in 2010 illustrates) for polls to be systematically wrong (“Extremistan”). Given the likelihood that every news network will have its own state-level poll aggregator and Nate Silver soundalike on hand come 2016, that might well be a poetic date for the chaotic effects of unreliable polling data to reappear. In the meantime, I congratulate the pollsters for providing Silver with the data necessary to make accurate projections.

Are Cameron’s Economic Policies Working?

Britain has returned to growth:

But compared even to the USA — which has huge problems of its own — Britain is still mired in the depths of a depression:

An Olympic bounce does not constitute a recovery. As I noted in March, Britain is under-performing the United States — in GDP and in unemployment. Although Cameron and Osborne keep claiming that they are deficit hawks who want to cut the government deficit, the debt keeps climbing.

Defenders of Cameron’s policies might claim that we are going through a necessary structural adjustment, and that lowered GDP and elevated unemployment are necessary for a time. I agree that a structural adjustment was necessary after the financial crisis of 2008, but I see little evidence of one. The over-leveraged and corrupt financial sector is still dominated by the same large players it was before. True, many unsustainable high street firms have gone out of business, but the most unsustainable firms of all — the banks and financial firms that caused the financial crisis and had to be bailed out — have avoided liquidation. The real story here is not a structural adjustment but the slow bleeding-out of the welfare state via deep and far-reaching cuts.

Britain has become welfare-dependent: welfare expenditure is now over 25% of total GDP. Multi-billion pound cuts to that spending are going to hurt (and have already hurt) GDP.

I believe countries are better off with small governments and a larger private sector. The private sector consists of many, many individuals acting out their subjective economic preferences. This dynamic is largely experimental; businesses come and go, survive, thrive and die based upon their ability to stay liquid and retain a market, and this competition for demand forces innovation. The government sector is centrally directed. Governments do not have to behave like businesses; they do not have to innovate or compete, as they have the power to tax and compel. (The exception to this is when governments become overrun by the representatives of private industries and corporations, who then leverage the machinery of the state to benefit those corporations. When this occurs and markets become rigged in favour of certain well-connected competitors, it matters little whether we call such industries “private sector” or “public sector”.)

So I am sympathetic to the idea that Britain ought to have a smaller welfare state, and fewer transfer payments, than it presently does. But the current and historical data shows very clearly that now is not the time to make such an adjustment. The time to reduce the size of the welfare state is when the economy is booming, for that is when there is work for welfare claimants to go to. Cutting into a depressed economy might create a strong incentive for the jobless to work, but if there is little or no job creation for the jobless to go to, then what use are cuts? To reduce government deficits? If that’s the case, then why are British government deficits rising even though spending is being reduced? (The answer, of course, is falling tax revenues.)

An alternative policy that would reduce unemployment and raise GDP without increasing the size of government would be to force bailed-out banks sitting on huge hoards of cash to offer loans to the jobless to start their own private businesses. The money would be transferred to those who could be out working and creating wealth but who cannot get credit through conventional channels, unlike the too-big-to-fail megabanks, which are flush with credit but refuse to increase lending to the wider public. Even if the majority of these businesses were to fail, this would ensure a large boost to spending and incomes in the short run, and the few new businesses that succeed would provide employment and tax revenues for years to come. Once there is a real recovery, with solid growth in GDP and in employment, the government can act to decrease its size and slash its debt. Indeed, with growing tax revenues it is probable that the deficit would end up shrinking by itself.

Iran’s Insane Rhetoric

Iranian officials are once again firing off belligerent rhetoric.


Via the Jerusalem Post:

Hojjat al-Eslam Ali Shirazi, the representative of Iran’s Supreme Leader Ayatollah Ali Khamenei to the Islamic Republic’s Qods Force, said this week that Iran needed just “24 hours and an excuse” to destroy Israel.

In his first public interview in a year, reported in the Persian-language Jahan News, which is close to the regime, Shirazi said if Israel attacked Iran, the Islamic Republic would be able to turn the conflict into a war of attrition that would lead to Israel’s destruction.

“If such a war does happen, it would not be a long war, and it would benefit the entire Islamic umma [the global community of Muslims]. We have expertise in fighting wars of attrition and Israel cannot fight a war of attrition,” Shirazi said, referring to Iran’s eight-year war of attrition against Iraq.

Such claims are — more or less — inconsequential rubbish. The fact remains that Israel has nuclear weapons and a nuclear second strike while Iran has no such thing; the Iranian leadership knows this, and is extremely unlikely to start a war in which Iran (as Shimon Peres put it) would be the one wiped off the face of the Earth by Israeli plutonium. Yet the facts of military science will do little to stop the hawks of the West sounding off that Iran is irrational, that Iran is cooking up a plan to destroy Israel, and that Iran must therefore face regime change.

To grasp what is really occurring here we must look at how authoritarian Middle Eastern regimes (or, indeed, authoritarian regimes in general) function. Authoritarian regimes must maintain a cloak of authority. Tyrants do not attempt to look or sound weak; they try to project an aura of invincibility and indefatigability. We saw this during the last Gulf War, when Iraq’s information minister Muhammad Saeed al-Sahhaf — nicknamed Baghdad Bob in the American media — shot off hundreds of absurd statements about how Iraqi troops were crushing the Americans, quite in contrast to the facts on the ground, right up until American tanks were rolling through the streets of Baghdad.

Baghdad Bob was not deluded. He was merely playing his role, and trying to project an aura of regime invincibility — providing propaganda for domestic consumption to keep the Iraqi population loyal to Saddam Hussein. It was a dog and pony show.

Iran’s belligerent rhetoric in this case is also strictly for domestic consumption — fierce rhetoric to keep the Iranian population fearful of the regime. Just like Baghdad Bob, the Iranian propaganda is far-removed from the real facts of the conflict. Whether the Iranian people really believe the regime’s propaganda — especially as the Iranian economy continues to worsen under sanctions — is dubious.

Yet one group of people — the Western neoconservatives, who are looking for another war — are more than happy to buy into the dog and pony “destroy Israel” bullshit.

As Robert Gates noted this week:

Painting a picture of internal political dysfunction in a dangerous world, former Defense Secretary Robert Gates warned Wednesday night that a U.S. or Israeli attack on Iran would have disastrous consequences.

Neither the United States nor Israel is capable of wiping out Iran’s nuclear capability, he said, and “such an attack would make a nuclear-armed Iran inevitable. They would just bury the program deeper and make it more covert.”

Iran could respond by disrupting world oil traffic and launching a wave of terrorism across the region, Gates said.

“The results of an American or Israeli military strike on Iran could, in my view, prove catastrophic, haunting us for generations in that part of the world.”

And as I wrote last month:

A regional war in the Middle East could result, potentially sucking in the United States and Eurasian powers like China, Pakistan and Russia. China and Pakistan have both hinted that they could defend Iran if Iran were attacked — and for good reason, as Iran supplies significant quantities of energy.

Frustratingly, the Iranian regime keep giving the neoconservatives more rope with which to hang themselves — and the West — on a cross of imperial overstretch, debt and blowback. 

This is Blowback

The YouTube video depicting Mohammed is nothing more than the straw that broke the camel’s back. This kind of violent uprising against American power and interests in the region has been a long time in the making. It is not just the continuation of drone strikes which often kill civilians in Pakistan, Yemen, Somalia and Afghanistan, either. Nor is it the American invasions and occupations of Iraq and Afghanistan. Nor is it the United States and the West’s support for various deeply unpopular regimes such as the monarchies in Bahrain and Saudi Arabia (and formerly Iran). Nor is it that America has long favoured Israel over the Arab states, condemning, invading and fomenting revolution in Muslim nations for the pursuit of nuclear weapons while turning a blind eye to Israel’s nuclear weapons and its continued expansion into the West Bank.

Mark LeVine, Professor of Middle Eastern history at U.C. Irvine, writes:

Americans and Europeans are no doubt looking at the protests over the “film”, recalling the even more violent protests during the Danish cartoon affair, and shaking their heads once more at the seeming irrationality and backwardness of Muslims, who would let a work of “art”, particularly one as trivial as this, drive them to mass protests and violence.

Yet Muslims in Egypt, Libya and around the world equally look at American actions, from sanctions against and then an invasion of Iraq that killed hundreds of thousands of Iraqis and sent the country back to the Stone Age, to unflinching support for Israel and all the Arab authoritarian regimes (secular and royal alike) and drone strikes that always seem to kill unintended civilians “by mistake”, and wonder with equal bewilderment how “we” can be so barbaric and uncivilised.

All of these things (and many more) have contributed to Muslim and Arab anger toward the United States and the West. Yet the underlying fact of all of these historical threads has been the United States’ oil-driven foreign policy. Very simply, the United States has for over half a century pursued a foreign policy in the region geared toward maintaining the flow of oil out of the region at any cost — even at the cost of inflaming the irrational and psychopathic religious elements that have long existed in the region.

This is not to defend the barbaric elements who resort to violence and aggression as a means of expressing their disappointment with U.S. foreign policy. It is merely to recognise that you do not stir the hornet’s nest and then expect not to get stung. 

And the sad thing is that stirring the hornet’s nest is totally avoidable. There is plenty of oil and energy capacity in the world beyond the Middle East. The United States is misallocating capital by spending time, resources, energy and manpower on occupying the Middle East and playing world policeman. Every dollar taken out of the economy by the IRS and spent drone-striking the Middle East into the stone age is a dollar of lost productivity for the private market. It is a dollar of productivity that could have been spent increasing American energy capacity and energy infrastructure in the United States — whether in oil, natural gas, solar, wind or hydroelectric.

And this effect can spiral: every dollar spent on arming and training bin Laden and his allies to fight the Soviet Union begot many thousands more dollars of military spending when bin Laden’s mercenaries turned their firepower onto the United States, and the United States chose to spend over ten years (and counting) occupying Afghanistan, rightly known as the graveyard of empires. It is likely that the current uprisings will trigger even more U.S. interventionism in the region (indeed, they already have: marines have been dispatched to Yemen), costing billions or even trillions of dollars more (especially if an invasion of Iran is the ultimate outcome). This in turn is likely to trigger even fiercer resistance to America from the Islamist elements, and so the spiral continues.

The only way out of this money-sucking, resource-sucking, life-sucking trap that is slowly obliterating the American empire is to swallow pride and get out of the Middle East, and to stop misallocating American resources and productivity on unwinnable wars.

But neither major Presidential candidate is interested in such a policy. Perhaps it is because war is a great profit source for the military-industrial complex, the force to which both the Democratic and Republican parties are beholden?

In any case, we should expect to see much more of this:

[Image. Source: Reuters]

Spending Problem? Paul Ryan is the Spending Problem

Paul Ryan talks like a small government conservative:

Too much government inevitably leads to bad government. When government grows too much and extends beyond its limits, it usually does things poorly.

And the WSJ is pumping up Ryan as an antidote to the growth of government:

Ryan represents the GOP’s new generation of reformers. More than any other politician, the House Budget Chairman has defined those stakes well as a generational choice about the role of government and whether America will once again become a growth economy or sink into interest-group dominated decline.

But Ryan himself has been responsible for a lot of that government growth. He loyally voted for all the big government programs George W. Bush signed into law — Medicare Part D, often described as the largest expansion of the welfare state since Lyndon Johnson’s Great Society; the Department of Homeland Security and the TSA; the wars in Iraq and Afghanistan; the PATRIOT Act and the NDAA; the TARP bailout of Wall Street; the bailout of General Motors. So long as it was debt-fuelled spending authorised by a Republican (and during the Bush years, there was an awful lot of debt-fuelled spending authorised by Republicans), Ryan was out voting for it.

Ryan’s voting record establishes firmly that he is as much for bailouts and the expansion of government as Obama is. He talks like a small government conservative on the deficit, too, but dig into the details and he promises to balance the budget on the back of closing loopholes in the tax code that he refuses to specify, while completely ignoring the severe problem of excessive total debt that is keeping the economy depressed today.

Does Ryan have an explanation for his voting record? Why did he put party loyalty above loyalty to the principles he now claims to espouse? Or did he forget his small government principles during the Bush years? Did he only discover Ayn Rand in 2008?

Ryan was forced to try and explain. Here’s the exchange between Ryan and ABC News’ Christiane Amanpour.

AMANPOUR: Congressman Ryan, you actually voted for the Wall Street bailout, and indeed the auto bailout as well.

RYAN: Right. The auto bailout in order to prevent TARP from going to the auto companies, because we already put $25 billion aside in an energy bill, which I disapproved of, to go to auto companies.

What? Ryan later tried to clarify his remarks in an interview with the Daily Caller:

The president’s chief of staff made it extremely clear to me before the vote, which is either the auto companies get the money that was put in the Energy Department for them already — a bill that I voted against because I didn’t want to give them that money, which was only within the $25 billion, money that was already expended but not obligated — or the president was going to give them TARP, with no limit. That’s what they told me. That’s what the president’s chief of staff explained to me. I said, ‘Well, I don’t want them to get TARP. We want to keep TARP on a leash. We don’t want to expand it. So give them that Energy Department money that at least puts them out of TARP, and is limited.’ Well, where are we now? What I feared would happen did happen. The bill failed, and now they’ve got $87 billion from TARP, money we’re not going to get back. And now TARP, as a precedent established by the Bush administration, whereby the Obama administration now has turned this thing into its latest slush fund. And so I voted for that to prevent precisely what has happened, which I feared would happen.

Ryan should take a leaf out of Mr T.’s book and quit his jibber-jabber. He voted for TARP, as well as the auto bailout, and he has no reasonable explanation beyond fierce loyalty.

Republicans had two choices — Ron Paul and Gary Johnson — who are both consistent fiscal conservatives with no record of supporting bailouts or expansions of government, and no record of supporting costly pre-emptive wars. The Republican Party rejected both candidates, and instead went with two defenders of bailouts, two expanders of government, two believers in pre-emptive war and a large, powerful security state. That decision says an awful lot about the Republican Party.

People who want to see government play a smaller role in the economy and society should look elsewhere; rhetoric aside, both of the major tickets have a track record of increasing the size and scope of government, increasing debt levels and bailing out favoured corporations.

The Origin of Money

Markets are true democracies. The allocation of resources, capital and labour is achieved through the mechanism of spending, and so is based on spending preferences. As money flows through the economy, the popular grows and the unpopular shrinks. Producers receive a signal to produce more or less based on spending preferences. Markets distribute power according to demand and productivity; the more you earn, the more power you accumulate to allocate resources, capital and labour. As the power to allocate resources (i.e. money) is widely desired, markets encourage the development of skills, talents and ideas.

Planned economies have a track record of failure, in my view because they lack this democratic dimension. The state may claim to be “scientific”, but as Hayek conclusively illustrated, the lack of any real feedback mechanism has always led planned economies into hideous misallocations of resources, the most egregious example being the collectivisation of agriculture in both Maoist China and Soviet Russia, which led to mass starvation and millions of deaths. The market’s resource allocation system is a complex, multi-dimensional process that blends together the skills, knowledge and ideas of society, and for which there is no substitute. Socialism might claim to represent the wider interests of society, but in adopting a system based on economic planning, it ignores those wider interests and desires along with the democratic market process.

This complex process begins with the designation of money, which is why the choice of the monetary medium is critical.

Like all democracies, markets can be corrupted.

Whoever creates the money holds a position of great power — the choice of how to allocate resources is in their hands. They choose who gets the money, and for what, and when. And they do this again and again and again.

Who should create the monetary medium? Today, money is designated by a central bank and allocated through the financial system via credit creation. Historically, in the days of commodity-money, money was initially allocated by digging it up out of the ground. Anyone with a shovel or a gold pan could create money. In the days of barter, a monetary medium was created even more simply, through producing things others were happy to swap or credit.
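
As a minimal illustration of credit creation, here is the simplified textbook “money multiplier” model. The reserve ratio is an assumption for illustration, and this is a sketch of the mechanism, not a claim about how any actual central bank operates:

```python
# Textbook fractional-reserve sketch: an initial deposit is re-lent
# repeatedly, with a fraction held back as reserves each round.
# The 10% reserve ratio is an illustrative assumption.

def broad_money(initial_deposit, reserve_ratio, rounds=50):
    total, deposit = 0.0, initial_deposit
    for _ in range(rounds):
        total += deposit                # count this round's new deposits
        deposit *= (1 - reserve_ratio)  # the rest is lent out and redeposited
    return total

print(f"£100 at a 10% reserve ratio supports ~£{broad_money(100, 0.10):.0f} of deposits")
# Converges toward 100 / 0.10 = £1,000: a small monetary base can
# support a much larger superstructure of bank-created credit.
```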

While central banks might claim that they have the nation’s best democratic interests at heart, evidence shows that since the world exited the gold exchange standard in 1971 (thus giving banks a monopoly over the allocation of money and credit), bank assets as a percentage of GDP have exploded (this data is from the United Kingdom, but there is a similar pattern around the world).

Clearly, some pigs are more equal than others:

Giving banks a monopoly over the allocation of capital has dramatically enriched banking interests. It is also correlated with a dramatic fall in total factor productivity, and a dramatic increase in income inequality.

Very simply, I believe that the present system is inherently undemocratic. Giving banks a monopoly over the initial allocation of credit and money enriches the banks at the expense of society. Banks and bankers — who produce nothing — allocate resources to their interests. The rest of society — including all the productive sectors — get crumbs from the table. The market mechanism is perverted, and bent in favour of the financial system. The financial system can subsidise incompetence and ineptitude through bailouts and helicopter drops.

Such a system is unsustainable. The subsidisation of incompetence breeds more incompetence, and weakens the system, whether it is government handing off corporate welfare to inept corporations, or whether it is the central bank bailing out inept financial institutions. The financial system never learned the lessons of 2008; MF Global and the London Whale illustrate that. Printing money to save broken systems just makes these systems more fragile and prone to collapse. Ignoring the market mechanism, and the interests of the wider society to subsidise the financial sector and well-connected corporations just makes society angry and disaffected.

This monopoly will eventually discredit itself through the subsidisation of graft and incompetence. It is just a matter of time.