But isn’t there a better way to increase a borrowing limit — and one that doesn’t freak out markets, investors, and, well, just about everyone every few months?
The retirement of the baby boom cohorts means that the country’s labor force is likely to be growing far more slowly in the decades ahead than it did in prior decades. The United States is not alone in facing this situation. The rate of growth of the workforce has slowed or even turned negative in almost every wealthy country. Japan leads the way, with a workforce that has been shrinking in size for more than a decade.
With a stagnant or declining labor force, workers will have their choice of jobs. It is unlikely that they will want to work as custodians or dishwashers for $7.25 an hour. They will either take jobs that offer higher pay or these jobs will have to substantially increase their pay in order to compete.
This means that the people who hire low-paid workers to clean their houses, serve their meals, or tend their lawns and gardens will likely have to pay higher wages. That prospect may sound like a disaster scenario for this small group of affluent people, but it sounds like great news for the tens of millions of people who hold these sorts of jobs. It should mean rapidly rising living standards for those who have been left behind over the last three decades.
Of course, Baker could just look at the data from Japan. Real wages there have been depressed in recent years, even while the labour force has shrunk:
Even more damningly, labour’s share of income in Japan has declined even more sharply than in the United States and in other nations with growing working-age populations:
Perhaps Mr Baker was thinking of an older example: the Black Death, which killed about half the people in Europe. Many (including me, until I looked it up) believe that the resulting shortage of agricultural labour led to soaring real wages for peasants and a redistribution of economic power away from landowners. Recent evidence, however, casts doubt on this hypothesis. While nominal peasant wages did indeed increase in the aftermath of the Black Death, real wages may actually have fallen for decades. That may have helped heavily indebted peasants, but everyone else had to endure punishing declines in their standard of living, not to mention the psychological trauma of surviving such a devastating plague.
In southern England, real wages of building craftsmen (rural and urban), having plummeted with the natural disaster of the Great Famine (1315-21), thereafter rose to a new peak in 1336-40. But then their real wages fell during the 1340s, and continued their decline after the onslaught of the Black Death, indeed into the 1360s. Not until the later 1370s – almost thirty years after the Black Death – did real wages finally recover and then rapidly surpass the peak achieved in the late 1330s.
And if we look at China, a country which has seen stunning real wage growth in recent years, it is clear that this growth has come in the context of a growing working-age population. China’s working-age population hit one billion for the first time in 2011.
To me at least, this seems to suggest that while, all else being equal, a shrinking working-age population might lead to a more competitive labour market, all else is not equal. Employers invest in more capital-intensive processes like automation and robotics to compensate for a lack of workers, or, in our globalised world, they shift operations somewhere with a larger labour force (like China today, or perhaps Africa further into the future). Even more simply, a falling population, whether the result of a natural disaster like the Black Death or of demographic trends as in Japan, may lead to an economic depression due to falling demand.
This suggests that Baker’s conclusions are extremely optimistic for labour, and that shrinking populations may be bad news for wages.
There is a popular meme going around, popularised by the likes of Tyler Cowen, Paul Krugman and Noah Smith, suggesting that the recent fall in worker compensation as a percentage of GDP is mostly due to the so-called “rise of the robots”:
For most of modern history, two-thirds of the income of most rich nations has gone to pay salaries and wages for people who work, while one-third has gone to pay dividends, capital gains, interest, rent, etc. to the people who own capital. This two-thirds/one-third division was so stable that people began to believe it would last forever. But in the past ten years, something has changed. Labor’s share of income has steadily declined, falling by several percentage points since 2000. It now sits at around 60% or lower. The fall of labor income, and the rise of capital income, has contributed to America’s growing inequality.
In past times, technological change always augmented the abilities of human beings. A worker with a machine saw was much more productive than a worker with a hand saw. The fears of “Luddites,” who tried to prevent the spread of technology out of fear of losing their jobs, proved unfounded. But that was then, and this is now. Recent technological advances in the area of computers and automation have begun to do some higher cognitive tasks – think of robots building cars, stocking groceries, doing your taxes.
Once human cognition is replaced, what else have we got? For the ultimate extreme example, imagine a robot that costs $5 to manufacture and can do everything you do, only better. You would be as obsolete as a horse.
Now, humans will never be completely replaced, like horses were. Horses have no property rights or reproductive rights, nor the intelligence to enter into contracts. There will always be something for humans to do for money. But it is quite possible that workers’ share of what society produces will continue to go down and down, as our economy becomes more and more capital-intensive.
So, does the rise of the robots really explain the stagnation of wages?
This is the picture for American workers, representing wages and salaries as a percentage of GDP:
It is certainly true that wages have fallen as a percentage of economic activity (and that corporate profits as a percentage of economic activity have risen — a favourite topic of mine).
But there are two variables in the wages-to-GDP ratio: nominal wages and nominal GDP. Nominal wages have actually risen, and continued to rise on a moderately steep trajectory:
And average wages continue to climb in nominal terms, too. What has actually happened to the wages-to-GDP ratio is not that America’s wage bill has fallen, but that wages have not risen as fast as the other components of GDP (rents, interest payments, capital gains, dividends, etc.). It is not as if wages are collapsing as robots and automation (as well as other factors like job migration to the Far East) ravage the American workforce.
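The arithmetic here is worth spelling out: a ratio can fall even when its numerator rises, so long as the denominator rises faster. A toy illustration (the figures below are hypothetical round numbers, not actual BEA data):

```python
# Toy arithmetic: the wage share can fall even while nominal wages rise,
# because the denominator (nominal GDP) grows faster.
# All figures are illustrative, not actual BEA data.
wages_then, gdp_then = 4.8, 10.0   # $ trillions (hypothetical)
wages_now,  gdp_now  = 6.9, 15.5   # $ trillions (hypothetical)

share_then = wages_then / gdp_then
share_now  = wages_now / gdp_now

print(f"wage bill rose:  {wages_then} -> {wages_now}")
print(f"wage share fell: {share_then:.1%} -> {share_now:.1%}")
```

Under these made-up numbers the wage bill grows by more than 40%, yet the wage share still drops by several percentage points, which is the pattern the chart above shows.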
It is more accurate to say that, beginning around the turn of the millennium, there has been an outgrowth of economic activity that does not yield wages. This coincided with the post-Gramm-Leach-Bliley landscape of mass financialisation and the derivatives and shadow banking megabubbles, as well as the multi-trillion-dollar military-industrial spending spree that accompanied the advent of the War on Terror. Perhaps, if we want to know why the overwhelming majority of this new economic activity is not trickling down into wages, we should look less at robots and more at a financial and regulatory landscape in which Wall Street megabanks pay million-dollar fines for billion-dollar crimes? Perhaps we should look at a monetary policy that dumps new money solely into the financial sector, and which has been shown empirically to enrich the richest few far faster than everyone else?
But let’s focus specifically on jobs. The problem with the view that this is mostly a technology shock is summed up beautifully in this tweet I received from Saifedean Ammous:
@azizonomics I wonder how humanity still manages to find jobs after the automation shock of the invention of the wheel.
— Saifedean Ammous (@saifedean) January 15, 2013
The Luddite notion that technology might render humans obsolete is as old as the wheel. And again and again, humans have found new ways to employ themselves in spite of the new technology making old professions obsolete. Agriculture was once the overwhelming mainstay of US employment. It is no more:
This did not lead to a permanent depression and permanent and massive unemployment. True, it led to a difficult transition period, the Great Depression in the 1930s (similar in many ways, as Joe Stiglitz has pointed out, to the present day). But eventually (after a long and difficult depression) humans retrained and re-employed themselves in new avenues.
It is certainly possible that we are in a similar transition period today — manufacturing has largely been shipped overseas, and service jobs are being eliminated by improvements in efficiency and greater automation. Indeed, it may prove to be an even more difficult transition than that of the 1930s. Employment remains far below its pre-crisis peak:
But that doesn’t mean that human beings (and their labour) are being rendered obsolete — they just need to find new employment niches in the economic landscape. As an early example, millions of people have begun to make a living online — creating content, writing code, building platforms, endorsing and advertising products, etc. As the information universe continues to grow and develop, such employment and business opportunities will probably continue to flower — just as new work opportunities (thankfully) replaced mass agriculture. Humans still have a vast array of useful attributes that cannot be automated — creativity, lateral thinking & innovation, interpersonal communication, opinions, emotions, and so on. Noah Smith’s example of a robot that “can do everything you can do” won’t exist in the foreseeable future (let alone at a cost of $5) — and any society that could master the level of technology necessary to produce such a thing would probably not need to work (at least in the sense we use the word today) at all. Until then, luckily, finding new niches is something that humans have proven very, very good at.
Famed statistician and sabermetrician Nate Silver is calling the US Presidential race for Obama, in a big way:
Silver’s mathematical model gives Obama an 85% chance of winning. The Presidential election is based on an electoral college system, so Silver’s model rightly looks at state-level polls. And in swing state polls, Obama is mostly winning:
This is slightly jarring, because in national polls, the two candidates are locked together:
So who’s right? Is the election on a knife-edge like the national polls suggest, or is Obama strongly likely to win as Silver’s model suggests?
While the election could easily go either way depending on turnout, I think Silver’s model is predicting the wrong result. In order for that to be the case, the state polling data has to be wrong.
There are a number of factors that lead me to believe that this is the case.
First, Republicans tend to outperform their poll numbers. In 2008, the national average got the national race just about right:
In the end, Obama won the election with 52.9% of the vote, against McCain’s 45.7%.
However, polls have historically underestimated Republican support. Except in 2000 (when the November Surprise revelation of a George W. Bush drunk-driving charge pushed Gore 3.2% higher than the final round of polling suggested), Republican Presidential candidates since 1992 have outperformed their final polls by a mean of 1.8 points. Such an outcome for Romney would put him 1.5% ahead in the national polls, and imperil Obama’s grip on the swing states.
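The adjustment above is simple arithmetic: add the historical outperformance to Romney’s topline and compare. A back-of-envelope sketch (the poll averages below are hypothetical stand-ins for the final national numbers; only the 1.8-point figure comes from the text):

```python
# Back-of-envelope poll-bias adjustment.
# The 1.8-point mean historical GOP outperformance is cited in the text;
# the national poll averages below are hypothetical illustrations.
obama, romney = 47.3, 47.0       # hypothetical national poll averages
gop_outperformance = 1.8         # historical mean outperformance (points)

adjusted_margin = (romney + gop_outperformance) - obama
print(f"bias-adjusted Romney margin: {adjusted_margin:+.1f} points")  # prints +1.5
```

A near-tied topline thus becomes a modest Republican lead once the historical skew is applied, which is the basis of the claim above.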
Second, the Bradley Effect. The interesting thing about the swing states is that many of them are disproportionately white. The United States is 72% white, but Iowa is 89% white, Indiana is 81% white, Ohio is 81% white, Minnesota is 83% white, Pennsylvania is 79% white, New Hampshire is 92% white, Maine is 94% white and Wisconsin is 83% white. This means that they are particularly susceptible to the Bradley Effect — where white voters tell a pollster they will vote for a black candidate, but in reality vote for a white alternative. In a state in which Obama holds a small lead in state-level polling, only a small Bradley Effect would be necessary to turn it red.
This effect may have already affected Barack Obama in the past — in the 2008 primaries, Obama was shown by the polls to be leading in New Hampshire, but in reality Hillary Clinton ran out the winner. And many national polls in October 2008 showed Obama with much bigger leads than he really achieved at the polls — Gallup showed Obama as 11% ahead, Pew showed Obama as 16% ahead.
A small Bradley Effect will not hurt Obama where he is 7% or 11% or 16% ahead in the polls. But when polls are closer, as they mostly are in the swing states, it becomes more plausible that such an effect could change the course of the race.
And the Bradley Effect in 2012 may be bigger than in 2008. A recent poll by the Associated Press concluded:
A majority of Americans (51 percent) now hold “explicit anti-black attitudes” — up from 49 percent in 2008 — and 56 percent showed prejudice on an implicit racism test.
Finally, polls have tended to overestimate the popularity of incumbent Presidents, especially Democrats. In 1980, polls put Jimmy Carter 3% ahead of his final tally, and in 1996 polls put Bill Clinton 2.8% ahead of his final tally:
Taken together, these difficult-to-quantify factors pose a serious challenge to Silver’s model. While it is fine to build a predictive model on polling data, if the polling data fed into the model is skewed, then any predictions will be skewed. Garbage in, garbage out.
I rate Obama’s chance of being re-elected as no better than 50:50. If Silver really rates his chances as 85:15, perhaps he should consider taking bets at those odds.
Obviously, Silver’s predictive model (and, far more importantly, the state-level polling data) proved even more accurate than in 2008. However, the 2010 British General Election (in which the polls, and therefore Silver, vastly overestimated Liberal Democrat support, leading to an electoral projection that was way off the mark) illustrates that there remain enough issues with the reliability of polling data to ensure that Silver’s model (and others like it) continue to suffer from the problem of fat tails. With solid, transparent and plentiful data (as Taleb puts it, in “Mediocristan”) such models work very, very well. But there remains plenty of scope (as Britain in 2010 illustrates) for polls to be systematically wrong (“Extremistan”). Given the likelihood that every news network will have its own state-level poll aggregator and Nate Silver soundalike on hand come 2016, that might well be a poetic date for the chaotic effects of unreliable polling data to reappear. In the meantime, I congratulate the pollsters for providing Silver with the data necessary to make accurate projections.
Britain has returned to growth:
But compared even to the USA — which has huge problems of its own — Britain is still mired in the depths of a depression:
An Olympic bounce does not constitute a recovery. As I noted in March, Britain is under-performing the United States — in GDP and in unemployment. Although Cameron and Osborne keep claiming that they are deficit hawks who want to cut the government deficit, the debt keeps climbing.
Defenders of Cameron’s policies might claim that we are going through a necessary structural adjustment, and that lowered GDP and elevated unemployment are necessary for a time. I agree that a structural adjustment was necessary after the financial crisis of 2008, but I see little evidence of one. The over-leveraged and corrupt financial sector is still dominated by the same large players as before. True, many unsustainable high street firms have gone out of business, but the most unsustainable firms of all, the banks and financial firms that caused the financial crisis and had to be bailed out, have avoided liquidation. The real story here is not a structural adjustment but the slow bleeding out of the welfare state via deep and far-reaching cuts.
Britain has become welfare-dependent. Welfare expenditure is now over 25% of Britain’s total GDP. Multi-billion-pound cuts to that figure will hurt (and already have hurt) GDP.
I believe countries are better with small governments and a larger private sector. The private sector consists of many, many individuals acting out their subjective economic preferences. This dynamic is largely experimental; businesses come and go, survive, thrive and die based upon their ability to stay liquid and retain a market, and this competition for demand forces innovation. The government sector is centrally directed. Governments do not have to behave like a business, they do not have to innovate or compete, as they have the power to tax and compel. (The exception to this is when governments become overrun by the representatives of private industries and corporations, who then leverage the machinations of the state to benefit corporations. When this occurs and markets become rigged in the favour of certain well-connected competitors, it matters little whether we call such industries “private sector” or “public sector”).
So I am sympathetic to the idea that Britain ought to have a smaller welfare state, and fewer transfer payments than it presently does. But the current and historical data shows very clearly that now is not the time to make such an adjustment. The time to reduce the size of the welfare state is when the economy is booming. This is the time that there is work for welfare claimants to go to. Cutting into a depressed economy might create a strong incentive for the jobless to work, but if there is little or no job creation for the jobless to go to, then what use are cuts? To reduce government deficits? If that’s the case, then why are British government deficits rising even though spending is being reduced? (The answer, of course, is falling tax revenues).
An alternative policy that would reduce unemployment and raise GDP without increasing the size of government is to force bailed-out banks sitting on huge hoards of cash to offer loans to the jobless to start their own private businesses. The money would be transferred to those who could be out working and creating wealth but who cannot get credit through conventional channels, unlike the too-big-to-fail megabanks, which are flush with credit yet refuse to increase lending to the wider public. Even if the majority of these businesses were to fail, this would ensure a large boost in spending and incomes in the short run, and the few new businesses that succeed would provide employment and tax revenues for years to come. Once there is a real recovery and solid growth in GDP and in employment, then the government can act to decrease its size and slash its debt. Indeed, with growing tax revenues it is probable that the deficit would end up shrinking of its own accord.
Iranian officials are once again firing off belligerent rhetoric.
Hojjat al-Eslam Ali Shirazi, the representative of Iran’s Supreme Leader Ayatollah Ali Khamenei to the Islamic Republic’s Qods Force, said this week that Iran needed just “24 hours and an excuse” to destroy Israel.
In his first public interview in a year, reported in the Persian-language Jahan News, which is close to the regime, Shirazi said if Israel attacked Iran, the Islamic Republic would be able to turn the conflict into a war of attrition that would lead to Israel’s destruction.
“If such a war does happen, it would not be a long war, and it would benefit the entire Islamic umma [the global community of Muslims]. We have expertise in fighting wars of attrition and Israel cannot fight a war of attrition,” Shirazi said, referring to Iran’s eight-year war of attrition against Iraq.
Such claims are, more or less, inconsequential rubbish. The fact remains that Israel has nuclear weapons and a nuclear second-strike capability, Iran has neither, and the Iranian leadership knows it; they are extremely unlikely to start a war in which Iran (as Shimon Peres put it) would be the one wiped off the face of the Earth by Israeli plutonium. Yet the facts of military science will do little to stop the hawks of the West sounding off that Iran is irrational, that Iran is cooking up a plan to destroy Israel, and that it must therefore face regime change.
To grasp what is really occurring here we must look at how authoritarian Middle Eastern regimes (or, indeed, authoritarian regimes in general) function. Authoritarian regimes must maintain a cloak of authority. Tyrants do not attempt to look or sound weak; they try to project an aura of invincibility and indefatigability. We saw this during the last Gulf War, where Iraq’s information minister Muhammad Saeed al-Sahhaf — nicknamed Baghdad Bob in the American media — shot off hundreds of absurd statements during the war about how Iraqi troops were crushing the Americans, quite in contrast to the facts on the ground and right up until American tanks were rolling through the streets of Baghdad.
Baghdad Bob was not deluded. He was merely playing his role, and trying to project an aura of regime invincibility — providing propaganda for domestic consumption to keep the Iraqi population loyal to Saddam Hussein. It was a dog and pony show.
Iran’s belligerent rhetoric in this case is also strictly for domestic consumption: fierce rhetoric to keep the Iranian population fearful of the regime. Just like Baghdad Bob’s, the Iranian propaganda is far removed from the real facts of the conflict. Whether the Iranian people really believe the regime’s propaganda, especially as the Iranian economy continues to worsen under sanctions, is dubious.
Yet one group of people — the Western neoconservatives, who are looking for another war — are more than happy to buy into the dog and pony “destroy Israel” bullshit.
As Robert Gates noted this week:
Painting a picture of internal political dysfunction in a dangerous world, former Defense Secretary Robert Gates warned Wednesday night that a U.S. or Israeli attack on Iran would have disastrous consequences.
Neither the United States nor Israel is capable of wiping out Iran’s nuclear capability, he said, and “such an attack would make a nuclear-armed Iran inevitable. They would just bury the program deeper and make it more covert.”
Iran could respond by disrupting world oil traffic and launching a wave of terrorism across the region, Gates said.
“The results of an American or Israeli military strike on Iran could, in my view, prove catastrophic, haunting us for generations in that part of the world.”
A regional war in the Middle East could result, potentially sucking in the United States and Eurasian powers like China, Pakistan and Russia. China and Pakistan have both hinted that they could defend Iran if Iran were attacked, and for good reason, as Iran supplies both with significant quantities of energy.
Frustratingly, the Iranian regime keep giving the neoconservatives more rope with which to hang themselves — and the West — on a cross of imperial overstretch, debt and blowback.
The YouTube video depicting Mohammed is nothing more than the straw that broke the camel’s back. This kind of violent uprising against American power and interests in the region has been a long time in the making. It is not just the continuation of drone strikes which often kill civilians in Pakistan, Yemen, Somalia and Afghanistan, either. Nor is it the American invasions and occupations of Iraq and Afghanistan. Nor is it the United States and the West’s support for various deeply unpopular regimes such as the monarchies in Bahrain and Saudi Arabia (and formerly Iran). Nor is it that America has long favoured Israel over the Arab states, condemning, invading and fomenting revolution in Muslim nations for the pursuit of nuclear weapons while turning a blind eye to Israel’s nuclear weapons and its continued expansion into the West Bank.
Americans and Europeans are no doubt looking at the protests over the “film”, recalling the even more violent protests during the Danish cartoon affair, and shaking their heads once more at the seeming irrationality and backwardness of Muslims, who would let a work of “art”, particularly one as trivial as this, drive them to mass protests and violence.
Yet Muslims in Egypt, Libya and around the world equally look at American actions, from sanctions against and then an invasion of Iraq that killed hundreds of thousands of Iraqis and sent the country back to the Stone Age, to unflinching support for Israel and all the Arab authoritarian regimes (secular and royal alike) and drone strikes that always seem to kill unintended civilians “by mistake”, and wonder with equal bewilderment how “we” can be so barbaric and uncivilised.
All of these things (and many more) have contributed to Muslim and Arab anger toward the United States and the West. Yet the underlying fact of all of these historical threads has been the United States’ oil-driven foreign policy. Very simply, the United States has for over half a century pursued a foreign policy in the region geared toward maintaining the flow of oil out of the region at any cost — even at the cost of inflaming the irrational and psychopathic religious elements that have long existed in the region.
This is not to defend the barbaric elements who resort to violence and aggression as a means of expressing their disappointment with U.S. foreign policy. It is merely to recognise that you do not stir the hornet’s nest and then expect not to get stung.
And the sad thing is that stirring the hornet’s nest is totally avoidable. There is plenty of oil and energy capacity in the world beyond the Middle East. The United States is misallocating capital by spending time, resources, energy and manpower on occupying the Middle East and playing world policeman. Every dollar taken out of the economy by the IRS and spent droning the Middle East back to the stone age is a dollar of lost productivity for the private market. It is a dollar that the market could have spent increasing American energy capacity and energy infrastructure in the United States, whether in oil, natural gas, solar, wind or hydroelectric.
And this effect can spiral; every dollar spent on arming and training bin Laden and his allies to fight the Soviet Union begot many thousands more dollars of military spending when bin Laden’s mercenaries turned their firepower onto the United States, and the United States chose to spend over ten years and counting occupying Afghanistan (rightly known as the graveyard of empires). It is likely that the current uprisings will trigger even more U.S. interventionism in the region (indeed, they already have: marines have been dispatched to Yemen), costing billions or even trillions of dollars more (especially if an invasion of Iran is the ultimate outcome). This in turn is likely to trigger even fiercer resistance to America from the Islamist elements, and so the spiral continues.
The only way out of this money-sucking, resource-sucking, life-sucking trap that is very literally obliterating the American empire is to swallow pride, get out of the Middle East, and stop misallocating American resources and productivity on unwinnable wars.
But neither major Presidential candidate is interested in such a policy. Perhaps it is because war is a great profit source for the military-industrial complex, the force to which both the Democratic and Republican parties are beholden?
In any case, we should expect to see much more of this:
Paul Ryan talks like a small government conservative:
Too much government inevitably leads to bad government. When government grows too much and extends beyond its limits, it usually does things poorly.
And the WSJ is pumping up Ryan as an antidote to the growth of government:
Ryan represents the GOP’s new generation of reformers. More than any other politician, the House Budget Chairman has defined those stakes well as a generational choice about the role of government and whether America will once again become a growth economy or sink into interest-group dominated decline.
But Ryan himself has been responsible for a lot of that government growth. He loyally voted for all the big-government programs George W. Bush signed into law: Medicare Part D, often described as the largest expansion of the welfare state since Lyndon Johnson’s Great Society; the Department of Homeland Security and the TSA; the wars in Iraq and Afghanistan; the PATRIOT Act and the NDAA; the TARP bailout of Wall Street; and the bailout of General Motors. So long as it was debt-fuelled spending authorised by a Republican (and during the Bush years, there was an awful lot of debt-fuelled spending authorised by Republicans), Ryan was out voting for it.
Ryan’s voting record establishes firmly that Ryan is as much for bailouts and the expansion of government as Obama. He talks like a small government conservative on the deficit, too, but dig into the details and he promises to balance the budget on the back of closing loopholes in the tax code that he refuses to specify, while completely ignoring the severe problem of excessive total debt that is keeping the economy depressed today.
Does Ryan have an explanation for his voting record? Why did he put party loyalty above loyalty to the principles he now claims to espouse? Or did he forget his small government principles during the Bush years? Did he only discover Ayn Rand in 2008?
Ryan was forced to try and explain. Here’s the exchange between Ryan and ABC News’ Christiane Amanpour.
AMANPOUR: Congressman Ryan, you actually voted for the Wall Street bailout, and indeed the auto bailout as well.

RYAN: Right. The auto bailout in order to prevent TARP from going to the auto companies, because we already put $25 billion aside in an energy bill, which I disapproved of, to go to auto companies.
What? Ryan later tried to clarify his remarks in an interview with the Daily Caller:
The president’s chief of staff made it extremely clear to me before the vote, which is either the auto companies get the money that was put in the Energy Department for them already — a bill that I voted against because I didn’t want to give them that money, which was only within the $25 billion, money that was already expended but not obligated — or the president was going to give them TARP, with no limit. That’s what they told me. That’s what the president’s chief of staff explained to me. I said, ‘Well, I don’t want them to get TARP. We want to keep TARP on a leash. We don’t want to expand it. So give them that Energy Department money that at least puts them out of TARP, and is limited.’ Well, where are we now? What I feared would happen did happen. The bill failed, and now they’ve got $87 billion from TARP, money we’re not going to get back. And now TARP, as a precedent established by the Bush administration, whereby the Obama administration now has turned this thing into its latest slush fund. And so I voted for that to prevent precisely what has happened, which I feared would happen.
Ryan should take a leaf out of Mr. T’s book and quit his jibber-jabber. He voted for TARP, as well as the auto bailout, and he has no reasonable explanation beyond fierce party loyalty.
Republicans had two choices — Ron Paul and Gary Johnson — who are both consistent fiscal conservatives with no record of supporting bailouts or expansions of government, and no record of supporting costly pre-emptive wars. The Republican Party rejected both candidates, and instead went with two defenders of bailouts, two expanders of government, two believers in pre-emptive war and a large, powerful security state. That decision says an awful lot about the Republican Party.
People who want to see government play a smaller role in the economy and society should look elsewhere; rhetoric aside, both major tickets have a track record of increasing the size and scope of government, increasing debt levels and bailing out favoured corporations.
Markets are true democracies. The allocation of resources, capital and labour is achieved through the mechanism of spending: as money flows through the economy the popular grows and the unpopular shrinks, and producers receive a signal to produce more or less based on spending preferences. Markets distribute power according to demand and productivity; the more you earn, the more power you accumulate to allocate resources, capital and labour. And because the power to allocate resources (i.e. money) is widely desired, markets encourage the development of skills, talents and ideas.
Planned economies have a track record of failure, in my view because they lack this democratic dimension. The state may claim to be “scientific”, but as Hayek illustrated, the absence of any real feedback mechanism has repeatedly led planned economies into hideous misallocations of resources, the most egregious example being the collectivisation of agriculture in both Maoist China and Soviet Russia, which led to mass starvation and millions of deaths. The market’s resource allocation system is a complex, multi-dimensional process that blends together the skills, knowledge and ideas of society, and for which there is no substitute. Socialism might claim to represent the wider interests of society, but a system based on economic planning ignores the wider interests and desires of society, along with the democratic market process.
This complex process begins with the designation of money, which is why the choice of the monetary medium is critical.
Like all democracies, markets can be corrupted.
Whoever creates the money holds a position of great power — the choice of how to allocate resources is in their hands. They choose who gets the money, and for what, and when. And they do this again and again and again.
Who should create the monetary medium? Today, money is designated by a central bank and allocated through the financial system via credit creation. Historically, in the days of commodity-money, money was initially allocated by digging it up out of the ground. Anyone with a shovel or a gold pan could create money. In the days of barter, a monetary medium was created even more simply, through producing things others were happy to swap or credit.
While central banks might claim that they have the nation’s best democratic interests at heart, evidence shows that since the world exited the gold exchange standard in 1971 (thus giving banks a monopoly over the allocation of money and credit), bank assets as a percentage of GDP have exploded (this data is from the United Kingdom, but there is a similar pattern around the world).
Clearly, some pigs are more equal than others:
Giving banks a monopoly over the allocation of capital has dramatically enriched banking interests. It is also correlated with a dramatic fall in total factor productivity, and a dramatic increase in income inequality.
Very simply, I believe that the present system is inherently undemocratic. Giving banks a monopoly over the initial allocation of credit and money enriches the banks at the expense of society. Banks and bankers — who produce nothing — allocate resources to their interests. The rest of society — including all the productive sectors — get crumbs from the table. The market mechanism is perverted, and bent in favour of the financial system. The financial system can subsidise incompetence and ineptitude through bailouts and helicopter drops.
Such a system is unsustainable. The subsidisation of incompetence breeds more incompetence and weakens the system, whether it is government handing out corporate welfare to inept corporations, or the central bank bailing out inept financial institutions. The financial system never learned the lessons of 2008; MF Global and the London Whale illustrate that. Printing money to save broken systems just makes those systems more fragile and prone to collapse. Ignoring the market mechanism and the interests of wider society in order to subsidise the financial sector and well-connected corporations just makes society angry and disaffected.
This monopoly will eventually discredit itself through the subsidisation of graft and incompetence. It is just a matter of time.
The whole world knows the name Gavrilo Princip, and that of the man he assassinated, Archduke Franz Ferdinand. Princip’s shot triggered the Austro-Hungarian invasion of Serbia that set in motion the chain of events leading to the Great War of 1914.
After Serbia appealed to Russia for help, Russia began moving towards mobilization of its army, believing that Germany was using the crisis as an excuse to launch war in the Balkans. Upon hearing news of Russia’s general mobilization, Germany declared war on Russia. The German army then launched its attack on Russia’s ally, France, through Belgium, violating Belgian neutrality and bringing Great Britain into the war as well.
Is it possible that a similar chain of events may have already begun unfolding with the Syrian downing of a Turkish F-4 fighter jet? Turkey has already invoked a full meeting of NATO, claimed that Syria has fired on a second Turkish plane, and vowed that Syria’s actions “won’t go unpunished”.
The vast and sprawling system of national alliances that existed prior to the events of 1914 was considered by policymakers of the time to be a counterbalance against excessive tension and the threat of war. The great powers created alliances ostensibly for the purpose of deterring war. The dominant view was that the potential for dragging in allies reduced the chances of an attack. In reality, it just meant that one spark could set the entire world aflame.
This is functionally the same as the interconnecting mesh of derivatives and shadow intermediation that foreshadowed the crash of 2008. As financial parties sold each other more and more “hedges”, the consensus of the time was that this made the system safer, as it allowed risk to be dissipated around the system. The theory was — and there were plenty of inaccurate mathematical models to back this up — that spreading risk around the system made the financial system safer. As it turned out, it didn’t. In the wake of MF Global and the London Whale, we know that the financial system has not learned the lessons of 2008. But it seems even more absurd that the diplomatic system has not really learned the lessons of 1914.
The NATO system — set up to oppose the Warsaw Pact system, which no longer exists — functions the same way: rather than dissipating risk, it allows for the magnification of international tensions into full-on regional and global wars. In the late 20th century the threat of nuclear war proved a highly effective deterrent which limited the potential for all-out war between the great powers, offsetting much of the risk of the hyper-fragile treaty system. Yet the potential for magnifying small regional problems into bigger wars will continue to exist for as long as NATO and similar organisations prevail.
We do not know exactly what arrangements Syria has with Russia and China. There is no formal defensive pact in place (although there is one between Syria and Iran), but it is fair to assume that Russia will be keen to maintain its Syrian naval assets, a view supported by the fact that Russia heavily subsidises the Syrian military, and has blocked all UN-led efforts toward intervention in Syria.
After the Cold War, the Warsaw Pact was allowed to disintegrate. Until NATO is similarly allowed to disintegrate, the threat of magnification will remain large. Could a border skirmish between Syria and Turkey trigger a regional or even global war? Under the status quo, anything is possible.