Ben Bernanke Is Right About Interconnective Innovation


I’d just like to double down on Ben Bernanke’s comments on why he is optimistic about the future of human economic progress in the long run:

Pessimists may be paying too little attention to the strength of the underlying economic and social forces that generate innovation in the modern world. Invention was once the province of the isolated scientist or tinkerer. The transmission of new ideas and the adaptation of the best new insights to commercial uses were slow and erratic. But all of that is changing radically. We live on a planet that is becoming richer and more populous, and in which not only the most advanced economies but also large emerging market nations like China and India increasingly see their economic futures as tied to technological innovation. In that context, the number of trained scientists and engineers is increasing rapidly, as are the resources for research being provided by universities, governments, and the private sector. Moreover, because of the Internet and other advances in communications, collaboration and the exchange of ideas take place at high speed and with little regard for geographic distance. For example, research papers are now disseminated and critiqued almost instantaneously rather than after publication in a journal several years after they are written. And, importantly, as trade and globalization increase the size of the potential market for new products, the possible economic rewards for being first with an innovative product or process are growing rapidly. In short, both humanity’s capacity to innovate and the incentives to innovate are greater today than at any other time in history.

My reasons for optimism about the long run are predominantly technological rather than social. I tend to see the potential for huge organic growth in the long run resulting from falling energy and manufacturing costs: superabundant alternative energy sources like solar, wind, nuclear and synthetic petroleum, as well as decentralised manufacturing through 3-D printing and, ultimately, molecular manufacturing.

But Bernanke’s reasons are pretty good too. I see it every day. Using Twitter, the blogosphere and various other online interfaces, I discuss and refine my views in the company of a huge range of people from various backgrounds. And we all have access to masses of data to back up or challenge our ideas. Intellectual discussions and disputes that might once have taken years now take days or weeks — look at the collapse of Reinhart & Rogoff. Ideas, hypotheses, inventions and concepts can spread freely. One innovation shared can feed into ten or twenty new innovations. The internet has built a decentralised open-source platform for collaborative innovation and intellectual development like nothing the world has ever seen.

Of course, as the 2008 financial collapse and the more general Too Big To Fail problem show, greater interconnectivity isn’t always good news. Sometimes, greater interconnectivity allows for the transmission of the negative as well as the positive; in the case of 2008, the interconnective global financial system transmitted illiquidity in a default cascade.

But in this case, sharing ideas and information seems entirely beneficial both to the systemic state of human knowledge and innovation, and to individuals like myself who wish to hook into the human network.

So this is another great reason to be optimistic about the long run.


Does Shelf Stacking Beat Geology?

The British Work and Pensions Secretary Iain Duncan Smith reacted angrily to the victorious legal challenge made by an unemployed geology graduate who was forced to do unpaid work stacking shelves at Poundland, a British discount chain.


The BBC reports:

Miss Reilly, a University of Birmingham geology graduate, and 40-year-old unemployed HGV driver Jamie Wilson, from Nottingham, both succeeded in their claims that the unpaid schemes were legally flawed.

This was because the regulations behind the schemes did not comply with the Act of Parliament that gave the DWP the power to introduce the programme.

Miss Reilly said that in November 2011 she had to leave her voluntary work at a local museum and work unpaid at the Poundland store in Kings Heath, Birmingham, under a scheme known as the “sector-based work academy”.

“Those two weeks were a complete waste of my time, as the experience did not help me get a job,” she said, after the court ruling on 12 February.

“I was not given any training and I was left with no time to do my voluntary work or search for other jobs.

“The only beneficiary was Poundland, a multi-million pound company. Later I found out that I should never have been told the placement was compulsory.

“I don’t think I am above working in shops like Poundland. I now work part-time in a supermarket. It is just that I expect to get paid for working.”

Now, I don’t think that people should be paid for doing nothing, and I want to see a reduction in the welfare bill through employment growth as much as anyone else. But the idea that people with skills and qualifications should be forced into subsidised menial labour is absurd, and an absolute misallocation of capital and labour.

It is important to emphasise that this was not a paid job, because that has important economic implications. If this were a paid job, offered by the market, then there would be no reason for the unemployed person to refuse it. In a market economy, there will always be a degree of economic mismatch, and people who are trained in one thing may well have to take a job in another temporarily or even permanently. That is undisputed. But that is not the issue at stake here.

If the company in question cannot or will not pay a wage for a worker’s labour, then the position is unsustainable and untenable. Effectively, the government is engaging in subsidisation — providing labour free of cost to corporations to support otherwise unsustainable activities. So in this case the government is choosing to subsidise shelf-stacking over geology.

Iain Duncan Smith’s words actually make this very clear:

Shelf-stacking is more important than geology.

This is an outstandingly unwise decision, made by a government that has spent the last three years making profoundly unwise decisions that have led to a stagnation in growth worse than that of the Great Depression.

The state should not prioritise one sector over another. The state should certainly not subsidise work in one industry when an unemployed person has the skills and qualifications to work in another industry where there are vacancies. It is a waste of taxpayers’ money to place unemployed people in an irrelevant sector. In fact, the energy and mining industries are a key growth sector today in Britain and around the world, so the notion that someone trained in geology should be subsidised into stacking shelves is eye-poppingly absurd, and reminiscent of the kinds of grotesque capital misallocations seen in the Soviet Union and North Korea, where skilled workers and intellectuals were (and are) often forced to work in demeaning jobs.

The real point of these programmes appears to be to provide corporations with a source of free labour, and to engage in demeaning moral paternalism. As Iain Duncan Smith himself puts it:

I’m sorry, but there is a group of people out there who think they’re too good for this kind of stuff.

Duncan Smith seems keener to teach young unemployed people a moralising, paternalistic lesson than he is to pursue sound economic policies. In fact, that is very much the trajectory of this entire government and its self-defeating “age of austerity” project.

Who Should Be Giving Thanks This Thanksgiving?

Not the wider public.

Our financial system is broken. Our political system is broken. Oligarchs and their cronies reap easy rewards — bailouts, crony capitalism, corporate handouts, liquidity injections, favourable “regulation” (that puts oligarchs’ competition out of business) — while taxpayers pay the bill.

But no such thing lasts forever.

Thanksgiving is very much the day of the black swan. Nassim Taleb used a turkey fattened up for Thanksgiving as an example of a black swan phenomenon. The turkey sees itself being fed every day by the turkey farmer and assumes, based on past behaviour, that this will continue indefinitely, until the day comes when the farmer kills the turkey. Nothing in the turkey’s limited experiential dataset suggested such an event.

But Thanksgiving also commemorates the end of pre-Columbian America, a huge earth-shattering black swan for the people of the Americas. The day before the first European immigrants landed in North America, very little in the Native Americans’ dataset suggested what was to come.

In a globalised and hyper-connected world, drastic systemic change can occur faster than ever before.

All it takes is the first spark.

Another Planet

The losers in elections often take the loss badly. Just as some Gore supporters in 2000 shouted about moving to Canada, some Romney supporters have taken their candidate’s loss particularly badly.

All the Republican rage made me think about the origins of America. So much of the emigration out of Europe to America arose from political, religious and ethnic friction with the regimes of Europe (and later, the rest of the world). Many, many Americans are the descendants of Europeans who came to America to practise religion or politics the way they wanted to, and not the way that their nation, the Catholic church or a feudal lord wanted them to.

That same independent-mindedness and the hunger for self-governance was the force that gave the Founding Fathers the chutzpah to finally sever ties with the British Empire in 1776 and strike out on their own as an independent nation.

For those who want to strike out into the unknown in pursuit of self-governance, such options don’t exist anymore. There is no great sparsely inhabited continent spread out (except perhaps Antarctica, which is already claimed) for those who want to strike out on their own. Those of a libertarian temperament and with a hunger for self-governance used to come to America. But in the modern, globalised world, where can they go?

Where is the next America? Where is the next land that people seeking self-governance can emigrate to?

One prospective answer has been seasteading — moving out onto floating cities in international waters. Perhaps that will satisfy the desires of a few in the coming years, but not everyone wants to live at sea. It is another frontier, but there are many challenges to overcome. For one thing, governments have navies, and may lay claim to successful floating cities near their waters, seeking new tax revenues. Pirates may pose a similar challenge.

In the much longer term, the answer will almost certainly be leaving the planet. The only uncolonised great new continents left are the ones up in space, on other planets.  There is no more effective or complete way to depart. So it is rather poetic that in the past couple of days a new Earthlike planet in a star’s habitable zone has been discovered.

Via the BBC:

Astronomers have spotted another candidate for a potentially habitable planet — and it is not too far away.

The star HD 40307 was known to host three planets, all of them too near to support liquid water.

But research to appear in Astronomy and Astrophysics has found three more – among them a “super-Earth” seven times our planet’s mass, in the habitable zone where liquid water can exist.

Many more observations will be needed to confirm any other similarities.

But the find joins an ever-larger catalogue of more than 800 known exoplanets, and it seems only a matter of time before astronomers spot an “Earth 2.0” — a rocky planet with an atmosphere circling a Sun-like star in the habitable zone.

The hunger for self-governance led to the birth of America. It seems highly likely, in the very long run, that the hunger for self-governance will be the force that leads not only to local space colonisation (near-Earth asteroids, Mars, the asteroid belt, the moons of the gas giants) but ultimately to deep space colonisation. The private space industry today is already driven by libertarian-leaning individuals like Burt Rutan, Robert Zubrin and Peter Thiel.

Powerful central government drives nonconformists to find ways to escape it. If the only road to self-governance left is up into space, then that is the road that will be taken. In the end, fury over a lost election may be the thing that drives humanity to the stars.

Why Nate Silver is Wrong

Famed statistician and sabermetrician Nate Silver is calling the US Presidential race for Obama, in a big way.

Silver’s mathematical model gives Obama an 85% chance of winning. The Presidential election is based on an electoral college system, so Silver’s model rightly looks at state-level polls. And in swing state polls, Obama is mostly winning.

This is slightly jarring, because in national polls, the two candidates are locked together.

So who’s right? Is the election on a knife-edge, as the national polls suggest, or is Obama strongly likely to win, as Silver’s model suggests?

While the election could easily go either way depending on turnout, I think Silver’s model is predicting the wrong result. In order for that to be the case, the state polling data has to be wrong.

There are a number of factors that lead me to believe that this is the case.

First, Republicans tend to outperform their poll numbers. In 2008, the national average got the national race just about right:

In the end, Obama won the election with 52.9% of the vote, against McCain who came out with 45.7%.

However, polls have historically underestimated Republican support. Except in 2000 (when a November Surprise revelation of a George W. Bush drunk-driving charge pushed Gore 3.2% higher than the final round of polling), Republican Presidential candidates since 1992 have outperformed their final polls by a mean of 1.8 points. Such an outcome for Romney, who trails by around 0.3% in the national polling average, would put him roughly 1.5% ahead in the national vote, and imperil Obama’s grip on the swing states.

Second, there is the Bradley Effect. The interesting thing about the swing states is that many of them are disproportionately white. The United States is 72% white, but Iowa is 89% white, Indiana is 81% white, Ohio is 81% white, Minnesota is 83% white, Pennsylvania is 79% white, New Hampshire is 92% white, Maine is 94% white and Wisconsin is 83% white. This means that they are particularly susceptible to the Bradley Effect — where white voters tell a pollster they will vote for a black candidate, but in reality vote for a white alternative. In a state where Obama holds only a small lead in state-level polling, only a small Bradley Effect would be necessary to turn it red.
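To make the arithmetic concrete, here is a minimal sketch of how a modest effect among white voters translates into margin, assuming a simple two-candidate race; the 85% white share, 2-point polled lead and 1.5-point effect are hypothetical illustrative numbers, not estimates:

```python
# Toy arithmetic: how a small Bradley Effect could flip a narrow polled lead.
# All numbers below are hypothetical illustrations, not estimates.

def adjusted_margin(polled_margin, white_share, bradley_effect):
    """Polled two-candidate margin (in points) after assuming that
    `bradley_effect` points' worth of white voters who told pollsters
    they back the black candidate actually vote for the white candidate.
    Each switched voter moves the margin by two points."""
    return polled_margin - 2 * bradley_effect * white_share

# A swing state where Obama polls 2 points ahead and 85% of voters are white:
print(adjusted_margin(polled_margin=2.0, white_share=0.85, bradley_effect=1.5))
# prints -0.55: a 1.5-point effect among white voters flips the state.
```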

This effect may have already affected Barack Obama in the past — in the 2008 primaries, Obama was shown by the polls to be leading in New Hampshire, but in reality Hillary Clinton ran out the winner. And many national polls in October 2008 showed Obama with much bigger leads than he really achieved at the polls — Gallup showed Obama as 11% ahead, Pew showed Obama as 16% ahead.

A small Bradley Effect will not hurt Obama where he is 7% or 11% or 16% ahead in the polls. But when polls are closer — as they mostly are in the swing states — it becomes more plausible that such an effect could change the course of the race.

And the Bradley Effect in 2012 may be bigger than in 2008. A recent poll by the Associated Press concluded:

A majority of Americans (51 percent) now hold “explicit anti-black attitudes” — up from 49 percent in 2008 — and 56 percent showed prejudice on an implicit racism test.

Finally, polls have tended to overestimate the popularity of incumbent Presidents, especially Democrats. In 1980, polls put Jimmy Carter 3% ahead of his final tally, and in 1996 polls put Bill Clinton 2.8% ahead of his final tally.

Taken together, these difficult-to-quantify factors pose a serious challenge to Silver’s model. While it is fine to build a predictive model on polling data, if the polling data fed into the model is skewed, then any predictions will be skewed. Garbage in, garbage out.
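To illustrate the point, here is a minimal Monte Carlo sketch; it is not Silver’s actual model. The swing-state margins, the 237 “safe” electoral votes and the 3-point state polling error are illustrative assumptions, and the 1.8-point skew is the historical Republican outperformance figure cited above.

```python
import random

# Toy forecast built from state-level polls, with and without a uniform
# systematic polling skew. Swing-state margins and error sizes here are
# illustrative assumptions, not real polling data or Silver's model.
SWING_STATES = {  # state: (polled Obama margin in points, electoral votes)
    "OH": (2.0, 18), "VA": (1.0, 13), "FL": (0.5, 29), "CO": (1.5, 9),
    "IA": (2.5, 6), "NH": (2.0, 4), "WI": (3.0, 10),
}
SAFE_OBAMA_EV = 237   # electoral votes assumed safely Obama's (assumption)
POLL_ERROR = 3.0      # assumed independent per-state polling error, in points

def obama_win_probability(systematic_skew, trials=20000):
    """Share of simulated elections Obama wins if every state poll
    overstates his margin by `systematic_skew` points."""
    wins = 0
    for _ in range(trials):
        ev = SAFE_OBAMA_EV
        for margin, votes in SWING_STATES.values():
            true_margin = margin - systematic_skew + random.gauss(0, POLL_ERROR)
            if true_margin > 0:
                ev += votes
        wins += ev >= 270  # 270 electoral votes needed to win
    return wins / trials

print(obama_win_probability(0.0))  # unbiased polls: Obama a heavy favourite
print(obama_win_probability(1.8))  # 1.8-point skew: much closer to a coin flip
```

The point is not the particular numbers but the mechanism: a systematic skew of a couple of points, invisible within any single poll’s margin of error, is enough to collapse an 85% forecast toward a toss-up.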

I rate Obama’s chance of being re-elected as no better than 50:50. If Silver really rates his chances as 85:15, perhaps he should consider taking bets at those odds.

UPDATE:

Obviously, Silver’s predictive model (and, far more importantly, the state-level polling data) proved even more accurate than in 2008. However, the 2010 British General Election (in which the polls, and therefore Silver, vastly overestimated Liberal Democrat support, leading to an electoral projection that was way off the mark) illustrates that there remain enough issues with the reliability of polling data that Silver’s model (and similar models) continue to suffer from the problem of fat tails. With solid, transparent and plentiful data (as Taleb puts it, in “Mediocristan”) such models work very, very well. But there remains plenty of scope (as Britain in 2010 illustrates) for polls to be systematically wrong (“Extremistan”). Given the likelihood that every news network will have its own state-level poll aggregator and Nate Silver sound-alike on hand come 2016, that might well be a poetic date for the chaotic effects of unreliable polling data to reappear. In the meantime, I congratulate the pollsters for providing Silver with the data necessary to make accurate projections.

What is Profit?

In neoclassical economic models that assume perfect competition, there can in the long run be no such thing as profit — defined as revenue left over after all costs have been subtracted.
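For reference, a compact statement of the standard textbook condition behind that result: free entry bids the market price down to minimum average total cost, leaving zero economic profit.

```latex
% Long-run equilibrium under perfect competition: free entry drives the
% market price P to the minimum of average total cost ATC, so economic
% profit \pi vanishes for a firm producing the optimal quantity q*.
\[
  P = \min_{q} \mathrm{ATC}(q)
  \quad \Longrightarrow \quad
  \pi = \bigl(P - \mathrm{ATC}(q^{*})\bigr)\, q^{*} = 0
\]
```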

Clearly, in the real world, where many businesses have lived and died profitably, there is no such thing as perfect competition, and therefore the neoclassical models that treat profit as a short-run anomaly are working from an unrealistic assumption.

My definition is that profit is what happens when a business’s input transactions are priced lower than its output transactions. That is, the sum the business pays to those it buys its inputs from is less than the sum it receives from those it sells its goods and services to. Because transactions are assumed to be voluntary — and when they are not voluntary, any residual gain is theft, not profit — and businesses are assumed to try to negotiate the best price on both inputs and outputs, any profit is due to those who purchase the business’s output valuing the output more highly than those who sold the business its inputs. That an output or input transactor would accept a profit-creating price could be for any number of perceived reasons: convenience, expertise, prestige, necessity, or even outright trickery. Their decision to accept the price is subject to their own subjective valuation, and it is the difference between prices that creates the profit.
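In symbols, a minimal rendering of this definition, where the business makes output transactions at prices p_i and input transactions at prices c_j (the notation is mine, introduced for illustration):

```latex
% Profit as the gap between the sum of a firm's output transactions and
% the sum of its input transactions (with suppliers, labour, and so on).
\[
  \pi \;=\; \sum_{i \in \text{outputs}} p_{i} \;-\; \sum_{j \in \text{inputs}} c_{j}
\]
```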

Marx and Lenin represented this idea as surplus value: the claim that businesses make a profit by extracting uncompensated labour value out of their workers. But why single out labour rather than the other transactors? In my view, profit is derived from the sum of the business’s transactions with all of its transactors: consumers, suppliers, labour and so on. Workers cannot extract a greater share of the firm’s revenue than they can negotiate, and at various points in history (including the present day) the working class seems to have had little real leverage for negotiation.

In my view, any model that attempts to represent real world markets should begin from the historical fact of profit and loss, and the historical fact of a disequilibrium between input transactions and output transactions.

Incomprehensible Bullshit

The physicist Alan Sokal famously shamed much of the humanities profession by publishing “Transgressing the Boundaries: Towards a Transformative Hermeneutics of Quantum Gravity” — a paper intended as ambiguous gobbledegook — in the peer-reviewed postmodern cultural studies journal Social Text in 1996.

Sokal’s paper was a cleverly and artfully constructed piece of trolling. He did it by conforming to the stylistic trappings of postmodernists like Jacques Derrida, Jean Baudrillard and Luce Irigaray — mimicking their dense and obscure verbiage, misusing concepts from science like quantum gravity (of which there exists no widely accepted scientific theory), and shrouding his argument in a great deal of ambiguity. The piece described the notion of a “postmodern science”, one that discarded the notion of objective truth.

The fallout from this paper underlined the divide between science (which seeks objective truth), and postmodernism (which does not seek objective truth). But more than that, it exposed postmodernism and cultural studies as being ambiguous, self-absorbed, and incomprehensible, to the extent that its own journals were tricked into publishing an article intended as nonsense.

Yet this issue — of baffling with incomprehensible bullshit — is not just a problem in postmodernism. Mathematics recently had a Sokal moment (and frankly, it is surprising that it took this long). Via the LRB:

Last month That’s Mathematics! reported another landmark event in the history of academic publishing. A paper by Marcie Rathke of the University of Southern North Dakota at Hoople had been provisionally accepted for publication in Advances in Pure Mathematics. ‘Independent, Negative, Canonically Turing Arrows of Equations and Problems in Applied Formal PDE’ concludes:

Now unfortunately, we cannot assume that

It is difficult, as a non-specialist, to judge the weight of that ‘unfortunately’. Thankfully, the abstract is a model of concision:

Let ρ = A. Is it possible to extend isomorphisms? We show that D′ is stochastically orthogonal and trivially affine. In [10], the main result was the construction of p-Cardano, compactly Erdős, Weyl functions. This could shed important light on a conjecture of Conway–d’Alembert.

Baffled? You should be. Each of these sentences contains mathematical nouns linked by the verbs mathematicians use, but the sentences scarcely connect with each other. The paper was created using Mathgen, an online random maths paper generator. Mathgen has a set of rules that define how papers are arranged in sections and what kinds of sentence make up a section and how those sentences are made up from different categories of technical and non-technical words. It creates beautifully formatted papers with the conventional structure, complete with equations and citations but, alas, totally devoid of meaning.
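As a toy illustration of the kind of template-driven generation the LRB describes, here is a from-scratch sketch; the word lists and templates are invented for illustration and bear no relation to Mathgen’s actual grammar:

```python
import random

# Toy version of grammar-driven nonsense generation: fill sentence templates
# with randomly chosen jargon. Mathgen's real rules are far more elaborate.
ADJECTIVES = ["stochastically orthogonal", "trivially affine",
              "compactly Erdős", "canonically Turing"]
NOUNS = ["isomorphism", "functor", "subgroup", "PDE", "manifold"]
NAMES = ["Conway", "d'Alembert", "Weyl", "Cardano"]
TEMPLATES = [
    "We show that every {adj} {noun} is {adj2}.",
    "This could shed important light on a conjecture of {name}-{name2}.",
    "Is it possible to extend {adj} {noun}s? This question is open.",
]

def sentence():
    """Fill a random template with random jargon; each sentence is locally
    grammatical but the sequence carries no mathematical meaning."""
    return random.choice(TEMPLATES).format(
        adj=random.choice(ADJECTIVES), adj2=random.choice(ADJECTIVES),
        noun=random.choice(NOUNS),
        name=random.choice(NAMES), name2=random.choice(NAMES))

print(" ".join(sentence() for _ in range(3)))
```

Even this crude generator produces sentences that look like mathematics while meaning nothing, which is precisely the failure mode that fooled the journal’s reviewers.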

So mathematicians and mathematics journals are also susceptible to being trolled by their own bullshit, their own conventions, syntax and “rigour”. If a mathematics journal and the peer-review process can be fooled by a meaningless paper spat out by a computer program, how much well-intentioned but bad or meaningless mathematics has also slipped through the peer review process?

And what about the other subjects that have adopted mathematical symbols as their lexicon, like economics?

I have written at length about some of the problems connected to the great increase in the use of mathematical terminology in economics — and remain highly sceptical of the use of assumptive models in economics. The social sciences are particularly unsuited to simplified mathematical modelling — unlike the physical sciences, the phenomena they seek to explain tend to be far less linear in observable causation, and so far more susceptible to wildness. No model or theory less than reality itself can fully represent human behaviour and human action; each transaction in an economy is unique, and arises from a different set of circumstances, representing a constantly varying order of human preferences. This tendency toward nonlinear causality is why transparency is critical to bullshit detection in the social sciences. Just as a sheen of ambiguous, obscure and poorly defined English can make theories incomprehensible and closed off from scrutiny and understanding, so too can a sheen of obscure and specialised mathematics.

Well-written work — whether in plain language or mathematics — requires comprehensible explanations and definitions, so that a non-specialist with a moderate interest in the subject can quickly and easily grasp the gist of the concepts, the theory, the reasoning, and the predictions. Researchers can use methods as complex as they like — but if they cannot explain them clearly in plain language then there is a transparency problem. Without transparency, academia — whether cultural studies, or mathematics, or economics — has sometimes produced self-serving ambiguous sludge. Bad models and theories produce bad predictions that can inform bad policy and bad investment decisions. It is crucial that ideas are expressed comprehensibly, and that theories and the thought processes behind them are not hidden behind opaque or poorly defined words or mathematics.