Will Robots Drive Us To Socialism?

I read recently that Rio Tinto has already replaced 30% of their mining machinery operators and drivers with robots.

The future of work in an age of increasing automation is a topic that a lot of economic thinkers have considered. Frances Coppola ponders the question of how the future may look in a world in which automation is driving people out of the labour market:

Automation only happens when machines are cheaper to run than people, and it is probably fair to say that in the last few decades automation has not happened quite as fast as one might have anticipated because companies have discovered that labour in emerging markets is cheaper than the cost of investing in machinery. But as the standard of living rises in emerging markets, and the cost of technology falls, that will not remain the case. Hazlitt, writing in 1952, pointed out that it was automation of production that enabled families to survive without children’s labour, because the price of goods produced with the new machinery was so much lower than those produced in a more labour-intensive way. In the short term automation caused hardship, as people whose livelihoods depended on the old way of doing things lost their jobs: but in the longer term there was benefit to society in the reduced cost of goods that enabled many people to work less, and in the development of new industries to employ those people no longer needed in the old ones. The change we are seeing today is every bit as great, and the short-term consequences are the same – high unemployment, particularly among those with poor or irrelevant skills.

Automation should both require fewer people to work AND enable people to work less, since the whole point of automation is to reduce the cost of production, which in a competitive system would result in falling prices. Unfortunately this isn’t always the case: the owners of automated industry may use reduced production cost as an opportunity to take more profit, and they may use political influence to create barriers to entry and trade tariffs to prevent competition driving down prices. But assuming that governments don’t use subsidies and protections to keep inefficient companies alive and prices artificially high, where does that leave us in terms of employment and incomes in the future?
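
To make that contrast concrete, here is a toy calculation (all numbers invented) of how an automation-driven cost reduction passes through to prices under competition, versus behind protective barriers:

```python
# Toy illustration (invented numbers): how an automation-driven cost
# reduction flows through to prices under different market structures.

unit_cost_manual = 10.00     # hypothetical cost per unit with human labour
unit_cost_automated = 4.00   # hypothetical cost per unit with robots

margin = 0.05                # a thin "normal profit" margin

# In a competitive market, rivals bid the price down toward unit cost.
competitive_price = unit_cost_automated * (1 + margin)

# A producer shielded by tariffs or barriers to entry can hold the
# price near the old level and pocket the saving as extra profit.
protected_price = unit_cost_manual * (1 + margin)

print(f"Competitive price: ${competitive_price:.2f}")   # $4.20
print(f"Protected price:   ${protected_price:.2f}")     # $10.50
print(f"Extra profit captured per unit: "
      f"${protected_price - competitive_price:.2f}")    # $6.30
```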

Where it leaves us is with increasing inequality and rising unemployment, both of which we have already seen in recent years. In a world where a minority (historically known as the capitalists) owns the physical means of production, like factories, robots and patents, inequality will grow further as labour becomes less and less important as an economic factor. The owners of capital will be able to satisfy market demand with little labour input, so more former labourers will leave that field and either become entrepreneurs themselves, or become dependent on welfare. I think this transformational phenomenon is already well under way in the West — and I only expect it to become more pronounced globally as automation becomes ubiquitous.

In theory, physical labour may become totally obsolete. If every house has a decentralised energy source like solar panels with reliable energy storage, as well as an advanced 3-D printer or molecular assembler that can produce almost any physical object imaginable from a few basic recyclable chemicals, then human poverty will essentially have been abolished. We could spend the vast majority of our time doing things that we enjoy, while spending only a few minutes (or at most hours) a day programming our machines to fulfil our material desires.

That is the more optimistic vision. In a less optimistic one, only a small minority of people will have access to such technologies, because while the technology may exist, the costs of mass distribution will remain too high (at least for a time). The vast masses will be stuck in impoverished material conditions — dependent on welfare and charity — without any real prospect of climbing the ladder by selling their labour. Only a lucky few — those with an inimitably good idea, or a creative skill that cannot be replicated by a robot — will have any prospect of joining the capital-owning upper class. And for the others left out in the cold, political action may look attractive: simply have the government take a larger chunk of the capital-owning class’s income or wealth, and redistribute it to the poor. Ideally, this would be done with the intent of abolishing poverty by making cheap electricity, internet access and molecular assemblers available to all. Less ideally, rather than giving the poor the means to fish (so to speak), it might instead take the shape of a giant welfare net, keeping the means of production in limited hands and simply confiscating and redistributing some wealth. Left unresolved, these issues could create a lot of tension between the two classes. In a worst-case scenario, that could lead to social breakdown or even revolution.

Fortunately, I think that this can be avoided through voluntary charity. The billionaire class today is split between those on one hand determined to give it all away with the intent of improving the material conditions of the very poor — Bill Gates, Warren Buffett — and those on the other determined to create new futuristic technologies and systems — Elon Musk — that can improve the material conditions of the masses. As we proceed through the 21st Century, and as the technologies of superabundance — solar energy, nuclear energy, wind energy, cybernetics, genomics, the internet, 3-D printing, molecular manufacturing, desalination, and so on — create more and more wealth and more and more billionaires, this trend may accelerate. Simply put, the wealthy may come to hold so much wealth that eliminating material poverty through voluntary charity becomes, in the long run, an obvious and inevitable move.

As Bryan Caplan notes:

At first glance, I admit, a vision of a superabundant world where people who own only their labor eke out a meager existence seems frightening. But put your fears aside. In an ultra-productive world, a relatively tiny amount of non-labor resources would make you rich by current standards. Labor + zero non-labor assets = poverty; labor + token non-labor assets = riches. In any case, a slight charitable impulse in the better-off is all you need to ensure fabulous riches for every human on earth.

Once you’ve got a world this wonderful, the last thing you’d want to do is start down a potentially slippery slope with a high tech Russian Civil War at the bottom. Indeed, a more sensible reaction would be to abolish the welfare state as obsolete. If half of us were billionaires, mopping up any residual human poverty with voluntary charity would be child’s play.

Ironically, this kind of world could be strangely like the decentralised and classless society that Marx originally envisaged. The route by which we appear to be travelling toward it, though, is completely different from the one Marx foresaw. Instead of violent revolution, the road to superabundance may be paved by technological progress made by the capital-owners.

Humanities Scholars Baffled By Math

Via the Wall Street Journal:

In the latest study, Kimmo Eriksson, a mathematician and researcher of social psychology at Sweden’s Mälardalen University, chose two abstracts from papers published in research journals, one in evolutionary anthropology and one in sociology. He gave them to 200 people to rate for quality—with one twist. At random, one of the two abstracts received an additional sentence, the one above with the math equation, which he pulled from an unrelated paper in psychology. The study’s 200 participants all had master’s or doctoral degrees. Those with degrees in math, science or technology rated the abstract with the tacked-on sentence as slightly lower-quality than the other. But participants with degrees in humanities, social science or other fields preferred the one with the bogus math, with some rating it much more highly on a scale of 0 to 100.

Specifically, 62% of humanities and social science scholars preferred the paper with the irrelevant equation, compared with 46% from a background of mathematics, science and technology.

This is a striking result, and I hope the experiment is repeated and replicated. It is all well and good for humanities and social science scholars to mostly eschew the use of mathematics in their work. But if humanities scholars begin to take work more seriously simply because it includes (faux-)mathematics they do not themselves understand, then maybe it is time for humanities and social science scholars to increase their mathematical and statistical literacy, so as not to be so easily tricked by faux-mathematical rigour.
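
For what it is worth, the reported percentages can be given a rough significance check. The article does not state the per-group sample sizes, so the even 100/100 split below is purely an assumption for illustration:

```python
# Rough two-proportion z-test on the reported figures (62% vs 46%).
# The per-group sample sizes are assumed, not reported.
from math import sqrt, erfc

p1, n1 = 0.62, 100   # humanities / social science (assumed n)
p2, n2 = 0.46, 100   # maths / science / technology (assumed n)

pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se
p_value = erfc(abs(z) / sqrt(2))   # two-sided p-value

print(f"z = {z:.2f}, two-sided p = {p_value:.3f}")
# With this assumed split: z is about 2.27 and p is about 0.02,
# suggestive, but replication (as argued above) is the real test.
```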

And this isn’t just a case of not understanding the equation — it seems that a nontrivial chunk of humanities and social science scholars has quite an inferiority complex. That should be a great embarrassment; there is nothing inherently inferior about the study of the human condition, or its (mostly non-mathematical) tools.

Last year, I wrote:

Well-written work — whether in plain language or mathematics — requires comprehensible explanations and definitions, so that a non-specialist with a moderate interest in the subject can quickly and easily grasp the gist of the concepts, the theory, the reasoning, and the predictions. Researchers can use methods as complex as they like — but if they cannot explain them clearly in plain language then there is a transparency problem. Without transparency, academia — whether cultural studies, or mathematics, or economics — has sometimes produced self-serving ambiguous sludge. Bad models and theories produce bad predictions that can inform bad policy and bad investment decisions. It is crucial that ideas are expressed in a comprehensible way, and that theories and the thought-processes behind them are not hidden behind opaque or poorly-defined words or mathematics.

But in this case, I think the only real solution is mathematical and scientific literacy.

On the other hand, prestigious mathematics journals have also recently been conned into publishing papers of (literally) incomprehensible gibberish, so it is not like only humanities and social science scholars have the capacity to be baffled by bullshit.

Free Market Ecology

These gargantuan global conferences where the emissaries of governments meet in hallowed halls to thrash out a global planning agenda — dressed in the clothes of ecology, or sustainable development, or whatever the buzzword of the day — are a waste of time.

They are a waste of time for the taxpayer, who has to stump up to pay for such efforts. They are a waste of time for the protestors who swarm to such events holding placards and shouting slogans. They are a waste of time for the ecologists who — whether right or wrong — believe that the present shape of human civilisation is unsustainable. Possibly the only group that really benefits are the self-perpetuating bureaucratic classes, who often take home huge salaries they could never earn in the private sector.

And the bureaucracy’s Malthusian targets have a history of being missed.

The Guardian notes:

Rio+20 was intended as a follow up on the 1992 Earth Summit, which put in place landmark conventions on climate change and biodiversity, as well as commitments on poverty eradication and social justice. Since then, however, global emissions have risen by 48%, 300m hectares of forest have been cleared and the population has increased by 1.6bn people. Despite a reduction in poverty, one in six people are malnourished.

If these bureaucratic classes knew the first thing about economics or markets, they would begin to question whether such conferences — and all the promises, intergovernmental commissions, and regulatory pledges they spawn — are necessary. The more I question, the more I come to believe that all that is needed to halt any man-made ecological crises are free markets and free speech.

The history of human civilisation has been one of triumph over the limits of nature. While we have had our ups and downs, recent projections of imminent ecological ruin — such as those produced in the 1970s by Ehrlich and Holdren and the Club of Rome, or earlier by Keynes, Malthus and Galton — have all failed to materialise. But the trend goes back much further, into the distant past. Throughout our history our species has done what has been necessary to survive. Humanity has lived on this planet for upwards of 500,000 years, and through that time we have survived a myriad of climate changes — solar variation, atmospheric variation, cycles of glaciation, supervolcanoes, gamma ray bursts, and a host of other phenomena.

It will be no different this time. We are dependent on our environment for our life and for our future. That is widespread knowledge, and so as the capable and creative species that we are, we have already developed a wide array of technological solutions to potential future environmental problems. This is a natural impulse; humanity as individuals and as a species hungers for survival, for opportunities to pass on our genes.

As I wrote last month:

If we are emitting excessive quantities of CO2 we don’t have to resort to authoritarian centralist solutions. It’s far easier to develop and market technologies (that already exist today) like carbon scrubbing trees that can literally strip CO2 out of the air than it is to try and develop and enforce top-down controlling rules and regulations on individual carbon output. Or (even more simply), plant lots of trees and other such foliage (e.g. algae).

If the dangers of non-biodegradable plastic threaten our oceans, then develop and market processes (that already exist today) to clean up these plastics.

Worried about resource depletion? Asteroid mining can give us access to thousands of tonnes of metals, water, and even hydrocarbons (methane, etc). For more bountiful energy, synthetic oil technology exists today. And of course, more capturable solar energy hits the Earth in sunlight in a single day than we use in a year.
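
As a sanity check on that last claim, here is a back-of-envelope calculation, using standard textbook values for the solar constant and Earth's radius, and a round assumed figure for world annual primary energy use:

```python
# Does more solar energy reach Earth in a day than humanity uses in
# a year? Constants are standard textbook values; world energy use
# is a rough assumed figure (~600 exajoules per year).
from math import pi

solar_constant = 1361.0      # W/m^2 at the top of the atmosphere
earth_radius = 6.371e6       # m
seconds_per_day = 86400

# Sunlight is intercepted over Earth's cross-sectional disc.
intercepted_power = solar_constant * pi * earth_radius ** 2   # watts
energy_per_day = intercepted_power * seconds_per_day          # joules

world_annual_use = 6e20      # J, assumed round figure

print(f"Sunlight per day:   {energy_per_day:.2e} J")
print(f"Human use per year: {world_annual_use:.2e} J")
print(f"Ratio: {energy_per_day / world_annual_use:.0f}x")
# Roughly 25x: even before atmospheric losses, one day of sunlight
# dwarfs a year of human energy consumption.
```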

The only reason why these technologies are not widespread is that at present the older technologies are more economically viable. Is that market failure? Are markets failing to reflect our real needs and wants?

No; those who so quickly cry “market failure!” fail to grasp markets. Certainly, I think GDP is a bad measure of economic growth. But throwing out the concept of money altogether as a measure of society’s needs and wants is completely foolish. Markets are merely an aggregation of society’s preferences. Capital and labour are allocated as the market — in other words, as society — sees fit. As Hayek showed in the 1930s, the market gives society the ability to decide how a good or service should be distributed, based on individuals’ willingness to give money for it. Through the price mechanism, the market gives feedback to producers and consumers about the allocation of resources and capital, which in turn allows corrections, made on the basis of individual consensual decisions, that prevent shortages and surpluses. Under a planned system there is no such mechanism.
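
A minimal sketch of that feedback loop, with invented linear demand and supply curves, shows the price groping toward the level that clears the market:

```python
# Minimal sketch of the price mechanism: excess demand pushes the
# price up, excess supply pushes it down. Demand and supply curves
# are invented for illustration.

def demand(price):
    return max(0.0, 100 - 2 * price)   # buyers want less as price rises

def supply(price):
    return max(0.0, 10 + 4 * price)    # producers offer more as price rises

price = 1.0
for _ in range(500):
    excess = demand(price) - supply(price)
    if abs(excess) < 0.01:
        break
    price += 0.05 * excess             # feedback: a shortage raises the price

print(f"Clearing price ~ {price:.2f}, quantity ~ {demand(price):.1f}")
# Analytically: 100 - 2p = 10 + 4p, so p = 15 and q = 70. No planner
# computed this; it emerges from the feedback rule alone.
```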

The fact that greener technologies have not yet been widely adopted by the market is merely a symptom of the fact that society itself is not yet ready to make a widespread transition. But the fact that research and development and investment continues to pour into green technologies shows that the market is developing toward such an end.

Solar energy consumption has gone parabolic in recent years.

And so it will continue; as society evolves and progresses, the free market — so long as there is a free market — will naturally reallocate resources and labour based on society’s preferences. Without a free market — and since 2008 when the banks were bailed out and markets became junkiefied intervention-loving zombies, it is highly dubious that there is such a thing as a free market in the West — planners will just end up guessing at how to allocate resources, labour and capital, and producing monstrous misallocations of capital.

The political nature of such reallocation is irrelevant; whether the centralists call themselves communists or socialists or environmentalists, their modus operandi is always the same: ignore society’s true economic preferences, and reallocate resources based on their own ideological imperatives (often for their own enrichment).

My view is that the greatest threat to the planet’s ecology comes from the centralists who wish to remove or pervert the market mechanism in order to achieve ideological goals. It is not just that removing the market mechanism retards society’s ability to evolve into new forms of production, resource-allocation and capital-allocation based on society’s true preferences. The command economies of the 20th Century — particularly Maoist China and Soviet Russia — produced far more pollution than the free markets did. Under a free market, polluters who damage citizens or their property can be held to account in the marketplace, and through the court system. There is no such mechanism under the kind of command economy that the centralists seem to wish to implement.

The answer is not central planning and government control. The answer is the free market. 

The Pseudoscience of Economics

Modern economics is obsessed with modelling. An overwhelming majority of academic papers on the subject work like so: they take data, and use it to construct formal mathematical models of economic processes. A model mostly describes a situation, and how that situation would be changed by a given set of events; a very simple example is that as the supply of a good diminishes, its price will increase. Another is that deficit spending increases the national income. A mathematical model is a predictive tool created to demonstrate the outcome of events in a massively simplified alternate universe.
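
The second claim, for instance, is usually formalised as the Keynesian spending multiplier. A minimal sketch (illustrative numbers, not estimates) shows how spare such a universe is:

```python
# The textbook Keynesian spending multiplier: dY = dG / (1 - MPC),
# where MPC is the marginal propensity to consume. Numbers below are
# illustrative only.

def income_change(dG, mpc):
    """Predicted change in national income for extra spending dG."""
    assert 0 <= mpc < 1, "model only defined for 0 <= MPC < 1"
    return dG / (1 - mpc)

dG = 100.0   # hypothetical extra deficit spending
for mpc in (0.5, 0.8, 0.9):
    print(f"MPC = {mpc}: dY = {income_change(dG, mpc):.0f}")
# Expectations, imports, crowding out and everything else are simply
# assumed away; that is exactly the simplification at issue.
```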

As someone who rather enjoys voyages of the imagination, I find the use of mathematical models in economics intriguing. The pretension that through formal mathematical techniques and processes we can not only accurately understand, but accurately predict the result of changes in the economy, is highly seductive. After all, we can accurately predict the future, right?

Wrong. The wonderful and terrible and confounding thing about our world is that it is a deeply unpredictable place, at least in the economic sphere where each number (for instance “aggregate demand” or “aggregate supply”) in an equation may loosely refer to millions of huge, complex and dynamic events. When you’re using huge simplifications to describe reality, those simplifications may miss the important details, and your projections may go askew.

Not all modelling is equal. Newton’s model of gravitation (since superseded by Einstein’s relativity) makes relatively accurate predictions about how gravitation works, and about what will happen to an object dropped 500 metres above the Earth. NASA used Newton’s equations to fly to the Moon. Of course, even in physics there are occasional divergences and oddities (which is why there are quite often unrepeatable or anomalous experimental results, for instance the recent experiment that seemed to show neutrinos travelling faster than the speed of light). In this respect economics — with its fixation on creating models of situations, and using those models to attempt to predict the future — mimics physics, chemistry and biology, where data are collected and used to produce theories of physical processes that allow a modestly accurate representation of the future.
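
For the dropped-object example, the Newtonian prediction takes a few lines (ignoring air resistance, itself a simplification):

```python
# Newtonian kinematics for an object dropped from 500 metres,
# neglecting air resistance.
from math import sqrt

g = 9.81       # m/s^2, standard gravity
h = 500.0      # m, drop height

t = sqrt(2 * h / g)   # fall time, from h = (1/2) g t^2
v = g * t             # impact speed

print(f"Fall time:    {t:.1f} s")     # ~10.1 s
print(f"Impact speed: {v:.1f} m/s")   # ~99 m/s
# An imperfect model (no drag, no relativity) that still predicts the
# outcome closely; economic models rarely attain this accuracy.
```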

The key qualitative difference, though, is that mathematical economic theories don’t accurately predict the future. Ben Bernanke — the chairman of the Federal Reserve, and one of the most-cited academic economists in the world — told the world that subprime housing was contained. That is the economic equivalent of Stephen Hawking telling the world that a meteorite is going to miss the Earth when it is really going to hit. Physicists can very accurately model the trajectories of rocks in space. But economists cannot accurately model the trajectories of prices, employment and interest rates down on the rocky ground.

The thing that I believe modern economists are most useful for is pointing out the glaring flaws in everyone else’s theories. Steve Keen has made a public name for himself by publishing a book entitled Debunking Economics, in which he explains the various glaring flaws in modern economic modelling (DSGE, New Classical, and so on).

Economics is a complex and multi-faceted subject. Economists must be, in some measure, philosophers, historians, linguists, mathematicians, statisticians, political scientists, sociologists, psychologists, and many other things. The trouble is that at some stage in the last century the multi-faceted, multi-dimensional economics (like that of Xenophon) was hijacked by mathematicians who tried to turn this huge and delicate subject into an equation. Yet economics — and economic decisions, from the macro to the micro level — is a human subject. It is subtle and psychological and sporadic. A human subject requires human language, human emotion, human intuition.

The grand theoretical-mathematical approach to economics is fundamentally flawed. Trying to smudge the human reality of economics and politics into cold mathematical shackles is degenerative.

So what to do if you want to understand the economy?

Follow the data, consider the history (similarities and differences between the past and the present) and explain your conclusions simply, as you would to a child. Consider philosophical definitions: what is money? What is demand? What is supply? What is value? How does demand affect supply? What are the global patterns of trade? Why have they emerged this way and not an alternative way? Consider possibilities. Admit the limitations of your knowledge and explore the boundaries. Stop forcing the construction of absolutes, grand frameworks, grand theories. No theory will ever be robust to everything nature will throw at it, but simple microeconomic heuristics (opportunity cost, cost-benefit analysis) combined with data-focussed historical analysis may be more robust than cold, dead mathematics.

As Heraclitus noted:

No man ever steps in the same river twice

No two situations are identical. And in this universe even tiny differences can have huge effects on the outcome of a situation. This is the butterfly effect, a term coined by Edward Lorenz, and derived from the theoretical example of a hurricane’s formation being contingent on whether or not a distant butterfly had flapped its wings several weeks before.
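
Lorenz's own equations make the point vividly. In the sketch below (a crude Euler integration, good enough for illustration), two simulations that start one part in a million apart soon bear no resemblance to one another:

```python
# The Lorenz system, integrated twice from initial states differing
# by one part in a million; the trajectories diverge completely.

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a = (1.0, 1.0, 1.0)
b = (1.000001, 1.0, 1.0)   # the butterfly's wing-flap

for i in range(3001):
    if i % 1000 == 0:
        print(f"t = {i * 0.01:4.0f}   x_a = {a[0]:8.3f}   x_b = {b[0]:8.3f}")
    a, b = lorenz_step(a), lorenz_step(b)
# By t ~ 30 the two runs are unrecognisably different, despite being
# identical to six decimal places at the start.
```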

The pseudo-scientific school of mathematical economics hungers and craves for a perfect world, where each river is the same, where there is no butterfly effect, where human preferences are expressed in equation form, where there is no subtlety or ambiguity or uncertainty.

It is a dreamworld constructed by and for people with Asperger’s Syndrome.

Precrime in America

The U.S. Department of Homeland Security is working on a project called FAST, the Future Attribute Screening Technology. FAST will remotely monitor physiological and behavioural signals like elevated heart rate, eye movement, body temperature, facial patterns and body language, and analyse these signals algorithmically for statistical aberrance, in an attempt to identify people with criminal or terroristic intentions.

It’s useful to briefly talk about a few of the practical problems that such a system would face.

Firstly, there is the question of accuracy in remote monitoring. Is it possible to engineer a system that can remotely tell you the heart rate of a hundred passengers passing through a TSA checkpoint? Yes. Is it possible to do so accurately? That is much, much harder. The obvious conclusion is that such a system, were it to be deployed in the wilds of airports (and presumably other locations where our ever-benevolent technocratic overlords determine “terrorists” or “criminals” may be operating), would — given a large enough number of scans — produce a lot of false positives stemming from erroneous data.
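
And even with perfectly accurate sensors, the arithmetic of rare events would bury the system in false positives. A minimal Bayes' rule sketch, with assumed (and generous) accuracy and prevalence figures:

```python
# Base-rate arithmetic for dragnet screening. All figures assumed:
# a screen that flags 99% of real attackers, wrongly flags only 1%
# of innocent travellers, facing 1 attacker per million passengers.
sensitivity = 0.99          # P(flagged | attacker), assumed
false_positive_rate = 0.01  # P(flagged | innocent), assumed
prevalence = 1e-6           # P(attacker), assumed

p_flagged = (sensitivity * prevalence
             + false_positive_rate * (1 - prevalence))
p_attacker_given_flag = sensitivity * prevalence / p_flagged

print(f"P(attacker | flagged) = {p_attacker_given_flag:.6f}")
# About 0.0001: roughly ten thousand innocent travellers flagged for
# every real attacker, even with implausibly good accuracy.
```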

But let’s assume that such a system can be calibrated to produce a relatively accurate data set. Now we are faced with the problem of defining “suspicious” behaviour. Surely a passenger with the flu or a cold — who might have an elevated body temperature and a faster heart rate — would set alarm bells ringing. So too would someone suffering from pre-flight anxiety, people taking certain medications, the elderly, and so on. Given that TSA screening protocols have prevented precisely zero terrorist attacks on American soil since 9/11 (in spite of the fact that 630 million passengers fly each year), this merely suggests that vulnerable people will end up getting hassled by the TSA to an even greater extent than they already are today. This is no laughing matter — a nervous but otherwise perfectly innocent passenger might end up getting tasered and die, something which of course has happened multiple times already. Or — under the NDAA (2011) — false positives might end up being indefinitely detained on totally erroneous grounds.

Of course, the next problem is distinguishing the guilty from the innocent. Simply, this system would seem to produce nothing other than circumstantial evidence. Given that no crime would have yet been committed, how would it be possible to prove nefarious intent? Perhaps one day a terrorist or drug smuggler (got to keep fighting the war on drugs…) will be foolish enough to try to carry a gun or a knife through a TSA checkpoint and onto an aeroplane, but given that a metal detector could have detected that anyway, what is the point of this new technology? Surely it is to pinpoint potential terrorists who would otherwise not be picked out by the body scanners? In that case, would the end result just be that people — with no real evidence against them other than a fast heart rate and some perspiration — end up being thrown off their flight? Would people who are subject to a false positive and as a result miss a flight try to sue the TSA for wasting their time and money?

Next, just as a committed and composed liar can fool a polygraph, surely terrorists and drug smugglers out in the wild would adapt their behaviour to avoid detection. There are of course prescription drugs that can be taken to reduce the physiological symptoms of anxiety, and thus fool the detector.

Then there are the problems in testing. Subjects in the laboratory trials (taxpayer-funded, of course) have been told to go through the system with the intent to cause a disruptive act; the system has been fine-tuned to detect subjects in a controlled laboratory environment. Simply, there is no data on the effectiveness of this system against terrorists in the wild. The wild is a totally different environment, and the mindset and physiological cues of a real terrorist may well be entirely different to those of a laboratory subject who is pretending (we just don’t know until we try it on a large enough sample of real terrorists). The notion that it can catch terrorists seems wholly pseudo-scientific, based on the false premise that terrorism has an identifiable set of physiological cues. The entire operation rests on the (possibly flawed) premise that a terrorist will be nervous, and that we should therefore cast an extremely wide dragnet to further interrogate and intimidate nervous people. That is guesswork, not science.

As Alexander Furnas, writing in The Atlantic, states:

We should ask, in a world where we already pass through full-body scanners, take off our shoes, belts, coats and only carry 3.5 oz containers of liquid, is more stringent screening really what we need and will it make us any safer? Or will it merely brand hundreds of innocent people as potential terrorists and provide the justification of pseudo-scientific algorithmic behavioral screening to greater invasions of their privacy?

It is ridiculous — and totally contrary to the Fourth Amendment — that the courts have rubber-stamped the notion that air travellers can be subjected to invasive pat-downs and body scans without probable cause. But they did. In U.S. v. Davis, 482 F.2d 893, 908, the judge ruled that “airport screenings are considered to be administrative searches because they are conducted as part of a general regulatory scheme, where the essential administrative purpose is to prevent the carrying of weapons or explosives aboard aircraft” and that “an administrative search is allowed if no more intrusive or intensive than necessary, in light of current technology, to detect weapons or explosives, confined in good faith to that purpose, and passengers may avoid the search by electing not to fly.”

But to effectively conduct a medical scan on passengers? Surely this goes well beyond being “no more intrusive or intensive than necessary”? How many successful terrorist attacks occurred after 9/11, even before the more invasive pat-downs and body scans were brought in? None. So why would deepening the security regime be necessary?

And now that the TSA has expanded its regime beyond airports and out onto the roads of America, we must ask ourselves: what is the endgame of all of this? Could it be to deploy these technologies on a widespread basis throughout American cities, malls and sports stadiums, using them to scout out potential troublemakers? Would that be deemed an “administrative search” too (and thus not subject to the Fourth Amendment)?

This logic — of giving incontrovertible and unchallengeable power to our benevolent administrative overlords and then hoping for the best — takes us to a dark and nasty place. It requires us to assume they have our best interests at heart, and it requires us to assume that they will not abuse their power. The power to monitor these kinds of cues is a power that could easily be abused. A corrupt TSA agent might call a person they find attractive — even a child — out of the queue for a secondary search so that he or she can molest them with an enhanced pat-down. These new tools just enhance that power, providing a cloak of pseudo-scientific justification to the reality of citizens bowing down at the feet of their government and kissing the ring of power. Unquestioning obedience to power is a recipe for social catastrophe.

As Jefferson put it:

When governments fear the people, there is liberty. When the people fear the government, there is tyranny.

Would I be picked out of the queue at the airport? Sure. I already am for my Arabic name. But I am nervous. And the things that make me nervous? Encroaching Orwellianism. The potential for the abuse of power. The potential for tyranny. The demand of unquestioning obedience. The money spent and debt accrued to develop these technologies. The fact that our governments are obsessed with terrorism to the extent that they will put tighter and tighter controls in place at airports, even though more people are crushed to death by furniture or televisions every year than are killed in terrorist attacks, while ignoring real threats to our society like excessive systemic risk in the global financial system.

That all scares the shit out of me.

Genius is Not Educated

The WSJ published an interesting article entitled Educating the Next Steve Jobs:

Though few young people will become brilliant innovators like Steve Jobs, most can be taught the skills needed to become more innovative in whatever they do. A handful of high schools, colleges and graduate schools are teaching young people these skills.

In most high-school and college classes, failure is penalized. But without trial and error, there is no innovation. Amanda Alonzo, a 32-year-old teacher at Lynbrook High School in San Jose, Calif., who has mentored two Intel Science Prize finalists and 10 semifinalists in the last two years—more than any other public school science teacher in the U.S.—told me, “One of the most important things I have to teach my students is that when you fail, you are learning.” Students gain lasting self-confidence not by being protected from failure but by learning that they can survive it.

It’s nice to read about the value of failure, a topic that I have written a few words about.

But really, I don’t think that revolutionary thinking can be educated, and I think it’s foolish (and possibly even counter-productive) to try. School by definition inculcates systematic thinking, methodology and dogma. It inculcates competence. That’s generally a good thing; surgeons, medical researchers, lawyers, engineers, musicians and all manner of professionals need to be competent to function, and innovation is not necessarily inherent in any of those fields. But genius and revolutionary thinking are not really about competence and confidence.

Malcolm Gladwell is famous for formulating the idea that with 10,000 hours of practice, it is possible to master a skill.

The key to success in any field is, to a large extent, a matter of practicing a specific task for a total of around 10,000 hours.

So is 10,000 hours of practice all that stands between incompetence and world-changing greatness?
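
For scale, the arithmetic is simple (the daily schedules are illustrative):

```python
# How long 10,000 hours takes at various daily practice loads.
TARGET_HOURS = 10_000

for hours_per_day in (1, 3, 8):
    years = TARGET_HOURS / (hours_per_day * 365)
    print(f"{hours_per_day} h/day -> {years:.1f} years")
# 1 h/day -> 27.4 years; 3 h/day -> 9.1 years; 8 h/day -> 3.4 years.
```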

Gladwell grandly theorises that many famous history-changers (“outliers”) like Bill Gates, Steve Jobs and the Beatles got to where they did with 10,000 hours of practice. But that ignores a lot of silent evidence; for every Bill Gates programming on a mainframe for 10,000 hours, there is a housewife we have never heard of who has done 10,000 hours of parenting, and (probably much more than) 10,000 hours of housework. There is a surgeon who has done 40,000 hours of operations. There is a truck driver who has driven for 100,000 hours.

Gladwell is keen to point out, of course, that people’s skills also flourish through the networks they cultivate, and the people they meet, and that (of course) it’s just a little more complicated than 10,000 hours of practice.

My view is that all that 10,000 hours of practice (something which can, of course, be delivered within a traditional educational framework) does is lay down a bedrock of competency.

My theory is that revolutionary thinking is not simply a matter of persistence, but is instead attitudinal, and mostly comes out of people who are forced or who force themselves to take a radically different perspective to the rest of the world. They are — almost by definition — autodidacts, simply because their style of thinking has not yet been pioneered. They have to teach themselves, and iron out the kinks. Being an autodidact of course is not necessarily a matter of choice; very often it is a matter of necessity — people who don’t have access to traditional education, or who are forced to exist outside the system. This can be due to poverty, strong personalities, or a preference for self-teaching (very often expressed as a preference for doing over thinking).

The established system is often very useful for such people, because it gives them a framework from which to hang contrarianism. It gives them something to rebel against and kick out against.

On the other hand there are many examples of professional academics and those within the establishment who pioneer and innovate (although of course it should be noted that the overwhelming majority of academic papers today are masturbatory regurgitation). But such activity forces even the most staid into autodidactic learning; it forces them to make mistakes, and challenge themselves and learn their own lessons.

I suppose it is possible to try to inculcate a love of tinkering, of trial-and-error, and an understanding of the value of failure. It is certainly possible to encourage an interest in self-teaching. But it remains to be seen how many of us will really bite. It strikes me as if most of us do not really want to be innovators; I see far more who want job security, loving families, and plenty of leisure time.

I tend to believe that today’s education system is fit for its own purposes; it churns out competent thinkers, competent doers, people who can analyse to a framework and work to a deadline. True autodidacts and philosophers (in the most literal sense of the word — lovers of thinking, learning and wisdom) will find their own way.