Why will people colonize space?

[Image: Interior of a Stanford Torus space habitat]

Noah Smith over at Noahpinion does a rundown on why Firefly doesn’t really resonate with him. I agree with his take:

But in Firefly, why do we – meaning the crew of Serenity – go to space? It’s not for a higher purpose. There’s no science being done, no galaxy being saved. The show’s theme song may be about freedom, but unlike many of the people around them, Mal and his crew aren’t colonists. They aren’t going to found a new, more liberal republic on the virgin soil of a distant world. They aren’t going to build a city on a hill. They have no quest, they seek no knowledge, they fight for no cause, they meet no aliens. Their existence is simply a big fat middle finger to the government in the distance.

And for the same reason, it doesn’t resonate with me much, either.

But neither Noah nor I is a space colonist. I can’t speak for Noah, but I am above all else a science fiction fan, wedded to romantic notions of human expansion into the wider universe as a higher calling. A secular religion, if you will. Being human feels good, mostly (and I say that as someone who has experienced plenty of strife and difficulty, as well as physical disability and mental illness). And being human in the technologically augmented universe of fifty or five hundred years from now — boosted by 3-D printers, artificial intelligence, robots, smart drugs, transhuman implants — will probably be significantly better, just as the present is vastly better (less poverty, less child mortality, less starvation and hunger, greater variety of tools and products, etc.) for the vast majority than fifty or five hundred years ago.

So, if being human feels good, why not go forth and make more of the universe human? After all, if we earthly humans — and our vast chain of simpler and simpler evolutionary ancestors — are little more than awakened chemical elements, why not go forth, spread out, and wake more of the universe up? Let more of the universe experience love, emotion, mathematics, music, logic, technology, and all of the other things that make us human. Could there be a more manifest destiny?

In reality, though, I suspect that the motivations for space colonization will be far baser and more mundane. America’s early European colonists were not exactly motivated by the romantic ideals of free speech and free religion. They were much more motivated by lebensraum, and freedom from rulers they did not like.

First, resources. Resources are limited on Earth; asteroid, lunar and interplanetary mining offer the potential to vastly expand the human resource base. This, of course, is basic economics. Humans are often greedy and avaricious. On Earth, resources are guarded by the international military order and mutually assured destruction. Invading a country to take its resources is, to say the least, increasingly difficult. And I predict it will get more so as more and more technology (e.g. drones) raises the stakes in terms of mutually assured destruction.

In space, no such thing. The universe is — to the best of human knowledge — effectively limitless. If the U.S. — or Microsoft, or China, or SpaceX — seizes one asteroid, there are plenty more to seize. Once we’re done with near-Earth asteroids, there’s the asteroid belt, and Venus, Mars, Jupiter and their moons, and so on. Then there’s the Kuiper belt and Oort cloud, and onward and outward to the nearest stars. Beyond the nearest stars are billions more in our galaxy. And beyond that lie billions more galaxies. If we are alone in the universe, there’s a whole universe for us to exploit (or, in my romantic vision, bring to life). If not, then we may well have to fight other species for that right.

Hunger for resources and for lebensraum, I expect, will be a major factor in bringing humanity to the stars, then.

But so too will be the need to stick a “big fat middle finger to the government in the distance.” In absolutely no way are we humans ideologically homogeneous. Watching the rise and fall and rise of Donald Trump and American nationalism is reinforcing this point.

I am a universalist humanist, and that view stems directly from my view of humanity as a planetary species with the potential to go interplanetary. The vast majority of humanity are substantially more tribal than I am. And very many different tribes of people alienated by the earthly mainstream are likely to want to go. Racial and religious and ideological and tribal supremacists will go to space seeking out their own pure paradises, where rules are set by them, and not by the mainstream. And let them go, these Nazis, and radical Islamists, and cultish sects, and neo-Confederates. Let them fly off to some distant planet or asteroid or space station off in the black infinities to pursue their authoritarian dreams, rather than have them subjugate a corner of the Earth.

The point is that these ideological minorities have far more concrete reason to travel away than anyone from the mainstream. Our species is not homogeneous. That is one of our strengths. Our decentralization allows us to experiment with different modes of government and ideology. Our species over the aeons of history has undoubtedly been carried forward to each new generation by many men and women that we today would deem to be insufferably awful — genocidalists, bigots, rapists, murderers, alongside a few who, I assume, we would see as good people.

It doesn’t matter if we are carried to the stars by the dull and the bad. The point is that we are going. And we — the species — are as a whole species neither dull nor bad. Our children all possess the capacity to deviate from us. Such is the long and winding road of genetic and cultural evolution. Maybe that doesn’t make for great science fiction. But often reality is unspeakably dull, and unspeakably bad.

Political Correctness And The Extreme Fragmentation Of Society In Modernity

One of the defining cultural events of the 2016 election season so far has been the overwhelming rejection of the notion of political correctness expressed in the Republican selection of Donald Trump as presidential nominee. Here is Trump expounding his view on political correctness:

[Image: Donald Trump and Hillary Clinton]

What is the political correctness that the Trump supporters are rejecting?

Trump-supporting website Infowars.com gives the following definition:

In his novel 1984, George Orwell imagined a future world where speech was greatly restricted.

He called the language that the totalitarian state in his novel created “Newspeak”, and it bears a striking resemblance to the political correctness that we see in America right now.

According to Wikipedia, Newspeak is “a reduced language created by the totalitarian state as a tool to limit free thought, and concepts that pose a threat to the regime such as freedom, self-expression, individuality, peace, etc. Any form of thought alternative to the party’s construct is classified as ‘thoughtcrime.’”

Infowars then lists 19 examples, from “The Missouri State Fair… permanently bann[ing] a rodeo clown from performing because he wore an Obama mask” to “a Florida police officer” losing his job for calling Trayvon Martin a “thug”, to “the governor of California signing a bill to allow transgendered students to use whatever bathroom and gym facilities they would like”.

The overriding concern expressed by the Trumpians appears to be that liberals are trying to enforce their worldview through the use of language. They are trying, in other words, to promote their own worldview by making it difficult to dissent from the “politically correct” version of reality.

I disagree that political correctness is an entirely or even largely liberal phenomenon. To be blunt and upfront with my thesis, this is because what is politically correct is a matter of subjective opinion. We each — as human beings — have our own notion of what is the politically correct way to frame an argument or think about a situation or system. So that which is “politically correct” for one person or group of people is absolutely politically incorrect for another person or group of people. In other words, every side of the argument has its own “politically correct” version of reality.

For example, advocates of transgender rights and particularly the notion that it is possible for a person to be born transgender would likely be outraged at the notion that Caitlyn Jenner was born as a male, and so is still a man in spite of transitioning to living as a woman. The notion that Caitlyn Jenner is a man is politically (and factually) incorrect to this first group. And by contrast, advocates of rigid and unchangeable gender roles would likely be outraged by the notion that Caitlyn Jenner is now a woman, and can use the women’s bathroom. The notion that Caitlyn Jenner is a woman is politically (and factually) incorrect to this second group.

I even disagree that political correctness is a new phenomenon. What was McCarthyism, if not a hardcore form of right-wing political correctness? What was congressional Republicans renaming French fries as “freedom fries” in protest at the French government’s refusal to participate in the Iraq war, if not an attempt to use language to police reality?

Of course, it is completely possible for someone to believe that X is true and respectfully disagree with the opposing view that X is not true, and vice versa.

But that is hardly the direction that the country is headed. Many metrics show that Americans are becoming more and more politically polarized, as this chart via Pew illustrates:

[Chart: Pew Research data showing rising political polarization among the politically engaged]

Perhaps what people really mean when they say they are frustrated with political correctness is that they are frustrated with just how estranged they are from the other side.

With that in mind, what the selection of Donald Trump represents is not so much a rejection of political correctness as a scorched-earth rejection of the other side’s version of reality. In other words, the polarization is becoming more extreme and both sides’ versions of what is “politically correct” are becoming more distinct and noticeable.

This all, of course, is an outgrowth of the pluralism of modernity. American society has become increasingly pluralistic as it has become increasingly diverse and tolerant of alternative lifestyles.

This is entirely unsurprising. With more freedom and liberty comes divergence. People are variable and heterogeneous. They are not all motivated by the same things and in pursuit of the same goals. Giving people freedom to pursue their own goals and interests inevitably leads to pluralism, if not to full-blown polarization.

This is why Trump’s policies are necessarily authoritarian. In order to beat back the pluralism of modernity, Trump advocates authoritarian policies that reduce liberty with the design of building a more cohesive society. Banning Muslims from entering the U.S. decreases diversity and pluralism. Deporting undocumented migrants decreases diversity and pluralism. Building a wall at the border is an instrument of reducing diversity and pluralism. And the show of naked authoritarianism itself makes society fearful. The most successful totalitarian states are the ones — such as North Korea — where a sheepish public polices itself.

Trump, of course, would point out that these measures were the norm throughout most of American history and that the status quo is some kind of freakish aberration. But to boil it down to its core essence, “Making America Great Again” is about turning back multiculturalism toward monoculture. It is, ultimately, about enforcing an idea — that a more cohesive and less diverse society is a good thing — on everyone else.

Of course, when you have two groups whose understandings of the world fundamentally conflict, it is very hard to achieve unity and stability. Lots of wars have been fought over this very kind of thing. The notion of a culture war is actually quite apt, as cultural warfare is exactly what is occurring between the Trumpians and the liberals.

I doubt that either side will be victorious. The fragmentation of the world that has led to these divergences is probably not the result of a liberal conspiracy or liberal control of government. It is much more likely to be a result of technology. Why? Well, consider the way that technology is fragmenting the media. It is much easier to live in a local monoculture when your main source of global news is a town notice board, or two radio channels, or four TV channels, or even fifty cable channels, than it is when your main source of global news is the huge, varied and exponentially growing internet. As technology continues to fragment communication and the spread of ideas, people will continue to pursue their own individual interests, with the effect of further cultural divergence. Virtual reality will be a very important technology in deepening this, as it will let us not only listen to our own Fox News/MSNBC echo chambers, but live in virtual worlds suited to our own tastes. We are heading toward a world where we can build our own echo chambers and shut off anything we find offensive or unpleasant.

In other words, if you think that cultural fragmentation is bad now — or that the Trump supporters are suggesting extreme measures in order to reimpose a degree of cultural hegemony — you ain’t seen nothing yet. And the decentralization of warfare through the adoption of drone technology and things like 3-D printed guns and bullets means that skirmishes will likely be fought over these divisions again.

How crowdfunding can help us find aliens


I can’t think of many things that would change the global economy more than finding out that we’re not alone in the universe.

Other intelligent species out in the cosmos may have advanced technology to share with us — which would obviously be a potentially huge economic stimulus here on Earth. Or perhaps they’d want to wage war against us and wipe us out. But even if there are hostile species out there, it is better that we know about them as early as possible so we have a better chance to make massive investments to try to defend ourselves.

Meeting intelligent alien species may also provide new markets for us to sell goods. And even the discovery of primitive life — say, in the oceans of Jupiter’s moon Europa — would greatly enrich human scientific knowledge by giving us a new tree of evolutionary life to study.

Read More At TheWeek.com

Why We Should Build The Death Star


In January 2012, Zero Hedge made a sarcastic proposal to boost US GDP by $852 quadrillion — building the Death Star, a fictional moon-sized space station from the Star Wars film series:

Building a massive space weapon is all very well, but you have to find the materials to build it with. It’s easy to say that “sure, the Death Star would be expensive” but is there actually enough iron in the Earth to make the first Death Star? Centives decided to find out.

We began by looking at how big the Death Star is. The first one is reported to be 140km in diameter and it sure looks like it’s made of steel. But how much steel? We decided to model the Death Star as having a similar density in steel as a modern warship. After all, they’re both essentially floating weapons platforms so that seems reasonable.

Scaling up to the Death Star, this is about 1.08×10¹⁵ tonnes of steel — roughly a 1 followed by fifteen zeros.

Which seems like a colossal mass but we’ve calculated that from the iron in the earth, you could make just over 2 billion Death Stars. You see the Earth’s crust may have a limited amount of iron, but the core is mostly our favourite metal and is both very big and very dense, and it’s from here that most of our death-star iron would come.

But, before you go off to start building your apocalyptic weapon, do bear in mind two things. Firstly, the two billion death stars is mostly from the Earth’s core which we would all really rather you didn’t remove. And secondly, at today’s rate of steel production (1.3 billion tonnes annually), it would take 833,315 years to produce enough steel to begin work. So once someone notices what you’re up to, you have to fend them off for 800 millennia before you have a chance to fight back. In context, it takes under an hour to get the steel for HMS Illustrious.

Oh, and the cost of the steel alone? At 2012 prices, about $852,000,000,000,000,000. Or roughly 13,000 times the world’s GDP.
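The arithmetic in that quote is easy to reproduce. Here is a minimal sketch using only the figures quoted above; the small gap versus the quoted 833,315 years presumably comes from Centives using unrounded inputs:

```python
# Back-of-the-envelope reproduction of the Centives figures quoted above.
# All inputs come from the quote itself; nothing here is independent data.

steel_mass_tonnes = 1.08e15        # estimated Death Star steel mass
annual_output_tonnes = 1.3e9       # 2012-era global steel production per year
steel_cost_usd = 852e15            # quoted cost of the steel alone

years_to_produce = steel_mass_tonnes / annual_output_tonnes
implied_price_per_tonne = steel_cost_usd / steel_mass_tonnes
implied_world_gdp = steel_cost_usd / 13_000    # quote: roughly 13,000x world GDP

print(f"Years of steel production: {years_to_produce:,.0f}")           # ~830,769
print(f"Implied steel price: ${implied_price_per_tonne:,.0f}/tonne")   # ~$789
print(f"Implied world GDP: ${implied_world_gdp / 1e12:,.1f} trillion") # ~$65.5
```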

The point was an argument against fiscal stimulus — while it may be possible to boost GDP by any amount through government spending, there is no guarantee whatsoever that that spending will do anything productive. After all the toil and effort of building a Death Star, what is an economy left with? On the surface of things, a giant metallic orb in space and very little else. In Misesian terms, this would be seen as a massive misallocation of capital, resources, labour and technology, building something that nobody in the market demanded and which could ostensibly be used to oppress people (“do what we say or we’ll fire our laser cannon at you!”).

Yet I am going to try to defend it. I think that building the Death Star, or something similar, is a very good idea that would have massive beneficial economic effects for employment, output, science, technology and so forth. And furthermore, I think it is possible, in the very, very long run, for a government to build the Death Star, or something similar on a smaller scale, without misallocating any capital, labour, technology or resources whatsoever.

First, I think that right now humanity is sitting in dangerous territory. There are over seven billion of us, yet we are all concentrated on one ecosystem — the Earth, with one tiny, totally dependent off-planet colony (the International Space Station) that houses fewer than ten people at a time. Simply, in our current predicament we are incredibly exposed. A single mass viral pandemic, asteroid strike or other cataclysm could wipe our species out completely. With humanity spread throughout the solar system (and preferably, the galaxy and the universe), our species is far less vulnerable to random extinction events. The Death Star itself — a giant space weapon — would be a safeguard against a particular kind of cataclysmic risk: hostile alien attack. If there are other advanced lifeforms populating our universe, they may see life on Earth, and especially humans, as an existential threat. Having a large, powerful weapon like a Death Star could be a strong safeguard against our own destruction by other species.

Zero Hedge’s mock proposal is actually quite thin, only taking into account the resource cost of the steel, and not the cost of getting the steel into space, building a moon-sized steel satellite in space, presumably including the development of laser cannon technology, some kind of propulsion system, the feeding and housing of a large permanent crew including oxygen and water recycling facilities, hydroponics and artificial food technologies, a transport system to get people and things between the Earth and the Death Star, etc. Nor does it take into account the cost of the labour in employing scientists and technologists to develop and prototype the technologies, employing engineers to deploy the technology, and employing labourers or automated robots to produce components and parts and to assemble the finished article. Simply, the cost would far exceed even what Zero Hedge projects, possibly by many times over.

So why the hell would I think that committing to spend vastly more than global GDP on a single project that nobody in the market is demanding is a good idea? Have I completely lost my mind, and any concept of sound economics that I once had? Well, on a potentially infinite timeline, such a huge figure (let’s say the necessary figure is ten times what Zero Hedge estimated, which could still be rather low in my honest opinion) pales into insignificance as we go further along the timeline. Building the Death Star is not currently a short term project that could be done to boost GDP in a single year to make up an output gap, deploy idle capital or reduce unemployment. In fact even if we committed to building the Death Star today, it is highly unlikely that we would actually even begin work on it in the next 100 or even 200 years. There would be vast technological, social and organisational challenges ahead before we could even begin to think seriously about commencing production. What we would begin work on are challenges far more modest and far closer to our present capabilities — sending a human to Mars, setting up a permanent base on the moon, setting up a permanent base on Mars, and developing technologies for those purposes — specifically multi-use lifters, a space elevator, improved solar energy collection and storage, improved nuclear batteries, improved 3-D printing technologies, higher energy particle accelerators, space mining technologies, robots, machine learning, computing, life support systems and things as mundane as increased science and science education spending.

Those kinds of tasks are much, much, much lower cost than actually committing to building the Death Star in one go, and can relatively easily be funded from presently idle resources (thus not misallocating any resources) as measured by the output gap, which currently sits at around $856 billion (5.8% of potential GDP). The United States (alongside like-minded countries with similarly large output gaps) could fund a manned mission to Mars ($6 billion), build a new high energy particle accelerator ($12 billion), give ten thousand million-dollar basic research grants ($10 billion), build a base on the Moon ($35 billion) and invest $20 billion more in science education, all for less than 10% of the current output gap. Better still, NASA and space-related spending has historically had a relatively high multiplier: at least $2 of extra economic activity generated per dollar spent (possibly as much as $14 for certain projects, plus around 2.8 indirect jobs for every job directly created). Given that space spending yields new technologies like global positioning systems, satellite broadcasting, 3-D printers and memory foam that lead to new products, this is unsurprising. It also means that such spending is likely to get the economy back to full employment more quickly. Once this round of projects is completed, we will have a better idea of where we need to go technologically to be able to build a Death Star. The next time the economy has a negative output gap and unemployment, a new series of large-scale projects can commence. Eventually, with the growth of technology, automation and knowledge, a project on the scale of the Death Star may become not only economically viable but a valuable contribution to human capacity.
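For what it’s worth, the price tags listed above do come in under the 10% threshold; a quick check:

```python
# Sanity check on the project list above, using the post's own price tags
# against the ~$856bn output gap it cites.

output_gap_bn = 856
projects_bn = {
    "Manned Mars mission": 6,
    "High energy particle accelerator": 12,
    "10,000 basic research grants": 10,
    "Moon base": 35,
    "Extra science education spending": 20,
}

total_bn = sum(projects_bn.values())
print(f"Total: ${total_bn}bn = {total_bn / output_gap_bn:.1%} of the output gap")
# Total: $83bn = 9.7% of the output gap
```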

Many free market purists will wonder what the point of all of this is. Didn’t the Soviet economy collapse under the weight of huge misallocations of capital to large-scale grandiose projects that nobody wanted? What about all the projects that could have been undertaken by the free market in the absence of such a grandiose project? My answer to this is twofold — first of all, I am only proposing deploying idle resources that the market has chosen to allow to sit idle and unproductive for a long time. Second, there are some projects that are actually important but which are not currently viable in the market. Space technology is probably the most obvious example. While I greatly admire the new generation of space entrepreneurs, and while I concede that long-term space colonisation will be undertaken by private individuals and groups (in the manner of the Pilgrim Fathers who colonised America — people seeking the ability to live by their own rules, instead of those of established Earth-based jurisdictions), the private space industry is still a long way behind where states were forty or fifty years ago. The Apollo program that put human beings on the Moon has still not been matched by private enterprise.

Ultimately, the Death Star itself is far beyond current human capacities, and far beyond the capacity of the idle capital, labour and resources that we have the option of using up through public initiatives. This I must concede. But, as a super-long-term goal, the capacity to build such things is what our civilisation ought to aspire to. And getting to such super-long-term objectives requires investment and investigation today.

Will Gold Be a Medium of Exchange Again?

While gold is widely held as a store of purchasing power, and while it is possible to use gold as a unit of account (by converting its floating value to denominate anything in gold terms), gold is no longer widely used as a medium of exchange.

Noah Smith says that gold will never be a widespread medium of exchange again:

In the days when people carried around gold doubloons and whatnot as money, you had a global political system characterized by pockets of stability (the Spanish Empire, or the Chinese Empire, or whatever) scattered among large areas of anarchy. Those stable centers minted and gave out the gold coins. But in the event of a massive modern global catastrophe that brought widespread anarchy, the gold bars buried in your backyard would not be swappable for eggs or butter at the corner store. You’d need some big organization to turn the gold bars into coins of standard weights and purity. And that big organization is not going to do that for you as a free service. More likely, that big organization will simply kill you and take your gold bars, Dungeons and Dragons style.

In other words, I think gold is never coming back as a medium of exchange, under any circumstances. It is no more likely than a return of the Holy Roman Empire. Say goodbye forever to gold money.

Well, forever is a very long time. The human lineage stretches back some six million years, and recorded history suggests that gold has been used as a medium of exchange for only five or six thousand of those years. But for that tiny sliver of human history, gold became for many cultures entirely synonymous with money, and largely synonymous with wealth. So I think Noah is over-egging his case by using the word forever. Societies have drastically changed in the last six thousand years, let alone the last one hundred. We don’t know how human culture and technology and societies will progress in the future. As humans colonise space, we may see a great deal of cultural and social fragmentation; deeper into the future, believers in gold as money may set up their own planetary colonies or space stations.

But what about the near future? Well, central banks are still using gold as a reserve. In the medium term, it is a hedge against the counter-party risks of a global fiat reserve system in flux. But central banks buying and acquiring gold is not the same thing as gold being used as a medium of exchange. Gold as a reserve never went away, and even in the most Keynesian of futures may not fully die for a long time yet.

And what about this great hypothetical scenario that many are obsessed with, where the fragile interconnective structure of modern society — including electronics — briefly or not-so-briefly collapses? Such an event could result from a natural disaster like a megatsunami, or extreme climate change, or a solar flare, or from a global war. Well, again, we can’t really say what will or won’t be useful as a medium of exchange under such circumstances. My intuition is that we would experience massive decentralisation, and trade would be conducted predominantly through barter and theft. If you have gold coins or bars, and want to engage in trade using them — and have a means to protect yourself from theft, like guns and ammunition — then it is foreseeable that these could be bartered. But so too could whiskey, cigarettes, beer, canned food, fuel, water, IOUs and indeed state fiat currencies. If any dominant medium of exchange emerges, it is likely to be localised and ad hoc. In the longer run, if modern civilisation does not return swiftly but instead has to be rebuilt from the ground up over generations, then it is foreseeable that physical gold (and other precious metals, including silver) could emerge as the de facto medium of exchange, simply because such things are nonperishable, fungible, and relatively difficult to fake. On the other hand, if modern civilisation is swiftly rebuilt, then it is much more foreseeable that precious metal-based media of exchange will not have the time to get off the ground on anything more than the most localised and ad hoc of bases.

Noah concludes:

So when does gold actually pay off? Well, remember that stories do not have to be true for people to believe them. Lots and lots of people believe that gold or gold-backed money [will return] in the event of a global social disruption. And so when this story becomes more popular (possibly with the launching of websites like Zero Hedge?), or when large-scale social disruption seems more likely while holding the popularity of the story constant, gold pays off. Gold is like a credit default swap backed by an insolvent counterparty – it has no hope of actually being redeemed, but you can keep it around forever, and it goes up in price whenever people get scared.

In other words, gold pays off when there is an outbreak of goldbug-ism. Gold is a bet that there will be more goldbugs in the future than there are now. And since the “gold will be money again” story is very deep and powerful, based as it is on thousands of years of (no longer applicable) historical experience, it is highly likely that goldbug-ism will break out again someday. So if you’re the gambling type, or if you plan to start the next Zero Hedge, or if your income for some reason goes down when goldbug-ism breaks out, well, go ahead and place a one-way bet on gold.

Noah, of course, is right that gold is valuable when other people are willing to pay for it. The reason why gold became money in the first place was because people chose to use it as a medium of exchange. They liked it, and they used it, and that created demand for it. If that happens again, then gold will be an in-demand medium of exchange again. But for many reasons — including that governments want monetary flexibility — most of the world today has rejected gold as a medium of exchange.

But there is another pathway for gold to pay off. Noah is overlooking the small possibility that gold may become more than a speculative investment premised on the future possibility of its return as a monetary medium. In 2010, scientists from the Brookhaven National Laboratory on Long Island, using their Relativistic Heavy Ion Collider (RHIC), collided gold nuclei traveling at 99.999% of the speed of light. The plasma that resulted was so energetic that a tiny cube of it, with sides measuring about a quarter of the width of a human hair, would contain enough energy to power the entire United States for a year. So there exists a possibility that gold could be used at some date in the future as an energy source — completely obliterating any possibility of gold becoming a medium of exchange again. Of course, capturing and storing that energy is another matter entirely, and may prove impossible. In that case — if gold does not become a valuable energy source — it is almost inevitable that some society somewhere at some stage will experiment again with gold as a medium of exchange.

Have Financial Markets Gone Post-Human?

So, Thomson Reuters pays the University of Michigan a million dollars a year to provide selected clients with the results of the latest survey of consumer sentiment five minutes before the rest of the world sees them — and to provide higher-paying clients with this information in a machine-readable format, ready for algorithmic trading, two seconds before the rest.

This was not first revealed by the issuers of the consumer confidence survey, or by Thomson Reuters, but rather by Nanex:

On May 28, 2013, about 1/4 second before the expected release of the Consumer Confidence number, trading exploded in SPY, the e-Mini and hundreds of other stocks. Even more interesting, activity exploded just 1 millisecond earlier in the futures (traded in Chicago) than stocks (traded in NYC). The speed of light separates information between Chicago and NYC by at least 4 or 5 milliseconds. Which means this was more likely the result of timed trading in both futures and stocks, rather than an arbitrage reaction between the two.

We found no other instances of early trading in the 11 previous monthly releases of the same Consumer Confidence data.
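The latency claim in the quote is easy to sanity-check. A minimal sketch, assuming a great-circle Chicago–NYC distance of roughly 1,145 km (real network routes are longer, so real latencies are higher):

```python
# Rough sanity check on the quoted 4-5 ms figure. Assumes a great-circle
# Chicago-NYC distance of about 1,145 km; real routes add distance and delay.

SPEED_OF_LIGHT_KM_S = 299_792.458
FIBER_INDEX = 1.47            # light in optical fiber travels at roughly c/1.47
distance_km = 1_145

vacuum_ms = distance_km / SPEED_OF_LIGHT_KM_S * 1_000
fiber_ms = vacuum_ms * FIBER_INDEX

print(f"One-way, light in vacuum: {vacuum_ms:.1f} ms")  # ~3.8 ms
print(f"One-way, light in fiber:  {fiber_ms:.1f} ms")   # ~5.6 ms
```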

Nanex’s data:

[Chart: Nanex data showing the burst of trading activity just before the Consumer Confidence release]

So, is having a two-second jump on the market “insider trading”? Well, yes — but it’s legal insider trading, with consent, out in the open. And it likely provides a valuable income stream for the University of Michigan. With or without an early information premium, the algorithmic traders would still have the jump on the wider market. A two-second delay in the high-frequency world is an eternity. This kind of early information premium is more like a financial tax on high-frequency traders. If we’re going to have high-frequency trading at all, it may be better for publicly funded information providers to be able to recoup some or all of their costs by charging the high-frequency traders. In fact, while high-frequency trading continues, states might want to look at rolling this out across other datasets, and putting the proceeds toward infrastructure spending or some other public good. After all, while banning high-frequency trading makes for attractive rhetoric, it would probably send an even greater amount of financial activity offshore into a jurisdiction that allowed it. That implies that it would probably be about as effective as prohibiting marijuana and alcohol.

And does this mean financial markets are going post-human? These kinds of barriers to entry cannot be healthy for inclusive, open, transparent markets. If there is anything that might drive retail investors out of the markets — retail investors remain significantly under-invested relative to where they were before the advent of high-frequency trading, for example — it is massive information asymmetries that render the little guy entirely uncompetitive. Of course, retail investors can still be fundamental value investors, buying and holding. But trying to daytrade against the algorithms seems analogous to a human runner competing against a Ferrari. In fact, given the timeframes (microseconds, in some cases), this analogy understates the gap by many orders of magnitude; it’s closer to a human runner competing against an Alcubierre warp drive. Not so much picking up nickels in front of a steamroller as picking up nickels in front of a Borg cube. If this continues, trading is going post-human.

But in the long run — given how badly traders tend to do against the market — perhaps driving daytraders out of daytrading, where they tend to lose money against the market, and into holding diversified index funds is a cloud with a silver lining. Let the robots read the tape (i.e. run the regression analyses) and do financial battle. Robots are fast, and they don’t get bored or discouraged. Just as in other areas where human endeavour is threatened by robots, it is important to note that while robots can do many things, there are many spheres where humans still have a great advantage. Let humans act in the roles in which they have a natural advantage, and in which robots do not have any skill at all — abstract thought, creativity, social interaction. Robots are still largely confined to drudgery; the word itself is rooted in the Czech word for drudgery, after all. In finance, while robots may some day soon do the overwhelming majority of the trading, humans can still devise the trading strategy, still devise the marketing and sales strategies, and still devise the broader macro strategy.
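To make “reading the tape” concrete, here is a toy illustration of the kind of regression a trading robot automates: fitting a least-squares trend to recent prices and trading on its sign. The price series and the signal are hypothetical, purely for illustration, not anyone’s actual strategy:

```python
# Illustrative toy only: a robot "reads the tape" by fitting a least-squares
# trend line to recent prices and trading on the sign of the slope.
import numpy as np

def rolling_trend_signal(prices: np.ndarray, window: int = 20) -> int:
    """Return +1 (uptrend), -1 (downtrend) or 0, by the slope of a
    least-squares regression over the last `window` prices."""
    recent = prices[-window:]
    t = np.arange(len(recent))
    slope = np.polyfit(t, recent, deg=1)[0]   # slope of the fitted line
    return int(np.sign(slope))

# Simulated random-walk price series standing in for real market data.
prices = 100 + np.cumsum(np.random.default_rng(0).normal(0, 1, 500))
print(rolling_trend_signal(prices))
```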

So perhaps the beginning of the end for human traders is just the end of the beginning for global financial markets. Perhaps that is less of a death sentence, and more of a liberation, allowing talented human labour that in recent years has been channelled into unproductive and obscure projects in big finance to move into more productive domains.

Paying For Our Past Sins

Michael Kinsley’s argument for immediate austerity is about “paying for our past sins”:

Krugman also is on to something when he talks about paying a price for past sins. I don’t think suffering is good, but I do believe that we have to pay a price for past sins, and the longer we put it off, the higher the price will be. And future sufferers are not necessarily different people than the past and present sinners. That’s too easy. Sure let’s raise taxes on the rich. But that’s not going to solve the problem. The problem is the great, deluded middle class—subsidized by government and coddled by politicians. In other words, they are you and me. If you make less than $250,000 a year, Obama has assured us, you are officially entitled to feel put-upon and resentful. And to be immune from further imposition.

Austerians don’t get off on other people’s suffering. They, for the most part, honestly believe that theirs is the quickest way through the suffering. They may be right or they may be wrong. When Krugman says he’s only worried about “premature” fiscal discipline, it becomes largely a question of emphasis anyway. But the austerians deserve credit: They at least are talking about the spinach, while the Krugmanites are only talking about dessert.

To Kinsley, austerity is the necessary spinach. I don’t really understand this. In the United States, a crisis in shadow finance spread into the banking industry, leading to a default cascade throughout the financial system, which resulted in a wider crisis throughout the economy. Ever since 2008, even after the banking sector was propped up, unemployment throughout the wider economy has been rife, economic output has fallen far below its long-term trend line, and bank deposits are soaring as the weak economy has damaged confidence and convinced possessors of money to save rather than spend or invest.

So many activities in the boom — from home speculation, to NINJA loans, to subprime securitisation, and ultimately the 40-year cycle of total credit growth that led to the Minsky Moment in 2008 — proved unsustainable. But a huge cost has already been paid for those unsustainable activities in the form of the initial crash, depressed growth, unemployment, and so on. The structure of production has been irrevocably changed by the bust. But are the people suffering the unemployment, the depressed real wage growth, etc., the people who created the total debt growth? No, of course not. Any connection is arbitrary — the people creating the credit default swaps and structured securitised products (ABS, MBS, etc.) and NINJA loans that triggered the banking crises have in many cases kept their jobs and been promoted. Certainly, some bankers like Dick Fuld who were involved in creating the crisis lost their jobs, but many people who had nothing whatever to do with the banking crisis have lost their jobs, or worse, have never even got a job.

So who does Kinsley want to consume the spinach? The people who take the hit to their purchasing power in an austerity program aren’t the ones who caused the financial crisis. Perhaps financial regulators and central bankers were to some degree responsible, but the overwhelming majority of people dependent on government income had nothing whatever to do with financial regulation. Certainly, one side-effect of the crisis has been falling tax revenues, which has meant bigger deficits. But structural deficits are actually relatively low, and nominal deficits are rapidly falling. And the actual interest cost of servicing the deficits is at record lows and, with savings levels soaring, unlikely to start rising anytime soon. So any appearance of a deficit problem is a side-effect of a depressed economy. Ultimately, austerity will reduce the government’s use of resources — capital and labour. And what is the problem with the economy at the moment? Slack resources in capital and labour, to such an extent that interest rates are at record lows and unemployment is very high. Kinsley’s “spinach” has nothing whatever to do with the problem. In the long run, once the economy is at full employment, businesses are booming and interest rates have risen, some austerity will be helpful, not least to take the edge off the boom. But why now? Immediate austerity is iatrogenic medicine — misidentifying the problem, and prescribing a cure that harms the patient.

In my view, a bust after an economic boom may to some degree be unavoidable as an artefact of human psychology. Ultimately, we should remember that a credit-driven boom isn’t a sign of overproduction of goods and services, or of a society living beyond its means. After all, the demand for goods and services really existed, and the capacity for the production and use of goods and services really existed. Humans are excitable animals, prone to strange twinges of spirit in both mania and depression. The business cycle delivers the dessert and the spinach in recurrent cycles. Actions have consequences, and the actions leading into the slump have had huge consequences. But what about our present sins? Having the government force more spinach onto a society already suffering from massive unemployment of people, resources and capital is a strange and cruel prescription. We have already had our spinach, in the crash of 2008 and the following slump. Huge numbers of people are unemployed, or have dropped out of the labour force, or have not had the chance to enter the labour force. That is the spinach. If the economy were a man, spinach would be coming out of his ears. Michael Kinsley and his intellectual cousins want to offset spinach with more spinach. Yet the economy retains much the same capacity for ice cream, and pizza and milkshakes and marshmallows as it had before the slump, if not a greater one. In the long run, society will rediscover its taste for economic growth, for income growth, and all the slack resources will be used up to produce things that people actually want and need. Yet that does not help the unemployed who have eaten plateful after plateful of spinach as a consequence of actions for which they were mostly not responsible. What could help the unemployed? Job creation and putting slack resources to use.

Ben Bernanke Is Right About Interconnective Innovation


I’d just like to double down on Ben Bernanke’s comments on why he is optimistic about the future of human economic progress in the long run:

Pessimists may be paying too little attention to the strength of the underlying economic and social forces that generate innovation in the modern world. Invention was once the province of the isolated scientist or tinkerer. The transmission of new ideas and the adaptation of the best new insights to commercial uses were slow and erratic. But all of that is changing radically. We live on a planet that is becoming richer and more populous, and in which not only the most advanced economies but also large emerging market nations like China and India increasingly see their economic futures as tied to technological innovation. In that context, the number of trained scientists and engineers is increasing rapidly, as are the resources for research being provided by universities, governments, and the private sector. Moreover, because of the Internet and other advances in communications, collaboration and the exchange of ideas take place at high speed and with little regard for geographic distance. For example, research papers are now disseminated and critiqued almost instantaneously rather than after publication in a journal several years after they are written. And, importantly, as trade and globalization increase the size of the potential market for new products, the possible economic rewards for being first with an innovative product or process are growing rapidly. In short, both humanity’s capacity to innovate and the incentives to innovate are greater today than at any other time in history.

My reasons for optimism for the long run are predominantly technological rather than social. I tend to see the potential for huge organic growth in the long run resulting from falling energy and manufacturing costs, driven by superabundant alternative energy sources like solar, synthetic petroleum, wind, and nuclear, as well as decentralised manufacturing through 3-D printing and ultimately molecular manufacturing.

But Bernanke’s reasons are pretty good too. I see it every day. Using Twitter, the blogosphere and various other online interfaces, I discuss and refine my views in the company of a huge selection of people from various backgrounds. And we all have access to masses of data to back up or challenge our ideas. Intellectual discussions and disputes that might once have taken years now take days or weeks — look at the collapse of Reinhart & Rogoff. Ideas, hypotheses, inventions and concepts can spread freely. One innovation shared can feed into ten or twenty new innovations. The internet has built a decentralised open-source platform for collaborative innovation and intellectual development like nothing the world has ever seen.

Of course, as the 2008 financial collapse and the more general Too Big To Fail problem show, greater interconnectivity isn’t always good news. Sometimes, greater interconnectivity allows for the transmission of the negative as well as the positive; in the case of 2008, the interconnective global financial system transmitted illiquidity in a default cascade.

But in this case, sharing ideas and information seems entirely beneficial both to the systemic state of human knowledge and innovation, and to individuals like myself who wish to hook into the human network.

So this is another great reason to be optimistic about the long run.

Does Shelf Stacking Beat Geology?

The British Work and Pensions Secretary Iain Duncan Smith reacted angrily to the victorious legal challenge made by an unemployed geology graduate who was forced to do unpaid work stacking shelves at Poundland, a British discount chain.


The BBC reports:

Miss Reilly, a University of Birmingham geology graduate, and 40-year-old unemployed HGV driver Jamie Wilson, from Nottingham, both succeeded in their claims that the unpaid schemes were legally flawed.

This was because the regulations behind the schemes did not comply with the Act of Parliament that gave the DWP the power to introduce the programme.

Miss Reilly said that in November 2011 she had to leave her voluntary work at a local museum and work unpaid at the Poundland store in Kings Heath, Birmingham, under a scheme known as the “sector-based work academy”.

“Those two weeks were a complete waste of my time, as the experience did not help me get a job,” she said, after the court ruling on 12 February.

“I was not given any training and I was left with no time to do my voluntary work or search for other jobs.

“The only beneficiary was Poundland, a multi-million pound company. Later I found out that I should never have been told the placement was compulsory.

“I don’t think I am above working in shops like Poundland. I now work part-time in a supermarket. It is just that I expect to get paid for working.”

Now, I don’t think that people should be paid for doing nothing, and I want to see a reduction in the welfare bill through employment growth as much as anyone else. But the idea that people with skills and qualifications should be forced into subsidised menial labour is absurd, and an absolute misallocation of capital and labour.

It is important to emphasise that this was not a paid job, because that has important economic implications. If this were a paid job, offered by the market, then there would be no reason for the unemployed person to refuse it. In a market economy, there will always be a degree of economic mismatch, and people who are trained in one thing may well have to take a job in another temporarily or even permanently. That is undisputed. But that is not the issue at stake here.

If the company in question cannot or will not pay a wage for a worker’s labour, then the position is unsustainable and untenable. Effectively, the government is engaging in subsidisation — providing labour free of cost to corporations to support otherwise unsustainable activities. So in this case the government is choosing to subsidise shelf-stacking over geology.

Iain Duncan Smith’s words actually make this very clear:

Shelf-stacking is more important than geology.

This is an outstandingly unwise decision, made by a government that has spent the last three years making profoundly unwise decisions that have led to a stagnation in growth more severe than that of the Great Depression.

The state should not prioritise one sector over another. The state should certainly not subsidise work in one industry when an unemployed person has skills and qualifications to work in another industry where there are vacancies. It is a waste of taxpayers’ money to place unemployed people in an irrelevant sector. In fact, the energy and mining industries are a key growth sector today in Britain and around the world, so the notion that someone trained in geology should be subsidised into stacking shelves is eye-poppingly absurd, and reminiscent of the kinds of grotesque capital misallocations in the Soviet Union and North Korea, where skilled workers and intellectuals were (and are) often forced to work in demeaning jobs.

The real point of these programs appears to be to provide corporations with a source of free labour, and to engage in demeaning moral paternalism. As Iain Duncan Smith himself puts it:

I’m sorry, but there is a group of people out there who think they’re too good for this kind of stuff.

Duncan Smith seems keener to teach young unemployed people a moralising, paternalistic lesson than he is to pursue sound economic policies. In fact, that is very much the trajectory of this entire government and its self-defeating “age of austerity” project.

Who Should Be Giving Thanks This Thanksgiving?

Not the wider public.

Our financial system is broken. Our political system is broken. Oligarchs and their cronies reap easy rewards — bailouts, crony capitalism, corporate handouts, liquidity injections, favourable “regulation” (that puts oligarchs’ competition out of a business) — while taxpayers pay the bill.

But no such thing lasts forever.

Thanksgiving is very much the day of the black swan. Nassim Taleb used a turkey fattened up for Thanksgiving as an example of a black swan phenomenon. The turkey sees itself being fed every day by the turkey farmer and assumes, based on past behaviour, that this will continue indefinitely, until the day comes when the farmer kills the turkey. Nothing in the turkey’s limited experiential dataset suggested such an event.

But Thanksgiving also commemorates the end of pre-Columbian America, a huge earth-shattering black swan for the people of the Americas. The day before the first European immigrants landed in North America, very little in the Native Americans’ dataset suggested what was to come.

In a globalised and hyper-connected world, drastic systemic change can occur faster than ever before.

All it takes is the first spark.