Incomprehensible Bullshit

The physics professor Alan Sokal famously shamed much of the humanities profession by publishing ‘Transgressing the Boundaries: Towards a Transformative Hermeneutics of Quantum Gravity’ — a paper intended as ambiguous gobbledegook — in the peer-reviewed postmodern cultural studies journal Social Text in 1996.

Sokal’s paper was a cleverly and artfully constructed piece of trolling. He did it by conforming to the stylistic trappings of postmodernists like Jacques Derrida, Jean Baudrillard and Luce Irigaray — mimicking their dense and obscure verbiage, misusing concepts from science like quantum gravity (of which there exists no widely accepted scientific theory), and shrouding his argument in a great deal of ambiguity. The piece described a “postmodern science”, one that discarded the notion of objective truth.

The fallout from this paper underlined the divide between science (which seeks objective truth) and postmodernism (which does not). But more than that, it exposed postmodernism and cultural studies as being ambiguous, self-absorbed, and incomprehensible, to the extent that one of its own journals was tricked into publishing an article intended as nonsense.

Yet this issue — of baffling with incomprehensible bullshit — is not just a problem in postmodernism. Mathematics recently had a Sokal moment (and frankly, it is surprising that it took this long). Via the LRB:

Last month That’s Mathematics! reported another landmark event in the history of academic publishing. A paper by Marcie Rathke of the University of Southern North Dakota at Hoople had been provisionally accepted for publication in Advances in Pure Mathematics. ‘Independent, Negative, Canonically Turing Arrows of Equations and Problems in Applied Formal PDE’ concludes:

Now unfortunately, we cannot assume that

It is difficult, as a non-specialist, to judge the weight of that ‘unfortunately’. Thankfully, the abstract is a model of concision:

Let ρ = A. Is it possible to extend isomorphisms? We show that D′ is stochastically orthogonal and trivially affine. In [10], the main result was the construction of p-Cardano, compactly Erdős, Weyl functions. This could shed important light on a conjecture of Conway–d’Alembert.

Baffled? You should be. Each of these sentences contains mathematical nouns linked by the verbs mathematicians use, but the sentences scarcely connect with each other. The paper was created using Mathgen, an online random maths paper generator. Mathgen has a set of rules that define how papers are arranged in sections and what kinds of sentence make up a section and how those sentences are made up from different categories of technical and non-technical words. It creates beautifully formatted papers with the conventional structure, complete with equations and citations but, alas, totally devoid of meaning.
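The machinery behind such a generator is surprisingly small. The sketch below is a minimal illustration of the same idea: a few sentence templates filled in at random from lists of technical words. The templates and word lists are invented here for illustration; they are not Mathgen’s actual grammar or vocabulary, but the principle (random choices slotted into conventional forms) is the same.

```python
import random

# Invented word lists, loosely in the spirit of the abstract quoted above;
# these are NOT Mathgen's actual vocabulary or rules.
ADJECTIVES = ["canonically Turing", "stochastically orthogonal", "trivially affine",
              "compactly Erdős", "pseudo-embedded", "negative definite"]
NOUNS = ["isomorphism", "functor", "subalgebra", "manifold", "random variable"]
NAMES = ["Cardano", "Weyl", "Conway", "d'Alembert", "Erdős"]
TEMPLATES = [
    "Let every {a} {n} be {a2}.",
    "Is it possible to extend {n}s?",
    "We show that the {n} is {a} and {a2}.",
    "In [10], the main result was the construction of {a} {n}s.",
    "This could shed important light on a conjecture of {p}–{p2}.",
]

def sentence():
    """Fill one randomly chosen template with randomly chosen technical words."""
    return random.choice(TEMPLATES).format(
        n=random.choice(NOUNS), a=random.choice(ADJECTIVES),
        a2=random.choice(ADJECTIVES), p=random.choice(NAMES),
        p2=random.choice(NAMES))

def abstract(n_sentences=4):
    """String together sentences that parse individually but mean nothing together."""
    return " ".join(sentence() for _ in range(n_sentences))

if __name__ == "__main__":
    print(abstract())
```

Each run produces something in the style of the abstract above: grammatical, conventionally formatted, and devoid of meaning.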

So mathematicians and mathematics journals are also susceptible to being trolled by their own bullshit, their own conventions, syntax and “rigour”. If a mathematics journal and the peer-review process can be fooled by a meaningless paper spat out by a computer program, how much well-intentioned but bad or meaningless mathematics has also slipped through the peer-review process?

And what about the other subjects that have adopted mathematical symbols as their lexicon, like economics?

I have written at length about some of the problems connected to the great increase in mathematical terminology in economics, and remain highly sceptical of the use of assumptive models in economics. The social sciences are particularly unsuited to simplified mathematical modelling: unlike in the physical sciences, the phenomena they seek to explain tend to be far less linear in observable causation, and so far more susceptible to wildness. No model or theory less than reality itself can fully represent human behaviour and human action; each transaction in an economy is unique, arises from a different set of circumstances, and reflects a constantly varying order of human preferences. This tendency toward nonlinear causality is why transparency is critical to bullshit detection in the social sciences. Just as a sheen of ambiguous, obscure and poorly-defined English can make theories incomprehensible and closed off from scrutiny, so too can a sheen of obscure and specialised mathematics.

Well-written work — whether in plain language or mathematics — requires comprehensible explanations and definitions, so that a non-specialist with a moderate interest in the subject can quickly and easily grasp the gist of the concepts, the theory, the reasoning, and the predictions. Researchers can use methods as complex as they like, but if they cannot explain them clearly in plain language then there is a transparency problem. Without transparency, academia — whether cultural studies, or mathematics, or economics — has sometimes produced self-serving, ambiguous sludge. Bad models and theories produce bad predictions that can inform bad policy and bad investment decisions. It is crucial that ideas are expressed comprehensibly, and that theories and the thought-processes behind them are not hidden behind opaque or poorly-defined words or mathematics.

Empiricism in Economics

It has long been held that there are two kinds of economics:

  1. Rationalist economics: starting out with theses about philosophy, money and reality (etc) and using logic and reason to reach conclusions about the present and predictions about the future.
  2. Empiricist economics: starting out with data and creating mathematical models representing these data, and using these models to reach conclusions about the present, and predictions about the future.

In traditional circles, the first class tends to include the various schools of Austrian and Marxian economics, and the second class tends to include the various schools of Keynesian and Monetarist economics.

Today, I want to put an entirely new spin on empiricism in economics, by shifting the focus away from modelling. The process of mathematical modelling is just as rationalist as using logic and reason.

Why?

Economies are nonlinear systems.

From Wikipedia:

In mathematics, a nonlinear system is a system which is not linear, that is, a system which does not satisfy the superposition principle, or whose output is not directly proportional to its input. 
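To make the quoted definition concrete: a linear system obeys superposition, f(x + y) = f(x) + f(y), so its response to any combination of inputs is just the sum of its responses to each input taken separately. A nonlinear system does not. A minimal sketch (the two functions are arbitrary examples, not models of anything):

```python
def linear(x):
    """A linear response: output directly proportional to input."""
    return 3.0 * x

def nonlinear(x):
    """A nonlinear response: a quadratic feedback term breaks proportionality."""
    return 3.0 * x + 0.5 * x ** 2

x, y = 2.0, 5.0
# Superposition holds for the linear system...
print(linear(x + y), linear(x) + linear(y))           # 21.0 21.0
# ...but fails for the nonlinear one: the whole is not the sum of the parts.
print(nonlinear(x + y), nonlinear(x) + nonlinear(y))  # 45.5 35.5
```

Knowing how a nonlinear system responds to small, separate inputs tells you very little about how it responds to them in combination.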

Effectively, a nonlinear system is one in which mathematical modelling mostly does not work. This, in a nutshell, is the reason why professional economists within the academic system, at the Federal Reserve, and within the IMF and the World Bank are often so desperately incorrect with their predictions, as we have seen so many times in the last few years. 

This is because models are always built from incomplete information: no map or model can be an exact replica of reality, and as Benoit Mandelbrot showed, in a nonlinear system tiny divergences in an unmodelled (or unknown) variable can result in a humungous variation in the output of the system (i.e. the economy).
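Mandelbrot’s point is easy to reproduce. The sketch below uses the logistic map, a textbook toy nonlinear system chosen purely for illustration (it is not a model of any economy): two runs whose starting points differ by one part in a hundred million track one another for a while and then diverge completely.

```python
def logistic_map(x0, r=4.0, steps=50):
    """Iterate x -> r * x * (1 - x), a simple nonlinear system."""
    x = x0
    trajectory = [x]
    for _ in range(steps):
        x = r * x * (1 - x)
        trajectory.append(x)
    return trajectory

# Two starting points differing by one part in a hundred million.
a = logistic_map(0.40000000)
b = logistic_map(0.40000001)

for step in (0, 10, 25, 50):
    print(step, abs(a[step] - b[step]))
# The gap begins at 0.00000001, yet within a few dozen iterations the two
# runs are as far apart as two unrelated trajectories: a tiny unmodelled
# divergence in the input produces a wholesale change in the output.
```

If a model’s inputs are even slightly wrong or incomplete, and in an economy they always are, its outputs can end up arbitrarily far from reality.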

So in dealing with nonlinearity, the model always fails — sometimes by a fraction, and sometimes by a huge amount. The notion of perfectly accurate modelling was famously taken to its logical conclusion by the writer Jorge Luis Borges in On Exactitude in Science:

In that Empire, the Art of Cartography attained such Perfection that the map of a single Province occupied the entirety of a City, and the map of the Empire, the entirety of a Province. In time, those Unconscionable Maps no longer satisfied, and the Cartographers Guilds struck a Map of the Empire whose size was that of the Empire, and which coincided point for point with it. The following Generations, who were not so fond of the Study of Cartography as their Forebears had been, saw that that vast Map was Useless, and not without some Pitilessness was it, that they delivered it up to the Inclemencies of Sun and Winters. In the Deserts of the West, still today, there are Tattered Ruins of that Map, inhabited by Animals and Beggars; in all the Land there is no other Relic of the Disciplines of Geography.

So if accurate modelling of complex dynamical systems such as economies is effectively impossible without mapping every input, what hope can there be for empiricism in economics?

We have to approach it from another angle: if it is impossible to model economies in a laboratory, through equations, or in a supercomputer, the real world must be the testing-ground for ideas.

Actors in economies should be free to experiment. Good ideas should be free to succeed, and bad ones to fail. The role of the government should be to provide a level playing field for experimentalism (and enough of a safety net for when experiments go wrong) — not pick winners or “manage the economy”. People with ideas must be able to access capital so that those ideas can be tested in the market place. If experiments go badly, that is no bad thing: it just means that another idea, or system, or structure needs to be tested. People should be free to go bankrupt and start all over again with a different mindset and different idea.

The corporatist model that most nations around the world have adopted, or fallen into (i.e. “capitalism” led by governments and large corporations), is nothing like this. Small businesses struggle to access capital. Young men and women are thrown onto the scrapheap of unemployment without a chance to develop skills, or entrepreneurial ideas, or even sell their labour, and are pushed into leeching off the wealth of the nation through welfare. Large banks and corporations whose business models have failed are routinely declared “infrastructurally important” or “too big to fail” and bailed out to leech off the nation.

This is not empiricism. This is a disaster. To restore society, we must restore empiricism to our economies.