The physicist Alan Sokal famously shamed much of the humanities profession by publishing ‘Transgressing the Boundaries: Towards a Transformative Hermeneutics of Quantum Gravity’, a paper intended as meaningless gobbledegook, in the postmodern cultural studies journal Social Text in 1996.
Sokal’s paper was an artfully constructed piece of trolling. He conformed to the stylistic trappings of postmodernists like Jacques Derrida, Jean Baudrillard and Luce Irigaray: mimicking their dense and obscure verbiage, misusing concepts from science such as quantum gravity (of which there exists no widely accepted scientific theory), and shrouding his argument in a great deal of ambiguity. The piece described a “postmodern science”, one that discarded objective truth.
The fallout from this paper underlined the divide between science, which seeks objective truth, and postmodernism, which does not. But more than that, it exposed postmodernism and cultural studies as so ambiguous, self-absorbed and incomprehensible that their own journals could be tricked into publishing an article intended as nonsense.
Yet this issue — of baffling with incomprehensible bullshit — is not just a problem in postmodernism. Mathematics recently had a Sokal moment (and frankly, it is surprising that it took this long). Via the LRB:
Last month That’s Mathematics! reported another landmark event in the history of academic publishing. A paper by Marcie Rathke of the University of Southern North Dakota at Hoople had been provisionally accepted for publication in Advances in Pure Mathematics. ‘Independent, Negative, Canonically Turing Arrows of Equations and Problems in Applied Formal PDE’ concludes:
Now unfortunately, we cannot assume that … [the paper’s concluding display equation is omitted here]
It is difficult, as a non-specialist, to judge the weight of that ‘unfortunately’. Thankfully, the abstract is a model of concision:
Let ρ = A. Is it possible to extend isomorphisms? We show that D′ is stochastically orthogonal and trivially affine. In [10], the main result was the construction of p-Cardano, compactly Erdős, Weyl functions. This could shed important light on a conjecture of Conway–d’Alembert.
Baffled? You should be. Each of these sentences contains mathematical nouns linked by the verbs mathematicians use, but the sentences scarcely connect with each other. The paper was created using Mathgen, an online random maths paper generator. Mathgen has a set of rules that define how papers are arranged in sections and what kinds of sentence make up a section and how those sentences are made up from different categories of technical and non-technical words. It creates beautifully formatted papers with the conventional structure, complete with equations and citations but, alas, totally devoid of meaning.
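To see how simple the underlying trick can be, here is a minimal sketch of a Mathgen-style generator in Python. Everything in it (the templates, the word lists, the sentence and abstract functions) is invented for illustration; Mathgen’s real grammar is far larger and more elaborate.

```python
import random

# A toy generator in the spirit of Mathgen (illustrative only).
# Each template is grammatical mathematical English; the vocabulary
# is slotted in at random, so every sentence is well-formed but
# no sentence connects with the next.

ADJECTIVES = ["stochastically orthogonal", "trivially affine",
              "compactly Erdős", "canonically Turing", "pointwise negative"]
NOUNS = ["isomorphism", "functor", "subgroup", "manifold", "equation"]
NAMES = ["Conway", "d'Alembert", "Weyl", "Cardano"]
TEMPLATES = [
    "We show that every {adj} {noun} is {adj2}.",
    "Is it possible to extend {adj} {noun}s?",
    "In [10], the main result was the construction of {adj} {noun}s.",
    "This could shed important light on a conjecture of {name}–{name2}.",
]

def sentence() -> str:
    """Fill one randomly chosen template with randomly chosen vocabulary."""
    return random.choice(TEMPLATES).format(
        adj=random.choice(ADJECTIVES),
        adj2=random.choice(ADJECTIVES),
        noun=random.choice(NOUNS),
        name=random.choice(NAMES),
        name2=random.choice(NAMES),
    )

def abstract(n_sentences: int = 4) -> str:
    """Chain a few sentences into an 'abstract'; none relates to the next."""
    return " ".join(sentence() for _ in range(n_sentences))

if __name__ == "__main__":
    print(abstract())
```

Every sentence this emits parses as mathematical English, yet no sentence bears any relation to the one before it; that is precisely the property the LRB describes.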
So mathematicians and mathematics journals are also susceptible to being trolled by their own bullshit: their own conventions, syntax and “rigour”. If a mathematics journal and its peer-review process can be fooled by a meaningless paper spat out by a computer program, how much well-intentioned but bad or meaningless mathematics has also slipped through peer review?
And what about the other subjects that have adopted mathematical symbols as their lexicon, like economics?
I have written at length about some of the problems connected to the great expansion of mathematical terminology in economics, and I remain highly sceptical of the use of assumptive models in economics. The social sciences are particularly unsuited to simplified mathematical modelling: unlike the physical sciences, the phenomena they seek to explain tend to be far less linear in observable causation, and so far more susceptible to wildness. No model or theory less than reality itself can fully represent human behaviour and human action; each transaction in an economy is unique, arising from a different set of circumstances and a constantly varying order of human preferences. This tendency toward nonlinear causality is why transparency is critical to bullshit detection in the social sciences. Just as a sheen of ambiguous, obscure and poorly defined English can make theories incomprehensible and closed off from scrutiny, so too can a sheen of obscure and specialised mathematics.
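As a toy illustration of how nonlinearity defeats a simple model, consider the sketch below. The spending() rule and every number in it are invented for the example; the point is only that a linear model fitted where the world happens to look linear extrapolates disastrously once the nonlinearity bites.

```python
# A toy illustration (all coefficients and the spending() rule are made up):
# fit a straight line to observations from a mildly nonlinear process,
# then extrapolate beyond the fitted range.

def spending(income: float) -> float:
    """Hypothetical behaviour: near-linear at first, then saturating."""
    return 100 * income / (1 + 0.01 * income)

# Ordinary least squares for y = a + b*x, fitted on low incomes only.
xs = [float(x) for x in range(1, 51)]
ys = [spending(x) for x in xs]
n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
    / sum((x - mean_x) ** 2 for x in xs)
a = mean_y - b * mean_x

# Inside the fitted range the line looks convincing; far outside, it drifts.
for income in (25.0, 100.0, 400.0):
    print(f"income={income:6.0f}  linear fit={a + b * income:9.0f}  "
          f"actual={spending(income):9.0f}")
```

Within the fitted range the straight line tracks the data closely; at four times that range it overshoots the actual value several times over, and nothing in the model’s own output warns of this.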
Well-written work, whether in plain language or in mathematics, requires comprehensible explanations and definitions, so that a non-specialist with a moderate interest in the subject can quickly and easily grasp the gist of the concepts, the theory, the reasoning and the predictions. Researchers can use methods as complex as they like, but if they cannot explain those methods clearly in plain language then there is a transparency problem. Without transparency, academia (whether cultural studies, or mathematics, or economics) has sometimes produced self-serving ambiguous sludge. Bad models and theories produce bad predictions that can inform bad policy and bad investment decisions. It is crucial that ideas are expressed comprehensibly, and that theories and the thought processes behind them are not hidden behind opaque or poorly defined words or mathematics.