Free Speech and Absolute Truth

Several weeks ago, King’s College London psychologist Adam Perkins was prevented from delivering an address on the importance of scientific free speech at the College due to the purportedly “high-risk” nature of his content, resulting in yet another ironic case of intolerance in the name of tolerance. Fortunately for Dr. Perkins (and society in general), the thought police have not yet succeeded in banishing dissenting opinions from all corners of civilization, and an abbreviated version of the speech was published by Quillette, an online “platform for free thought.” Thanks to the editors’ generosity, however, readers of Perkins’ piece may discover for themselves that the ideas it promotes are indeed dangerous—dangerous, in fact, to the very scientific free speech it claims to commend. Ironically, Dr. Perkins’ attempted defense of scientific free speech gives more reasons to doubt the merits of his cause than to support it.

After opening with a broad recognition of the general importance of free speech to a healthy society, Dr. Perkins dispels any doubt about the necessity and fragility of free speech in science by recounting the story of Robin Warren and Barry Marshall. In their 1984 experiments, these Australian doctors identified bacteria as the cause of gastritis, overturning the then-popular theory that such conditions were stress-related. Unfortunately, obstinate opponents suppressed the pair’s findings for nearly two decades before Warren and Marshall finally received the 2005 Nobel Prize in Physiology or Medicine. To summarize one of the implications of this story, Perkins quotes German physicist Max Planck: “A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.”

While this quote does apply, at least in part, to Warren and Marshall’s case, its central sentiment is curiously unscientific. Planck’s logic is indisputable with the exception of a single word (“truth”): individual theories incontrovertibly gain favor with the passing of a hostile generation and the inculcation of a new one, but nothing about that process guarantees that the theories which become popular in this way will be truths.

Only a few paragraphs later, though, Perkins’ motive for featuring this contradictory quote is elucidated by his reference to the philosophies of several other famous scientists. He writes:

The existence of scientific debate is also crucial because as the Nobel Prize-winning physicist Richard Feynman remarked in 1963: “There is no authority who decides what is a good idea.” The absence of an authority who decides what is a good idea is a key point because it illustrates that science is a messy business and there is no absolute truth. This was articulated in Tom Schofield’s posthumously published essay in which he wrote:

“[S]cience is not about finding the truth at all, but about finding better ways of being wrong. The best scientific theory is not the one that reveals the truth— that is impossible. It is the one that explains what we already know about the world in the simplest way possible, and that makes useful predictions about the future.”

. . . When one side of a scientific debate is allowed to silence the other side, this is an impediment to scientific progress because it prevents bad theories being replaced by better theories. 

Despite the casualness with which Perkins makes these assertions, they are starkly revolutionary for the scientific enterprise. Presumably, both Warren and Marshall (not to mention Copernicus, whose portrait graces the article’s header) believed that their experiments were worthy of consideration precisely because they manifested absolute truths about the human body.

Moreover, these claims contain both obvious and insidious contradictions that threaten both comprehension and the legitimacy of scientific free speech. Beyond the elementary (yet valid) critique that Perkins seems to be claiming that the proposition that “there is no absolute truth” about science is itself an absolute truth about science, his concern that “bad theories” will no longer be “replaced by better theories” if free speech is curtailed ought to be rendered irrelevant by the purported fact that “[t]here is no authority who decides what is a good idea.” Additionally, this denial of absolute truth contradicts Schofield’s (albeit unhelpfully abstruse) statement that science is “about finding better ways of being wrong”: a theory cannot possibly move on a value scale within wrongness without approaching or straying from rightness—or, in other words, objective truth. And in order for science to have the predictive power Schofield mentions, scientific fact and the scientific principle of consistency must be absolute. These premises are vital to meaningful scientific enquiry, yet Perkins annihilates them all with the stroke of a pen before quickly turning to decry the horrors of a society in which scientific speech is curtailed, apparently ignoring the fact that, if what he says is true, science as it has been practiced throughout human history is based on a falsehood.

Furthermore, the premise that there is no absolute scientific truth contradicts the examples Perkins references throughout the article, which seemingly provide instances of good theories replacing bad theories or vice versa. From Warren and Marshall’s gastritis discovery triumphing over the stress hypothesis to socialist inheritance theory suppressing Mendelian genetics and the pride of NASA scientists silencing the truth about Space Shuttle Challenger’s O-rings, he mentions numerous historical cases in which theories that apparently corresponded to objective reality clashed with theories that did not. Yet if there is truly “no authority who decides what is a good idea,” all of these conflicting hypotheses would be equal—would be, that is, until, as in the last two cases, one theory was embraced by the most powerful class for its profitability relative to that party’s agenda. Indeed, as Planck’s quote reveals, those with the greatest power to promulgate their positions will inevitably be regarded as having attained truth regardless of the reality of the situation—and unfortunately, Perkins’ apparently naturalistic metanarrative provides no reason that the benefit of a select community should not be sought over that of mankind in general.

Finally, Perkins’ arguments are not simply logically contradictory; they also pose significant threats, rather than support, to scientific free speech and the flourishing, egalitarian society so often imagined by humanistic scientists. First, having the freedom to debate is only objectively valuable if absolute truth exists—in that case, bad ideas can and should be criticized because they depart from an accurate assessment of reality. Otherwise, freedom of discourse regarding science (or anything else) is as useful as having the freedom to share one’s color preferences—an agreeable liberty, but one unlikely to benefit society in any real way.

Although Perkins fails to note the fact, in a world of subjective truth, free speech can only ever be subjectively valuable. As he himself states, “. . . there is . . . cause for optimism, as long as we stand up for the principle that no one has the right to police our opinions.” That may be so, but the basis for optimism remains only so long as universal consensus “for the principle that no one has the right to police our opinions” exists. If there is no ultimate “authority who decides what is a good idea,” free speech will cease to be valuable the moment those with the power to mandate virtues begin to dislike it.

If truth (on this blog, assumed to be objective truth) be told, the scientific relativism suggested by this article promotes havoc and (if the subjectivist will permit the term) injustice in the necessarily and undeniably “messy business” that is science. Confirmation bias is a nearly unavoidable temptation for narrative-driven human scientists, and for this reason the formal scientific discourse and disagreement the author mentions are crucial. If no absolute scientific truth exists, however, the scientific freedom of speech Perkins claims to promote cannot possibly be useful, and it will certainly only exist so long as every player in the scientific game agrees to it. With no referee, scientific free speech is as objectively necessary as scientific honesty.

All in all, the secular humanism from which Perkins presumably speaks is frighteningly arrogant. Espousing as its thesis the claim that mankind has the right and the ability to impose meaning on an immense universe, this philosophy has the potential to lead its devotees in whichever direction they wish to take their society. Yet humans, and especially the words they speak (words Perkins is here so anxious to defend), are minute and temporal, infinitesimal blips on the radar of the universe. If they—if we—are to have any meaning greater than what can be maintained by the fleeting 21st-century attention span, it surely must be derived from something greater than a human consensus.

Artificial Intelligence: Redefining Humanity

In its April 2018 issue, Smithsonian featured an article on the future of artificial intelligence by author Stephan Talty titled “Be(a)ware.” Talty’s background as a fiction writer is evident throughout the piece (admittedly, the topic does lend itself to imagination), but one of its most alarming statements comes in the form of what appears to be a simple empirical fact. In one of the introductory paragraphs of the piece, Talty writes:

In just the last few years, “machine learning” has come to seem like the new path forward. Algorithms, freed from human programmers, are training themselves on massive data sets and producing results that have shocked even the optimists in the field. Earlier this year, two AIs—one created by the Chinese company Alibaba and the other by Microsoft—beat a team of two-legged competitors in a Stanford reading-comprehension test. The algorithms “read” a series of Wikipedia entries on things like the rise of Genghis Khan and the Apollo space program and then answered a series of questions about them more accurately than people did. 

Literary enthusiast that I am, I found that this last statement seized my notice by the throat. Inherent in this claim, obviously, is the assumption that an objective scale for reading comprehension exists, that the absorption of the content of a written work into one’s worldview can be objectively, comparatively measured. Yet the recognition that a work of art can, will, and should be interpreted differently by each observer is as old as art itself, dramatically predating the absurdity of deconstructionism. In this case, however, the machines were presumably able to answer questions on the material with such accuracy because they had been programmed to process the texts through the researchers’ personal evaluation framework. By all appearances, the researchers of whom Mr. Talty speaks believe that they have established an objective grading scale for subjectivity—and they have bottle-fed it to quickly maturing artificial minds.

Obviously, though, the tally of Genghis Khan’s wives is not open for debate in quite the same way as the intended message of a Shakespearean sonnet; none but a grotesquely committed conspiracy theorist would argue that algorithm performance evaluation scales are equivalent to censorship of literary criticism. Nevertheless, Talty’s misleading word choice demonstrates that even factual comprehension may be less straightforward than it appears. More importantly, this article points to one of the greatest potential dangers of artificial intelligence, an impending reality even more alarming than any Terminator-esque subjugation of mankind.

The superior performance of the bots in this study is reflected in their score of 82.63 against the human champions’ 82.304, numbers that represent each competitor’s percentage of exactly matched answers to content-related questions. Considering this, the advantage Microsoft’s machine enjoys at such a task is obvious: while the human brain is certainly a wonder, it simply is not designed to recall hundreds of precise phrases verbatim. And for as much of human history as we can access, no human has considered comprehension to be a simple matter of memorization (perhaps precisely for this reason). Maybe because of the distribution of our species’ gifts, humans have always considered understanding to be fundamentally self-referential. This may simply be the inevitable outcome of human arrogance or pragmatism—but even if that is the case, can we afford to risk allowing intelligence researchers, algorithm engineers, and (God help us) Wikipedia editors to just as arbitrarily redefine the standard?
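For readers who want to see what that figure actually measures, below is a minimal sketch, in Python, of how an exact-match percentage might be computed. The normalization steps (lowercasing, stripping punctuation and articles) are an assumption about how benchmarks of this kind typically compare answers, not a reproduction of the Stanford test’s official scoring script, and the example answers are hypothetical.

```python
# Minimal sketch of "exact match" scoring for a reading-comprehension test.
# Assumption: answers are normalized (lowercased, punctuation and the articles
# a/an/the removed) before comparison; a real benchmark's scoring script may
# differ in detail.
import re
import string


def normalize(answer: str) -> str:
    """Lowercase, drop punctuation and the articles a/an/the, collapse whitespace."""
    answer = answer.lower()
    answer = "".join(ch for ch in answer if ch not in string.punctuation)
    answer = re.sub(r"\b(a|an|the)\b", " ", answer)
    return " ".join(answer.split())


def exact_match_score(predictions: list[str], references: list[str]) -> float:
    """Percentage of predicted answers that exactly match their reference answer."""
    matches = sum(
        normalize(p) == normalize(r) for p, r in zip(predictions, references)
    )
    return 100.0 * matches / len(references)


# Hypothetical answers to three questions; two match the reference exactly.
predictions = ["Genghis Khan", "in 1969", "the Apollo program"]
references = ["Genghis Khan", "1969", "the Apollo program"]
print(exact_match_score(predictions, references))  # prints 66.66666666666667
```

Nothing in that arithmetic, of course, captures what a human reader means by comprehension; that, in a sense, is precisely the worry.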

Without question, the artificial intelligence revolution will redefine life for human beings, both those who participate in its development and everyone else. By all appearances, it will first do so insidiously, by changing the standards by which we evaluate our work, our world, and even our humanity. Some of those changes may be changes for the better, but trusting that they can be optimally made by a select, alarmingly homogeneous community quarantined in Northern California seems to me naive. If humanity is to change the standards by which it exists, may it be humanity that changes them, not just one segment of it.

Not insignificantly, though, if we are to change the definitions of words as relatively benign as “comprehension,” we must bear in mind that we will inevitably be redefining them with reference to the definitions of other words. In order to make any part of our existence “better,” we must know what “better” means. And for the most part, we do—for now. But that word has proven itself fickle throughout human history. What child, adult, or civilization hasn’t insisted, “X will make me happy”—X will make things “better”—only to disregard X for Y once X is obtained? In 1938, the Nazis were sure Czechoslovakia would satisfy their appetite; in 1940, it was France. On Palm Sunday, “better” was Jesus enthroned; five days later, “better” was Jesus crucified.

Undeniably, information technology has brought humanity to the brink of a potentially magnificent revolution—but we must know what we wish to magnify. We must know what we mean by “progress”; what we mean by “good.” And if we restrict our search to the world of algorithms and evolution, we are sure to be disappointed.