The Humanist’s God

Nearly 150 years ago, Friedrich Nietzsche announced “Gott ist tot”—“God is dead,” slaughtered by enlightened mankind. Since Nietzsche’s time, the curious conglomerate of beliefs that is secular humanism has risen to the challenge of creating meaning for a godless mankind, doing so by pushing man himself into the role of moral magistrate and embodied goodness—or, in premodern phraseology, God. Nevertheless, the conflicting claims of this philosophical system not only cause it to collapse on itself but also ultimately force humanity once more into the clutches of an unknown transcendent entity.

To begin, humanism’s peculiar collection of foundational premises reflects the poverty of its definitionally suspect attempt to be “a nontheistic religion” (Merriam-Webster). According to the New Oxford American Dictionary, “Humanist beliefs stress the potential value and goodness of human beings . . . and seek solely rational ways of solving human problems,” yet secular humanism in particular openly claims to be naturalistic. The system is therefore brazenly contradictory: while professing a monistic naturalism, humanists concern themselves deeply with the metaphysical matters of morality, social justice, and reason, revealing a conspicuous (if unacknowledged) dualistic aspect of their philosophy.

For this reason alone, secular humanism proves to be an illegitimate philosophical system. In order to devise a coherent worldview, the humanist must choose definitively between monism and dualism. If he truly wishes to affirm that the physical world is all that exists, he must abandon the premise that moral differentiation is possible—morality is nonmaterial, after all, so if he continues to acknowledge it, he must jettison his naturalism. As long as humanists attempt to hold these two conflicting beliefs simultaneously, however, their worldview must, on the basis of the law of noncontradiction, be regarded as philosophically incoherent.

Unfortunately, humanism’s quarrel with logic does not end here—indeed, despite its eager embrace of reason, humanism fails to rationally justify the legitimacy of logic itself. While humanists promote reason as the omnipotent savior of the deified yet clearly flawed human race, naturalism once again stands in opposition to this tenet. If the physical is the entirety of reality, then reality (and hence reason) is necessarily determined by the laws of physics and chemistry. In that case, man’s messianic reason is fundamentally of a kind with his animal instincts, rendering reason interchangeable with the baser aspects of the self from which it is supposed to save him, a conclusion that plainly conflicts with the claims of humanism. Likewise, if all is matter and the matter that composes humans is comparable to the matter that composes the rest of the universe, the humanist’s choice to elevate man as ultimate being might be viewed as arbitrary or even selfish (if it were a free choice, which, obviously, it cannot be on this view). Finally and most importantly, in order to be the useful and rational instrument humanists consider it to be, logic must be free, a tool to be wielded in whichever direction necessary to encounter truth. Instead, naturalism enslaves humanism’s savior, Reason, to the inhuman and inhumane hands of nature and chance.

Indeed, errors of consistency and rationality are not the extent of humanism’s hurdles—the system even ultimately fails in its attempt to liberate man from an exacting God. As mentioned above, among the foremost assumptions of humanism is the assertion that humans are inherently good, or are at least capable of achieving moral goodness individually and collectively. Obviously, this proposition requires an objective standard of goodness, a term atheistic humanist philosophers tend to define much like their theistic counterparts, grounding the definition in the ability of virtue to promote human flourishing and happiness rather than in the declarations of a presiding God. However, this standard for goodness has an interesting consequence: if humans are inherently good, then, conversely, goodness is inherently human. When the claim is phrased this way, it becomes apparent that secular goodness relies for its very definition upon humanness. In other words, man literally becomes God.

Yet this raises an interesting dilemma for the humanist: according to the materialistic dogma of secular humanism, man is an exclusively physical, mechanical being, the result of untold years of chance evolution. Therefore, the true divinity of atheism is not the wonder that is man but the endless string of accidental DNA replication errors that created him. While he claims to worship man, the humanist finds himself ultimately paying homage to the omnipotent but impersonal, all-forming but invisible hand of Chance. In so doing, he subjects himself once again to the dominion of an unknown deity that brutally yet unfeelingly wreaks pain and terror on its creation, escaping the frying pan of religious superstition only to fall into the fire of prehistoric Earth worship.

As the rise of secular humanism demonstrates, the past century and a half has proven Nietzsche right—“we ourselves” have “become gods simply to appear worthy of [God’s murder].” Nevertheless, glorious Man cannot escape the reality that he did not create himself, so even humanism is forced to reckon with a superhuman deity. Having killed God in order to free man from God’s imposition, the humanists may in the end find their new master even less agreeable than their first.

Free Speech and Absolute Truth

Several weeks ago, King’s College London psychologist Adam Perkins was prevented from delivering an address on the importance of scientific free speech at the College due to the purportedly “high-risk” nature of his content, resulting in yet another ironic case of intolerance in the name of tolerance. Fortunately for Dr. Perkins (and society in general), the thought police have not yet succeeded in banishing dissenting opinions from all corners of civilization, and an abbreviated version of the speech was published by Quillette, an online “platform for free thought.” Thanks to the editors’ generosity, readers of Perkins’ piece may discover for themselves that the ideas it promotes are indeed dangerous—dangerous, in fact, to the very scientific free speech they purport to commend. Ironically, Dr. Perkins’ attempted defense of scientific free speech lays out more reasons to doubt the merits of his cause than to affirm them.

After opening with a broad recognition of the general importance of free speech to a healthy society, Dr. Perkins dispels any doubt of the necessity and fragility of free speech in science by recounting the story of Robin Warren and Barry Marshall. In their 1984 experiments, these Australian doctors identified bacteria as the cause of gastritis, overturning the then-popular theory that such conditions were stress-related. Unfortunately, obstinate opponents suppressed the pair’s findings for nearly two decades before Warren and Marshall finally received the 2005 Nobel Prize in Physiology or Medicine. To summarize one of the implications of this story, Perkins quotes German physicist Max Planck: “A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.”

While this quote certainly applies in part to Warren and Marshall’s case, its central sentiment is curiously unscientific. Planck’s logic is indisputable with the exception of a single word: “truth.” Theories incontrovertibly gain favor with the passing of a hostile generation and the inculcation of a new one, but nothing about that process guarantees that the ideas which become popular in this way will be true.

Only a few paragraphs later, though, Perkins’ motive for featuring this contradictory quote is elucidated by his reference to the philosophies of several other famous scientists. He writes:

The existence of scientific debate is also crucial because as the Nobel Prize-winning physicist Richard Feynman remarked in 1963: “There is no authority who decides what is a good idea.” The absence of an authority who decides what is a good idea is a key point because it illustrates that science is a messy business and there is no absolute truth. This was articulated in Tom Schofield’s posthumously published essay in which he wrote:

“[S]cience is not about finding the truth at all, but about finding better ways of being wrong. The best scientific theory is not the one that reveals the truth—that is impossible. It is the one that explains what we already know about the world in the simplest way possible, and that makes useful predictions about the future.”

. . . When one side of a scientific debate is allowed to silence the other side, this is an impediment to scientific progress because it prevents bad theories being replaced by better theories. 

Despite the casualness with which Perkins makes them, these assertions are starkly revolutionary to the scientific realm. Presumably, both Warren and Marshall (not to mention Copernicus, whose portrait graces the article’s header) believed that their work was worthy of consideration precisely because it manifested absolute truths about the world.

Moreover, these claims contain contradictions, both obvious and insidious, that threaten comprehension as well as the legitimacy of scientific free speech. Beyond the elementary (yet valid) critique that Perkins seems to be claiming that the proposition that “there is no absolute truth” about science is itself an absolute truth about science, his concern that “bad theories” will cease to be “replaced by better theories” should free speech cease ought to be rendered irrelevant by the purported fact that “[t]here is no authority who decides what is a good idea.” Additionally, this denial of absolute truth contradicts Schofield’s (albeit unhelpfully abstruse) statement that science is “about finding better ways of being wrong”: a theory cannot possibly move along a value scale within wrongness without approaching or straying from rightness—or, in other words, objective truth. Also, in order for science to have the predictive power Schofield mentions, scientific fact and the scientific principle of consistency must be absolute. These premises are vital to meaningful scientific inquiry, yet Perkins annihilates them all with the stroke of a pen before quickly turning to decry the horrors of a society in which scientific speech is curtailed, apparently ignoring the fact that if what he says is true, science as it has been practiced throughout human history is based on a falsehood.

Furthermore, the premise that there is no absolute scientific truth contradicts the examples Perkins references throughout the article, which seemingly provide instances of good theories replacing bad theories or vice versa. From Warren and Marshall’s gastritis discovery triumphing over the stress hypothesis to Lysenko’s socialist inheritance theory suppressing Mendelian genetics and the pride of NASA scientists silencing the truth about Space Shuttle Challenger’s O-rings, he mentions numerous historical cases in which theories that apparently corresponded to objective reality clashed with theories that did not. Yet if there is truly “no authority who decides what is a good idea,” all of these conflicting hypotheses would be equal—would be, that is, until, as in the last two cases, one theory was embraced by the most powerful party for its profitability relative to that party’s agenda. Indeed, as Planck’s quote reveals, those with the greatest power to promulgate their positions will inevitably be regarded as having attained truth regardless of the reality of the situation—and unfortunately, Perkins’ apparently naturalistic metanarrative provides no reason that the benefit of a select community should not be sought over that of mankind in general.

Finally, Perkins’ arguments are not simply logically contradictory; they also pose significant threats, rather than support, to scientific free speech and the flourishing, egalitarian society often dreamed of by humanistic scientists. First, having the freedom to debate is only objectively valuable if absolute truth exists—in that case, bad ideas can and should be criticized because they depart from an accurate assessment of reality. Otherwise, freedom of discourse regarding science (or anything else) is as useful as the freedom to share one’s color preferences—an agreeable liberty, but one unlikely to benefit society in any real way.

Although Perkins fails to note the fact, in a world of subjective truth free speech can only ever be subjectively valuable. As he himself states, “. . . there is . . . cause for optimism, as long as we stand up for the principle that no one has the right to police our opinions.” This may be true, but the basis for optimism endures only so long as universal consensus “for the principle that no one has the right to police our opinions” exists. If there is no ultimate “authority who decides what is a good idea,” free speech will cease to be valuable the moment those with the power to mandate virtues begin to dislike it.

If truth (on this blog, assumed to be objective truth) be told, the scientific relativism suggested by this article promotes havoc and (if the subjectivist will permit the term) injustice in the necessarily and undeniably “messy business” that is science. Confirmation bias is a nearly unavoidable temptation for narrative-driven human scientists, and for this reason the formal scientific discourse and disagreement the author mentions are crucial. If no absolute scientific truth exists, however, the scientific freedom of speech Perkins claims to promote cannot possibly be useful, and it will certainly exist only so long as every player in the scientific game agrees to it. With no referee, scientific free speech is no more objectively necessary than scientific honesty.

All in all, the secular humanism from which Perkins presumably speaks is frighteningly arrogant. Espousing as its thesis the claim that mankind has the right and ability to impose meaning on an immense universe, this philosophy has the potential to lead its devotees in whichever direction they wish to take their society. Yet humans, and especially the words they speak, words Perkins is here so anxious to defend, are minute and temporal, infinitesimal blips on the radar of the universe. If they—if we—are to have any meaning greater than what can be maintained by the fleeting 21st-century attention span, it surely must be derived from something greater than a human consensus.