Christianity & Liberalism

This past week, I had the opportunity to enjoy American theologian J. Gresham Machen’s celebrated 1923 work Christianity & Liberalism, a timeless response to the then-infant movement of so-called “liberal Christianity.” In the decades since Machen wrote this book, liberalism has transitioned from an innovative theological school to the very air Christians breathe outside and often even inside the church, rendering Machen’s analysis uniquely enlightening for today’s church. In fact, this book is highly relevant today not only to orthodox Christians but also to all who thoughtfully experience 21st century pluralism, because of its shrewd differentiation between the objective basis of conservative Christianity and the subjective basis of liberalism.

In the book’s opening chapter, Machen presents the thesis he defends in subsequent pages: despite theological liberalism’s appropriation of Christian terminology, historic, biblical Christianity and modern liberal Christianity are in fact two separate and antithetical religions. Seeking to safeguard Christianity against the then rapidly blossoming naturalistic disciplines of science, psychology, and historical criticism, Machen explains, liberal theologians attempted to extricate Christianity from these fields by reducing it to a lifestyle. Yet in so doing, he contends, they created an entirely different religion that is both unscientific and un-Christian in any historical sense.

In the next chapter, Machen defends this assertion by examining liberalism’s rejection of doctrine and embrace of “Christian experience” as the whole of the religion. He explains that the historical record of Paul’s epistles, the infant Christian church, and even the words of Jesus leaves no justification for such a move in the name of Christianity; while it is possible that the message of Christianity is wrong, separating Christianity from its message is historically unprecedented. Thus liberal theologians have created a completely new religion that Machen insists ought to be divorced from the historically established doctrinal faith of Christianity.

In his third chapter, Machen explores the vast differences between Christianity and liberalism even in their fundamental presuppositions about God and man. While Christianity’s view of God is grounded in the facts found in historical documents, liberalism bases man’s relationship to God on subjective human feelings. Moreover, he demonstrates that the liberal suggestion that God is revealed only through Jesus is logically and biblically indefensible. Similarly, he maintains that the liberal rejection of the doctrine of sin and human nature utterly ignores not only biblical but also experiential reality—a reality that his original readers, freshly weary from one atrocious world war and unwittingly on the brink of an even worse one, would have been far less hasty to deny than many modern Western citizens are.

Not surprisingly, then, Machen continues, having rejected the presuppositions of the Christian message, liberal theology rejects the message itself, taking a disdainful attitude towards Scripture. He details how liberal attempts to legitimize only certain words of Jesus constitute a grossly unscientific revision of history and insists that though the Christian message is certainly confirmed to an extent by Christian experience, Christianity that espouses only “Christian experience” while rejecting the message of Scripture is not Christianity at all. In Machen’s words,

…liberalism is totally different from Christianity, for the foundation is different. Christianity is founded upon the Bible. It bases upon the Bible both its thinking and its life. Liberalism on the other hand is founded upon the shifting emotions of sinful men. (p. 67)

In the next chapter of the book, Machen explicates the divide between liberals and Christians on the person and work of Christ: fundamentally, Christians view Jesus as the object of godly faith, while liberals view Him as the example of true faith, the first Christian rather than the author of Christianity. With only a few Scripture references, however, Machen demonstrates that Jesus as presented in Scripture cannot simply be an example of faith because of the audacious claims He persistently makes about Himself in the gospels; in light of these, He was obviously either an entirely different kind of human or else a perfect example of nothing more than lunacy. By rejecting Jesus’ deity, liberal theologians leave the historical Jesus obscure and unknowable, leading them to rely once more on their own subjective opinions and preferences rather than historical reality.

Next, Machen describes some of the numerous ways in which liberalism minimizes the salvific work of Christ–while Christianity considers salvation to be the result of a work of God, liberalism, on the rare occasions that it acknowledges a need for salvation at all, attributes it to humanity. Frequently, theological liberals interpret Jesus’ death as an example of Christian self-sacrifice, but once again, Machen shows, this conflicts with the historic teachings of all who have called themselves Christians. More soberingly still, he notes, by humanizing Christ’s work, liberals also trivialize it. As before, he demonstrates that the liberal notion of an unconditionally forgiving God is both biblically unsound and indefensible in light of the repercussions of humans’ sins against one another (let alone against God). Throughout, Machen affirms that Christianity is far more than the bare facts of the life of a Jewish carpenter who lived two thousand years ago; undeniably, the joy and intimacy of a relationship with the Creator that liberal Christians so crave is a fundamental component of biblical Christianity–but this joy is always and only found upon comprehending the awesomely humbling truth of the facts of the Gospel.

In his final chapter, Machen concludes with reflections on how conservative Christians ought to respond to the presence of liberalism in the church. Once more, he shows that true Christianity and liberal Christianity conflict starkly even on the purpose of the Church they claim to share because liberals believe the church as a human institution can and will transform society. Gospel-centered Christians, however, recognize that only the saving work of Christ can change the world, and this revolution progresses one redeemed soul at a time. With his final words, Machen encourages struggling conservative Christians to rest in this fact, knowing that God has always sustained His Bride in the past and that He promises to continue to do so in the future.

From the first page of Christianity & Liberalism to the last, Machen’s point could not be clearer: unlike the Christianity of the Bible, liberalism is based on a denial of reality (and while Machen specifically addresses theological liberalism, to an extent the same can certainly be said of political and social liberalism as well). At the outset, this fact may seem encouraging for conservatives–as Machen points out, philosophies founded on fantasy must inevitably fail. Unfortunately, though, these dangerous ideas do have dangerous consequences for the lives of the Image-bearing but fallen individuals who espouse them, individuals who “[love] the darkness rather than the light” (John 3:19 ESV), individuals no true Christian can be content to consign to their own folly. The Christian’s task here certainly is not easy–the world, flesh, and devil conspire against him–but mercifully, in our battle against attractive lies we have with us the ultimate Beauty, the one who is literally “the truth” (John 14:6), the crucified and resurrected Son of God of biblical doctrine who alone can deliver the freedom liberalism only nominally claims to provide.

Reason and Human Nature (Part III): Indicative and Imperative Truth

What is the relationship between the statements “Thou shalt not kill” and “George Washington was the first president of the United States of America”? Both purport to state facts–but are they equally true facts? Many have certainly thought so, from the writers of the Bible (“Your righteousness is righteous forever, and your law is true” -Psalm 119:142, ESV) to prominent atheist Michael Ruse (“The man who says that it is morally acceptable to rape little children is just as mistaken as the man who says two plus two equals five”). Nevertheless, affirming such self-evident truths is unthinkable for today’s postmodernists–clearly, they insist, moral obligations are simply subjective preferences foisted upon individuals by the power-hungry leaders of their cultures. On this view, the humans with the greatest power become little gods, intimidating but minuscule compared to the ultimate God, power itself. Frighteningly, though, this bizarre compartmentalization has the potential to destroy not only moral truth but indicative truth as well because it abandons the theological foundation of truth.

As discussed in the previous post on this blog, truth relies for its validity upon the infinite, eternal mind of God, which, due to God’s immutable nature, necessarily ensures the eternality of truth. How, then, is imperative truth related to the nature of God? To begin, even if morality is simply a human construction, the propositions describing these arbitrary moral prescriptions must be eternally true in the same way that any fact about the physical world is true because of God’s eternal knowledge of all facts.

Yet to assert only this much and proceed no further is blatantly unbiblical. Does not Scripture resound with the truths of God’s goodness and mercy (Psalm 145:9), His uprightness and justice (Deuteronomy 32:4), and, above all, His faithfulness and love (Psalm 25:10)? Yes, the truths of our physical world are important to and dependent on God, but they touch only the domain of His mind. The truths we call morality, however, touch His very heart and reflect the eternal, incomprehensible bonds of the truest love that unite the Holy Trinity.

If, then, we humans render subjective the moral truths that govern not only our universe but the very Godhead, are we not in grave danger of abandoning the reliability of the factual truth imparted to us from God’s knowledge? Rather than being less sure or significant than so-called “factual” truths, imperative truths are equally or perhaps more unshakable because of their indivisibility from God’s character. More sadly still, by thus relinquishing moral truth, we lose not only our certainty of reality–we also abandon our profound God-given key to the heart of our Maker.

Reason and Human Nature (Part II): Subjective and Objective Truth

What is truth? Even if the Enlightenment assertion that reason leads to truth is correct, this theory is useless unless the premises reason employs can themselves be verified. But how can any human being, driven fundamentally by a rationality that seeks to ensure survival and to justify pleasure rather than to discover truth, hope to lay hold of such absolute knowledge of reality? Our senses clearly are not sufficient; individual sensory experience frequently varies from one person to the next, resulting in innumerable “facts” that may be true, but only subjectively so, reliant for their truthfulness on the imperfect minds that hold them. Reason is not entirely neutralized by subjective premises—but as its postulates are, so also will be its conclusions. Even proper syllogisms necessarily produce falsehood if they employ false presuppositions. Thus, reason must rely upon the superrational to procure objective truth.

In the most literal sense, a subjective truth is any truth that relies upon something else—a corresponding state of affairs in the physical or metaphysical world—for its veracity. In a sense, then, this universe does not provide a basis for any ultimately objectively true facts. While the universe itself and every planet, human, and quark within it (presumably) exists objectively at a given moment, any of these things, from the smallest particle to the entire universe, quite conceivably might not have existed and might not exist the next moment. Only an entity that cannot not exist can provide the basis for an unshakable truth—or, more succinctly, as St. Augustine long ago formulated, truth cannot exist objectively unless an unchanging God exists to uphold it.

Yet here is an awesome theological paradox. If God is who He claims to be—“I am who I am” (Exodus 3:14), “the same yesterday and today and forever” (Hebrews 13:8 ESV), the great Creator who does not change His mind (Numbers 23:19) in Whose necessary existence our contingent existence is wrapped up—if this is true, then our existence, thanks to His unchangeable character, is as unshakable as His. Being good Himself, God wonderfully saw fit to create the universe He deemed similarly “very good” (Genesis 1:31) in such a way that its existence would be inextricable from His and, incredibly, His existence inextricable from ours.

What, then, is truth? Truth is the set of facts about a world that is, in an astounding sense, as objectively real as the God who created it, and, of course, the facts (barely conceivable to the human mind though they be) of that God Himself. Furthermore, these facts are gathered by the senses and reason with which God equipped humanity to leave His children “without excuse” (Romans 1:20) for denying their knowledge of His existence. Ultimately, reason proves to be integral to the eyes of faith that are more trustworthy than the eyes of flesh because, unlike our physical senses alone, reason can direct us to the great Truth that upholds our own existence.

Reason and Human Nature (Part I)

Lately, reason and the so-called “Enlightenment values” have been experiencing a resurgence in popularity, finding themselves once again credited with the ability to transform humanity utterly for the better. Before we jump to such fantastic conclusions, however, we must examine reason to determine whether it truly is all that it is purported to be.

More importantly, we must determine whether humans are actually capable of reasoning–after all, a significant body of evidence suggests that we are not rational creatures. If we were, the lure of addictive substances, promiscuous sexuality, and even advertising would surely be lost on us–yet countless people are enticed by such temptations, despite the better angels of their rational nature, to their own destruction. Emotion, it seems, and not reason guides human decision making.

Nevertheless, are such emotionally influenced choices unreasonable? Under the literal meaning of the word, they obviously are not, because humans universally provide excuses–reasons–to justify even their most “irrational” behavior. The drug addict defends the continuation of his habit with reasons that are perfectly convincing to him, perhaps more convincing than the logic used by an attorney to persuade a jury. Both cases include an appeal to facts; the fact that heroin induces a pleasing psychological state is as true for the addict as the time, place, and method of a murder are to those in the courtroom. Despite the conventional wisdom of the Enlightenment movement, the argument based on relatively subjective facts appealed to by the former arguer and the argument grounded in relatively objective facts presented by the latter are logically parallel cases: both use indicative facts and imperative hypotheses to reach imperative conclusions. Thus, all arguments are rational, even arguments about the validity of rationality; logic is the universal language of human thought whose relationship to reality beyond our minds we will never learn precisely because we cannot process that reality without it. As the tool used to discover certain components of reality (either indicative or imperative) from other components of reality, reason renders all claims rationally equal in the absence of a hierarchy of truth, a scale that is not self-evidently verifiable or absolute.

While we may be unable to conceptualize a world without reason, this essential component of human nature–the human lust for an explanatory premise for every other premise–is still deeply intriguing and mysterious. Why can’t humans be content with brute facts? Why are we so reluctant to accept truth as incorrigible? Why do we insist on linking facts to other facts?

Moreover, these traits are particularly odd considering that both materialistic and theistic worldviews claim that the ultimate existing entity is without cause, having no reason for itself but itself. In either case, God or the universe forms the single break from the chain of reasons that otherwise rules our thinking. While we insist on finding a reason to explain every material and psychological phenomenon (and a reason for that reason, and a reason for that reason, and so on), we intuitively know that an infinite regress of causes is ultimately untenable.

What, then, is the reason for this paradox of human nature? Why does the species that views all through the lens of rationality and demands a reason for everything, even its own existence, inevitably resort to a necessary being? Is this an evolutionary fluke, the result of a deviant mental pathway yet to be discarded by the hand of chance? Is it a single part of the cosmic soul craving unity with the expanse?

Or is it the mark of the great Cause Himself, who created us with a predilection for reasons so that we might eventually confront The Reason?

“…Thou madest us for Thyself, O Lord, and our heart is restless, until it repose in Thee.” (St. Augustine of Hippo)

Stay tuned for coming discussions of subjective and objective, and indicative and imperative, ontological states and their role in reason.

Human Nature and Transhumanism

Is the “human condition” a condition–a factual state of affairs–or is it a condition–a physical ailment? Generally, this phrase is associated with the first definition, but the growing transhumanist movement espouses the second, as is explicated in Mark O’Connell’s recent book To Be a Machine, thoughtfully reviewed by Olga Rachello in the journal The New Atlantis. As Rachello’s review reveals, though, the central pillars of transhumanism are even more radical than this redefinition of words: transhumanism attempts to redefine human nature itself, proposing that the self is ultimately informational rather than material.

This departure from strict materialism is intriguing considering the overwhelmingly secular worldviews of contemporary transhumanists, but this fact does not necessarily indicate inaccuracy. Nevertheless, an alarming prospect overshadows the transhumanist utopia of immortality–if the transhumanists prove to be wrong, if human nature is actually inextricably intertwined with human embodiment, the potentially instantaneous annihilation of mankind as we know it will necessarily pass unnoticed. The robots into which we attempt to transform ourselves won’t know there’s something missing from their experience of humanity. Barring the existence of an afterlife from which we the true humans will observe the trajectory of human (or unhuman) history, human memory will be forgotten; the subjective turmoil of human emotion that transcends data will go the way of the water closet in a world of robots. If consciousness turns out to be something more than machines and information, its loss will never be grieved.

As represented by Rachello’s review, O’Connell’s book reflectively acknowledges this grave possibility, yet it never makes light of the transhumanist’s longing to transcend the feeble human frame and its inevitable demise. (This review served as an excellent whetstone for my philosophical appetite; if I have the chance to read the book myself this summer, I’ll be sure to share my thoughts more fully here.) Even though the transhumanist party has few members now, its presence, goals, and objections to the truths so many of us take for granted are momentous. Let’s hope that the respect O’Connell gives this issue in his book will become prevalent in the near future.

Politicized Sex & the Proles in 1984

In the modern democratic West, George Orwell’s political novels are widely regarded as all but divinely dictated prophecies warning of the horrors of totalitarianism, but as is so often the case with prophecies, Orwell’s work generally raises far more questions than it attempts to answer. In his magnum opus 1984, for example, Orwell initially introduces as major themes both the politicization of sex and the necessity of the proletariat to beneficial political upheaval—but by the end of the novel, both themes are long forgotten by most readers. Nevertheless, these two evidently insignificant themes are actually intimately related to one another and, like many of Orwell’s observations, highly relevant not only to 20th century victims of totalitarianism but to the democratic citizens of the 21st as well.

Orwell’s exploration of the political power of sex takes place primarily within the protagonist Winston Smith’s affair with the youthful and sensuous Julia, an act of daring disdain towards the Party’s staunch standards of chastity. As Julia explains, the Party’s regulation of sex is far more than an attempt to control members’ bodies: 

“When you make love you’re using up energy; and afterwards you feel happy and don’t give a damn for anything. They can’t bear you to feel like that. They want you to be bursting with energy all the time. All this marching up and down and cheering and waving flags is simply sex gone sour. If you’re happy inside yourself, why should you get excited about Big Brother and the Three-Year Plans and the Two Minutes Hate and all the rest of their bloody rot?” (133)

Consequently, the couple’s sexual interactions are both “a battle” and “a victory”; intercourse is “a blow struck against the Party,” “a political act,” and “the force that would tear the Party to pieces” (126). 

Meanwhile, Orwell develops a second theme through Winston’s thoughts and conversations, suggesting that only the impoverished Proles have the power to undermine the Party. Winston first encounters this theory only as an inexplicable intuition, but he eventually realizes that the Proles’ immunity to the Party is related to their uniquely human emotions. The Proles are “governed by private loyalties which they [do] not question. . . . They [are] not loyal to a party or a country or an idea, they [are] loyal to one another” (165). Understandably, in Orwell’s humanistic socialist opinion, this non-coerced familial love for one’s community promises to preclude the development of a society like the one pictured in 1984.

Consequently, the link between these two evidently disparate themes becomes obvious: because the Proles have largely maintained control over their own sexual (and ergo familial) lives, they have preserved their power over their own emotions, affections, and loyalties. As the ultimate expression of mutual affection and an essential component of the nuclear family unit, sex radically influences one’s attachments—and so, perspicaciously, the Party and the Anti-Sex League mandate their members’ affections and ultimately their political affiliations by legally regulating their sex lives. And this phenomenon is not only a literary invention of Orwell’s—while superficially dissimilar to Orwell’s politicized abstinence, historical movements ranging from misogynistic components of the modern alt-right to lesbian feminism to perhaps most explicitly the monogamy-smashing orgies of the Weather Underground and the 20th century sexual revolution have demonstrated and continue to demonstrate that sex truly can be “a political act.”

Indeed, though they seem to fade from the narrative in Part III of the novel, these secondary themes are actually extrapolations of 1984’s principal theme: “God is power” (264). By sexually directing Party members’ affections, the Party gains power over their hearts, the power to tear “human minds to pieces and [put] them together again in new shapes of [its] own choosing” (266), that ability most coveted by human pride. Consequently, the Proles’ seemingly simple sexual and emotional freedom is of great political significance.

Nevertheless, the plot of this dystopian novel does not suggest a utopian strategy to normalize this proletarian power of innocent emotion. The climax of the novel reveals that Winston’s prior assumption that “they can’t get inside you” (166)—that the Party ultimately cannot contort one’s deepest affections and emotions—is utterly false. Worse still, Winston learns that the very raw, unadulterated emotions (namely fear) for which he admired the Proles can actually easily overthrow and even reverse the fiercest love. In reality, Orwell suggests, sex is not only politically manipulable; it can also ultimately demolish personal autonomy. By all appearances, then, while the year 1984 has come and gone, not even the true love of mankind is sufficient to save humanity from the tyranny of the world pictured in Orwell’s novel.

All citations taken from:

Orwell, George. 1984. Signet Classics, 1949.

The Humanist’s God

Nearly a century and a half ago, Friedrich Nietzsche announced “Gott ist tot”—“God is dead,” slaughtered by enlightened mankind. Since Nietzsche’s time, the curious conglomerate of beliefs that is secular humanism has risen to the challenge of creating meaning for a godless mankind, doing so by pushing man himself into the role of moral magistrate and embodied goodness—or, in premodern phraseology, God. Nevertheless, the conflicting claims of this philosophical system in the end cause it not only to collapse on itself but also, by its own logic, ultimately to force humanity once more into the clutches of an unknown transcendent entity.

To begin, humanism’s peculiar collection of foundational premises reflects the poverty of its definitionally suspect attempt to be “a nontheistic religion” (Merriam-Webster). According to the New Oxford American Dictionary, “Humanist beliefs stress the potential value and goodness of human beings . . . and seek solely rational ways of solving human problems,” yet secular humanism openly and specifically claims to be naturalistic. The system is therefore brazenly contradictory: while claiming to be monistically naturalistic, humanists betray a great concern with the metaphysical matters of morality, social justice, and reason, revealing a conspicuous (if unacknowledged) dualistic aspect of their philosophy.

For this reason alone, secular humanism proves to be an illegitimate philosophical system. In order to devise a coherent worldview, the humanist must choose definitively between monism and dualism. If he truly wishes to affirm that the physical world is all that exists, he must abandon the premise that moral differentiation is possible—morality is nonmaterial, after all, so if he continues to acknowledge it he must jettison his naturalism. As long as humanists attempt to hold these two conflicting beliefs simultaneously, however, their worldview must on the basis of the law of noncontradiction be regarded as philosophically incoherent.

Unfortunately, humanism’s dissension with logic does not end here—in fact, despite humanism’s eager embrace of reason, it fails to reasonably justify the legitimacy of logic. While humanists promote reason as the omnipotent savior of the deified yet clearly flawed human race, naturalism once again stands in opposition to this tenet. If the physical is the entirety of reality, then reality (and hence reason) is necessarily determined by the laws of physics and chemistry. In this case, because man’s animal instincts are fundamentally comparable to his messianic reason, reason is rendered interchangeable with the baser aspects of man’s self from which he theoretically requires salvation—a conclusion that surely conflicts with the claims of humanism. Likewise, if all is matter and the matter that composes humans is comparable to the matter that composes the rest of the universe, the humanist’s choice to elevate man as ultimate being might be viewed as arbitrary or even selfish (if it were a free choice, which, obviously, it cannot be on this view). Finally and most importantly, in order to be the useful and rational instrument humanists consider it to be, logic must be free, a tool to be wielded in whichever direction necessary to encounter truth. Instead, naturalism enslaves humanism’s savior Reason to the inhuman and inhumane hands of nature and chance.

Indeed, contradiction and irrationality are not the extent of humanism’s hurdles—the system even ultimately fails in its attempt to liberate man from an exacting God. As mentioned above, among the foremost assumptions of humanism is the assertion that humans are inherently good or are at least capable of achieving moral goodness individually and collectively. Obviously, this proposition requires an objective standard of goodness, a term atheistic humanist philosophers tend to define much like their theistic counterparts, grounding the definition in the ability of virtue to promote human flourishing and happiness rather than in the declarations of a presiding God. However, this standard for goodness has an interesting consequence: if humans are inherently good, then, conversely, goodness is inherently human. When phrased this way, it becomes apparent that secular goodness relies for its definition upon humanness. In other words, man literally becomes God.

Yet this raises an interesting dilemma for the humanist: according to the materialistic dogma of secular humanism, man is an exclusively physical, mechanical being, the result of untold years of chance evolution. Therefore, the true divinity of atheism is not the wonder that is man but the endless string of accidental DNA replication errors that created him. While he claims to worship man, the humanist finds himself ultimately paying homage to the omnipotent but impersonal, all-forming but invisible hand of Chance. In so doing, he subjects himself once again to the dominion of an unknown deity that brutally yet unfeelingly wreaks pain and terror on that which it has created, escaping the frying pan of religious superstition to fall into the fire of prehistoric Earth worship.

As the rise of secular humanism demonstrates, the past century and a half have proven Nietzsche right—“we ourselves” have “become gods simply to appear worthy of [God’s murder].” Nevertheless, glorious Man cannot escape the reality that he did not create himself, so even humanism is forced to reckon with a superhuman deity. Having killed God in order to free man from God’s imposition, the humanists may in the end find their new master even less agreeable than their first.

Morality, Atheism, & Reason

In 2007, atheist writer Adam Lee of Patheos’ Daylight Atheism wrote a post responding to and attempting to discredit a column from the Washington Post’s Michael Gerson in which Gerson argues that morality is ultimately untenable in the absence of God. In his reply, Lee commits a number of the blunders common to traditional atheistic moral arguments, fallacies that have been widely rebutted and thus will not be addressed here. In one of the arguments near the end of his post, however, Lee does raise an interesting point. Speaking to Gerson, he writes:

You asked what reason an atheist can give to be moral, so allow me to offer an answer. You correctly pointed out that neither our instincts nor our self-interest can completely suffice, but there is another possibility you’ve overlooked. Call it what you will—empathy, compassion, conscience, lovingkindness—but the deepest and truest expression of that state is the one that wishes everyone else to share in it. A happiness that is predicated on the unhappiness of others—a mentality of “I win, you lose”—is a mean and petty form of happiness, one hardly worthy of the name at all. On the contrary, the highest, purest and most lasting form of happiness is the one which we can only bring about in ourselves by cultivating it in others. The recognition of this truth gives us a fulcrum upon which we can build a consistent, objective theory of human morality. Acts that contribute to the sum total of human happiness in this way are right, while those that have the opposite effect are wrong. A wealth of moral guidelines can be derived from this basic, rational principle.

The utilitarian argument here presented for atheistic morality is a common (and insufficient) one, but Lee’s wording uniquely highlights one of its major flaws. Because he labels the sociological phenomenon he addresses as a “truth,” his argument begs a pivotal question: how does he know that “happiness that is predicated on the unhappiness of others . . . is a mean and petty form of happiness”? Presumably, he makes this claim because his personal experience validates it, but thanks to the unavoidable principle of restricted access in human thought, neither he nor anyone else can definitively prove that this is the case for human beings in general. To assert such a claim with confidence, one must appeal to the knowledge of some omniscient psychologist—truly, to some revelation.

Indeed, the central crisis of naturalism is not a spiritual or moral crisis; it is an epistemological one. Undeniably, the existence of God is a difficult fact to prove incontrovertibly, but by even approaching the topic in a rational manner, the theist and the atheist alike make a leap of faith perhaps greater than the theist’s belief in an invisible God: they assume that the inscrutable mind, and especially the chemical complex that is the human brain, can be trusted to follow a trail of rational arguments to truth in a metaphysical quandary. Even the theist must speculate slightly to conclude that the rational mind can be trusted based solely on his belief in the existence of a rational God, but neither of these basic beliefs is remotely so flimsy as the atheist’s insistence that the trustworthy rational brain evolved through sheer chance. By his own logical dogma, the atheist ought to distrust logic because of the extreme improbability of its accuracy—which, ironically, he cannot do without justifying his suspicion with logic.

In the end, then, Lee’s mediocre argument for morality without God is potentially tenable only if God—or, if he finds God too extreme a term, some immaterial, omnipotent, and omniscient being that upholds reason—does exist. Otherwise, the reason on which he bases his moral framework (and presumably his atheism as well) is highly unreasonable.

Free Speech and Absolute Truth

Several weeks ago, King’s College London psychologist Adam Perkins was prevented from delivering an address on the importance of scientific free speech at the College due to the purportedly “high-risk” nature of his content, resulting in yet another ironic case of intolerance in the name of tolerance. Fortunately for Dr. Perkins (and society in general), the thought police have not yet succeeded in banishing dissenting opinions from all corners of civilization, and an abbreviated version of the speech was published by Quillette, an online “platform for free thought.” Thanks to the editors’ generosity, however, readers of Perkins’ piece may discover for themselves that the ideas it promotes are indeed dangerous—dangerous, in fact, to the very scientific free speech they claim to commend. Ironically, Dr. Perkins’ attempted defense of scientific free speech lays out more reasons to doubt the merits of his cause than to support them.

After opening with a broad recognition of the general importance of free speech to a healthy society, Dr. Perkins dispels any doubt of the necessity and fragility of free speech in science by recounting the story of Robin Warren and Barry Marshall. In their 1984 experiments, these Australian doctors identified bacteria as the cause of gastritis, overturning the then-popular theory that such conditions were stress-related. Unfortunately, obstinate opponents suppressed the pair’s findings for nearly two decades before Warren and Marshall finally received the 2005 Nobel Prize in Physiology or Medicine. To summarize one of the implications of this story, Perkins quotes German physicist Max Planck: “A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.”

While this quote obviously somewhat applies to Warren and Marshall’s case, its central sentiment is curiously unscientific. Planck’s logic is indisputable with the exception of a single word: individual theories incontrovertibly gain favor with the passing of a hostile generation and the inculcation of a new one, but this fact does not necessitate that the ideologies that become popular in this way will be truths. 

Only a few paragraphs later, though, Perkins’ motive for featuring this contradictory quote is elucidated by his reference to the philosophies of several other famous scientists. He writes:

The existence of scientific debate is also crucial because as the Nobel Prize-winning physicist Richard Feynman remarked in 1963: “There is no authority who decides what is a good idea.” The absence of an authority who decides what is a good idea is a key point because it illustrates that science is a messy business and there is no absolute truth. This was articulated in Tom Schofield’s posthumously published essay in which he wrote:

“[S]cience is not about finding the truth at all, but about finding better ways of being wrong. The best scientific theory is not the one that reveals the truth— that is impossible. It is the one that explains what we already know about the world in the simplest way possible, and that makes useful predictions about the future.”

. . . When one side of a scientific debate is allowed to silence the other side, this is an impediment to scientific progress because it prevents bad theories being replaced by better theories. 

Despite the casualness with which Perkins makes these assertions, these propositions are starkly revolutionary to the scientific realm. Presumably, both Warren and Marshall (not to mention Copernicus, whose portrait graces the article’s header) believed that their experiments were worthy of consideration precisely because they manifested absolute truths about the human body. 

Moreover, these claims contain both obvious and insidious contradictions that threaten both comprehension and the legitimacy of scientific free speech. Beyond the elementary (yet valid) critique that Perkins seems to be claiming that the proposition that “there is no absolute truth” about science is an absolute truth about science, his concern that “bad theories” will cease to be “replaced by better theories” should free speech cease ought to be rendered irrelevant by the purported fact that “[t]here is no authority who decides what is a good idea.” Additionally, this denial of absolute truth contradicts Schofield’s (albeit unhelpfully abstruse) statement that science is “about finding better ways of being wrong”: a theory cannot possibly move on a value scale within wrongness without approaching or straying from rightness—or, in other words, objective truth. Also, in order for science to have the predictive power Schofield mentions, scientific fact and the scientific principle of consistency must be absolute. These premises are vital to meaningful scientific enquiry, yet Perkins annihilates them all with the stroke of a pen before quickly turning to decry the horrors of a society in which scientific speech is curtailed, apparently ignoring the fact that if what he says is true, science as it has been practiced throughout human history is based on a falsehood.

Furthermore, the premise that there is no absolute scientific truth contradicts the examples Perkins references throughout the article that seemingly provide instances of good theories replacing bad theories or vice versa. From Warren and Marshall’s gastritis discovery triumphing over the stress hypothesis to socialist inheritance theory suppressing Mendelian genetics and the pride of NASA scientists silencing the truth about Space Shuttle Challenger’s O-rings, he mentions numerous historical cases in which theories that have apparently corresponded to objective reality and those that have not have clashed. Yet if there is truly “no authority who decides what is a good idea,” all of these conflicting hypotheses would be equal—would be, that is, until, as in the last two cases, one theory was embraced by the most powerful class for its profitability relative to that party’s agenda. Indeed, as Planck’s quote reveals, those with the greatest power to promulgate their positions will inevitably be regarded as having attained truth regardless of the reality of the situation—and unfortunately, Perkins’ apparently naturalistic metanarrative provides no reason that the benefit of a select community should not be sought over that of mankind in general.

Finally, Perkins’ arguments are not simply logically contradictory; unfortunately, they pose significant threats, not support, to scientific free speech and the flourishing, egalitarian society so often dreamed of by humanistic scientists. First, having the freedom to debate is only objectively valuable if absolute truth exists—in that case, bad ideas can and should be criticized because they depart from accurate assessment of reality. Otherwise, freedom of discourse regarding science (or anything else) is as useful as having the freedom to share one’s color preferences—an agreeable liberty, but one unlikely to benefit society in any real way.

Although Perkins fails to note the fact, in a world of subjective truth, free speech can only ever be subjectively valuable. As he himself states, “. . . there is . . . cause for optimism, as long as we stand up for the principle that no one has the right to police our opinions.” This may be true, but this basis for optimism remains only so long as universal consensus “for the principle that no one has the right to police our opinions” exists. If there is no ultimate “authority who decides what is a good idea,” free speech will cease to be valuable the moment those with the power to mandate virtues begin to dislike it.

If truth (on this blog, assumed to be objective truth) be told, the scientific relativism suggested by this article promotes havoc and (if the subjectivist will permit the term) injustice in the necessarily and undeniably “messy business” that is science. Confirmation bias is a temptation to the point of being nearly unavoidable for narrative-driven human scientists, and for this reason the formal scientific discourse and disagreement the author mentions is crucial. If no absolute scientific truth exists, however, the scientific freedom of speech Perkins claims to promote cannot possibly be useful, and it will certainly only exist so long as every player in the scientific game agrees to it. With no referee, scientific free speech is as objectively necessary as scientific honesty.

All in all, the secular humanism from which Perkins presumably speaks is frighteningly arrogant. Espousing as its thesis the claim that mankind has the right and ability to impose meaning on an immense universe, this philosophy has the potential to lead its devotees in whichever direction they wish to take their society. Yet humans and especially the words they speak, words Perkins is here so anxious to defend, are minute and temporal, infinitesimal blips on the radar of the universe. If they—if we—are to have any meaning greater than the meaning that can be maintained by the fleeting 21st century attention span, it surely must be derived from something greater than a human consensus.

Artificial Intelligence: Redefining Humanity

In its recent April 2018 issue, Smithsonian featured an article on the future of artificial intelligence by author Stephan Talty titled “Be(a)ware.” Talty’s background in fiction is evident throughout the piece (admittedly, the topic does lend itself to imagination), but one of its most alarming statements comes in the form of what appears to be a simple empirical fact. In one of the introductory paragraphs of the piece, Talty writes:

In just the last few years, “machine learning” has come to seem like the new path forward. Algorithms, freed from human programmers, are training themselves on massive data sets and producing results that have shocked even the optimists in the field. Earlier this year, two AIs—one created by the Chinese company Alibaba and the other by Microsoft—beat a team of two-legged competitors in a Stanford reading-comprehension test. The algorithms “read” a series of Wikipedia entries on things like the rise of Genghis Khan and the Apollo space program and then answered a series of questions about them more accurately than people did. 

Literary enthusiast that I am, this last statement seized my notice by the throat. Inherent in this claim, obviously, is the assumption that an objective scale for reading comprehension exists, that the absorption of the content of a written work into one’s worldview can be objectively, comparatively measured. Yet the recognition that a work of art can, will, and should be interpreted differently by each observer is as old as art itself, dramatically predating the absurdity of deconstructionism. In this case, however, the machines were presumably able to answer questions on the material with such accuracy because they had been programmed to process the texts through the researchers’ personal evaluation framework. By all appearances, the researchers of whom Mr. Talty speaks believe that they have established an objective grading scale for subjectivity—and they have bottle-fed it to quickly maturing artificial minds.

Obviously, though, the tally of Genghis Khan’s wives is not open for debate in quite the same way as the intended message of a Shakespearean sonnet; none but a grotesquely committed conspiracy theorist would argue that algorithm performance evaluation scales are equivalent to censorship of literary criticism. Nevertheless, Talty’s misleading word choice demonstrates that even factual comprehension may be less straightforward than it may appear. More importantly, though, this article points to one of the greatest potential dangers of artificial intelligence, an impending reality even more alarming than any Terminator-esque subjugation of mankind. 

The bots’ superior performance in this study comes down to a score of 82.63 against the human champions’ 82.304, numbers that represent each competitor’s percentage of exactly matched answers to content-related questions. Considering this fact, Microsoft’s computer’s advantage for this task is obvious: while the human brain is certainly a wonder, it simply is not designed to remember hundreds of precise phrases exactly. And for as long as recorded history allows us to judge, no human has considered comprehension to be a simple matter of memorization (perhaps exclusively for this reason). Maybe because of the distribution of our species’ gifts, humans have always considered understanding to be fundamentally self-referential. This fact may be simply the inevitable outcome of human arrogance or pragmatism—but even if that is the case, can we afford to risk allowing intelligence researchers, algorithm engineers, and (God help us) Wikipedia editors to just as arbitrarily redefine the standard?
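For the technically curious, the “exactly matched answers” scoring described above appears to be the exact-match (EM) metric used by SQuAD-style reading-comprehension benchmarks. Below is a minimal sketch of how such a score might be computed; the normalization steps (lowercasing, stripping punctuation and articles) are my assumption about the benchmark’s details, not something stated in Talty’s article, and the example answers are hypothetical.

```python
import re
import string

def normalize(answer: str) -> str:
    """Lowercase, drop punctuation and English articles, and collapse
    whitespace so trivial formatting differences don't count as misses."""
    answer = answer.lower()
    answer = "".join(ch for ch in answer if ch not in string.punctuation)
    answer = re.sub(r"\b(a|an|the)\b", " ", answer)
    return " ".join(answer.split())

def exact_match_score(predictions: list[str], references: list[str]) -> float:
    """Percentage of predicted answers that exactly match the reference
    answer after normalization (the style of number behind 82.63 vs. 82.304)."""
    matches = sum(normalize(p) == normalize(r)
                  for p, r in zip(predictions, references))
    return 100.0 * matches / len(references)

# Hypothetical answers: two of the three match exactly, one does not.
predictions = ["Genghis Khan", "the Apollo program", "1969"]
references = ["Genghis Khan", "Apollo 11", "1969"]
print(f"EM: {exact_match_score(predictions, references):.2f}")  # EM: 66.67
```

Note how unforgiving such a metric is: an answer that correctly paraphrases the reference still scores zero, which is precisely why a machine built to reproduce exact spans of text holds a structural advantage over a human summarizer.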

Without question, the artificial intelligence revolution will redefine life for human beings, both those who participate in its development and all others, and, by all appearances, it will first insidiously do so by changing the standards by which we evaluate our work, our world, and even our humanity. Some of those changes may be changes for the better, but trusting that they can be optimally made by a select, alarmingly homogeneous community quarantined in Northern California seems to me to be naive. If humanity is to change the standards by which it exists, may it be humanity that changes them, not just one segment of it.

Not insignificantly, though, if we are to change the definitions of words as relatively benign as “comprehension,” we must bear in mind that we will inevitably be redefining them with reference to the definitions of other words. In order to make any part of our existence “better,” we must know what “better” means. And for the most part, we do—for now. But that word has proven itself fickle throughout human history. What child, adult, or civilization hasn’t insisted, “X will make me happy”—X will make things “better”—only to disregard X for Y once X is obtained? In 1938, the Nazis were sure Czechoslovakia would satisfy their appetite; in 1940, it was France. On Palm Sunday, “better” was Jesus enthroned; five days later, “better” was Jesus crucified.

Undeniably, information technology has brought humanity to the brink of a potentially magnificent revolution—but we must know what we wish to magnify. We must know what we mean by “progress”; what we mean by “good.” And if we restrict our search to the world of algorithms and evolution, we are sure to be disappointed.