In its April 2018 issue, Smithsonian featured an article on the future of artificial intelligence by author Stephan Talty, titled “Be(a)ware.” Talty’s background in fiction is evident throughout the piece (admittedly, the topic does lend itself to imagination), but one of its most alarming statements comes in the form of what appears to be a simple empirical fact. In one of the article’s introductory paragraphs, Talty writes:

In just the last few years, “machine learning” has come to seem like the new path forward. Algorithms, freed from human programmers, are training themselves on massive data sets and producing results that have shocked even the optimists in the field. Earlier this year, two AIs—one created by the Chinese company Alibaba and the other by Microsoft—beat a team of two-legged competitors in a Stanford reading-comprehension test. The algorithms “read” a series of Wikipedia entries on things like the rise of Genghis Khan and the Apollo space program and then answered a series of questions about them more accurately than people did. 

Literary enthusiast that I am, I found this last statement seizing my attention by the throat. Inherent in the claim, obviously, is the assumption that an objective scale for reading comprehension exists, that the absorption of a written work’s content into one’s worldview can be objectively and comparatively measured. Yet the recognition that a work of art can, will, and should be interpreted differently by each observer is as old as art itself, dramatically predating the absurdity of deconstructionism. In this case, however, the machines were presumably able to answer questions on the material with such accuracy because they had been programmed to process the texts through the researchers’ personal evaluation framework. By all appearances, the researchers of whom Mr. Talty speaks believe that they have established an objective grading scale for subjectivity, and they have bottle-fed it to quickly maturing artificial minds.

Obviously, though, the tally of Genghis Khan’s wives is not open to debate in quite the same way as the intended message of a Shakespearean sonnet; none but a grotesquely committed conspiracy theorist would argue that evaluation scales for algorithm performance amount to censorship of literary criticism. Nevertheless, Talty’s misleading word choice demonstrates that even factual comprehension may be less straightforward than it appears. More importantly, the article points to one of the greatest potential dangers of artificial intelligence, an impending reality even more alarming than any Terminator-esque subjugation of mankind.

The superior performance of the bots in this study is betokened by their score of 82.63 to the human champions’ 82.304, numbers that represent each competitor’s percentage of exactly matched answers to content-related questions. Considering this, the advantage of Microsoft’s machine in such a task is obvious: while the human brain is certainly a wonder, it simply is not built to recall hundreds of precise phrases verbatim. And for as much of human history as we can access, no human has considered comprehension a simple matter of memorization (perhaps for that reason alone). Maybe because of how our species’ gifts are distributed, humans have always considered understanding to be fundamentally self-referential. That may simply be the inevitable outcome of human arrogance or pragmatism; but even if it is, can we afford to let intelligence researchers, algorithm engineers, and (God help us) Wikipedia editors redefine the standard just as arbitrarily?
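For the curious, here is a minimal Python sketch of how an exact-match score of this kind might be computed. The normalization steps (lowercasing, stripping punctuation and articles) and the sample answers are my own assumptions for illustration, not the benchmark’s published scoring code.

```python
import re
import string


def normalize(answer):
    """Lowercase, drop punctuation and articles, collapse whitespace
    (an assumed normalization, not the benchmark's official one)."""
    answer = answer.lower()
    answer = "".join(ch for ch in answer if ch not in string.punctuation)
    answer = re.sub(r"\b(a|an|the)\b", " ", answer)
    return " ".join(answer.split())


def exact_match_score(predictions, references):
    """Percentage of predicted answers that exactly match the
    reference answers after normalization."""
    matches = sum(normalize(p) == normalize(r)
                  for p, r in zip(predictions, references))
    return 100.0 * matches / len(references)


# Hypothetical answers: two of the three match after normalization.
predictions = ["Genghis Khan", "in 1969", "the Mongol Empire"]
references = ["Genghis Khan", "1969", "Mongol Empire"]
print(exact_match_score(predictions, references))  # ~66.67 (2 of 3 exact matches)
```

Under a metric like this, “comprehension” reduces to reproducing the expected string, which is precisely the redefinition the paragraph above worries about.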

Without question, the artificial intelligence revolution will redefine life for human beings, both for those who participate in its development and for everyone else, and by all appearances it will do so first, and insidiously, by changing the standards by which we evaluate our work, our world, and even our humanity. Some of those changes may be for the better, but trusting that they can be made optimally by a select, alarmingly homogeneous community quarantined in Northern California seems naive to me. If humanity is to change the standards by which it exists, may it be humanity that changes them, not just one segment of it.

Not insignificantly, though, if we are to change the definitions of words as relatively benign as “comprehension,” we must bear in mind that we will inevitably be redefining them with reference to the definitions of other words. In order to make any part of our existence “better,” we must know what “better” means. And for the most part, we do, for now. But that word has proven itself fickle throughout human history. What child, adult, or civilization hasn’t insisted that X will make it happy, that X will make things “better,” only to discard X for Y once X is obtained? In 1938, the Nazis were sure Czechoslovakia would satisfy their appetite; in 1940, it was France. On Palm Sunday, “better” was Jesus enthroned; five days later, “better” was Jesus crucified.

Undeniably, information technology has brought humanity to the brink of a potentially magnificent revolution—but we must know what we wish to magnify. We must know what we mean by “progress”; what we mean by “good.” And if we restrict our search to the world of algorithms and evolution, we are sure to be disappointed.
