The Age of AI and Our Human Future
By Henry A. Kissinger, Eric Schmidt & Daniel Huttenlocher (Little, Brown, 2021)
I recently found a copy of The Age of AI and Our Human Future at our local library bookstore for $3.00 and bought it mostly because of the big-name authors, which at the same time made me somewhat skeptical. However, it turned out to be far more than a celebrity attempt to cash in on the current fad for Artificial Intelligence and everything else AI-related, from the Singularity to Transhumanism.
Instead, The Age of AI and Our Human Future proved to be a deeply worrisome document, one that illustrates significant failures in Establishment thinking about the philosophical implications of the present stage of the Information Age, which began after the Second World War.
The senior authors, as well as their younger collaborators listed in the acknowledgments—Meredith Potter, Schuyler Schouten, and Ben Daus—appear understandably concerned with the dangers surrounding widespread implementation of Artificial Intelligence. Especially troubling is their discussion of nuclear war fought by robot drones controlled by AI networks outside of human control. No doubt the authors know what they are talking about when it comes to the US Defense Department’s reliance upon computers, since Kissinger was National Security Adviser and Secretary of State, Schmidt formerly headed Google, and Huttenlocher is Dean of MIT’s Schwarzman College of Computing.
Although the book provides a bit of a discussion of the philosophy of consciousness in a chapter entitled “How We Got Here,” it seemed to me to be at the level of course notes from an undergraduate class in the history of philosophy. Skipping quickly from Plato and Aristotle to Berkeley, Kant, and Heisenberg, it doesn’t really answer the questions it raises. How about some discussion of the importance of the loom or Lord Byron’s daughter to liven things up?
The next chapter, “From Turing to Today—and Beyond,” discusses the limitations of Alan Turing’s theoretical test for determining whether a machine can behave indistinguishably from a human, and argues that Wittgenstein’s linguistic philosophy was an inspiration for machine learning and artificial neural networks. Maybe, maybe not.
The authors consider examples such as AlphaZero playing chess, AlphaDogfight piloting UAVs, AI techniques developed at MIT leading to the discovery of the antibiotic Halicin (named after HAL, the computer star of Stanley Kubrick’s 2001: A Space Odyssey), and the creation of prose by GPT-3, which the authors claim is indistinguishable from human writing.
The authors predict that eventually AI will lead to “programs capable of dramatically exceeding human performance in specific areas such as advanced scientific fields.”
Count me skeptical.
The reason is that the authors strangely ignore that the relationship between man and machine has been a conflicted one going back to the invention of the plow, the axe, the loom, the bow-and-arrow, and fire itself. Not to mention the industrial factory and the Luddites.
Each technological development creates tensions, and yet at the end of each cycle of turmoil and conflict Frankenstein’s latest monster ends up under man’s control.
And I think the reason the authors don’t discuss this is that they appear to be linear thinkers, rather than cyclical ones. For them, history’s arrow flies in one direction only—that of technological determinism.
Well, I saw that movie before and I didn’t like the way it ended.
The authors appear unaware (but they can’t be, because Kissinger was involved at the time) that this same type of argument was made in the 1960s for computers in the basement of the RAND Corporation that would win the Vietnam War (spoiler alert: we lost); for neurosurgery and drugs that would cure mental illness and allow state hospitals to discharge their patients into the community (spoiler alert: the streets of urban centers have become snake pits of homelessness); for computer-based education such as SRA readers to teach English (spoiler alert: America faces a literacy crisis); and so on.
The 1960s offered, in my opinion, many excellent analyses of these problems from authors like Norbert Wiener, Marshall McLuhan, and Thomas Kuhn, not to mention my favorite thinker, Jane Jacobs, among others, which for some unknown reason Kissinger, Schmidt, and Huttenlocher don’t discuss. My guess is that they don’t like what those writers had to say—because nobody in the positions they occupy could be that ignorant.
As it turned out in the 1960s computer revolution, instead of society turning to machines for guidance, much of the world’s population looked not to IBM 360 mainframes but to rock stars, Eastern gurus, Woodstock and Timothy Leary as well as born-again Christianity. Seemingly paradoxically, as the power of mechanical reasoning increased so did the power of human irrationality and the search for non-mechanical brides (with apologies to Prof. McLuhan).
The Age of Aquarius was the Information Age and the Space Age. It all happened at the same time. The revolt against the machine is built into human nature, and eventually a new equilibrium is found as a result of the conflict between reason and emotion—witness the crowds of IT executives who flock to Burning Man each year to seek inspiration.
Which is precisely the message of science fiction tales, ranging from the defeat of the HAL 9000 computer in 2001 to Luke Skywalker’s almost medieval trust in “the Force” in Star Wars.
Even concern about nuclear war caused by computers is not new, as a screening of the 1964 film Fail Safe will attest.
Engineers tend to be unaware of the powerful role emotions play in human consciousness, overestimating the importance of data, information, and rational calculation. Stalin’s Russia revealed the disaster inherent in the attempt to engineer human souls: Stalin embraced Fordism, seeing the assembly line as the AI technology of that era. Mass production, mass man, and mass movements—where did that lead?
Which is why paying more attention to the psychological and spiritual needs of human beings, including their needs for freedom and dignity, would be the most effective way to tame the Frankenstein-like tendencies of Artificial Intelligence outlined by the authors of this book.
Unfortunately, the book reveals serious blind spots in Establishment thinking when it comes to the importance of religion, culture and society in the formation of government policy in relation to AI.
The most important take-away from reading Kissinger, Schmidt, and Huttenlocher is that Artificial Intelligence is precisely that—artificial.
Because it lacks the most important kind of intelligence in people’s lives: emotional intelligence.
A welcome and very much needed reminder that techie prognostications are neither as utopian nor as dystopian as forecast. And a fine update of the Hegelian dialectic applied to spiritual counter-reactions to tech-reductionism. Few if any other writers have thought of putting these two trends together, because few are so well-attuned to both elements. But it makes perfect common sense. The artificial intelligentsia needs a training program in natural intelligence.