For some, artificial intelligence represents nothing more than one tool among many aimed at increasing productivity and maximizing economic output. For others, though, AI looks like more of a destination: a couple of words pointing to a tectonic shift in global society capable of ripping the ground out from under humanity’s feet. Which camp do you think Henry Kissinger belongs in?
Yes, the same Henry Kissinger who managed to whisper in presidents’ ears long enough to fundamentally alter the course of events in the 20th century has some thoughts on what advances in AI could mean for the next hundred years. The Cold War veteran began prominently expressing his interest in, and concern over, AI in a 2018 essay in The Atlantic titled “How the Enlightenment Ends.” Since then, the 98-year-old puppetmaster turned AI prophet has worked to refine his ideas into a book, for which he enlisted the help of former Google CEO Eric Schmidt.
Schmidt, for his part, isn’t a stranger to working with governments. After leaving Google, Schmidt made regular appearances in Barack Obama’s White House, where he definitely did not encourage the president to look favourably on the tech industry. In 2019, under Donald Trump, Schmidt was formally tasked by the US government with co-heading the National Security Commission on AI, an organisation whose goal is to produce lengthy reports for the President and Congress detailing methods and strategies for advancing AI in national defence. Schmidt’s first report called on the U.S. to push back against calls for a global ban on AI weapons and to encourage a tighter connection between the military and private industry to stave off potential AI threats from China and Russia.
With all this in mind, it only makes sense that Kissinger and Schmidt’s recently released book The Age of AI: And Our Human Future goes hard on the heart-thumping, world-conquering American exceptionalism. Though excerpts around the military, power, and China are abundant, the book also weighs in on the supposed ways AI may alter the very concept of “humanity.” Trippy stuff.
Here are some of the biggest takeaways from The Age of AI.
Countries and Companies Have No Idea What Each Other Are Working On
One of the core tenets running throughout The Age of AI is also, undoubtedly, one of the least controversial. With artificial intelligence applications progressing at breakneck speed, both in the U.S. and other tech hubs like China and India, government bodies, thought leaders, and tech giants have all so far failed to establish a common vocabulary or a shared vision for what’s to come.
As with most issues discussed in The Age of AI, the stakes are exponentially higher when the potential military uses for AI enter the picture. Here, more often than not, countries are talking past each other and operating with little knowledge of what the other is doing. This lack of common understanding, Kissinger and Co. wager, is like a forest of bone-dry kindling waiting for an errant spark.
“Major countries should not wait for a crisis to initiate a dialogue about the implications — strategic, doctrinal, and moral — of these [AI’s] evolutions,” the authors write. Instead, Kissinger and Schmidt say they’d like to see an environment where major powers, both government and business, “pursue their competition within a framework of verifiable limits.”
“Negotiation should not only focus on moderating an arms race but also making sure that both sides know, in general terms, what the other is doing.” In a general sense, the institutions holding the AI equivalent of a nuclear football have yet to even develop a shared vocabulary to begin a dialogue.
The US and China Are Gearing Up for an AI Cold War
It’s only natural that a book co-authored by one of the men most instrumental in crafting the last Cold War would feature large segments outlining a new one. Old habits truly die hard.
Rather than going toe to toe with the Soviet Union over nuclear weapons, Kissinger sees the current millennium marked by a struggle between the U.S. and China over AI supremacy. Cue the spooky music.
Though the characters and tools have changed, the actual outline of predicted events feels awfully similar to the mid-20th century. Kissinger (I’m assuming it’s his voice speaking through these particular pages) specifically invokes the realist foreign policy concept of a balance of power among nations on the international stage. Kissinger describes a situation where the U.S. and China are in hot competition over all things AI, a competition that includes both algorithms made to get your self-driving car speeding over to Wendy’s faster and systems able to autonomously operate a drone swarm capable of assassinating some undesirable in a country you’re not supposed to know about.
Human rights groups and activists of varied stripes, both in the U.S. and elsewhere, have spoken out against that latter option, arguing the introduction and wide deployment of AI weapons systems would lead to a more violent world itching for war. The Age of AI authors completely disagree.
“If the United States and its allies recoil before the implications of these capabilities and halt progress on them, the result would not be a more peaceful world,” they write. “Instead, it would be a less balanced world in which the development and use of the most formidable strategic capabilities takes place with less regard for the concepts of democratic accountability.”
On this point, the writers and the U.S. government are aligned. Just last month the U.S. rejected United Nations calls for a binding agreement regulating or banning the use of “killer robot” autonomous weapons systems.
AI Could Be More Dangerous Than Nuclear Weapons
The authors spend a great deal of time comparing AI’s potential destructive capabilities to those of nuclear weapons. It just so happens Kissinger had a front-row seat to witness, and play a significant role in, the strategic geopolitical decisions surrounding nuclear weapons. (Specifically, how to prevent a playground full of empire-hungry superpowers from blowing each other to smithereens.)
The authors provide a brief history of the two main strategies used to avoid catastrophe: deterrence and disarmament. Fans of Kissinger will know the former hit a bit harder than the latter. Though these two strategies can seem at odds, the authors say they share a similarity: both rely on the ability to calculate or predict what the other side is thinking. That logic disappears with AI, the authors warn.
“Most traditional military strategies and tactics have been based on the assumption of a human adversary whose conduct and decision-making calculus fit within a recognisable framework or have been defined by experience and conventional wisdom,” the authors write. “Yet an AI piloting an aircraft or scanning for targets follows its own logic, which may be inscrutable to an adversary and unsusceptible to traditional signals and feints — and which will, in most cases, proceed faster than the speed of human thought.”
Though “uncertainty” is part and parcel of warfare, the authors warn AI introduces a new dimension. What if countries aren’t even aware of their own AI capabilities? “Because AIs are dynamic and emergent, even those powers creating or wielding an AI-designed or AI-operated weapon may not know exactly how powerful it is or exactly what it will do in a given situation,” the authors say.
Unchecked AI Could Make a Misinformation Nightmare Even Worse
If you ask anyone right now who “controls” AI, your guess is about as good as anyone else’s. Does the U.S. government control AI? Does Google? Does Facebook? What about Elon Musk? The point the authors try to make throughout the book is that, as of now, there’s no clear or established hierarchy or cooperative framework ensuring that more advanced AI capabilities operate under some sort of unified vision. That’s a problem that needs to be fixed, they argue.
“We cannot leave its [AI] development or application to any one constituency, be it researchers, companies, governments, or civil society organisations,” the book reads.
This lack of cooperation, they argue, could lead to some dicey situations. Though the authors steer clear of going full Skynet Terminator takeover, they do outline a range of potential society-level “oh shit” moments that they argue could happen if all parties involved in AI aren’t on the same page. The most convincing of these arguments, to this writer at least, was the claim that more powerful algorithms could lead to a disinformation nightmare, where juiced-up news and other salacious content spreads so rapidly that even next-door neighbours can’t agree on what’s true. That’s the most believable because, depending on who you ask, it’s already happening.
AI Could Alter Human Identity
In a surprising twist, The Age of AI, a book co-written by one of the godfathers of modern imperialistic power, isn’t even at its strangest when speaking about war. The book actually takes a much bigger swing (or, maybe for some, a larger leap) when talking about how the as-yet-undefined “AI” will alter human interaction. The authors argue AI will continue to insert itself deeper and deeper into everyday human lives and that it will increasingly make decisions based on complex data that mortals simply can’t fathom. (Among other things, the authors use the example of DeepMind’s AlphaGo, which managed to beat human champions at the notoriously complex board game Go using a move never before conceived by humans.)
This backdrop, the authors argue, will create a world where only a handful of highly trained elite engineers have any real understanding of how AI works, while AI increasingly calls the shots and dictates life for the wandering masses.
“Some people, particularly those who understand AI, may find this world intelligible,” the book reads. “Others, greater in number, may not understand why AI does what it does, diminishing their sense of autonomy and their ability to ascribe meaning to the world.”
Put more dramatically, the authors argue the outcome of AI “will be an alteration of human identity and the human experience at levels not experienced since the dawn of the modern age.” In other words, yes, we are entering the cool zone.
Thinking, As We Know It, May Cease to Exist
Maybe the strangest of all of Kissinger and Schmidt’s AI predictions really has little to do with AI at all. Instead, it oddly has to do with the Enlightenment idea of “reason.” Piggybacking off the idea that AI will fundamentally alter human reality, the authors go a step further and argue that AI, with all its transcendent, unearthly insight, may actually bring about the death of the basic mode of thinking that’s defined human civilisation for centuries. The logic here is that AI will not only find and create new things (think drug discovery) but will actually detect “aspects of reality humans have not detected.” Increasingly, AI models may be based not on “theoretical understanding” but instead on “conclusions based on experimental results.”
“In an era in which reality can be predicted, approximated, and simulated by an AI that can assess what is relevant to our lives, predict what will come next, and decide what to do, the role of human reason will change. With it, our sense of our individual and societal purposes will change too.”
At the same time, the flood of information hurtling its way toward every human being appears destined to bloat even further, as thicker and thicker data soups are needed to feed hungry algorithms. That data overload, the authors write, may “increase the cost, and thus decrease the frequency, of contemplation.” Not only will AI end up making more decisions for us, but we — the lowly meat bags, drunk off our data debauchery — may end up dumber than ever.