Henry Kissinger Warns That AI Will Fundamentally Alter Human Consciousness

Speaking in Washington, D.C. earlier today, former U.S. secretary of state Henry Kissinger said he’s convinced of AI’s potential to fundamentally alter human consciousness, including changes in our self-perception and in our strategic decision-making. He also slammed AI developers for not sufficiently thinking through the implications of their creations.

Kissinger, now 96, was speaking to an audience attending the “Strength Through Innovation” conference currently being held at the Liaison Washington Hotel in Washington, D.C. The conference is being run by the National Security Commission on Artificial Intelligence, which was set up by Congress to evaluate the future of AI in the U.S. as it pertains to national security.

Kissinger, who served under President Richard Nixon during the Vietnam War, is a controversial figure who many argue is an unconvicted war criminal. That he’s speaking at conferences and not spending his later years in a cold jail cell is understandably offensive to some observers.

Moderator Nadia Schadlow, who in 2018 served in the Trump administration as Assistant to the President and Deputy National Security Advisor for Strategy, asked Kissinger about his take on powerful, militarised artificial intelligence and how it might affect global security and strategic decision-making.

“I don’t look at it as a technical person,” said Kissinger. “I am concerned with the historical, philosophical, strategic aspect of it, and I’ve become convinced that AI and the surrounding disciplines are going to bring a change in human consciousness, like the Enlightenment,” he said, adding: “That’s why I’m here.” His invocation of the 18th-century European Enlightenment was a reference to the paradigmatic intellectual shift that occurred during this important historical period, in which science, rationalism, and humanism largely replaced religious and faith-based thinking. 

Though Kissinger didn’t elaborate on this point, he may have been referring to a kind of philosophical or existential shift in our thinking once AI reaches a sufficiently advanced level of sophistication, a development that will irrevocably alter the way we engage with ourselves and our machines, not necessarily for the better.

Kissinger said he’s not “arguing against AI” and that it’s something that might even “save us,” without elaborating on the details.

The former national security advisor said he recently spoke to college students about the perils of AI and that he told them, “You work on the applications, I work on the implications.” He said computer scientists aren’t doing enough to figure out what it will mean “if mankind is surrounded by automatic actions” that cannot be explained or fully understood by humans, a conundrum AI researchers refer to as the black box problem.

Artificial intelligence, he said, “is bound to change the nature of strategy and warfare,” but many stakeholders and decision-makers are still treating it as a “new technical departure.” They haven’t yet understood that AI “must bring a change in the philosophical perception of the world,” and that it will “fundamentally affect human perceptions.”

A primary concern articulated by Kissinger was how militarised AI might cause diplomacy to break down. The secret and ephemeral nature of AI means it’s not something state actors can simply “put on the table” as an obvious threat, unlike conventional or nuclear weapons, said Kissinger. In the strategic field, “we are moving into an area where you can imagine an extraordinary capability” and the “enemy may not know where the threat came from for a while.”

Indeed, this confusion could cause undue chaos on a battlefield, or a country could mistake the source of an attack. Even scarier, a 2018 report from the RAND Corporation warned that AI could eventually heighten the risk of nuclear war. This means we’ll also have to “rethink the element of arms control” and “rethink even how the concept of arms control” might apply to this future world, said Kissinger.

Kissinger said he’s “sort of obsessed” with the work being done by Google’s DeepMind, and the development of AlphaGo and AlphaZero in particular, the artificially intelligent systems capable of defeating the world’s best players at chess and Go. He was taken aback by how AlphaZero learned “a form of chess that no human being in all of history ever developed,” and how pre-existing chess-playing computers that played against it were “defenseless.” He said we need to know what this means in the larger scheme of things, and that we should study this concern, namely that we’re creating things we don’t really understand. “We’re not conscious of this yet as a society,” he said.

Kissinger is confident that AI algorithms will eventually become a part of the military’s decision-making process, but strategic planners will “have to test themselves in war games and even in actual situations to ensure the degree of reliability we can afford to these algorithms, while also having to think through the consequences.”

Kissinger said the situation may eventually be analogous to the onset of World War I, in which a series of logical steps led to a myriad of unanticipated and unwanted consequences.

“If you don’t see through the implications of the technologies… including your emotional capacities to handle unpredictable consequences, then you’re going to fail on the strategic side,” said Kissinger. It’s not clear, he said, how state actors will be able to conduct diplomacy when they can’t be sure what the other side is thinking, or if they’ll even be able to reassure the other side “even if you wanted to.” “This topic is very important to think about: as you develop weapons of great capacity…how do you talk about it, and how do you build restraint on their use?”

To which he added: “Your weapons in a way become your partner, and if they’re designed for a certain task, how can you modify them under certain conditions? These questions need to be answered.” AI will be the “philosophical challenge of the future,” said Kissinger, because we’ll be partnered with generally intelligent objects that have “never been conceived before, and the limitations are so vast.”

Scary words from a scary guy. The future looks to become a very precarious place.

