Oculus Founder’s Vision for Military AI, Which He’s Helping Build, Is Kinda Yikes

Photo: David Fitzgerald, Getty Images

One of the founders of Oculus, now called Meta Quest, has been busy since being pushed out of the company in 2018.

In a frustratingly coy interview with Wired, Palmer Luckey skirted the question of whether technology from Anduril Industries, the military technology company he founded in 2017, is being used in Ukraine.

“There’s a few assumptions in that question, like we aren’t involved,” Luckey responded without saying whether that assumption was correct or not.

In a follow-up question where he was asked explicitly whether he and Anduril are involved in Ukraine, Luckey outright refused to confirm or deny this detail. He did, however, mention that Ukrainian President Volodymyr Zelenskyy “reached out” to Anduril in the interest of deterring conflict.

Anduril also struck a deal with the Trump administration in 2020 to install surveillance towers along the border between the U.S. and Mexico. Luckey was also a vocal supporter of, and donor to, former President Donald Trump.

Among Anduril’s technologies is Lattice, a counter-drone system that detects hostile drones using AI-powered sentry towers, then deploys its own drones to knock them out of the air. A demo video boasts that Lattice operates autonomously with “computer vision, machine learning, and real-time data.” It’s already under development for the U.S., the UK, and Australia.

Luckey mentions in the Wired interview that working on weapons is “less sunny” than the “fun” he had in developing video games. Of course, that could be in part due to his unnerving stance towards AI weaponry. While Luckey acknowledges the controversy behind machine decision-making, his answer makes it a bit difficult to sleep at night. The military startup founder says that he doesn’t want to “make it impossible for these systems to ever be used in certain ways.”

Here’s his approach to AI in layman’s terms: Luckey doesn’t want to make it impossible for a weapon to fire on a target when an actual human isn’t manning the communications. His rationale is that an enemy could learn that shutting down communications is the key to disabling an entire defence system. Instead, he wants to ensure that “the responsibility for [weapons firing] always lands on a person,” rather than on the pulling of the trigger itself. A Republican donor thinking that the ethical ramifications of murder technology should be dictated by personal responsibility? Who could have possibly seen this coming?

Luckey talks big about the future of military technology and his good intentions, but he’s making good money from militarised conflict. The startup landed a billion-dollar contract from the Department of Defense in January. And its work on the border wall wasn’t cheap either: that five-year contract with the U.S. Customs and Border Protection agency was reportedly worth $US250 million (around $350 million). Luckey likes to send “mean tweets,” as he refers to them, about Anduril having more money than taxpayer-funded weapons manufacturers, since Anduril and other private companies aren’t tied only to public funds. But maybe it’s actually a bad thing for private companies to be incentivised by armed conflict.