Anyone remember the 2005 sci-fi movie Stealth? As far as I can remember, it involves a secret, artificially intelligent U.S. Air Force fighter plane that rebels against its creators when it is struck by lightning. It was not very good.
Regardless, the U.S. military is now one step closer to its dream of unleashing an uncontrollable, uncaring immortal aeroplane god. According to Air Force Magazine, an AI absolutely ruined an unnamed human pilot in a simulated dogfight during the Defence Advanced Research Projects Agency’s AlphaDogfight trials (part of its Air Combat Evolution program) in a 5-0 shutout on Thursday.
The AI, developed by Heron Systems, beat out seven other companies’ entrants before it went up against “Banger,” a District of Columbia Air National Guard pilot and recent Air Force Weapons School F-16 Weapons Instructor Course graduate with over 2,000 hours of experience flying F-16s. The AI relies on a technique called deep reinforcement learning, which allows the program to continually test-run multiple solutions to a given problem and learn what works and what doesn’t.
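Heron Systems’ actual system is proprietary and its details aren’t public, but the trial-and-error loop behind reinforcement learning can be illustrated with a minimal, generic sketch. Here, a Q-learning agent on a toy one-dimensional “pursuit” problem repeatedly tries actions, observes rewards, and gradually learns that closing the distance to a target pays off (everything in this example — the grid, rewards, and parameters — is invented for illustration):

```python
import random

random.seed(0)

# Toy 1-D "pursuit" environment: the agent starts at some position and
# must learn to close the distance to a target sitting at position 0.
N = 10  # positions 0..9

def step(state, action):
    """action 0 = move toward the target, action 1 = move away."""
    next_state = max(0, state - 1) if action == 0 else min(N - 1, state + 1)
    reward = 1.0 if next_state == 0 else -0.1  # payoff only for reaching the target
    done = next_state == 0
    return next_state, reward, done

# Q-table: the agent's running estimate of future reward per (state, action)
Q = [[0.0, 0.0] for _ in range(N)]
alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning rate, discount, exploration rate

for episode in range(500):
    s = random.randrange(1, N)
    while True:
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore
        if random.random() < epsilon:
            a = random.randrange(2)
        else:
            a = 0 if Q[s][0] >= Q[s][1] else 1
        s2, r, done = step(s, a)
        # Core update: nudge Q toward observed reward plus discounted future value
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2
        if done:
            break

# After training, the greedy policy should always move toward the target.
policy = [0 if Q[s][0] >= Q[s][1] else 1 for s in range(1, N)]
print(policy)  # expect all 0s: always close the distance
```

“Deep” reinforcement learning replaces the lookup table with a neural network so the same learn-from-reward loop scales to enormous state spaces like the position, speed, and orientation of two fighter jets.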
The other entrants in the trials included Aurora Flight Sciences, EpiSys Science, Georgia Tech Research Institute, Lockheed Martin, Perspecta Labs, PhysicsAI, and SoarTech.
According to Air Force Mag, the AI was bound by certain constraints — it was restricted to flying within realistic G-force limits, and like Banger, was only allowed to fire its simulated M61 Vulcan cannon (no air-to-air missiles). But the artificial pilot had other advantages, such as the ability to make decisions in microseconds and awareness of all system and computer variables, while Banger wore a VR headset and was operating in a simulator, not a real jet.
The AI was able to outmaneuver and destroy Banger’s aircraft in all five rounds, though according to Air Force Mag, the human pilot was able to survive longer each time.
Per The Next Web, Banger said that after getting shot down the first four times, he had tried to adjust his approach by accelerating to 500 mph (804 km/h) and dropping his jet to 13,000 feet (3,900 metres).
“The standard things that we do as fighter pilots are not working, so for this last one, I’ll try to change it up a little bit just to see if we can do something different,” Banger said. “That initial turn is where I lose a lot of life… I’ve just gotta look for opportunities to minimise that distance separation away from the adversary, try to get him back in so I press inside or stay outside his nose area.”
The trick initially seemed to work, according to The Next Web, because the Heron AI couldn’t lower its gun far enough to target Banger — but the advantage only lasted a few seconds. Heron quickly adjusted course and knocked Banger out once more.
Commander Vincent “Jell-O” Aiello, former U.S. Navy pilot and host of The Fighter Pilot podcast, told Forbes that humans still hold the advantage by far in anything resembling realistic conditions.
“Humans have been proven to excel in one important area when facing off against AI — they know how to handle the type of uncertainty found in today’s combat engagements,” Aiello told Forbes. “Combat does not occur in sterile, static environments. It occurs in 3D, in real-time, where weather, your adversary, and a whole host of other factors come into play.”
According to Air Force Mag, DARPA’s first phase of the project is slated to end later this year. The next step will be two successive 16-month phases in which the AI will be installed in progressively larger planes. The agency hopes to have some kind of product in the hands of the USAF by 2024, with the eventual aim being to have the system handle some aspects of flight like manoeuvring and targeting (presumably adhering to the U.S. military’s AI murder policy, which on paper mandates that humans be able to “exercise appropriate levels of human judgment over the use of force”). DARPA hopes pilots will eventually be able to rely on the AI to handle some tasks in the middle of a battle:
Down the road, these efforts will move from simulators into live-fly testing with simulated weapons. Aircraft equipped with AI will have “safety pilots” onboard to ensure nothing goes wrong, though the software should be designed to avoid any accidents. Those tests will look at how often pilots rely on the AI system to handle tasks, and how well the humans handle their own battle management mission while AI does the rest.
In 2018, hundreds of companies and thousands of top AI experts, scientists, and researchers signed an open letter vowing they would never put their skills to use creating AI-powered military killing machines, saying such weapons would pose a “clear and present danger to the citizens of every country in the world.” Malfunctions, environmental variables, successful hacks, and other factors could result in fully autonomous weapons targeting civilians, friendly forces, surrendering enemies, or other incorrect targets — or they could be purposefully used to indiscriminately target anything in the line of fire.
No surprise here- dogfighting is extremely rules based, whoever can fly the longest on the edge of the maneuvering envelop wins. Humans had no chance. When the AI in the plane can consistently discriminate between combatants and non-combatants, call me https://t.co/d69yUjuFaL https://t.co/rp81Q6NZ5g
— Missy Cummings (@missy_cummings) August 20, 2020
DARPA ACE program manager Colonel Daniel Javorsek told Air Force Mag that a “fully autonomous Heron flying the entire aeroplane (system) is still quite a ways off” and the current tests are simply to see whether the concept is “feasible.” Javorsek added that even if a perfect system were available today, it would take a decade to integrate it into fighter jets.