Yesterday, Facebook's head of artificial intelligence, Yann LeCun, said that humans have nothing to fear from artificial intelligence harming humanity. Why's that? "We have a lot of checks and balances built into society to prevent evil from having infinite power," LeCun said. Is that so, Mr LeCun?
GIF of a robot that is frustrated with his lack of infinite power in the 1991 documentary Terminator 2: Judgment Day
We're glad to see that LeCun is up and about after recovering from what we can only guess was an 18-month coma, but we're concerned that he should probably be resting during this trying time. You can't rush these things.
Those "checks and balances" LeCun mentions are being tested both in the US and around the world at the moment with the global rise of fascism. And anyone arguing that we can rely on those checks and balances to prevent something like the robot apocalypse might want to be a bit more careful with their words.
From LeCun's interview with Axios:
Axios: You said during your talk that we shouldn't worry about machines taking over the world, because that assumes that computers will have human failings, like greed or the tendency to become violent when threatened. But what about a scenario in which a hedge fund bot is programmed to maximise returns, and it turns out the best way to do that is to buy a bunch of food before destroying the rest of the world's food supply. Such a machine would be fulfilling its purpose, but through evil, even if the person who programmed the machine didn't anticipate this reaction.
LeCun: We have a lot of checks and balances built into society to prevent evil from having infinite power. Most companies are not either working for good or evil -- they're just maximizing profits. But we have all sorts of rules and laws to prevent our economy from going haywire. It will be the same thing for AI. Learning to build AI systems that are safe -- not because they're going to take over the world, but because you want them to work reliably -- is going to take some time, similar to how long it took people to figure out how to build aeroplanes that don't crash.
The interviewer quite rightly points out that "they're just maximizing profits" isn't a great defence when you're talking about the potential for AI to get out of control. Sometimes it's really profitable to harm people. In fact, that's why there are institutions like the EPA and the FDA to protect people -- those same institutions that are currently being dismantled piece by piece in the US.
Again, Gizmodo is glad to see that LeCun is recovering from his coma or has emerged from literally living in a cave for the past 18 months. (It could not be confirmed by press time whether it was a coma or the cave thing.)
But Facebook should really give him some time to get his head sorted before they start shoving him out in public to talk with reporters.