We've all worried about artificial intelligence reaching a point where its cognitive ability is so far beyond ours that it turns against us. But what if we just turned the AI into a spineless weenie that longs for our approval? Researchers suggest that could be a great step towards improving the algorithms, even if they aren't out to murder us.
In a new paper, a team of scientists has begun to explore the practical (and philosophical) question of how much self-confidence AI should have. Dylan Hadfield-Menell, a researcher at the University of California, Berkeley and one of the paper's authors, tells New Scientist that Facebook's news feed algorithm is a perfect example of machine confidence gone awry. The algorithm is good at serving up what it believes you'll click on, but it's so busy deciding whether it can get your engagement that it never asks whether it should. Hadfield-Menell believes the AI would be better at making choices, and at identifying fake news, if it were programmed to seek out human oversight.
In order to put some data behind this idea, Hadfield-Menell's team created a mathematical model they call the "off-switch game". The premise is simple: A robot has an off switch and a task; a human can turn the robot off whenever they want, but the robot can override the human only if it believes it should. "Confidence" could mean a lot of things in AI. It could mean the AI has been trained to treat its own sensors as more reliable than a human's perception, so that a human shouldn't be allowed to switch it off in a situation it judges unsafe. It could mean the AI knows more about productivity goals and that the human will be fired if the task isn't completed. Depending on the task, it will probably mean weighing a ton of factors at once.
The study doesn't come to any conclusions about how much confidence is too much; that's really a case-by-case question. It does lay out some theoretical models in which the AI's confidence hinges on how valuable it believes its own actions are and on how much it doubts human decision-making.
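To make that trade-off concrete, here's a rough sketch of the incentive structure the off-switch game formalises. It's an illustration under simplifying assumptions (a rational human who hits the switch whenever the action's true utility is negative, and a robot whose belief about that utility is a bell curve), not the paper's actual maths:

```python
import numpy as np

# A toy version of the off-switch game's incentive structure.
# The robot wants to take an action whose true utility U it is unsure about.
# A rational human observes U and presses the off switch whenever U < 0.
# The robot can either defer to the switch or disable it and act anyway.

rng = np.random.default_rng(0)

def defer_vs_disable(belief_mean, belief_std, n_samples=100_000):
    """Compare the robot's expected utility when it defers vs. disables the switch."""
    u = rng.normal(belief_mean, belief_std, n_samples)  # robot's belief over U

    disable = u.mean()               # act regardless: collects U, good or bad
    defer = np.maximum(u, 0).mean()  # human shuts it off whenever U < 0

    return defer, disable

for std in (0.1, 1.0, 3.0):
    defer, disable = defer_vs_disable(belief_mean=0.5, belief_std=std)
    print(f"uncertainty={std:.1f}  defer={defer:.3f}  disable={disable:.3f}")
```

The pattern that falls out: the more uncertain the robot is about its own usefulness, the more it gains by leaving the off switch in human hands, which is the core of the argument for building AI with a measure of self-doubt.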
The model lets us see some hypothetical outcomes when an AI has too much or too little confidence. But more importantly, it puts a spotlight on this issue. Especially in these nascent days of artificial intelligence, our algorithms need all the human guidance they can get. A lot of that is being accomplished through machine learning, with all of us acting as guinea pigs while we use our devices. But machine learning isn't great for everything. For quite a while, the top Google search result for the question "Did the Holocaust happen?" was a link to the white supremacist website Stormfront. Google eventually conceded that its algorithm wasn't showing the best judgement and fixed the problem.
Hadfield-Menell and his colleagues maintain that AI will need to be able to override humans in many situations. A child shouldn't be allowed to override a self-driving car's navigation systems. A future breathalyser app should be able to stop you from sending that 3AM tweet. There are no answers here, just more questions.
The team plans to continue working on the problem of AI confidence with larger datasets for the machine to make judgements about its own utility. For now, it's a problem that we can still control. Unfortunately, the self-confidence of human innovators is untameable.