Have you ever used Google Assistant, Apple’s Siri or Amazon Alexa to make decisions for you? Perhaps you asked it what new movies have good reviews, or to recommend a cool restaurant in your neighbourhood.
Artificial intelligence and virtual assistants are constantly being refined, and may soon be making appointments for you, offering medical advice, or trying to sell you a bottle of wine.
Although AI technology has miles to go to develop social skills on par with ours, some AI has shown impressive language understanding and can complete relatively complex interactive tasks.
It’s likely the AI systems developed by tech giants such as Amazon and Google will only grow more capable of influencing us in the future.
But what do we actually find persuasive?
My colleague Adam Duhachek and I found AI messages are more persuasive when they highlight “how” an action should be performed, rather than “why”. For example, people were more willing to put on sunscreen when an AI explained how to apply sunscreen before going out, rather than why they should use sunscreen.
We found people generally don’t believe a machine can understand human goals and desires. Take Google’s AlphaGo, an algorithm designed to play the board game Go. Few people would say the algorithm can understand why playing Go is fun, or why it’s meaningful to become a Go champion. Rather, it just follows a pre-programmed algorithm telling it how to move on the game board.
Our research suggests people find AI’s recommendations more persuasive in situations where AI shows easy steps on how to build a personalised health insurance plan, how to avoid buying a lemon car, or how to choose the right tennis racket, rather than why any of these are important to do in a human sense.
Does AI have free will?
Most of us believe humans have free will. We compliment someone who helps others because we think they do it freely, and we penalise those who harm others. What’s more, we are willing to lessen the criminal penalty if the person was deprived of free will, for instance if they were in the grip of a schizophrenic delusion.
But do people think AI has free will? We did an experiment to find out.
Imagine someone is given $100 and offers to split it with you: they’ll get $80 and you’ll get $20. If you reject this offer, both you and the proposer end up with nothing. Gaining $20 is better than nothing, but previous research suggests the $20 offer is likely to be rejected because we perceive it as unfair. Surely we should get $50, right?
But what if the proposer is an AI? In a research project yet to be published, my colleagues and I found the rejection ratio drops significantly. In other words, people are much more likely to accept this “unfair” offer if proposed by an AI.
This is because we don’t think an AI developed to serve humans has any malicious intent to exploit us — it’s just an algorithm, it doesn’t have free will, so we might as well just accept the $20.
The fact people could accept unfair offers from AI concerns me, because it might mean this phenomenon could be used maliciously. For example, a mortgage loan company might try to charge unfairly high interest rates by framing the decision as being calculated by an algorithm. Or a manufacturing company might manipulate workers into accepting unfair wages by saying it was a decision made by a computer.
To protect consumers, we need to understand when people are vulnerable to manipulation by AI. Governments should take this into account when considering regulation of AI.
We’re surprisingly willing to divulge to AI
In other work yet to be published, my colleagues and I found people tend to disclose their personal information and embarrassing experiences more willingly to an AI than a human.
We told participants to imagine they were at the doctor for a urinary tract infection. We split the participants, so half spoke to a human doctor and half to an AI doctor. We told them the doctor would ask a few questions to find the best treatment, and that it was up to them how much personal information to provide.
Participants disclosed more personal information to the AI doctor than the human one in response to potentially embarrassing questions about the use of sex toys, condoms, or other sexual activities. We found this was because people don’t think AI judges our behaviour, whereas humans do. Indeed, we asked participants how concerned they were about being negatively judged, and found this concern was the underlying mechanism determining how much they divulged.
It seems we feel less embarrassed when talking to AI. This is interesting because many people have grave concerns about AI and privacy, and yet we may be more willing to share our personal details with AI.
But what if AI does have free will?
We also studied the flipside: what happens when people start to believe AI does have free will? We found giving AI human-like features or a human name could mean people are more likely to believe an AI has free will.
This has several implications:
- AI can then better persuade people on questions of “why”, because people think the human-like AI may be able to understand human goals and motivations
- an unfair offer from AI is less likely to be accepted, because the human-looking AI may be seen as having its own intentions, which could be exploitative
- people start to feel judged by the human-like AI, feel embarrassed, and disclose less personal information
- people start to feel guilty when harming a human-looking AI, and so act more benignly towards it.
We are likely to see more and different types of AI and robots in the future. They might cook, serve, sell us cars, tend to us at the hospital and even sit across the dining table from us as a dating partner. It’s important to understand how AI influences our decisions, so we can regulate AI to protect ourselves from possible harms.
This article was originally published in October 2020.