Apps Can Charge You More Based on Sex and Location, and Legislation Can’t Keep Up

Algorithms rule our digital lives. They determine what we see and when we see it, and in the case of some dating and ride-share apps, they determine how much we pay. Without visibility into how an algorithm works, however, that pricing power is ripe for discrimination. And there’s no real legislation in Australia to stop it.

A months-long investigation by Choice reporter Saimi Jeong revealed Tinder had allegedly been charging its Plus users in Australia different amounts depending on age, gender, sexuality and location. Tinder introduced a tiered pricing system in the United States in 2015, which was ultimately removed after a $US11.5 million class action lawsuit in California.

Tinder admitted that pricing was determined by “a number of factors” but didn’t say whether a user’s sexuality or gender was among them.

“We do offer a variety of subscription options and paid a la carte features designed to help our members stand out and match with new people more efficiently,” a Tinder spokesperson said in a statement to Gizmodo Australia.

“Tinder operates a global business and our pricing varies by a number of factors. We frequently offer promotional rates — which can vary based on region, length of subscription, bundle size and more. We also regularly test new features and payment options.”

Charging users different prices depending on age is not new — children’s and seniors’ fares are common — but it becomes a question of ethics when other markers are factored in with little to no transparency.

We’ve seen it all before

Dating apps are not the only ones to have been hit with pricing discrimination allegations.

Uber’s former head of product, Daniel Graf, told Bloomberg in 2017 that the ride-share company had switched to a “route-based” pricing model. This meant, according to Graf, Uber’s algorithm incorporated machine-learning techniques to figure out how much a customer was willing to pay for a ride.

According to this model, two users might be charged different amounts based on how often they used the app or whether they were travelling to a more affluent neighbourhood, even if demand, traffic and distance were the same.
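
To make that concrete, here is a minimal, purely hypothetical sketch of how a learned willingness-to-pay estimate could sit on top of an ordinary metered fare. None of the feature names, weights or rates come from Uber; they are invented to illustrate the kind of technique Graf described.

```python
# Hypothetical sketch only -- not Uber's actual system. A learned
# willingness-to-pay multiplier is layered over a plain metered fare,
# so identical trips can be priced differently per rider.
from dataclasses import dataclass

@dataclass
class Rider:
    rides_last_month: int          # how often they use the app
    destination_affluence: float   # invented 0-1 score for the drop-off area

def metered_fare(distance_km: float, duration_min: float) -> float:
    # Invented rates: flat fee plus time and distance.
    return 2.50 + 1.45 * distance_km + 0.40 * duration_min

def willingness_multiplier(rider: Rider) -> float:
    # Stand-in for a trained model: frequent riders heading to wealthier
    # suburbs get nudged upward. A real system would learn such weights
    # from historical trip and payment data.
    return 1.0 + 0.02 * min(rider.rides_last_month, 10) + 0.15 * rider.destination_affluence

fare = metered_fare(8.0, 18.0)
print(round(fare * willingness_multiplier(Rider(2, 0.2)), 2))   # occasional rider, modest suburb
print(round(fare * willingness_multiplier(Rider(12, 0.9)), 2))  # frequent rider, affluent suburb
```

The two quotes differ solely because of who is riding, which is exactly the behaviour that is hard to detect from the outside.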

A New Scientist report in June 2020 revealed a U.S. analysis showing some ride-share users who travelled to areas with higher populations of Black residents were charged more. The analysis drew on data from more than 100 million trips taken in the Chicago area between November 2018 and December 2019.

Uber Australia has maintained it doesn’t use this pricing model, pointing to its pricing estimate page. The page explains that pricing in Australia is determined by three primary factors — a flat fee depending on your city, the time and distance of the trip, and the demand for drivers at the time.

It also states the above are just “some” of the factors.
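
Taken at face value, that description amounts to a simple metered formula with a demand multiplier on top. Here is a back-of-the-envelope sketch; every rate and the surge rule are invented for illustration and are not Uber’s actual numbers.

```python
# Back-of-the-envelope version of the pricing Uber Australia describes:
# a flat city fee, time and distance, and driver demand. All rates and
# the surge rule are invented.

def au_fare(distance_km: float, duration_min: float,
            riders_waiting: int = 10, drivers_nearby: int = 10,
            base_fee: float = 2.50, per_km: float = 1.45,
            per_min: float = 0.40) -> float:
    # Demand factor: more riders than available drivers pushes the price up.
    surge = max(1.0, riders_waiting / max(drivers_nearby, 1))
    return (base_fee + per_km * distance_km + per_min * duration_min) * surge

print(round(au_fare(8.0, 18.0), 2))                     # quiet period
print(round(au_fare(8.0, 18.0, riders_waiting=30), 2))  # busy period, surge kicks in
```

Notably, nothing in this version depends on who the rider is; the contested territory begins when attributes of the person, rather than the trip, enter the formula.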

How to trust the invisible, all-knowing algorithm

An algorithm is, at its simplest, a set of instructions a computer follows to deliver an outcome. In the case of social media sites, it reads your search queries or viewing habits and serves up content based on your interests. A similar thing happens on streaming services, too.

For paid services, an algorithm can also be paired with machine learning, letting it adjust its behaviour as it collects more data.
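
As a toy illustration of that pairing (all data invented): a fixed-rule algorithm applies the same steps to every user, while one fed by machine learning folds in data the service has collected about you.

```python
# Toy contrast between a fixed-rule algorithm and one informed by
# collected user data. All data invented for illustration.

def fixed_rule(query: str, catalogue: list[str]) -> list[str]:
    # Plain algorithm: the same deterministic steps for everyone.
    # Here, rank items by how many words they share with the query.
    words = set(query.lower().split())
    return sorted(catalogue, key=lambda item: -len(words & set(item.split())))

def learned_rule(your_clicks: dict[str, int], catalogue: list[str]) -> list[str]:
    # The machine-learning twist: the ranking depends on data gathered
    # about you, so two users can see entirely different orders.
    return sorted(catalogue, key=lambda item: -your_clicks.get(item, 0))

catalogue = ["hiking boots", "running shoes", "trail socks"]
print(fixed_rule("trail running", catalogue))        # same answer for everyone
print(learned_rule({"hiking boots": 7}, catalogue))  # personalised to your history
```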

Associate Professor Timothy Miller from the University of Melbourne’s Centre for Artificial Intelligence and Digital Ethics (CAIDE) explained the dating app situation was likely a mixture of the two.

“It seemed to me like, possibly, the more recent [dating app] example was using machine learning,” Professor Miller told Gizmodo Australia in a Zoom call.

“They had actually gotten personal data around what sexuality [users] were, how old they were and where they lived and they’ve kind of learned that, well, people in this particular area would be of a particular affluence or they’d be more likely to pay for it.”

Of course, this is speculation, and for a few reasons. Firstly, companies aren’t going to simply admit they’re charging you differently because the algorithm sees you as more cashed up — that would be detrimental to consumer trust.

The tech space is also fiercely competitive, and giving away too much about how a proprietary algorithm works might be a bad business choice.

“There’s a trade-off between giving away too much information and gaining enough trust,” Professor Miller said.

“[A company] could say, ‘this is how important this factor is in charging the price’.

“But they start to give more and more information away about the business model effectively, because people can ask enough questions that they can kind of reconstruct the model themselves.”

Unfortunately, this results in tech companies using shadowy methods that ultimately benefit them financially — and users are none the wiser.

“As an individual, it’s very difficult to see. You get the price and that’s it,” Professor Miller said.

There are counterfactual tests, Professor Miller added, that might give you a chance to trick the algorithm into revealing what others are charged.

“So, changing your gender, changing your sexuality, changing the region you live in … but I don’t see companies like [that] offering those types of explainability tools at all,” Professor Miller said.

“The only real way you can know it is if there is something built in or you get enough people together [to check] or you set up enough accounts yourself in order to change all these parameters and see what the impact is.”
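
In code, the counterfactual test Professor Miller describes looks something like the sketch below. `get_quoted_price` is a hypothetical stand-in for however a service quotes a price (in practice, a real account per profile); the point is to change one attribute at a time and compare.

```python
# Sketch of counterfactual probing: hold every attribute fixed,
# change one, and compare the quoted prices.

def get_quoted_price(profile: dict) -> float:
    # Hypothetical stand-in for a service's opaque quote. Dummy behaviour
    # so the sketch runs: an invented loading on certain postcodes, the
    # kind of pattern this probing is meant to reveal.
    return 24.99 + (5.00 if profile["postcode"] in {"2000", "3000"} else 0.0)

BASELINE = {"age": 30, "gender": "woman", "sexuality": "straight", "postcode": "3000"}

def probe(attribute: str, alternatives: list) -> None:
    base_price = get_quoted_price(BASELINE)
    for value in alternatives:
        counterfactual = {**BASELINE, attribute: value}
        delta = get_quoted_price(counterfactual) - base_price
        print(f"{attribute}={value!r}: {delta:+.2f} vs baseline")

probe("postcode", ["2000", "0800", "6000"])  # only the region changes
probe("age", [19, 50])                       # only the age changes
```

Without an official explainability tool, each counterfactual profile would have to be a real account, which is exactly the group effort Professor Miller describes.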

Regulation might be a solution…

If it’s near impossible for a user to really know what an algorithm is designed to do or charge, how can anyone trust it beyond anecdotal evidence? One idea is to introduce a regulatory body or establish an ethics framework that companies would have to abide by.

In 2019, the European Commission proposed a set of guidelines that artificial intelligence (AI) systems would need to meet in order to be rubber-stamped as trustworthy.

Broadly, it asserted AI systems should be lawful, ethical and robust. At a more granular level, it expects them to preserve human agency and incorporate oversight, to take diversity, non-discrimination and fairness into account, and to provide transparency and accountability, among other key points.

Dr Nick Patterson, a cybersecurity researcher at Deakin University, said it makes sense for something similar to be established in Australia, too.

“[A formal regulatory body] is something which should be established to ensure ethical algorithm decisions — especially when it involves them making important decisions on human lives,” Dr Patterson said.

“They should look at things like cyber safety, biases, overcharging and any other unethical techniques.”

But Dr Patterson thinks regulating the algorithms themselves is quite tricky. Low-stakes algorithmic decisions, like whether you pay a few dollars more for an app subscription, should be left for customers to judge. It’s the potentially life-changing or life-saving decisions, Dr Patterson said, that need a tougher look.

“If it’s something like … making court decisions or in a hospital, an algorithm making medical decisions should be reviewed by a panel of experts, such as IEEE [Institute of Electrical and Electronics Engineers] or equivalent, to determine safety.”

… but it’s complicated

It all sounds great in theory, but it’s just not that simple.

“The problem [with regulating algorithms] is how do you operationalise it?” Professor Miller said.

“Yes, there’s a principle around being fair or there’s a principle about being transparent, but what does that mean to the person on Monday morning when they’re sitting down at their desk to operationalise these procedures? There’s a lot of work that needs to be done there.

“In the case of discriminative pricing, we need to look at existing legislation and say ‘we need to update this for the modern era, where people are really discriminating based on a whole lot of factors that we didn’t think about when we wrote this up 20 years ago.’”

Professor Miller and Dr Patterson don’t have simple answers to a complicated situation, but the baby steps being taken in Europe are providing some hope.

In Australia, the Human Rights Commission released a discussion paper in late 2019 on the ethics of emerging technologies, including AI-driven algorithms.

“AI is being used to make decisions that unfairly disadvantage people on the basis of their race, age, gender or other characteristic. This problem arises in high-stakes decision making, such as social security, policing and home loans,” Commissioner Edward Santow wrote at its launch.

“These risks affect all of us, but not equally. We saw how new technologies are often ‘beta tested’ on vulnerable or disadvantaged members of our community.”

After consulting with industry and experts, the commission is expected to deliver a final report sometime in 2020.

In the meantime, while discussions remain in their infancy, we’ll have to learn to trust in the omnipresent algorithm.
