How Many Social Media Users Are Real People?

Illustration: Elena Scotti (Gizmodo)

Most social media users know that bot accounts are among us, whether as fake voters with loud opinions or obsessive re-tweeters of a single corporation’s content. When it comes to telling those ‘fake’ accounts from real ones, however, or even just knowing how many non-individuals are active online, savvy users are mostly in the dark.

Tallies suggest there are over three billion social media users in the world, many of whom maintain accounts on multiple platforms. The total number of social media accounts may be several times that, making the task of sorting out people (and their braver/sassier/sexier online alter-egos) from commercial, political, and general trolling accounts a gargantuan technical challenge. According to the experts we’ve asked this week, nobody knows the answer for certain. One researcher pointed to their center’s paper, which estimated that between 9% and 15% of active Twitter accounts may be social bots, implying that between 85% and 91% are human; those figures rest on various assumptions, they cautioned, but are probably the best we have at this time, and the 15% estimate is the one widely quoted in the media.

David Caplan

Co-Founder of TwitterAudit

At TwitterAudit we’ve analysed tens of millions of Twitter users over the past six years. We’ve tuned our algorithm to recognise bot patterns and distinguish fake accounts from real ones. Based on our data, we would estimate that 40-60% of Twitter accounts represent real people. It’s much easier to sign up a fake/bot account on Twitter than it is on other social media platforms, and in many cases it’s hard to tell if an account is fake or just inactive. On the other hand, it’s a bit easier to define a “real person” on Twitter than it is on, say, Instagram.
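TwitterAudit hasn’t published the details of its algorithm, so the sketch below is only a generic illustration of the kind of profile-level heuristics such audit tools are often assumed to use (follower/following ratio, posting volume, default avatar); the feature names and thresholds are illustrative guesses, not Caplan’s.

```python
from dataclasses import dataclass


@dataclass
class Profile:
    """Hypothetical profile features a bot-audit heuristic might examine."""
    followers: int
    following: int
    tweets_per_day: float
    has_default_avatar: bool
    account_age_days: int


def bot_likelihood(p: Profile) -> float:
    """Return a crude 0..1 score; higher means more bot-like.
    Thresholds are invented for illustration, not TwitterAudit's."""
    score = 0.0
    if p.following > 0 and p.followers / p.following < 0.1:
        score += 0.3  # follows far more accounts than follow it back
    if p.tweets_per_day > 100:
        score += 0.3  # posts at an inhuman volume
    if p.has_default_avatar:
        score += 0.2  # never personalised the account
    if p.account_age_days < 30 and p.tweets_per_day > 50:
        score += 0.2  # brand-new yet hyperactive
    return min(score, 1.0)


print(bot_likelihood(Profile(12, 4800, 240.0, True, 14)))  # -> 1.0, very bot-like
```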

Ben Nimmo

Information Defence Fellow, Atlantic Council’s Digital Forensic Research Lab

The short answer is that we don’t know the answer. On one level, almost every social media account has a human being somewhere behind it, to create it and operate it, even if all they do is pre-program it as a bot.

How many social media accounts belong to the person they claim to be? There are millions which don’t. There are troll accounts which pretend to be someone else. There are bot accounts which use somebody else’s profile picture and name, but are automated to post hundreds of times a day. There are accounts which used to be run by a genuine user, but were hijacked and taken over by someone else. There are accounts which claim to be one person, but are operated by a team. There are cyborg accounts which sometimes post authored content, and sometimes post on autopilot, with no human intervention.

That’s why it’s so important to think before you click. You can’t take every account at face value: different people will run impersonation accounts to have political effect, or to spread spam and malware, or just for the fun of causing trouble. It’s always worth looking at the way an account behaves before you engage with it.

Sune Lehmann Jørgensen

Associate Professor, Department of Applied Mathematics and Computer Science, Technical University of Denmark

In my opinion it is exceedingly difficult to estimate the number of actual-person accounts on social networks. It turns out that when all you have is tweets or Facebook messages, it is difficult to tell whether the author is a bot, a person who is micro-blogging, or someone in Saint Petersburg pretending to be a Minnesota homemaker.

The fact that there are so many categories of accounts that are not actual people makes the task of distinguishing them even harder. For example, there are simple follower bots (the kind you would buy if you need fake followers; they exist only to boost follower numbers and post random content), re-tweet bots (designed to spread content), news-feed bots (designed to tweet, for example, headlines from news sites), and bots designed to interact with people who post certain content (e.g. @yesyoureracist).
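None of these categories requires sophisticated engineering. As a rough illustration (not drawn from Lehmann’s own bots), a news-feed bot of the kind he describes can be a handful of lines of Python; the feed URL and credentials below are placeholders, and the tweepy and feedparser libraries are simply convenient choices for the sketch.

```python
# Hypothetical sketch of a news-feed bot: reads an RSS feed and tweets new headlines.
# All credentials and the feed URL are placeholders, not real values.
import time

import feedparser  # parses RSS/Atom feeds
import tweepy      # client for the official Twitter/X API

client = tweepy.Client(
    consumer_key="PLACEHOLDER",
    consumer_secret="PLACEHOLDER",
    access_token="PLACEHOLDER",
    access_token_secret="PLACEHOLDER",
)

seen_links = set()  # remember which stories have already been posted

while True:
    feed = feedparser.parse("https://example.com/news/rss")  # hypothetical feed
    for entry in feed.entries:
        if entry.link not in seen_links:
            seen_links.add(entry.link)
            client.create_tweet(text=f"{entry.title} {entry.link}")
    time.sleep(600)  # poll every ten minutes
```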

The best bots copy content from human users in a clever way and post content according to daily rhythms, etc (I made some of those years ago). Those accounts are near-impossible to distinguish from humans even for experts.
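To make that concrete: one signal a naive detector might lean on is timing, since humans post on a daily rhythm while crude bots post around the clock. The sketch below is my own illustration of such a check, not Lehmann’s code; a bot that replays a human-like rhythm, as he describes, passes exactly this kind of test.

```python
import math
from collections import Counter


def hour_entropy(post_hours: list[int]) -> float:
    """Shannon entropy (bits) of the hour-of-day distribution of an account's posts.
    ~4.58 bits means perfectly uniform round-the-clock posting; humans, who sleep,
    usually land well below that."""
    counts = Counter(post_hours)
    total = len(post_hours)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())


human_like = [8, 9, 9, 12, 13, 18, 19, 20, 21, 22]  # clustered in waking hours
round_the_clock = list(range(24)) * 3                # posts every hour of the day

print(hour_entropy(human_like))       # lower entropy, daytime-clustered
print(hour_entropy(round_the_clock))  # ~4.58 bits, suspiciously uniform
```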

And we haven’t even covered the category of human users pretending to be someone else on social media for malicious purposes, such as spreading false information.

The problem of identifying accounts belonging to actual people is exacerbated by the fact that platform owners like Facebook and Twitter probably aren’t really interested in finding the bots. The more users they have, the more money they make.

Shu-Sha Angie Guan

Ph.D., Assistant Professor, Child and Adolescent Development, California State University, Northridge

I will say that, in a survey of 378 ‘residents’ of the virtual world Second Life, my co-authors and I found that residents reported an average of two avatar accounts (specifically 2.45; standard deviation = 3.58). Although 58% of residents reported having only one primary avatar account, 42% reported alternative accounts; the average number of accounts per resident and the high standard deviation are likely due to the few residents who hold a large number of accounts (say, 5 to 10).
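Guan’s point about the average and the standard deviation is easy to see with a toy distribution: a few heavy multi-account users pull both numbers up even when most people hold a single account. The sample below is invented for illustration and is not her survey data.

```python
import statistics

# Hypothetical sample of 100 residents: most have one avatar, a few have many.
accounts = [1] * 58 + [2] * 30 + [3] * 6 + [8, 9, 10, 10, 12, 15]

print(statistics.mean(accounts))   # 2.0   (pulled well above 1 by the heavy users)
print(statistics.stdev(accounts))  # ~2.35 (spread driven by the same few people)
```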

Virtual worlds are different from social media sites, arguably due to their immersiveness and different range of activities. However, if account-creation behaviour on Second Life is comparable to account-creation behaviour on social media sites like Facebook and Twitter, then it’s likely that a lot of folks have one primary account, some folks have a second alternate account, and a few folks have a large number of alternative accounts.

Rami Essaid

Co-founder and Chief Product and Strategy Officer, Distil Networks (Bot Detection and Mitigation)

It’s difficult to pinpoint a specific number of real users on social media platforms, but consumers should be aware that bots are more prevalent than many expect. For example, in October 2017, Twitter testified before Congress that about 5% of accounts are run by bots; however, some studies have shown that number to be as high as 15%.

A bot is any automated tool or script designed to perform a specific task, and on social media it can be used to magnify one person’s agenda so that it seems more widespread than it really is. Political bots in particular can be used to exaggerate politicians’ popularity and manipulate public conversation. By tilting the scale of public discourse, such bots lend social media unprecedented power to influence personal opinions as well as business decisions.

However, there is a reason that social media platforms allow this problem to persist. Social media companies, like most businesses, report to investors or shareholders, to whom they feel a responsibility to continuously report growth in the number of active users as a measure of success. These companies have already started down the path of including bots in their reports of user growth, so as time goes on it becomes increasingly difficult for them to begin filtering out non-human accounts: while doing so would provide an accurate picture of real users, it may not show the growth they desire or have promised. As social media companies continue to receive positive reinforcement from the market, they are left with a perverse incentive to avoid tackling the bot problem head on.

So what is the answer to this problem? It needs a combination of technology and legislation. Legislation certainly helps bring to light the issues in each industry plagued by bots and can provide some justice, but the law alone isn’t going to stop hackers and fraudsters. Technology capable of blocking bad bots before they even reach a website can also make a dent in the problem, but there needs to be a collaborative effort between the two.

Pete Hunt

CEO, Smyte (fraud and spam protection/online security startup)

Fake accounts are an existential threat to social networking. The point of social networking is to interact with real people and engage with legitimate content; once that is called into question, engagement plummets. If we look at what has publicly been reported by Facebook and Twitter, the likely range of fake social accounts has historically been anywhere from 15-25% of users.

Because identity systems are largely based on old, pre-internet paradigms, hiding one’s identity online is trivial. Think about it: in the real world, our identity system is a card, your driver’s licence or passport. A human looks at it, judges that the card is not fake, and says, “Oh yeah, you look legit.” This mostly works because humans are adept at determining whether the ID matches the person standing in front of them.

Online, the “driver’s licence” concept doesn’t exist. Anonymity is a fundamental reality of the online experience today. Username-and-password combos do not inherently solve the anonymity problem, so social networks introduce what we call “naive frictions” such as CAPTCHAs and email, SMS, and phone verification. We call these “naive” because they are well-intentioned but end up deterring legitimate users rather than malicious ones.

The new way of validating user accounts is to deploy a system that can mimic the intuition of a human. Rather than looking at a driver’s licence or passport, this system looks at content and behavioural data for both the account and the user who set it up, and does so in real time. This data provides ample clues to predict, with precision, whether a given account is fake or not.
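Hunt doesn’t name a specific model, but the approach he describes maps naturally onto a supervised classifier trained on behavioural features. The snippet below is a generic scikit-learn illustration with invented feature names and toy data; it is not Smyte’s system.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Toy behavioural features per account (all invented for illustration):
# [signup_to_first_post_seconds, posts_per_hour, fraction_of_posts_with_links,
#  distinct_ips_first_day, profile_completeness_0_to_1]
X = np.array([
    [86400, 0.2, 0.10, 1, 0.9],   # slow start, low volume      -> looks legitimate
    [120,   0.1, 0.05, 1, 0.8],
    [5,     40.0, 0.95, 7, 0.1],  # instant, high volume, spammy -> looks fake
    [3,     25.0, 0.90, 5, 0.0],
])
y = np.array([0, 0, 1, 1])  # 0 = legitimate, 1 = fake

# Train a small ensemble on the labelled examples.
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Score a previously unseen account in "real time".
new_account = np.array([[10, 30.0, 0.99, 6, 0.0]])
print(model.predict_proba(new_account)[0, 1])  # estimated probability the account is fake
```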

