You might have a certain idea of where cybercriminals congregate. Maybe you imagine a black-hoodied operator working at night on the dark net, or something out of Mr Robot. In fact, these operations are often much simpler and more mundane than they seem.
Researchers at the cybersecurity firm Talos found 74 Facebook groups used to commit various cybercrimes, including selling stolen bank and credit card information, stealing account credentials, and selling spamming tools. The groups were sizeable: combined, they had about 385,000 members.
That’s a huge number of people operating criminal enterprises. Nestled among Facebook’s more than two billion monthly users, however, it shrinks in relative size. The researchers showed how, despite initial efforts, Facebook struggled to deal with the groups until Talos intervened directly to get most of them taken down.
Facebook’s failure to remove groups of hundreds of thousands of cybercriminals operating openly mirrors the site’s struggles with bad behaviour across the spectrum, including misinformation, hate speech, and incitements to violence. The incident also lays bare how bad actors can be reinforced and amplified by Facebook’s algorithms.
The criminal Facebook groups were easy to locate with keyword searches such as “spam”, “carding” and “CVV”, all typical cybercriminal language related to credit card theft.
“Of course, once one or more of these groups has been joined, Facebook’s own algorithms will often suggest similar groups, making new criminal hangouts even easier to find,” the researchers wrote. “Facebook seems to rely on users to report these groups for illegal and illicit activities to curb any abuse.”
Facebook’s “you tell us when something’s wrong” approach is a refrain we’ve heard across the board.
Earlier this week, Gizmodo reported a United Nations investigator’s criticism that, in the case of the Myanmar genocide, Facebook still had “a long way to go” before it stopped relying on outsiders to report bad behaviour. Even reporting such behaviour to Facebook doesn’t always work.
The company also leaned into its reliance on users following the March terrorist attack in Christchurch, New Zealand, during which the gunman streamed the mass murder of dozens on Facebook Live.
“During the entire live broadcast, we did not get a single user report,” Facebook said, explaining why it failed to remove the 17-minute livestream until it had been viewed by thousands. “This matters because reports we get while a video is broadcasting live are prioritised for accelerated review.”
“Talos initially attempted to take down these groups individually through Facebook’s abuse reporting functionality,” the researchers wrote in their report.
“While some groups were removed immediately, other groups only had specific posts removed. Eventually, through our contact with Facebook’s security team, the majority of malicious groups was quickly taken down, however new groups continue to pop up, and some are still active as of the date of publishing.”
For its part, Facebook acknowledged the problems.
“These groups violated our policies against spam and financial fraud and we removed them,” Facebook said in a statement to media. “We know we need to be more vigilant and we’re investing heavily to fight this type of activity.”