Company That Aims to Solve the ‘Crisis of Toxicity Online’ Makes Money From the Daily Caller and Ben Shapiro

Photo: Maranie R. Staab, Getty Images

Like just about every other corner of the web, including this one, the Daily Caller’s website is littered with ads. I can count seven on the story I have open while writing this (which, if you’re curious, is a blog heckling the Unicode Consortium for daring to add a pregnant man to the impending emoji roster). There are two ads for something called “benefiber,” another for a $US120 ($162) pillow promising to cure acid reflux, and four more for Oculus charging cables, hospital admin software, grocery store coupons, or pleated polo shorts (now 30% off!).

There’s a pretty good chance that pleated shorts-seller is blissfully unaware that their wares could end up on the Daily Caller, a site co-founded by Fox News host Tucker Carlson, in the first place. Their ad popping up on my screen was the result of an opaque, automated mess of algorithmic choices overseen by any number of obscure ad-serving companies, each taking its own cut from every ad click.

When asked point-blank, most of these companies will tell you that their tech doesn’t touch unsavoury parts of the web, and fuelling this kind of content is the last thing they’d ever want to do. Not all of them are telling the truth.

One of these companies is OpenWeb, an adtech platform that boldly pitches itself as the answer for “toxicity online,” promising to bring “quality conversations” to publishers and hate-free content to advertisers. It’s a selling point that’s netted the company a solid $US73 ($98) million in VC funding, dozens of deals with big-name web outlets, and most recently, the addition of noted marketing professor/gamestonk hater Scott Galloway to its board of directors. But behind the scenes, a Gizmodo investigation found, OpenWeb’s tech is used by — and likely making a killing off of — some of the most politically contentious corners of the web. And it doesn’t seem too inclined to stop.

“Individuals in adtech will jump to say, ‘I voted Dem,’ or ‘I worked for Obama,’ and say they care deeply about the state of America — including the rise of extremism, racism, white supremacy and the like,” said Claire Atkin, a marketing expert and cofounder of a consultancy dedicated to helping marketers root out fake news and far-right sites from their media buys. “But they don’t realise that their inability to draw a line for what’s not ok on their platforms is what’s actually driving the issue.”

Different companies will have different reasons why they don’t want to draw that line. Generally, Atkin said, it’s one of three things: the most obvious is that adtech is a numbers game, and a company plugging its tech into more websites means that it’s taking more cuts from each. In other cases, a company might be worried that shunning these sorts of sites would alienate the conservative clients who are already convinced that Big Tech is determined to muzzle them.

The last scenario, Atkin said, is also the most libertarian: the belief that adtech is pretty much the internet’s plumbing, and that a company should be neutral.

Adtech is a roughly $US455 ($614) billion dollar industry that’s drawn ample scrutiny from the U.S. Federal Trade Commission, the U.S. Department of Justice, dozens of U.S. lawmakers, and countless consumers. At the end of the day though, it is just some really expensive, legally dubious plumbing, built for the purpose of taking dollars from one side of the internet — an advertiser’s budget — to wherever on the web that advertiser’s content plays. The scary thing is that we can’t say for sure where those dollars end up.

We got our first chance to pry open this black box last year when a UK trade group published the first-ever study detailing how the dollars from roughly 50 different advertisers and agencies were divvied up across the web over the course of three months. For every ad dollar spent, the study found, about half ($0.69) actually made it to the website where you’d see that ad, while a third ($0.46) was doled out to the myriad tech intermediaries behind the scenes. The last 20 cents wound up in what the researchers called an “unknown delta”: a Bermuda triangle at the centre of the web where these billions of dollars just… vanished.

Talking heads in the ad industry all have their own takes on where that money ends up, with some alluding to what everyone’s already known for decades: the ad industry is full of lying liars who lie. Some adtech players, for example, have been caught using their middleman role to overcharge publishers and advertisers alike, because they know neither party has the means to double-check their numbers.

Agencies and publishers are too busy “struggling with small margins,” Atkin said, and advertisers “just don’t have the sophistication” to wrangle this tech on their own. There have been a few feeble attempts by the adtech sector to self-regulate these bad apples away, like asking publishers to adopt specific standards for interacting with the characters buying up their ad space. That effort went as well as you’d expect.

The staggering number of “different types of relationships in the adtech stack” means that these self-imposed standards just can’t cover them all, Atkin said. “That leads to mislabeling, misunderstanding, endless ‘nuanced’ reactions to questions about how things are, or how they should be.”

It also leads to a never-ending deluge of buzzy pitches from middlemen, like OpenWeb, pitching themselves as the answer to any hot-button issue big brands are willing to throw their big brand money at. Last summer, that problem was hate speech. A campaign to pull ad dollars from Facebook for the month of July in the hopes of spurring the company to do literally anything about toxic content got support from some of the country’s biggest brands. In reality, nobody ended up pulling much of anything from the platform; just about every company continued to run ads on Facebook overseas or through third-party channels.

When asked why they did this, brands would say that their problem isn’t with the rampant misinformation, homophobia, outright violence, or anything else unsavoury that Facebook has utterly failed to moderate. The problem was that this content wasn’t “safe” for their brands to be seen alongside, so they just shuffled their dollars to content that was.

That was the cue for OpenWeb — formerly called Spot.IM, an adtech org whose main product was commenting tech for web publishers — to rebrand itself as the safe haven these brands were looking for. About two weeks before the Facebook “boycott” was set to kick off, OpenWeb co-founder Nadav Shoval published a blog detailing exactly how his company’s tech addressed the “issues of racism and hate” that Facebook was struggling to handle.

“Individuals are responsible for the things that they say — but when technology provides a platform for these ideas to be shared, and then actively promotes the spread of hateful and harmful ideas in order to monetise them, they too are responsible,” Shoval wrote. “We need to demand more from the hosts of society’s conversations. And we need to support the many places and platforms that host diverse voices and groups to keep our democracy alive.”

This democracy-saving tech, as it turns out, is the same product Shoval had already been selling: a “community engagement platform” that appends every story on a given news pub with a souped-up comments section that lets outlets add polls, live feeds, and a ton of other perks to keep readers engaged, commenting, and clicking.

These comments are overseen by an algorithmic moderator designed to detect the sort of nasty content that inevitably creeps into any conversation about any news story ever. OpenWeb’s tech comes with filters attuned for awfulness like “author attacks” or “incivility,” and purports to scan every comment before it’s published to ensure only the freshest, highest-quality reader comments get left under a given story. (Hilariously, this auto-moderation means OpenWeb’s tech is literally censoring commenters on publications that condemn tech companies for censoring conservatives.) It also includes ads. A lot of ads. Alongside your comments, which are mined for data used to target you with more ads.

Is it kind of ugly to look at? Absolutely. But it also gives revenue-starved publications a chance to squeeze out a few more cents, while giving advertisers a place less vile than Facebook where their ads can run — places like HuffPo, Refinery29, and CBS News, all listed on OpenWeb’s site as the kind of quality content you can expect from putting your ad dollars in this specific black box.

Luckily, it’s a black box that we were able to open. Remember those industry standards we were talking about before? The ones meant to make all this stuff a bit less mind-numbing, but ended up doing the exact opposite? It turns out OpenWeb uses at least one of them: a tool with the catchy (and easily pronounceable) title of sellers.json. In a nutshell, these are public ledgers meant to be browsed by ad-buying folks curious about where their dollars might wind up — and where their ads might run — if they partner with a certain intermediary.

These files are typically more broken and confusing than that lil’ summary, but luckily for us, OpenWeb’s public ledger is pretty easy to read. Scrolling down the page shows you dozens (and dozens) of blogs, columns, personal diaries, and digital newspapers selling their ad space through OpenWeb’s tech. In other words, every ad dollar the company swallows has a chance to turn into a few cents for any one of the lucky sites listed here.
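For readers curious what one of these ledgers actually looks like under the hood: a sellers.json file is just a JSON document, defined by the IAB Tech Lab spec, listing every party authorised to sell ad space through a given intermediary. The sketch below parses a minimal, entirely made-up payload (the domains and seller IDs are hypothetical, not OpenWeb’s real ledger) and pulls out the direct publishers — the same basic exercise we did by hand when scrolling OpenWeb’s file.

```python
import json

# A minimal, hypothetical sellers.json payload following the IAB Tech Lab
# spec. The contact email, seller IDs, and domains below are invented for
# illustration only.
SAMPLE = """
{
  "contact_email": "adops@example-adtech.com",
  "version": "1.0",
  "sellers": [
    {"seller_id": "1001", "name": "Example News",
     "domain": "example-news.com", "seller_type": "PUBLISHER"},
    {"seller_id": "1002", "name": "Example Reseller",
     "domain": "example-reseller.com", "seller_type": "INTERMEDIARY"}
  ]
}
"""

def list_publishers(raw):
    """Return the domains of entries marked as direct publishers.

    Per the spec, seller_type is PUBLISHER (direct inventory),
    INTERMEDIARY (reselling someone else's inventory), or BOTH.
    """
    ledger = json.loads(raw)
    return [
        seller["domain"]
        for seller in ledger.get("sellers", [])
        if seller.get("seller_type", "").upper() in ("PUBLISHER", "BOTH")
    ]

print(list_publishers(SAMPLE))  # → ['example-news.com']
```

In practice you’d fetch the real file from an ad company’s domain (the spec says it lives at the root, e.g. `example-adtech.com/sellers.json`) and eyeball the domains yourself — which is all the “investigation” here required.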

Considering the whole saving-democracy-with-healthy-conversations sales pitch that assaults you the second you open OpenWeb’s homepage (and every page after that), you’d probably think these cents would exclusively be going to safe, friendly sites hosting safe, friendly conversations. You’d also be wrong. Under a story about Los Angeles reinstating its mask mandate published by Ben Shapiro’s right-wing news site the Daily Wire, this tech is used to post jeers about “authoritarian regimes,” with some anti-vax messages sprinkled in for good measure. It’s also used to spread rumours of election fraud on the Washington Times, jabs about Kamala Harris’s weight in the Daily Caller, and any covid-19 conspiracy you can think of on a site called the Free Thought Project.

All told, there are at least a dozen sites on OpenWeb’s ledger that push the sorts of hyperpartisan, hate-spewing stories responsible for the “crisis of online toxicity” the company keeps saying it’s determined to snuff out. When we asked how the hell OpenWeb approved these sites to begin with, co-founder and COO Roee Goldberg had this to say:

We have a strong internal standards policy for all new partnerships. As a part of this policy, we consult several databases and indexes that compile and monitor fake news, hate speech, disinformation, and conspiracy websites. And, while an audit of our more than 1,000 publisher partners has been completed, I appreciate you bringing these cases to light and expressing your concerns regarding these sites.

We take our commitment to improving the web seriously. In just the past few months, we have declined interest from major publishers that we felt did not meet our standards — including, for instance, both Newsmax and Breitbart. Of course, our partnership with a given publisher does not imply any kind of endorsement of their views or the content that they host.

Practically speaking, there isn’t a whole lot of difference between the right-wing trolls reading Breitbart and the right-wing trolls watching Ben Shapiro. Sure, it’s good news that OpenWeb isn’t helping fund some of the more toxic sides of the internet, but who’s deciding where to draw that line? Goldberg wouldn’t say whether anyone at the company — or any human at all — even reviewed some of these sites before lumping them into their ledger.

Instead, that responsibility apparently falls on nameless algorithms working with third parties to dump any given site into categories like “conspiracies” or “fake news,” and those arbitrary labels dictate whether that site is non-toxic enough to get a little more money.

Elsewhere on the web, countless sets of arbitrary, opaque algorithms are running countless sets of arbitrary, opaque equations on the sites and stories from real, legitimate news outlets employing real, legitimate people. People whose entire livelihoods can collapse when these algorithms decide that stories about the gay community are “too adult” to run ads alongside, or that stories about racial justice are too “upsetting” or “violent.” Hundreds, if not thousands, of adtech vendors are profiting from these sorts of judgment calls, but none of them seem ready to deal with what they’ve created.

“This is the worst game of hot potato,” said Nandini Jammi, a fellow marketing guru who teamed up with Atkin last summer to co-found their brand-safety consultancy. Five years ago, she was one of the figures behind Sleeping Giants, an anonymous Twitter account that singlehandedly convinced thousands of companies to pull their ads from Breitbart’s site.

“Advertisers don’t feel comfortable making value judgements, so they pass those decisions onto their tech partners,” Jammi went on. “The problem is that their tech partners don’t feel comfortable making value judgements either.” Instead, we get more and more startups pitching more and more black boxes, and nobody seems worried about what might be festering inside.