X, formerly known as Twitter, is partnering with Integral Ad Science to convince advertisers they can protect their brands from the toxic content exploding on the platform. There are just two problems: X has never been more toxic, and Integral Ad Science doesn’t actually work.

Since his takeover, Elon Musk has made a point of bringing back users previously banned for violating the platform’s rules against hate, harassment, and bullying. Under Musk’s leadership, hate speech on the platform has reached record highs as the company rolls back its policy on hateful conduct.

Recent examples of how Musk is amplifying the platform’s toxicity include: personally intervening to restore an account that gained thousands of followers after posting child sexual assault material; replying to white nationalists, boosting their visibility; and launching an ad revenue sharing program that has already made cash payouts to the likes of Tim Pool ($5,800) and Ian Miles Cheong ($16,200).

But X has also recently struck up a partnership with ad verification company Integral Ad Science (IAS) to offer advertisers “sensitivity settings” to keep their ads off its own growing supply of toxic content.

Isn’t that nice? Advertisers can just opt out of appearing next to neo-Nazi content and harassment on X by using technology to filter it out.
The problem is, IAS’s own “brand safety technology” happens to be a dumpster fire. As in, it does not work.

X’s settings in the X Ads Manager

Integral Ad Science can’t detect hate speech

IAS is a leading ad verification company that offers advertisers protection from fraud and brand safety risks. In theory, it helps advertisers keep their ads from appearing alongside toxic content and verifies where those ads actually ran.

How does it work? IAS claims that its AI- and ML-powered algorithms can scan content instantaneously and understand the topic and context as well as a human reader. If IAS determines the content is “brand safe,” the ad is allowed to serve; if it finds the content unsafe, the ad is blocked.
At least that’s what they say.

IAS is extremely secretive about how its allegedly advanced and sophisticated technology works. It doesn’t publish details about its algorithms, and it doesn’t make it easy for advertisers to audit the results either. As a result, advertisers are mostly in the dark about how well this supposed protection performs.

But in 2020, IAS slipped up. It released a public demo of its new brand safety tool, “Context Control,” and let advertisers take it for a spin. (We wrote about it in BRANDED.)

So we tested it out with a bunch of white nationalist sites — and here’s what we found:

IAS’s tool didn’t seem to catch on to terms like “anti-white racism” and “white identity” — pretty standard markers for a white nationalist website. For example, American Renaissance, a white nationalist and pro-eugenics site, was rated “neutral” and therefore brand safe:

Then there’s Liberty Hangout, the website of Kaitlin Bennett, the Kent State grad who harasses students on college campuses about guns and homophobia. The good news: It wasn’t rated neutral. The bad news: It was rated positive.

Meanwhile, an innocent article about women and their sex lives that ran on Refinery29 was rated as having negative sentiment.

IAS took down the product demo within hours and later insisted the tool worked as designed, claiming that Context Control was just one of many signals used to determine whether content is brand safe. According to IAS, Context Control weighs positive, negative, and neutral elements in content based on the nouns, adjectives, verbs, and adverbs present.

That’s funny, because the product could not even discern explicitly white nationalist sites from actual news publications. Quite the opposite: sites like 4chan (full of white nationalism) and Infowars (bloated with conspiracies) were rated neutral by IAS’s tool despite being blatantly unsafe. Infowars may be clearly brand-unsafe, but it still uses neutral-sounding language — and that appears to be all the tool can see.
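To see why that kind of word-weighing falls apart, here’s a minimal sketch of a lexicon-based sentiment scorer. To be clear, this is our hypothetical stand-in, not IAS’s actual system (which remains undisclosed); the word lists and example sentences are invented for illustration. The point it demonstrates: extremist talking points written in calm, neutral language sail through, while innocuous journalism that mentions unpleasant words gets flagged.

```python
# A deliberately naive lexicon-based sentiment scorer, sketched to illustrate
# why tone-based "brand safety" signals can miss context. This is NOT IAS's
# algorithm; it is a hypothetical stand-in with made-up word lists.

POSITIVE = {"preserve", "heritage", "future", "community", "proud", "great"}
NEGATIVE = {"dangerous", "warn", "stress", "poor", "death", "attack"}

def sentiment(text: str) -> str:
    """Score text by counting 'positive' minus 'negative' lexicon hits."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    score = sum((w in POSITIVE) - (w in NEGATIVE) for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

# A neutrally worded extremist talking point contains no "negative" words...
print(sentiment("Our nation must preserve its identity and heritage for future generations."))
# -> "positive"

# ...while an innocuous health article trips the negative lexicon.
print(sentiment("Doctors warn that chronic stress and poor sleep can be dangerous."))
# -> "negative"
```

That pattern — extremist content rated positive or neutral, harmless lifestyle journalism rated negative — is exactly what we saw in the Context Control demo.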
And that’s the technology that X’s brand safety is now built on.

Good luck trying to make it work on a toxic platform

Even if IAS worked the way it claims, running ads on X would still be a bad idea.
Musk needs advertisers to make his vision for X work. His plans to monetize the platform include giving people who pay a monthly fee the right to collect ad revenue, meaning advertiser dollars could go directly to people who are radioactive for their brands. This is objectively a recipe for disaster.

Any ad dollars going to X right now support the platform, keeping the lights on for the whole cesspool even if the ads don’t run next to unsafe content. Any money you spend is money Musk can funnel to his fascist buddies in the hope of feeling some social validation.

It’s like walking into a restaurant and having the host ask you if you want to be seated in the Nazi or No Nazi section. Even if you say No Nazi, you’re still sharing space — and money — in a place that supports them.

Whether IAS works or not (and it doesn’t), your ad dollars are not safe on X.


Want to get these updates sent to your inbox? Sign up here and find out how we can defund hate speech and disinformation.