CalMatters and The Markup used Facebook’s ad library to count the millions of dollars it makes after violent news events
In the 10 weeks after the shooting, advertisers paid Meta between $593,000 and $813,000 for political ads that explicitly mentioned the assassination attempt, according to The Markup’s analysis. (Meta provides only estimates of spending and reach for ads in its database.)
Even Facebook itself has acknowledged that polarizing content and misinformation on its platform have incited real-life violence. An analysis by CalMatters and The Markup found that the reverse is also true: real-world violence can sometimes open new revenue opportunities for Meta.
Across all of the political ads mentioning Israel from the attack through the last week of September, organizations and individuals paid Meta between $14.8 million and $22.1 million for ads seen between 1.5 billion and 1.7 billion times on Meta’s platforms. Meta made much less from ads mentioning Israel during the same period the year before: between $2.4 million and $4 million for ads seen between 373 million and 445 million times. At the high end of Meta’s estimates, that was a 450 percent increase in Israel-related ad dollars for the company. (In our analysis, we converted foreign currency purchases to current U.S. dollars.)
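Because Meta reports spend only as a lower and upper bound for each ad, totals like these have to be tallied as two separate sums. Below is a minimal sketch of that arithmetic in Python; the ad records and exchange rates are made-up placeholders, not the data or rates used in the analysis.

```python
# Minimal sketch of how range totals like those above can be computed.
# Meta reports spend per ad only as a lower/upper bound in the purchase
# currency; the exchange rates and sample records here are illustrative.
USD_RATES = {"USD": 1.0, "EUR": 1.08, "ILS": 0.27}  # assumed spot rates

sample_ads = [
    {"spend": {"lower_bound": "100", "upper_bound": "199"}, "currency": "USD"},
    {"spend": {"lower_bound": "1000", "upper_bound": "1499"}, "currency": "ILS"},
]

# Convert each bound to U.S. dollars, then sum the bounds separately.
low = sum(float(a["spend"]["lower_bound"]) * USD_RATES[a["currency"]]
          for a in sample_ads)
high = sum(float(a["spend"]["upper_bound"]) * USD_RATES[a["currency"]]
           for a in sample_ads)
print(f"Estimated spend: ${low:,.0f} to ${high:,.0f}")
```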
The American Israel Public Affairs Committee, a lobbying group that promotes Israel, was the top spender on ads mentioning Israel. In the six months after October 7th, its spending increased more than 300 percent over the previous six months, to between $1.8 million and $2.7 million, as the organization peppered Facebook and Instagram with ads defending Israel’s actions in Gaza and pressuring politicians to support the country.
As the war has roiled the region, AIPAC paid Meta about as much for ads in the 15 weeks following October 7th as it had in the entire year before.
To examine the assassination attempt merchandise, we ran a simple search of Meta’s Ad Library for ads that mentioned “assassination,” including in our analysis any that also mentioned “Trump” as well as hundreds of others that didn’t mention the former president by name but were clearly related to the shooting.
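The Ad Library exposes this data through a public API as well as its web interface. Here is a minimal sketch of what such a keyword query could look like in Python; the access token, date range, and country filter are illustrative placeholders, not the newsrooms’ actual parameters or pipeline.

```python
# Hypothetical sketch: querying Meta's Ad Library API for ads mentioning
# "assassination". The token, dates, and filters are illustrative only.
import requests

AD_ARCHIVE_URL = "https://graph.facebook.com/v19.0/ads_archive"
ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"  # requires a Meta developer account

params = {
    "search_terms": "assassination",
    "ad_type": "POLITICAL_AND_ISSUE_ADS",
    "ad_reached_countries": '["US"]',
    "ad_delivery_date_min": "2024-07-13",  # day of the shooting (illustrative)
    "fields": "page_name,ad_creative_bodies,spend,impressions,currency",
    "limit": 100,
    "access_token": ACCESS_TOKEN,
}

ads = []
url, query = AD_ARCHIVE_URL, params
while url:
    resp = requests.get(url, params=query).json()
    ads.extend(resp.get("data", []))
    url = resp.get("paging", {}).get("next")  # follow pagination links
    query = None                              # "next" URL already carries params

# Separate ads that mention Trump by name from the rest, which would still
# need manual review, mirroring the filtering step described above.
trump_ads = [a for a in ads if any("trump" in body.lower()
                                   for body in a.get("ad_creative_bodies", []))]
print(f"{len(ads)} ads total, {len(trump_ads)} explicitly mention Trump")
```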
“First they jail him, now they try to end him,” one ad read. A conspiratorial ad for a commemorative two-dollar bill claimed “the assassination attempt was their Plan B,” while “Plan A was to make Biden abandon the presidential campaign.” Some ads used clips from the film JFK to suggest an unseen, malevolent force was at work in the shooting.
Gun advocates paid for ads that used the assassination attempt as a foreboding call to action. One ad promoting a firearms safety course noted that “November is fast approaching.” A clothing business said in an ad that, since “the government can’t save you” from foreign enemies, Americans “need to be self-reliant, self-made, and self-sufficient.”
“Because when those bullets zip by, you are clearly on your own,” the ad read.
After the mass school shooting in Parkland, Fla., the NRA increased its spending on Google and Facebook ads, the Tech Transparency Project noted in one report. In 2018, the year of the shooting, Meta received “more than $2 million in advertising fees from the NRA starting in May of that year,” according to the report, which also found that “NRA ad spending reached its highest levels on Google and soared on Facebook” after a week of mass shootings the following year left dozens of people dead.
Just days before the January 6th insurrection, the Tech Transparency Project found that Meta hosted ads offering gun holsters and rifle accessories in far-right Facebook groups.
Meta’s ad policies forbid calling for violence. But when faced with crucial tests of its content moderation practices, the company has repeatedly failed to detect and remove inflammatory ads. A 2018 report, commissioned by Facebook itself, found that its platform had been used to incite violence in Myanmar, and that the company hadn’t done enough to prevent it.
Alia Al Ghussain, a researcher on technology issues at Amnesty International, said that as troubling as some ads might be in English, ads in other languages may be even more likely to pass Meta’s content moderation. “In most of the non-English-speaking world, Facebook doesn’t have the resources that it needs to moderate the content on the platform effectively and safely,” she said.
Despite later admitting responsibility for violence in Myanmar, the company continues to be faulted for gaps in its international moderation work. In one test, an advocacy organization found that the company approved ads calling for the murder of ethnic groups in Ethiopia. More recently, a similar test by another advocacy group found that Meta still approved ads explicitly calling for violence against Palestinians, a flagrant violation of its own rules.
“If ads which are presenting a risk of stoking tension or spreading misinformation are being approved in the US, in English, it really makes me fearful for what is happening in other countries in non-English-speaking languages,” Al Ghussain said.
I dislike that framing, because before the internet and media anonymization, newspapers didn’t have anywhere near this scale of money to make off propaganda. What’s happening now is more extreme and damaging than what used to happen.