At least 30 major advertisers have pulled their advertising from Twitter after it was revealed that their ads were displayed alongside tweets soliciting illegal child abuse content. For example, a promoted tweet from Scottish Rite Children’s Hospital in Texas was shown alongside toxic tweets related to child sexual abuse.
Advertisers Revolt Against Twitter
Reuters reported that at least 30 large brands suspended their campaigns after it was revealed that their promoted tweets were shown alongside toxic tweets. A Twitter spokesman was quoted as saying that the research and conclusions by the cybersecurity firm (which studied tweets and accounts during the first twenty days of September 2022) were not representative of Twitter’s efforts to combat illicit activities. But the Reuters article quoted multiple big brand advertisers who were notified that their ads had appeared next to the toxic tweets.
Twitter’s Inability to Accurately Detect Toxic Content
The background to Twitter’s toxic content problem first came to light in an article published by The Verge. The article recounts a Twitter project to create a platform similar to OnlyFans, where users could pay to share sexually explicit content. Before launching the new service, Twitter tasked a group of employees, called the Red Team, with testing whether Twitter could successfully weed out harmful content so that the platform didn’t devolve into the sharing of illegal content. The project was halted when the Red Team determined that Twitter was incapable of detecting abusive and toxic content. So, in the spring of 2022, Twitter concluded that it was ill-equipped to launch the service, and it was shelved.

However, according to the cybersecurity firm Ghost Data, Twitter continued to have difficulty catching rogue users and accounts that were sharing illicit content. Ghost Data conducted an investigation in September 2022 to discover how widespread the child exploitation problem was on Twitter. Starting with a group of known child exploitation accounts, the researchers mapped out toxic accounts through linked follower connections, eventually identifying over 500 accounts responsible for nearly 5,000 tweets related to illicit child abuse activities. The researchers noted that these accounts were all in English and that they hadn’t investigated child abuse on Twitter networks in other languages. They concluded that further research into non-English accounts may reveal even more users sharing child abuse content.
Researchers Claim Twitter’s Enforcement Is Ineffectual
A startling finding from the report is that Twitter took action against just over 25% of the accounts identified as sharing explicit child abuse content during the research period covering the first twenty days of September 2022. The researchers concluded that although they identified many illicit activities and accounts on Twitter, these likely represent just a fraction of the true scope of the problem. That conclusion by Ghost Data seemingly contradicts a statement issued by Twitter and reported by Reuters that Twitter has “zero-tolerance” for these kinds of activities, particularly because months have passed since Twitter’s Red Team identified the company’s problems with detecting toxic content. Reuters also reported that Twitter stated it is hiring more employees to “implement solutions.”