The Guardian
Dan Milmo, global technology editor
Wed 3 Dec 2025 07.00 CET
Researchers uncovered 354 AI-focused accounts that had accumulated 4.5bn views in a month
Hundreds of accounts on TikTok are garnering billions of views by pumping out AI-generated content, including anti-immigrant and sexualised material, according to a report.
Researchers said they had uncovered 354 AI-focused accounts pushing 43,000 posts made with generative AI tools and accumulating 4.5bn views over a month-long period.
According to AI Forensics, a Paris-based non-profit, some of these accounts attempt to game TikTok’s algorithm – which decides what content users see – by posting large amounts of content in the hope that it goes viral.
One account posted up to 70 times a day, or at the same time each day – an indication of automation. Most of the accounts were launched at the beginning of the year.
Last month TikTok revealed there were at least 1.3bn AI-generated posts on the platform. With more than 100m pieces of content uploaded every day, labelled AI material represents only a small part of TikTok’s catalogue. TikTok is also giving users the option of reducing the amount of AI content they see.
Of the accounts that posted most frequently, half focused on content related to the female body. “These AI women are always stereotypically attractive, with sexualised attire or cleavage,” the report said.
AI Forensics found the accounts failed to label half of the content they posted, and that fewer than 2% of posts carried TikTok’s own label for AI content – something the non-profit warned could increase the material’s deceptive potential. Researchers added that the accounts sometimes escaped TikTok’s moderation for months, despite posting content barred by its terms of service.
Dozens of the accounts identified in the study have since been deleted, researchers said, indicating that some had been taken down by moderators.
Some of the content took the form of fake broadcast news segments pushing anti-immigrant narratives, alongside material sexualising female bodies, including girls who appeared to be underage. Accounts focused on the female body made up half of the 10 most active accounts, AI Forensics said, while some of the fake news pieces featured known broadcasting brands such as Sky News and ABC.
Some of the posts were taken down by TikTok after the Guardian referred them to the platform.
TikTok said the report’s claims were “unsubstantiated” and the researchers had singled it out for an issue that was affecting multiple platforms. In August the Guardian revealed that nearly one in 10 of the fastest growing YouTube channels globally were showing only AI-generated content.
“On TikTok, we remove harmful AIGC [artificial intelligence-generated content], block hundreds of millions of bot accounts from being created, invest in industry-leading AI-labelling technologies and empower people with tools and education to control how they experience this content on our platform,” a TikTok spokesperson said.
Measured by views, the most popular accounts highlighted by AI Forensics had posted “slop”, the term for AI-made content that is nonsensical, bizarre and designed to clutter people’s social media feeds – such as animals competing in an Olympic diving contest or talking babies. The researchers acknowledged that some of the slop content was “entertaining” and “cute”.