TikTok is being flooded with videos depicting young girls in sexualized clothing or provocative poses, with comments linking to Telegram groups that sell child sexual abuse material, according to a new report.
AI-generated videos showing young girls in revealing clothing or poses are garnering millions of likes and views on TikTok, despite such content being prohibited.
The Spanish fact-checking organization Maldita discovered over 20 accounts on the platform that had published more than 5,200 videos featuring young girls in bikinis, school uniforms, and tight clothing. In total, these accounts have amassed over 550,000 followers and nearly 6 million likes.
The comments contain links to external platforms, such as Telegram communities, that sell child sexual abuse material, according to the report. Maldita said it reported 12 Telegram groups identified during the investigation to the Spanish police.
The accounts also profit by selling AI-generated videos and images through TikTok's subscription service, in which subscribers pay a monthly fee for access to a creator's content. Under its agreement with creators, the platform takes about 50 percent of the revenue from this model.
The report comes amid debates in countries such as Australia and Denmark over restricting social media use for people under 16 as a way to keep young users safe online.
TikTok requires creators to label videos made with artificial intelligence. Content can also be removed from the platform if it is deemed "harmful to people" under its community guidelines.
However, the Maldita report found that most of the analyzed videos carried no label indicating that artificial intelligence had been used.
Some videos featured the "TikTok AI Alive" logo, which is automatically applied to content made with the platform's tool for turning photos into videos.
In statements to Euronews Next, Telegram and TikTok said they are "fully committed" to preventing child sexual abuse material on their platforms.
Telegram said it scans all media uploaded to its public platform and compares it against child sexual abuse material previously removed from the platform to prevent its spread.
"The fact that criminals have to use private groups and algorithms from another platform for their growth proves the effectiveness of Telegram's own moderation," the statement said.
Telegram said that in 2025 it removed over 909,000 groups and channels containing child sexual abuse material.
TikTok, for its part, said that 99% of content harmful to minors is removed proactively, as is 97% of offending AI-generated content.
The platform said it immediately removes or bans accounts that share sexually explicit content involving children and reports them to the National Center for Missing and Exploited Children (NCMEC).
TikTok also said that between April and June 2025 it removed over 189 million videos and banned over 108 million accounts.