IMDA flags X, TikTok for failure to detect and remove child exploitation, terrorism content


The Infocomm Media Development Authority (IMDA) has flagged social media companies X and TikTok for "serious weaknesses" in proactively detecting and removing egregiously harmful content.

The two firms did not adequately act against child sexual exploitation and abuse material (CSEM) and terrorism content uploaded to their respective platforms, said IMDA on Tuesday (March 31).

The industry regulator issued letters of caution to the two social media services, placing both under enhanced supervision.

Under the warning, they must regularly update IMDA on the implementation of rectification measures, such as enhancing their automated detection systems to flag violations.

The Code of Practice for Online Safety - Social Media Services requires designated social media services to proactively detect and swiftly remove CSEM and terrorism content before it is viewed by users.

In its Online Safety Assessment Report in 2025, IMDA identified 73 cases of CSEM that originated from or targeted Singapore users on X, up from 33 in 2024.

On TikTok, 17 cases of terrorism content shared by Singapore-based accounts were found.

IMDA said the child sexual exploitation cases on X involved content sharing or linking to CSEM, as well as self-generated material, and occurred despite the authority sharing its analysis of the CSEM cases and their indicators with X in 2024.

Meanwhile, the identified terrorism content on TikTok primarily comprised videos with edited footage or audio related to known transnational terrorist organisations.

Some of the identified content was reported to TikTok via its in-app user reporting mechanism, but TikTok determined that it did not violate its community guidelines.

"This demonstrated that TikTok did not accurately assess the terrorism content when they were user reported," the authority said.

The respective platforms removed the child sexual exploitation and terrorism content only after IMDA flagged the cases to them.

The regulatory authority said X and TikTok have accepted the report's findings and committed to specific measures to rectify the serious weaknesses.

They will enhance their automated detection systems by using artificial intelligence and incorporating additional signals to improve proactive detection of CSEM and terrorism content respectively.

"Should X or TikTok fail to satisfy IMDA that they have improved the effectiveness of their measures...IMDA will not hesitate to explore further options, including potential regulatory action under the Broadcasting Act."


IMDA added that it will continue to engage the six designated social media services (DSMSs) — Facebook, HardwareZone, Instagram, TikTok, X and YouTube — to highlight emerging online safety risks and ensure required measures are in place to protect Singapore users.

"DSMSs will have to remain vigilant and continually improve the effectiveness of their online safety measures, especially for children," it stated.

The authorities have made it mandatory for major app stores to implement age assurance measures to protect children under 18 from accessing age-inappropriate content.

IMDA plans to extend age assurance requirements to DSMSs and is also studying how to further enhance online safety requirements for children.


lim.kewei@asiaone.com
