SAN FRANCISCO - Popular video-sharing app TikTok on Jan 8 issued a broad ban on "misleading information" that could cause harm to its community or the public, setting itself apart from rivals like Facebook, which says it does not want to be an arbiter of truth.
"We remove misinformation that could cause harm to an individual's health or wider public safety. We also remove content distributed by disinformation campaigns," TikTok, owned by Chinese tech company ByteDance, wrote in new guidelines which expand and add detail to its earlier rules.
TikTok, as a relative newcomer to the social media landscape, has yet to wrestle publicly with the persistent content moderation scandals that have dogged larger and more entrenched competitors.
However, the company has grown rapidly over the last year and come under scrutiny from US lawmakers concerned that it may be censoring politically sensitive content, following reports it blocked videos on protests in Hong Kong.
US officials have also raised national security concerns about TikTok's handling of user data, prompting reviews by the US Army and the Committee on Foreign Investment in the United States. TikTok says it stores US user data outside China.
According to data from research firm Sensor Tower, TikTok and its Chinese counterpart Douyin have been downloaded more than 1.5 billion times, including 680 million downloads in 2019.
TikTok's previous rules around "misleading content" appeared to focus mostly on scams, barring users from creating fake identities or posting false information to make money, but did not mention misinformation or disinformation campaigns.
By contrast, the new rules explicitly ban "misinformation meant to incite fear, hate, or prejudice", "misleading information about medical treatments", and "content that misleads community members about elections or other civic processes".
The guidelines did not explain how TikTok would determine what constitutes "misleading" content and appeared to leave leeway for interpretation in enforcement decisions.
A spokesman said the new policy would likely prompt the removal of content featuring conspiracy theories like Pizzagate, a fictitious story involving child exploitation and a supposedly Clinton-linked Washington pizzeria which went viral on social media in 2016 and prompted a man to fire an assault rifle at the pizzeria.
The spokesman said TikTok would also consider a heavily edited video that attempted to make US House of Representatives Speaker Nancy Pelosi seem incoherent to be misinformation.
Facebook and Twitter weathered intense criticism from Democrats over the video last year after declining to take it down.
On Jan 6, Facebook announced a new policy banning deepfakes and other manipulated media, but said the change would not result in the removal of the doctored Pelosi video.