TikTok, the short-video app known for hosting viral dance challenges and comedy skits, has said it aspires to be an “uplifting and welcoming app environment” for millions of young users.
Long casting itself as an “apolitical” entertainment platform, it has banned all political advertising, and no major US politicians have official accounts on it.
But as one of the most polarising elections in US history nears, and its legions of users increasingly post political content, the Chinese-owned app faces its first major moderation test: preventing its platform from being poisoned by politics.
At the same time, with growing scrutiny in the west over its alleged ties to Beijing, and the threat of a ban in the country from US President Donald Trump, TikTok is under more pressure than ever to maintain the appearance of political neutrality.
“They’re fighting this public relations campaign that other social platforms haven’t had to in the same way,” said Laura Garcia, a journalist at counter-disinformation non-profit First Draft News. “The story [around it] is that it’s China spying on you and everything you do.”
In response, TikTok, which surpassed 2bn downloads earlier this year, is rushing out eleventh-hour policies and clarifications to bring itself into line with more experienced and deep-pocketed counterparts such as Facebook and Twitter.
In recent weeks it has explicitly laid out how its policies ban voter intimidation and false claims about voter fraud. This week it also blocked videos promoting white nationalism — more than a year after Facebook took similar action.
But experts warned the very technology that has helped it explode in popularity means it will struggle to contain the growing tide of political content.
“The [recommendation] algorithm is both its strength and its Achilles heel,” said Ms Garcia. “If I like a video that is spreading coronavirus [misinformation], I’ll be seeing lots of . . . fake cures and remedies.”
Beyond the algorithm
Although no prominent US politicians yet have official TikTok accounts, political content has become rife on the ByteDance-owned platform since it was launched in the US in 2018.
Much of it takes the form of memes, protest footage and politically affiliated hashtags, which have soared in popularity in recent months. Videos labelled #Trump2020 have been viewed 13bn times, compared with 3.4bn in May, while those featuring the hashtag #Biden2020 have been viewed 3.8bn times to date, compared with 1.9m times in February.
A category of aspiring political influencers has also emerged, with some would-be stars organising themselves into so-called “hype houses” — named after the collaborative mansions in which popular creators live and work together. The most popular political account, the Conservative Hype House, boasts 1.5m followers — roughly double the number it had in May.
According to Juan Carlos Medina Serrano, a data scientist at the Technical University of Munich, political content has exploded on TikTok partly because it is seen as an easier route to internet fame. “Users think it’s a place to become a political influencer quickly where it would take you a lot of time on YouTube,” he said.
However, alongside genuine political videos is a growing wave of the more troublesome content that also plagues Facebook and Twitter, including voter misinformation, violence-inciting content and foreign interference.
In the first half of the year, TikTok took down nearly 322,000 videos for violating its hate speech policies, and a further 41,820 for breaches of its broad misinformation and disinformation rules.
In some areas, the platform has been more proactive than its more established rivals, for instance blocking hashtags related to the QAnon conspiracy theory in July, a month before Facebook took action against it.
It was also one of the first platforms to label misinformation related to Covid-19, said Ms Garcia. “If any of a list of hashtags were used, even if the video didn’t violate the guidelines, it took you to content [TikTok] had curated from the WHO and verified sources of information.”
But it has lagged behind other platforms in spelling out clear policies around certain election-related scenarios, according to some researchers. For example, it only clarified in recent weeks how its existing policies would be applied to ban false claims about voter fraud and voter intimidation. It also said it would stifle unverified claims such as premature declarations of victory or speculation about a candidate’s health.
Like Facebook, it has launched a US election hub, providing authoritative information about the voting process and about misinformation.
So far, TikTok’s record on enforcement has been mixed. Marcel Schliebs, a researcher at the Oxford Internet Institute, pointed to videos that appeared designed to circumvent its policies. Several attempted to undermine the legitimacy of mail-in voting with unverified claims, and videos with the hashtag #voterfraud have garnered 7.5m views.
QAnon content also has a continued presence on its platform, according to an analysis by counter-misinformation group Predicta Lab. Among the most popular is a video posted by a pro-Trump influencer, which features Tom Hanks, Hillary Clinton and others and has had more than 750,000 views since it was posted at the start of July.
“It has to be such a multi-faceted approach [to moderation],” said Ms Garcia. “Maybe [content creators] don’t use QAnon in the name of their videos [but use] specific profile pictures which [are flags] for people looking for QAnon content.”
TikTok said that it had a “cross functional team of experts across safety, security, product, policy” that had been “working to prepare for this since last year, and are focused on protecting the integrity of our platform including identifying and removing misinformation related [to the] election”.
It added that this had included “scenario planning across a range of potential issues”.
‘It’s so hard to fight’
In the battle to contain problematic political content, TikTok’s structure is a double-edged sword, said Mr Medina Serrano. On the one hand, unlike Facebook and Twitter, it has the advantage of not allowing users to simply post links to articles from other sites, thereby shutting off a traditional route for the spread of misinformation.
However, its struggle to contain these videos is partly a function of the way its algorithm surfaces content. “You can have five followers but if you produce something that resonates with your audience, a video can have 100,000 views,” said Ms Garcia, adding that this route to going viral was harder for researchers to monitor.
According to Samuel Woolley, a professor at the University of Texas at Austin’s school of journalism, TikTok’s relative inexperience in the field and more limited resources also put it at a disadvantage. “[Older platforms] not only have a lot more money and staffing power, but they also have a longer history of working on this stuff.”
No matter what steps TikTok takes, experts warned that the very creativity that makes the platform so popular will always make identifying and removing questionable political content difficult.
“It could just be someone delivering a piece to camera; it could be a sound and a picture of someone’s screen; it could even be a collection of pictures of cats with [misinformation-related] tags on them,” said Ms Garcia. “It’s so, so, so hard [to fight].”