A study by the human rights organization Global Witness has found that TikTok's algorithm recommends pornographic and sexually explicit content to children's accounts, the BBC reports.
The researchers created fake accounts posing as children and enabled the platform's safety settings, yet search results still surfaced recommendations for sexually explicit content. TikTok has stated that it is committed to providing safe and age-appropriate content, and that it took action to fix the problem as soon as it became aware of it.
Global Witness researchers created four TikTok accounts in late July and early August, posing as 13-year-olds. They registered with fictitious birth dates, and no additional information was required to verify their age. The accounts were set to "restricted mode," which, according to TikTok, should prevent users from seeing adult content such as sexually explicit videos. Even without searching for anything, the researchers found sexually explicit material in the section that suggests videos a user might be interested in.
All of the videos were embedded within other, more innocuous content, which allowed them to bypass content moderation.
Ava Lee, a spokeswoman for Global Witness, said the discovery came as an unpleasant surprise to the researchers, adding that TikTok not only fails to protect children from inappropriate content but actively recommends it to them as soon as an account is created.
Global Witness primarily investigates how technology affects human rights, democracy and climate change. Its researchers first came across the problem in April while conducting a separate investigation. TikTok was informed at the time and reportedly took corrective steps. The findings from July and August, however, show that sexually explicit content remains freely available and is even recommended on the app.
TikTok has stated that the app has at least 50 features designed to protect children and teenagers, and that nine out of ten harmful videos are “caught” before they are even published.
Since 25 July, the UK's Online Safety Act has required companies to protect children online. Platforms must now use effective age verification to ensure that pornographic content is not accessible to children, and their algorithms must be adjusted so that content encouraging self-harm, suicide or eating disorders is not shown to them.