Commentary: Social Media Companies Continue to Profit from Self-Harm Content

by Kalev Leetaru

 

The web’s earliest days were marked by optimism that the digital world would be an unfettered force for good. It would sweep away censorship and oppression, connect the planet, and empower anyone, anywhere, to be heard by the world. Over time, however, the web’s darker byproducts have become more apparent, with companies’ own research confirming the harm that social media, in particular, is doing to teens. A recent report sheds light on Twitter’s role in promoting adolescent self-harm such as cutting – and the company’s seeming inability to stop it.

In August, a report by the Network Contagion Research Institute and Rutgers University found that over the previous year the use of hashtags relating to self-harm had increased almost 500%, to tens of thousands of posts per month. Rather than calls for help, the posts often featured graphic images of teens harming themselves, images that were then “praised, celebrated and encouraged” by other Twitter users. Far from being a place to seek help or find support networks, Twitter is increasingly a site where teens go for acceptance and encouragement of their most self-destructive impulses, including suicidal ideation.

On paper, Twitter’s content-moderation policies prohibit posts that encourage or glorify self-harm. In reality, though, its recommendation algorithms have amplified rather than suppressed these communities. In October 2021, research found that users searching for terms relating to self-harm were steered toward communities encouraging it rather than toward support networks. At the time, Twitter claimed that it would crack down on the glorification of self-harm and suicide, but such posts have only increased.

Teens use euphemistic terms such as “raspberry filling” or “cat scratches” to refer to self-harm behaviors in order to evade simplistic keyword filters. Yet the circumscribed vocabulary, the accompanying images, and the tight network structure of these communities should make them relatively straightforward for modern AI systems to detect – raising the question of why Twitter isn’t able to do more to remove such content.
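To make the point concrete, a layered detection pass can stack exactly those cheap signals – a euphemism lexicon, attached media, and network context – before anything reaches a heavier classifier or a human reviewer. The sketch below is purely illustrative: the field names, thresholds, and lexicon entries are assumptions for this example, not a description of Twitter’s actual moderation pipeline.

```python
# Illustrative sketch of layered self-harm content triage.
# Field names, thresholds, and lexicon entries are assumptions for
# this example only; they do not describe Twitter's real systems.

EUPHEMISM_LEXICON = {
    "raspberry filling",  # euphemisms cited in the Network Contagion/Rutgers report
    "cat scratches",
}

def flag_for_review(post: dict) -> bool:
    """Return True if a post should be escalated to a heavier classifier or human review."""
    text = post.get("text", "").lower()

    # Signal 1: known euphemistic vocabulary (a cheap keyword pass).
    lexicon_hit = any(term in text for term in EUPHEMISM_LEXICON)

    # Signal 2: network context -- the post circulates inside a small,
    # densely connected cluster of previously flagged accounts.
    community_hit = post.get("flagged_neighbors", 0) >= 3

    # Signal 3: attached media, which a real pipeline would route to an
    # image classifier (omitted here).
    has_media = bool(post.get("media_urls"))

    # Escalate when the cheap signals stack up.
    return lexicon_hit and (community_hit or has_media)

# Example: a post using a known euphemism, shared within a flagged cluster.
example = {
    "text": "new cat scratches tonight",
    "flagged_neighbors": 4,
    "media_urls": [],
}
print(flag_for_review(example))  # True
```

None of this is sophisticated machine learning; it is the kind of low-cost first pass that would sit in front of the image and network analysis modern AI systems can perform.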

Twitter did not respond to multiple requests for comment about why it struggles to remove such posts. Unfortunately, this is a longstanding pattern among social media companies. Despite their outsize role as the modern digital public squares, they are under no obligation to combat the harms on their platforms or even to provide basic details about why they believe they cannot remove such content more effectively.

The reality is that these tech companies profit from such content, as every view brings in advertising dollars. Facebook has been asked repeatedly over the years whether it would refund the advertising revenue it derives from posts it later removes as violations of its harmful-content policies. The company has long refused to do so, reflecting that for social media platforms such content remains a profit center rather than a cost center.


In a revealing demonstration of the absurdity of Twitter’s own enforcement policies, one of the Network Contagion/Rutgers report’s authors tweeted a screenshot of the kind of horrific content that remains available on Twitter, despite the company’s protestations that it removes such material. In response, Twitter’s moderators locked her account for violating its gratuitous-gore policy, even as the posts she had screen-captured remained available. Moreover, although Twitter’s policy specifically exempts posts “focused on research, advocacy, and education related to self-harm” from removal, the company denied her appeal.

Asked why it removed the author’s tweet but allowed the posts she had screen-captured to remain available, the company, as usual, remained silent.

In the end, as long as companies have no legal obligation to take seriously the threat of content that glorifies self-harm and suicide, such material will continue to spread. Companies can endlessly repeat boilerplate statements that they abhor such material without taking any meaningful action to remove it – or even facing public scrutiny over their failure to do so. Perhaps if companies were forced to give researchers and journalists a fuller accounting of how they handle such content – including why they decline to act on reported posts and why their algorithms continue to promote them – social pressure could finally force change. Without transparency, companies have little reason to change. Perhaps under its new ownership, Twitter might finally pull back the curtain on its moderation decisions.

– – –

Kalev Leetaru is a leading expert in the data world and a data specialist for RealClearInvestigations. He was named one of Foreign Policy Magazine’s Top 100 Global Thinkers. He writes extensively on social media trends and topics including censorship, AI, and big data. Currently, Kalev produces videos for RealClearPolitics tracking trends in the mainstream media. He also serves as a Media Fellow at the University Center for Cyber and Homeland Security and as a member of its Counterterrorism and Intelligence Task Force.
Photo “Checking Social Media” by cottonbro.