Killing the Blue Whale Challenge

Social media companies enabled the deadly meme. They must do more to curb it.

On July 18, I opened a browser, pulled up various social media sites, and searched for the hashtag #bluewhalechallenge.

The Blue Whale Challenge is a nefarious internet meme that encourages those who embrace it to hurt themselves. Over the course of 50 days, an anonymous administrator is said to assign players escalating dares that involve self-harm. The final task is suicide. Interested people find these administrators through hashtags and images on social media. It could well be a threat to public safety. The trouble is, like many things on the internet, no one is really sure what the truth is.

Rumors of the game’s prevalence have existed for several months, but in early July, they escalated. For one, a baby-faced Texas teen killed himself—this really happened—and broadcast the suicide from his phone. His family found evidence he may have been participating in the challenge. Shortly after, a second American teen took her life, and her family also found evidence connecting her to the game. Then, on July 17, a Siberian court sentenced a 22-year-old Russian man to more than three years in prison for his part in launching the game, convicting him of inciting the deaths of two teenage Russian girls.

As US television and newspaper outlets began reporting on the alleged trend, warning parents and educators to keep an eye on their teens’ social media use, I decided to see how prominent social media sites were handling search queries for the term. It may be impossible to tell exactly how many people are participating in the game, but social media sites have streams of data that reveal how many of their users are expressing interest in it.

The results varied broadly. A search on Tumblr produced a blue screen with a single headline: “Everything okay?” Beneath it, Tumblr users were encouraged to seek help from a variety of resources. A Tumblr spokesperson said the company launched this PSA after it was alerted to the challenge in May, when searches for the term on its platform spiked.

Similarly, YouTube search results featured two prominent boxes at the top of the screen displaying contact information for the Crisis Text Line and the National Suicide Prevention Lifeline. If you click on the “more” button beneath a disturbing blue whale video, you can report a video for a list of offenses that include “violent or repulsive content” and “harmful dangerous acts.”

By contrast, a Snapchat search offered nothing by way of mental health support, nor did a search on Twitter, which featured a stream of tweets about Blue Whale. Some of those tweets warned against the game, while others asked how to reach game administrators. Twitter has no plans to add mental health resources at the top of its search results, but a Twitter spokesperson told me that users can report concerning posts. So I navigated through the process of reporting a tweeter who was looking to connect with an administrator of the game. It was a straightforward, intuitive process, and the final screen displayed the National Suicide Prevention Lifeline phone number, along with a link to a collection of mental health resources.

Most concerning, a Facebook search produced no mental health resources. A Facebook spokesperson explained that the service was in the process of creating a similar pop-up box, which he said was slightly complicated from an engineering perspective, and that it would be up in a few days. Sure enough, when I looked again on July 20, it had appeared.

Facebook’s spokesperson also reminded me that the company launched new tools and partnerships to combat self-harm in March, and noted that people can report posts that concern them. But when I tried to report several posts, the menu was not intuitive.

Clicking on the second option in the reporting menu brings up another menu that includes a more obvious selection for describing harmful or suicidal behavior. Click on that, and a pop-up screen offers four choices for how to support the poster. These are strong resources, including scripts for how to talk with struggling friends and an option for having Facebook intervene, but they’re buried pretty deep.

In its March 1 news release, Facebook also said it had started limited tests in the US of an artificial intelligence-powered pattern-recognition process. It is designed to recognize posts in which the poster is expressing suicidal thoughts, make reporting options more prominent, and direct those posts to Facebook’s community operations team for review. This might make it easier to report problematic posts—but at this point, it’s difficult for a reporter to check.

Here’s the thing: social media has enabled these sorts of bizarre internet challenges. The people best positioned to observe a meme’s impact, and how far its influence spreads, are those at the social networking companies. These services collect and analyze streams of data about what adolescents discuss, and they’re sophisticated enough at parsing that data to sell it to advertisers. We must demand that they also direct that sophistication to issues of public health, particularly where adolescents and self-harm are concerned.

To be fair, figuring out how to address this type of content is tricky. For one, these companies consider themselves platforms, not media companies, and mostly take a hands-off approach to policing content in the name of free speech. Over the past year, Facebook in particular has acknowledged that it must take some responsibility for stopping the scourge of fake news and keeping illegal and dangerous content off its site. But generally, intervening in search results and flagging posts requires editorial judgment that tech companies are sometimes loath to make, and often uncertain how to apply consistently. What’s more, it requires time and attention at engineer-driven companies that are more accustomed to relying on tech than on people to surface issues.

Then there’s the fact that chasing down concerning memes becomes a game of whack-a-mole. Blue Whale is particularly horrendous, but there’s a constant stream of memes that encourage people—particularly, but not exclusively, adolescents—to harm others or themselves for the sake of spectacle. The #saltandice challenge encourages them to endure, for as long as possible, the burn created when salt and ice are pressed against the skin. The #knockout challenge involves striking an unsuspecting victim hard enough that she passes out.

We shouldn’t expect platforms to keep us safe from every harmful meme. But we should absolutely demand that they consistently do more: as new threats are identified, they should act in good faith to tailor tools to address them. Public service announcements directing searchers to accurate information and to mental health resources should be an industry expectation.

In the case of the Blue Whale Challenge, even as media reports proliferate in the US, Tumblr, at least, has found that the data tells a different, more promising story. Searches for the term on its platform peaked in May at around 60,000. The following month, they fell off by 68 percent. Perhaps the meme is fading into obscurity, where it belongs.

But another meme is sure to follow. And as it emerges, social media companies are bound to be among the first to notice. For the sake of the public good, they must step up.