This week, as announced in a Time exclusive feature on Instagram’s “war on bullying,” Instagram is launching tests of two features: a comment warning and a restrict option. The comment warning can supposedly detect offensive or “borderline” content as a user is typing, then prompt them to reconsider before they post. In a press release sent on Monday, Instagram head Adam Mosseri wrote that early tests had shown the feature encouraging “some people to undo their comment and share something less hurtful once they had a chance to reflect.” (Instagram told me that “some” meant “a meaningful minority,” but declined to give an exact number.)
The company has been working toward something like this for a long time. The first comment and emoji filter debuted in the summer of 2016, rolled out first to help Taylor Swift remove thousands of snake emoji from the comments on her page; it was made available to the public that September. Automated “offensive comment” filtering launched in June 2017, followed by automated “bullying comment” filtering in May 2018. Machine learning filters were applied to images and captions as well in October 2018, the day before the Atlantic published a report on Instagram’s raging bullying problem — which detailed how teenagers create cruel parody accounts and post anonymously on secondary burner accounts.
The second feature is called Restrict. It allows users to identify their personal bullies without banning them — something that user research found is important to teens, who often have no choice but to interact with people they hate in real life. When an account is restricted, users can review any comment that account attempts to post on their content, choosing to approve it, delete it, or leave it in a limbo state where only the restricted poster can see it. The comment will also first show up behind a “sensitivity screen,” meaning the user will have to tap to see it. Messages from restricted accounts will go to the “message requests” section of a user’s DMs rather than the main inbox.
There is a precise term for this already. It’s a shadow ban, a moderation technique as old as forums, in which a user’s posts are hidden from everyone but themselves, so they still believe they’re posting publicly. The idea is that, when the user becomes frustrated that their vitriolic posts are getting no engagement — because nobody can see them — they’ll give up (and maybe leave).
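The mechanic is simple enough to sketch. Below is a minimal, hypothetical illustration (not Instagram’s actual implementation) of how a shadow ban works at read time: a hidden comment is returned only to its own author, so the author has no signal that anyone else can’t see it.

```python
from dataclasses import dataclass, field

@dataclass
class Comment:
    author: str
    text: str
    state: str = "approved"  # "approved" or "shadowed" (hidden limbo state)

@dataclass
class Post:
    owner: str
    comments: list = field(default_factory=list)

    def visible_comments(self, viewer: str) -> list:
        """Return the comments this viewer is allowed to see.

        A shadowed comment is shown only to its own author, who has
        no way to tell it is invisible to everyone else.
        """
        return [
            c for c in self.comments
            if c.state == "approved" or c.author == viewer
        ]

post = Post(owner="alice")
post.comments.append(Comment("bob", "nice photo!"))
post.comments.append(Comment("troll", "a cruel comment", state="shadowed"))

# The shadow-banned user sees both comments and suspects nothing...
assert len(post.visible_comments("troll")) == 2
# ...while everyone else, including the post's owner, sees only one.
assert len(post.visible_comments("alice")) == 1
```

The names and the `state` field here are assumptions for illustration; the point is simply that the filtering happens per-viewer, which is why the technique only works as long as the banned user has no out-of-band way to check their visibility — exactly the gap the tagging behavior described below opens up.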
The main difference here is that, though the goal of the feature is to limit exposure to a bully without banning them — a ban being something they can see and may use as cause for escalating a conflict — it will still be pretty easy to figure out if someone has restricted your account. The tell is the new tagging behavior embedded in Restrict. Typically, when you post a photo, Instagram suggests accounts to tag based on your previous interactions, and it fills in the rest of a username once you start typing the first couple of letters. If you want to tag someone who has restricted your account, you have to type out their entire handle letter by letter. Testing whether someone has used the restriction feature on you would be incredibly simple, then, and take only a few seconds to confirm.
Mosseri, the former head of product who took over Instagram when co-founders Kevin Systrom and Mike Krieger abruptly left the company last September, is framing his reign as a new era of anti-bullying. “We are in a pivotal moment. We want to lead the industry in this fight,” he told Time. He also said, “We will make decisions that mean people use Instagram less.” But Instagram is obligated to expand, especially now that Facebook’s user growth is on the rocks, so “less” evidently won’t extend to encouraging fewer people to use Instagram at all.
A couple of months ago, I attended a press briefing at Facebook’s New York offices, where many of these ideas were presented to a room full of technology journalists using almost the exact same phrases that Time includes in its report:
The company is trying to build artificial intelligence that is capable of rooting out complex behaviors — ranging from identity attacks to betrayals — that make users feel victimized. That will likely take years. In the meantime, the company is developing ways for users to fight this scourge themselves, with the assistance of machines.
The word “betrayal” was one I found particularly challenging at the press briefing. The example given to us then — and repeated in the Time piece — was a teenage boy tagging his ex-girlfriend in photos of himself with other girls. The term “intentional FOMO” is used in the Time piece (in the briefing it was referred to as “intentionally induced FOMO”), and it’s defined using the same example I heard, of a girl tagging a bunch of friends in a photo, purposely excluding someone else.
Both of these ideas — though you can think of clear-cut examples — are so vague and, simultaneously, so nuanced that they can only really be understood as “human behavior in general.” The idea that Instagram might be able to use machine learning to control and mediate feelings of being excluded, or less-than, or no longer as important to someone as you used to be, is dubious mostly because Instagram is an image-sharing platform: those kinds of behaviors are what Instagram is for, as much as or more than it is for posting ur-happy moments that nobody in your life will take issue with or feel threatened by.
While these steps are interesting and hopefully useful, there’s no real solution to teenagers making each other feel bad on Instagram because teenagers are very, very good at making each other feel bad, and a platform that is fundamentally about boasting is a catalyst for that, no matter what.
Update: Updated July 8th 1:25 PM ET to include comment from Instagram about user testing.