In its latest push to be one of the less toxic major social media services, Instagram is launching a feature that warns users before they post something that could be offensive.
The company announced in a blog post on Monday that it had begun introducing a new tool that informs people that their caption “may be considered offensive,” in an effort to encourage users “to pause and reconsider their words before posting.”
The feature relies on AI that detects language similar to phrases and words that have previously been reported as offensive. When it flags a “potentially offensive” caption, it shows the poster a prompt with the options to “edit caption,” “learn more,” or “share anyway.” So, while it doesn’t censor users, it does let them know they might be an arsehole.
In the announcement, Instagram said this feature is meant to curb bullying on the platform, and that it is an extension of a tool launched in July that notifies users if their comments could be offensive. At the time, Instagram wrote that the “intervention gives people a chance to reflect and undo their comment and prevents the recipient from receiving the harmful comment notification.”
Now, Instagram claims that the results of its reflection nudges have been “promising,” as the shaming has encouraged users to “reconsider their words when given a chance.”
This feature follows the “Restrict” tool Instagram launched in October, which lets users easily shadowban anyone they want so they can avoid seeing comments from people who might bully them or post offensive remarks.
Instagram said the new caption-warning tool will roll out on Monday in select countries, then expand worldwide over the next few months.