Twitter's live-streaming platform, Periscope, has introduced a feature that will have many breathing a sigh of relief — comment moderation.
The platform's live, unfiltered, and open nature is one of Periscope's major strengths, but it also increases the risk of spam and abuse. Periscope says it takes this seriously, and wants to make sure the community is engaged in the process.
"Above all, we want our community to be safe on Periscope," Periscope said in a statement. "Comments are a vital part of the experience and we've been working hard on a system that still feels true to the live and unfiltered nature of our platform."
With the new moderation system, viewers can report comments as spam or abuse during a broadcast, provided the broadcaster hasn't opted out in the settings. A viewer who reports a comment will no longer see messages from that commenter for the remainder of the broadcast. The system can also identify commonly reported phrases on its own.
When a comment is reported, a few viewers are randomly selected to vote on whether they think the comment is spam, abuse, or looks okay. The result of the vote is shown to voters, and if the majority votes that the comment is spam or abuse, the commenter will be notified that their ability to chat in the broadcast has been temporarily disabled.
Repeat offenses will result in chat being disabled for that commenter for the remainder of the broadcast.
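The flow described above — a report triggers a small random jury, a majority verdict of spam or abuse temporarily mutes the commenter, and a repeat offense disables their chat for the rest of the broadcast — can be sketched in code. This is a hypothetical illustration only: the jury size, vote options, and all names here are assumptions, not Periscope's actual implementation.

```python
import random
from collections import Counter

# Assumed values for illustration; Periscope has not published these details.
JURY_SIZE = 3
VOTE_OPTIONS = ("spam", "abuse", "looks okay")

class BroadcastModeration:
    """Hypothetical model of the per-broadcast comment moderation flow."""

    def __init__(self, viewers):
        self.viewers = list(viewers)
        self.offenses = Counter()   # confirmed offenses per commenter
        self.chat_disabled = set()  # muted for the rest of the broadcast

    def report(self, commenter, reporter, get_vote):
        """Handle one report: poll a random jury and act on the majority."""
        # Randomly select a few other viewers to vote on the comment.
        pool = [v for v in self.viewers if v not in (commenter, reporter)]
        jury = random.sample(pool, min(JURY_SIZE, len(pool)))
        votes = Counter(get_vote(juror) for juror in jury)

        # Majority must agree the comment is spam or abuse.
        if votes["spam"] + votes["abuse"] > len(jury) / 2:
            self.offenses[commenter] += 1
            if self.offenses[commenter] > 1:
                # Repeat offense: chat disabled for the remainder.
                self.chat_disabled.add(commenter)
                return "disabled for broadcast"
            return "temporarily disabled"
        return "no action"
```

In this sketch, `get_vote` stands in for however the real system collects a juror's choice; calling `report` twice with a jury that votes "spam" first returns a temporary mute, then a full disable.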
"We've designed this system to be very lightweight," Periscope says. "The entire process above should last just a matter of seconds."
This system works in tandem with other tools already in place — you can still report ongoing harassment or abuse, block and remove people from your broadcasts, and restrict comments to people you know.
"There are no silver bullets," Periscope admits, "but we're committed to developing tools to keep Periscope a safe and open place for people to connect in real-time."