Not a day goes by without someone, somewhere on the internet, feeling the need to express how they’ve been personally harmed by a news site focused on genre entertainment that committed the unforgivable sin of running headlines that very vaguely allude to the interesting parts of films and shows. “I have been slain by spoilers,” these people cry. “Why have you slain me?”
To be fair, there are certainly people (and publications) that revel in sharing spoiler-heavy details mere hours after a piece of work, whether a feature film or a streaming TV series, becomes available to the public. But at the same time, many fandoms have become so excessively hostile towards the concept of being spoiled that they swarm on anything they perceive as a spoiler, which becomes an issue when it comes to the business of discussing things in public spaces.
This, one imagines, is part of the reason why the Russo brothers made a point of establishing an official Avengers: Endgame spoiler “embargo” for the public in the hopes that they wouldn’t discuss the movie until enough people had seen it. Surprisingly, most people were generally good about keeping quiet about Endgame. But in an age when everyone lives on social media and wants to share their opinions about things as soon as they’ve seen them, films like Endgame are the exception rather than the rule.
With that in mind, a group of researchers at the University of California, San Diego set out to solve the spoiler problem by teaching a neural network to identify spoilers in reviews by analysing the text of said reviews. After compiling a database of book reviews pulled from Goodreads that were explicitly tagged as containing spoilers, the research team was able to develop SpoilerNet, an AI tool it believes is capable of accurately identifying sentences that are likely to give away major plot points.
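Details of the architecture aside, the core task is sentence-level text classification: given a sentence from a review, predict whether it gives something away. As a rough illustration only (this is not the team's actual model, and every training sentence below is invented), a bare-bones bag-of-words Naive Bayes classifier over hand-labelled sentences might look like this:

```python
from collections import Counter
import math

# Toy illustration, NOT the actual SpoilerNet model: sentences labelled
# 1 (spoiler) or 0 (non-spoiler), loosely mirroring the Goodreads setup
# where reviewers tagged their own spoilers. All examples are invented.
TRAIN = [
    ("the pacing in the first act is slow but deliberate", 0),
    ("beautiful prose and a vivid setting", 0),
    ("i loved the characters from the very first page", 0),
    ("the detective dies in the final chapter", 1),
    ("the twist is that the narrator killed her brother", 1),
    ("the villain turns out to be her father all along", 1),
]

def tokenize(sentence):
    return sentence.lower().split()

def train(examples):
    """Count word frequencies per class, plus class priors."""
    counts = {0: Counter(), 1: Counter()}
    priors = Counter()
    for sentence, label in examples:
        priors[label] += 1
        counts[label].update(tokenize(sentence))
    return counts, priors

def spoiler_score(sentence, counts, priors):
    """Log-odds that a sentence is a spoiler, with add-one smoothing.

    Positive score = more spoiler-like under the toy training data.
    """
    score = math.log(priors[1] / priors[0])
    vocab = set(counts[0]) | set(counts[1])
    total1 = sum(counts[1].values()) + len(vocab)
    total0 = sum(counts[0].values()) + len(vocab)
    for word in tokenize(sentence):
        p1 = (counts[1][word] + 1) / total1
        p0 = (counts[0][word] + 1) / total0
        score += math.log(p1 / p0)
    return score

counts, priors = train(TRAIN)
print(spoiler_score("the detective killed her father", counts, priors))
```

A model this simple immediately runs into the problems described below: it scores words, not meanings, so any sentence containing "killed" looks spoilery regardless of context.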
What the team found was that, in general, most reviewers tend to drop spoilers in big chunks toward the latter end of their pieces, and SpoilerNet was more than capable of recognising spoilers when they were presented in a traditional manner.
When applied to book reviews, SpoilerNet was able to detect spoilers with 89-92 per cent accuracy, and when the team used the tool on reviews about television shows, SpoilerNet was still able to pick up on spoilers with 74-80 per cent accuracy. Where the tech begins to falter, however, is when it comes to nuanced language that depends on a person’s understanding of what’s being discussed.
Beyond that clustering of spoilers late in reviews, the researchers also found that different users had different standards for tagging spoilers, and the neural network needed to be carefully calibrated to take these per-user biases into account.
In addition, the same word may have different semantic meanings in different contexts. For example, “green” is just a colour in one book review, but it can be the name of an important character and a signal for spoilers in another book.
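One hedged way to capture that effect (an invented sketch for illustration, not a feature from the paper) is to compare how often a word appears in reviews of one particular book against its rate across all reviews. A word like "green" that is unusually frequent in one book's reviews is more likely to be a character name there than just a colour:

```python
from collections import Counter

def specificity(word, book_reviews, all_reviews):
    """Ratio of a word's rate in one book's reviews to its corpus-wide rate.

    Values well above 1.0 suggest the word is special to that book,
    e.g. a character name. Add-one smoothing avoids division by zero.
    """
    def rate(texts):
        tokens = [t for text in texts for t in text.lower().split()]
        return (Counter(tokens)[word] + 1) / (len(tokens) + 1)
    return rate(book_reviews) / rate(all_reviews)

# Invented sample data: "green" is a character in Book A's reviews,
# but just a colour elsewhere in the corpus.
book_a = [
    "green saves the city",
    "green reveals his plan",
    "i adore green",
]
corpus = book_a + [
    "the green curtains match the sofa",
    "a tale of love and loss",
    "the plot drags in the middle",
]

print(specificity("green", book_a, corpus))
print(specificity("the", book_a, corpus))
```

A common word like "the" scores near 1.0 here, while "green" scores noticeably higher in Book A's reviews, which is the kind of signal a spoiler detector would need in order to treat the same word differently per book.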
SpoilerNet also tended to get thrown off track by certain words like "murder" or "killed," because while they can be attached to spoilery sentences, they aren't always, and the tool has no real way of knowing the difference. Similarly, SpoilerNet demonstrated how difficult it is for a program to pick up on charged terms whose charged-ness is only understandable when one has a grasp of the full context in which those terms exist.
Take the word “snap,” for example. When speaking about the events of Infinity War and Endgame, “snap” could refer to the infamous finger snap performed with the Infinity Gauntlet. But “snap” could also refer to a snap decision made by the movies’ characters, and SpoilerNet would have no way of differentiating between the two uses of the word.
In theory, a tool like SpoilerNet could be further refined and turned into a browser plug-in, making it easy for people to just push a button and browse the internet without worry of being spoiled. But all of the time and energy that went into developing SpoilerNet (which is impressive) highlights something larger that no single web tool could ever really hope to address.
The fact of the matter is that spoiler culture has become… intense to the point of ridiculousness, and the only way to really ensure that no one is ever spoiled would be to require writers to discuss works so vaguely that it defeats the entire purpose of reviews.
Tools like SpoilerNet have the potential to help address these oh-so-first-world problems, but it’s important to understand that these kinds of tech solutions can only do so much to counteract normal human behaviour online. When something big that a lot of people are paying attention to comes out, folks are going to talk about it. There’s really no way to get around that aside from shutting yourself off from the outside world, which is less than ideal.
There are plenty of sites — like Gizmodo — that take measures to try and let people know when posts contain spoilers. More outlets (and people) would do well to do the same because, in the end, this all really boils down to people being mindful of their behaviour and how it might affect others.