By now, you’ve probably seen the satirical posts where someone claims to have “forced a bot” to watch thousands of hours of video and then write a script of its own. The posts are funny, but in case you haven’t realised: they’re jokes, and they don’t reflect how artificial intelligence actually works.
We thought it was pretty obvious, but a lot of folks in the tweets’ replies are asking whether the bot is real (you know who you are). So, here we are to kill the joke.
Most of these posts are tweets with a page or two of nearly-sensical dialogue. Of course they aren’t real: a bot trained on infomercials wouldn’t know how to say “fuck”, there aren’t thousands of hours of Saw movies (more like 20 hours), and surely there’s no way a bot trained on Olive Garden commercials would say “Italian citizens”.
In response to some folks falling for the joke, scientist Janelle Shane, who actually trains computers to do funny things such as naming guinea pigs and writing conversation hearts, explained how to tell that the tweets are most likely fake, and how to distinguish them from real AI output.
I forced a bot to watch over 1,000 hours of Olive Garden commercials and then asked it to write an Olive Garden commercial of its own. Here is the first page. pic.twitter.com/CKiDQTmLeH
— Keaton Patti (@KeatonPatti) June 13, 2018
Neural networks are smart, but not quite this smart. They learn from a large set of training data, then produce new output in the same form as that data.
“Neural nets learn by example. If you show it 1000 hours of video (assuming 120,000 unique 30-sec Olive Garden commercials exist), you’ll get video out, not a script with stage directions,” explained Shane in a thread.
There is some advanced AI out there that really can produce strange, seemingly human-made results. You may have seen some of these impossibly good deepfake videos, but they’re trained on videos, too. Just this week, a team debuted an AI-generated film, which required training a neural network on screenplays and videos, Ars Technica reported. While these bots seem far along in their development, they aren’t human and can’t yet craft jokes the way a human comedy writer can.
Neural networks trained on writing struggle in other ways: they typically meander in their outputs and have trouble with grammar, said Shane. As examples, she pointed to recipes her bots have written that forget their ingredients, and fan fiction that loses the plot.
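You can see why generated text meanders with an even simpler text generator than a neural network. Below is a minimal sketch (not Shane’s actual code, and with a made-up breadstick-themed corpus) of a word-level Markov chain: because each next word is chosen based only on the current word, the output drifts with no memory of where the sentence started, which is the same kind of incoherence Shane describes in bot-written text.

```python
import random
from collections import defaultdict

def train(text):
    """Map each word to the list of words that follow it in the text."""
    chain = defaultdict(list)
    words = text.split()
    for cur, nxt in zip(words, words[1:]):
        chain[cur].append(nxt)
    return chain

def generate(chain, start, length=12, seed=0):
    """Walk the chain, picking a random follower of the current word each step."""
    random.seed(seed)
    word = start
    out = [word]
    for _ in range(length):
        nexts = chain.get(word)
        if not nexts:
            break  # dead end: no word ever followed this one
        word = random.choice(nexts)
        out.append(word)
    return " ".join(out)

# Hypothetical toy corpus, standing in for "1,000 hours of commercials".
corpus = ("unlimited breadsticks for the whole family "
          "unlimited salad for the whole table")
model = train(corpus)
print(generate(model, "unlimited"))
```

Real neural networks condition on more context than one word, but the failure mode is the same in kind: the less context the model keeps, the faster the output forgets the ingredients, or the plot.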
There are bots that can assist in writing stories (such as Botnik Studios’), but these are predictive-text keyboards, like the one on your iPhone: they still require human direction. Shane’s own lists usually require a human behind the scenes to select the funniest parts of the output.
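The predictive-text approach can be sketched in a few lines. This is an assumed illustration, not Botnik’s actual implementation: count which words follow each word in a source text, then offer the most frequent ones as suggestions for a human writer to pick from. The human, not the bot, does the steering.

```python
from collections import Counter, defaultdict

def build_suggestions(text, top_n=3):
    """For each word, return its top_n most frequent followers in the text."""
    follows = defaultdict(Counter)
    words = text.lower().split()
    for cur, nxt in zip(words, words[1:]):
        follows[cur][nxt] += 1
    return {w: [s for s, _ in c.most_common(top_n)]
            for w, c in follows.items()}

# Hypothetical source text standing in for a pile of ad copy.
source = ("the pasta is delicious the pasta is unlimited "
          "the breadsticks are warm")
suggest = build_suggestions(source)
print(suggest["the"])    # → ['pasta', 'breadsticks']
print(suggest["pasta"])  # → ['is']
```

A writer using a tool like this types a word, sees the suggestions, and chooses whichever continuation is funniest, which is exactly the human-in-the-loop curation Shane describes.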
“I wish people wouldn’t present these fakes as bot-written. Actual AI-written text just isn’t that coherent,” Shane wrote. But she did think some of them were funny. As for the debate over whether these posts are funny, may I direct you to this satirical tweet as well.
Writer Keaton Patti, who has authored many of the “I forced a bot” posts that look dubious, told Gizmodo, “I just want to say that I’m really happy people are enjoying the bot’s work and to look out for more in the future.” Lol. Of course, if the bot is real, Patti could probably make a lot more money working for a company like Google than doing comedy.
Sorry to kill the joke. But it seemed like enough people thought the tweets were real that we had to say something. Blame society, not us.