The first story published about L.A.'s Monday earthquake ended with an interesting line: "This information comes from the USGS Earthquake Notification Service and this post was created by an algorithm written by the author."
The algorithm, called Quakebot, was written by Los Angeles Times journalist Ken Schwencke to quickly gather and publish news about recent earthquakes in the area (not to be confused with LA QuakeBot, a bot that tweets earthquake data from the United States Geological Survey). When the 6:25 a.m. quake occurred, Schwencke told Slate, he was able to get the story up within three minutes: "He rolled out of bed and went straight to his computer, where he found a brief story about the quake already written and waiting in the system. He glanced over the text and hit 'publish.'"
Algorithmic journalism is nothing new (in fact, a similar algorithm at the Los Angeles Times writes about homicides), and many stories you read might begin life as a data-generated stub that human writers then expand with reporting. The approach works especially well for earthquakes, because the report is entirely data-driven: The earth moves, the USGS measures it.
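Quakebot's actual source code isn't shown in the article, but the workflow it describes -- take a structured USGS event record, fill a story template, and queue the draft for a human to approve -- can be sketched roughly. The field names below mirror the kind of data the USGS publishes (magnitude, place, time); the template and function name are invented for illustration:

```python
def write_quake_story(event):
    """Turn a USGS-style earthquake record into a short news stub.

    A hypothetical sketch of a Quakebot-like generator: the structured
    data fills slots in a prewritten template, and the result waits for
    a human editor to review and publish.
    """
    story = (
        f"A magnitude {event['mag']:.1f} earthquake struck "
        f"{event['place']} at {event['time']}, according to the "
        f"U.S. Geological Survey."
    )
    disclosure = (
        "This information comes from the USGS Earthquake Notification "
        "Service and this post was created by an algorithm written by "
        "the author."
    )
    return story + "\n\n" + disclosure

# Example record, shaped like an entry in a USGS earthquake feed
# (values here are illustrative, not the actual March 17 event data):
event = {
    "mag": 4.4,
    "place": "near Westwood, California",
    "time": "6:25 a.m. Pacific",
}
print(write_quake_story(event))
```

The speed comes from the template: everything except the numbers is written in advance, so the draft exists the moment the USGS notification arrives, and the only human step left is a glance and a click on "publish."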
But imagine if a story could be generated automatically and immediately after another kind of civic emergency -- say, a gas line break or a train derailment. This kind of fast-acting, quickly disseminated journalism could help explain what happened and what citizens should do next. For earthquakes in particular, early warning systems have been shown to help keep residents calm -- having this kind of secondary information within minutes could help keep cities safe. [Slate]