Bots — applications that perform automated tasks — are becoming increasingly commonplace in newsrooms, as a growing number of publishers experiment with various services to expand their coverage, help journalists do their jobs better or improve relationships with readers.
Bots are not an entirely new phenomenon, especially in the world of technology, where automated helpers have existed for almost as long as computers. The earliest chatbot, Eliza, was created in 1966 as an early experiment in natural language processing and is still online in various forms.
Now bots have started to find their way into newsrooms as the technology behind them becomes more accessible to a wider audience and new services like Chatfuel and Dialogflow make building a chatbot as easy as pointing and clicking.
One of the earliest uses of a bot in journalism dates back to March 2014, when a bot called Quakebot broke the story of a 4.4 magnitude earthquake that hit Los Angeles at 6:25am on a Monday. By 6:33am the Los Angeles Times had published the story online under the byline of Ken Schwencke, the developer of the quake-warning bot.
Quakebot is essentially an algorithm that monitors the US Geological Survey for reports of earthquakes above magnitude 3.0. When it finds one, the algorithm writes up a simple report, adds a map, files the story to the LA Times' content management system, and emails an editor to alert them to the story.
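The core of this workflow is straightforward: filter incoming records by magnitude, then fill in a story template. A minimal sketch in Python might look like the following. The field names (`mag`, `place`, `time` in milliseconds) mirror the shape of records in the USGS GeoJSON earthquake feed; the threshold, function name, and wording are illustrative, not Quakebot's actual code.

```python
from datetime import datetime, timezone

MIN_MAGNITUDE = 3.0

def draft_report(quake):
    """Turn one USGS-style earthquake record into a short story draft.

    `quake` is a dict with 'mag', 'place', and 'time' (milliseconds
    since the epoch). Returns None for quakes below the threshold.
    """
    if quake["mag"] < MIN_MAGNITUDE:
        return None
    when = datetime.fromtimestamp(quake["time"] / 1000, tz=timezone.utc)
    return (
        f"A magnitude {quake['mag']:.1f} earthquake struck "
        f"{quake['place']} at {when:%H:%M} UTC on {when:%B %d, %Y}, "
        "according to the US Geological Survey."
    )

# Illustrative record loosely modeled on the 2014 Los Angeles quake.
sample = {"mag": 4.4, "place": "near Westwood, California",
          "time": 1395062700000}
print(draft_report(sample))
```

A real pipeline would poll the live feed on a schedule and push the draft into the CMS, but the filter-then-template pattern is the same.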
This ability to monitor specific sets of data around the clock, without needing a break, is what makes bots so appealing to journalists. Journalists need time off and can't spend the entire day watching a single corner of the internet in the hope that something happens. Bots, on the other hand, are excellent at doing exactly this.
Of course, bots aren’t just for breaking news.
More recently, one of Quartz's bots showed how effective they can be. Shortly after the massive WannaCry ransomware attack, Quartz set up a bot to monitor the bitcoin accounts linked to the attack. Twelve weeks later, the bot picked up activity on the accounts and began tweeting as the bitcoin was withdrawn from them.
Twitter and Bots
Bots can play an important monitoring role on Twitter. The WikipediaLiveMonitor, for example, monitors Wikipedia for pages that suddenly see an increase in the number of edits, which might suggest a breaking story. The bot then tweets details about the edits to the page.
In a similar vein, the Congress Edits bot monitors Wikipedia for anonymous edits made to a Wikipedia page that come from an IP address within the US Congress. In the UK, the Parliament WikiEdits bot does a similar thing, monitoring anonymous edits from within the UK government. Ed Summers, who created the Congress Edits bot, open-sourced the code for the bot, which has resulted in a number of similar bots in other parts of the world, including one that monitors the Russian government in Russian and English.
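Bots of this kind rely on the fact that anonymous Wikipedia edits are attributed to the editor's IP address, so each edit in the recent-changes stream can simply be tested against a watchlist of address ranges. A minimal sketch using Python's standard `ipaddress` module, with illustrative ranges (the `143.231.0.0/16` block is widely cited as belonging to the US House of Representatives, but treat the list here as placeholder data):

```python
import ipaddress

# Illustrative watchlist; a real bot would load published ranges
# for the institution it monitors.
WATCHED_RANGES = [
    ipaddress.ip_network("143.231.0.0/16"),
    ipaddress.ip_network("137.18.0.0/16"),
]

def is_watched(ip_string):
    """Return True if an anonymous editor's IP falls in a watched range."""
    ip = ipaddress.ip_address(ip_string)
    return any(ip in net for net in WATCHED_RANGES)

print(is_watched("143.231.10.25"))  # True
print(is_watched("8.8.8.8"))        # False
```

When a match is found, the bot composes a tweet linking to the diff of the edit.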
Sometimes bots are used not just to keep tabs on those in power but also to monitor journalists themselves. The NYT Anonymous bot monitors The New York Times and tweets every time it finds an article that uses an anonymous source.
Automating
Bots are also widely used to populate information resources run by media organizations. In India, for example, The Hindustan Times built a real-time air quality monitoring map. The map uses data from a range of sources to keep the information up to date.
The Hindustan Times has also done a number of other experiments with automating content creation. In the run-up to elections in March this year, its @htrealtime Twitter account was used to automatically post tweets about candidates based on publicly available data. The account also posted live results on election day.
A couple of months later, the Hindustan Times used the same @htrealtime account to live-tweet coverage of the Indian Premier League cricket series. Using structured match data from partners, the bot tweeted highlights and facts about the more than 60 matches and 200 players. A script also pulled links to live blog analysis by sports reporters from a spreadsheet to add additional context and color.
Other organizations experimenting with automation bots are those that produce high volumes of formulaic content. Company earnings reports, for example, require simple reporting that covers the numbers contained in companies' annual and quarterly filings. AP is one of the agencies using automation in this way. Rather than having reporters spend time writing up each of the earnings reports, AP plans to have bots perform this function, freeing up reporters to write insight pieces that add value to the agency's business coverage.
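Earnings coverage lends itself to automation because the stories follow a fixed shape: a few figures dropped into standard sentences. A minimal, hypothetical sketch of the template approach (the field names and wording are invented for illustration, not AP's actual system):

```python
def earnings_story(company, quarter, revenue_m, prior_revenue_m, eps):
    """Generate a one-paragraph earnings brief from structured figures.

    Revenue figures are in millions of dollars; `prior_revenue_m` is
    the same quarter a year earlier.
    """
    change = (revenue_m - prior_revenue_m) / prior_revenue_m * 100
    direction = "rose" if change >= 0 else "fell"
    return (
        f"{company} reported {quarter} revenue of ${revenue_m:,.0f} million, "
        f"which {direction} {abs(change):.1f}% from the same period a year "
        f"earlier. Earnings per share came in at ${eps:.2f}."
    )

print(earnings_story("Acme Corp", "third-quarter", 1250, 1100, 0.87))
```

Production systems layer on many more templates and sanity checks, but the principle is the same: structured data in, formulaic prose out.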
Similarly, The Washington Post is using its in-house Heliograf automated storytelling tool to increase local coverage, particularly of high school football games. Heliograf, which the Post first used during the Rio Olympics, has since been used to cover local politics as well as other sports stories.
In one of its many bot experiments, Quartz launched a bot that tracks other bots: @probabot_ is designed to find Twitter accounts that tweet heavily about politics. It feeds each account into another service, Botometer, which uses machine learning to estimate whether the account is wholly or partially run by a bot, and @probabot_ then tweets the result.
Two-Way Conversations and New Insights
One of the latest ways bots are being used is as chatbots: automated tools that enable reporters either to tell stories differently or to collect information from readers. Chatbots are not entirely new, but they have shot into prominence recently, largely because platforms like Facebook have incorporated them into their messaging services.
Earlier this year, ahead of the elections in France, the newsweekly L'Obs followed four undecided voters around the country. It then set up a Facebook chatbot that shared the voters' thoughts with followers on Facebook. The chatbot was interactive, and followers could choose which of the voters they wanted to hear from.
Pitfalls
Of course, using bots for news reporting is not without its pitfalls. Bots are excellent at reporting the facts of a story quickly, but what if the source of the news is wrong? Bots are not yet particularly good at spotting errors in the information they are monitoring.
Take, for example, Quakebot, which made quick work of reporting the Los Angeles earthquake within minutes of it happening. Earlier this year the bot posted a story about an earthquake in Santa Barbara. The problem: the earthquake actually happened in 1925, not 2017. A staffer at the US Geological Survey had been updating a record of the 1925 quake, which accidentally triggered an alert. Quakebot, in all its efficiency, pounced on the alert and turned it into a story for the LA Times website before the staffer could fix the error.
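One common defense against exactly this failure is a staleness check: before publishing, compare the event's own timestamp against the current time and ignore anything that isn't recent. A minimal sketch, with an invented one-hour window (the function name and threshold are illustrative, not the LA Times' actual fix):

```python
from datetime import datetime, timedelta, timezone

MAX_AGE = timedelta(hours=1)

def is_fresh(event_time_ms, now=None):
    """Treat an alert as breaking news only if the event's own
    timestamp is recent; edits to historical records then fail
    the check no matter when they arrive.

    `event_time_ms` is milliseconds since the epoch, as in the
    USGS feed.
    """
    now = now or datetime.now(timezone.utc)
    event_time = datetime.fromtimestamp(event_time_ms / 1000, tz=timezone.utc)
    return now - event_time <= MAX_AGE

# A record dated 1925 is rejected even if the alert arrives today.
now = datetime(2017, 6, 21, 12, 0, tzinfo=timezone.utc)
old = int(datetime(1925, 6, 29, tzinfo=timezone.utc).timestamp() * 1000)
print(is_fresh(old, now=now))  # False
```

The same idea generalizes to any monitoring bot: validate that what triggered the alert is actually new before letting the bot publish.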
Not All Work and No Play
Bots may seem to be all work and no play, but even bots can have a lighter side. Take, for example, the Post's MartyBot (named after Post Editor Marty Baron), which sends reporters a message if it looks like they're not going to make their deadline. Or the LA Times' Slack bot that alerts the newsroom when there is a fresh pot of coffee.
Alastair Otter is managing partner of Media Hack Collective, a data journalism and visualization initiative based in Johannesburg. He is former editor and head of editorial innovation at Independent Online, one of Africa’s largest online news sites. At Media Hack, he handles data visualization and development of online media products.