Another week, another headline warning that artificial intelligence is replacing journalists. This time it was the news that Microsoft is replacing the contract journalists who curate its MSN news pages with automated systems. No wonder so many of the reporters and copywriters in my community were anxious about their futures over the weekend.

To read the headlines, you’d imagine that AI is on the verge of turning out high-quality content that effectively replaces human authors. 

That’s far from the truth. AI is pushing journalism in many directions, but few, if any, of them point to the replacement of writers with machines.

I’m going to look at three examples. 

First, that Microsoft story. The journalists in question were curating news from third-party publishers for the MSN news pages: deciding what’s trending, whether coverage is fair and balanced, whether it has been fact-checked, and so on.

We’re talking about process automation rather than actual writing. The software can shorten headlines and copy to fit the MSN format, but that’s about it.

Which brings me to my next point.  

AI can write articles, just not complicated ones. This is known as robo-journalism, the automated writing of stories based on structured data. 

It works well for deadline-driven stories based on sports results, financial news, weather and elections, for example.

But even here, the technology still requires a ‘human in the loop’. The RADAR news service in the UK is a good example: in the 18 months after its 2018 launch, its five reporters filed 250,000 articles.

Using their investigative skills, the journalists identify data sets from which they can derive a story and then build a template into which the data and standard phrases can be assembled. 

Stories are then published to subscribers, especially local news outlets, which may run the original copy or use it as the basis for their own reporting.
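To make that workflow concrete, here is a minimal sketch of template-driven story generation, assuming a simple structured dataset of local election results. The template wording and field names are invented for illustration; they are not RADAR’s actual system.

```python
# Minimal sketch of template-driven "robo-journalism": a reporter-written
# template is filled with rows of structured data to produce one short
# story per row. Dataset and wording are illustrative.
TEMPLATE = (
    "{winner} has won the {ward} ward in {town} with {votes:,} votes, "
    "a {swing:+.1f}% swing from the last election."
)

results = [
    {"winner": "Jane Smith", "ward": "Riverside", "town": "Middleton",
     "votes": 4512, "swing": 3.2},
    {"winner": "Tom Jones", "ward": "Hillcrest", "town": "Middleton",
     "votes": 3890, "swing": -1.4},
]

def generate_stories(rows):
    """Assemble one story per data row from the standard template."""
    return [TEMPLATE.format(**row) for row in rows]

for story in generate_stories(results):
    print(story)
```

The journalism lives in choosing the data set and writing the template; the software only does the assembly, which is why the human stays in the loop.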

Finally, how about the absolute cutting edge of language generation? Machine learning systems with formidable processing resources and the power of the open source community behind them?

This week OpenAI announced GPT-3, the latest version of one of the best-known text-generation AIs. GPT-3 and its predecessor GPT-2 rely on vast volumes of training data to predict the next word, phrase or sentence when fed a line or two of source copy.
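To see what that prediction loop looks like in practice, here is a minimal sketch using the openly released GPT-2 model through the Hugging Face transformers library; GPT-3 itself is only available through OpenAI’s API, but the principle is the same. The seed sentence is invented for illustration.

```python
# Minimal sketch: text generation with GPT-2 via the Hugging Face
# transformers library. The model repeatedly predicts a likely next
# token, extending the seed text into a continuation.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

seed = "The mayor announced today that the city will"
result = generator(seed, max_length=40, num_return_sequences=1)
print(result[0]["generated_text"])
```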

To begin with, the results are impressive, but the further the output drifts from the source copy, the greater the software’s tendency to produce “hallucinated copy”, a polite way of saying “nonsense”.

GPT-3 has had some success in generating news stories that humans find hard to distinguish from the real thing, but it still lacks the human skills required to investigate, interview and then draft a news story.

The greater risk is that unscrupulous agents will use such software to generate fake news. So much so that researchers from Harvard University and the MIT-IBM Watson AI Lab have developed a tool, GLTR, to detect the give-away statistical patterns in AI-generated content.
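The underlying idea is simple: machine-generated text leans heavily on the words a language model ranks as most likely, while human prose is more surprising. Here is a minimal GLTR-style sketch of that heuristic using GPT-2; it illustrates the principle and is not the researchers’ actual tool.

```python
# GLTR-style heuristic sketch: score how often each token in a text
# falls within GPT-2's top-k predictions. Machine-written text tends
# to score much closer to 1.0 than human prose does.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def top_k_fraction(text: str, k: int = 10) -> float:
    """Fraction of tokens that appear in the model's top-k predictions."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits  # shape: (1, seq_len, vocab_size)
    hits = 0
    for pos in range(1, ids.shape[1]):
        # Predictions at position pos-1 are for the token at position pos.
        top = torch.topk(logits[0, pos - 1], k).indices
        if ids[0, pos] in top:
            hits += 1
    return hits / (ids.shape[1] - 1)

print(top_k_fraction("The committee will meet on Tuesday to review the budget."))
```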

Even the research team behind GPT-3 are conscious of the limitations:

“A more fundamental limitation of the general approach described in this paper – scaling up any LM-like model, whether autoregressive or bidirectional – is that it may eventually run into (or could already be running into) the limits of the pretraining objective.”

In other words, you can always build a taller skyscraper, but it’s never going to reach the moon.

My final gripe is that misleading headlines are a distraction from the real challenge facing professional writers, which is how to survive and flourish during unprecedented change.

If you’re an investigative reporter, how can AI help you sift through thousands of documents and discover connections that would take dozens of journalists months to establish?

Are you prepared for the next wave of fact-checking software in the battle against fake news? How will 5G and the latest mobile devices impact the consumption habits of your audience?

The machines aren’t coming for your jobs. But other journalists are. Make sure you’re equipped with the skills and training to succeed in this brave new world.