Artificial intelligence (AI) is the new buzzword in journalism. Media professionals and academics alike are racing to figure out how this latest technological innovation will reshape an already precarious industry.
Will AI be a resource for newsrooms with declining revenues? Will it take away jobs, or will it free up already-overworked journalists to produce high-quality stories?
What is AI? What is NLP?
First, let’s clarify some definitions. AI refers to the capacity of machines to perform tasks that are usually linked to human cognition and intelligence. In the context of journalism, AI typically refers to applications that analyze, understand and generate text without human intervention.
Natural Language Processing (NLP) is a subset of AI that focuses on the interaction between computers and humans through natural language. It is also worth noting that “natural language” refers to languages spoken by humans, such as English, in contrast to programming languages like Python.
Much of the discussion around AI in journalism is based on NLP capabilities. It is through NLP that AI helps journalists summarize articles, translate content and corroborate information. Essentially, all AI applications that use our everyday language are made possible by NLP.
How we developed an AI tool for journalists
In 2021, I was part of an interdisciplinary team working to solve an investigative problem: parsing important information out of millions of pages of unstructured data – that is, text. This was made all the more difficult by the fact that we were working with non-English texts. We began experimenting with the GPT-3 API, and that was when we had our technical “aha!” moment.
This was before ChatGPT came onto the scene and when journalists were very skeptical of AI. We got busy creating a proof of concept to show the power of this new innovation from OpenAI.
We began experimenting with NPR articles and developed a summarizer that turned them into quick bullet-point summaries, similar to Axios’ style of article. We chose this style mainly because we all liked NPR’s stories, but they were often on the longer side. The tool we developed summarizes NPR articles as soon as they are published and makes them available on our website, Gist, after a journalist reviews and approves each summary.
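Under the hood, a tool like this is essentially a carefully constrained prompt sent to a language model. Our production code isn’t public, so what follows is only a minimal sketch of that kind of summarization call, written against OpenAI’s current Python client; the model name and prompt wording are illustrative assumptions, not what we actually shipped:

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

def summarize(article_text: str) -> str:
    """Ask the model for a short, Axios-style bullet summary of one article."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; in 2021 we built on GPT-3-era models
        messages=[
            {
                "role": "system",
                "content": (
                    "Summarize the article in 3-5 short bullet points. "
                    "Use only facts and quotes that appear in the article."
                ),
            },
            {"role": "user", "content": article_text},
        ],
        temperature=0,  # low temperature: fewer creative (and risky) liberties
    )
    return response.choices[0].message.content
```

In our workflow, the output of a call like this goes into a review queue; nothing reaches the site until a journalist approves it.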
Early on, we realized that our initial model was “hallucinating” when sentences ran longer than a couple of lines: the summaries included quotes that made sense contextually and grammatically but did not appear in the source NPR article.
In NLP, “hallucination” refers to instances where a model generates output that is inaccurate, unanchored in the input data, or plainly nonsensical. In our case, we needed to make sure the quotes in each summary actually existed in the original article. Hallucinations in a journalistic context can be fatal, because they result in published misinformation.
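One concrete guardrail for this failure mode – our own implementation isn’t public, so this is just a sketch of the idea – is to pull every quoted passage out of a summary and confirm it appears verbatim in the source article before the summary ever reaches a reviewer:

```python
import re

def find_unverified_quotes(summary: str, source: str) -> list[str]:
    """Return quoted passages in the summary that never appear in the source."""
    # Match text between straight or curly double quotes.
    quotes = re.findall(r'[“"]([^”"]+)[”"]', summary)

    def normalize(text: str) -> str:
        # Collapse whitespace and case so harmless formatting
        # differences don't trigger false alarms.
        return " ".join(text.lower().split())

    haystack = normalize(source)
    return [q for q in quotes if normalize(q) not in haystack]

article = 'The mayor said, "We will rebuild the bridge by spring."'
summary = '- "We will rebuild the bridge by spring."\n- "Costs are fully under control."'
print(find_unverified_quotes(summary, article))
# ['Costs are fully under control.']  <- a hallucinated quote, flagged for review
```

Any quote a check like this flags is grounds to reject the summary outright rather than risk publishing it.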
We began making adjustments to the model to prevent these hallucinations. This was an iterative process, as we had to train and test the model repeatedly. We also added more journalistic guardrails to the process. This work gave us some key takeaways for future AI applications in journalism.
Our takeaways
Training a model with journalistic standards in mind was not easy, but the long path made four points clear to us:
- AI-assisted reporting is doable, but AI reporting is not. Human supervision cannot be removed from the journalistic process.
Human judgment and approval are an integral part of any journalistic process and, while we can use technology to take on tedious, repetitive tasks, it cannot entirely replace journalists. For example, in our model, every single summary is read and approved by a journalist before it is published.
by Ali Tehrani