Using artificial intelligence

At Media Party Chicago, a conference exploring the intersection of artificial intelligence and journalism, attendees debated and learned about the opportunities and dangers of AI. Ethics experts proposed frameworks for responsible use of powerful new technologies, developers taught journalists how to use AI to bring customized content to their readers, and reporters wrestled with how to maintain audiences’ trust while AI-aided disinformation abounds.

The International Center for Journalists (ICFJ) helped organize the three-day event, which brought together entrepreneurs, journalists, developers and designers from five continents to work on the future of media. They also came together at a hackathon to devise solutions using AI.

Here are some of the key takeaways from the event:

What questions should newsrooms ask themselves before using AI?

In a discussion with ICFJ’s Senior Director of Innovation Maggie Farley, Dalia Hashim of the Partnership on AI presented questions newsrooms should ask themselves before even starting to use generative artificial intelligence, a class of AI systems capable of generating text and images in response to prompts. Communicating how and why you’re using AI, Hashim said, is also important for building trust with audiences. “The more open and transparent you are about it, the more ready the audience is to accept that [AI] is being used,” she explained.

Important considerations include:

  • Are we comfortable with using generative AI tools that were trained using others’ content without consent? Can we find or make tools that are not derivative?
  • How are we going to put guardrails around the use of AI tools in the newsroom?
  • Where could our workflow be automated? Where do we need a human in the loop?
  • If we are using AI to produce content, how will we label it?
  • How will we ensure the accuracy of AI-aided content?
  • If we’re collecting data from our audiences, how will it be used, and who owns it?

Hashim urged journalists to use the Partnership on AI’s framework on responsible practices for newsrooms’ AI use, alongside its AI tools database for local newsrooms.

How can we prevent AI from spreading disinformation? Is AI hallucinating?

Edward Tian of GPTZero highlighted some of the dangers of AI when it comes to dis- and misinformation.

“AI generative text is prone to spitting out articles and hallucinating bouts,” he reminded the audience.


by Heloise Hakimi Le Grand, International Journalists’ Network

Photo by Zdeněk Macháček on Unsplash

