AI-generated disinformation

Fears of AI-generated disinformation are rampant. Journalists and experts warn that it can be used to deceive audiences, profoundly sway viewers, and incite violence.

Although these concerns are widely vocalized, the precise impact of AI-generated disinformation remains unclear. What is evident, however, is that we can expect a surge in disinformation due to AI.

AI and disinformation today

AI technologies, such as voice cloning software, large language models and text-to-image generators, have made it easier to create misleading content. On social media, in particular, it is not uncommon to come across false AI-generated videos, pictures and audio clips.

In one recent example, clips of U.S. President Joe Biden were combined to create a derisive video portraying a fictional day in his life. The video voiceover sounds concerningly like President Biden himself, though the content of the speech is false. Users online have also manipulated images of Trump’s arrest, and edited pictures that depict Indian Prime Minister Narendra Modi and Italian Prime Minister Giorgia Meloni getting married.

Despite an increase in the volume of such content, a peer-reviewed paper published in Harvard Kennedy School's Misinformation Review this month argues that, with respect to false and misleading content, the "current concerns about the effects of generative AI are overblown."

Even if we assume that the quantity of misleading generative AI content will increase, it would have “very little room to operate,” the Harvard paper argues. Although a significant amount of misinformation circulates online, it is consumed by only a small fraction, approximately 5%, of American and European news consumers.

“It’s the kind of thing that people can buy into, and it’s an easy story to tell ourselves that automatically generated content is going to be a problem,” said Andrew Dudfield, head of product at Full Fact, a U.K.-based fact-checking organization.

These arguments stem partly from the fact that existing modes of disinformation are already effective without AI. The paper's authors also emphasize that predictions of heightened consequences from higher-quality AI-generated content lack concrete evidence and remain speculative.

“I don’t think we’re there yet,” said Aimee Rinehart, senior product manager of AI strategy at the Associated Press. “It stands to reason that we are going to have some problems. But I don’t know if the internet is caving in on information that’s problematic yet.”

Why we fall for disinformation in the first place

Although media organizations are worried about realistic AI-generated content, “people tend to believe just a photoshopped image of a member or a politician with a fake phrase she never said,” said David Fernández Sancho, the CTO of Maldita.es, an independent fact-checking organization based in Spain.


by Muskan Bansal, International Journalists’ Network

Photo by Steve Johnson on Unsplash
