Deepfakes are on the rise, leading to more mis- and disinformation on social media. AI-generated multimedia makes it increasingly difficult to separate fact from fiction.

The perpetrators behind deepfakes seek to distort public knowledge during elections, mislead people during crises, and more.

Take, for instance, the AI-generated images of an explosion at the U.S. Pentagon that went viral in May. Earlier, in March, false AI-generated images of former U.S. President Donald Trump being arrested and of Pope Francis wearing a puffer coat circulated widely on social media. During this year’s general election in Nigeria, manipulated audio spread online claiming that presidential candidate Atiku Abubakar, his running mate, Dr. Ifeanyi Okowa, and Sokoto State Governor Aminu Tambuwal were planning to rig the vote.

Deepfakes have undeniably raised important questions and concerns for the media, said Silas Jonathan, a researcher at Dubawa and a fellow at the African Academy for Open Source Investigation. The more that disinformers succeed in deploying deepfakes, the more credibility the media loses as audiences become less able to determine what is real and what isn’t, said Jeffrey Nyabor, a Ghanaian journalist and fact-checker at Dubawa Ghana.

AI can be both the problem and the solution, however.

Here are a few tools journalists can use to combat deepfakes:

Deep neural networks

TensorFlow and PyTorch are two free, open-source frameworks that journalists can use to build deep neural networks for spotting deepfakes. These networks can analyze images, videos and audio for signs of manipulation. To use them, you train a detection model on labeled examples of real and fake media so that it learns to differentiate between the two.

“These networks can learn from vast amounts of data to identify inconsistencies in facial expressions, movements or speech patterns that may indicate the presence of a deepfake,” said Jonathan. “Machine learning algorithms can also be trained to analyze and detect patterns in videos or images to determine whether they have been manipulated or generated by deepfake techniques.”
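The workflow Jonathan describes — training a classifier on labeled examples of real and fake media — can be sketched in miniature. The toy below uses only plain Python (no TensorFlow or PyTorch) and invented one-dimensional "artifact scores" standing in for real image features; it illustrates the train-then-classify loop, not a production deepfake detector.

```python
import math

def sigmoid(z):
    """Squash a raw score into a probability between 0 and 1."""
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical "artifact scores" per clip: authentic media tends to score
# low, manipulated media high. Real detectors learn thousands of such
# features automatically from pixels or audio waveforms.
real_scores = [0.10, 0.20, 0.15, 0.30, 0.25]   # label 0 = real
fake_scores = [0.80, 0.90, 0.70, 0.85, 0.95]   # label 1 = fake
data = [(x, 0) for x in real_scores] + [(x, 1) for x in fake_scores]

# Train a one-feature logistic-regression "network" by gradient descent.
w, b, lr = 0.0, 0.0, 0.5
for _ in range(2000):
    for x, y in data:
        p = sigmoid(w * x + b)      # model's current prediction
        w -= lr * (p - y) * x       # nudge weight toward correct label
        b -= lr * (p - y)           # nudge bias toward correct label

def classify(score):
    """Label an unseen clip's artifact score as 'real' or 'fake'."""
    return "fake" if sigmoid(w * score + b) > 0.5 else "real"
```

After training, `classify(0.12)` returns `"real"` and `classify(0.88)` returns `"fake"` for this toy data. A real TensorFlow or PyTorch model follows the same loop — predict, compare to the label, adjust weights — just with far more parameters and genuine media as input.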

Deepware

Deepware is an open-source technology primarily dedicated to detecting AI-generated videos. The website has a scanner to which videos can be uploaded to find out if they are synthetically manipulated.

Similar to other deepfake detectors, Deepware's models look for signs of manipulation on the human face. The tool's main limitation is its inability to spot voice-swapping techniques, which pose a much greater danger than face-swapping.

“The result is not always 100% accurate. Sometimes it is hit or miss, depending on how good the fake is, what language it is in and a few other factors,” said Mayowa Tijani, editor-at-large at TheCable.


by Salako Emmanuel, International Journalists’ Network


