
Brace Yourself for the 2024 Deepfake Election

“It consistently amazes me that in the physical world, when we release products there are really stringent guidelines,” Farid says. “You can’t release a product and hope it doesn’t kill your customer. But with software, we’re like, ‘This doesn’t really work, but let’s see what happens when we release it to billions of people.’”

If we start to see a significant number of deepfakes spreading during the election, it’s easy to imagine someone like Donald Trump sharing this kind of content on social media and claiming it’s real. A deepfake of President Biden saying something disqualifying could come out shortly before the election, and many people might never find out it was AI-generated. Research has consistently shown, after all, that fake news spreads further than real news. 

Even if deepfakes don’t become ubiquitous before the 2024 election, which is still 18 months away, the mere fact that this kind of content can be created could affect the election. Knowing that fraudulent images, audio, and video can be created relatively easily could make people distrust the legitimate material they come across.

“In some respects, deepfakes and generative AI don’t even need to be involved in the election for them to still cause disruption, because now the well has been poisoned with this idea that anything could be fake,” says Ajder. “That provides a really useful excuse if something inconvenient comes out featuring you. You can dismiss it as fake.”

So what can be done about this problem? One solution is an open standard called C2PA, developed by the Coalition for Content Provenance and Authenticity. This technology cryptographically signs any content created by a device, such as a phone or video camera, and documents who captured the image, where, and when. The cryptographic signature is then held on a centralized immutable ledger. This would allow people producing legitimate videos to show that they are, in fact, legitimate.
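The capture-and-sign flow can be sketched in a few lines. This is a toy illustration, not the C2PA specification: real implementations use certificate-based asymmetric signatures rather than a shared secret, and embed the provenance manifest in the file itself. Here an HMAC from the Python standard library stands in for the signature, and the key and field names are invented for the example.

```python
import hashlib
import hmac
import json
import time

# Illustrative stand-in: real C2PA signing uses asymmetric keys and
# X.509 certificates, not a shared secret like this HMAC key.
DEVICE_KEY = b"example-device-secret"

def sign_capture(content: bytes, device_id: str, location: str) -> dict:
    """Create a signed provenance record for a piece of captured content."""
    record = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "device_id": device_id,
        "location": location,
        "captured_at": int(time.time()),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_capture(content: bytes, record: dict) -> bool:
    """Check that the content and its metadata still match the signature."""
    claimed = dict(record)
    signature = claimed.pop("signature")
    # Any change to the content bytes breaks the recorded hash...
    if hashlib.sha256(content).hexdigest() != claimed["content_sha256"]:
        return False
    # ...and any change to the metadata breaks the signature.
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(signature, expected)
```

The point of the design is that tampering with either the media or its provenance metadata invalidates the record, so a video that verifies can be traced back to the device that captured it.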

Some other options involve what’s called fingerprinting and watermarking images and videos. Fingerprinting involves computing “hashes” from content—short digests derived from its data that change if the content changes—so it can be verified as legitimate later on. Watermarking, as you might expect, involves inserting a digital watermark into images and videos.
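A minimal sketch of both ideas: a fingerprint here is a cryptographic digest of the content bytes checked against a registry the original source publishes, and the watermark is a naive least-significant-bit embedding in raw pixel data. Both are simplified for illustration—production fingerprinting typically uses perceptual hashes that survive re-encoding, and real watermarks are designed to resist cropping and compression.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """SHA-256 digest of the raw content bytes."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical registry of fingerprints published by the original source.
KNOWN_FINGERPRINTS = {fingerprint(b"original campaign video bytes")}

def is_verified(data: bytes) -> bool:
    """True if the content exactly matches a published fingerprint."""
    return fingerprint(data) in KNOWN_FINGERPRINTS

def embed_watermark(pixels: bytes, mark: bytes) -> bytes:
    """Hide `mark` in the least-significant bits of `pixels`, one bit per byte."""
    bits = [(byte >> i) & 1 for byte in mark for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("image too small for watermark")
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # overwrite the lowest bit
    return bytes(out)

def extract_watermark(pixels: bytes, length: int) -> bytes:
    """Read back `length` bytes of watermark from the LSBs."""
    bits = [pixels[i] & 1 for i in range(length * 8)]
    out = bytearray()
    for i in range(0, len(bits), 8):
        byte = 0
        for bit in bits[i:i + 8]:
            byte = (byte << 1) | bit
        out.append(byte)
    return bytes(out)
```

Note the trade-off: an exact hash breaks the moment a platform re-encodes the file, while an LSB watermark is invisible to viewers but trivially destroyed by compression—which is why deployed systems use more robust variants of both.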

It’s often been proposed that AI tools can be developed to spot deepfakes, but Ajder isn’t sold on that solution. He says the technology isn’t reliable enough and that it won’t be able to keep up with the constantly changing generative AI tools that are being developed.

One last possibility for solving this problem would be to develop a sort of instant fact-checker for social media users. Aviv Ovadya, a researcher at the Berkman Klein Center for Internet & Society at Harvard, says you could highlight a piece of content in an app and send it to a contextualization engine that would inform you of its veracity.

“Media literacy that evolves at the rate of advances in this technology is not easy. You need it to be almost instantaneous—where you look at something that you see online and you can get context on that thing,” Ovadya says. “What is it you’re looking at? You could have it cross-referenced with sources you can trust.”

If you see a claim that might be fake news, the tool could quickly check it against reliable sources; if an image or video looks suspect, it could check whether the content has been verified. Ovadya says it could be available within apps like WhatsApp and Twitter, or could simply be its own app. The problem, he says, is that many founders he has spoken with simply don’t see a lot of money in developing such a tool.
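The core lookup in such a tool—matching flagged content against records published by sources the user trusts—might look roughly like this. The registry, its entries, and the fallback message are all hypothetical; a real contextualization engine would need perceptual matching and claim-level fact-checking, not just exact byte comparison.

```python
import hashlib

# Hypothetical registry mapping content fingerprints to trusted-source context.
TRUSTED_RECORDS = {
    hashlib.sha256(b"verified debate clip").hexdigest():
        "Published by C-SPAN on its official channel",
}

def contextualize(content: bytes) -> str:
    """Return what trusted sources say about this content, if anything."""
    digest = hashlib.sha256(content).hexdigest()
    return TRUSTED_RECORDS.get(
        digest, "No trusted source has verified this content"
    )
```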

Whether any of these possible solutions will be adopted before the 2024 election remains to be seen, but the threat is growing, and there’s a lot of money going into developing generative AI and little going into finding ways to prevent the spread of this kind of disinformation.

“I think we’re going to see a flood of tools, as we’re already seeing, but I think [AI-generated political content] will continue,” Ajder says. “Fundamentally, we’re not in a good position to be dealing with these incredibly fast-moving, powerful technologies.”
