Master’s student explores how legal tools can be used to redress harms of deepfakes

Set to be the biggest election year ever, with eight of the world’s ten most populous nations heading to the polls, 2024 will continue the trend of emerging technologies posing novel challenges to electoral outcomes, says Hayden Goldberg, a master’s student in Social Science of the Internet at Oriel College.

“In 2024, I believe the challenge will be deepfakes,” he says. “The central fear is that they will be used as a form of disinformation to change people’s opinion, especially at the last minute of an election when there isn’t time for the media or candidates to fact check or otherwise rebut what’s been fabricated.”

Goldberg began his master’s degree in 2023 and is taking modules on emerging internet technologies, subversive technologies, and technology regulation. In his thesis he is looking at regulatory solutions to the possible harms of deepfakes in electoral contexts. He believes existing legal tools can be used to redress harms caused by malicious uses of the technology.

“This global year of elections is providing numerous case studies for the harms deepfakes can cause as well as information about how best to regulate them,” he adds.

After graduating from the University of Washington with a bachelor’s degree in politics and economics, Goldberg completed a summer research project studying proposed regulatory and staffing mechanisms for a new redistricting commission in Ohio, USA.

“I’d like my work to have practical, tangible impacts on the world,” he says, before remarking on the guiding thread of his research: “Action is critical for my research on deepfakes, elections and law. I am attempting to find ways for people or candidates who are harmed by deepfakes to get redress by using existing legal tools.”

Less apocalyptic about AI than some, Goldberg is interested in subtler dangers posed by the technology relating to electoral administration, such as those regarding signature verification and matching.

“Young people do not have a regular, defined signature,” he explains, “so when signatures get validated with AI models, young people are flagged at higher rates than other age brackets.

“Along with the challenges posed by Voter ID laws, this is one of the foremost ways young people can become disenfranchised.”

Overall, Goldberg is sanguine about AI technology. He sees AI being used to “bring people into the democratic space who would otherwise not be a part of it.” Moreover, research suggests political microtargeting using large language models is less effective than once thought.

The EU’s 2023 AI Act is a promising start in regulating the technology, Goldberg says. But he adds that with deepfakes still unfamiliar and startlingly accurate to most people, the scope for them to wreak havoc is currently as high as it may ever be.

After his studies, Goldberg would like to attend law school, before going on to work “in some combination of AI, privacy and cybersecurity law and election law.”

His ambitions are cross-cutting. “I envision public policy, economic interests, and government as three points of a triangle,” he says.

“Law is critical to all three points, and I’d like to be somewhere in the middle of the triangle, serving as a translator and convener, bringing people together and helping to inform discussions and debates.”