The deep-learning software used to make deepfakes has become cheap and accessible, raising questions about the potential for abuse. While there are plenty of examples that are benign and playful – Salvador Dalí taking selfies with museum patrons, for instance – the origins of the technology show how harmful it can be.
First fake
The term ‘deepfake’ first became widely used in 2017, after a Reddit user going by the name ‘Deepfakes’ posted pornographic videos in which performers’ faces were digitally replaced with those of female celebrities, such as Scarlett Johansson and Gal Gadot. For many, the videos crossed basic lines governing consent and harassment, and showcased a potent new tool for revenge porn. That ‘Deepfakes’ used Google’s free, open-source machine-learning software also drove home how easily a hobbyist, or anyone with an interest in the technology, could pass off falsehoods as reality.
Since then, other disturbing examples have appeared, including a video of Speaker of the US House of Representatives Nancy Pelosi altered to make her sound drunk – it circulated widely after Donald Trump posted it and Facebook refused to take it down – and a video created by two artists in which Facebook CEO Mark Zuckerberg appears to confess that his company “really owns the future”.
Actor Jordan Peele used a deepfake of Barack Obama to warn of the dangers of deepfakes, highlighting how they can distort reality in ways that could undermine people’s faith in trusted media sources and incite toxic behaviour. And while the vast majority of deepfakes are non-consensual pornography, not misinformation, some in the intelligence community have warned that foreign governments could spread deepfakes to disrupt or sway elections.
Legislating for lies
Social media companies have started to address the deepfake dilemma – Facebook ran a public contest in 2019 to help it develop deepfake-detection models, then banned deepfakes outright in early 2020 in anticipation of the damage they could do in an election year. Twitter now deletes reported deepfakes and blocks the accounts that publish them.
Governments are also putting forward laws to curb the technology. In 2019 California banned malicious deepfakes of political candidates in the run-up to elections and gave victims of pornographic deepfakes the right to sue, and in December 2020 the US Congress passed into law the Identifying Outputs of Generative Adversarial Networks Act, which directs federal agencies to support research into detecting manipulated media.
The entertainment industry has responded coolly to these protections, claiming too much oversight clamps down on free speech rights. In 2018, Walt Disney Company’s Vice President of Government Relations, Lisa Pitney, wrote that a proposed New York law that included controls on the use of “digital replicas” would “interfere with the right and ability of companies like ours to tell stories about real people and events. The public has an interest in those stories, and the First Amendment protects those who tell them.”
Others feel such legislation does not go far enough. Existing laws put the burden on users to identify deepfakes rather than on the platforms where they circulate. With social media companies exempt from regulation and no industry-wide standards yet in place, the platforms remain off the hook for now.
This article first appeared in The New Real Magazine on 2 March 2021.
Image credit: BuzzFeed / Monkeypaw Productions
The views expressed in this section are those of the contributors, and do not necessarily represent those of the University.