Artificial intelligence (AI) has made significant advances in recent years, allowing for the creation of hyper-realistic media. Deepfakes, AI-generated videos designed to mimic real people and make them appear to say or do things they never did, are causing serious concern. From the harassment of Twitch streamers and students to political propaganda, deepfakes are being used to manipulate public perception.
What is a deepfake?
Deepfakes employ AI technology to generate entirely new video or audio content portraying events that never actually occurred. Deep learning algorithms underlie this technology, enabling the creation of realistic content featuring real people. While AI-generated media can take many forms, deepfakes are distinguished by how little human input the generation process requires.
How are deepfakes created?
Typically, deepfakes are created using deep neural networks that employ a face-swapping technique. The process requires a target video and a collection of video clips of the person to be inserted into it; the program then maps that person's likeness onto the target video by identifying shared facial features. Generative Adversarial Networks (GANs) are also used to refine deepfakes, making them harder for detection tools to identify.
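The adversarial idea behind GANs can be illustrated with a deliberately tiny sketch: a "generator" learns to produce samples that mimic a target distribution, while a "discriminator" learns to tell real samples from generated ones, and each improves by competing with the other. This is a minimal one-dimensional toy, not a real deepfake model (which would use deep convolutional networks on images); all parameter names and the choice of distribution are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator: a simple affine map z -> g_w * z + g_b
g_w, g_b = 1.0, 0.0
# Discriminator: logistic regression x -> sigmoid(d_w * x + d_b)
d_w, d_b = 0.1, 0.0

lr = 0.01
for step in range(3000):
    real = rng.normal(4.0, 1.25, size=64)   # samples from the "real" data
    z = rng.normal(0.0, 1.0, size=64)       # generator input noise
    fake = g_w * z + g_b                    # generated samples

    # --- Discriminator update: push D(real) -> 1 and D(fake) -> 0 ---
    p_real = sigmoid(d_w * real + d_b)
    p_fake = sigmoid(d_w * fake + d_b)
    # Gradients of the binary cross-entropy loss w.r.t. d_w, d_b
    grad_w = np.mean((p_real - 1.0) * real) + np.mean(p_fake * fake)
    grad_b = np.mean(p_real - 1.0) + np.mean(p_fake)
    d_w -= lr * grad_w
    d_b -= lr * grad_b

    # --- Generator update: push D(fake) -> 1 (fool the discriminator) ---
    fake = g_w * z + g_b
    p_fake = sigmoid(d_w * fake + d_b)
    # Gradient of -log D(fake), chained through the discriminator
    g_grad = (p_fake - 1.0) * d_w
    g_w -= lr * np.mean(g_grad * z)
    g_b -= lr * np.mean(g_grad)

# After training, generated samples should cluster near the real mean (4.0)
samples = g_w * rng.normal(0.0, 1.0, size=10000) + g_b
print(f"generated mean={samples.mean():.2f}, std={samples.std():.2f}")
```

The same tug-of-war, scaled up to millions of parameters and image data, is what lets GAN-based deepfake pipelines produce faces realistic enough to evade casual inspection.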
Creating deepfakes has become increasingly accessible with apps like Zao, DeepFace Lab, FakeApp, and Face Swap, as well as numerous deepfake software options available on GitHub.
How are deepfakes used?
Deepfake technology has often been used for illicit purposes, such as non-consensual pornography. In 2017, a Reddit user named “deepfakes” created a forum for porn featuring face-swapped actors, and Deeptrace reported that 96% of deepfake videos online in 2019 were pornographic.
Politics has also seen deepfake videos used to spread misinformation. For example, a Belgian political party released a deepfake video of Donald Trump calling for Belgium's withdrawal from the Paris climate agreement, a statement he never actually made.
However, deepfakes have also been used for positive purposes. The 2020 HBO documentary “Welcome to Chechnya” utilized deepfake technology to protect the identities of at-risk Russian LGBTQ refugees, while organizations like WITNESS have expressed optimism about the technology’s potential for human rights advocacy and political satire.
Deepfakes undoubtedly present a double-edged sword. On one hand, they pose significant risks to individual privacy and can contribute to the spread of misinformation, particularly in the realms of pornography and politics. On the other hand, deepfakes offer potential benefits in areas like human rights advocacy, documentary filmmaking, and satire.
As AI-generated media continues to evolve, it is crucial for the public to remain informed about deepfake technology and its capabilities. It is also essential for governments and tech companies to develop robust deepfake detection methods and implement regulations to protect individuals from the malicious use of this technology. Only then can society strike a balance between the benefits and risks posed by deepfakes.
Source: https://www.businessinsider.com/guides/tech/what-is-deepfake?r=US&IR=T