Mastering the metaverse: The rise of Deepfake Pornography

Words: Maia Appleby Melamed (She/Her)

It is fair to say that AI has become firmly entrenched within our virtual hive mind, as observers remain keenly focused on both the potential threats and the seemingly endless possibilities of machine learning. As this technology becomes irrefutably enmeshed within our society, legislators have scrambled to keep pace with the rapidly developing field. Amongst these efforts is recent UK legislation, announced on the 16th of April by the Minister for Victims and Safeguarding, Laura Farris MP. The law specifically targets the upsurge of ‘despicable individuals’ who create sexually explicit AI deepfakes without the consent of those depicted in the pornographic content.
The term deepfake – a blend of the ‘deep learning’ techniques behind AI and the ‘fake’ audio and/or images it generates – has reached new levels of recognition in the last few years, with the total number of deepfake videos online increasing by 550% between 2019 and 2023. These videos, ranging from Obama calling Trump a ‘dipshit’ to Greta Thunberg advocating for the use of renewable ‘wooden dildos’, have begun circulating the internet with unrelenting force.
When Googled, the term amasses over 200 million results; in 2024 it has never been easier or more accessible to forge digital content. This synthetic media, whilst capable of spreading fake news through the mouths of well-known celebrities, has most commonly manifested as pornographic content. With the number of ‘undressing’ or ‘nudifying’ apps now exceeding 200, it remains vital to acknowledge the (not so humble) beginnings of this synthetic media in order to fully comprehend the threats this tech poses and, therefore, what legislation is required to combat it.
As will sadly come as no surprise to anyone with an internet presence, the first deepfake was created to produce violent pornographic content. In 2017 a Reddit user created a subreddit called Deepfakes, which pasted celebrities’ faces onto existing pornographic performers’ videos and gained nearly 90,000 members before its ban. These videos were made using freely available open-source machine learning tools: the creator – who remains unknown – used Google to collect images of celebrities’ faces and trained an algorithm to swap the face of the porn performer with that of the celebrity. The popularity of this synthetically sexualised content has not wavered since 2017. In 2023, 98% of all deepfake videos uploaded online were pornographic, and today over 100,000 sexually explicit fabricated images and videos are uploaded daily. These figures demonstrate the increasing ease with which this non-consensual content can be generated, but the unrelenting demand for the total objectification and ownership of women’s bodies must be recognised as the ideological motivator behind this development in the field of AI.

Samantha Cole, a writer for Vice, has spent much of her career researching the evolution of deepfakes, writing in 2018 that they were ‘created as a way to own women’s bodies’. Cole’s expert and illuminating analysis calls for an understanding of our shared virtual space as one which undeniably replicates the existing inequalities of the real world. As a result, women have unsurprisingly been forced to bear the brunt of deepfake crime, with 99% of those targeted by deepfake pornography identifying as female. How, then, must our society shift its perspective to cultivate a more resilient and trustworthy virtual ecosystem? Whilst immediately criminalising the creation of deepfake pornography is vital to combat the spread of harmful synthetic media, it is much harder to foresee a future in which deepfake tech remains freely available without being used to replicate and aggravate the patriarchal, capitalist society in which it was created.
Whilst governments have begun to grapple with the deeply damaging effects of deepfakes through legislation such as that announced in the UK this April, it seems that this will not go far enough in disrupting the culture of misogyny which pervades online spaces. Luba Kassova, writing for The Guardian, assesses the ‘male-dominated AI companies and engineering schools’ which ‘appear to incubate a culture that fosters a profound lack of empathy towards the plight of women online’. Kassova’s analysis is further borne out by a report indicating that women account for only 28% of tech professionals in the US. This gender disparity underscores the fact that deepfakes, and the technology required to create them, are primarily tailored to male fantasies, reflecting the predominant influence of male perspectives within the virtual world.
Despite these disheartening figures, Breeze Liu, a survivor of revenge porn, has founded a company aimed at addressing the ‘lack of diversity and human-centric focus’ within the tech industry and the online sphere. Liu has transformed her struggle into a movement that fights against image-based abuse, culminating in the creation of a tool called Alecto AI. Available on the Apple App Store, the tool can detect whether users’ images are being used online without their consent. It has been devised as part of a wider movement which aims to redistribute power online and minimise the potential harm caused by the spread of deepfakes.
Whilst Liu’s app is still in its pilot phase and government regulation lags behind the swiftly evolving technological world, safety must remain at the forefront of all debates surrounding AI. Recent data on the misuse of deepfake technology has shown that with the emergence of new technology come new threats of abuse which, unsurprisingly, manifest through existing systems of oppression. It is therefore only by identifying who is at increased risk of online abuse that the right steps may be taken to cultivate a robust cyberspace for all, where new forms of abuse may be tackled with new, accessible, and liberatory technologies.

https://www.gov.uk/government/news/government-cracks-down-on-deepfakes-creation

https://www.youtube.com/watch?v=vYWe2ZT_DHE

https://www.youtube.com/watch?v=cQ54GDm1eL0

https://www.homesecurityheroes.com/state-of-deepfakes/#key-finding

https://www.channel4.com/news/exclusive-hundreds-of-british-celebrities-victims-of-deepfake-porn

https://www.homesecurityheroes.com/state-of-deepfakes/#key-findings

https://abcnews.go.com/US/white-house-calls-legislation-regulate-ai-amid-explicit/story?id=106718520

https://www.vice.com/en/article/nekqmd/deepfake-porn-origins-sexism-reddit-v25n2

https://www.theguardian.com/global-development/2024/mar/01/tech-bros-nonconsensual-sexual-deepfakes-videos-porn-law-taylor-swift

https://www.womentech.net/en-gb/women-in-tech-stats

https://alectoai.com
