Synthetic Bodies, Real Violations: The Legal Gaps Behind AI Deepfakes
By: Madison Ayala
Imagine this: you excitedly post pictures of your recent trip on your favorite social media platform. The next morning you wake up to discover that someone has taken those photos and turned them into something you never agreed to: something humiliating and impossible to fully erase. This is happening to someone right now.
Deepfake technology, which uses deep learning systems to manipulate human characteristics, first exploded into public awareness around 2017, when users on Reddit began swapping celebrity faces into sexually explicit videos (Karagianni & Doh, 2024). Currently, around 98 percent of deepfake videos online are pornographic, and 99 percent of those target women (Lazard et al., 2025). This is not technology merely drifting off course; it is the predictable result of artificial intelligence (AI) becoming easily accessible to the general public, and the legal system is still trying to figure out how to respond.
What Exactly Is a Deepfake?
Most sexualized deepfakes rely on something called face-swapping. AI systems, often built on autoencoders, learn to compress a person’s face into data and reconstruct it onto another body (Karagianni & Doh, 2024). This technology means that someone can download a few public photos from your social media platforms and insert your face into explicit content that never involved you. Popular open-source tools like DeepFaceLab or FaceSwap are widely accessible, and apps like REFACE have millions of users (Karagianni & Doh, 2024). What once required intensive technical skill now takes a few clicks. Creation is easy, and distribution is instant, yet removal is almost impossible.
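The shared-encoder, per-identity-decoder architecture described above can be illustrated with a toy sketch. This is purely conceptual: the weights are random and untrained, the dimensions are tiny, and the function names are my own. Real tools such as DeepFaceLab train deep convolutional networks on thousands of video frames; the point here is only to show the data flow that makes a swap possible.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: a "face" is a flattened 8x8 grayscale crop (64 values)
# compressed into an 8-dimensional latent code.
FACE_DIM, LATENT_DIM = 64, 8

# One shared encoder learns features common to both identities
# (pose, expression, lighting)...
W_enc = rng.normal(scale=0.1, size=(LATENT_DIM, FACE_DIM))
# ...while each identity gets its own decoder, trained only on that person.
W_dec_a = rng.normal(scale=0.1, size=(FACE_DIM, LATENT_DIM))
W_dec_b = rng.normal(scale=0.1, size=(FACE_DIM, LATENT_DIM))

def encode(face):
    """Compress a face into a small latent vector (shared weights)."""
    return np.tanh(W_enc @ face)

def decode(latent, W_dec):
    """Reconstruct a face from the latent code with an identity-specific decoder."""
    return W_dec @ latent

# The swap: encode person A's face, but decode with person B's decoder.
# After training, this renders B's identity in A's pose and expression.
face_a = rng.normal(size=FACE_DIM)
swapped = decode(encode(face_a), W_dec_b)
```

The design choice that makes abuse so easy is visible even in this sketch: the encoder is identity-agnostic, so once decoders exist for two people, swapping between them is a single function call.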
Researchers describe how deepfake pornography spreads in phases: from data collection to generation to mass distribution across platforms (Furizal et al., 2025). Once it circulates, the damage multiplies, snowballing far beyond the victim's control and causing real harm.
Digital Violence Is Still Violence
Many treat deepfakes as “just images,” but scholars increasingly frame non-consensual sexualized deepfakes as image-based sexual abuse that is part of a broader pattern of online violence against women (Lazard et al., 2025). As these images spread, victims report social distress, isolation, reputational harm, and even professional fallout (Furizal et al., 2025). The prospect of sending out your résumé while knowing that a quick search of your name could surface a convincing fake image is terrifying. Even the mere risk of it shifts how people present themselves online.
Karagianni and Doh analyze deepfake abuse through a feminist legal lens, arguing that these synthetic images function as a form of gender-based violence under European law (Karagianni & Doh, 2024). They connect deepfakes to broader systems that already police women’s bodies and voices in public spaces. This raises broader questions about power in considering who becomes the target and who carries the weight of digital exposure.
Why the Law Is Lagging Behind
Many revenge porn laws were written before generative AI became widely accessible, and they often assume there is a real, intimate image shared without consent. But deepfakes are fabricated; no original explicit image exists. Researchers have pointed out that weak or inconsistent legal definitions create gaps that perpetrators can exploit (Furizal et al., 2025). If a law criminalizes distributing “private sexual images,” does that include a fully synthetic image? Jurisdictions are split on this question.
There have been attempts at regulation, such as the European Union AI Act, the Digital Services Act, and directives targeting gender-based violence, all of which aim to address AI misuse and to hold platforms responsible. In the United States, laws like the TAKE IT DOWN Act focus on removing non-consensual intimate imagery (Karagianni & Doh, 2024; Furizal et al., 2025). But enforcement is uneven, and jurisdictional boundaries complicate everything. A deepfake generated in one country can go viral in another before any takedown request is even processed.
Consent in the Age of Algorithms
Detection tools are improving. AI can now flag manipulated pixels or watermark synthetic media, but technical fixes alone won’t solve the deeper issue (Furizal et al., 2025). Scholars argue that we have to look beyond whether deepfakes can be detected and ask why women are overwhelmingly targeted in the first place (Lazard et al., 2025). If the law fails to clearly recognize non-consensual deepfakes as a form of gender-based violence, it risks treating them as digital pranks instead of what they truly are: violations. The harsh reality is that AI is only going to get more realistic and harder to trace. The question is whether our legal systems will evolve just as fast and whether we’re willing to treat digital autonomy as seriously as physical autonomy. Synthetic bodies may be artificial, but the violations and repercussions are not.
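One of the technical fixes mentioned above, watermarking synthetic media, can be sketched in miniature. The simplest (and most fragile) approach hides a bit pattern in each pixel's least significant bit; the function names and the tiny 16x16 "image" below are my own illustrative choices, not any real provenance system. Production schemes rely on far more robust techniques, precisely because a naive watermark like this one is destroyed by re-encoding or resizing.

```python
import numpy as np

def embed_watermark(image, mark):
    """Hide a binary watermark in each pixel's least significant bit (LSB)."""
    flat = image.flatten()
    bits = np.resize(mark, flat.size).astype(np.uint8)  # tile mark over the image
    # Clear each pixel's LSB (AND with 0xFE), then write the watermark bit into it.
    return ((flat & 0xFE) | bits).reshape(image.shape)

def extract_watermark(image, mark_len):
    """Read back the first mark_len least-significant bits."""
    return image.flatten()[:mark_len] & 1

rng = np.random.default_rng(1)
image = rng.integers(0, 256, size=(16, 16), dtype=np.uint8)   # fake 8-bit image
mark = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)     # arbitrary signature

stamped = embed_watermark(image, mark)
recovered = extract_watermark(stamped, len(mark))             # equals mark
```

Each pixel changes by at most one intensity level, so the mark is invisible to the eye, which is exactly why detection tooling, not human inspection, has to do this work at scale.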
References
Furizal, Ma’arif, A., Maghfiroh, H., Suwarno, I., Prayogi, D., Kariyamin, Lonang, S., & Sharkawy, A.-N. (2025). Social, legal, and ethical implications of AI-generated deepfake pornography on digital platforms: A systematic literature review. Social Sciences & Humanities Open, 12, 101882. https://doi.org/10.1016/j.ssaho.2025.101882
Karagianni, A., & Doh, M. (2024). A feminist legal analysis of non-consensual sexualized deepfakes: Contextualizing its impact as AI-generated image-based violence under EU law. Porn Studies. Advance online publication. https://doi.org/10.1080/23268743.2024.2408277
Lazard, L., Capdevila, R., Turley, E. L., Gilfoyle, K., & Stavropoulou, N. (2025). Deepfake technology and gender-based violence: A scoping review. Trauma, Violence, & Abuse. Advance online publication. https://doi.org/10.1177/15248380251384271