Deepfake Abuse: The Escalating Violence Against Women & Girls Online – UN Report
The images surfaced on X, the platform formerly known as Twitter, in December 2025, when UK journalist Daisy Dixon discovered AI-generated, sexualized images of herself created with the platform’s own Grok AI tool. It took days for X to geoblock the function, and even then the abuse continued to spread.
Dixon’s experience is not isolated. Deepfake abuse – the manipulation of images, audio, or video using artificial intelligence to depict someone doing or saying something they never did – is escalating, overwhelmingly targeting women and girls. While the technology isn’t new, its weaponization against individuals is a rapidly growing phenomenon, with devastating consequences that extend far beyond the digital realm.
According to a 2023 report, 98 percent of all deepfake videos online are pornographic, and 99 percent of those depict women. The prevalence of these videos has increased by an estimated 550 percent since 2019. The tools to create them are widely available, often free, and require minimal technical skill, making the barrier to entry for perpetrators exceptionally low. Once posted, AI-generated content can be endlessly replicated, saved, and shared, rendering complete removal virtually impossible.
Underreporting remains a significant obstacle to accountability. Survivors who do come forward often face re-traumatization, not only from the initial abuse but also from the reporting process and any subsequent legal action. As one field manual on documenting online harassment notes, survivors are frequently asked to repeatedly view and describe the abusive content to police, lawyers, and platform moderators, and may be subjected to questioning about their own behavior.
The harm extends beyond the online world. A UN Women survey found that 41 percent of women in public life who experienced digital violence also reported facing offline attacks or harassment linked to it. In some cultural contexts, deepfake abuse can even serve as a catalyst for so-called “honour-based crimes,” where perceived breaches of social norms result in extreme physical violence or death.
Despite the scale of the problem, prosecutions are rare. Legal frameworks are struggling to keep pace with the technology. Fewer than half of countries have laws addressing online abuse, and fewer still have legislation specifically covering AI-generated deepfake content. Existing “revenge porn” or image-based abuse laws were largely written before the advent of deepfakes, creating significant loopholes. In many jurisdictions, deepfake pornography and AI-generated nude images occupy a legal grey area, leaving survivors unsure of their rights and of whether prosecution is even possible.
Enforcement is further hampered by a lack of resources and expertise. Even when laws exist, investigators require specialized digital forensics skills, cross-border coordination, and cooperation from platforms – resources that are often in short supply. Evidence can disappear quickly as content spreads, and perpetrators frequently hide behind anonymity or operate across multiple jurisdictions. Platforms are often slow or unwilling to share data with law enforcement, particularly in cross-border cases, and digital forensics backlogs can stall investigations before they even begin.
Tech platforms have historically shielded themselves behind “intermediary” status, avoiding responsibility for user-generated content. However, mounting pressure is forcing a re-evaluation of this approach. The United Kingdom’s Online Safety Act, for example, prohibits sharing digitally manipulated explicit images, though its applicability to the creation of deepfakes remains unclear, particularly where intent to cause distress cannot be proven.
Some jurisdictions are beginning to take action. Brazil amended its criminal code in 2025 to increase penalties for psychological violence against women committed using AI or other technology to alter their image or voice. The European Union’s Artificial Intelligence (AI) Act imposes transparency obligations around deepfakes. In the United States, the Take It Down Act explicitly covers AI-generated intimate imagery and requires platforms to remove it within 48 hours.
Addressing deepfake abuse requires urgent, coordinated action from governments, institutions, and tech platforms. This includes passing legislation with clear definitions of AI-generated abuse, investing in law enforcement training and resources, holding platforms accountable for proactively monitoring and removing abusive content, providing real support for survivors, and prioritizing digital literacy and consent education. As UN Women warns, this is not a niche internet problem; it is a global crisis.
More than half of deepfake victims in the United States have contemplated suicide, according to recent research. The legal and technological battles continue, but for survivors like Daisy Dixon, the immediate need is for platforms to act decisively and for justice systems to recognize the profound harm caused by this rapidly evolving form of digital violence.
