AI Pornography Training
A lawsuit alleges that a group of men built a business out of teaching others how to create AI-generated pornography, specifically by using the likenesses of unsuspecting women. This isn’t just about the creation of deepfakes; it’s about monetizing the exploitation of personal images for sexualized content and, critically, profiting from teaching others to do the same. It’s a deeply unsettling development that pushes the boundaries of current legal and ethical frameworks surrounding artificial intelligence.
The Allegations: A Digital Prey Machine
The core of the complaint, filed in Arizona, names Jackson Webb, Lucas Webb, and Beau Schultz, alongside fifty unnamed individuals, as defendants. Their alleged modus operandi? Scouring the internet for publicly available photos of young women—often with limited social media followings, presumably to minimize immediate pushback or legal hurdles—and then feeding these images into generative AI models. The goal, according to the suit, is to create hyper-realistic AI-generated models that mimic these real women and then, in many cases, digitally strip them of their clothes to produce explicit content.
This content, the lawsuit claims, was then sold on subscription platforms like Fanvue, generating significant revenue. But the alleged profitability doesn’t stop there. For a monthly fee—$24.95, to be exact—the defendants are accused of offering online courses and tutorials via platforms like Whop. These courses, under brands like AI ModelForge, reportedly provide a ‘playbook’ and ‘blueprints’ detailing how to scrape images and train AI models using CreatorCore software. The implication is clear: they’re not just profiting from the illicit content itself, but from teaching others the ‘skills’ to perpetrate similar acts.
“They provided a whole playbook, including instructions on how to pick the right person so that it’s not someone who can defend themselves, so they all had instructions on what type of women to use and where to get their pictures,” one of the plaintiffs claims. “It was disgusting on every single level.”
The sheer scale of the alleged operation is staggering. According to the suit, the CreatorCore platform alone had more than 8,000 subscribers in 2025, who generated more than 500,000 AI images and videos. The financial implications are stark, with reports suggesting the defendants earned over $50,000 in a single month from this venture alone.
The Unwitting Victims: A New Form of Digital Harassment
For individuals like ‘MG’ (whose identity is protected in the lawsuit), this isn’t an abstract ethical debate. It’s a terrifying violation of privacy and personal autonomy. She discovered her likeness being used in explicit AI-generated content and felt a profound loss of control over her own image. The chilling part? The AI models were reportedly so accurate, complete with tattoos matching her own, that someone who didn’t know her well could easily mistake the fabricated images for the real thing. This level of impersonation, weaponized for sexual exploitation, is a potent reminder of how vulnerable individuals are in the digital age.
The lawsuit explicitly states that the defendants encouraged their students to target women with fewer than 50,000 followers, a cynical attempt to avoid drawing the attention of law enforcement or facing more significant legal challenges. This detail transforms the alleged scheme from mere creation to active instruction in predatory behavior. It’s a disturbing feedback loop, where the perpetrators are not only exploiting victims but also cultivating a new generation of exploiters.
Nick Brand, one of the attorneys representing MG and the other plaintiffs, articulated the gravity of the situation: “These boys aren’t just using generative AI to disrobe women—they’re selling the ability to do so to other men and boys, who are then going to use other women’s images to do the same thing.” This highlights a systemic issue, where the technology itself becomes a tool for amplification of harm, and the business model exacerbates the problem by creating a demand for these exploitative ‘skills.’
AI’s Shadow: Beyond the Hype
While the public discourse around AI often centers on its potential for innovation, efficiency, and even creativity, this lawsuit forces a confrontation with its darker applications. The proliferation of courses like AI ModelForge, which pitch ‘side hustles’ in creating AI influencers, taps into a desire for quick financial gain, often without fully grasping—or perhaps deliberately ignoring—the ethical ramifications. The boastful social media posts from self-styled AI entrepreneurs about earning ‘hundreds of thousands of dollars’ paint a picture of an unregulated gold rush, where the source of the ‘raw material’—real people’s images—is treated as inconsequential.
This isn’t a scenario where AI is merely automating a mundane task; it’s a case of AI being actively employed to facilitate and scale a form of sexual harassment and exploitation that was previously more difficult and labor-intensive to execute. The speed and sophistication with which AI can now generate realistic images, combined with the ease of disseminating them online, create a potent cocktail for abuse.
What Happens Next? Legal and Ethical Crossroads
The outcome of this lawsuit will be significant. It will likely test the boundaries of existing laws concerning privacy, defamation, and intellectual property, particularly in the rapidly evolving landscape of AI-generated content. Can individuals effectively sue for the misuse of their likeness when it’s generated by an algorithm trained on their data? How can the platforms that facilitate both the training and distribution of such content be held accountable? These are questions that the legal system is only beginning to grapple with.
Ultimately, this case is a stark warning. It underscores the urgent need for strong legal and ethical guardrails around generative AI. The technological advancements are outpacing our societal ability to manage their consequences, and when profit motives collide with individual privacy and dignity, the results can be devastating. The promise of AI is immense, but so too is its potential for misuse—a reality this lawsuit brings into sharp, and disturbing, focus.
Will This Case Set a Precedent for AI Exploitation?
It certainly has the potential to. Lawsuits like this are crucial for establishing legal precedents in areas where the law has not yet caught up with technology. If the plaintiffs are successful, it could lead to clearer guidelines and stronger legal recourse for victims of AI-generated exploitation.
How Can I Protect My Images from Being Used for AI?
There is no foolproof defense, but you can reduce your exposure: limit the public availability of your images where possible, use strong privacy settings on social media, and be aware that even publicly available images can be scraped. Future solutions may involve digital watermarking or AI detection tools, but for now, personal vigilance is key.
Is This the Only Such Operation?
Unfortunately, given the alleged profitability and the ease of access to AI tools, it’s highly unlikely this is an isolated incident. The lawsuit highlights a broader trend that legal and ethical observers are watching closely.