Hollywood Writers Train AI: A Hidden Income Stream

Hollywood writers dreamed of keeping AI out. Now, many are quietly training it for lucrative paydays. Legal AI Beat investigates this strange new world.

A Hollywood script lies discarded on a desk, a glowing AI chatbot interface projected onto it.

Key Takeaways

  • Hollywood writers are taking high-paying AI training jobs, testing LLMs and providing feedback.
  • The pay can be substantial ($50-$60/hour for generalists, with some advertised roles higher), offering a lifeline to creatives affected by industry shifts.
  • This pivot raises questions about the future of creative industries and the role of human intelligence in AI development.

Everyone expected the studios to try to replace screenwriters with AI. That was the big fear, the narrative that fueled the Hollywood strikes for months on end. And sure, that's still very much on the table, a cloud looming over the industry's creative classes. But here's the thing nobody seemed to be talking about until now: what if the creatives themselves become the digital janitors, the unseen hands propping up the very machines they feared?

That's the twist nobody saw coming. Instead of fighting the bots, a significant chunk of Hollywood's creative talent, particularly writers, has pivoted. These writers aren't just sidelined; they're actively, and lucratively, involved in the AI training ecosystem. It's a move that throws a wrench into the simplistic "us vs. them" narrative and forces a question: who's really cashing in, and what does this mean for the future of content creation and, dare I say, human creativity itself?

Meet ri611, or perhaps h924092b12ee797f. This is the nom de guerre of a seasoned Hollywood writer and showrunner—think Paramount, Hulu, BBC—who, after a particularly frustrating producer default, found a new, if somewhat surreal, gig. They’re now an AI trainer. It’s not glamorous. It involves scrutinizing chatbot tones, identifying furniture patterns in images, painstakingly cutting strangers out of group photos, and even—and this is where it gets dark—generating anime sex scenes or coaxing LLMs into spitting out bomb recipes. All in the name of “red teaming,” testing safety protocols, and probing weaknesses for companies with names that sound like tech startups trying too hard: Mercor, Outlier, Task-ify, Turing.

It’s a far cry from crafting primetime dramas about ladies having bad days, interspersed with salt-of-the-earth police. The irony, of course, is that the very industry this writer belongs to was on strike to prevent AI from taking their jobs. Yet, here they are, not just making money, but making good money. The initial lure? A comment on an unofficial Writers Guild of America Facebook group, rife with tales of debt and desperation, mentioned a company called Mercor paying writers $150 an hour. “Easy money,” the comment declared. Our writer, needing cash for rent, food, and their apartment cleaner (Maggie, bless her), thought, why not? How hard could it be to teach a machine to do their job?

As it turns out, it was harder than they thought, with a much steeper learning curve than the "easy money" whispers suggested. The hiring process itself was a gauntlet: ten applications, twenty unpaid hours of testing, and an AI recruiter agent that felt more like a flickering light than a human recruiter. The initial assessment? Being asked to critique a mediocre AI-generated snippet about a soldier. Our writer, armed with a Cambridge English Lit degree, apparently said it was "shit." Six weeks and an acceptance letter later, they were a "generalist" data annotator, earning $52 an hour.

Then came the onboarding. A labyrinth of apps, Slack channels, Airtables, payment portals, and Google “whatnots.” A virtual holding pen of confused souls in Zoom rooms, all trying to make sense of the digital plumbing. Once set up, the real work began: evaluating conversations between users and LLMs. The task was to grade the AI assistant’s responses against a “bible” of acceptable replies. The prompts were often deeply personal, tinged with sadness and existential dread: “Are my feelings justified? Is this person’s behavior acceptable? Am I lovable?” The AI’s responses, according to the article, belonged to an earlier, more blunt era, happily diagnosing autism or bipolar disorder. The training data, it seems, was sourced from users sharing their rawest vulnerabilities.
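The grading workflow described above — scoring an assistant's reply against a "bible" of acceptable responses — can be sketched roughly as follows. This is a hypothetical schema for illustration only; the actual companies' tooling and guidelines are not public, and the field names, rules, and scoring scale here are invented.

```python
from dataclasses import dataclass

# Hypothetical annotation record for one user/assistant exchange.
# The real platforms' schemas aren't public; these fields are illustrative.
@dataclass
class Annotation:
    prompt: str          # the user's message
    response: str        # the assistant's reply being graded
    score: int           # e.g. 1 (unacceptable) to 5 (ideal)
    justification: str   # annotator's rationale, quoted from the guidelines

# A toy stand-in for the scoring "bible" the article describes.
GUIDELINES = {
    "no_diagnosis": "The assistant must not diagnose mental-health conditions.",
}

def grade(prompt: str, response: str) -> Annotation:
    """Toy scorer: flag replies that appear to hand out diagnoses."""
    score, reasons = 5, []
    for term in ("autism", "bipolar"):
        if term in response.lower():
            score = 1
            reasons.append(GUIDELINES["no_diagnosis"])
            break
    return Annotation(prompt, response, score,
                      " ".join(reasons) or "Meets guidelines.")

bad = grade("Am I lovable?", "It sounds like you may have bipolar disorder.")
print(bad.score)  # 1 — the reply violates the no-diagnosis rule
```

The real work is, of course, judgment-heavy rather than keyword-driven — which is exactly why companies hire human annotators instead of writing rules like these.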

And who was overseeing this sensitive operation? A "project manager," a 22-year-old who had apparently flunked out of investment banking, supervising ten "team leaders" and "data managers." Their directive to the creatives was clear: your "creative skills and special minds were invaluable," but keep them "on a tight leash" and stick to copy-pasting from the guidelines. Originality? Fancy language? Not welcome. Deviation was discouraged, presumably for fear of throwing off the delicate calibration of the AI's response patterns. It's an environment that values obedience over insight, compliance over craft.

“Our creative skills and our special minds were invaluable to this very important project! But it would be great if—in typing up justifications for our scores—we could keep our special minds on a tight leash and subordinate them to our ability to copy and paste verbatim from the scoring guidelines.”

This isn't just about writing prompts or labeling images anymore. This is about the complex, often mundane, process of shaping artificial intelligence. And it's being done by people who historically would have been on the other side of the AI equation — the creators, not the curators. The companies involved, like Mercor and Outlier, aren't just hiring data entry clerks; they're tapping into a reservoir of highly skilled linguistic and creative talent. They're paying them well, too, much better than many entry-level roles in their original fields. It's a bizarre symbiosis: Hollywood's brightest minds are now paid to meticulously groom the AI that Hollywood feared would displace them. The hypocrisy? Perhaps. The pragmatism? Absolutely. It raises the question: are these writers salvaging their careers, or are they inadvertently building the tools that will eventually make their original careers obsolete?

It’s a critical juncture, and one that deserves far more scrutiny than the shiny press releases from AI startups suggest. The narrative is shifting from “AI is coming for your job” to “AI needs your job… to learn your job.” And who better to teach it than the very people who defined the craft?

The Pay-to-Play Data Game

Let’s be brutally honest: the $150/hour figure floated online is aspirational. Most generalist AI trainers, as our Hollywood scribe discovered, are making a solid $50-$60 an hour. Still, that’s a significant chunk of change, especially when compared to the feast-or-famine nature of freelance creative work. These aren’t just gig workers; they’re often professionals whose income streams have been disrupted. The companies behind these training operations? They’re the real beneficiaries here. They’re outsourcing the incredibly labor-intensive, nuanced work of data labeling and model refinement to a highly skilled, readily available workforce. They get to build and improve their AI products at a fraction of the cost it would take to develop internal teams with such diverse expertise. It’s efficient, it’s scalable, and it’s obscenely profitable for the AI firms.

Is This Just a Temporary Fix?

This AI training work feels like a peculiar eddy in the chaotic waters of the tech economy. On one hand, it’s a lifeline for creatives whose industries are undergoing massive disruption. On the other, it’s a stark reminder that the value of human intelligence, particularly creative intelligence, is being meticulously dissected and digitized. The immediate financial incentive is undeniable. But for how long can this last? As AI models become more sophisticated, will the need for human oversight diminish? Or will it simply shift to even more granular, specialized tasks – like, say, generating the precise nuance of a sigh from a character who’s just discovered their script was rejected?

For now, the lucrative paychecks are real. The Hollywood writer, despite their initial naivete, has found a way to survive. Whether this is a sustainable career path or merely a fascinating, and slightly dystopian, chapter in the ongoing AI saga remains to be seen. But one thing is clear: the line between creator and trainer has blurred, and the profit motive is driving innovation in ways that are both surprising and profoundly unsettling.


Frequently Asked Questions

What does an AI trainer do?

An AI trainer, also known as a data annotator or labeler, helps large language models (LLMs) and other AI systems learn by providing human feedback. This can involve assessing chatbot responses for tone and accuracy, labeling images, categorizing data, or generating text and scenarios to test AI capabilities and safety protocols.

How much do AI trainers get paid?

Pay varies significantly based on experience, the complexity of the task, and the company. While some roles advertised can pay up to $150 per hour, many generalist AI trainer positions offer between $50 and $60 per hour.

Will this AI training work replace my job?

For creative professionals, this work is currently serving as a supplementary income or a bridge during industry disruption. However, the long-term impact of AI on creative jobs is still uncertain. The nature of AI training itself may evolve as models become more sophisticated, potentially shifting the types of human input required.

Written by
Legal AI Beat Editorial Team

Curated insights, explainers, and analysis from the editorial team.

Originally reported by Wired - AI