Laid-off professionals train AI models that displaced them, earning $45 per hour in precarious gigs


Unemployed journalists, lawyers, and PhDs are being hired by companies like Mercor to train AI systems—the same technology that eliminated their original careers. Workers create prompts for AI, write ideal chatbot responses, and develop evaluation criteria for AI systems, often without knowing which company they're training models for or how long the work will last.

Laid-off Professionals Turn to AI Training After Job Displacement

Katya, a former content marketing professional whose career was upended by automation, found herself in an ironic predicament: training AI to do the same kind of work she had lost [1]. After struggling as a freelance journalist and pivoting to content marketing, she discovered that AI had automated much of her work. Desperate for income, she accepted a position with Mercor, a company that sells data used to train AI, starting at $45 per hour [2].

The application process itself reflected the gig economy's digital transformation. Katya interviewed with an AI named Melvin, a disembodied voice that analyzed her résumé and asked specific questions. Despite initial skepticism about what seemed like a scam, her dire financial situation, needing to secure housing quickly, pushed her to accept the offer and install monitoring software on her computer [1].

Source: The Verge

Human Effort in Improving AI Models Drives Industry Forward

Once hired, laid-off professionals like Katya joined hundreds of workers in Slack channels, creating prompts for AI and writing ideal chatbot responses [2]. Each task required several hours: workers wrote examples of prompts users might ask a chatbot, crafted the chatbot's ideal response, then developed detailed evaluation criteria defining what constituted a quality answer. This data was then passed along a digital assembly line for further review, though workers were never told which AI models they were training; managers referred only to "the client" [1].

This human expertise remains essential because machine-learning systems learn by finding patterns in enormous quantities of data that must first be sorted, labeled, and produced by people. ChatGPT achieved its fluency with the help of thousands of humans hired by companies such as Scale AI and Surge AI to write examples and grade responses [2]. About a year ago, the industry confronted a plateau in progress: chatbots sounded smart but remained too unreliable for practical use. Unlike software engineering, where code either compiles or doesn't, most professional activities lack objective tests for quality. AI companies responded by collectively paying billions of dollars to professionals, including lawyers and scientists, to create comprehensive criteria for evaluating work quality.

Precarious and Unstable Nature of Employment in Data Annotation

The reality of contract work in AI training reveals deep job insecurity. Just two days after Katya started, her project was abruptly paused, then canceled entirely [2]. "I'm working assuming that I can plan around this. I'm saving up for first and last month's rent for an apartment, and then I'm back on my ass. No warning, no security, nothing," she explained [1].

Days later, Mercor offered another position evaluating conversations between chatbots and real users, many of them people in Malaysia and Vietnam practicing English, against criteria like instruction-following and appropriateness of tone. The offer arrived at 6:30 PM on a Sunday night, with a Zoom onboarding call scheduled for 45 minutes later. Scarred by the previous project's sudden disappearance, Katya accepted immediately and worked until exhaustion [2].

Mercor itself was founded in 2023 by three then-19-year-olds from the Bay Area, Brendan Foody, Adarsh Hiremath, and Surya Midha, originally as a jobs platform using AI interviews to match overseas engineers with tech companies [1]. The company's evolution into data annotation work reflects broader shifts in the future of work, where highly educated professionals find themselves in unstable arrangements, training the very technology displacing them.

Societal Impact of Artificial Intelligence on Professional Careers

The irony isn't lost on workers like Katya, who described the situation as depressing: "My job is gone because of ChatGPT, and I was being invited to train the model to do the worst version of it imaginable" [2]. Yet despite the existential contradiction, she found the work engaging and the pay attractive. "It was like having a real job," she noted [1].

This pattern raises questions about the sustainability of professional careers as AI capabilities expand. While AI models currently require extensive human input to function effectively, the workers providing that input are simultaneously teaching systems to perform tasks that once required their specialized knowledge. The billions being invested in creating objective evaluation criteria for subjective professional work, from financial analysis to advertising copy, suggest AI companies are determined to automate domains previously considered resistant to automation. As these systems improve on data annotated by displaced professionals, employment may grow only more precarious, creating a cycle in which expertise is extracted from human workers to build the AI models that will further reduce demand for that same expertise.

Source: NYMag
