2 Sources
[1]
The $10 Billion Startup Training AI to Replace the White-Collar Workforce
When Tasha Kozak, a social worker for the Hillsborough County Public Schools in Tampa, met the family, they were living in a car. The three children's grades had been slipping. The mother was exhausted. The father was out of the picture. Kozak began helping them connect to housing resources. She checked in with the children every couple of days at school. She called the mother every three or four. After several months the family found stable housing. The children began to improve in school. "I saw the mom got her glow back, and she started getting more consistent work shifts," Kozak says.

Moments like that are why Kozak does the work. She didn't set out to become a social worker, but after taking an elective in it in college, she changed her major, and more than a decade later, she still calls social work her passion. The job, she says, is about listening, connecting, helping in a way that feels instantly meaningful. In her view, about 70% of effective social work is that human, relational component. The rest is administrative.

On many evenings, after finishing her full-time job with the district, Kozak logs in to a website called Mercor. The San Francisco startup Mercor.io Corp. recruits workers in high-skill fields -- doctors, lawyers, investment bankers, journalists, social workers, you name it -- to help teach artificial intelligence systems how to do their work. The company places these experts in part-time or temporary contract roles to work on training projects for major tech companies and AI labs, clients that have included OpenAI, Anthropic and Meta. It's like the Uber of advanced AI training: a gig-work platform for white-collar and skilled professionals that offers a path for them to earn something extra from their expertise -- at the risk of eventually sacrificing their careers to AI.

Like many of Mercor's contractors, Kozak first heard about the company through a LinkedIn job posting.
School had just let out for the summer in 2025, so she had spare time, and she was curious about the rise of AI. She landed an interview and found herself on camera talking to an AI agent with a gentle female voice and a surprisingly natural, conversational style. When Kozak described a specific case she'd handled, the agent followed up with targeted prompts -- "Tell me more about the parent in that situation" -- with the kind of probing detail she was used to hearing from human supervisors. Two weeks later she was working on her first assignments, and today she earns as much money training AI for 20 hours a week as she does in 40 hours helping actual families. "Social work is a very underpaid occupation," she says.

Her work for Mercor is methodical and narrow. Kozak is part of a virtual team that writes prompts for an AI model to perform specific social work tasks based on fictitious case files -- for instance, asking the AI to produce a social developmental history of an elementary school student who needs an individualized education plan. The case file includes notes from a parent interview, a student interview, a review of student records and a doctor's report. Another team reviews the AI's responses, and still another handles additional pieces of the training process. There are hundreds of people on the project, Kozak says, maybe more; her team alone includes about 40 contractors. Hour by hour, highly segmented task by task, they translate professional judgment into training data.

Across the US, other professionals are doing the same. In San Francisco a doctor of internal medicine named Melania Poonacha works 50 to 60 hours a week as a full-time pediatric hospitalist on the night shift and logs in to Mercor on her days off for an additional 10 hours or so, asking an AI model to interpret lab results and evaluating its work.
In Baton Rouge, Louisiana, the novelist and screenwriter Robin Palmer Blanche started evaluating AI-generated creative writing for voice and structure last August. Palmer Blanche, a mother of two who's struggled to make a living from writing since the 2023 Hollywood writers strike, is part of a large swath of Mercor workers who are underemployed in their professions and use the platform to patch together income. She sometimes finds herself chatting with a Mercor teammate on Slack, only to realize there are other novelists doing it too -- "Oh, my God, I read that woman's book last year and fell in love with it."

Mercor has tens of thousands of such experts working on its platform, screened using those AI-run interviews and project-specific skills tests and practice tasks. In the three years since the company started, it's raised almost $500 million in venture capital from a who's who across Silicon Valley and finance, including Benchmark, General Catalyst, Peter Thiel, Jack Dorsey and Larry Summers. The most recent round of funding, in October, valued Mercor at $10 billion, five times what it was thought to be worth just a few months earlier. According to the company, it's been profitable since its inception, it pays out more than $2 million per day to contractors, and it has about 300 full-time employees, largely engineers and project managers. The founders -- Brendan Foody, Adarsh Hiremath and Surya Midha, three early-20s college dropouts who were buddies in high school -- have become the youngest-ever self-made billionaires, at least on paper.

Mercor's rapid rise has also brought controversy, including several class-action lawsuits now moving through California courts and a recent data breach that raised questions about how the company handles sensitive information and led one of its clients -- Meta -- to indefinitely pause its work with the startup.
If Mercor and its backers are right, the company is uniquely positioned to help AI become the economic force Silicon Valley has been promising. Most of what's been seen from AI so far, they argue, has been something of a dazzling consumer demo. Going beyond that -- making the technology perform reliably in professional fields where mistakes carry real consequences -- is fundamentally about feeding the models better training data, says Sundeep Peechu, managing partner at the VC firm Felicis Ventures, which led Mercor's two most recent funding rounds. "The first generation of data was from the internet," he says, "and that allowed companies to build very general-purpose models." But for AI to become "truly economically useful and not just a toy thing," someone has to get humans to tell the model, step by step, how they actually do their work.

At this moment it's not hard to understand why people are so willing to feed the machines that might one day render them obsolete. Anxiety about job loss is everywhere: Job openings in professional and business services have fallen by more than a million from their post-pandemic peak in 2022, according to the Bureau of Labor Statistics. About 42% of recent college grads are underemployed, according to a 2025 report from the Federal Reserve Bank of New York, and a study last year by the American Psychological Association found that 54% of US workers are experiencing significant stress about job insecurity. Mercor's listings -- sometimes ads from the company itself and sometimes from third parties using Mercor links that allow them to collect referral fees -- saturate LinkedIn and other job boards, offering a respite from all that.
Mercor says its average hourly pay is about $90, though the range is wide, from generalists barely scratching minimum wage to elite coders or Ph.D.s making $250 or $300 per hour. Someone working on Mercor projects full time at $90 per hour would earn almost $190,000 a year.

There are plenty of workers who view Mercor as a grim final stop to monetize their expertise before professional extinction. Kozak doesn't see it that way, saying her reason for doing the work is far more practical. She envisions herself offloading social work's tedious 30% -- the paperwork, the reports -- so she can spend more time doing the parts that can't be automated: coaching someone through a bureaucratic maze, gaining a grieving mother's trust. She worries about the potential for AI to develop "systematic biases" -- say, overlooking cultural differences -- but sees her training as all the more valuable for that reason. Dr. Poonacha is more explicit about the stakes for her own career. "I am doing this just because I don't want to become obsolete," she says. Medicine is evolving rapidly, and "AI is going to be part of that evolution, whether we want to participate in it or not."

"If history has taught us anything about revolutions and productivity, it's that productivity is the tide that lifts all boats," Brendan Foody declares one morning in February. The sandy-haired 23-year-old co-chief executive officer and co-founder of Mercor has the standard techno-optimist's conversational habit of jumping from earthly matters -- in this case, the field of management consulting -- to historical analogies and abstract principles. Asked how a future McKinsey & Co. consultant, replaced by AI, would gain expertise without the traditional entry-level drudgery, Foody zooms out to a mainstay of Silicon Valley thinking: Two hundred years ago, when most Americans were farmers, the tractor didn't destroy work; it pushed people into different kinds of employment.
AI will do the same, he says -- eliminating some tasks, yes, but ultimately creating more value, more opportunity, more progress. History offers some support for that view, but the path from one kind of work to another has rarely been painless or quick. "We need to cure cancer and solve climate change and go to Mars," Foody says. "And I think humans will work on a lot of those things once we have more productivity in accounting or whatever these more back-office functions are."

From where Foody sits -- in a glass-walled conference room on the 33rd floor of a San Francisco skyscraper -- the bay extends beneath him in miniature, with container ships sliding past and ferries carrying commuters to innumerable office buildings. From this height the physical economy looks less like the labor of millions of individuals living full lives and more like a table model whose tiny pieces could be shuffled with mere keystrokes. "Instead of being Luddites and leaning against the technology, we should instead focus on what are the jobs of the future that we need to lean into," he says.

Foody acquired his rhetorical instincts as a competitive debater and perpetual hustler in high school in San Jose, where he met Hiremath and Midha. All three were raised in tech industry households: Hiremath and Midha are the sons of Silicon Valley engineers; Foody's father founded an interactive graphics company, and his mother worked in Meta Platforms Inc.'s real estate division. As high schoolers, Foody earned "hundreds of thousands of dollars" as a consultant for sneaker resellers, while Hiremath was geeking out on computer-vision research. (Midha declined to speak to Bloomberg Businessweek.) By their second year of college -- Hiremath at Harvard University, Foody and Midha at Georgetown -- the three figured they'd learned enough and applied for a Thiel Fellowship, which pays kids $200,000 to leave college and start companies in fields such as cryptocurrency and human longevity.
"I think people overestimate the value you get out of a four-year degree but underestimate the value you get in one or two years," Hiremath says. "The bulk of the personal development just comes from living on your own for the first time." Knowledge acquisition is even less relevant today than it was three years ago, he says, "because knowledge is just kind of free in ChatGPT. The knowledge that I want I can just obtain with a couple of prompts."

The three had just begun to zero in on AI training when they started the fellowship in early 2024. The company's original idea was a far more sweeping version of LinkedIn: The labor market might be the most inefficient marketplace in the world, Foody figured, with billions of people looking for work, millions of companies looking for talent and no central clearinghouse connecting the two. Mercor would build a kind of global labor aggregator where everyone everywhere could apply and interview and be vetted for every job, and algorithms would match workers to opportunities with near-perfect precision. In Foody's telling, even the concept of a full-time job with a single employer was something of a relic. What companies actually needed were discrete units of expertise -- tasks that could be broken apart, distributed and completed by whoever in the world happened to be best suited to them. To make any of that possible would require assembling an extraordinary amount of data about people.

They began with an AI-powered tool for screening software engineers that served as a prototype for the far bigger system they imagined. They changed focus, though, when AI companies began to need not only engineers but also other domain experts who could help train and test the models. Mercor built a tech platform to host the training itself, and demand exploded. Venture capitalists, eager to fund anything tied to the AI boom, started calling.
Mercor raised $3 million in early 2024, led by General Catalyst; it raised another round every six or eight months after that -- $30 million, $100 million, then $350 million last October. Foody, Hiremath and Midha were also working around the clock, encouraging their growing team to adhere to a schedule known as 996 -- in the office from 9 a.m. to 9 p.m., Monday through Saturday. (The regimen first became popular in China in the 2010s as the country built out its tech industry; it's now illegal there.) Felicis investor Peechu says Foody and his partners were so hard to pin down that he eventually flew with them to Las Vegas on a Sunday -- the one day they weren't working -- to drive Ferraris on a racetrack, just so he could get a little uninterrupted time with them on the plane.

The founders weren't old enough to drink at that time, and none of them had ever held a professional job -- but the hundreds of millions of dollars being plowed into their coffers came with a mandate to remake the very nature of white-collar work. In Silicon Valley, that didn't raise any alarms about naiveté or blind spots. Quite the contrary. Adam D'Angelo, a Mercor investor who became Facebook's chief technology officer at age 22 before co-founding Quora, says he asked Foody at one point about his work experience. Foody said he'd had an internship one summer. "Perfect," D'Angelo said. "Your mind isn't corrupted by the conventional way of doing things."

Over the past year, criticism of Mercor's platform has become a refrain in online forums, news reports and court filings -- beginning, as these things often do, with anonymous online posts. Reddit, in particular, contains no shortage of grievances from contractors who've cycled through the platform: Projects are "chaotic, disorganized and unpredictable." Workers are "treated like human cattle." The company is "building the plane while flying."
Contractors sign strict confidentiality agreements before beginning work, and many fear being removed from projects if they speak publicly. They gripe about Slack channels full of motivational chatter and rocket-ship emoji from project managers, with few concrete answers about when work will arrive and no advance notice when it stops. Like many forms of digital piecework, Mercor's projects are closely measured and monitored while they're underway, with software tracking productivity and time spent on each assignment. It turns out some jobs pay not by the hour but by the task.

A frustrated contractor Businessweek spoke with, who asked for anonymity because he'd signed one of Mercor's nondisclosure agreements, has been job hunting for almost a year for a full-time position to make use of his master's degree in physics. After he joined Mercor, his first contract paid $30 per task, but he soon learned that the time it took for him to complete tasks -- anywhere from an hour to a full day -- was often far greater than the company's estimates, making the pay too low to justify the work. It's a variation of a complaint that surfaces online about hourly projects too: Contractors sometimes quietly do part of their work off the clock to keep their productivity numbers in line with company benchmarks out of fear of being "offboarded," Mercor-speak for getting booted. A Mercor spokesperson says that "many experienced contractors prefer task-based work because increased efficiency can lead to higher effective hourly earnings."

Last fall one Mercor contractor dispute spilled into the press when Forbes reported that thousands of people working on a large project were abruptly locked out without warning. Several hours later, some said, they were invited back to continue the work -- but at pay rates roughly a quarter lower than before. Mercor disputed the accuracy of the claims and said it was working to offer more predictability.
The company also alleges that some of the people who complain about being let go are simply liars. "We've found cases where people have been doing no work or doing time fraud," Hiremath says. "We have demonstrable evidence of them committing fraud, but they still post about it on Reddit when their contract has been eliminated."

As criticism of Mercor has grown, lawsuits have followed. One of the complaints, filed late last year by a finance professional named Michael Cox, who worked on Mercor projects, lists OpenAI as a co-defendant and accuses the companies of running what it calls a "scheme to misclassify workers" while exercising the kind of control normally associated with an employer. According to the filing, Mercor required Cox to install productivity-monitoring software on his computer, creating a "level of surveillance ... so intrusive as to trivialize the notion that this was a legitimate independent-contractor relationship." In another passage, the complaint says the company's alleged neglect of employment norms was "so brazen as to almost, by definition, constitute willful misclassification." (Mercor and OpenAI haven't publicly responded to the suit, and a Mercor spokesperson says they "don't plan to at this time"; two other suits against Mercor make similar misclassification claims.)

Meanwhile, if you were a job seeker on LinkedIn last fall, Mercor suddenly seemed to be everywhere. Listings offering hundreds of dollars an hour to lawyers, doctors and programmers flooded the platform, prompting another wave of social media speculation -- this one less about working conditions than about whether the whole thing was real. Some users began posting on various sites speculating that the listings were an elaborate data-harvesting scheme in which fake applications and AI interviews were designed to collect valuable personal information. Mercor says that this isn't the case and that it uses interviews only to evaluate candidates' skills for jobs on the platform.
And, again, the company suggests its critics are the scammers. Foody says the LinkedIn deluge wasn't about Mercor harvesting data but about a few high-volume fraudsters harvesting referral fees, because it pays users to bring new contractors into the fold. "There were probably 10 people specifically that were causing problems," he says. In October, Mercor banned referrers from using the name Mercor when they post job ads for the roles on LinkedIn. Still, postings from third-party referral outfits have continued. Mercor listings from one such organization, a recruiter called Crossing Hurdles based on the outskirts of New Delhi, are a constant presence on LinkedIn. A Mercor spokesperson tells Businessweek that "Crossing Hurdles has no official affiliation or partnership with Mercor. They post roles on various job sites using their Mercor referral link." (Crossing Hurdles didn't respond to multiple requests for comment.)

Mercor made another change last fall. It moved its third co-founder, Midha, into a new position, chairman of the board, handing his previous role running operations to a more seasoned executive, Sundeep Jain, a former chief product officer at Uber Technologies Inc. Invoking his previous employer, Jain says that the complaints circulating online aren't surprising and that early-stage marketplaces rarely distribute work evenly. "There will be some drivers that will be busy all day and others that'll be a little bit less busy," he says. Communication hiccups, payment disputes, mismatched expectations: "Those are classic problems of a marketplace."

But on March 31 the young company, growing at breakneck speed and reliant on a handful of high-profile clients, experienced something far worse than a hiccup. Mercor disclosed that attackers had infiltrated its systems in a sprawling supply chain hack. The breach, coming through a corrupted open-source developer tool called LiteLLM that thousands of companies use, struck at the core of Mercor's business.
As much as 4 terabytes of data, possibly including training data and users' personal information, were exposed, according to online posts from the hackers. The company announced it was "conducting a thorough investigation supported by leading third-party forensics experts," but the fallout was swift: Along with Meta pausing its projects with Mercor, contractors assigned to those projects suddenly found themselves without work. In the first week after announcing the breach, Mercor was hit with five lawsuits accusing it of failing to protect contractors' data.

Mercor declined to respond to Businessweek's questions about the origin and extent of the hack and its impact on clients and users. OpenAI, Anthropic and Google didn't respond to requests for comment on whether they're now reevaluating their relationship with Mercor.

Beyond the messy day-to-day mechanics of atomizing jobs into training tasks is a bigger question: Are the machines actually getting good enough to do the work themselves? In the past year the AI industry has tried to answer that question, mostly with a growing ecosystem of "evals," or evaluation frameworks. Like a standardized test, an eval measures a large language model's math ability, reasoning or factual accuracy -- but not typically the kinds of specific, judgment-heavy tasks that white-collar workers do all day. Mercor has developed its own test, a project called APEX (AI Productivity Index), that attempts to do just that. It measures professional performance and then posts the results on the web for anyone to follow as a kind of industry leaderboard. As Foody put it when announcing the initiative last October, "AI can pass the bar exam. But can it redline a contract?"

To build APEX, Mercor has worked with some of its most accomplished contractors to design short professional scenarios and problems in five fields so far: law, medicine, management consulting, investment banking and software engineering.
Advisers overseeing the effort include Harvard law professor Cass Sunstein, cardiologist Eric Topol and former McKinsey global managing partner Dominic Barton. Former Treasury Secretary and Mercor investor Larry Summers was, until recently, also affiliated with the project. Beneath them is a larger bench of experts -- more than a hundred lawyers, bankers, consultants and clinicians -- who build and review the actual test cases. The models do the work, and their responses are graded against expert benchmarks to see how close they come to professional quality.

One case, for instance, centers on a fictional wellness company expanding overseas. A Google Workspace contains the kind of unwieldy digital paper trail a junior consultant might inherit on the first day of a project -- sales data, customer surveys, cost projections and strategy memos scattered across spreadsheets and PDFs. Mercor's experts prompt AI models to answer strategic questions, such as calculating how rising ingredient costs might affect pricing or recommending an expansion strategy, and define what the correct answer should look like. Then the work becomes even more granular: building a checklist, or rubric, of precise criteria the model's output must satisfy. Only once the prompt and rubric are complete do the models attempt the task, their answers graded against the rubric line by line.

So far the models fall short. The best can produce useful work in certain areas, but they're not exactly reliable employees. Mercor's own research notes that top systems still "struggle on complex real-world tasks, failing to meet the production bar." The results mirror what other attempts to measure AI's real-world impact have been finding. In March, Anthropic released a chart that compared what AI systems appear capable of doing with how often they're actually being used on the job. The gap was striking, and the chart went viral.
White-collar knowledge-work fields looked highly exposed to AI in theory, but in practice only a small slice of the work was being handled that way in the real world. The research drew criticism for its methodology (for one thing, the measure of AI's theoretical potential is inherently subjective and doesn't account for logistical or legal hurdles to adoption), but the basic takeaway -- that AI still has a long way to go -- was hard to dispute.

Meanwhile, Mercor says it's seeing marked improvement in APEX scores as the AI giants release upgrades. A recent ChatGPT model tops its current rankings, but "Opus is the one that has blown us all away," says Foody, referring to Anthropic's Claude Opus 4.6, which performed 18% better than its predecessor after just a few months. Still, even though APEX tries to approximate real professional scenarios, forcing agents to navigate complex environments and choose the right tools for a given task, it's more structured than the open-ended work people actually do. Gartner analyst Vuk Janosevic says APEX is a "credible bridge between lab performance and business usefulness." But he cautions that a high score on this benchmark "does not prove that the system can be governed and integrated at scale inside a live process."

And then there are the human aspects of the work that APEX may never be able to measure. When Dr. Poonacha examines a patient, for instance, a surprising amount of information comes not from lab reports or imaging but from touch. A trained hand on an abdomen can detect subtle tension or swelling that doesn't appear in a chart. "I just don't think AI is going to be able to do that," she says.

Still, Kozak's notion that only 30% of her social work will be replaced by AI might be wildly optimistic. Investors are certainly hoping so. When Jack Dorsey's financial technology conglomerate, Block Inc., laid off 40% of its workforce in March -- ostensibly because of efficiency gains from AI -- its stock soared 20%.
Mass layoff announcements at Meta and Amazon.com Inc. have been met with similar pops in the stock price. After all, the market optimizes for shareholder value, not professional enrichment. Kristalina Georgieva, managing director of the International Monetary Fund, has warned that AI will affect roughly 40% of global jobs in "the next few years."

Foody argues that the transition will ultimately create new categories of work. And what kinds of jobs are those? "We believe that a large portion of what humans do in companies is going to transform to training agents," he says.

Mercor is already gearing up to cash in on that future. Earlier this year the company elevated Hiremath to co-CEO and began pushing into a new line of business, helping corporations deploy agents. Mercor is pitching itself as a sort of agent-implementation partner, not designing agents itself but building the guardrails to steer that AI in the right way. This opportunity, Hiremath says, is enormous, and it could give the company a far larger client base than a few AI labs. "All the Fortune 500, all the Fortune 1000, could want to integrate models into their own workflows," he says. "And they're kind of clueless." In this model, Mercor is suddenly in the technology consulting game too.

While Mercor makes a play for an even bigger part of the white-collar workforce, its contractors sometimes find themselves pondering what Poonacha calls "very dystopian" implications. What if patients begin to trust AI over their physicians? What if wealthier people continue to have access to human doctors, and poorer people just get the AI ones? Mercor's new operations head, Jain, meanwhile, is consumed by imagining the limitless number of disciplines and workflows still left to automate. Chefs and private investigators are already in progress, and any number of supposedly AI-proof trades such as plumbing are no less exposed than, say, medicine. "If there is a ceiling," Jain says, "we're nowhere near it."
[2]
In the future, will there be any work left for people to do? | Fortune
I am standing on a stage, behind a waist-high podium with my first name on it. To my right is a woman named Vicki; she's behind an identical podium with her name on it. Between us is a third podium with no one behind it, just the name "Watson" on the front. We are about to play Jeopardy!

This is the National Retail Federation's mammoth annual conference at New York City's Javits Center, and in addition to doing some onstage moderating, I have insanely agreed to compete against IBM's (IBM) Watson, the cognitive computing system, whose power the company wants to demonstrate to the retailers. Watson's defeat of Jeopardy!'s two greatest champions is almost a year old, so I'm not expecting this to go well. But I'm not prepared for what hits me.

We get to a category called Before and After at the Movies. First clue, for $200: "Han Solo meets up with Lando Calrissian while time-traveling with Marty McFly." Watson buzzes in first: "What is The Empire Strikes Back to the Future?" Correct. It picks the same category for $400: "James Bond fights the Soviets while trying to romance Ali MacGraw before she dies." I'm still struggling with the concept, but Watson has already buzzed in. "What is From Russia With Love Story?" Right again. By the time I figure this out, Watson is on the category's last clue: "John Belushi & the boys set up their fraternity in the museum where crazy Vincent Price turns people into figurines." The correct response, as Watson instantly knows, is "What is Animal House of Wax?" Watson has run the category.

I do get some questions right in other categories, and Watson gets some wrong, but at the end of our one round I have been shellacked. I actually don't remember the score, which must be how the psyche protects itself. I just know for sure that I have witnessed something profound.

Realize that Watson is not connected to the Internet. It's a freestanding machine just like me, relying only on what it knows. It has to hear and understand the emcee's spoken words, just as I do.
In addition, Watson has a built-in delay when buzzing in to answer a clue. We humans must use our prehistoric muscle systems to push a button that closes a circuit and sounds the buzzer. Watson could do it at light speed with an electronic signal, so the developers interposed a delay to level the playing field. Otherwise I'd never have a prayer of winning, even if we both knew the correct response. But, of course, even with the delay, I lost.

So let's confront reality: Watson is smarter than I am. In fact, I'm surrounded by technology that's better than I am at sophisticated tasks. Google's (GOOG) autonomous car is a better driver than I am. The company has a whole fleet of the vehicles, which have driven hundreds of thousands of miles with only one accident while in autonomous mode, when one of the cars was rear-ended by a human driver at a stoplight. Computers are better than humans at screening documents for relevance in the discovery phase of litigation, an activity for which young lawyers used to bill at an impressive hourly rate. Computers are better at detecting some kinds of human emotion, despite our million years of evolution that was supposed to make us razor sharp at that skill.

One more thing. I competed against Watson two years ago. Today's Watson is 240% faster. I am not. And I'll guess that you aren't either. Most things in our world slow down as they get bigger and older: A small startup can easily grow 100% a year, but a major Fortune 500 firm may struggle to grow 5%. Technology isn't constrained that way. Today's systems, as awesomely powerful as they are, will be 100% more awesomely powerful in two years. In a decade they'll be 32 times more powerful.

The issue, a momentous one, is obvious. In this environment, what will be the high-value skills of tomorrow, the jobs that will pay well for us and our kids? That eternal concern increasingly comes down to this stark query: What will people do better than computers?
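The doubling arithmetic above checks out: "100% more powerful in two years" means capability doubles every two years, and five doublings over a decade give 2^5 = 32 times the power. A minimal sketch of that compound growth, with the two-year doubling period taken from the passage (the function name is ours, for illustration):

```python
# Compound capability growth as the passage assumes: a doubling every
# two years. After t years there have been t / 2 doublings, so the
# overall multiple is 2 ** (t / 2).
def capability_multiple(years: float, doubling_period: float = 2.0) -> float:
    """Return the capability multiple after `years` of steady doubling."""
    return 2.0 ** (years / doubling_period)

print(capability_multiple(2))   # 2.0  -> "100% more powerful" in two years
print(capability_multiple(10))  # 32.0 -> "32 times more powerful" in a decade
```

The same formula explains why the numbers feel unintuitive: linear thinking expects 5x growth over five doubling periods, but exponential growth delivers 32x.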
Several factors are combining to make the question especially urgent now. The economy's sorry job-generating performance has left economists struggling for an explanation. For decades the U.S. economy regularly returned to pre-recession employment levels about 18 months after the recession started; this time it took 77 months. How come? Why are wages stagnating for wide swaths of the U.S. population? Could advancing technology be part of the reason? For over two centuries the answer to that last question has been clear: No. Practically every advance in technology has sparked worries that it would destroy jobs, and it did destroy them -- but it also created even more new jobs, and the improved technology made those jobs more productive and higher paying. The fears of Luddites past and present have always been unfounded. Technology has lifted living standards spectacularly. That orthodoxy, one of the firmest in economics, is now being questioned. Larry Summers, star economist and former Treasury Secretary, delivered a significant lecture to an audience of top economists last year in which he said, "Until a few years ago I didn't think this was a very complicated subject; the Luddites were wrong, and the believers in technology and technological progress were right. I'm not so completely certain now." Observing the increasingly sophisticated tasks that computers are doing better than humans -- and the growing percentage of men ages 25 to 54 who aren't working -- Summers announced a portentous conclusion: "This set of developments is going to be the defining economic feature of our era." No less an authority on technology than Microsoft (MSFT) co-founder Bill Gates agrees: "Twenty years from now, labor demand for lots of skill sets will be substantially lower. I don't think people have that in their mental model." 
Even without reducing total jobs, technology has been changing the nature of work and the value of particular skills for over 200 years, since the dawn of the Industrial Revolution. The story so far comprises just three major turning points. At first, the rise of industrial technology devalued the skills of artisans, who handcrafted their products from beginning to end. A gunmaker carved the stock, cast the barrel, engraved the lock, filed the trigger, and painstakingly fitted the pieces together. But in Eli Whitney's Connecticut gun factory, separate workers did each of those jobs, or just portions of them, using water-powered machinery, and components of each type were identical. Low-skilled workers were in demand, and skilled artisans weren't. The second turning point arrived in the early 20th century, when the trend reversed. Widely available electricity enabled far more sophisticated factories, requiring better-educated, more highly skilled workers; companies also grew far larger, requiring a larger corps of educated managers. Now the unskilled were out of luck, and educated workers were in demand. Through most of the 20th century, Americans responded by becoming better educated as technology continued to advance, producing an economic miracle of fast-rising living standards. But then the third major turning point arrived, starting in the 1980s. Information technology developed to a point where it could take over many medium-skilled jobs -- bookkeeping, back-office jobs, repetitive factory work. Those jobs diminished, and their wages stagnated. Yet at both ends of the skill spectrum, high-skill jobs and low-skill service jobs did much better. Information technology couldn't take over the problem-solving, judging, coordinating tasks of high-skill workers; in fact it made those workers more productive by giving them more information at lower cost. 
And IT didn't threaten low-skill service workers because computers were terrible at skills of physical dexterity: A computer could defeat a grand master chess champion but couldn't pick up a pencil from a tabletop. Home health aides, gardeners, cooks, and others could breathe easy. Until very recently that pattern held: While IT was crushing medium-skill workers, those at the two ends of the skill spectrum were safe. Now, in a rapid series of developments, we are at a fourth turning point. IT is advancing steadily into both ends of the spectrum, threatening workers who thought they didn't have to worry. At the top end, what's happening to lawyers is a model for any occupation involving analysis, subtle interpretation, strategizing, and persuasion. The computer incursion into the legal-discovery process is well known. In cases around the country, computers are reading millions of documents and sorting them for relevance without getting tired or distracted. But that's just the beginning. Computers are also becoming highly skilled at searching the legal literature for appropriate precedents in a given case, far more widely and thoroughly than people can do. Humans still have to identify the legal issues involved, but as Northwestern University law professor John O. McGinnis points out in a recent article, "Search engines will eventually do this by themselves, and then go on to suggest the case law that is likely to prove relevant to the matter." Advancing even deeper into the territory of lawyerly skill, computers can already predict Supreme Court decisions better than lawyers can. As such analytical power expands in scope, computers will move nearer to the heart of what lawyers do by advising better than lawyers can on whether to sue or settle or go to trial before any court and in any type of case. Companies such as Lex Machina and Huron Legal already offer such analytical services, which are improving by the day. 
None of this means that lawyers will disappear, but it suggests that the world will need fewer of them. It's already happening. "The rise of machine intelligence is probably partly to blame for the current crisis of law schools" -- shrinking enrollments, falling tuitions -- "and will certainly worsen that crisis," says McGinnis. With infotech thoroughly disrupting even a field so advanced that it requires three years of graduate education and can pay extremely well, other high-skill workers -- analysts, managers -- can't help wondering about their own futures. Developments at the opposite end of the skill spectrum are at least as surprising. In the physical realm, robots have been good mainly at closely prescribed, repetitive tasks -- welding on an auto assembly line, for example. That's all changing radically. Google's autonomous cars are an obvious example, but many more are appearing. You can train a Baxter robot from Rethink Robotics to do all kinds of things -- pack or unpack boxes, take items to or from a conveyor belt, carry things around, count them, inspect them -- just by moving its arms and hands ("end effectors") in the desired way. Baxter won't hurt anyone as it hums about the shop floor; it adapts its movements to its environment by sensing everything around it, including people. Still more advanced is a robotic hand developed by a team from Harvard, Yale, and iRobot (IRBT), maker of the Roomba vacuum cleaner and many other mobile robots. So fine are its motor skills that it can pick up a credit card from a tabletop, put a drill bit in a drill, and turn a key. "A disabled person could say to a robot with hands, 'Go to the kitchen and put my dinner in the microwave,' " one of the researchers, Harvard professor Robert Howe, recently told Harvard Magazine. The overwhelming message seems to be that no one is safe. Technological unemployment, the 200-year-old terror that has never arrived, may finally be here. 
But even if that's true -- and it's far too soon to say that it is -- it will also be true that, as always, technology is making some skills more valuable and others less so. At this fourth great turning point, which skills will be the winners? The answer is becoming clear. Think about lawyers again. Average lawyers "face a bleak future," in McGinnis's view. Their best chance of making a living may be "by persuading angry and irrational clients to act in their self-interest," he says. "Machines won't be able to create the necessary emotional bonds to perform this important service." In addition, a few "superstars" will do well by using technology to cut their costs (they won't need many associates) and to turbocharge their "uniquely human judgment" in highly complex cases. It just seems common sense that the skills that computers can't acquire -- forming emotional bonds, making human judgments -- will be valuable. Yet the lesson of history is that it's dangerous to claim there are any skills that computers cannot eventually acquire. IBM is teaching Watson how to be persuasive; the initiative is called Debater. We haven't reached the world of Her, the recent Oscar-winning movie in which a man falls in love with the operating system in his infotech devices, but the film captivated viewers and critics because it envisioned a future we can imagine. The deeper reality may be that people will value most highly those skills that they simply insist be performed by a human, even if a computer, objectively evaluated, could do them just as well. For example, we'll want our disputes adjudicated by human judges and juries, even if computers could weigh far more factors in reaching a decision. We'll want to hear our diagnosis from a doctor, even if a computer supplied it, because we'll want to talk to the doctor about it -- perhaps just to talk and know we're being heard by a human being. 
We will want to follow human leaders, even if a computer could say all the right words, which is not an implausible prospect. Consider the skills in highest demand over the next five to 10 years as specified by employers in a recent survey by Towers Watson and Oxford Economics. Those skills didn't include business acumen, analysis, or P&L management. Instead, relationship building, teaming, co-creativity, cultural sensitivity, and managing diverse employees were all near the top. The emerging picture of the future casts conventional career advice in a new light. Most notably, recommendations that students study STEM subjects -- science, technology, engineering, math -- need fine-tuning. It's great advice at the moment; eight of the 10 highest-paying college majors are in engineering, says recent research, and those skills will remain critically important. But important isn't the same as high value or well-paid. As infotech continues its advance into higher skills, value will continue to move elsewhere. Engineers will stay in demand, but tomorrow's most valuable engineers will not be geniuses in cubicles; rather, they'll be those who can build relationships, brainstorm, and lead. It's tempting to find comfort in the notion that right-brain skills will gain value. Calculus is hard, but we all understand emotions, right? Yet not everyone will benefit. We may all understand emotions, but we won't all want to go there. Building the skills of human interaction, embracing our most essentially human traits, will play to some people's strengths and make others deeply uncomfortable. Those people will be in trouble. For as long as computers have existed they've been scaring people, eliminating jobs, creating jobs, devaluing some skills, and exalting others. Yet it would not be correct to say of today's situation that it was ever thus. It wasn't. Because the growth of computing power doesn't slow down as it gets large, we're racing into a genuinely different future. 
As computers begin to acquire some of the most advanced cognitive and physical human skills, we confront a new reality. In a way that has not been true before, the central issue for the economy and for all of us who work in it will be the answer to the question: What will people do better than computers?
Mercor, a $10 billion startup, recruits doctors, lawyers, and social workers to train artificial intelligence models for OpenAI and Meta. These professionals earn substantial side income teaching AI systems to perform their jobs, potentially accelerating automation of white-collar work. The platform has raised nearly $500 million and employs tens of thousands of experts.
Tasha Kozak, a social worker in Tampa, earns as much money in 20 hours of training artificial intelligence models as she does in 40 hours helping families through Hillsborough County Public Schools [1]. She's one of tens of thousands of professionals working for Mercor, a San Francisco-based gig-work platform that recruits high-skilled workers to teach AI systems how to perform their jobs. The company connects doctors, lawyers, investment bankers, journalists, and social workers with training projects for clients including OpenAI, Anthropic, and Meta [1].
Source: Bloomberg
Mercor has raised nearly $500 million in venture capital from investors including Benchmark, General Catalyst, Peter Thiel, Jack Dorsey, and Larry Summers. The company's October valuation reached $10 billion, five times higher than just months earlier [1]. This rapid growth reflects the urgent demand for training data as AI development accelerates across industries.

The work involves methodical, segmented tasks in which domain experts create prompts for AI models based on realistic scenarios. Kozak writes prompts asking AI to produce social developmental histories for students needing individualized education plans, working alongside a virtual team of about 40 contractors [1]. In San Francisco, pediatric hospitalist Melania Poonacha works an additional 10 hours weekly asking AI models to interpret lab results and evaluating the responses. Novelist Robin Palmer Blanche in Baton Rouge evaluates AI-generated creative writing, part of a large group of underemployed professionals using the platform to patch together income after the 2023 Hollywood writers strike [1].
Mercor screens candidates through AI-run interviews that themselves demonstrate machine intelligence. Kozak described her interview with an AI agent that used a gentle voice and conversational style, following up with targeted prompts like "Tell me more about the parent in that situation" at a level of detail matching that of human supervisors [1]. The screening process itself shows that AI has already begun replacing human work in recruitment.

The question of whether technology creates more jobs than it destroys has been answered affirmatively for more than two centuries, but that orthodoxy now faces a serious challenge [2]. IBM's Watson demonstrated the shift dramatically during a Jeopardy! exhibition at the National Retail Federation conference. The cognitive computing system dominated categories requiring complex reasoning, running entire categories before its human competitor could grasp the concepts. Watson operated without Internet connectivity, hearing and understanding spoken words while processing responses faster than human muscle systems could trigger buzzers [2].
Watson became 240% faster in just the two years following that demonstration, while human capabilities remained static [2]. Today's systems will be 100% more powerful in two years and 32 times more powerful within a decade. This exponential growth affects multiple sectors: Google's autonomous driving fleet has driven hundreds of thousands of miles with only one accident, caused by a human driver rear-ending the autonomous vehicle [2]. Computers now excel at legal document review during litigation discovery, a task that once generated impressive hourly billing rates for young lawyers. They have even surpassed humans at detecting certain types of human emotion, despite our evolutionary advantages in that domain [2].

Source: Fortune
The U.S. economy took 77 months to return to pre-recession employment levels in the most recent recovery, compared with a historical average of about 18 months [2]. This dramatic shift has left economists searching for explanations, with advancing technology emerging as a potential factor behind stagnating wages and weak job creation. Larry Summers, the prominent economist and former Treasury Secretary, has questioned whether the traditional economic orthodoxy about automation still holds [2].

For professionals on platforms like Mercor, the immediate benefit is clear: substantial supplemental income from expertise developed over years. Social work remains severely underpaid, which makes Kozak's equal earnings for half the hours attractive [1]. Yet the long-term implications are stark. Hour by hour, task by task, these professionals translate their judgment into training data that could eventually make their roles obsolete. The future of work increasingly turns on what humans will do better than machines, as human skills are marginalized by systems that improve exponentially while we remain fundamentally unchanged. Palmer Blanche's experience captures the tension: discovering other novelists on Slack, recognizing books she loved, all while teaching AI to replicate the creative work that defines their profession [1]. The question is no longer whether replacing the white-collar workforce is possible, but how quickly it will happen and which roles will remain distinctly human in an age of accelerating automation.

Summarized by Navi