Sources
[1]
The $10 Billion Startup Training AI to Replace the White-Collar Workforce
When Tasha Kozak, a social worker for the Hillsborough County Public Schools in Tampa, met the family, they were living in a car. The three children's grades had been slipping. The mother was exhausted. The father was out of the picture. Kozak began helping them connect to housing resources. She checked in with the children every couple of days at school. She called the mother every three or four. After several months the family found stable housing. The children began to improve in school. "I saw the mom got her glow back, and she started getting more consistent work shifts," Kozak says.

Moments like that are why Kozak does the work. She didn't set out to become a social worker, but after taking an elective in it in college, she changed her major, and more than a decade later, she still calls social work her passion. The job, she says, is about listening, connecting, helping in a way that feels instantly meaningful. In her view about 70% of effective social work is that human, relational component. The rest is administrative.

On many evenings, after finishing her full-time job with the district, Kozak logs in to a website called Mercor. The San Francisco startup Mercor.io Corp. recruits workers in high-skill fields -- doctors, lawyers, investment bankers, journalists, social workers, you name it -- to help teach artificial intelligence systems how to do their work. The company places these experts in part-time or temporary contract roles to work on training projects for major tech companies and AI labs, clients that have included OpenAI, Anthropic and Meta. It's like the Uber of advanced AI training: a gig-work platform for white-collar and skilled professionals that offers a path for them to earn something extra from their expertise -- at the risk of eventually sacrificing their careers to AI.

Like many of Mercor's contractors, Kozak first heard about the company through a LinkedIn job posting.
School had just let out for the summer in 2025, so she had spare time, and she was curious about the rise of AI. She landed an interview and found herself on camera talking to an AI agent with a gentle female voice and a surprisingly natural, conversational style. When Kozak described a specific case she'd handled, the agent followed up with targeted prompts -- "Tell me more about the parent in that situation" -- with the kind of probing detail she was used to hearing from human supervisors. Two weeks later she was working on her first assignments, and today she earns as much money training AI for 20 hours a week as she does in 40 hours helping actual families. "Social work is a very underpaid occupation," she says.

Her work for Mercor is methodical and narrow. Kozak is part of a virtual team that writes prompts for an AI model to perform specific social work tasks based on fictitious case files -- for instance, asking the AI to produce a social developmental history of an elementary school student who needs an individualized education plan. The case file includes notes from a parent interview, a student interview, a review of student records and a doctor's report. Another team reviews the AI's responses, and still another handles additional pieces of the training process. There are hundreds of people on the project, Kozak says, maybe more; her team alone includes about 40 contractors. Hour by hour, highly segmented task by task, they translate professional judgment into training data.

Across the US, other professionals are doing the same. In San Francisco a doctor of internal medicine named Melania Poonacha works 50 to 60 hours a week as a full-time pediatric hospitalist on the night shift and logs in to Mercor on her days off for an additional 10 hours or so, asking an AI model to interpret lab results and evaluating its work.
In Baton Rouge, Louisiana, the novelist and screenwriter Robin Palmer Blanche started evaluating AI-generated creative writing for voice and structure last August. Palmer Blanche, a mother of two who's struggled to make a living from writing since the 2023 Hollywood writers strike, is part of a large swath of Mercor workers who are underemployed in their professions and use the platform to patch together income. She sometimes finds herself chatting with a Mercor teammate on Slack, only to realize there are other novelists doing it too -- "Oh, my God, I read that woman's book last year and fell in love with it."

Mercor has tens of thousands of such experts working on its platform, screened using those AI-run interviews and project-specific skills tests and practice tasks. In the three years since the company started, it's raised almost $500 million in venture capital from a who's who across Silicon Valley and finance, including Benchmark, General Catalyst, Peter Thiel, Jack Dorsey and Larry Summers. The most recent round of funding, in October, valued Mercor at $10 billion, five times what it was thought to be worth just a few months earlier. According to the company, it's been profitable since its inception, it pays out more than $2 million per day to contractors, and it has about 300 full-time employees, largely engineers and project managers.

The founders -- Brendan Foody, Adarsh Hiremath and Surya Midha, three early-20s college dropouts who were buddies in high school -- have become the youngest-ever self-made billionaires, at least on paper. Mercor's rapid rise has also brought controversy, including several class-action lawsuits now moving through California courts and a recent data breach that raised questions about how the company handles sensitive information and led one of its clients -- Meta -- to indefinitely pause its work with the startup.
If Mercor and its backers are right, the company is uniquely positioned to help AI become the economic force Silicon Valley has been promising. Most of what's been seen from AI so far, they argue, has been something of a dazzling consumer demo. Going beyond that -- making the technology perform reliably in professional fields where mistakes carry real consequences -- is fundamentally about feeding the models better training data, says Sundeep Peechu, managing partner at the VC firm Felicis Ventures, which led Mercor's two most recent funding rounds. "The first generation of data was from the internet," he says, "and that allowed companies to build very general-purpose models." But for AI to become "truly economically useful and not just a toy thing," someone has to get humans to tell the model, step by step, how they actually do their work.

At this moment it's not hard to understand why people are so willing to feed the machines that might one day render them obsolete. Anxiety about job loss is everywhere: Job openings in professional and business services have fallen by more than a million from their post-pandemic peak in 2022, according to the Bureau of Labor Statistics. About 42% of recent college grads are underemployed, according to a 2025 report from the Federal Reserve Bank of New York, and a study last year by the American Psychological Association found that 54% of US workers are experiencing significant stress about job insecurity. Mercor's listings -- sometimes ads from the company itself and sometimes from third parties using Mercor links that allow them to collect referral fees -- saturate LinkedIn and other job boards, offering a respite from all that.
Mercor says its average hourly pay is about $90, though the range is wide, from generalists barely scratching minimum wage to elite coders or Ph.D.s making $250 or $300 per hour. Someone working on Mercor projects full time at $90 per hour would earn almost $190,000 a year.

There are plenty of workers who view Mercor as a grim final stop to monetize their expertise before professional extinction. Kozak doesn't see it that way, saying her reason for doing the work is far more practical. She envisions herself offloading social work's tedious 30% -- the paperwork, the reports -- so she can spend more time doing the parts that can't be automated: coaching someone through a bureaucratic maze, gaining a grieving mother's trust. She worries about the potential for AI to develop "systematic biases" -- say, overlooking cultural differences -- but sees her training as all the more valuable for that reason. Dr. Poonacha is more explicit about the stakes for her own career. "I am doing this just because I don't want to become obsolete," she says. Medicine is evolving rapidly, and "AI is going to be part of that evolution, whether we want to participate in it or not."

"If history has taught us anything about revolutions and productivity, it's that productivity is the tide that lifts all boats," Brendan Foody declares one morning in February. The sandy-haired 23-year-old co-chief executive officer and co-founder of Mercor has the standard techno-optimist's conversational habit of jumping from earthly matters -- in this case, the field of management consulting -- to historical analogies and abstract principles. Asked how a future McKinsey & Co. consultant, replaced by AI, would gain expertise without the traditional entry-level drudgery, Foody zooms out to a mainstay of Silicon Valley thinking: Two hundred years ago, when most Americans were farmers, the tractor didn't destroy work; it pushed people into different kinds of employment.
AI will do the same, he says -- eliminating some tasks, yes, but ultimately creating more value, more opportunity, more progress. History offers some support for that view, but the path from one kind of work to another has rarely been painless or quick. "We need to cure cancer and solve climate change and go to Mars," Foody says. "And I think humans will work on a lot of those things once we have more productivity in accounting or whatever these more back-office functions are."

From where Foody sits -- in a glass-walled conference room on the 33rd floor of a San Francisco skyscraper -- the bay extends beneath him in miniature, with container ships sliding past and ferries carrying commuters to innumerable office buildings. From this height the physical economy looks less like the labor of millions of individuals living full lives and more like a table model whose tiny pieces could be shuffled with mere keystrokes. "Instead of being Luddites and leaning against the technology, we should instead focus on what are the jobs of the future that we need to lean into," he says.

Foody acquired his rhetorical instincts as a competitive debater and perpetual hustler in high school in San Jose, where he met Hiremath and Midha. All three were raised in tech industry households: Hiremath and Midha are the sons of Silicon Valley engineers; Foody's father founded an interactive graphics company, and his mother worked in Meta Platforms Inc.'s real estate division. As high schoolers, Foody earned "hundreds of thousands of dollars" as a consultant for sneaker resellers, while Hiremath was geeking out on computer-vision research. (Midha declined to speak to Bloomberg Businessweek.) By their second year of college -- Hiremath at Harvard University, Foody and Midha at Georgetown -- the three figured they'd learned enough and applied for a Thiel Fellowship, which pays kids $200,000 to leave college and start companies in fields such as cryptocurrency and human longevity.
"I think people overestimate the value you get out of a four-year degree but underestimate the value you get in one or two years," Hiremath says. "The bulk of the personal development just comes from living on your own for the first time." Knowledge acquisition is even less relevant today than it was three years ago, he says, "because knowledge is just kind of free in ChatGPT. The knowledge that I want I can just obtain with a couple of prompts."

The three had just begun to zero in on AI training when they started the fellowship in early 2024. The company's original idea was a far more sweeping version of LinkedIn: The labor market might be the most inefficient marketplace in the world, Foody figured, with billions of people looking for work, millions of companies looking for talent and no central clearinghouse connecting the two. Mercor would build a kind of global labor aggregator where everyone everywhere could apply and interview and be vetted for every job, and algorithms would match workers to opportunities with near-perfect precision. In Foody's telling, even the concept of a full-time job with a single employer was something of a relic. What companies actually needed were discrete units of expertise -- tasks that could be broken apart, distributed and completed by whoever in the world happened to be best suited to them. To make any of that possible would require assembling an extraordinary amount of data about people.

They began with an AI-powered tool for screening software engineers that served as a prototype for the far bigger system they imagined. They changed focus, though, when AI companies began to need not only engineers but also other domain experts who could help train and test the models. Mercor built a tech platform to host the training itself, and demand exploded. Venture capitalists, eager to fund anything tied to the AI boom, started calling.
Mercor raised $3 million in early 2024, led by General Catalyst; it raised another round every six or eight months after that -- $30 million, $100 million, then $350 million last October. Foody, Hiremath and Midha were also working around the clock, encouraging their growing team to adhere to a schedule known as 996 -- in the office from 9 a.m. to 9 p.m., Monday through Saturday. (The regimen first became popular in China in the 2010s as the country built out its tech industry; it's now illegal there.) Felicis investor Peechu says Foody and his partners were so hard to pin down that he eventually flew with them to Las Vegas on a Sunday -- the one day they weren't working -- to drive Ferraris on a racetrack, just so he could get a little uninterrupted time with them on the plane.

The founders weren't old enough to drink at that time, and none of them had ever held a professional job -- but the hundreds of millions of dollars being plowed into their coffers came with a mandate to remake the very nature of white-collar work. In Silicon Valley, that didn't raise any alarms about naiveté or blind spots. Quite the contrary. Adam D'Angelo, a Mercor investor who became Facebook's chief technology officer at age 22 before co-founding Quora, says he asked Foody at one point about his work experience. Foody said he'd had an internship one summer. "Perfect," D'Angelo said. "Your mind isn't corrupted by the conventional way of doing things."

Over the past year, criticism of Mercor's platform has become a refrain in online forums, news reports and court filings -- beginning, as these things often do, with anonymous online posts. Reddit, in particular, contains no shortage of grievances from contractors who've cycled through the platform: Projects are "chaotic, disorganized and unpredictable." Workers are "treated like human cattle." The company is "building the plane while flying."
Contractors sign strict confidentiality agreements before beginning work, and many fear being removed from projects if they speak publicly. They gripe about Slack channels full of motivational chatter and rocket-ship emoji from project managers but few concrete answers about when work will arrive or advance notice when it stops. Like many forms of digital piecework, Mercor's projects are closely measured and monitored while they're underway, with software tracking productivity and time spent on each assignment. It turns out some jobs pay not by the hour but by the task.

A frustrated contractor Businessweek spoke with, who asked for anonymity because he'd signed one of Mercor's nondisclosure agreements, has been job hunting for almost a year for a full-time position to make use of his master's degree in physics. After he joined Mercor, his first contract paid $30 per task, but he soon learned that the time it took for him to complete tasks -- anywhere from an hour to a full day -- was often far greater than the company's estimates, making the pay too low to justify the work. It's a variation of a complaint that surfaces online about hourly projects too: Contractors sometimes quietly do part of their work off the clock to keep their productivity numbers in line with company benchmarks out of fear of being "offboarded," Mercor-speak for getting booted. A Mercor spokesperson says that "many experienced contractors prefer task-based work because increased efficiency can lead to higher effective hourly earnings."

Last fall one Mercor contractor dispute spilled into the press when Forbes reported that thousands of people working on a large project were abruptly locked out without warning. Several hours later, some said, they were invited back to continue the work -- but at pay rates roughly a quarter lower than before. Mercor disputed the accuracy of the claims and said it was working to offer more predictability.
The company also alleges that some of the people who complain about being let go are simply liars. "We've found cases where people have been doing no work or doing time fraud," Hiremath says. "We have demonstrable evidence of them committing fraud, but they still post about it on Reddit when their contract has been eliminated."

As criticism of Mercor has grown, lawsuits have followed. One of the complaints, filed late last year by a finance professional named Michael Cox, who worked on Mercor projects, lists OpenAI as a co-defendant and accuses the companies of running what it calls a "scheme to misclassify workers" while exercising the kind of control normally associated with an employer. According to the filing, Mercor required Cox to install productivity-monitoring software on his computer, creating a "level of surveillance ... so intrusive as to trivialize the notion that this was a legitimate independent-contractor relationship." In another passage, the complaint says the company's alleged neglect of employment norms was "so brazen as to almost, by definition, constitute willful misclassification." (Mercor and OpenAI haven't publicly responded to the suit, and a Mercor spokesperson says they "don't plan to at this time"; two other suits against Mercor make similar misclassification claims.)

Meanwhile, if you were a job seeker on LinkedIn last fall, Mercor suddenly seemed to be everywhere. Listings offering hundreds of dollars an hour to lawyers, doctors and programmers flooded the platform, prompting another wave of social media speculation -- this one less about working conditions than about whether the whole thing was real. Some users began posting on various sites speculating that the listings were an elaborate data-harvesting scheme in which fake applications and AI interviews were designed to collect valuable personal information. Mercor says that this isn't the case and that it uses interviews only to evaluate candidates' skills for jobs on the platform.
And, again, the company suggests its critics are the scammers. Foody says the LinkedIn deluge wasn't about Mercor harvesting data but about a few high-volume fraudsters harvesting referral fees, because Mercor pays users to bring new contractors into the fold. "There were probably 10 people specifically that were causing problems," he says. In October, Mercor banned referrers from using the name Mercor when they post job ads for the roles on LinkedIn. Still, postings from third-party referral outfits have continued. Mercor listings from one such organization, a recruiter called Crossing Hurdles based on the outskirts of New Delhi, are a constant presence on LinkedIn. A Mercor spokesperson tells Businessweek that "Crossing Hurdles has no official affiliation or partnership with Mercor. They post roles on various job sites using their Mercor referral link." (Crossing Hurdles didn't respond to multiple requests for comment.)

Mercor made another change last fall. It moved its third co-founder, Midha, into a new position, chairman of the board, handing his previous role running operations to a more seasoned executive, Sundeep Jain, a former chief product officer at Uber Technologies Inc. Invoking his previous employer, Jain says that the complaints circulating online aren't surprising and that early-stage marketplaces rarely distribute work evenly. "There will be some drivers that will be busy all day and others that'll be a little bit less busy," he says. Communication hiccups, payment disputes, mismatched expectations: "Those are classic problems of a marketplace."

But on March 31 the young company, growing at breakneck speed and reliant on a handful of high-profile clients, experienced something far worse than a hiccup. Mercor disclosed that attackers had infiltrated its systems in a sprawling supply chain hack. The breach, coming through a corrupted open-source developer tool called LiteLLM that thousands of companies use, struck at the core of Mercor's business.
As much as 4 terabytes of data, possibly including training data and users' personal information, were exposed, according to online posts from the hackers. The company announced it was "conducting a thorough investigation supported by leading third-party forensics experts," but the fallout was swift: Along with Meta pausing its projects with Mercor, contractors assigned to those projects suddenly found themselves without work. In the first week after announcing the breach, Mercor was hit with five lawsuits accusing it of failing to protect contractors' data. Mercor declined to respond to Businessweek's questions about the origin and extent of the hack and its impact on clients and users. OpenAI, Anthropic and Google didn't respond to requests for comment on whether they're now reevaluating their relationships with Mercor.

Beyond the messy day-to-day mechanics of atomizing jobs into training tasks is a bigger question: Are the machines actually getting good enough to do the work themselves? In the past year the AI industry has tried to answer that question, mostly with a growing ecosystem of "evals," or evaluation frameworks. Like a standardized test, an eval measures a large language model's math ability, reasoning or factual accuracy -- but not typically the kinds of specific, judgment-heavy tasks that white-collar workers do all day. Mercor has developed its own test, a project called APEX (AI Productivity Index), that attempts to do just that. It measures professional performance and then posts the results on the web for anyone to follow as a kind of industry leaderboard. As Foody put it when announcing the initiative last October, "AI can pass the bar exam. But can it redline a contract?"

To build APEX, Mercor has worked with some of its most accomplished contractors to design short professional scenarios and problems in five fields so far: law, medicine, management consulting, investment banking and software engineering.
Advisers overseeing the effort include Harvard law professor Cass Sunstein, cardiologist Eric Topol and former McKinsey global managing partner Dominic Barton. Former Treasury Secretary and Mercor investor Larry Summers was, until recently, also affiliated with the project. Beneath them is a larger bench of experts -- more than a hundred lawyers, bankers, consultants and clinicians -- who build and review the actual test cases. The models do the work, and their responses are graded against expert benchmarks to see how close they come to professional quality.

One case, for instance, centers on a fictional wellness company expanding overseas. A Google Workspace contains the kind of unwieldy digital paper trail a junior consultant might inherit on the first day of a project -- sales data, customer surveys, cost projections and strategy memos scattered across spreadsheets and PDFs. Mercor's experts prompt AI models to answer strategic questions, such as calculating how rising ingredient costs might affect pricing or recommending an expansion strategy, and define what the correct answer should look like. Then the work becomes even more granular: building a checklist, or rubric, of precise criteria the model's output must satisfy. Only once the prompt and rubric are complete do the models attempt the task, their answers graded against the rubric line by line.

So far the models fall short. The best can produce useful work in certain areas, but they're not exactly reliable employees. Mercor's own research notes that top systems still "struggle on complex real-world tasks, failing to meet the production bar." The results mirror what other attempts to measure AI's real-world impact have been finding. In March, Anthropic released a chart that compared what AI systems appear capable of doing with how often they're actually being used on the job. The gap was striking, and the chart went viral.
White-collar knowledge-work fields looked highly exposed to AI in theory, but in practice only a small slice of the work was being handled that way in the real world. The research drew criticism for its methodology (for one thing, the measure of AI's theoretical potential is inherently subjective and doesn't account for logistical or legal hurdles to adoption), but the basic takeaway -- that AI still has a long way to go -- was hard to dispute.

Meanwhile, Mercor says it's seeing marked improvement in APEX scores as the AI giants release upgrades. A recent ChatGPT model tops its current rankings, but "Opus is the one that has blown us all away," says Foody, referring to Anthropic's Claude Opus 4.6, which performed 18% better than its predecessor after just a few months. Still, even though APEX tries to approximate real professional scenarios, forcing agents to navigate complex environments and choose the right tools for a given task, it's more structured than the open-ended work people actually do. Gartner analyst Vuk Janosevic says APEX is a "credible bridge between lab performance and business usefulness." But he cautions that a high score on this benchmark "does not prove that the system can be governed and integrated at scale inside a live process."

And then there are the human aspects of the work that APEX may never be able to measure. When Dr. Poonacha examines a patient, for instance, a surprising amount of information comes not from lab reports or imaging but from touch. A trained hand on an abdomen can detect subtle tension or swelling that doesn't appear in a chart. "I just don't think AI is going to be able to do that," she says.

Still, Kozak's notion that only 30% of her social work will be replaced by AI might be wildly optimistic. Investors are certainly hoping so. When Jack Dorsey's financial technology conglomerate, Block Inc., laid off 40% of its workforce in March -- ostensibly because of efficiency gains from AI -- its stock soared 20%.
Mass layoff announcements at Meta and Amazon.com Inc. have been met with similar pops in the stock price. After all, the market optimizes for shareholder value, not professional enrichment. Kristalina Georgieva, managing director of the International Monetary Fund, has warned that AI will affect roughly 40% of global jobs in "the next few years."

Foody argues that the transition will ultimately create new categories of work. And what kinds of jobs are those? "We believe that a large portion of what humans do in companies is going to transform to training agents," he says.

Mercor is already gearing up to cash in on that future. Earlier this year the company elevated Hiremath to co-CEO and began pushing into a new line of business, helping corporations deploy agents. Mercor is pitching itself as a sort of agent-implementation partner, not designing agents itself but building the guardrails to steer that AI in the right way. This opportunity, Hiremath says, is enormous, and it could give the company a far larger client base than a few AI labs. "All the Fortune 500, all the Fortune 1000, could want to integrate models into their own workflows," he says. "And they're kind of clueless." In this model, Mercor is suddenly in the technology consulting game too.

While Mercor makes a play for an even bigger part of the white-collar workforce, its contractors sometimes find themselves pondering what Poonacha calls "very dystopian" implications. What if patients begin to trust AI over their physicians? What if wealthier people continue to have access to human doctors, and poorer people just get the AI ones? Mercor's new operations head, Jain, meanwhile, is consumed by imagining the limitless number of disciplines and workflows still left to automate. Chefs and private investigators are already in progress, and any number of supposedly AI-proof trades such as plumbing are no less exposed than, say, medicine. "If there is a ceiling," Jain says, "we're nowhere near it."
[2]
The economist who was terrified of AI just found a rare reason for hope | Fortune
Alex Imas didn't arrive at optimism easily. The University of Chicago economist occupies an unusual space as one of the leading researchers on AI's labor market impact and also one of its most avid adopters. Unlike many of his peers, he is taking the doomsday scenarios, perhaps best exemplified by Citrini Research's viral essay on "ghost GDP" and spiraling deflation, very seriously. If automation eliminates most jobs and the wage share collapses, the people with money -- capital owners -- will be already satiated, while displaced workers can't afford to buy anything. Demand collapses. The economy shrinks. While Imas has written that he finds actual negative economic growth unlikely, he said the scenario of high unemployment and a drag on the economy as a result of that unemployment is worth taking seriously.

"My first reaction was to be very scared," Imas told Fortune. "I needed to work things out carefully in order to be less scared -- not to convince myself not to be scared, just to look at history and look at people's preferences, bring these things together."

Wall Street takes Imas' warnings seriously, too. A Morgan Stanley research note last month recommended that investors follow Imas as a primary resource on AI's employment impact, describing him as a valuable third-party resource on the topic. Imas is no armchair theorist: his research has appeared in the American Economic Review, the Quarterly Journal of Economics, and the Proceedings of the National Academy of Sciences, and he co-authored a recent update of the behavioral economics classic The Winner's Curse with Nobel laureate Richard Thaler. He may be best known, though, for his widely read Substack, Ghosts of Electricity. He wasn't aware of his appearance on Wall Street research desks; told of Morgan Stanley's citation, he said, "that's funny ... I didn't see that." The reach of Ghosts of Electricity has surprised him more broadly.
Imas started the newsletter with a specific ambition: to write with the rigor of an academic paper but for an audience far wider than journal editors, reaching economists, AI researchers, technologists, and policymakers at once. He said it has worked beyond what he anticipated, with responses coming in from, for instance, his mother-in-law's friends. He recently sat down with a neighbor, installed Claude on her computer, and watched her start building apps from scratch within an afternoon. "The ideas need to be out there broadly for a very broad audience," he said.

And after several months of writing and rewriting, Imas has something for the doomsday crowd to digest: a vision of how the AI economy could work out not so badly. It's similar to an argument that has been increasingly appearing in the pages of Fortune.

He opens with the example of Starbucks. Starbucks is a $112 billion company selling one of the most standardized products in the modern economy. The technology to remove human labor from its stores has existed for years. And yet, after years of cutting staff and installing automated processes to protect thin margins, CEO Brian Niccol recently reversed course entirely. Handwritten notes on cups, ceramic mugs, comfortable seating -- human details -- had proven more valuable to customers than efficiency. More baristas are being hired. Automation is being rolled back. (Starbucks does have a beta presence on ChatGPT intended to drive drink discovery, but that is distinct from its in-store strategy.)

For Imas, Starbucks' shift is telling. As AI makes commodity production cheaper and more abundant, he argued in a recent Substack post, the central question becomes "What will be scarce?" Certain things just can't be commodified in the coming AI world. These are things that Starbucks' Niccol seems to know: human presence, social connection, provenance. They will become more scarce, Imas argued, and therefore more economically valuable.
The question he spent months writing and revising to answer: why, exactly, and how far does that logic extend? For its part, Starbucks referred Fortune to previous company communication on the subject of AI. The company says its approach to AI is "practical and grounded." The company said it wants to "use AI where it helps partners deliver exceptional craft, deepen customer connection and improve the rhythm of the coffeehouse. If it does that, we scale it. If not, we move on." The intellectual scaffolding is structural change theory -- the economics of what happens when technology makes one sector dramatically more productive. The famous example, also beloved of Fundstrat's Tom Lee, is that around 1900, 40% of the American workforce farmed. Today, it's under 2%. People didn't stop eating; they just stopped spending most of their time making food once it became commoditized and cheap. The economy didn't collapse -- it transformed, reallocating labor toward manufacturing and then services as incomes rose. Imas argues the same dynamic will play out with AI: "The economics of scarcity won't disappear, it'll just relocate." Drawing on a landmark 2021 Econometrica paper by Diego Comin, Danial Lashkari, and Martí Mestieri, he noted that income effects -- not just price effects -- account for over 75% of historical patterns of sectoral reallocation. In other words, when people get richer, they don't just buy more of the same things, which are now cheaper. They want different things, namely goods and services with high "income elasticity," meaning demand for them grows faster than income itself. The behavioral ingredient Imas adds is rooted in the French philosopher René Girard's concept of mimetic desire: we don't want things purely for their functional value, but because others want them -- and because others can't have them.
In experimental research with colleague Kristof Madarasz, Imas found that willingness to pay for an identical good roughly doubled when subjects learned a random subset of people would be excluded from purchasing it. In follow-up work with Graelin Mandel, he found that AI involvement in creating a product dramatically reduced that premium: people perceived AI-made goods as inherently reproducible, undermining the scarcity that drives desire. The implication is that as AI commoditizes more of the economy, spending and employment will migrate toward what Imas calls the "relational sector," which brings his Starbucks analogy back around. People will pay for things that have a distinct human element to them. In other words, middle-class consumption patterns tomorrow will look like wealthy ones today. Imas told Fortune there is already copious empirical support for this idea hiding in plain sight: today's billionaires, with no financial constraints whatsoever, spend enormous amounts of time on podcasts, at live performances, and on social platforms, consuming and producing human interaction. "You could be alone on an island consuming all the movies, all the video games, all of technology, everything you want," Imas said. "But most of the time, these billionaires, they're on podcasts. They're out there on Twitter, interacting with people, they're going to performances, they're consuming relational goods, basically, or trying to provide relational goods, like the need for socialization to be around humans." The demand for human connection, he argued, has no natural ceiling because it is fundamentally comparative, never fully satiated. Imas is careful to distinguish his argument from a romantic vision of a world full of painters and performers. "A lot of people's reaction [to the essay] was focusing on performers and art. I think those are kind of red herrings," he said. "Starbucks workers are not performers. They're not artists. They're just people.
They're human beings and people value interacting with human beings -- not from a highbrow or artistic or entertainment perspective, but just from a basic desire for socialization perspective." The relational sector, in his framework, encompasses nurses, doctors, teachers, therapists, childcare workers, personal chefs, and hospitality workers. These sectors together already employ nearly 50 million people in the United States. Many existing jobs won't disappear wholesale but will transform: as AI automates the routine tasks within a teacher's or doctor's workday, what remains -- the emotional support, the attentiveness, the relationship -- becomes the core of the job and the core of its economic value. Fortune recently made similar arguments, noting that jobs with a human factor or relational aspect are already pulling in above-average salaries, particularly in nursing and teaching: Nurse Dana from The Pitt is a salutary example. Right now, Imas explained, doctors and teachers are doing jobs that are half relational and half vulnerable to automation, and some of those tasks surely will be automated. Imas said "the thing that's not being recognized right now" is how those jobs will evolve to be more relational as AI advances. "The widget maker may be gone. The truck driver may be gone, because tasks in that job don't have a relational component. But there's a lot of jobs right now that have a relational component, which will become relational jobs." That theory gets a real-world stress test inside a large medical nonprofit, where a senior data scientist -- who asked not to be identified by name or employer -- told Fortune that he has spent the past six months watching his organization's newly formed data strategy committee deploy an enterprise ChatGPT account to the entire staff. After weeks of all-hands presentations, the only use cases that management could articulate were writing emails and summarizing emails.
In fact, "they wanted employees to be AI champions to come up with other use cases, but few have been interested." The data scientist said that his actual work -- running statistical analyses on cancer patient data for one of the country's largest medical databases -- involves protected health information that the tools aren't even authorized to access. That's not to say AI isn't capable of essentially doing his job. In fact, he said that after the first release of ChatGPT years ago, he built a cancer survival-risk calculator with that tool in under a month. Because of the relational aspect, though, it's been sitting in legal review indefinitely. He agreed with Fortune's metaphor of AI as a "sports car"; the problem is that most jobs are built like New York City, full of traffic lights and gridlock. Have you ever driven in Manhattan? "What the hell are you doing with a sports car" in that case? In the case of the calculator, he said, it took him about a month to build the prototype and four years to bring it to the public, for reasons including legal review, grant submissions and interactions with the NIH. So essentially: paperwork. He's no Luddite. He credits AI with helping him translate statistical code across programming languages and build prototypes faster than he could alone. But his most irreplaceable function, he said, isn't running regressions. It's managing the human layer: communicating with a consortium of international surgical oncologists, from Yale to MD Anderson to the University of Toronto, specializing in cancers ranging from thoracic to orbital sarcomas, and translating between their clinical instincts and the demands of statistical rigor. "Their lives are such that if I get 15 minutes a day with them, that's extremely lucky. So I need to make everything as precise and concise as possible." No AI, he added, could replicate the register that relationship requires.
Even the approved use case, writing email, misses the key relational aspect. "Actually creating the prototype, and I think you've heard this before: using AI to create a prototype is fantastic. But once you try to get from prototype to scale, it kind of hits all of these roadblocks of red tape and bureaucracy and committees." That is exactly the kind of work Imas has in mind -- not performance, not artistry, but the irreducibly human judgment that holds complex institutions together. Imas hasn't abandoned his fears. His optimistic scenario depends entirely on the pace of transition. If the shift from commodity economy to relational economy happens gradually, history suggests the labor market can absorb and adapt. But if AI automation accelerates faster than workers and institutions can retrain and reallocate, the demand-collapse scenario he has warned about remains entirely on the table. "The speed of change really matters," he said, "whether we get to this hopeful version versus the more worrisome one." Imas warned that people who still dismiss AI as overblown hype are fooling themselves, likely because they're judging it by a chatbot model from years ago, not a frontier model. "These two things should not be categorized in the same bucket of technology," he argued, adding that AI is still very "jagged," an increasingly popular term for its probabilistic nature and tendency to hallucinate. "But it's going to be jagged in the sense of, at some point, the valleys are going to be very, very high ... even the low points are going to be very impressive." Morgan Stanley warned in its March research note that AI disruption was "becoming more acute as LLM capabilities increase at a more rapid rate than expected," flagging the potential for large-scale workforce reductions across industries.
The gap between that projection and a cancer statistician quietly waiting for the enterprise ChatGPT enthusiasm to blow over captures exactly the uncertainty Imas, despite his hard-won optimism, still can't fully resolve. Imas said he was still "worried about" people who are sticking their heads in the sand about AI: "My primary role right now is to sit people down one on one and get them trained on top-flight technology." He said he sees his relational aspect theory as both plausible and positive, "but it took me a long time to get to it."
[3]
In the future, will there be any work left for people to do? | Fortune
I am standing on a stage, behind a waist-high podium with my first name on it. To my right is a woman named Vicki; she's behind an identical podium with her name on it. Between us is a third podium with no one behind it, just the name "Watson" on the front. We are about to play Jeopardy! This is the National Retail Federation's mammoth annual conference at New York City's Javits Center, and in addition to doing some onstage moderating, I have insanely agreed to compete against IBM's (IBM) Watson, the cognitive computing system, whose power the company wants to demonstrate to the retailers. Watson's defeat of Jeopardy!'s two greatest champions is almost a year old, so I'm not expecting this to go well. But I'm not prepared for what hits me. We get to a category called Before and After at the Movies. First clue, for $200: "Han Solo meets up with Lando Calrissian while time-traveling with Marty McFly." It picks the same category for $400: "James Bond fights the Soviets while trying to romance Ali MacGraw before she dies." I'm still struggling with the concept, but Watson has already buzzed in. "What is From Russia With Love Story?" Right again. By the time I figure this out, Watson is on the category's last clue: "John Belushi & the boys set up their fraternity in the museum where crazy Vincent Price turns people into figurines." The correct response, as Watson instantly knows, is "What is Animal House of Wax?" Watson has run the category. I do get some questions right in other categories, and Watson gets some wrong, but at the end of our one round I have been shellacked. I actually don't remember the score, which must be how the psyche protects itself. I just know for sure that I have witnessed something profound. Realize that Watson is not connected to the Internet. It's a freestanding machine just like me, relying only on what it knows. It has to hear and understand the emcee's spoken words, just as I do. 
In addition, Watson has a built-in delay when buzzing in to answer a clue. We humans must use our prehistoric muscle systems to push a button that closes a circuit and sounds the buzzer. Watson could do it at light speed with an electronic signal, so the developers interposed a delay to level the playing field. Otherwise I'd never have a prayer of winning, even if we both knew the correct response. But, of course, even with the delay, I lost. So let's confront reality: Watson is smarter than I am. In fact, I'm surrounded by technology that's better than I am at sophisticated tasks. Google's (GOOG) autonomous car is a better driver than I am. The company has a whole fleet of the vehicles, which have driven hundreds of thousands of miles with only one accident while in autonomous mode, when one of the cars was rear-ended by a human driver at a stoplight. Computers are better than humans at screening documents for relevance in the discovery phase of litigation, an activity for which young lawyers used to bill at an impressive hourly rate. Computers are better at detecting some kinds of human emotion, despite our million years of evolution that was supposed to make us razor sharp at that skill. One more thing. I competed against Watson two years ago. Today's Watson is 240% faster. I am not. And I'll guess that you aren't either. Most things in our world slow down as they get bigger and older: A small startup can easily grow 100% a year, but a major Fortune 500 firm may struggle to grow 5%. Technology isn't constrained that way. Today's systems, as awesomely powerful as they are, will be 100% more awesomely powerful in two years. In a decade they'll be 32 times more powerful. The issue, a momentous one, is obvious. In this environment, what will be the high-value skills of tomorrow, the jobs that will pay well for us and our kids? That eternal concern increasingly comes down to this stark query: What will people do better than computers? 
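The arithmetic behind that projection is plain compounding: if capability doubles every two years, a decade holds five doublings, for a factor of 2^5 = 32. A minimal sketch of the calculation (the two-year doubling period is the article's premise, not a measured constant):

```python
# Compound capability growth, assuming the article's premise that
# computing performance doubles every two years.
DOUBLING_PERIOD_YEARS = 2

def growth_factor(years: int) -> int:
    """How many times more capable a system is after `years` of doubling."""
    return 2 ** (years // DOUBLING_PERIOD_YEARS)

print(growth_factor(2))   # 2  ("100% more awesomely powerful" in two years)
print(growth_factor(10))  # 32 (32 times more powerful in a decade)
```

The same exponential logic is why, as the article notes, technology growth feels unlike corporate growth: a Fortune 500 firm compounds at single digits, while a fixed doubling period multiplies capability 32-fold in ten years.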
Several factors are combining to make the question especially urgent now. The economy's sorry job-generating performance has left economists struggling for an explanation. For decades the U.S. economy regularly returned to pre-recession employment levels about 18 months after the recession started; this time it took 77 months. How come? Why are wages stagnating for wide swaths of the U.S. population? Could advancing technology be part of the reason? For over two centuries the answer to that last question has been clear: No. Practically every advance in technology has sparked worries that it would destroy jobs, and it did destroy them -- but it also created even more new jobs, and the improved technology made those jobs more productive and higher paying. The fears of Luddites past and present have always been unfounded. Technology has lifted living standards spectacularly. That orthodoxy, one of the firmest in economics, is now being questioned. Larry Summers, star economist and former Treasury Secretary, delivered a significant lecture to an audience of top economists last year in which he said, "Until a few years ago I didn't think this was a very complicated subject; the Luddites were wrong, and the believers in technology and technological progress were right. I'm not so completely certain now." Observing the increasingly sophisticated tasks that computers are doing better than humans -- and the growing percentage of men ages 25 to 54 who aren't working -- Summers announced a portentous conclusion: "This set of developments is going to be the defining economic feature of our era." No less an authority on technology than Microsoft (MSFT) co-founder Bill Gates agrees: "Twenty years from now, labor demand for lots of skill sets will be substantially lower. I don't think people have that in their mental model." 
Even without reducing total jobs, technology has been changing the nature of work and the value of particular skills for over 200 years, since the dawn of the Industrial Revolution. The story so far comprises just three major turning points. At first, the rise of industrial technology devalued the skills of artisans, who handcrafted their products from beginning to end. A gunmaker carved the stock, cast the barrel, engraved the lock, filed the trigger, and painstakingly fitted the pieces together. But in Eli Whitney's Connecticut gun factory, separate workers did each of those jobs, or just portions of them, using water-powered machinery, and components of each type were identical. Low-skilled workers were in demand, and skilled artisans weren't. The second turning point arrived in the early 20th century, when the trend reversed. Widely available electricity enabled far more sophisticated factories, requiring better-educated, more highly skilled workers; companies also grew far larger, requiring a larger corps of educated managers. Now the unskilled were out of luck, and educated workers were in demand. Through most of the 20th century, Americans responded by becoming better educated as technology continued to advance, producing an economic miracle of fast-rising living standards. But then the third major turning point arrived, starting in the 1980s. Information technology developed to a point where it could take over many medium-skilled jobs -- bookkeeping, back-office jobs, repetitive factory work. Those jobs diminished, and their wages stagnated. Yet at both ends of the skill spectrum, high-skill jobs and low-skill service jobs did much better. Information technology couldn't take over the problem-solving, judging, coordinating tasks of high-skill workers; in fact it made those workers more productive by giving them more information at lower cost. 
And IT didn't threaten low-skill service workers because computers were terrible at skills of physical dexterity: A computer could defeat a grand master chess champion but couldn't pick up a pencil from a tabletop. Home health aides, gardeners, cooks, and others could breathe easy. Until very recently that pattern held: While IT was crushing medium-skill workers, those at the two ends of the skill spectrum were safe. Now, in a rapid series of developments, we are at a fourth turning point. IT is advancing steadily into both ends of the spectrum, threatening workers who thought they didn't have to worry. At the top end, what's happening to lawyers is a model for any occupation involving analysis, subtle interpretation, strategizing, and persuasion. The computer incursion into the legal-discovery process is well known. In cases around the country, computers are reading millions of documents and sorting them for relevance without getting tired or distracted. But that's just the beginning. Computers are also becoming highly skilled at searching the legal literature for appropriate precedents in a given case, far more widely and thoroughly than people can do. Humans still have to identify the legal issues involved, but as Northwestern University law professor John O. McGinnis points out in a recent article, "Search engines will eventually do this by themselves, and then go on to suggest the case law that is likely to prove relevant to the matter." Advancing even deeper into the territory of lawyerly skill, computers can already predict Supreme Court decisions better than lawyers can. As such analytical power expands in scope, computers will move nearer to the heart of what lawyers do by advising better than lawyers can on whether to sue or settle or go to trial before any court and in any type of case. Companies such as Lex Machina and Huron Legal already offer such analytical services, which are improving by the day. 
None of this means that lawyers will disappear, but it suggests that the world will need fewer of them. It's already happening. "The rise of machine intelligence is probably partly to blame for the current crisis of law schools" -- shrinking enrollments, falling tuitions -- "and will certainly worsen that crisis," says McGinnis. With infotech thoroughly disrupting even a field so advanced that it requires three years of graduate education and can pay extremely well, other high-skill workers -- analysts, managers -- can't help wondering about their own futures. Developments at the opposite end of the skill spectrum are at least as surprising. In the physical realm, robots have been good mainly at closely prescribed, repetitive tasks -- welding on an auto assembly line, for example. That's all changing radically. Google's autonomous cars are an obvious example, but many more are appearing. You can train a Baxter robot from Rethink Robotics to do all kinds of things -- pack or unpack boxes, take items to or from a conveyor belt, carry things around, count them, inspect them -- just by moving its arms and hands ("end effectors") in the desired way. Baxter won't hurt anyone as it hums about the shop floor; it adapts its movements to its environment by sensing everything around it, including people. Still more advanced is a robotic hand developed by a team from Harvard, Yale, and iRobot (IRBT), maker of the Roomba vacuum cleaner and many other mobile robots. So fine are its motor skills that it can pick up a credit card from a tabletop, put a drill bit in a drill, and turn a key. "A disabled person could say to a robot with hands, 'Go to the kitchen and put my dinner in the microwave,' " one of the researchers, Harvard professor Robert Howe, recently told Harvard Magazine. The overwhelming message seems to be that no one is safe. Technological unemployment, the 200-year-old terror that has never arrived, may finally be here. 
But even if that's true -- and it's far too soon to say that it is -- it will also be true that, as always, technology is making some skills more valuable and others less so. At this fourth great turning point, which skills will be the winners? The answer is becoming clear. Think about lawyers again. Average lawyers "face a bleak future," in McGinnis's view. Their best chance of making a living may be "by persuading angry and irrational clients to act in their self-interest," he says. "Machines won't be able to create the necessary emotional bonds to perform this important service." In addition, a few "superstars" will do well by using technology to cut their costs (they won't need many associates) and to turbocharge their "uniquely human judgment" in highly complex cases. It just seems common sense that the skills that computers can't acquire -- forming emotional bonds, making human judgments -- will be valuable. Yet the lesson of history is that it's dangerous to claim there are any skills that computers cannot eventually acquire. IBM is teaching Watson how to be persuasive; the initiative is called Debater. We haven't reached the world of Her, the recent Oscar-winning movie in which a man falls in love with the operating system in his infotech devices, but the film captivated viewers and critics because it envisioned a future we can imagine. The deeper reality may be that people will value most highly those skills that they simply insist be performed by a human, even if a computer, objectively evaluated, could do them just as well. For example, we'll want our disputes adjudicated by human judges and juries, even if computers could weigh far more factors in reaching a decision. We'll want to hear our diagnosis from a doctor, even if a computer supplied it, because we'll want to talk to the doctor about it -- perhaps just to talk and know we're being heard by a human being. 
We will want to follow human leaders, even if a computer could say all the right words, which is not an implausible prospect. Consider the skills in highest demand over the next five to 10 years as specified by employers in a recent survey by Towers Watson and Oxford Economics. Those skills didn't include business acumen, analysis, or P&L management. Instead, relationship building, teaming, co-creativity, cultural sensitivity, and managing diverse employees were all near the top. The emerging picture of the future casts conventional career advice in a new light. Most notably, recommendations that students study STEM subjects -- science, technology, engineering, math -- need fine-tuning. It's great advice at the moment; eight of the 10 highest-paying college majors are in engineering, says recent research, and those skills will remain critically important. But important isn't the same as high value or well-paid. As infotech continues its advance into higher skills, value will continue to move elsewhere. Engineers will stay in demand, but tomorrow's most valuable engineers will not be geniuses in cubicles; rather, they'll be those who can build relationships, brainstorm, and lead. It's tempting to find comfort in the notion that right-brain skills will gain value. Calculus is hard, but we all understand emotions, right? Yet not everyone will benefit. We may all understand emotions, but we won't all want to go there. Building the skills of human interaction, embracing our most essentially human traits, will play to some people's strengths and make others deeply uncomfortable. Those people will be in trouble. For as long as computers have existed they've been scaring people, eliminating jobs, creating jobs, devaluing some skills, and exalting others. Yet it would not be correct to say of today's situation that it was ever thus. It wasn't. Because the growth of computing power doesn't slow down as it gets large, we're racing into a genuinely different future. 
As computers begin to acquire some of the most advanced cognitive and physical human skills, we confront a new reality. In a way that has not been true before, the central issue for the economy and for all of us who work in it will be the answer to the question: What will people do better than computers?
[4]
AI Job Loss Is Coming. Does Anyone Have a Plan?
The tech companies have ideas. In early April, OpenAI -- whose most cheery prediction says 18 percent of jobs will soon be automated -- rolled out a plan for a "New Deal" for workers: a 32-hour workweek, a public wealth fund, a tax on capital gains. On the moderate end, Anthropic's Dario Amodei admitted that AI job disruption was "a macroeconomic problem [so] large" it may require a whole new tax code -- with a duty levied "against AI companies in particular." But while the country's two leading AI companies talk about a dramatically different landscape for the American worker, Congress has been largely silent. Maybe it was holding off for hard data. Until recently, the stories of vast displacement by AI were mostly anecdotal and met with skepticism. When Amodei told Axios last year that AI could "wipe out nearly half of all entry-level white-collar jobs," one couldn't help noticing the statement helped his company's valuation. And when former Twitter CEO Jack Dorsey laid off more than 4,000 staff at his new firm, Block, citing AI, critics were quick to point out artificial intelligence was just as likely a convenient excuse to fire people without having to fess up to bad hiring or poor profits. The mood, however, is starting to shift. In March, Goldman Sachs issued a report estimating that about 7 percent of workers will be displaced by AI. The Federal Reserve Bank of New York found that 2025 ended with the highest unemployment rate for recent college grads in years. Yes, another recent report challenged the "AI-job-apocalypse narrative" -- but this one from MIT did so mainly on speed: "2027 is too aggressive an estimate for AI to broadly eclipse the performance of human workers," a statement on the data said. "AI will achieve 80 percent success rates on most tasks by 2029." (So much better.) Even if a politician doubts we are headed to AI hell, one would think self-survival would push forward some bold proposals.
Seventy-one percent of workers in one recent poll say they are afraid of job displacement from this technology. Pro-AI PACs like Leading the Future are already dumping millions into the midterm elections. Clearly, the companies are worried something could be done by those in government. (It's doubtful the millions are actually being used to push OpenAI's "New Deal.") We asked five members of the political class who have been vocal regarding AI about their colleagues' relative silence. What are politicians saying behind closed doors? And why isn't there a big plan to deal with AI from D.C.? -- Jacob Rosenberg I thought that policymakers at all levels, especially the federal level, would be eager to jump in on this issue because it's likely to be the defining one of the next decade. This is not personal computing; it's not like electricity. The entire purpose of this technology is to replace human intelligence and human labor. I've made the analogy to the "China shock," which totally changed manufacturing in the United States. The China shock led to significant job losses and political realignment. But the potential job losses from AI are five times that and therefore have the potential to be even more transformative to our politics. The 2028 election is likely to center on the impact of AI on the country -- and two competing visions for how to deal with it -- in the way that COVID really shaped the 2020 election. Part of my obsession after spending three years in the Biden administration is that you can't just look at a chart of the economy and say, "Well, real inflation-adjusted median household income has improved, which means that people's lives are getting better and they're happy." The economy is much more complicated than that, and there are these feelings about uncertainty, autonomy, fairness: "Why is it that this is being taken away from me and I see a bunch of other people around me who are seemingly at random getting really rich, but I can't get ahead?"
When you talk to Democrats, a lot of the unspoken response is "Well, the other guys are in charge for the next three years, and if I propose some kind of sweeping expansion of the social safety net and worker empowerment and expanding unionization, it's going to go nowhere." Of course, nobody's going to come out and say, "The reason I'm not talking about this very much is that I don't want $10 million dumped on my head in my election" -- but I do think that that's in the back of people's minds, too. I also think that there's a legitimate tension here, which is -- look, other countries are moving forward with this technology, whether we do so or not. China's not going to stop, and some of these folks in the Middle East are not going to stop, and countries in Europe are not going to stop. And so isn't it better for the U.S. to lead that race and to be able to set the standards globally rather than China? What I would say to them is unless you convince people that the adoption of this technology is going to somehow make their life better, then there's going to be a political groundswell to stop it. -- As told to Jacob Rosenberg I'm not yet in the school that says, "Yep, 40 percent of people are going to lose their jobs." We don't know. It's going to work in odd ways. But what we do know is that it's unlike how shipping jobs overseas and the shift in trade policy wiped out manufacturing. That was confined, it was geographically largely confined, and it was industry confined because the central insight was people will build things in Asia so much cheaper than they will build them here in the U.S. that we can afford to have our work done in Asia and still pay the transportation cost to bring it all the way across the ocean and make a bigger profit than making it here at home. I've just described the whole multitrillion-dollar effort that destroyed much of middle-class America and unions and the American Dream. But it was comprehensible. 
This one will hit in all kinds of places that are hard to predict. So I actually talk with some of the big thinkers on this, and I say, "What does that mean for the worker?" And they say things to me like, "It's absolutely crucial that a worker be flexible and resilient." Those are the words you hear over and over and over. So what does it take to be flexible and resilient if your job may disappear in the blink of an eye? If you were to have the magic wand and could say, "AI is coming, this disruption is coming," what would you do? Wouldn't you say, "You know what? We better unhook health care from jobs and make sure that everybody in the country has health care"? In other words, one way you're resilient is, oh, I don't know, you might call it Medicare for All, universal health care. You would not leave health care tied to jobs. What's the next thing you do? You would change our unemployment insurance. You'd beef it up; you'd take it out of its 1935 mind-set. All the things that we found were broken during COVID that we didn't fix, you'd come back and you'd fix that. What's the third thing you'd do? You'd make post-high-school education free or nearly free. So every therapist who gets knocked out of a job has an opportunity to retrain in something else -- to learn a new skill without having to go tens of thousands of dollars into debt. You'd have universal child care. So if Mama or Daddy can get a job, they can get right back into the job market. If they don't have a job, they can still keep their child-care spot so they can have the care they need to get an education if they have to go back to school. So part of my point is we in Congress should be thinking about the regulation of AI, but we should also be thinking about the resilience of working-class America. How do you strengthen the safety net for all our people? So why doesn't leadership of either party seize on this? And the answer is because billionaires don't want to pay their fair share. 
Why do we not have universal child care? Because it costs money and Jeff Bezos would have to pay more in taxes. Why do we not have health care that works for more Americans? Because it was more important to the Republicans to do a $2 trillion tax break for the ultrawealthy and corporations. They literally used cutting people off their health care to pay in part for their huge tax cuts that go to the top. We're watching people, millions of them, colliding with the powerful who don't want to hear this. -- As told to Rebecca Traister I'm going to give you two statements that are both true: There will be many jobs created by AI that we cannot possibly predict; millions of people are going to lose their jobs to AI. The proportion is going to be way, way off. I can say very, very confidently it's going to eliminate millions of call-center jobs and retail jobs, coding jobs, and eventually driving jobs. I'm sure that it will create thousands of new jobs that don't presently exist. It's just the ratio is going to be ten to one. Let's say I'm sitting with an average tech CEO -- non-billionaire variety. He would say, "Hey, this is real. I'm going to fire 50 percent of my workers over the next five years. I don't know what my kids are going to do for college. And that's my life." That tech CEO is not going to somehow start going on the news advocating for an AI tax or universal basic income or anything like that. But they see it all happening. They feel bad for some of the workers they're going to fire, but there's not really a room where people come together and say, "Okay, guys, we're going to all come together and do this or that." I would put this AI job apocalypse -- or what I've christened "the fuckening" -- in a category of a number of problems. It's a proud American tradition. It would be a bigger surprise if people got together in a room and came out and said, "Hey, here's what we're going to do at the end of the day." 
Because the American system doesn't actually lead to that happening. So there will be a whole parade of 2028 candidates saying, "I'm deeply concerned about the effects of AI, and we should examine the impacts and take it very seriously." What does that mean in terms of actual legislation or policy? Unclear. Several politicians have reached out to me. This is actually the way they frame the question, which is funny: "Short of UBI, what do we do?" -- As told to Jacob Rosenberg The potential domino effect is this incredible stratification of our society where you have a massive unemployment rate for everybody who doesn't touch this technology. I just think that that is not how you sustain a democracy. It's not a recipe for social stability. It is a recipe for disaster. And we've got to make sure that does not happen. I don't think people really yet appreciate the significance of the potential threat here. And that's one of the reasons that Senator Mark Warner and I have introduced legislation that would require the government to collect information on AI job impact. I say this to everybody who tells me, "Oh, Josh, you're an alarmist." Well, fine, let's get the data. Then let's require the government to report regularly, multiple times a year, about the number of jobs created and the number of jobs lost due to AI. If we can't agree on that, I don't know what we can agree on. I will tell you that I hate the universal-basic-income idea. I hate it because it takes away the independence and dignity of work. People want to work. And if people are going to get laid off because of AI, I don't know how they're gonna pay their health-care bills. I think we should say, "No taxes on health care for all Americans." Whatever you're paying on your premium ought to be tax free; whatever you're paying on your prescription drugs ought to be tax free. Go down the list. 
We ought to cap the price of prescription drugs -- we shouldn't allow these drug companies to charge us 300 percent more for the same drug as they're charging somebody in France. -- As told to Simon van Zuylen-Wood I think it was ten years ago when I went up to the microphone at the Democratic retreat. You know, when we go off in a hotel somewhere to sort of stare at our navels and decide what the future of the party is? So I went up to the microphone and said, "AI is coming at us, and we're not ready for it. And the biggest effect is it's going to drive the market value of human labor toward zero." It was really Google -- you know its "Transformer" paper? It was after that that I started getting communications from some of my friends in comp-sci that said, "You would not believe what Google has come up with, these large language models." And then ChatGPT commercializes them fast and without many controls, frankly. Anyone who spends a big part of their day staring at a screen has their job at risk. AI can sit there and watch what you've done for a while and then step in and make a pretty good imitation. Ten years ago, people were worried about self-driving trucks taking jobs, stuff like that. I've been a little surprised at how long that's taken -- although it looks like it's finally happening too. We're going to have to fundamentally rethink the value proposition for being a human being in the world. The value of a human has to be something different from the market value of their labor. Historically, I have not supported single-payer health care, all right, and I'm going to switch my position in support of that on the basis of AI. And if you want money to move in the economy, there is no logical alternative to reaching deep into Elon's pocket, redistributing the money at the base of the pyramid. Well, you can do a better job of public infrastructure. That is another way you can redistribute a lot of wealth.
If everyone has really nice walking trails and parks right outside where they live, that is a source of great personal enjoyment -- and even though you don't have much of a job, or the job is no longer what defines your life, you've got a beautiful nature preserve. It's -- you know, it's a future that you can imagine for your grandchildren without feeling depressed. The challenge will be to figure out how to subsidize human interaction. I think maybe one way to reward organizations and people who do something positive for human interaction is that they should all get subsidized some number of pennies per minute when one human is looking into another human's eyes. So if you set up an archery club where you all go out drinking afterward and laughing and looking into each other's eyes, that should be your economic reward. I don't know if you remember the first time you ever looked into a potential girlfriend's eyes and you sort of felt your heart flutter. A very large fraction of our brains is dedicated to recognizing the faces and analyzing the emotions of our tribe members. So I think that if you want to figure out how to make humans happy, look at the tribal societies we evolved in and try to reproduce that in a way that doesn't involve having raids on your nearby tribes. -- As told to Simon van Zuylen-Wood
Mercor, a $10 billion startup, recruits doctors, lawyers, and social workers to teach AI how to do their jobs, sometimes paying them as much as their full-time jobs do. Meanwhile, economists debate whether AI job displacement will trigger economic collapse or simply shift labor to new sectors. Congress remains largely silent despite mounting evidence of disruption.

Tasha Kozak, a social worker in Tampa, earns as much training artificial intelligence for 20 hours per week as she does helping families in her 40-hour full-time role with Hillsborough County Public Schools [1]. She's one of tens of thousands of white-collar professionals working for Mercor, a San Francisco startup valued at $10 billion that recruits experts to teach AI systems how to perform their jobs. The company, which raised nearly $500 million in venture capital from investors including Peter Thiel, Jack Dorsey, and Larry Summers, operates like an Uber for AI training, connecting skilled workers with gig opportunities at major tech companies including OpenAI, Anthropic, and Meta [1].

Kozak discovered Mercor through a LinkedIn job posting and was interviewed by an AI agent with a conversational style that probed her experience with targeted questions [1]. Her work involves writing prompts for AI models to perform specific social work tasks based on fictitious case files, then evaluating the AI's responses. In San Francisco, pediatric hospitalist Melania Poonacha logs 10 additional hours weekly asking AI to interpret lab results. In Baton Rouge, novelist Robin Palmer Blanche evaluates AI-generated creative writing after struggling to make a living since the 2023 Hollywood writers strike [1]. These professionals are methodically translating their expertise into training data that could eventually replace the white-collar workforce they represent.

Alex Imas, a University of Chicago economist tracking AI automation and its labor-market impact, initially found the prospect terrifying [2]. His research, which appears in top journals and his widely read Substack "Ghosts of Electricity," explores doomsday scenarios in which AI job displacement leads to mass unemployment, collapsing demand, and economic contraction. Morgan Stanley recently recommended investors follow Imas as a primary resource on AI and the economy [2]. The Federal Reserve Bank of New York found that 2025 ended with the highest unemployment rate for recent college graduates in years, while Goldman Sachs estimates about 7 percent of workers will be displaced by AI [4].

OpenAI predicts 18 percent of jobs will soon be automated, while Anthropic's Dario Amodei told Axios that AI could "wipe out nearly half of all entry-level white-collar jobs" [4]. An MIT report suggested AI will achieve 80 percent success rates on most tasks by 2029 [4]. These projections have shaken the long-standing economic orthodoxy that technology always creates more jobs than it destroys. Larry Summers and other prominent economists are now questioning whether this time might be different [3].

After months of analysis, Imas found a reason for cautious optimism in an unexpected place: Starbucks [2]. The $112 billion company recently reversed years of automation under CEO Brian Niccol, bringing back handwritten notes on cups, ceramic mugs, and comfortable seating. It is hiring more baristas because customers value human interaction over efficiency. Imas argues this illustrates structural-change theory: as AI makes commodity production cheaper, human presence, social connection, and provenance become scarcer and therefore more economically valuable [2].

The analogy to agriculture offers perspective: around 1900, 40 percent of American workers farmed; today it's under 2 percent. People didn't stop eating; they reallocated labor toward manufacturing and services as productivity rose [2]. Whether the same transformation can absorb workers displaced by AI remains uncertain. IBM's Watson demonstrated in 2014 that it could outperform humans at sophisticated cognitive tasks, and today's version is 240 percent faster [3]. Google's autonomous vehicles have driven hundreds of thousands of miles with minimal accidents. Computers now excel at screening legal documents and detecting human emotion [3].

Despite the mounting evidence, policymakers have offered few concrete proposals to address the impact on the workforce. OpenAI floated a "New Deal" for workers including a 32-hour workweek and a public wealth fund, while Anthropic suggested the problem might require an entirely new tax code with duties levied against AI companies [4]. Yet Congress has been largely silent even as 71 percent of workers in one poll say they fear displacement from this technology [4].

Pro-AI PACs like Leading the Future are already pouring millions into midterm elections [4]. Political observers suggest the 2028 election will likely center on AI's societal shift, much as COVID shaped 2020. Some Democrats privately acknowledge that proposing sweeping reforms while Republicans control government seems futile, while others worry about massive campaign spending against them. There's also tension around global competition: China and other countries are advancing AI regardless of U.S. policy [4]. For now, professionals like Kozak continue their dual existence: helping people by day, training their digital replacements by night.

Summarized by Navi