Looking for a new job has always been, generally speaking, a massive pain in the ass. These days, the pain is often inflicted bureaucratically, through firms' opaque and mysterious hiring systems and the online platforms job seekers use to find them in the first place. The basic complaint spans industries: Sparse listings on exploitative sites are overwhelmed with applicants who, after applying, never hear back, not even with a rejection (were the listings even real?). A recent ad for Sorce, a job-application app that just joined the start-up incubator Y Combinator, captures that part pretty well.
Given that we're talking about a start-up in 2025, you can probably guess the proposed solution involves AI. After funneling users through a Tinder-style swipe interface, Sorce says it will simply automate the rest of the process. Just sit back, swipe, and wait for the job offers to roll in. Let the app do all the work and absorb the pain.
Sorce's pitch is less a new concept than an attempt to package recent trends among actual job seekers, many of whom already use tools like ChatGPT to churn out tailored résumés and cover letters and, in some cases, to help with interviews. Sorce is also one of many job-application automation start-ups, all of which make versions of the same promise. One app will "find and apply directly to hundreds of matching jobs for you, so you can focus on interviews not applications," while another will "autofill job applications, tailor résumés, and automatically track applications you submit -- all for free." Some companies are less delicate about what they do, offering to "mass apply" on behalf of their customers "every single day"; others evoke a slick free-roaming agent working on users' behalf, one that "continuously finds and applies to relevant job openings" until the person is hired.
Companies are already complaining about a "tsunami of sameness in applications," with vastly more applications per role and no better sense of whom they should hire. Apps like these, should they find users, are almost certain to make the problem worse, even if companies' hiring practices and platforms of choice are primarily to blame. There are similar AI arms races happening all around us in offices, schools, even within marriages, but job hunting, 2025 style, is particularly alienating and mercenary. Unlike in an office, where spamming your colleagues with workslop may break professional norms and compacts or simply make you seem bad at your job, or in a school, where excessive AI use is understood as cheating, stalled-out job seekers feel well and truly on their own. It's easy to see how the pitch for an AI-powered job-hunting agent might land: What's the harm in flooding the market with a few hundred more applications? What if it does work?
This attitude toward externalities -- not my problem, and in any case worth it in exchange for a small advantage -- follows the approximate logic of a spammer and often comes wrapped in the language of AI hustle culture. It's also understandable from the perspective of a job seeker who feels constantly thwarted by the automated systems employers use, which seem to treat seekers with similar indifference or contempt, or by platforms like LinkedIn and Indeed that, while nominally intended to connect two parties with shared interests (one needing specific services, the other offering them), can feel more like social-media-style black holes for engagement. It's an escalation that will likely be met with more escalation: countermeasures by job-listings platforms and hirers to prevent access by AI agents; more aggressive automated filtering; different hiring routines altogether, making it even harder to get through the door to a coveted interview. Mercenary (and slightly deceptive) automation tools like this, which are being pitched all over right now and already wreaking havoc in, for example, online dating, depend on two temporary circumstances to work, if they ever actually do: (1) that most other people don't have access to them, giving the user an edge, and (2) that the people and parties on which they're used will tolerate and take no action against them. In other words, if you take their pitches at face value, they're pretty obviously doomed in the medium term, in the sense that they'll either be rejected by the systems they operate in or simply ruin them for everyone.
Taking stock of the first few years of mainstream AI deployment, though, raises an important question. What if that's sort of the point? Or at least a world worth thinking about in a more thorough, long-term way? Generative image and video tools, for example, have significantly degraded social-media platforms, allowing bad actors and regular people to fill them with slop, intensifying existing problems with spam and deceptive content while thwarting old solutions. And, hey, look at that: Suddenly, OpenAI and Meta are launching new social networks based on AI, on which posting generated content is the point, not a problem to be solved. Generative AI may be placing immense stress on educational institutions and worsening the already strained relationships between teachers and students, but wait -- every AI company is selling ed tech now.
For an automated job-hunting app, a successful trajectory might look something like this: First, it gains a lot of users by promising to give them an advantage in their job hunts, and in some cases perhaps it does; companies doing the hiring are overwhelmed with applications they increasingly see and treat like spam; the various old systems for matching job seekers with employers become clogged and useless, leaving employers looking for alternatives; the app, now in possession of a user base full of highly motivated job seekers, has something to sell or at least is a platform hiring managers have reason to work with directly. If things go really well -- or sufficiently poorly for the other job platforms -- it ends up as an AI-flavored competitor to a site like LinkedIn, as a platform for matching employees with employers. AI was a big part of how it got there but perhaps becomes less relevant once it has arrived.
This isn't a terribly different story from that of LinkedIn itself, a platform that established job-searching dominance by building a work-themed social-media network. And for an idea of what the end state of a broader shift toward AI centricity might look like, OpenAI's recent plans to integrate outside apps directly into its platform, app style, rather than subjecting them to users' independent AI "agents," offer a plausible and instructive glimpse.
The bet by investors like Y Combinator, in other words, is on a broadly familiar form of disruption but with a couple of twists. Generative AI may expose some incumbent firms to outpricing, outselling, and outmaneuvering by start-ups that embrace it, while more and more people acclimate to strange, new, chatcentric modes of computing. In the meantime, it may also help glut, jam, and temporarily destroy the very markets in which those start-ups operate. It's a version of creative destruction in which the destruction is preemptive, indiscriminate, and maybe looks a bit like sabotage -- with a little bit of help from all of us, of course.