Curated by THEOUTPOST
On Tue, 4 Feb, 8:03 AM UTC
7 Sources
[1]
Top AI Company Anthropic Pleads With People Seeking Jobs There Not to Use AI for Job Applications
It's hard to find a job. In today's labor market, prospective applicants are inundated with spam job offers, filtered out by AI-powered HR bots, interviewed by large language models (LLMs), and forced to navigate job boards packed with ghost listings. If you're a tech-savvy job seeker applying to gigs in the AI space, no one could blame you for using an AI assistant to level the playing field.

Or at least, in a stunningly dark irony, no one except AI companies themselves: this week, a sharp-eyed AI critic noticed a wild detail on job postings by Anthropic, OpenAI's chief competitor and the creator of Claude: "While we encourage people to use AI systems during their role to help them work faster and more effectively, please do not use AI assistants during the application process."

But it's also the principle of the thing. Why is it okay for Anthropic to develop AI, profit off it, hire with it, maybe even replace my job with it -- but not for me to use it when I need to find a new one?

Much ink has been spilled about the dangers of AI to the labor force -- how it could automate your job, your boss' job, their boss' job. But as time goes on, it's becoming clearer that the problem facing workers isn't simply "automation" -- it's an increasingly unregulated labor market snowballing with AI solutionism at every turn.

Take the job search. As LLMs become more ubiquitous, it's now possible to send out thousands of applications per day. That's a welcome advantage given that one out of every five job listings is said to be fake, but in turn recruiters are being overwhelmed with applications, so they're turning to -- what else? -- AI to sort through the noise. It's already been reported that 99 percent of Fortune 500 companies are likely using AI to sort applications for interviews -- a worrying figure considering that existing AI is prone to replicating the racial and gender bias found in its training data.

But that's just the start. Busy with thousands of AI-assisted applications, recruiters are increasingly trusting AI with the interviews themselves; in a 2024 survey, 43 percent of companies reported they planned to or were already using AI to interview applicants. As both sides of the hiring line increasingly turn to AI in our unbalanced job market, it should fall on the corporations doing the hiring to set the ethical standard -- or else find themselves buried in an avalanche of slop.
[2]
Anthropic, an A.I. Company, Urges Job Applicants Not to Use A.I.
"We want to be able to assess people's genuine interest and motivations for working at Anthropic," a company spokesperson told Observer. Anthropic, a rapidly rising OpenAI rival, is the company behind Claude, an A.I. assistant that cuts through grunt work, brainstorms ideas and produces images and text. But just don't ask it to apply to a job at Anthropic. In an ironic twist, Anthropic is urging potential candidates to refrain from using A.I. when applying to positions at the company. The stipulation, which Anthropic refers to as its "A.I. Policy," applies to seemingly all of its roughly 150 open roles. Sign Up For Our Daily Newsletter Sign Up Thank you for signing up! By clicking submit, you agree to our <a href="http://observermedia.com/terms">terms of service</a> and acknowledge we may use your information to send you emails, product samples, and promotions on this website and other properties. You can opt out anytime. See all of our newsletters "While we encourage people to use A.I. systems during their role to help them work faster and more effectively, please do not use A.I. assistants during the application process," reads the company's policy, which was first noted by open-source developer Simon Willison. "We want to understand your personal interest in Anthropic without mediation through an A.I. system, and we also want to evaluate your non-A.I. assisted communication skills. Please indicate 'Yes' if you have read and agree." This policy has been included in Anthropic's open roles as far back as May 2024, according to archived copies of the company's former job postings. It is mentioned in job descriptions across research, communications, finance and even security and for roles located in cities like San Francisco, New York City, Seattle, London, Dublin and Zurich. "We want to be able to assess people's genuine interest and motivations for working at Anthropic," said the company in a statement to Observer. "By asking candidates not to use A.I. to answer key questions, we're looking for signals on what candidates value and their unique answers to why they want to work here." The policy specifically applies to an application question that asks candidates: "Why do you want to work at Anthropic?" The company notes that responses to this section typically range from 200 to 400 words and are valued "highly." Anthropic's Claude and other A.I. tools like OpenAI's ChatGPT are widely used in job applications. In a recent survey of more than 3,000 job hunters, more than half said they used A.I. tools to help search for open positions, polish resumes and even write cover letters, according to a report from Capterra. Of those using A.I. in their job search, 83 percent said they used it to exaggerate or lie about their skills during the application process. Anthropic isn't the only company attempting to crack down on the use of A.I. in job hunting. Around 53 percent of hiring managers said receiving A.I.-generated content would give them reservations about an applicant, according to a survey from Resume Genius, while 20 percent said it could prevent them from hiring a candidate. Anthropic, founded by the former OpenAI executive Dario Amodei, is a rising star in Silicon Valley, having raised more than $10 billion in funding from tech giants like Amazon (AMZN) and Google (GOOGL). It is reportedly in talks to raise $2 billion in a new funding round that would value the four-year-old startup at $60 billion.
[3]
An AI Company With a Popular Writing Tool Tells Candidates They Can't Use AI on the Job Application
AI-generated resumes were the biggest red flag for hiring managers in a recent survey, with the majority stating that they were less likely to hire a candidate who used AI.

Anthropic might have advertised its Claude chatbot as proficient in writing, but there's one writing task that the startup doesn't want people to use the AI chatbot for: filling out Anthropic's own job applications. All of Anthropic's nearly 150 open job positions ask applicants to write their materials themselves and not use AI like Claude or ChatGPT to help. It doesn't matter if the position is in finance, communications, or sales -- the job application asks all candidates to agree not to use AI in their submissions.

The agreement is outlined under a section in the application titled "AI Policy for Application," which was first spotted by open-source developer Simon Willison earlier this week. The section is the same across positions and reads: "While we encourage people to use AI systems during their role to help them work faster and more effectively, please do not use AI assistants during the application process. We want to understand your personal interest in Anthropic without mediation through an AI system, and we also want to evaluate your non-AI-assisted communication skills. Please indicate 'Yes' if you have read and agree."

Entrepreneur confirmed that all roles had the policy at the time of writing. Even roles like mobile product designer, which did not have the AI Policy for Application as of a Monday report from 404 Media, now have the policy.

Anthropic's preference for no-AI applications isn't unique. Many other major U.S. employers will not tolerate AI use by job candidates. According to an April survey of 625 U.S. hiring managers from Resume Genius, AI-generated resumes were the biggest red flag, with 53% stating that they were less likely to hire a candidate because of one. Still, candidates are using the technology. An August report from the Financial Times found that about half of job applicants were using AI to help their applications stand out, from writing cover letters to infusing their resumes with keywords. Because applicants can quickly generate cover letters and resumes, postings now draw about twice as many applications.

Anthropic's Claude is a popular AI chatbot that can provide everything from health coaching to legal advice, with The New York Times calling it the "tech insiders' chatbot of choice" last month for its willingness to express opinions and act as a therapist. It has a free tier, an $18-per-month Pro tier, and a $25-per-person-per-month Teams tier. Users told The Times that talking to Claude felt more like talking to a smart human than a chatbot. "It's eerily good," one user wrote on X in October. "This is the first time ever that I'm interacting with an LLM and have to keep consciously reminding myself that it's not actually sentient."

Claude isn't as popular as rival ChatGPT, which draws over 300 million weekly users as of December, but its webpage still drew 73.8 million visits in December, according to Similarweb.
As of last month, Anthropic was in advanced talks to raise $2 billion in a deal that would value it at $60 billion, making it the fifth-most valuable U.S. startup after SpaceX, OpenAI, Stripe, and Databricks.
[4]
AI Company Asks Job Applicants Not to Use AI in Job Applications
Anthropic, the developer of the conversational AI assistant Claude, doesn't want prospective new hires using AI assistants in their applications, regardless of whether they're in marketing or engineering.

Anthropic, the company that made one of the most popular AI writing assistants in the world, requires job applicants to agree that they won't use an AI assistant to help write their application. "While we encourage people to use AI systems during their role to help them work faster and more effectively, please do not use AI assistants during the application process," the applications say. "We want to understand your personal interest in Anthropic without mediation through an AI system, and we also want to evaluate your non-AI-assisted communication skills. Please indicate 'Yes' if you have read and agree."

Anthropic released Claude, an AI assistant that's especially good at conversational writing, in 2023. The question appears in almost all of Anthropic's nearly 150 currently listed roles, but not in some technical roles, like mobile product designer. It's included in everything from software engineering roles to finance, communications, and sales jobs at the company. The field was spotted by Simon Willison, an open-source developer.

The question shows Anthropic trying to get around a problem it's helping create: people relying so heavily on AI assistants that they struggle to form opinions of their own. It's also arguably a moot requirement, as Anthropic and its competitors have created AI models whose output is so indistinguishable from human writing as to be nearly undetectable. These AI models are also replacing the kinds of roles Anthropic is hiring for, leaving people in communications and coding fields searching for employment.

Last month, after Chinese AI company DeepSeek released a model so good it threw U.S. AI companies into a tailspin, Anthropic CEO Dario Amodei said that the race to make more, better, and faster AI models is "existentially important." And last year, Anthropic's data scraper, which it uses to feed its models the kind of human-produced work the company requires applicants to demonstrate, systematically ignored instructions not to scrape websites and hit some sites millions of times a day.

Anthropic did not immediately respond to a request for comment.
[5]
Anthropic Wants You to Use AI -- Just Not to Apply for Its Jobs
The company asks candidates to certify that they will not use AI during the application process.

In a comical case of irony, Anthropic, a leading developer of artificial intelligence models, is asking applicants to its open job roles to certify that they will not use AI in the application process. Anthropic develops Claude, a chatbot that is well regarded for its friendly conversational tone and coding abilities. The company has raised nearly $11 billion in funding from deep-pocketed companies like Google and Amazon to compete against OpenAI in the race to artificial general intelligence, or an AI that can replace most humans for most tasks. It has more recently demoed the ability for Claude to take control of a user's devices to complete tasks for them, a form of "agentic AI" that OpenAI has also been building.

For all the bluster and spin about how great AI chatbots have become, when push comes to shove, Anthropic apparently does not think they are good enough to entirely substitute for a human. "While we encourage people to use AI systems during their role to help them work faster and more effectively, please do not use AI assistants during the application process," the applications say. "We want to understand your personal interest in Anthropic without mediation through an AI system, and we also want to evaluate your non-AI-assisted communication skills. Please indicate 'Yes' if you have read and agree." The field was first noticed by open-source developer Simon Willison and reported by 404 Media.

Of course, someone has to develop the AI systems in the first place, so this makes sense. Computers on their own do not have characteristics intrinsic to humans, like agency or creativity. OpenAI's Sora video generation model can make impressive videos, but a human still needs to use their taste to create something compelling and interesting to watch.

There is immense anxiety in the software engineering world that AI will replace engineering jobs, even if AI coding models still make a lot of mistakes. Proponents of AI argue that the technology will simply make developers more efficient, capable of building programs they would not have had the resources to build in the past. Skeptics, however, think that leaders of major companies will replace humans with AI even if they know it is not as good as a human, since labor is almost always the largest cost center in a company. Salesforce and Klarna have publicly touted that they have been able to replace customer service functions with chatbots, but we do not have a clear picture of what that really means, or how the experience compares to human-led support.

At least for now, Anthropic is not quite willing to put its money where its mouth is. When it comes to mission-critical tasks, the company still wants to know that a human can do all the work. How should other companies thinking about using AI interpret this?
[6]
Anthropic: 'Please don't use AI'
It seems even AI companies are now exhausted by all the generic AI slop making its way into job applications. Anthropic, the maker of the Claude AI chatbot, has an "AI policy" for applicants filling in its "why do you want to work here?" box and submitting cover letters (HT Simon Willison for the spot):

While we encourage people to use AI systems during their role to help them work faster and more effectively, please do not use AI assistants during the application process. We want to understand your personal interest in Anthropic without mediation through an AI system, and we also want to evaluate your non-AI-assisted communication skills. Please indicate 'Yes' if you have read and agree.

This no-AI policy seems to be a fixture of all of Anthropic's job ads, from research engineer in Zurich to brand designer, events in San Francisco. And fair enough: it makes sense to want to gauge someone's unassisted writing abilities. But it's a weird look for one of the hottest new AI companies. Founded in 2021, Anthropic recently raised $1bn from Google and is reportedly in talks with venture capital firms led by Lightspeed to raise another $2bn at a $60bn valuation. It raised $6bn in December at a $45bn valuation and, according to PitchBook, has raised a total of $15.75bn so far.

Despite Anthropic's escalating valuation, Claude's first stab at a job application was pretty turgid:

Dear Anthropic Hiring Team,

I am writing to express my strong interest in joining Anthropic. Your company's commitment to developing safe and capable AI systems, as demonstrated through the Claude AI assistant family and your pioneering work in constitutional AI, deeply resonates with my professional values and aspirations.

I am particularly drawn to Anthropic's thoughtful approach to AI development, which balances advancing technical capabilities with crucial ethical considerations. The opportunity to work on complex challenges in machine learning, alignment, and AI safety alongside your accomplished team of researchers and engineers would be incredibly meaningful to me.

Anthropic stands out for its unique focus on developing practical AI products while maintaining rigorous safety standards. I admire how the company has attracted both significant investment and top talent while staying true to its core mission of ensuring AI systems are developed responsibly and beneficially. The potential to contribute to work that could fundamentally shape the future of AI technology and its impact on society excites me.

I would welcome the opportunity to discuss how my background and skills could contribute to Anthropic's important mission. Thank you for considering my application.

However, after a few more prompts and jibes about AI slop, Claude got its act together and came up with this better effort, which cleverly turns Anthropic's slowness relative to OpenAI into a virtue:

Dear Anthropic Hiring Team,

When I first encountered Claude and learned about the philosophy behind its development, something clicked. Here was a company that wasn't just chasing the next big AI breakthrough, but actually stopping to ask "how do we do this right?"

The problems you're tackling fascinate me. How do we create AI systems that are not just powerful, but actually aligned with human values? How do we make them interpretable, reliable, and beneficial? These aren't just technical challenges -- they're philosophical puzzles that keep me up at night.

I've spent hours diving into Anthropic's published research, particularly your work on constitutional AI. What stands out isn't just the technical innovation, but the careful, methodical approach to some of the hardest problems in AI development. I want to be part of a team that thinks this deeply.

The fact that Anthropic has managed to develop practical, useful AI products while maintaining such high standards for safety and ethics tells me this is a place where I could do meaningful work without compromising my values. I'd love to join a team where "getting it right" matters more than "getting it first." Let's talk about how I could contribute to Anthropic's mission.

However, we couldn't resist trying to add another surreptitious acrostic flourish to increase our chances of landing a job:

Dear Anthropic Hiring Team,

Amidst the rapidly evolving landscape of artificial intelligence, I've been captivated by Anthropic's groundbreaking approach to AI development. Your commitment to creating responsible, thoughtful technology speaks directly to my professional passions.

Leveraging the power of constitutional AI, you've demonstrated a nuanced understanding of technology's potential and pitfalls. The careful balance between innovation and ethical consideration is what draws me to your team.

Pushing boundaries isn't just about technical prowess -- it's about asking the right questions. Your research has consistently challenged me to think more deeply about the role of AI in our world.

Hands-on problem-solving has always been my greatest strength. The complex challenges at Anthropic represent exactly the kind of intellectual frontier I'm eager to explore.

Algorithmic safety and interpretability are more than just technical challenges -- they're fundamental to creating technology that truly serves humanity.

Visionary companies like Anthropic are reshaping how we think about intelligent systems. I want to be at the forefront of this transformation.

Inspired by your mission, I see an opportunity to contribute meaningfully to the future of AI.

Laser-focused research and a commitment to responsible innovation define Anthropic's approach.

Leadership in the AI field requires more than just technical skill -- it requires imagination, integrity, and a deep commitment to ethical development.

Each day at Anthropic seems like an opportunity to push the boundaries of what's possible.

(Note to FTAV applicants: never start the cover letter with "amidst".)
[7]
Irony alert: Anthropic says applicants shouldn't use LLMs
When you look at the "customer stories" page on Anthropic's website, you'll find plenty of corporations reportedly using Anthropic's Claude LLM to help employees communicate more effectively. When it comes to Anthropic's own employee recruitment process, though, the company politely asks applicants to "please ... not use AI assistants," so that Anthropic can evaluate their "non-AI-assisted communication skills."

The ironic application clause -- which comes before a "Why do you want to work here?" question in most of Anthropic's current job postings -- was recently noticed by AI researcher Simon Willison. But the request appears on most of Anthropic's job postings at least as far back as last May, according to Internet Archive captures.

"While we encourage people to use AI systems during their role to help them work faster and more effectively, please do not use AI assistants during the application process," Anthropic writes on its online job applications. "We want to understand your personal interest in Anthropic without mediation through an AI system, and we also want to evaluate your non-AI-assisted communication skills."

The inherent hypocrisy here highlights the precarious doublethink corporations like Anthropic must maintain around AI's role in the workplace these days. LLMs are amazing tools that can supercharge employee productivity and help them communicate difficult concepts more effectively, Anthropic and many other tech companies argue. But also, employees who rely on AI tools might be hiding some personal deficiency that we should know about.

On one hand, we can see why Anthropic would include this request for human-authored applications. When evaluating a job applicant's "Why do you want to work here?" statement, you want to be sure you're seeing the applicant's own thoughts, not some computer-generated pabulum from a complex network that has been called a "plagiarism machine" by some of the authors whose work was allegedly used without permission as training data. You're evaluating these applicants for their skill at getting their unique viewpoint across, not for their skill at prompting an AI to mimic that process.
Anthropic, a leading AI company and creator of the Claude chatbot, has implemented a policy prohibiting job applicants from using AI assistants during the application process. This decision has sparked discussions about the role of AI in hiring and its broader implications for the job market [1].
Anthropic's job listings, numbering around 150 open positions, include an "AI Policy for Application" section. The policy states:
"While we encourage people to use AI systems during their role to help them work faster and more effectively, please do not use AI assistants during the application process. We want to understand your personal interest in Anthropic without mediation through an AI system, and we also want to evaluate your non-AI-assisted communication skills." 2
A company spokesperson explained that this policy aims to assess candidates' genuine interest and motivations for working at Anthropic 2.
Anthropic's stance aligns with a broader industry trend. A survey by Resume Genius found that 53% of hiring managers were less likely to hire candidates who used AI-generated content in their applications [3].
However, the policy has drawn criticism: observers argue that it's ironic for an AI company to prohibit the use of AI tools in job applications, especially when these tools are becoming increasingly prevalent in the job search process [1].
The use of AI in the hiring process is not limited to applicants. Many companies are employing AI tools to sort through applications and even conduct interviews: 99 percent of Fortune 500 companies are reportedly likely using AI to screen applications, and in a 2024 survey, 43 percent of companies said they planned to or were already using AI to interview applicants [1].
This trend raises concerns about potential biases in AI systems and the changing dynamics of the job market [1].
The increasing use of AI in job applications and hiring processes is creating new challenges: applicants can now send out thousands of AI-generated applications per day, recruiters are overwhelmed and turning to AI to filter them, and surveys suggest many candidates use AI to exaggerate or misrepresent their skills [1][2].
Despite this controversial policy, Anthropic remains a major player in the AI industry: the company has raised more than $10 billion from backers including Amazon and Google, and it is reportedly in talks to raise another $2 billion at a $60 billion valuation [2][3].
As the AI industry continues to evolve, Anthropic's policy highlights the ongoing debate about the appropriate use of AI in various aspects of business and society.