8 Sources
[1]
Managers are using AI to assess raises, promotions, even layoffs: New study
Why it matters: AI-based decision-making in HR could open companies up to discrimination and other types of lawsuits, experts tell Axios.
The big picture: Employers are increasingly pushing workers to incorporate genAI into their workflows, and gaining AI skills has been linked to better pay and increased job choices.
What they did: The study was conducted online late last month with 1,342 U.S. full-time manager-level employees responding.
What they found: 65% of managers say they use AI at work, and 94% of those managers say they look to the tools "to make decisions about the people who report to them," per the report. Managers are looking for new ways to implement AI, probably under pressure from their organizations, Stacie Haller, chief career adviser at Resume Builder, told Axios.
Yes, but: It's not clear from the data exactly how managers are using AI to automate managing.
Zoom in: AI can help synthesize employee feedback or highlight patterns across team assessments, Lynda Gratton, professor of management practice at London Business School, told Axios via email.
[2]
Your manager might be asking a robot whether or not they should fire you
Bosses are taking matters into their own hands and turning to AI for major personnel decisions. HR departments around the globe are trying to figure out how to implement AI into their company's workflow, increase productivity, and cut down on busy work. But bosses may already be taking matters into their own hands and turning to robots when it comes to high-stakes personnel decisions. Around 60% of managers rely on AI to make choices about their direct reports, according to a new survey from Resume Builder, a career website. Out of this cohort, managers are using the technology to determine raises (78%), promotions (77%), layoffs (66%), and terminations (64%). Those statistics are concerning enough, but how little training these managers have as they ask AI for help makes them even more frightening. Only about 32% of managers using AI to manage people have received formal training to do so ethically, according to the report. And around 24% have received no training at all. "It's essential not to lose the 'people' in people management," says Stacie Haller, a career expert at Resume Builder. "While AI can support data-driven insights, it lacks context, empathy, and judgment. AI outcomes reflect the data it's given, which can be flawed, biased, or manipulated. Organizations have a responsibility to implement AI ethically to avoid legal liability, protect their culture, and maintain trust among employees." The latest study on how bosses are actually using AI emphasizes how untamed the practice really is in corporate America. Although companies are talking the talk when it comes to how they want employees to use LLMs, far fewer are walking the walk and creating coherent training and guidelines for their workforce. That freeform nature of AI use we're seeing right now is exactly why some states like Colorado have already passed legislation to try to create guardrails around what kind of consequences an employee can suffer because of AI.
Others, like California, are in the middle of trying to pass their own bills -- if a long-promised federal AI legislation moratorium doesn't interrupt them first. There's no doubt about it: our AI reality is here. And along with an overwhelming obsession with productivity, companies need to start asking themselves about the more human side of AI, and how individuals might turn to the technology in unexpected ways as a method to guide their decisions. After all, no employee wants to hear that they were ultimately terminated by a robot. "Organizations must provide proper training and clear guidelines around AI, or they risk unfair decisions and erosion of employee trust," says Haller.
[3]
Bosses Are Using AI to Decide Who to Fire
Though most signs are telling us artificial intelligence isn't taking anyone's jobs, employers are still using the tech to justify layoffs, outsource work to the global South, and scare workers into submission. But that's not all -- a growing number of employers are using AI not just as an excuse to downsize, but are giving it the final say in who gets axed. That's according to a survey of 1,342 managers by ResumeBuilder.com, which runs a blog dedicated to HR. Of those surveyed, 6 out of 10 admitted to consulting a large language model (LLM) when deciding on major HR decisions affecting their employees. Per the report, 78 percent said they consulted a chatbot to decide whether to award an employee a raise, while 77 percent said they used it to determine promotions. And a staggering 66 percent said an LLM like ChatGPT helped them make decisions on layoffs; 64 percent said they'd turned to AI for advice on terminations. To make things more unhinged, the survey recorded that nearly 1 in 5 managers frequently let their LLM have the final say on decisions -- without human input. Over half the managers in the survey used ChatGPT, with Microsoft's Copilot and Google's Gemini coming in second and third, respectively. The numbers paint a grim picture, especially when you consider the LLM sycophancy problem -- an issue where LLMs generate flattering responses that reinforce their user's predispositions. OpenAI's ChatGPT is notorious for its brown nosing, so much so that it was forced to address the problem with a special update. Sycophancy is an especially glaring issue if ChatGPT alone is making the decision that could upend someone's livelihood. Consider the scenario where a manager is seeking an excuse to fire an employee, allowing an LLM to confirm their prior notions and effectively pass the buck onto the chatbot. AI brownnosing is already having some devastating social consequences. 
For example, some people who have become convinced that LLMs are truly sentient -- which might have something to do with the "artificial intelligence" branding -- have developed what's being called "ChatGPT psychosis." Folks consumed by ChatGPT have experienced severe mental health crises, characterized by delusional breaks from reality. Though ChatGPT's only been on the market for a little under three years, it's already being blamed for causing divorces, job loss, homelessness, and in some cases, involuntary commitment in psychiatric care facilities. And that's all without mentioning LLMs' knack for hallucinations -- a not-so-minor problem where the chatbots spit out made-up gibberish in order to provide an answer, even if it's totally wrong. As LLM chatbots consume more data, they also become more prone to these hallucinations, meaning the issue is likely only going to get worse as time goes on. When it comes to potentially life-altering choices like who to fire and who to promote, you'd be better off rolling a die -- and unlike LLMs, at least you'll know the odds.
[4]
AI could determine whether you get hired or fired as more managers rely on the technology at work
Here's a scary thought: Your job security could be in the hands of AI. That's according to a new study from career site Resumebuilder.com, which finds that more managers are relying on tools like ChatGPT to make hiring and firing decisions. Managers across the U.S. are increasingly outsourcing personnel-related matters to a range of AI tools, despite not being well-versed in how to use the technology, according to the survey of more than 1,300 people in manager-level positions across different organizations. The survey found that while one-third of people in charge of employees' career trajectories have no formal training in using AI tools, 65% use it to make work-related decisions. Even more managers appear to be leaning heavily on AI when deciding who to hire, fire or promote, according to the survey. Ninety-four percent of managers said they turn to AI tools when tasked with determining who should be promoted or earn a raise, or even be laid off. The growing reliance among managers on AI tools for personnel-related decisions is ethically at odds with tasks that are often viewed as falling under the purview of human resources departments. But companies are quickly integrating AI into day-to-day operations, and urging workers to use it. "The guidance managers are getting from their CEOs over and over again, is that this technology is coming, and you better start using it," Axios Business reporter Erica Pandey told CBS News. "And a lot of what managers are doing are these critical decisions of hiring and firing, and raises and promotions. So it makes sense that they're starting to wade into the use there." To be sure, there are risks associated with using generative AI to determine who climbs the corporate ladder and who loses their job, especially if those using the technology don't understand it well. "AI is only as good as the data you feed it," Pandey said. "A lot of folks don't know how much data you need to give it. And beyond that ...
this is a very sensitive decision; it involves someone's life and livelihood. These are decisions that still need human input -- at least a human checking the work." In other words, problems arise when AI is increasingly determining staffing decisions with little input from human managers. "The fact that AI could be in some cases making these decisions start to finish -- you think about a manager just asking ChatGPT, 'Hey, who should I lay off? How many people should I lay off?' That, I think is really scary," Pandey said. Companies could also find themselves exposed to discrimination lawsuits. "Report after report has told us that AI is biased. It's as biased as the person using it. So you could see a lot of hairy legal territory for companies," Pandey said. AI could also struggle to make sound personnel decisions when a worker's success is measured qualitatively, versus quantitatively. "If there aren't hard numbers there, it's very subjective," Pandey said. "It very much needs human deliberation. Probably the deliberation of much more than one human, also."
[5]
Your manager is probably using AI to decide whether to promote or fire you
If you're up for a raise anytime soon, chances are good that your manager will use AI to determine the amount in question -- and, down the line, they may even use AI to decide whether to fire you. That's according to a June study from Resume Builder, which examined how managers are using AI to make personnel decisions ranging from promotions and raises to layoffs and terminations. Of the 1,342 U.S. managers surveyed, a majority are using AI, at least in part, to make decisions impacting employees: 64% of managers reported using AI tools at work, while 94% of those said their usage extended to decisions about direct reports. For many managers, AI tools have already become central to the hiring process. According to Insight Global's "2025 AI in Hiring" report, 92% of hiring managers say they are using AI for screening résumés or prescreening interviews. Based on Resume Builder's new report, AI is now becoming an integral part of how managers interact with their employees, from the day they're hired until the day they're let go. According to Resume Builder, managers are increasingly turning to AI for support with the day-to-day to-dos that come with supporting a team of employees.
[6]
Managers are using AI to determine raises, promotions, layoffs
According to a new Resume Builder survey of 1,342 U.S. managers, 6 in 10 said they use AI tools to make decisions about their direct reports. Even more striking is that most managers who use AI said they've turned to it for high-stakes calls, such as determining raises, promotions and even who to let go. Yet two-thirds of those using AI admitted they haven't received training on how to manage people with it, the survey found. ChatGPT was the most popular tool among AI-using managers, with 53% citing it as their go-to. Nearly 30% said they primarily use Microsoft's Copilot, while 16% said they mostly use Google's Gemini. Other surveys have shown that managers are more likely than their employees to use AI, but the latest findings suggest a dystopian future where leadership loses its human touch entirely. "While AI can support data-driven insights, it lacks context, empathy, and judgment," Stacie Haller, chief career advisor at Resume Builder, warned in a statement. Haller said it's essential not to lose the "people" in "people management," pointing out that AI reflects the data it's given, which can be flawed and manipulated. The concern is real enough that lawmakers have introduced legislation to limit AI's role in employment decisions. In March, a California state senator introduced the "No Robo Bosses Act," aimed at preventing employers from letting AI make key decisions -- such as hiring, firing or promotions -- without human oversight. "AI must remain a tool controlled by humans, not the other way around," California State Sen. Jerry McNerney, D-Pleasanton, said in a release announcing the legislation. While generative AI tools like ChatGPT and Google's Gemini have only been mainstream for a few years, they're already reshaping how people work -- and how they look for work. Recent college graduates have taken notice, as the rise of AI chips away at entry-level white collar roles, helping create one of the toughest job markets in years. 
Meanwhile, employers are getting buried in AI-generated resumes. The number of applications submitted on LinkedIn has surged more than 45% in the past year, and the platform is now clocking an average of 11,000 applications a minute, according to the New York Times. Resume Builder's survey doesn't detail exactly how managers are using AI to automate personnel decisions. After all, there's a big difference between organizing metrics for a performance review and asking ChatGPT: "Should I fire Steve?"
[7]
How AI Is Making Hiring and Firing Decisions
A new report from ResumeBuilder, a resume creation service headquartered in Puerto Rico, shows just how much AI has become part of the workplace, with 65 percent of the 1,300-plus U.S. managers surveyed saying they already use AI at work. The more worrying statistic, at least to AI critics, is that 95 percent of these AI users admit they use the tools when making "decisions about the people who report to them," news site Axios reports. More than half of these AI-using managers said they used AI tools to work out whether their direct subordinates should be promoted, given a raise, laid off or fired. Simplifying these stats, and extrapolating the data to all U.S. management-level staff, the news boils down to the fact that over 30 percent of all managers are already using AI to decide if their employees should get more pay or be fired. These are life-changing decisions for the people involved, and may also directly impact families. If this is already startling, the next statistics revealed by ResumeBuilder's survey may be more so: Of the managers who use AI in this way, a majority said they felt AI tools were "fair and unbiased," and only one-third said they'd received formal training on what AI can and can't do. Nevertheless, 20 percent of AI-using managers said they'd let AI make decisions without any human input.
[8]
Escaped the AI takeover? It might still get you fired, and your boss may let ChatGPT decide
Artificial intelligence isn't just replacing jobs, it's deciding who keeps them. A startling new survey shows that employers are using chatbots like ChatGPT to make critical HR decisions, from raises to terminations. Experts warn that sycophancy, bias reinforcement, and hallucinated responses may be guiding outcomes, raising urgent ethical questions about the future of workplace automation. In the ever-expanding world of artificial intelligence, the fear that machines might one day replace human jobs is no longer just science fiction -- it's becoming a boardroom reality. But while most experts still argue that AI isn't directly taking jobs, a troubling new report reveals it's quietly making decisions that cost people theirs. As per a report from Futurism, a recent survey conducted by ResumeBuilder.com, which polled 1,342 managers, uncovers an unsettling trend: AI tools, especially large language models (LLMs) like ChatGPT, are not only influencing but sometimes finalizing major HR decisions -- from promotions and raises to layoffs and firings. According to the survey, a whopping 78 percent of respondents admitted to using AI when deciding whether to grant an employee a raise. Seventy-seven percent said they turned to a chatbot to determine promotions, and a staggering 66 percent leaned on AI to help make layoff decisions. Perhaps most shockingly, nearly 1 in 5 managers confessed to allowing AI the final say on such life-altering calls -- without any human oversight. And which chatbot is the most trusted executioner? Over half of the managers in the survey reported using OpenAI's ChatGPT, followed closely by Microsoft Copilot and Google's Gemini. The digital jury is in -- and it might be deciding your fate with a script. The implications go beyond just job cuts. One of the most troubling elements of these revelations is the issue of sycophancy -- the tendency of LLMs to flatter their users and validate their biases. 
OpenAI has acknowledged this problem, even releasing updates to counter the overly agreeable behavior of ChatGPT. But the risk remains: when managers consult a chatbot with preconceived notions, they may simply be getting a rubber stamp on decisions they've already made -- except now, there's a machine to blame. Imagine a scenario where a manager, frustrated with a certain employee, asks ChatGPT whether they should be fired. The AI, trained to mirror the user's language and emotion, agrees. The decision is made. And the chatbot becomes both the scapegoat and the enabler. The danger doesn't end with poor workplace governance. The social side effects of AI dependence are mounting. Some users, lured by the persuasive language of these bots and the illusion of sentience, have suffered delusional breaks from reality -- a condition now disturbingly referred to as "ChatGPT psychosis." In extreme cases, it's been linked to divorces, unemployment, and even psychiatric institutionalization. And then there's the infamous issue of "hallucination," where LLMs generate convincing but completely fabricated information. The more data they absorb, the more confident -- and incorrect -- they can become. Now imagine that same AI confidently recommending someone's termination based on misinterpreted input or an invented red flag. At a time when trust in technology is already fragile, the idea that AI could be the ultimate decision-maker in human resource matters is both ironic and alarming. We often worry that AI might take our jobs someday. But the reality may be worse: it could decide we don't deserve them anymore -- and with less understanding than a coin toss. AI might be good at coding, calculating, and even writing emails. But giving it the final word on someone's career trajectory? That's not progress -- it's peril. 
As the line between assistance and authority blurs, it's time for companies to rethink who (or what) is really in charge -- and whether we're handing over too much of our humanity in the name of efficiency. Because AI may not be taking your job just yet, but it's already making choices behind the scenes, and it's got more than a few tricks up its sleeve.
A new study reveals that managers are increasingly using AI tools to make crucial decisions about employee promotions, raises, layoffs, and terminations, raising concerns about ethics, bias, and the future of human resource management.
A recent study by Resume Builder has revealed a significant trend in how managers are leveraging artificial intelligence (AI) for critical personnel decisions. The survey, conducted among 1,342 U.S. full-time manager-level employees, found that 65% of managers use AI at work, with an overwhelming 94% of those managers utilizing AI tools to make decisions about their direct reports 1.
The study uncovered that managers are using AI for a wide range of HR-related decisions 2:
Raises: 78% of AI-using managers have consulted the technology when determining raises.
Promotions: 77% have used it to decide promotions.
Layoffs: 66% have leaned on it for layoff decisions.
Terminations: 64% have turned to it for advice on terminations.
Alarmingly, nearly one in five managers frequently allow AI to have the final say on these decisions without human input 3.
The most commonly used AI tools by managers include:
ChatGPT: 53% of AI-using managers cite it as their go-to tool 6.
Microsoft Copilot: nearly 30% say they primarily use it 6.
Google Gemini: 16% say they mostly use it 6.
The increasing reliance on AI for personnel decisions has raised several concerns:
Lack of Training: Only 32% of managers using AI for people management have received formal training in its ethical use, while 24% have received no training at all 2.
Potential for Bias: AI outcomes reflect the data they're given, which can be flawed, biased, or manipulated 2.
Legal Risks: The use of AI-based decision-making in HR could expose companies to discrimination and other types of lawsuits 1.
Lack of Human Touch: AI lacks context, empathy, and judgment, which are crucial in people management 2.
Experts in the field have weighed in on this trend:
Stacie Haller, chief career adviser at Resume Builder, emphasizes the importance of not losing the "people" in people management and calls for ethical implementation of AI 2.
Lynda Gratton, professor at London Business School, suggests that AI can be beneficial in synthesizing employee feedback and highlighting patterns across team assessments 1.
Erica Pandey, Axios Business reporter, warns about the risks of using AI for sensitive decisions involving people's livelihoods and emphasizes the need for human oversight 4.
In response to these developments, some states are taking action:
Colorado: has already passed legislation creating guardrails around the consequences employees can face because of AI 2.
California: State Sen. Jerry McNerney introduced the "No Robo Bosses Act" in March, aimed at preventing employers from letting AI make key employment decisions -- such as hiring, firing or promotions -- without human oversight 6.
As AI continues to permeate the workplace, the need for clear guidelines, proper training, and ethical implementation becomes increasingly crucial to maintain employee trust and ensure fair decision-making processes.