9 Sources
[1]
Lawsuit targets AI hiring systems used by Microsoft and Salesforce
In context: In California, a lawsuit against one of the most widely used AI hiring systems could reshape how companies deploy algorithms to evaluate candidates. Eightfold AI, a Santa Clara-based platform that assists employers such as Microsoft, PayPal, Salesforce, and Bayer in ranking applicants, is accused of generating undisclosed reports on job seekers without their consent - a practice the plaintiffs claim violates federal consumer protection law. The proposed class action, filed in Contra Costa County Superior Court, is the first in the US to allege that an AI hiring vendor has violated the Fair Credit Reporting Act (FCRA). Enacted in 1970 to protect consumers from inaccurate or opaque credit data, the law also covers any organization that creates reports about a person's characteristics for employment purposes. The key question is whether an algorithm that scores resumes and predicts future job titles falls under that definition. The complaint, backed by law firm Outten & Golden and the nonprofit advocacy group Towards Justice, contends that automated ranking systems should be subject to the same disclosure and correction requirements as credit scoring agencies. Eightfold uses large-scale data analysis to match candidates with jobs more efficiently than human recruiters. According to the company, its AI models draw on more than a billion professional profiles, a million job titles, and an equally vast catalog of skills and industries. When employers evaluate applicants, the system assembles "talent profiles" that describe traits such as teamwork or introversion, assess educational background, and even forecast likely future roles. The results are distilled into numerical or categorical scores that influence whether a candidate proceeds to human review. 
Critics argue that such automated assessments act as algorithmic gatekeepers, leaving applicants without access to their detailed rankings or the underlying data. Eightfold disputes the allegations. Company spokesperson Kurt Foeller told Reuters that its systems rely solely on information provided directly by candidates or shared by employer customers. "We do not scrape social media and the like," Foeller said. "We are deeply committed to responsible AI, transparency, and compliance with applicable data protection and employment laws." Still, legal observers say the case highlights a growing tension between technological innovation and decades-old privacy statutes. "These tools are designed to be biased - in the sense that they're looking for specific kinds of candidates," said David J. Walton, a Philadelphia attorney who advises companies on AI compliance but is not involved in the case. "The fine line is whether that bias veers into something unlawful," he told The New York Times. The Eightfold suit follows another high-profile challenge against Workday, an AI hiring platform, where a federal judge allowed claims to proceed last year alleging discrimination against older, disabled, and Black applicants. That San Francisco case has intensified legal debates over whether algorithmic models inadvertently encode prohibited bias or merely streamline recruitment. Regulators have also weighed in. In 2024, the Consumer Financial Protection Bureau issued guidance indicating that algorithmic scores used for employment decisions are likely consumer reports under the FCRA. Although that guidance was later rescinded under the Trump administration, it signaled how federal agencies might interpret the law in future enforcement actions. Eightfold's investors - including SoftBank's Vision Fund and General Catalyst - have wagered heavily on the platform's ability to transform corporate hiring pipelines. 
The company claims that one-third of the Fortune 500 uses its technology and has partnered with state labor departments in New York and Colorado to power public job-matching portals. Such scale means the implications of the California suit could reach far beyond a single company. A ruling that classifies AI hiring algorithms as consumer reporting systems could require tech vendors to open their processes to scrutiny, disclose ranking methods to applicants, and implement dispute mechanisms similar to those used by credit bureaus today. For now, the plaintiffs' legal team - which includes former officials from the Consumer Financial Protection Bureau and the Equal Employment Opportunity Commission - views the case as the start of a broader accountability push. "There is no AI exemption to our laws," said David Seligman, executive director of Towards Justice. "For decades, these statutes have protected people from opaque systems that decide their futures. That protection shouldn't disappear just because the system now has an algorithm."
[2]
Job Seekers Want to Know What the Hell Is Going on With AI-Based Hiring Decisions: Lawsuit
Eightfold, an AI company that makes human resources software, is being sued over one of its hiring tools. Applying for a job already sucks on its own. But increasingly, job seekers are left wondering whether an AI system is screening them out before a human ever sees their application. A new lawsuit hopes to change that by forcing more transparency into how AI hiring tools work. The case argues that automated applicant "scores" should be legally treated like credit checks and be subject to the same consumer protection laws. The proposed class action was filed on Wednesday in California state court by two women working in STEM who say AI hiring screeners have filtered them out of jobs they were qualified for. "I've applied to hundreds of jobs, but it feels like an unseen force is stopping me from being fairly considered," said Erin Kistler, one of the plaintiffs, in a press release. "It's disheartening, and I know I'm not alone in feeling this way." And she's right about not being the only person feeling this way at a time when more companies are relying on AI for hiring. Roughly 88% of companies now use some form of AI for initial candidate screening, according to the World Economic Forum. The lawsuit specifically targets Eightfold, an AI human resources company that sells tools designed to help employers manage recruiting and hiring. Among its offerings is a tool that generates a numerical score predicting the likelihood that a candidate is a good match for a given role. That scoring system sits at the center of the case. Eightfold's "match score" is generated using information pulled from a variety of sources, including job postings, an employer's desired skills, applications, and, in some cases, LinkedIn. The model then provides a score ranging from zero to five that "helps predict the degree of match between a candidate and a job position." 
The lawsuit argues that this process effectively produces a "consumer report" under the Fair Credit Reporting Act (FCRA), a federal law passed in 1970 to regulate credit bureaus and background check companies. Because the score aggregates personal information and translates it into a ranking used to determine eligibility for "employment purposes," the lawsuit claims Eightfold should be required to follow the same rules that apply to credit reporting agencies. Those rules include notifying applicants when such a report is being created, obtaining their consent, and giving them the chance to dispute any inaccurate information. "Eightfold believes the allegations are without merit. Eightfold's platform operates on data intentionally shared by candidates or provided by our customers," an Eightfold spokesperson told Gizmodo in an emailed statement. "We do not scrape social media and the like. We are deeply committed to responsible AI, transparency, and compliance with applicable data protection and employment laws." Still, the lawsuit is seeking a court order requiring Eightfold to comply with state and federal consumer reporting laws as well as financial damages. "Qualified workers across the country are being denied job opportunities based on automated assessments they have never seen and cannot correct," said Jenny R. Yang, a lawyer for the case and former chair of the U.S. Equal Employment Opportunity Commission. "These are the very real harms Congress sought to prevent when it enacted the FCRA. As hiring tools evolve, AI companies like Eightfold must comply with these common-sense legal safeguards meant to protect everyday Americans."
[3]
Job seekers are suing an AI hiring tool used by Microsoft and Paypal for allegedly compiling secretive reports that help employers screen candidates | Fortune
TL;DR: Job seekers are suing an AI hiring tool called Eightfold for allegedly compiling secretive reports that help employers screen candidates. Why is this illegal? The same reason credit rating agencies have to tell you why they dinged your score, the lawsuit claims. If the courts buy this logic, it could start to reshape the black-box world of AI hiring. What happened: Like many people who've played the job search numbers game lately, the plaintiffs were sick of applications seemingly plummeting into a void. They filed a class-action suit against Eightfold, which is used by major companies like Microsoft and PayPal for vetting potential hires. The lawsuit claims that Eightfold violated the Fair Credit Reporting Act and a similar California consumer protection law by not letting applicants view information about them and correct the record if needed. "Eightfold's technology lurks in the background of job applications," the lawsuit alleges, "collecting personal data, such as social media profiles, location data, internet and device activity, cookies and other tracking." Eightfold disputes this: The tool "operates on data intentionally shared by candidates or provided by our customers. We do not scrape social media and the like," spokesperson Kurt Foeller told us. "Eightfold believes the allegations are without merit." What isn't disputed is that Eightfold uses AI to produce a score between zero and five, ranking how much of a fit a candidate is for a given job. Why it matters: Companies now use a whole slew of behind-the-scenes AI tools to find and evaluate candidates. Candidates are playing the game, too, using their own AI tools to find jobs and craft applications. It's AI all the way down. "We are at a point where AI hiring tools are being adopted very quickly, often faster than companies are building the compliance, auditing, and governance structures needed to use them responsibly," the attorneys on the case, Jenny R. Yang and Christopher M. 
McNerney, partners at Outten & Golden LLP, told us in an email. "That creates real risk -- not only of inaccurate decisions, but also of hidden discrimination." Some states -- and New York City -- have laws governing these tools, largely focused on their potential for bias and discrimination. But AI decision-making still happens mostly without job seekers' knowledge. This isn't the first time that the Fair Credit Reporting Act has been used to challenge big data hiring systems, according to Pauline Kim, an employment law professor at the Washington University School of Law -- but it is new for one of these cases to focus on AI. What this means for you: If the lawsuit is successful -- which could take years -- AI hiring tools might be more upfront about what data they collect and work harder to ensure accuracy, Kim said. But the 55-year-old law the suit relies on might also not fully capture modern usage. The real significance, according to Kim, is that companies relying on these tools would have to be more transparent about their use. "Because the law was written in an earlier era, however, even if courts apply it, it will provide only limited transparency -- likely not enough to ensure the fairness of these systems." -- PK This report was originally published by Tech Brew.
[4]
Job Seekers Sue Company Scanning Their Résumés Using AI
"I think I deserve to know what's being collected about me and shared with employers." Thanks to scores of competing AI systems clogging up online application portals, applying for a new job in 2026 can feel more like applying for a bank loan than seeking a job. At least, that's what a group of disgruntled job seekers is claiming in a lawsuit against an AI screening company called Eightfold AI. According to the New York Times, the plaintiffs allege that Eightfold's employment screening software should be subjected to the Fair Credit Reporting Act -- the regulations protecting information collected by consumer credit bureaus. The reason, they say, can be found deep within Eightfold's AI algorithm, which actively trawls LinkedIn to create a data set of "1 million job titles, 1 million skills, and the profiles of more than 1 billion people working in every job, profession, industry, and geography." That data set, in turn, is used in marketing material to help sell its services to potential clients. Using an AI model trained on that data, plaintiffs say, Eightfold scores job applications on a scale of one to five, based on their skills, experience, and the hiring manager's goals. In sum, their argument is that the process works much like the opaque rules used to govern consumer credit scores. In the case of Eightfold, however, applicants have no way of knowing what their final score even is, let alone the steps the system took to come up with it. That creates a "black box": a situation where the people subjected to an algorithmic decision can only see the system's outcome, not the process that led to it. And if Eightfold's AI starts making things up on the fly -- an issue AI models are infamous for -- the job seeker has no way of knowing. There's also the issue of data retention. With no way to take a peek under the hood, there's no telling how much data from job applicants' résumés Eightfold collects, or what the AI company and its clients are doing with it. 
"I think I deserve to know what's being collected about me and shared with employers," Erin Kistler, one of the plaintiffs, told the NYT. "And they're not giving me any feedback, so I can't address the issues." Kistler, who has decades of experience working in computer science, told the publication she's kept close track of every application she's sent over the last year. Out of "thousands of jobs" she's applied for, only 0.3 percent moved on to a follow-up or interview, she said. It all underscores the sad state of the job market, which has become the stuff of dystopian nightmares thanks to AI hiring tools. Whether the lawsuit can gain enough momentum to challenge the massive legal grey area of AI hiring remains to be seen. If it does, it could bring relief to throngs of despondent job seekers whose careers quite literally hang in the balance. Eightfold AI didn't respond to the NYT's request for comment.
[5]
AI Hiring Firm Eightfold Sued Over Alleged Secret Scoring Of Job Applicants - Decrypt
The filing says the platform's AI is trained on more than 1.5 billion data points, without letting applicants review or correct errors. Two job seekers filed a class-action lawsuit Tuesday against AI hiring platform Eightfold, alleging that the company uses hidden artificial intelligence to secretly score applicants without their knowledge or consent, thereby violating consumer protection laws enacted in the 1970s. The complaint, filed in California's Contra Costa County Superior Court, alleges that Eightfold violated the Fair Credit Reporting Act and California's Investigative Consumer Reporting Agencies Act by assembling consumer reports on job applicants without providing required disclosures or dispute rights. Plaintiffs Erin Kistler and Sruti Bhaumik claim Eightfold's platform collects sensitive personal data, including social media profiles, location data, internet activity, and tracking cookies, from public sources like LinkedIn, GitHub, and job boards to evaluate candidates applying to companies including Microsoft, PayPal, Starbucks, and Morgan Stanley. The plaintiffs seek actual and statutory damages between $100 and $1,000 per violation under federal law, plus up to $10,000 per violation under California law, along with punitive damages and injunctive relief requiring Eightfold to change its practices. The lawsuit alleges that Eightfold's AI uses "more than 1.5 billion global data points" to generate "Match Scores" that rank applicants from 0 to 5 based on their "likelihood of success," with lower-ranked candidates often "discarded before a human being ever looks at their application." Kistler, a computer science graduate with 19 years of product management experience, applied for senior PayPal roles via Eightfold in December without landing an interview, while Bhaumik, a project manager with degrees from Bryn Mawr and the University of Pittsburgh, was automatically rejected from a Microsoft role two days after applying. 
The lawsuit claims that nearly two-thirds of large companies now use AI technology like Eightfold's to screen candidates, while 38% deploy AI software to match and rank applicants. "This case is about a dystopian AI-driven marketplace, where robots operating behind the scenes are making decisions about the most important things in our lives: whether we get a job or housing or healthcare," David Seligman, Executive Director at Towards Justice and one of the attorneys representing the plaintiffs, tweeted. "There is no AI exemption to the law -- no matter how fancy-sounding the tech or how much venture capital is behind it," he noted. The complaint alleges that Eightfold's proprietary Large Language Model incorporates data on "more than 1 million job titles, 1 million skills, and the profiles of more than 1 billion people working in every job, profession, [and] industry," plus "inferences drawn" to create profiles reflecting applicants' "preferences, characteristics, predispositions, behavior, attitudes, intelligence, abilities, and aptitudes." During the application process, neither plaintiff received a standalone disclosure that consumer reports would be generated, nor did they receive summaries of their consumer protection rights or information about Eightfold's role as a consumer reporting agency, the lawsuit alleges. Decrypt has reached out to Eightfold for comment and will update this article should they respond.
[6]
AI Recruitment Platform Eightfold Sued for Screening Job Applicants Without Consent | AIM
The lawsuit marks the first case in the US to accuse an AI recruitment firm of breaching the Fair Credit Reporting Act. Eightfold AI, an AI recruitment platform based in the US and used by companies such as Microsoft and PayPal, as well as various Fortune 500 firms, is being sued in California for reportedly compiling applicant screening reports without their consent. The lawsuit, filed on January 20, marks the first case in the US to accuse an AI recruitment firm of breaching the Fair Credit Reporting Act, according to the legal firms that initiated the suit. It also highlights how consumer advocates are seeking to enforce existing laws on AI systems that can infer information about individuals through extensive data analysis. "In order to protect against the harms of such reports, the FCRA requires consumer reporting agencies like Eightfold to make certain disclosures, obtain certain certifications, and ensure that consumers (here, job applicants) have a mechanism to review and correct reports that are provided to prospective employers for purposes of determining eligibility for employment," the suit said. The startup offers tools to speed up hiring by assessing job applicants and predicting their fit for positions using data from online resumes and job listings. "There is no AI-exemption to these laws, which, for decades, have been an essential tool in protecting job applicants from abuses by third parties, like background check companies, that profit by collecting information about and evaluating job applicants," they said in the lawsuit. However, individuals seeking employment at firms that use these technologies are not informed or given an opportunity to contest inaccuracies, as alleged by Erin Kistler and Sruti Bhaumik in their proposed class-action lawsuit. As a result, they assert that Eightfold breached the FCRA and a California statute that grants consumers the right to access and dispute credit reports utilised in hiring and lending. 
According to Eightfold representative Kurt Foeller, the platform operates on data provided by candidates or clients, as reported by Reuters. "We do not scrape social media and the like. We are deeply committed to responsible AI, transparency, and compliance with applicable data protection and employment laws," Foeller said. According to the lawsuit, Eightfold generates consumer reports for potential employers using its Evaluation Tools. These tools claim to evaluate job candidates not just as individuals, pinpointing their likely skills, experiences, and traits, but also in relation to one another, ranking applicants on a scale from 0 to 5 based on the findings, conclusions, and assumptions Eightfold's proprietary AI draws about their "likelihood of success." Eightfold creates talent profiles of job seekers that include personality descriptions such as 'team player' and 'introvert', ranks their 'quality of education', and predicts their future titles and companies, according to the lawsuit. "Employers use these reports to sift through applications, typically only reviewing highly ranked candidates. Lower-ranked candidates are often discarded before a human being ever looks at their application," the lawsuit said. Kistler and Bhaumik filed a lawsuit in California state court on behalf of all job applicants in the US who were assessed using the company's tools. The proposed class is represented by the labour law firm Outten & Golden and the nonprofit advocacy organisation Towards Justice. Kistler sought positions at various companies that use Eightfold, including PayPal, while Bhaumik pursued opportunities at firms like Microsoft, as stated in the complaint. Both individuals have degrees in science or technology and over a decade of experience. They were not selected for employment, and each believes that Eightfold's tools contributed to this outcome. Microsoft and PayPal are not named as defendants in the lawsuit.
[7]
Job applicants sue to open 'black box' of AI hiring decisions
For millions of applicants seeking jobs at hundreds of employers, the first hurdle is clearing an artificial intelligence system that screens their résumés and evaluates their suitability. The process is similar, in some ways, to how credit agencies rank consumers by assigning them a numeric score based on their finances and borrowing history. And now, a lawsuit filed by a group of job applicants claims that some AI employment screening tools should be subject to the same Fair Credit Reporting Act requirements as credit agencies. The lawsuit's goal is to compel AI companies to disclose more information about what data they are gathering on applicants and how they are being ranked. The target of the suit is a screening company, Eightfold AI, that sells its technology as a tool for employers to save time and money. Using sources such as LinkedIn, Eightfold has created a data set that it says encompasses more than "1 million job titles, 1 million skills, and the profiles of more than 1 billion people working in every job, profession, industry, and geography." When candidates apply for a job, Eightfold's software evaluates their skills and the employers' needs, then scores the applicants on a scale of 1 to 5. Job seekers say the screening tool can become an algorithmic gatekeeper, blocking candidates from advancing to a human hiring manager and giving them no feedback on their scores or how the rating was generated. If the tool is making mistakes, candidates have no way to correct them. "I think I deserve to know what's being collected about me and shared with employers," Erin Kistler, one of the plaintiffs in the lawsuit, said in an interview. "And they're not giving me any feedback, so I can't address the issues." Kistler has a degree in computer science and decades of experience in the technology industry. 
Out of the thousands of jobs she has sought in the past year, which she has meticulously tracked, only 0.3% of her applications have progressed to a follow-up or interview. Several of her applications were routed through Eightfold's software system. A representative for Eightfold, based in Santa Clara, Calif., did not respond to requests for comment. The lawsuit, filed against Eightfold in Contra Costa County Superior Court in California, is an early attempt at what employers and their lawyers expect will be a wave of challenges to the use of AI in hiring. David J. Walton, a Philadelphia lawyer who works with employers on AI issues and is not involved in the lawsuit, said companies could make a valid argument that these tools differ from credit scoring systems. The hiring software, he said, can be viewed as simply ranking candidates in the same way a human recruiter might sort applicants into tiers of desirable and less desirable candidates. Still, Walton said that as companies push the boundaries of what AI tools can do, they're often operating in legally gray areas -- especially around data privacy and technology that may illegally discriminate against people even if not explicitly trained to do so. "These tools are designed to be biased. I mean, they're designed to find a certain type of person," he said. "So they are designed to be biased but they're not designed to improperly be biased. And that's a very fine line." Kistler's lawsuit, which was filed by Outten & Golden and Towards Justice, a nonprofit Denver law firm, with help from former lawyers at the Consumer Financial Protection Bureau and the Equal Employment Opportunity Commission, is taking a relatively novel approach to challenging AI technology. It is among the first cases to invoke credit reporting laws as a way to try to protect applicants against what some might refer to as "black box" employment decisions, where the applicant is kept in the dark about why they were disqualified. 
Congress enacted the Fair Credit Reporting Act in 1970, not long after credit reporting agencies began using computer databases to compile their dossiers of personal information and turn them into numerical scores. To protect people against errors in those records, lawmakers required reporting agencies to disclose that information to consumers and allow them to dispute inaccuracies. The law is broader than the credit reporting requirements for which it's named. It defines a "consumer report" as any gathering of information on someone's "personal characteristics" that is used to determine their eligibility for various financial services or "employment purposes." "There is no AI exemption to our laws," said David Seligman, executive director of Towards Justice. "Far too often, the business model of these companies is to roll out these new technologies, to wrap them in fancy new language, and ultimately to just violate people's rights." Seligman believes the law requires both Eightfold and the companies that use its technology to disclose to applicants what data is being gathered and provide them the opportunity to dispute and correct inaccuracies. The complaint, which is seeking class-action status, asks for unspecified financial damages and an order that Eightfold comply with state and federal consumer reporting laws. Other lawsuits have taken aim at AI software systems for perceived violations of federal and state anti-discrimination laws. The most prominent is a 2023 lawsuit against Workday in the U.S. District Court in San Francisco that says the company's system, another popular one for screening job seekers, illegally discriminates against some people such as older job seekers, those with disabilities and Black applicants. Judge Rita F. 
Lin rejected Workday's motion to dismiss the case, finding that the plaintiffs' evidence -- including a rejection notice one job seeker received at 1:50 a.m., less than an hour after submitting his application -- "plausibly supports an inference that Workday's algorithmic tools disproportionately reject applicants based on factors other than qualifications, such as a candidate's race, age or disability." In May, she granted preliminary approval for the case to proceed as a collective action that could potentially include millions of rejected job applicants. A Workday spokesperson said the lawsuit's claims are false. "Workday's AI recruiting tools are not trained to use -- or even identify -- protected characteristics like race, age, or disability," the company said in a statement. A 2024 guidance note from the Consumer Financial Protection Bureau opined that dossiers and scores created for hiring purposes were subject to the Fair Credit Reporting Act and that the vendors who created them legally qualified as consumer reporting agencies. Those notes serve as warnings for companies, telling them how regulators intend to enforce the laws they oversee. Under President Donald Trump, the bureau changed its stance. Russell Vought, the bureau's acting director -- who has spent his tenure trying to demolish and close the agency -- rescinded the guidance memo in May. Corporate litigation typically takes years, and the case against Eightfold is unlikely to move fast. But the underlying issues have also been brewing for years. Jenny Yang, a former chair of the Equal Employment Opportunity Commission appointed during the Obama administration and one of the lawyers representing the plaintiffs in the lawsuit, said the commission began studying algorithmic hiring systems more than a decade ago. "We realized they were fundamentally changing how people were hired. People were getting rejected in the middle of the night and nobody knew why," she said.
[8]
Lawsuit Claims This AI Tool Misused Job Applicants' Credit Info
Eightfold, deployed as a recruitment aid by big names like Microsoft, Salesforce and PayPal, says its AI tech can boost the hiring process by automatically assessing job applicants and predicting if they'd fit the criteria you need for open positions at your company, leveraging a mix of data from online resumes and job listings, Reuters explains. The company's own website boasts, in fancy PR-speak, that it can transform recruiting "from human scale to agent scale," using "deep talent intelligence" to screen "millions to power your Infinite Workforce." It allows HR teams to assess far more candidates than human recruiters could process in the same time, but the plaintiffs leading the class action suit -- job seekers Erin Kistler and Sruti Bhaumik -- accuse Eightfold of breaching the Fair Credit Reporting Act (FCRA). They allege that the AI system is not transparent, and that applicants aren't being given notice the AI tool is being used in this way to infer information about their backgrounds. Also, if an error occurs that impacts a candidate's chances, there's no process to dispute it. This is something the FCRA provides for, to allow individuals recourse when mistakes occur. There's also a California law giving consumers the right to know if personal credit information is used for banking or hiring processes. An Eightfold spokesperson told the news outlet that the company's systems work by analyzing data from candidates or shared by its customers. They also promised the AI doesn't "scrape social media and the like." They said Eightfold is "deeply committed to responsible AI, transparency" and observes compliance with "applicable data protection and employment laws."
[9]
AI company Eightfold sued for helping companies secretly score job seekers
Jan 21 (Reuters) - Eightfold AI, a venture capital-backed artificial intelligence hiring platform used by Microsoft, PayPal and many other Fortune 500 companies, is being sued in California for allegedly compiling reports used to screen job applicants without their knowledge. The lawsuit filed on Tuesday accusing Eightfold of violating the Fair Credit Reporting Act shows how consumer advocates are seeking to apply existing law to AI systems capable of drawing inferences about individuals based on vast amounts of data. Santa Clara, California-based Eightfold provides tools that promise to speed up the hiring process by assessing job applicants and predicting whether they would be a good fit for a job using massive amounts of data from online resumes and job listings. But candidates who apply for jobs at companies that use those tools are not given notice and a chance to dispute errors, job applicants Erin Kistler and Sruti Bhaumik allege in their proposed class action. Because of that, they claim Eightfold violated the FCRA and a California law that gives consumers the right to view and challenge credit reports used in lending and hiring. "There is no AI-exemption to these laws, which have for decades been an essential tool in protecting job applicants from abuses by third parties--like background check companies--that profit by collecting information about and evaluating job applicants," they said in the lawsuit. A spokesperson for Eightfold did not immediately reply to a request for comment. Eightfold is backed by venture capital firms including SoftBank Vision Fund and General Catalyst. Kistler and Bhaumik sued in California state court on behalf of all U.S. job seekers who applied for jobs and were evaluated using the company's tools. Labor law firm Outten & Golden and nonprofit advocacy group Towards Justice represent the proposed class. 
Eightfold creates talent profiles of job seekers that include personality descriptions such as "team player" and "introvert," rank their "quality of education," and predict their future titles and companies, according to the lawsuit. Kistler applied to roles at several companies that use Eightfold, including PayPal, and Bhaumik applied to companies including Microsoft, according to the complaint. Both hold science or tech degrees and have more than 10 years of experience. Neither was hired, and both believe that Eightfold's tools played a role. Microsoft and PayPal are not defendants in the lawsuit. A Microsoft spokesperson declined to comment. A spokesperson for PayPal did not immediately reply to a request for comment. One-third of Eightfold customers are Fortune 500 companies, including Salesforce and Bayer, according to the company's website. The New York State Department of Labor and Colorado Department of Labor and Employment also offer Eightfold-powered platforms for job seekers. (Reporting by Jody Godoy in New York; Editing by Matthew Lewis)
A groundbreaking class-action lawsuit targets Eightfold AI, an AI-powered hiring platform used by Microsoft, PayPal, and Salesforce. Two job seekers claim the system violates the Fair Credit Reporting Act by generating hidden algorithmic assessments without applicant consent, potentially reshaping how AI hiring tools operate across the industry.
Two job seekers have filed a class-action lawsuit against Eightfold AI, marking the first legal challenge in the United States to allege that an AI hiring vendor has violated the Fair Credit Reporting Act (FCRA).1
The complaint, filed in California's Contra Costa County Superior Court, targets the Santa Clara-based company's AI-powered hiring platform, which assists major employers including Microsoft, PayPal, Salesforce, and Bayer in evaluating candidates.1

Plaintiffs Erin Kistler and Sruti Bhaumik claim that Eightfold's system generates secretive candidate reports and match scores without proper disclosure or consent, violating consumer protection laws enacted in the 1970s.5
Kistler, a computer science graduate with 19 years of product management experience, told The New York Times she applied for thousands of jobs over the past year, with only 0.3 percent resulting in follow-ups or interviews.4
"I think I deserve to know what's being collected about me and shared with employers," Kistler said, expressing frustration over the lack of feedback.4

Eightfold AI uses large-scale data analysis to match candidates with jobs more efficiently than human recruiters. According to the company, its AI models draw on more than 1 billion professional profiles, 1 million job titles, and an equally vast catalog of skills and industries.1
The lawsuit alleges the platform's AI is trained on more than 1.5 billion global data points, collecting information from sources like LinkedIn, GitHub, and job boards.5
When employers evaluate applicants, the system assembles talent profiles that describe traits such as teamwork or introversion, assess educational background, and even forecast likely future roles.1
These algorithmic assessments are distilled into numerical match scores ranging from zero to five that predict the likelihood a candidate is a good fit for a given role.2
Lower-ranked candidates are often discarded before a human being ever looks at their application, creating what critics call a "black box" where job seekers cannot see the process behind the decision.4
The lawsuit contends that automated ranking systems should be subject to the same disclosure and correction requirements as credit scoring agencies.1
Backed by law firm Outten & Golden and nonprofit advocacy group Towards Justice, the complaint argues that because Eightfold's scores aggregate personal information and translate it into rankings used for employment purposes, the company should follow rules that apply to consumer reporting agencies.2
Those rules include notifying applicants when such a report is being created, obtaining their consent, and giving them the chance to dispute any inaccurate information.2
The lawsuit alleges that during the application process, neither plaintiff received standalone disclosure that consumer reports would be generated, nor did they receive summaries of their consumer protection rights.5
The plaintiffs seek actual and statutory damages between $100 and $1,000 per violation under federal law, plus up to $10,000 per violation under California law.5
Eightfold disputes the allegations. Company spokesperson Kurt Foeller stated that the platform "operates on data intentionally shared by candidates or provided by our customers. We do not scrape social media and the like."3
He emphasized Eightfold's commitment to responsible AI, transparency, and compliance with applicable data protection and employment laws.1
Yet the case highlights growing tension between technological innovation and decades-old privacy statutes. Roughly 88% of companies now use some form of AI for initial candidate screening, according to the World Economic Forum.2
Legal observers note the challenge ahead. "These tools are designed to be biased - in the sense that they're looking for specific kinds of candidates," said David J. Walton, a Philadelphia attorney who advises companies on AI compliance. "The fine line is whether that bias veers into something unlawful."1
The Eightfold suit follows another high-profile challenge against Workday, an AI-powered hiring platform, where a federal judge allowed claims to proceed alleging discrimination against older, disabled, and Black applicants.1
Regulators have also weighed in. In 2024, the Consumer Financial Protection Bureau issued guidance indicating that algorithmic scores used for employment decisions are likely consumer reports under the FCRA, though that guidance was later rescinded under the Trump administration.1
Eightfold's investors - including SoftBank's Vision Fund and General Catalyst - have wagered heavily on the platform's ability to transform corporate hiring pipelines. The company claims that one-third of the Fortune 500 uses its technology and has partnered with state labor departments in New York and Colorado to power public job-matching portals.1
Such scale means the implications could reach far beyond a single company. A ruling that classifies AI hiring algorithms as consumer reporting systems could require tech vendors to open their processes to scrutiny, disclose ranking methods to applicants, and implement dispute mechanisms similar to those used by credit bureaus today.1
"There is no AI exemption to our laws," said David Seligman, executive director of Towards Justice. "For decades, these statutes have protected people from opaque systems that decide their futures. That protection shouldn't disappear just because the system now has an algorithm."1
Jenny R. Yang, a lawyer for the case and former chair of the U.S. Equal Employment Opportunity Commission, added that qualified workers are being denied job opportunities based on automated assessments they have never seen and cannot correct.2
Attorneys on the case note that AI hiring tools are being adopted very quickly, often faster than companies are building the compliance, auditing, and governance structures needed to use them responsibly, creating real risk of inaccurate decisions and hidden discrimination.3
Summarized by Navi