On January 20, 2026, two job applicants filed a class action lawsuit against Eightfold AI Inc. (“Eightfold”), alleging that Eightfold, an AI-driven hiring platform used by major employers, violated the Fair Credit Reporting Act (“FCRA”) and California’s Investigative Consumer Reporting Agencies Act (“ICRAA”) by secretly generating AI-driven applicant “likelihood of success” scores on a 0-to-5 scale, along with dossiers that function as illegal, undisclosed consumer reports.
More specifically, the lawsuit, which is pending in California’s Superior Court for the County of Contra Costa, alleges that Eightfold violated the FCRA by failing to meet the statute’s certification, notification, disclosure, authorization, and dispute requirements, and by invading job applicants’ privacy through its failure to take reasonable steps to safeguard reports. Plaintiffs also allege that Eightfold violated the ICRAA by failing to satisfy its consent and certification requirements, and by invading job applicants’ privacy by not ensuring that the reports were used for permissible purposes. In support of their claims, plaintiffs allege there was insufficient human oversight of the AI used to generate the reports, claiming that “[l]ower-ranked candidates are often discarded before a human being ever looks at their application,” while applicants who receive higher scores are further reviewed by employers.
Further, the complaint alleges that: (1) by assembling and evaluating personal data to generate hiring-related reports, Eightfold is essentially a consumer reporting agency; (2) the AI-generated scores are “consumer reports”; (3) Eightfold scrapes personal data from third-party sources that is inaccurate or incomplete; and (4) the scores rating a candidate’s likelihood of success create inferences about personality, behavior, aptitude, intelligence, and other characteristics.
It will be interesting to see how the court resolves whether credit reporting statutes apply to AI-based employment decision tools. The court may consider whether these tools fall within the FCRA’s exemption for information “solely as to transactions or experiences between the consumer and the person making the report.” The court may also need to decide whether the AI vendor qualifies as a “consumer reporting agency,” since it arguably does not assemble or evaluate information “for the purpose of providing consumer reports to third parties,” as the statute requires.
If plaintiffs succeed in establishing that AI-generated reports can constitute consumer reports, employers may face increased scrutiny of their implementation of AI tools used to make employment decisions. Further, employers should be aware of the potential liability that comes with AI-based recruiting tools, even when those tools are developed by third parties.
When Implementing AI, Employers Should Adopt the Following Best Practices:
Conduct Regular Audits of AI Tools
- Conduct formal AI audits to ensure the tools are functioning as intended and that there is transparency around how the AI tool generates results.
Develop and Update AI Guidelines
- Establish a clear framework for how the tools will be used, when to incorporate human oversight, and what requirements (e.g., notice, disclosure, authorization, consent) are implicated.
Vet Your Vendor
- Consider asking AI vendors: (1) what data the tool relies on; (2) where the data is sourced from; and (3) whether the tool generates scores, inferences, or rankings that could be used to make a hiring decision.
Stay Aligned with Applicable Federal, State, and Local Regulations
- Prioritize staying updated on federal, state, and local AI-related laws and consult with counsel to ensure best practices.
Establish Clear Company Policies
- Create transparent and accessible policies to ensure employees understand when and how AI workplace tools are used.
Epstein Becker & Green, P.C. attorneys continue to monitor AI developments and can assist when your business intersects with this technology.