A lawsuit against Workday, the widely used human resources platform, has expanded into a collective action, alleging systemic discrimination in its artificial intelligence-powered recruitment tools. The case, initially filed by Derek Mobley in February 2023, claims the company’s AI systems unfairly reject applicants based on age, race and disability.
Mobley, an African American man over the age of forty with reported anxiety and depression, applied for over one hundred positions through employers using Workday’s platform. He alleges he was rejected repeatedly and rapidly, sometimes within minutes of submitting an application, even in the middle of the night. He argues that Workday’s AI systematically discriminated against him.
Workday initially defended itself by asserting that it is merely a software provider and that the ultimate hiring decisions rest with employers. The court, however, rejected this argument and allowed the lawsuit to proceed. In May 2025, the case was certified as a collective action, potentially opening the door for millions of affected applicants to join the suit. Workday has stated it automatically rejected approximately 1.1 billion applications during the relevant period.
The Equal Employment Opportunity Commission (EEOC) has also weighed in, stating that Workday must face claims that its AI software is biased, according to Reuters. This development underscores the growing scrutiny of AI-driven recruitment tools and their potential for discriminatory practices.
The case raises critical questions for companies using AI in hiring, with implications extending well beyond the specifics of this dispute. The core issue is whether organizations adequately assess and monitor the fairness and objectivity of these systems. An internal analysis highlighted in the case details revealed that a similar AI tool favored men under forty, systematically disadvantaging other applicants. When challenged, the defense that a company merely used third-party software is unlikely to succeed, particularly under German law.
Experts emphasize that responsibility for ensuring non-discrimination remains with the company, regardless of the technology employed. The argument that “the AI made the decision” will not shield organizations from legal repercussions.
To mitigate risk, companies are advised to implement several key measures. These include establishing clear internal accountability for AI systems, conducting regular audits of decision-making criteria, meticulously documenting the functionality and potential risks of AI tools, and prioritizing sensitization and training for HR departments. Developing internal guidelines for fair AI utilization is also crucial.
The case highlights a broader trend of increased legal challenges to AI-driven HR software. Inc.com reports that Workday and other HR software providers are facing growing scrutiny over potential discriminatory outcomes. The legal landscape surrounding AI bias is rapidly evolving, and companies must proactively address these concerns to avoid potential liabilities.