Legal Update
Jul 19, 2024
Mobley v. Workday: Court Holds AI Service Providers Could Be Directly Liable for Employment Discrimination Under “Agent” Theory
Seyfarth Synopsis: On July 12, 2024, the Court issued a mixed ruling in the closely watched Mobley v. Workday putative class action, which claims that Workday, a Human Capital Management platform, is directly liable for alleged unlawful employment discrimination caused by an employer’s use of Workday’s AI-powered hiring tools. The key issue before the Court was whether Workday could be directly liable under Title VII and other federal civil-rights laws. While the Court dismissed the claims that Workday acted as an “employment agency,” it allowed claims that Workday acted as an “agent” of employers to proceed to discovery. This ruling has significant implications for both AI vendors and employers using AI-powered hiring tools, potentially expanding the scope of liability under federal anti-discrimination laws.
Judge Rita Lin of the Northern District of California accepted the plaintiff’s claim that an AI vendor could be directly liable for employment discrimination under Title VII, the ADA, and the ADEA, under the theory that the AI vendor was acting as an “agent” of the employer. The EEOC had previously filed an amicus brief supporting this novel theory of liability, as we discussed in our April 29, 2024 management update.
In Mobley v. Workday, the plaintiff alleges that Workday’s AI-powered applicant screening tools discriminate on the basis of race, age, and disability in violation of federal and state anti-discrimination laws. The putative class action has gained significant attention due to its potential to set precedent for AI vendor liability in hiring processes. Initially, the Court granted Workday’s motion to dismiss the original complaint, with leave to amend. Following the plaintiff’s filing of the First Amended Complaint, Workday again moved to dismiss. It was at this stage that the EEOC filed an amicus brief, supporting the plaintiff’s novel theories of direct AI vendor liability and urging the Court to deny the motion to dismiss.
The Court Dismissed Plaintiff’s “Employment Agency” Claims
The Court’s decision, issued on July 12, 2024, rejected the theory that Workday, the AI vendor, was an “employment agency” under federal law, finding that Workday’s alleged activities did not meet the statutory definition of “procuring” employees for employers. The Court analyzed the First Amended Complaint and found no support for the conclusory allegations that Workday was the entity recruiting or soliciting candidates, and accordingly dismissed the claim that Workday was acting as an “employment agency.”
The Court Allowed Plaintiff’s “Agent” Theory of Liability
While the Court’s dismissal of the “employment agency” theory represents a partial rejection of the liability theories advanced by the plaintiff and the EEOC, its acceptance of the “agent” theory of liability means that there is now precedent for AI vendors to face direct liability for employment discrimination claims. (The Court did not rule on the plaintiff’s alternative argument that Workday could also be liable as an “indirect employer,” in light of the decision allowing the case to proceed under the “agent” theory of liability.)
In denying Workday’s attempt to dismiss the plaintiff’s “agent” theory of liability, the Court emphasized that the First Amended Complaint “plausibly alleges that Workday’s customers delegated their traditional function of rejecting candidates or advancing them to the interview stage to Workday.”
While Workday argued that it was simply providing a tool that implemented the employers’ criteria, the Court rejected that characterization and held that the First Amended Complaint sufficiently alleged that “Workday’s software is not simply implementing in a rote way the criteria that employers set forth, but is instead participating in the decision-making process by recommending some candidates to move forward and rejecting others.”
The Court also highlighted the allegation that Mobley received rejection emails not just outside of business hours, but allegedly almost immediately after submitting his application. The opinion notes one allegation that “Mobley received a rejection at 1:50 a.m., less than one hour after he had submitted his application.” While the Court accepted the inference that this rapid rejection might be evidence of automation in the decision-making process, it remains to be tested in discovery whether, alternatively, the rejection was simply consistent with “implementing in a rote way” an employer’s straightforward “knockout” criteria or minimum qualifications.
In considering whether Workday was simply applying rote criteria, the Court’s opinion draws a distinction between Workday’s alleged role and that of a simple spreadsheet or email tool, suggesting that the degree of automation and decision-making authority was relevant to the agency analysis. While the opinion accepts that spreadsheet programs and email systems do not qualify as “agents” because they have not been “delegated responsibility,” the Court drew a distinction between those simple tools and Workday, writing:
"By contrast, Workday does qualify as an agent because its tools are alleged to perform a traditional hiring function of rejecting candidates at the screening stage and recommending who to advance to subsequent stages, through the use of artificial intelligence and machine learning."
The Court’s opinion emphasized the importance of the “agency” theory in addressing potential enforcement gaps in our anti-discrimination laws. In this regard, the Court illustrated the potential gaps with a hypothetical scenario: a software vendor intentionally creates a tool that automatically screens out applicants from historically Black colleges and universities, unbeknownst to the employers using the software. Without the agency theory, the Court opined, no party could be held liable for this intentional discrimination. In construing federal anti-discrimination laws broadly and adapting traditional legal concepts to the evolving relationship between AI service providers and employers, the Court was motivated, in part, by a desire to avoid such loopholes in liability.
By allowing the plaintiff’s agency theory to proceed, as supported by the EEOC in its amicus brief, the ruling opens the door for a significant expansion of liability for AI vendors in the hiring process, with potentially far-reaching implications for both AI service providers and employers using those tools.
What's Next?
The allegations that Workday’s customers “delegate” to Workday the function of rejecting candidates or advancing them to the interview stage may not accurately reflect how most employers would characterize their actual use of AI tools in hiring. Nevertheless, the plaintiff’s “agency” arguments have now survived the motion to dismiss. As a result, these claims will be subject to factual development and scrutiny during the discovery phase of the litigation. Plaintiffs are likely to seek broad-based discovery into Workday’s AI algorithms, their training data, and the specific ways these tools have been used in the hiring processes of the employers named in the First Amended Complaint.
In light of the decision and the EEOC’s support of the plaintiff’s theory of liability, employers using AI-powered hiring tools should review their processes to ensure they can clearly articulate the role these tools play in their hiring decisions. They should also be prepared to demonstrate that their use of these tools does not result in disparate impacts on protected groups.
We will continue to monitor these developing issues. For additional information, we encourage you to contact the authors of this article, a member of Seyfarth's People Analytics team, or any of Seyfarth's attorneys.