Legal Update
May 3, 2024
Department of Labor Issues Comprehensive Artificial Intelligence “Promising Practices” Designed to Avoid Bias: All Employers Should Take Note
Seyfarth Synopsis: On April 29, 2024, the Department of Labor published extensive guidance on the use of artificial intelligence in hiring and employment. While the guidance is addressed to federal contractors, all private-sector employers using or considering using artificial intelligence should pay attention. The guidance makes clear that long-standing nondiscrimination principles fully apply to AI and outlines “Promising Practices” designed to mitigate AI risks in employment, including the risk of unlawful bias from the use of AI. While not a binding standard, the Department of Labor’s issuance of these “Promising Practices” demonstrates regulators’ evolving expectations surrounding employers’ use of AI.
President Biden’s comprehensive Executive Order on AI, issued in October 2023, directed multiple federal agencies to issue AI-related guidance.[1] As part of this “whole of government” approach, the Department of Labor was specifically tasked with publishing guidance for federal contractors on “nondiscrimination in hiring involving AI and other technology-based hiring systems.”
The April 29 AI guidance issued by the Department of Labor's Office of Federal Contract Compliance Programs (OFCCP) consists of two parts: (1) a Q&A section addressing how OFCCP will apply existing nondiscrimination laws to AI tools, and (2) a list of “Promising Practices” for the development and use of AI in employment.
While the Department of Labor’s guidance is specifically addressed to federal contractors, it has far-reaching implications for all employers, not just federal contractors. Consistent with the EEOC’s position, the guidance makes clear that long-standing nondiscrimination principles fully apply to AI, and the “Promising Practices” represent OFCCP’s attempt to capture best practices for mitigating AI risks in employment, drawing from the prior work of stakeholders in the responsible AI ecosystem. While not a binding standard, OFCCP’s issuance of these “Promising Practices” is a significant marker of the evolving expectations surrounding employers’ use of AI.
Applying Existing Laws to AI: OFCCP Reinforces Foundational Principles
In the Q&A portion of the guidance, OFCCP makes clear that federal contractors are fully responsible for affirmatively ensuring their AI tools comply with existing nondiscrimination laws. The agency also emphasizes federal contractors’ obligations to maintain records related to AI tools, and reminds contractors that their obligation to provide reasonable accommodations extends to the contractor’s use of automated systems. OFCCP also explains how it will use its existing authorities to investigate federal contractors’ use of AI in employment decisions. Key takeaways include:
- AI tools can constitute “selection procedures” subject to OFCCP’s existing regulations. Consistent with prior statements from OFCCP in 2019[2] and the EEOC in May 2023,[3] OFCCP confirms that AI-based tools used to make employment decisions can be “selection procedures” under the 1978 Uniform Guidelines on Employee Selection Procedures (UGESP) and are therefore subject to the adverse impact analyses and validation processes required under UGESP (the best-known of which, the “four-fifths rule,” is illustrated in the sketch following this list).
- Reasonable accommodations obligations fully apply. Federal contractors must make reasonable accommodations to ensure individuals with disabilities have equal access to jobs, including in the contractor's use of AI-based selection procedures.
- Employers are responsible for their AI tools, whether developed in-house or procured from an AI vendor. OFCCP unambiguously warns that federal contractors cannot delegate or avoid their nondiscrimination and affirmative action obligations by using third-party AI products. During an OFCCP audit, contractors must be able to provide the information and records needed for OFCCP to assess compliance, including information about the impact and validity of AI-based selection procedures. OFCCP may also request information regarding the design of the tool and whether alternative approaches were considered and tested for adverse impact.
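To make the UGESP concept concrete, the following minimal Python sketch computes group selection rates and the “four-fifths” impact ratio described at 29 C.F.R. § 1607.4(D). The data, function names, and 0.80 flag are illustrative assumptions; a ratio below four-fifths is only a starting point for further statistical and legal analysis, not a legal conclusion.

```python
# Illustrative sketch of the UGESP "four-fifths rule" adverse impact check.
# The data and function names are hypothetical; in practice this simple
# ratio is paired with significance testing and legal review.

def selection_rate(selected: int, applicants: int) -> float:
    """Selection rate = number selected / number of applicants in the group."""
    return selected / applicants

def impact_ratios(group_counts: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Return each group's selection rate divided by the highest group's rate.

    A ratio below 0.80 (four-fifths) is generally regarded by the agencies
    as evidence of adverse impact. See 29 C.F.R. § 1607.4(D).
    """
    rates = {g: selection_rate(sel, total) for g, (sel, total) in group_counts.items()}
    highest = max(rates.values())
    return {g: rate / highest for g, rate in rates.items()}

# Hypothetical applicant-flow data: group -> (selected, total applicants)
counts = {"Group A": (48, 80), "Group B": (24, 60)}
for group, ratio in impact_ratios(counts).items():
    flag = "review for adverse impact" if ratio < 0.80 else "above 4/5 threshold"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```

Where such an analysis reveals adverse impact, UGESP calls for the employer to validate the selection procedure or consider alternatives with less adverse impact.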
While these principles are not new, OFCCP’s firm stance serves as a call to action for employers to proactively examine their use of AI. As OFCCP Acting Director Michelle Hodge recently warned, federal contractors “can’t just pull something off the shelf and decide to use it” without ensuring compliance with OFCCP regulations. Just as federal contractors should not rely on a defense of “I thought my AI vendor was handling that,” private-sector employers should be mindful of these evolving standards and of how such defenses may be received if accusations of unlawful bias from the use of AI arise. Now is an excellent time for employers using AI, whether or not they are federal contractors, to work closely with trusted legal counsel to develop a comprehensive approach to AI risk management.
OFCCP’s “Promising Practices” Reflect an Evolving Consensus on AI Risk Management
The “Promising Practices” portion of OFCCP’s guidance outlines recommendations to help federal contractors “avoid potential harm to workers and promote trustworthy development and use of AI.” These recommendations span the AI lifecycle, from development to deployment and monitoring, and include “big picture” concepts like the creation of internal AI governance frameworks. Many of the practices identified closely align with concepts from the National Institute of Standards and Technology’s AI Risk Management Framework (NIST AI RMF).
OFCCP’s guidance acknowledges that these “Promising Practices” are “not expressly required” and are “not an exhaustive list but rather an initial framework,” and further cautions that “the rapid advancement and adoption of AI in the employment context means that these practices will evolve over time.”
Despite their non-mandatory nature, employers should not ignore OFCCP’s guidance. The impact of OFCCP’s April 29 “Promising Practices” on employer behavior may follow a trajectory similar to that of the EEOC’s 2017 “Promising Practices” for preventing harassment, which significantly shaped the way employers thought about harassment prevention. By 2017, some proactive employers had already implemented policies and procedures aligned with the EEOC’s recommendations. But as the #MeToo movement gained momentum in late 2017, increased public awareness of harassment and demand for action prompted other employers to re-evaluate and revise their harassment policies and training programs to align with the EEOC’s recommendations. The EEOC’s ultimate goal was to prompt more proactive, comprehensive, and systemic approaches to preventing harassment, rather than reliance solely on reactive measures like complaint reporting and investigations.
OFCCP may be hoping that its “Promising Practices” on AI will follow a similar trajectory. Like the EEOC’s 2017 effort on harassment, OFCCP’s AI “Promising Practices” did not emerge from thin air. They reflect principles identified by various stakeholders; echo themes from the White House’s recently issued guidance to federal agencies, which emphasizes the need for “stakeholder engagement, validation and monitoring, and greater transparency” in the federal government’s own use of AI; and closely align with the NIST AI RMF, which has gained significant traction as a foundational resource for organizations seeking to manage AI risk.
Key themes and recommendations in OFCCP's “Promising Practices” include:
- Transparency and notice to employees and applicants. OFCCP advises federal contractors to provide clear notice about their use of AI, what data is being collected, and how employment decisions are being made.
- Stakeholder engagement. OFCCP advises federal contractors to consult with employees and/or their representatives in the design and deployment of AI systems.
- Fairness assessments and ongoing monitoring. OFCCP advises federal contractors to analyze AI tools for discriminatory impact before use, at regular intervals during use, and after use, and, where issues are identified, to take steps to reduce adverse impact or consider alternative practices. While some of this advice may sound obvious to those experienced in AI risk management, OFCCP’s inclusion of these concepts underscores their importance. OFCCP also advises employers to ensure meaningful human oversight of AI-based decisions. Although OFCCP does not explain how it defines “meaningful human oversight,” the inclusion of this concept invites contractors and employers to examine how human oversight mechanisms may or may not be present in their existing AI processes.
- Documentation and recordkeeping. OFCCP advises federal contractors to retain information about the data used to develop AI systems, the basis for decision-making, and the results of any fairness assessments or monitoring (a minimal sketch of how such monitoring results might be logged appears after this list).
- Vendor management. OFCCP advises federal contractors to carefully vet AI vendors. Among other things, OFCCP’s “Promising Practices” discuss the desirability of verifying the source and quality of the data being collected and analyzed by the AI system, the vendor’s data protection and privacy policies, and critical information about the vendor’s algorithmic decision-making tool, such as the screening criteria, the predictive nature of the system, and any differences between the data used for training and validation and the actual candidate pool. OFCCP also discusses the desirability of having access to the results of any assessment of system bias, debiasing efforts, and/or any study of system fairness conducted by the vendor, and cautions that relying on vendor assurances alone is not enough, emphasizing the importance of being able to verify key information about the vendor’s practices.
- Accessibility for individuals with disabilities. OFCCP advises federal contractors to ensure AI systems are accessible, allow for reasonable accommodation requests, and accurately measure job-related skills regardless of disability.
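As one way to operationalize the monitoring and recordkeeping practices above, the sketch below recomputes impact ratios on a recurring schedule and appends each result to a dated audit log. Everything here, including the function names, the JSONL storage format, and the use of the four-fifths threshold as a flag, is an illustrative assumption rather than anything OFCCP prescribes.

```python
# Illustrative sketch of periodic fairness monitoring with recordkeeping,
# per the "fairness assessments" and "documentation" practices above.
# Names, thresholds, and the storage format are hypothetical choices.
import json
from datetime import date

def assess(counts: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Impact ratio of each group's selection rate vs. the highest group's."""
    rates = {g: sel / total for g, (sel, total) in counts.items()}
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

def record_assessment(tool_name: str, counts: dict[str, tuple[int, int]],
                      path: str = "ai_monitoring_log.jsonl") -> dict:
    """Append a dated assessment entry so results are retained for audits."""
    ratios = assess(counts)
    entry = {
        "date": date.today().isoformat(),
        "tool": tool_name,
        "impact_ratios": ratios,
        "flagged": [g for g, r in ratios.items() if r < 0.80],
    }
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

# Hypothetical quarterly run for a resume-screening tool.
result = record_assessment("resume_screen_v2", {"Group A": (48, 80), "Group B": (24, 60)})
print(result["flagged"])  # groups whose ratio fell below the 4/5 threshold
```

Retaining dated entries like these would give a contractor contemporaneous records to produce in an OFCCP audit, consistent with the documentation practices described above.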
As federal contractors and other employers consider whether changes to their AI risk management practices are warranted in light of OFCCP’s “Promising Practices,” they should also be mindful of legislative activity mandating or otherwise incentivizing employers to implement an AI risk management framework similar to the one described in OFCCP’s guidance. OFCCP’s “Promising Practices” are not mandatory today, but aspects of them may soon become mandatory through legislative or regulatory efforts.[4]
Implications for Employers
Federal contractors and private-sector employers should expect continued coordination between OFCCP, the EEOC, and other enforcement agencies on AI issues. While OFCCP’s April 29 AI guidance is addressed to federal contractors, the “Promising Practices” reflect progress toward regulatory consensus on AI risk management, and their issuance invites both federal contractors and other employers to evaluate their existing AI risk management practices and to consider whether further proactive measures may be warranted or desirable.
We will continue to monitor these developing issues. For additional information, we encourage you to contact the authors of this article, a member of Seyfarth’s People Analytics team, or any of Seyfarth’s attorneys.
[1] The Department of Labor is one of several federal agencies directed by President Biden to issue AI-related guidance, and the White House’s April 29 statement summarizes the federal government’s progress towards meeting the various deadlines set forth in President Biden’s executive order. Seyfarth attorneys have written about the Wage Hour Division’s Field Assistance Bulletin addressing the application of the Fair Labor Standards Act to use of artificial intelligence and other automated systems in the workplace, also issued on April 29. See https://www.wagehourlitigation.com/2024/05/dol-issues-guidance-on-wage-hour-risk-posed-by-artificial-intelligence/
[2] See https://www.dol.gov/agencies/ofccp/faqs/employee-selection-procedures
[3] See https://www.eeoc.gov/laws/guidance/select-issues-assessing-adverse-impact-software-algorithms-and-artificial
[4] For instance, the version of Connecticut SB2 passed by the Connecticut Senate on April 24, 2024 specifically requires deployers of “high-risk” AI systems to implement a risk-management policy and program, and makes specific reference to the NIST AI RMF. Likewise, legislation introduced in California (AB 2390), New York (S5641A), Illinois (HB 5116), Rhode Island (H 7521), and Washington (HB 1951) contains similar language mandating that AI deployers implement a governance program with reasonable administrative and technical safeguards to map, measure, manage, and govern the reasonably foreseeable risks of algorithmic discrimination, concepts that are core to the NIST AI RMF. Additionally, on May 3, 2024, Colorado SB 205, a bill with similar legislative language, passed the Colorado Senate.