On October 30, 2023, President Joe Biden signed his Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (AI EO), which addresses artificial intelligence issues, including safety, security, privacy, civil rights, immigration, and health care. The White House also released a companion Fact Sheet summarizing the AI EO (the “Fact Sheet”). Later in the week, on November 1, 2023, the White House announced that the Office of Management and Budget will release for comment a new draft policy on Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence. (More on this to come.)
While the AI EO applies primarily to government agencies, it offers key insights into AI regulations that may soon be coming to the private sector, including those impacting workers and the workplace.
What Does the AI EO Require?
The AI EO regulates the government’s use and oversight of AI in the following ways, which ultimately may affect private sector employers.
Safety and Security
The AI EO directs members of the President’s Cabinet, as well as various agency heads, to take several steps to develop guidelines, standards, and best practices to preserve AI safety and security, including, but not limited to:
- Developing companion resources to the National Institute of Standards and Technology’s AI Risk Management Framework and the Secure Software Development Framework for generative AI;
- Creating guidelines for “red-teaming testing,” which the AI EO defines as a controlled test to find flaws and vulnerabilities, with which developers must comply before releasing AI products and systems; and
- Establishing a cybersecurity program to identify and fix critical software vulnerabilities.
Innovation and Competition
To promote innovation and competition in AI, Section 5 of the AI EO makes it easier to recruit overseas talent for AI-related roles by, among other things, easing visa requirements for certain highly skilled workers. The AI EO builds on a recent proposal from the Department of Homeland Security to modernize the H-1B specialty occupation worker program by streamlining eligibility requirements, improving program efficiency, providing greater benefits and flexibilities for employers and workers, and strengthening integrity measures.
Supporting Workers
Section 6 of the AI EO relates to the impact that AI may have on the US workforce. The AI EO focuses on evaluating the potential for worker displacement and supporting workers in finding new employment.
The AI EO directs the Secretary of Labor to “develop and publish principles and best practices for employers that could be used to mitigate AI’s potential harms to employees’ well-being and maximize its potential benefits,” specifically addressing, at minimum, job-displacement risks, labor standards and job quality, and worker privacy. Agency heads will be encouraged to incorporate these guidelines into their programs. Importantly, the Secretary of Labor must also issue guidelines making clear that employers’ use of workplace AI systems to monitor or augment work must be done consistently with the Fair Labor Standards Act (FLSA) and ensure employees get paid for all hours worked.
Equity and Civil Rights
Section 7 of the AI EO makes clear that government agencies will continue to enforce civil rights laws in all relevant areas, including in the workplace. Pursuant to the AI EO, the Attorney General must “coordinate with and support agencies in their implementation and enforcement of existing Federal laws to address civil rights and civil liberties violations and discrimination related to AI.” Employers should expect increased collaboration between federal civil rights agencies and increased efforts by these agencies to prevent, address, and reduce unlawful discrimination that may be caused or exacerbated by AI. Indeed, federal agencies already have begun collaborating on such issues, exemplified by the Joint Statement on Enforcement Efforts Against Discrimination and Bias in Automated Systems (the “Joint Statement on Enforcement Efforts”) published this spring by the Equal Employment Opportunity Commission (EEOC), Consumer Financial Protection Bureau, Department of Justice’s Civil Rights Division, and Federal Trade Commission.
In addition, the AI EO directs the Secretary of Labor to publish guidance for federal contractors regarding nondiscrimination in hiring “involving AI and other technology-based hiring systems.” This directive aligns with recent changes to the Office of Federal Contract Compliance Programs’ (OFCCP) Combined Scheduling Letter and Itemized Listing (the “Scheduling Letter”), which the OFCCP uses to conduct audits of federal contractors. As of August 24, 2023, Item #21 of the Scheduling Letter requires federal contractors to identify and provide information and documentation regarding their “policies, practices, or systems used to recruit, screen, and hire, including the use of artificial intelligence, algorithms, automated systems or other technology-based selection procedures.”
Privacy
Section 9 of the AI EO directs relevant departments to establish guidelines and protocols for the collection and use of consumer data and methods to safeguard Americans’ privacy from the potential threats exacerbated by AI. As the Fact Sheet summarizes, the AI EO specifically directs the government to prioritize and strengthen federal investment in privacy-preserving techniques and tools, evaluate how agencies collect and use consumer information, and develop guidelines for agencies to analyze their use of privacy-preserving techniques.
What Does the AI EO Mean for Private Sector Employers?
Although the AI EO does not bind the private sector, AI vendors wishing to contract with the federal government will nonetheless need to ensure that their products and systems comply with the AI EO’s evaluation requirements, especially as they relate to safety and security. In this way, the AI EO may indirectly establish industry “best practices” based on the government’s purchasing power. Moreover, the AI EO’s directive to develop consumer privacy-protecting techniques and tools could signal future regulation requiring private employers and/or the creators of AI tools to preserve workers’ privacy and personally identifiable data.
Further, the AI EO represents the federal government’s most significant attempt to date at regulating AI and serves as evidence that further regulation is likely forthcoming. Rather than asking if AI regulation will come to private sector employers, companies should instead focus on when, from whom, and how this regulation will occur. To help answer these questions, below, we summarize some of the significant efforts by the Biden-Harris administration, U.S. Congress, and state and local governments to regulate AI – both its content and its use – in the private sector.
The Biden-Harris Administration
Before October 30, the White House’s attempts to regulate AI centered on voluntary compliance, including securing commitments from fifteen leading companies in the AI space to take steps to advance “responsible AI” by ensuring that their products are safe, secure, and developed with the public’s trust. Earlier this year, the White House also introduced a “Blueprint for an AI Bill of Rights,” which, through non-binding guidance, outlines five principles and associated practices to guide the “design, use, and deployment” of AI. Additionally, the White House Office of Science and Technology Policy (OSTP) issued a Request for Information on Automated Worker Surveillance and Management in the spring of 2023. The OSTP information request sought comments from the public regarding how employers use automated surveillance systems to monitor, manage, and evaluate workers, including their prevalence, purposes, deployment, impact, and potential violation of labor and employment laws.
At the agency level, however, the Biden-Harris administration has been much more active, with several agencies already publishing enforcement guidance and proposed rules regarding AI’s impact on their respective laws and missions. In addition to the multi-agency Joint Statement on Enforcement Efforts referenced above, the EEOC has also published technical assistance regarding discriminatory use of AI in violation of the Americans with Disabilities Act and Title VII of the Civil Rights Act of 1964. Indeed, as we previously reported, the agency even secured what it is calling its first AI bias-related settlement in August.
U.S. Congress
In addition to executive attention, the past year has also brought increased scrutiny of AI from Congress. While ideological differences have stalled other legislative efforts in Washington, the need for AI regulation has received bipartisan support, with members of both parties collaborating on AI legislation. Unsurprisingly, these proposals vary significantly in form and function, so it is unknown which, if any, Congress ultimately may enact.
Some proposals focus on developing a comprehensive AI regulatory framework. For example, the SAFE Innovation Framework for Artificial Intelligence (the “SAFE Innovation Framework”) seeks to use a series of “AI Insight Forums” to bring together legislators and AI experts to develop a regulatory framework founded on five guiding principles: security, accountability, alignment with democratic values, transparency, and innovation. No specific pieces of legislation have yet been introduced under the SAFE Innovation Framework, but that may change after the first forum occurred this fall. Senators have also proposed a comprehensive AI regulatory framework. Unlike the SAFE Innovation Framework, their proposal identifies specific action items rather than guiding principles. Included among the action items are the establishment of an independent oversight body with which AI companies must register certain types of products, as well as a requirement that AI companies disclose specific information regarding their products’ safety, accuracy, and limitations.
With respect to the workplace, four Senators have already introduced the No Robot Bosses Act, which, as we previously reported, aims to regulate employers’ use of “automated decision systems,” including by requiring employers to test and validate such systems before use and prohibiting employers from relying exclusively on these systems to make employment-related decisions. The No Robot Bosses Act is similar to, but broader than, AI regulations currently in place in New York City (discussed below). In addition, as noted above, employers that are federal contractors already have an obligation to disclose information to the OFCCP regarding their use of AI for recruiting and hiring.
State and Local Governments
Even without federal congressional action, many companies are already – or soon may be – subject to AI regulations at the state or local level. One of the more notable regulations, New York City’s Local Law 144, took effect earlier this year. Like the federal No Robot Bosses Act, Local Law 144 regulates employers’ use of automated employment decision tools (AEDT) to screen a candidate or employee for an employment decision. Under the law, employers must subject an AEDT to a bias audit within one year of the tool’s use, make information about the bias audit publicly available, and provide notice of the use of the AEDT to employees and job candidates. In addition to New York City, several other states and state regulators are considering implementing similar regulations, including the California Civil Rights Council, California Privacy Protection Agency, and the New York State legislature.
What Should Private Companies Do Now?
Just as AI technology continues to rapidly change, so, too, will its potential regulation. It is vital that companies understand the current legal landscape – especially federal contractors, for whom this includes the new AI EO. Companies should also have systems in place to monitor and respond to developments at the federal, state, local, and even international level, as the AI EO also directs the federal government to collaborate with international partners to develop and implement global AI standards. This will be especially important as the European Union finalizes its efforts to enact the world’s first comprehensive AI legal framework in the next year or so. Epstein Becker & Green, P.C.’s dedicated AI Team is ready to assist throughout this process.
For further information about the AI EO’s impact on the health care industry, please refer to White House Executive Order on Artificial Intelligence: Implications for the Health Care and Life Sciences Industries.
To learn more about EBG’s AI services, please click here.