Recruiting qualified applicants and hiring top talent have always been time-consuming endeavors that come with constant worry about making a wrong hire. Added to this, the COVID-19 pandemic effectively put a halt to employers’ ability to evaluate applicants in person. These factors, and others, have led many employers to adopt, or consider adopting, artificial intelligence (AI) tools to optimize recruitment by introducing efficiencies, reaching a broader pool of applicants, increasing consistency and uniformity in the evaluation of applicants, and, in some cases, helping employers meet diversity, equity, and inclusion goals. Typically, employers opting to use AI contract with third-party vendors that offer AI-powered algorithms, which perform a variety of functions, such as cognitive assessments, personality tests, and video interviews.

What does this “optimization” of talent acquisition look like in practice?

Consider the following hypothetical example:

A technology company uses a cognitive assessment test purchased from a vendor to screen potential candidates. The test is accessible only through a mobile device app. Historical data suggests that women, on average, score lower in certain attributes that are important to the technology company. An algorithm considers all of the information about an applicant, including their scores on that test, and recommends which candidates should be interviewed in person. The algorithm was trained on internal company data – including resumes, application information, performance reviews, and cognitive assessment test results – from a group of incumbent employees that the company has identified as high performers. The vendor has provided the company with case studies from other clients and validation analyses to demonstrate the algorithm’s effectiveness and fairness.

What legal issues could this present?

Employers investing in AI to assist in recruiting and hiring should be careful not to simply “plug and play.” Although no federal laws expressly regulate the use of AI in employment decisions, its use is likely subject to several existing statutes, particularly laws against discrimination. Concerns about potential discriminatory bias in recruitment AI have caught the attention of federal regulators. For example, on December 8, 2020, a group of ten U.S. Senators (including then-Senator and current Vice President Kamala Harris) sent a letter to the Equal Employment Opportunity Commission asking for more information about the EEOC’s oversight authority over hiring technologies. While recognizing that hiring technologies may reduce the role of individual hiring managers’ biases, the senators expressed concern that AI “can also reproduce and deepen systemic patterns of discrimination reflected in today’s workforce data.” The senators asked, among other things, whether the EEOC has ever used its authority to investigate or bring enforcement actions concerning discrimination related to the use of hiring technologies, and whether the EEOC intends to use its authority to study and investigate the development, design, use, and impacts of hiring technologies.

In addition, on January 1, 2021, Congress passed the National Defense Authorization Act, which included the National Artificial Intelligence Initiative Act of 2020 (NAIIA). The NAIIA will fund educational and training programs to prepare the workforce to interact with AI systems, and will require governmental agencies to reach out to diverse stakeholders, including civil rights and disability rights organizations, for input on AI initiatives. Meanwhile, in the absence of specific federal legislation governing the use of hiring technologies, several state and local governments are taking up the issue, with some, such as Illinois, having already passed laws addressing the use of AI in hiring decisions.

What should employers prepare for?

Given the increased use of hiring technologies and rising concerns that their use may perpetuate discriminatory hiring, employers should expect increased scrutiny in this area from federal and state regulators and legislators. In the above example, the technology company using a cognitive assessment test to screen applicants should carefully consider the test’s impact on protected groups. The algorithm, after all, was trained on internal company data suggesting that women, on average, score lower than men in attributes important to the company. Carefully crafting the job assessment so that only key knowledge, skills, and abilities are taken into account, and considering potential reasons for observed disparities (e.g., comfort in using a mobile app to take a test), are some of the ways the company could address this issue.
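One concrete way to examine a test’s impact on protected groups is the “four-fifths” (80 percent) rule drawn from the Uniform Guidelines on Employee Selection Procedures: if a group’s selection rate is less than four-fifths of the rate for the most-selected group, federal enforcement agencies will generally treat that as evidence of adverse impact. The sketch below is a minimal illustration of that check using entirely hypothetical applicant and pass counts; it is not a substitute for a professionally conducted adverse impact analysis.

```python
# Minimal, illustrative adverse-impact check using the "four-fifths" rule.
# All applicant and pass counts below are hypothetical, for demonstration only.

def selection_rate(passed: int, applied: int) -> float:
    """Share of applicants in a group who passed the screening test."""
    return passed / applied

# Hypothetical outcomes of the cognitive assessment, broken out by group.
results = {
    "men":   {"applied": 200, "passed": 120},   # rate = 0.60
    "women": {"applied": 180, "passed": 72},    # rate = 0.40
}

rates = {group: selection_rate(v["passed"], v["applied"]) for group, v in results.items()}
highest_rate = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / highest_rate
    flag = "potential adverse impact" if impact_ratio < 0.8 else "no flag under 4/5 rule"
    print(f"{group}: selection rate {rate:.2f}, impact ratio {impact_ratio:.2f} -> {flag}")
```

In this hypothetical, women’s selection rate is only about two-thirds of men’s, which would put the company on notice to investigate the test further; in practice, regulators and courts may also consider statistical and practical significance, not just the 80 percent threshold.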

Relatedly, the company should also consider the test’s accessibility for persons with disabilities and others. Will people with physical and mental disabilities be able to take the test and, if not, what reasonable accommodations might be offered? The fact that applicants can access the test only through a mobile device app may also negatively affect people with disabilities, as well as senior citizens or low-income applicants who do not own mobile devices.

In addition, the company cannot expect to validate its use of the test through case studies and validation analyses from the vendor’s other customers. Under the Uniform Guidelines on Employee Selection Procedures, which apply to all selection procedures used to make employment decisions (such as the cognitive assessment test in the hypothetical), an employer must conduct a validity study before continuing to use any test that has an adverse impact on a protected group. Validity studies conducted on the employer’s own workforce are preferable, because those studies assess the accuracy and fairness of the test as it relates to the employer’s own applicants.
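To give a flavor of what such a study involves, the sketch below (again using invented numbers) computes a simple criterion-related validity coefficient: the correlation between incumbent employees’ assessment scores and a job performance criterion, such as supervisor ratings. A real validity study under the Uniform Guidelines requires far more, including a proper job analysis, an adequate and representative sample, and tests of statistical significance.

```python
# Illustrative criterion-related validity check with hypothetical data:
# how strongly do assessment scores track job performance for the
# employer's own incumbent employees?
import math

def pearson(xs: list[float], ys: list[float]) -> float:
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Hypothetical assessment scores and supervisor performance ratings
# for a small sample of incumbent employees.
test_scores = [62, 75, 81, 58, 90, 70, 66, 85]
performance = [3.1, 3.6, 4.0, 2.9, 4.4, 3.4, 3.2, 4.1]

r = pearson(test_scores, performance)
print(f"Observed validity coefficient (Pearson r): {r:.2f}")
```

A higher coefficient suggests the test is more predictive of job performance for the employer’s own workforce, which is precisely the kind of evidence vendor-supplied case studies from other clients cannot supply.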

We will discuss these issues in more detail during our upcoming virtual briefing on bias in AI. To learn more about the legal risks of and solutions to bias in AI, please join us at Epstein Becker Green’s virtual briefing on Bias in Artificial Intelligence: Legal Risks and Solutions on March 23 from 1:00 – 4:00 p.m. (ET). [Update: For questions about the briefing materials, please contact Matt Loomis or Dionna Rinaldi.]

 
