Prompted by the widespread adoption of video-conferencing software during the COVID-19 pandemic, many employers have shifted toward video interviews to evaluate potential hires. Even as employers have begun to require in-office attendance, video interviewing remains widespread because it offers a convenient and efficient way to evaluate applicants. Some of the video interviewing tools used by employers incorporate artificial intelligence (AI) in an effort to maximize the effectiveness of the interview process. Often, employers contract with third-party vendors to provide these AI-powered interviewing tools, as well as other tech-enhanced selection procedures.
While these AI-powered video interviewing tools offer the promise of optimizing recruitment and selection efforts, they can raise a host of legal issues, including questions about hidden biases, disparate impact, disability discrimination, and data privacy. Although no federal law expressly regulates the use of AI in employment decisions, at a recent event entitled “Initiative on AI and Algorithmic Fairness: Disability-Focused Listening Session,” U.S. Equal Employment Opportunity Commission Chair Charlotte Burrows expressed concerns about the use of video interview AI technology, noting, for example, that such technology may inappropriately screen out individuals with speech impediments. The same concerns apply to individuals with visible disabilities or disabilities that affect their movements. Shortly thereafter, the EEOC released technical guidance on “The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees.” Legislative bodies in Illinois, Maryland, and New York City have taken a more active approach, passing laws that directly affect the use of AI-powered video interview and facial recognition software.
Use of AI Video Interviewing Software in Practice
Consider the following example:
A technology company with offices across the country, including New York City and Los Angeles, contracts with a third-party vendor to help screen potential candidates for employment. As part of the interview process, the third-party vendor uses proprietary software, marketed as being powered by artificial intelligence, to generate a numerical score based on the candidate’s voice, facial expressions, and word choices. At the beginning of each interview, a representative from the technology company’s HR department discloses that the interview will be video recorded and analyzed by an automated employment decision tool, and gives the candidate the option to opt out of the use of this software. The HR representative also explains that the software is subjected to a rigorous bias audit every year, the results of which are published on the third-party vendor’s website.
What legal issues could this present?
State and local legislative bodies have taken the lead in imposing explainability and transparency requirements on employers. In this example, the technology company should consider the laws and regulations applicable to New York City and Los Angeles employers in determining whether it has any obligation to provide notice to candidates about the nature of the AI-powered video interviewing tool that will be used. The company should also be aware of the current laws in Illinois and Maryland if it hires candidates into locations in either state.
New York City. The New York City Council recently passed a local law governing the use of Automated Employment Decision Tools (AEDT), which goes into effect on January 1, 2023. See N.Y.C. Admin. Code Title 20, Chap. 5, Sub. Ch. 25, § 20-870. Among other things, N.Y.C.’s AEDT law makes it unlawful for an employer to use an automated employment decision tool[1] to screen a candidate for employment unless the tool has been the subject of a bias audit no more than one year prior to its use, and a summary of the results of that bias audit is made publicly available. Additionally, employers must provide notice to the candidate, no less than ten business days before the interview, disclosing the “job qualifications and characteristics that such automated employment decision tool will use in the assessment” of the candidate, and providing the candidate with the opportunity to request an alternative selection process or accommodation. In the above example, the technology company did not provide the required disclosure under the N.Y.C. AEDT law: the notice came at the start of the interview itself, not ten business days beforehand.
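The AEDT law does not prescribe the mechanics of a “bias audit,” but at its core such an audit tests whether a tool’s outputs differ across demographic categories. The following is a minimal sketch of what that disparate-impact arithmetic might look like, assuming hypothetical screening outcomes and using the familiar four-fifths rule of thumb as a flagging threshold; the data, category names, and threshold shown here are illustrative assumptions, not requirements drawn from the statute.

```python
# Illustrative only: a minimal disparate-impact calculation of the kind a
# bias audit might include. The candidate data and the 80% (four-fifths)
# threshold are assumptions, not mandates of the N.Y.C. AEDT law.
from collections import defaultdict

# Hypothetical (category, selected) outcomes from an AEDT's screening decisions.
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

counts = defaultdict(lambda: [0, 0])  # category -> [selected, total]
for category, selected in outcomes:
    counts[category][0] += int(selected)
    counts[category][1] += 1

# Selection rate per category, and each category's impact ratio relative
# to the category with the highest selection rate.
rates = {cat: sel / total for cat, (sel, total) in counts.items()}
best = max(rates.values())
for cat, rate in sorted(rates.items()):
    ratio = rate / best
    flag = "review" if ratio < 0.8 else "ok"  # four-fifths rule of thumb
    print(f"{cat}: selection rate {rate:.2f}, impact ratio {ratio:.2f} ({flag})")
```

In practice, an independent auditor would run this type of analysis across all relevant demographic categories and publish a summary of the results, much as the vendor in the example above claims to do.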
California. California’s Fair Employment & Housing Council has proposed draft regulations applicable to automated-decision systems, which, in their current draft form, apply to “algorithms that employ face and/or voice recognition to analyze facial expressions, word choices, and voices.” The draft regulations would incorporate automated-decision systems into California’s existing regulations governing discriminatory hiring practices under the Fair Employment & Housing Act, making it unlawful for an employer to use automated-decision systems that “screen out or tend to screen out an applicant” on the basis of a protected characteristic unless the “selection criteria…are shown to be job-related for the position and are consistent with business necessity.” The regulations are currently in the pre-rulemaking phase, and the California Department of Fair Employment and Housing has not yet set a timeframe for adopting them. Notably, the regulations, as drafted, do not contain an express requirement to provide notice to candidates of the use of AI, or to explain how the AI works.
Illinois. The Illinois Artificial Intelligence Video Interview Act (AIVI Act) requires an employer to provide notice to the applicant, explain how the AI works and what general types of characteristics it uses to evaluate applicants, and obtain the applicant’s consent prior to the interview. See 820 ILCS 42.
Illinois employers using video interviewing technology that scans faces or collects other biometric data must also be aware of the Biometric Information Privacy Act (BIPA), which requires employers to provide notice and obtain consent before collecting biometric data, including a “scan of hand or face geometry,” and which provides candidates with a private right of action. See 740 ILCS 14.
Maryland. Maryland law prohibits employers from using facial recognition technology during pre-employment job interviews without the applicant’s consent. See Md. Code, Lab. & Empl. § 3-717. When using facial recognition services in interviewing applicants, a Maryland employer must obtain the applicant’s written consent via a signed waiver that states the applicant’s name, the date of the interview, that the applicant consents to the use of facial recognition during the interview, and that the applicant has read the waiver.
EEOC Technical Guidance. In addition to the above state and local laws, employers must consider the recently released EEOC guidance on the use of software, algorithms, and AI to assess job applicants and employees. The guidance contains a list of “promising practices” for employers to consider, including the following recommendations for addressing explainability and transparency:
- informing all candidates that reasonable accommodations are available for individuals with disabilities, and providing clear and accessible instructions for requesting such accommodations; and
- describing, in plain language and accessible formats, the traits the tech-enabled tool is designed to assess, the method by which those traits will be assessed, and the variables or factors that may affect the assessment or rating.
What should employers prepare for?
As more employers use AI-powered tools, including video interviewing tools, in their hiring practices, they should expect increased scrutiny in this area from federal, state, and local regulators and legislators. Using tools that can be explained to the candidates being evaluated, and being transparent about how the tools will be used, will not only help employers comply with applicable laws and regulations but also build credibility with candidates and regulators. To that end, employers should perform due diligence on the software companies offering AI-powered tools, familiarize themselves with the software and how it works, and carefully craft notices that give candidates sufficient information to understand the evaluation process.
We will discuss these issues in more detail during our upcoming virtual briefing on Explainable Artificial Intelligence: Legal Risks and Remedies for the “Black Box” Problem on June 9, 2022, from 1:00 – 4:00 p.m. (ET). To register, please click here.
*******
[1] The term “automated employment decision tool” is defined to include any computational process, derived from machine learning, statistical modeling, data analytics, or artificial intelligence, that issues simplified output, including a score, classification, or recommendation, that is used to substantially assist or replace discretionary decision making for making employment decisions that impact natural persons.