The widespread availability of Artificial Intelligence (AI) tools has enabled the growing use of “deepfakes,” whereby the human voice and likeness can be replicated seamlessly such that impersonations are impossible to detect with the naked eye (or ear). These deepfakes pose substantial new risks for commercial organizations. For example, deepfakes can threaten an organization’s brand, impersonate leaders and financial officers, and enable access to networks, communications, and sensitive information.
In 2023, the National Security Agency (NSA), Federal Bureau of Investigation (FBI), and Cybersecurity and Infrastructure Security Agency (CISA) released a Cybersecurity Information Sheet (the “Joint CSI”) entitled “Contextualizing Deepfake Threats to Organizations,” which outlines the risks to organizations posed by deepfakes and recommends steps that organizations, including national critical infrastructure companies (such as financial services, energy, healthcare, and manufacturing organizations), can take to protect themselves. Loosely defining deepfakes as “multimedia that have either been created (fully synthetic) or edited (partially synthetic) using some form of machine/deep learning (artificial intelligence),” the Joint CSI cautioned that the “market is now flooded with free, easily accessible tools” such that “fakes can be produced in a fraction of the time with limited or no technical expertise.” Thus, deepfake perpetrators could be mere amateur mischief makers or savvy, experienced cybercriminals.
On August 9, 2024, Illinois Governor J.B. Pritzker signed HB 3773 into law, amending the Illinois Human Rights Act (IHRA) to expressly regulate the use of artificial intelligence (AI) in employment decisions. HB 3773 is the second Illinois law regulating workplace AI. As we previously reported, in August 2019, Illinois enacted the first-of-its-kind statute, the Artificial Intelligence Video Interview Act (AIVIA), which requires employers that use AI-enabled video interviewing technology to provide applicants advance notice of the use of AI and information regarding how the AI works and the characteristics it evaluates, and to obtain applicants’ prior consent. And, while not necessarily directed exclusively at workplace AI tools, as we also previously reported, an employer’s use of AI-powered facial expression and screening technology could also implicate the requirements of the Illinois Biometric Information Privacy Act (BIPA).
HB 3773 has a potentially broader application than either AIVIA or BIPA. HB 3773 provides two new definitions:
Artificial Intelligence
A machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
Artificial intelligence also includes generative artificial intelligence.
Generative Artificial Intelligence
An automated computing system that, when prompted with human prompts, descriptions, or queries, can produce outputs that simulate human-produced content, including, but not limited to, the following:
On July 11, 2024, after considering comments from insurers, trade associations, advisory firms, universities, and other stakeholders, the New York State Department of Financial Services (NYSDFS) issued its Final Circular Letter regarding the “Use of Artificial Intelligence Systems and External Consumer Data and Information Sources in Insurance Underwriting and Pricing” (the “Final Letter”). By way of background, NYSDFS published its Proposed Circular Letter (“Proposed Letter”) on the subject in January 2024. As we noted in our February blog, the Proposed Letter called on insurers and others in the state of New York that use external consumer data and information sources (“ECDIS”) and artificial intelligence systems (“AIS”) to assess and mitigate bias, inequality, and discriminatory decision making or other adverse effects in the underwriting and pricing of insurance policies. While NYSDFS recognized the value of ECDIS and AI in simplifying and expediting the insurance underwriting process, the agency—following current trends—wanted to mitigate the potential for harm.
And if the opening section of the Final Letter is any indication, the agency did not back down. It continued to insist, for example, that senior management and boards of directors “have a responsibility for the overall outcomes of the use of ECDIS and AIS”; and that insurers should conduct “appropriate due diligence and oversight” with respect to third-party vendors. NYSDFS declined to define “unfair discrimination” or “unlawful discrimination,” noting that those definitions may be found in various state and federal laws dealing with insurance and insurers.
On July 12, 2024, in a keenly awaited decision, the U.S. District Court for the Northern District of California determined that Workday, Inc. (“Workday”), a provider of AI-infused human resources (HR) software, can be held liable under Title VII of the Civil Rights Act of 1964 (Title VII), the Age Discrimination in Employment Act of 1967 (ADEA), and the Americans with Disabilities Act (ADA) (collectively the “Anti-Discrimination Laws”) as an agent of the corporate clients that hire Workday to screen and source candidates for employment by utilizing its AI-infused decision-making tools. In noting that “[d]rawing an artificial distinction between software decisionmakers and human decisionmakers would potentially gut anti-discrimination laws in the modern era,” the court underscored the EEOC’s admonition, which we discussed in our previous post, that employers delegating their hiring protocols to AI must do so cognizant of the potential discriminatory impacts of such use. See Opinion at 10. Thus, the court allowed plaintiff Derek Mobley’s disparate impact claim to proceed, finding that Mobley’s allegations supported a plausible inference that Workday’s screening algorithms automatically rejected his applications based on protected characteristics rather than his qualifications.
Prior Proceedings
Mobley filed his initial complaint as a putative class action on February 21, 2023, alleging claims against Workday as an “employment agency” for disparate impact and intentional discrimination under the Anti-Discrimination Laws. His complaint centered on his allegation that he applied for “at least 80-100 positions that upon information and belief use Workday, Inc. as a screening tool for talent acquisition and/or hiring” and “has been denied employment each and every time.” Complaint at 10.
The past several years have witnessed a notable uptick in legislation and agency enforcement attention directed at workplace artificial intelligence, specifically the infusion of AI or so-called automated decision-making tools. Colorado’s new Artificial Intelligence Act, for example, designates employment as a “high-risk” sector of AI applications and has heightened the concerns of lawmakers and corporate executives. Lawsuits such as Mobley v. Workday and Moffatt v. Air Canada underscore concerns about employment candidate screening, recruitment, and conversational AI. Most recently, the US Equal Employment Opportunity Commission issued a Determination finding cause to believe an employer violated the Older Workers Benefit Protection Act by using AI in a reduction in force that adversely impacted older workers. A complaint in the Southern District of New York against IBM and its spinoff technology company, Kyndryl, promptly followed.
Perhaps not surprisingly, over the past few years, the State of New York (“NYS”), following the lead of New York City, has introduced several bills that would regulate the use of AI-infused decision-making tools. One such bill, the New York Workforce Stabilization Act (“NYWFSA”), was introduced in May 2024 by Senators Michelle Hinchey and Kristen Gonzalez. They will likely re-introduce the NYWFSA during the upcoming January 2025 legislative session, intending to “stabilize” New York’s labor market at a time when the deployment of AI may fundamentally alter the New York industrial landscape.
The Department of Labor’s (DOL) May 16, 2024 guidance, Artificial Intelligence and Worker Well-Being: Principles for Developers and Employers, published in response to the mandates of Executive Order 14110 (EO 14110) (Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence), weighs the benefits and risks of an AI-augmented workplace and establishes Principles that endeavor to ensure the responsible and transparent use of AI. The DOL’s publication of these Principles follows in the footsteps of the EEOC’s and the OFCCP’s recent guidance on AI in the workplace and mirrors, in significant respects, the letter and spirit of their pronouncements.
While not “exhaustive,” the Principles “should be considered during the whole lifecycle of AI” from “design to development, testing, training, deployment and use, oversight, and auditing.” Although the DOL intends the Principles to apply to all business sectors, the guidance notes that not all Principles will apply to the same extent in every industry or workplace, and thus they should be reviewed and customized based on organizational context and input from workers.
While not defined in the Principles, EO 14110 defines artificial intelligence as set forth in 15 U.S.C. 9401(3): “A machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. Artificial intelligence systems use machine- and human-based inputs to perceive real and virtual environments; abstract such perceptions into models through analysis in an automated manner; and use model inference to formulate options for information or action.”
In line with the mandates of President Biden’s Executive Order 14110, entitled “The Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” and its call for a coordinated U.S. government approach to ensure responsible and safe development and use of artificial intelligence (AI) systems, the Office of Federal Contract Compliance Programs (OFCCP) has published a Guide addressing federal contractors’ use of AI in the context of Equal Employment Opportunity (EEO).
As discussed below, the Guide comprises a set of common questions and answers about the intersection of AI and EEO, as well as so-called “promising practices” federal contractors should consider implementing in the development and deployment of AI in the EEO context. In addition, the new OFCCP “landing page” in which the new Guide appears includes a Joint Statement signed by nine other federal agencies and the OFCCP articulating their joint commitment to protect the public from unlawful bias in the use of AI and automated systems.
In response to President Biden’s Executive Order 14110 calling for a coordinated U.S. government approach to ensuring the responsible and safe development and use of AI, the U.S. Department of Labor Wage and Hour Division (WHD) issued Field Assistance Bulletin No. 2024-1 (the “Bulletin”). This Bulletin, published on April 29, 2024, provides guidance on the application of the Fair Labor Standards Act (FLSA) and other federal labor standards in the context of increasing use of artificial intelligence (AI) and automated systems in the workplace.
Importantly, reinforcing the DOL’s position expressed in the Joint Statement on Enforcement of Civil Rights, Fair Competition, Consumer Protection, and Equal Opportunity Laws in Automated Systems, the WHD confirms that the historical federal laws enforced by the WHD will continue to apply to new technological innovations, such as workplace AI. The WHD also notes that, although AI and automated systems may streamline tasks for employers, improve workplace efficiency and safety, and enhance workforce accountability, implementation of such tools without responsible human oversight may pose potential compliance challenges.
The Bulletin discusses multiple ways in which AI interacts with the Fair Labor Standards Act (“FLSA”), the Family and Medical Leave Act (“FMLA”), the Providing Urgent Maternal Protections for Nursing Mothers Act (“PUMP Act”), and the Employee Polygraph Protection Act (“EPPA”). The Bulletin makes the following pronouncements regarding the potential compliance issues that may arise due to the use of AI to perform wage-and-hour tasks:
On May 17, 2024, Colorado Governor Jared Polis signed into law SB 24-205—concerning consumer protections in interactions with artificial intelligence systems—after the Senate passed the bill on May 3, and the House of Representatives passed the bill on May 8. In a letter to the Colorado General Assembly, Governor Polis noted that he signed the bill into law with reservations, hoping to further the conversation on artificial intelligence (AI) and urging lawmakers to “significantly improve” on the law before it takes effect.
SB 24-205 will become effective on February 1 ...
Is the developer of an AI resume-screening tool an “employment agency” or “agent” subject to liability under Title VII of the Civil Rights Act for its customers’ allegedly discriminatory employment decisions? According to the United States Equal Employment Opportunity Commission (“EEOC”), the answer is yes. On April 9, 2024, the EEOC filed a motion for leave to file a brief as amicus curiae, together with a brief, in Mobley v. Workday, Inc., Case No. 3:23-cv-00770-RFL, to support plaintiff Derek Mobley’s (“Mobley”) motion to dismiss.
The EEOC’s action is ...
As featured in #WorkforceWednesday: This week, we’re breaking down the U.S. Supreme Court’s (SCOTUS’s) new workplace discrimination decision, the Equal Employment Opportunity Commission’s (EEOC’s) final rule on the Pregnant Workers Fairness Act (PWFA), and how recent artificial intelligence (AI) hiring tools have violated federal anti-bias laws.
The U.S. Court of Appeals for the Eleventh Circuit recently weighed in on the circuit-splitting debate over the proper causation standard for Family and Medical Leave Act (“FMLA”) retaliation claims. In a win for employers, the Eleventh Circuit held that the proper standard is the heightened “but-for” causation standard, rather than the “motivating factor” causation standard, leading it to affirm the district court’s grant of summary judgment in favor of defendant Walgreen Co. (“Walgreens”) against plaintiff Doris Lapham (“Lapham”) on her FMLA ...
Since the dawn of digitalization, the collection and retention of personal and other business confidential data by employers has implicated security and privacy challenges—by amassing a treasure trove of data for bad actors (or unwitting/unauthorized employees) and drawing a roadmap for those seeking to breach the system. Adding artificial intelligence (AI) into the mix creates further areas of concern. A recent survey undertaken by the Society for Human Resource Management of more than 2,000 human resources professionals indicates that AI is being utilized by the majority of ...
A recent decision from the Northern District of Illinois highlights new legal hurdles for employers using AI-powered video interview technologies under Illinois’ Biometric Information Privacy Act (BIPA), 740 ILCS 14/15. In Deyerler v. HireVue, initially filed over two years ago in January 2022, a class of plaintiffs alleged that HireVue’s AI-powered facial expression and screening technology violated BIPA. According to the complaint, HireVue collected, used, disclosed, and profited from “biometric identifiers” without complying with the requirements of BIPA. ...
On December 11, 2023, the City of San Francisco released the San Francisco Generative AI Guidelines (“Guidelines”). The Guidelines set forth parameters for City employees, contractors, consultants, volunteers, and vendors who use generative artificial intelligence (AI) tools to perform work on behalf of the City.
Specifically, the Guidelines encourage City employees, contractors, consultants, volunteers, and vendors to use generative AI tools for purposes such as preparing initial drafts of documents, “translating” text into levels of formality or for a ...
As the implementation and integration of artificial intelligence and machine learning tools (AI) continue to affect nearly every industry, concerns over AI’s potentially discriminatory effects in the use of these tools continue to grow. The need for ethical, trustworthy, explainable, and transparent AI systems is gaining momentum and recognition among state and local regulatory agencies—and the insurance industry has not escaped their notice.
On January 17, 2024, the New York State Department of Financial Services (“NYSDFS”) took a further step towards imposing ...
As featured in #WorkforceWednesday: This week, we’re detailing the National Labor Relations Board’s (NLRB’s) expanded “joint employer” definition, the recent confirmations of the Equal Employment Opportunity Commission’s (EEOC’s) General Counsel and the Department of Labor’s (DOL’s) Wage and Hour Administrator, and President Biden’s executive order on artificial intelligence (AI).
NLRB Expands Definition of “Joint Employer”
The NLRB recently published its long-awaited final rule, setting a new test for determining joint-employer ...
On October 30, 2023, President Joe Biden signed his Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (AI EO), which addresses artificial intelligence issues, including safety, security, privacy, civil rights, immigration, and health care. The White House also released a companion Fact Sheet summarizing the AI EO (the “Fact Sheet”). Later in the week, on November 1, 2023, the White House announced that the Office of Management and Budget will release for comment a new draft policy on Advancing Governance, Innovation, and ...
While recent public attention has largely focused on generative artificial intelligence (AI), the use of AI for recruitment and promotion screening in the employment context is already widespread. It can help HR professionals make sense of data as the job posting and application process is increasingly conducted online. According to a survey conducted by the Society for Human Resource Management (SHRM),[1] nearly one in four organizations use automation and/or AI to support HR-related activities, such as recruitment, hiring, and promotion decisions, and that number is poised ...
The five-member Board of the California Privacy Protection Agency (the “CPPA”) held a public meeting on September 8, 2023, to discuss a range of topics, most notably, draft regulations relating to risk assessments and cybersecurity audits. Once the regulations are finalized and approved after a formal rulemaking process, they will impose additional obligations on many businesses covered by the California Consumer Privacy Act, as amended by the California Privacy Rights Act (“CCPA”). The Board’s discussion of these draft regulations is instructive for ...
As featured in #WorkforceWednesday: This week, we’re analyzing the National Labor Relations Board’s (NLRB’s) recent Stericycle decision, the Equal Employment Opportunity Commission’s (EEOC’s) proposed rule on pregnant workers’ rights, and the EEOC’s first-ever artificial intelligence (AI) anti-discrimination lawsuit settlement.
On August 9, 2023, the U.S. Equal Employment Opportunity Commission (“EEOC”) and iTutorGroup, Inc. and related companies (collectively, “iTutorGroup”) filed a joint notice of settlement and a request for approval and execution of a consent decree, effectively settling claims that the EEOC brought last year against iTutorGroup regarding its application software. The EEOC claimed in its lawsuit that iTutorGroup violated the Age Discrimination in Employment Act (“ADEA”) by programming its application software to automatically reject hundreds of female applicants age 55 or older and male applicants age 60 or older.
After releasing an initial two-page “fact sheet,” Congress publicly posted the bill text of the No Robot Bosses Act (the “Proposed Act”), detailing proposed federal guardrails for use of automated decision-making systems in the employment context. Robert Casey (D-PA), Brian Schatz (D-HI), John Fetterman (D-PA), and Bernie Sanders (I-VT) currently cosponsor the Proposed Act.
On July 20, 2023, U.S. Senators Bob Casey (D-PA) and Brian Schatz (D-HI) introduced the “No Robot Bosses Act.” Other than bringing to mind a catchy title for a dystopian science fiction novel, the bill aims to regulate the use of “automated decision systems” throughout the employment life cycle and, as such, appears broader in scope than New York City’s Local Law 144 of 2021, about which we have previously written, and which New York City recently began enforcing. Although the text of the proposed federal legislation has not yet been widely circulated, a two-page fact sheet released by the sponsoring Senators outlines the bill’s pertinent provisions regarding an employer’s use of automated decision systems affecting employees and would:
As we previously reported, on July 5, 2023, the New York City Department of Consumer and Worker Protection (DCWP) began enforcing Local Law 144 of 2021 (the “Law”) regulating the use of automated employment decision tools (AEDT). In preparation for the July 5 enforcement date, last week, the DCWP published Frequently Asked Questions (FAQ) concerning the use of AEDTs on its fact page for the Law. The FAQ contains an overview of the Law and general information and guidance regarding bias audit requirements, data requirements, independent auditors, responsibility for bias audits, notice requirements, and complaints.
As explained in the FAQ, the Law applies to employers and employment agencies that use AEDT:
As featured in #WorkforceWednesday: This week, we analyze how employers can benefit from artificial intelligence (AI) innovations while remaining in compliance with federal regulations:
AI is evolving faster than ever before. How can employers prepare for the future of AI in the workplace? Epstein Becker Green attorneys Alexander J. Franchilli and J.T. Wilson III tell us how looming federal regulations and diversity, equity, and inclusion concerns are creating a turbulence of compliance and innovation.
Since late October 2021, when the Equal Employment Opportunity Commission (EEOC) launched its Initiative on Artificial Intelligence (AI) and Algorithmic Fairness, the agency has taken several steps to ensure AI and other emerging tools used in hiring and other employment decisions comply with federal civil rights laws that the agency enforces, including Title VII of the Civil Rights Act of 1964 (Title VII), the Age Discrimination in Employment Act (ADEA), and the Americans with Disabilities Act (ADA). Among other things, the EEOC has hosted disability-focused listening and educational sessions, published technical assistance regarding the ADA and the use of AI and other technologies, and held a public hearing to examine the use of automated systems in employment decisions.
On Thursday May 4, 2023, the Biden-Harris Administration announced its plan to implement artificial intelligence (“AI”) safeguards to “protect people’s rights and safety.”
Given the rapid development of AI in workplaces, public health, education, and security, the Administration seeks to underscore related risks and opportunities. Vice President Kamala Harris and senior Administration officials have met with leaders at the forefront of AI innovation to call attention to “responsible, trustworthy, and ethical innovation with safeguards that mitigate risk and potential harms to individuals and our society.”
On Tuesday, April 25, 2023, the Equal Employment Opportunity Commission (“EEOC”), Consumer Financial Protection Bureau (“CFPB”), Justice Department’s Civil Rights Division (“DOJ”), and the Federal Trade Commission (“FTC”) issued a “Joint Statement on Enforcement Efforts Against Discrimination and Bias in Automated Systems” (“Joint Statement”). According to a press release from the EEOC, through the Joint Statement the federal agencies pledged to uphold America’s commitment to the core principles of fairness, equality, and justice as emerging automated systems, including those sometimes marketed as “artificial intelligence,” or “AI,” become increasingly common in people’s daily lives – impacting civil rights, fair competition, consumer protection, and equal opportunity.
On April 6, 2023, the New York City Department of Consumer and Worker Protection (“DCWP”) issued a Notice of Adoption of Final Rule to implement Local Law 144 of 2021, legislation regarding automated employment decision tools (“AEDT Law”). DCWP also announced that it will begin enforcement of the AEDT Law and Final Rule on July 5, 2023. Pursuant to the AEDT Law, an employer or employment agency that uses an automated employment decision tool (“AEDT”) in NYC to screen a candidate or employee for an employment decision must subject the tool to a bias audit within one year of the tool’s use, make information about the bias audit publicly available, and provide notice of the use of the tool to employees or job candidates.
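The bias audit at the heart of the AEDT Law turns on selection rates and impact ratios across demographic categories. As a rough illustration only (hypothetical data and function names; this is not the DCWP’s full prescribed methodology), the core arithmetic can be sketched as follows:

```python
# Hedged sketch of the selection-rate / impact-ratio arithmetic that
# underlies an AEDT bias audit. The data and function names here are
# hypothetical, for illustration only.

def selection_rates(outcomes):
    """outcomes: dict mapping category -> (number selected, total applicants)."""
    return {cat: sel / tot for cat, (sel, tot) in outcomes.items()}

def impact_ratios(outcomes):
    """Each category's selection rate divided by the highest selection rate."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {cat: rate / top for cat, rate in rates.items()}

# Hypothetical audit data: Group A selected 200 of 1,000 applicants (20%),
# Group B selected 120 of 800 (15%); Group B's impact ratio is about 0.75.
data = {"Group A": (200, 1000), "Group B": (120, 800)}
print(impact_ratios(data))
```

A low impact ratio for a category is the kind of disparity a bias audit is meant to surface; what follows from such a finding is a legal and compliance question, not one the arithmetic answers.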
As featured in #WorkforceWednesday: This week, we’re taking a closer look at ChatGPT, exploring the opportunities and risks associated with this artificial intelligence (AI) technology, and providing valuable insights for employers who are looking to stay ahead of the curve:
ChatGPT is set to become the next big thing for employers and beyond. What potential issues should employers be aware of? Epstein Becker Green attorney Brian G. Cesaratto explains how critical it is for employers to think through the workplace-related risks.
As featured in #WorkforceWednesday: This week, we’re mapping out how Local Law 144 applies to every employer with employees in New York City using artificial intelligence (AI), machine learning, or a similar computational process to screen candidates for employment or employees for promotion within the city.
On January 26, 2023, the National Institute of Standards and Technology (“NIST”) released guidance entitled Artificial Intelligence Risk Management Framework (AI RMF 1.0) (the “AI RMF”), intended to help organizations and individuals in the design, development, deployment, and use of AI systems. The AI RMF, like the White House’s recently published Blueprint for an AI Bill of Rights, is not legally binding. Nevertheless, as state and local regulators begin enforcing rules governing the use of AI systems, industry professionals will likely turn to NIST’s voluntary guidance when performing risk assessments of AI systems, negotiating contracts with vendors, performing audits on AI systems, and monitoring the use of AI systems.
As we recently reported, on December 9, 2022, the New York City Department of Consumer and Worker Protection (“DCWP”) announced that it was postponing enforcement of the Automated Employment Decision Tools (“AEDT”) law until April 15, 2023, due to the high volume of public comments it received regarding its proposed rules.
As we previously noted, New York City’s Automated Employment Decision Tools Law (“AEDT Law”), regulating employers’ use of automated employment decision tools, with the aim of curbing bias in hiring and promotions, had an effective date of January 1, 2023. In late September 2022, we reported about the New York City Department of Consumer and Worker Protection (“DCWP”) issuing a Notice of Public Hearing and Opportunity to Comment on Proposed Rules related to the AEDT law. The hearing subsequently took place on November 4, 2022, and dozens of organizations and individuals submitted comments, leaving many observers wondering whether the comments would impact the quickly approaching January 1, 2023 enforcement date and how the DCWP would interpret the law.
On October 31, 2022, the General Counsel of the National Labor Relations Board (“NLRB” or “Board”) released Memorandum GC 23-02 urging the Board to interpret existing Board law to adopt a new legal framework to find electronic monitoring and automated or algorithmic management practices illegal if such monitoring or management practices interfere with protected activities under Section 7 of the National Labor Relations Act (“Act”). The Board’s General Counsel stated in the Memorandum that “[c]lose, constant surveillance and management through electronic means threaten employees’ basic ability to exercise their rights,” and urged the Board to find that an employer violates the Act where the employer’s electronic monitoring and management practices, when viewed as a whole, would tend to “interfere with or prevent a reasonable employee from engaging in activity protected by the Act.” Given that position, it appears that the General Counsel believes that nearly all electronic monitoring and automated or algorithmic management practices violate the Act.
On Tuesday October 4, 2022, the White House Office of Science and Technology Policy (“OSTP”) released a document entitled “Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People” (the “Blueprint”) together with a companion document “From Principles to Practice: A Technical Companion to the Blueprint for an AI Bill of Rights” (the “Technical Companion”).
As featured in #WorkforceWednesday: This week, we look at the trend of “quiet quitting” and the legal and technology considerations employers should weigh when navigating the issue.
On Friday, September 23, 2022, the New York City Department of Consumer and Worker Protection (“DCWP”) released a Notice of Public Hearing and Opportunity to Comment on Proposed Rules related to its Automated Employment Decision Tool law (the “AEDT Law”), which goes into effect on January 1, 2023. As we previously wrote, the City passed the AEDT Law to regulate employers’ use of automated employment decision tools, with the aim of curbing bias in hiring and promotions; as written, however, it contains many ambiguities, which has left covered employers with open questions about compliance.
Over the past several years, workplace artificial intelligence (“AI”) tools have matured from novel to mainstream. Whether facilitating attracting, screening, hiring, and onboarding job applicants or charting the career path or promotability of current employees, workplace AI tools will likely become more prevalent. Legislators and administrative agencies have taken note and are in various stages of examining and regulating these tools, with the primary goal of ensuring that they do not violate federal and state laws prohibiting workplace discrimination.
Prompted by the widespread adoption and use of video-conferencing software following the COVID-19 pandemic, many employers have shifted toward video interviews to evaluate potential hires. Even as employers have begun to require in-office attendance, the widespread use of video interviewing has continued, because it is a convenient and efficient way to evaluate applicants. Some of the video interviewing tools used by employers incorporate the use of artificial intelligence (AI) in an effort to maximize the effectiveness of the interview process. Often, employers contract with third-party vendors to provide these AI-powered interviewing tools, as well as other tech-enhanced selection procedures.
Recruiting qualified applicants and hiring top talent have always been time-consuming endeavors that come with constant worry about making a wrong hire. Added to this, the COVID-19 pandemic effectively put a halt to employers’ ability to evaluate applicants in person. These factors, and others, have led many employers to adopt, or to consider adopting, artificial intelligence (AI) tools to optimize recruitment by introducing efficiencies, reaching a broader pool of applicants, increasing consistency and uniformity in the evaluation of applicants, and, in some cases, helping employers meet diversity, equity, and inclusion goals. Typically, employers opting to use AI contract with third-party vendors that offer AI-powered algorithms, which perform a variety of functions, such as cognitive assessments, personality tests, and video interviews.
A critical component of a successful employer-employee relationship is the employer’s fair and equitable treatment of employees, often embodied in the employer’s employee engagement, retention, and compensation practices. When it comes to compensation, U.S. employers must comply with federal and applicable state equal pay laws that prohibit discriminatory pay practices, and a myriad of state and local laws banning inquiries into, or the use of, prior salary history in setting pay. Yet, compensation bias and discrimination still exist and continue to be the subject of ...
As we have previously blogged, use of third-party digital hiring platforms to select job applicants using video interviews can present an array of potential legal issues. A recent Complaint filed with the Federal Trade Commission (“FTC”) by a consumer advocacy organization, Electronic Privacy Information Center (“EPIC”), illustrates some of those potential pitfalls. EPIC asks the FTC to investigate the recruiting technology company HireVue for alleged discriminatory screening of job applicants through its face-scanning software. HireVue asks job applicants to ...
We have long counseled employers using or contemplating using artificial intelligence (“AI”) algorithms in their employee selection processes to validate the AI-based selection procedure using an appropriate validation strategy approved by the Uniform Guidelines on Employee Selection Procedures (“Uniform Guidelines”). Our advice has been primarily based on minimizing legal risk and complying with best practices. A recently updated Frequently Asked Questions (“FAQ”) from the Office of Federal Contract Compliance Programs (“OFCCP”) provides further ...
This extended interview from Employment Law This Week will be of interest to many of our readers. Attorney and co-editor of this blog, Michelle Capezza explains how recent legal developments have prepared employers for their future workforce, which will include artificial intelligence technologies working alongside human employees. She also looks at the strategies employers should start to consider as artificial intelligence is incorporated into the workplace.
As I continue to follow developments regarding the future of work, I recently attended an event co-sponsored by Cornell/ILR’s Institute for Workplace Studies in NYC and the McKinsey Global Institute (MGI) addressing MGI’s report last Fall entitled Independent Work: Choice, Necessity and the Gig Economy. The report examines the increasing numbers of self-employed, freelance and temporary workers in the U.S. and Europe which are currently estimated to comprise 30 percent of the working-age population and rising. The report notes that many workers have chosen this ...
Blog Editors
Recent Updates
- Video: FTC Exits Labor Pact, EEOC Alleges Significant Underrepresentation in Tech, Sixth Circuit Affirms NLRB Ruling - Employment Law This Week
- Massachusetts High Court Rules That Franchisees Are Independent Contractors
- Video: New DOL Guidance - ERISA Plan Cybersecurity Update - Employment Law This Week
- Video: DOL Authority Challenged - Key Rulings on Overtime and Tip Credit - Employment Law This Week
- Deepfakes: Why Executive Teams Should Prepare for the Cybersecurity and Fraud Risks