As featured in #WorkforceWednesday: This week, we’re detailing the National Labor Relations Board’s (NLRB’s) expanded “joint employer” definition, the recent confirmations of the Equal Employment Opportunity Commission’s (EEOC’s) General Counsel and the Department of Labor’s (DOL’s) Wage and Hour Administrator, and President Biden’s executive order on artificial intelligence (AI).
NLRB Expands Definition of “Joint Employer”
The NLRB recently published its long-awaited final rule, setting a new test for determining joint-employer ...
On October 30, 2023, President Joe Biden signed his Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (AI EO), which addresses artificial intelligence issues, including safety, security, privacy, civil rights, immigration, and health care. The White House also released a companion Fact Sheet summarizing the AI EO (the “Fact Sheet”). Later in the week, on November 1, 2023, the White House announced that the Office of Management and Budget will release for comment a new draft policy on Advancing Governance, Innovation, and ...
While recent public attention has largely focused on generative artificial intelligence (AI), the use of AI for recruitment and promotion screening in the employment context is already widespread. It can help HR professionals make sense of data as the job posting and application process is increasingly conducted online. According to a survey conducted by the Society for Human Resource Management (SHRM), nearly one in four organizations use automation and/or AI to support HR-related activities, such as recruitment, hiring, and promotion decisions, and that number is poised ...
The five-member Board of the California Privacy Protection Agency (the “CPPA”) held a public meeting on September 8, 2023, to discuss a range of topics, most notably, draft regulations relating to risk assessments and cybersecurity audits. Once the regulations are finalized and approved after a formal rulemaking process, they will impose additional obligations on many businesses covered by the California Consumer Privacy Act, as amended by the California Privacy Rights Act (“CCPA”). The Board’s discussion of these draft regulations is instructive for ...
As featured in #WorkforceWednesday: This week, we’re analyzing the National Labor Relations Board’s (NLRB’s) recent Stericycle decision, the Equal Employment Opportunity Commission’s (EEOC’s) proposed rule on pregnant workers’ rights, and the EEOC’s first-ever artificial intelligence (AI) anti-discrimination lawsuit settlement.
On August 9, 2023, the U.S. Equal Employment Opportunity Commission (“EEOC”) and iTutorGroup, Inc. and related companies (collectively, “iTutorGroup”) filed a joint notice of settlement and a request for approval and execution of a consent decree, effectively settling claims that the EEOC brought last year against iTutorGroup regarding its application software. The EEOC claimed in its lawsuit that iTutorGroup violated the Age Discrimination in Employment Act (“ADEA”) by programming its application software to automatically reject hundreds of female applicants age 55 or older and male applicants age 60 or older.
After releasing an initial two-page “fact sheet,” Congress publicly posted the bill text of the No Robot Bosses Act (the “Proposed Act”), detailing proposed federal guardrails for use of automated decision-making systems in the employment context. Robert Casey (D-PA), Brian Schatz (D-HI), John Fetterman (D-PA), and Bernie Sanders (I-VT) currently cosponsor the Proposed Act.
On July 20, 2023, U.S. Senators Bob Casey (D-PA) and Brian Schatz (D-HI) introduced the “No Robot Bosses Act.” Other than bringing to mind a catchy title for a dystopic science fiction novel, the bill aims to regulate the use of “automated decision systems” throughout the employment life cycle and, as such, appears broader in scope than New York City’s Local Law 144 of 2021, about which we have previously written, and which New York City recently began enforcing. Although the text of the proposed federal legislation has not yet been widely circulated, a two-page fact sheet released by the sponsoring Senators outlines the bill’s pertinent provisions regarding an employer’s use of automated decision systems affecting employees and would:
As we previously reported, on July 5, 2023, the New York City Department of Consumer and Worker Protection (DCWP) began enforcing Local Law 144 of 2021 (the “Law”) regulating the use of automated employment decision tools (AEDT). In preparation for the July 5 enforcement date, last week, the DCWP published Frequently Asked Questions (FAQ) concerning the use of AEDTs on its fact page for the Law. The FAQ contains an overview of the Law and general information and guidance regarding bias audit requirements, data requirements, independent auditors, responsibility for bias audits, notice requirements, and complaints.
As explained in the FAQ, the Law applies to employers and employment agencies that use AEDT:
As featured in #WorkforceWednesday: This week, we analyze how employers can benefit from artificial intelligence (AI) innovations while remaining in compliance with federal regulations:
AI is evolving faster than ever before. How can employers prepare for the future of AI in the workplace? Epstein Becker Green attorneys Alexander J. Franchilli and J.T. Wilson III tell us how looming federal regulations and diversity, equity, and inclusion concerns are creating a turbulence of compliance and innovation.
Since late October 2021, when the Equal Employment Opportunity Commission (EEOC) launched its Initiative on Artificial Intelligence (AI) and Algorithmic Fairness, the agency has taken several steps to ensure AI and other emerging tools used in hiring and other employment decisions comply with federal civil rights laws that the agency enforces, including Title VII of the Civil Rights Act of 1964 (Title VII), the Age Discrimination in Employment Act (ADEA), and the Americans with Disabilities Act (ADA). Among other things, the EEOC has hosted disability-focused listening and educational sessions, published technical assistance regarding the ADA and the use of AI and other technologies, and held a public hearing to examine the use of automated systems in employment decisions.
On Thursday, May 4, 2023, the Biden-Harris Administration announced its plan to implement artificial intelligence (“AI”) safeguards to “protect people’s rights and safety.”
Given the rapid development of AI in workplaces, public health, education, and security, the Administration seeks to underscore related risks and opportunities. Vice President Kamala Harris and senior Administration officials have met with leaders at the forefront of AI innovation to call attention to “responsible, trustworthy, and ethical innovation with safeguards that mitigate risk and potential harms to individuals and our society.”
On Tuesday, April 25, 2023, the Equal Employment Opportunity Commission (“EEOC”), Consumer Financial Protection Bureau (“CFPB”), Justice Department’s Civil Rights Division (“DOJ”), and the Federal Trade Commission (“FTC”) issued a “Joint Statement on Enforcement Efforts Against Discrimination and Bias in Automated Systems” (“Joint Statement”). According to a press release from the EEOC, by the Joint Statement, the federal agencies pledged to uphold America’s commitment to the core principles of fairness, equality, and justice as emerging automated systems, including those sometimes marketed as “artificial intelligence,” or “AI,” become increasingly common in people’s daily lives – impacting civil rights, fair competition, consumer protection, and equal opportunity.
On April 6, 2023, the New York City Department of Consumer and Worker Protection (“DCWP”) issued a Notice of Adoption of Final Rule to implement Local Law 144 of 2021, legislation regarding automated employment decision tools (“AEDT Law”). DCWP also announced that it will begin enforcement of the AEDT Law and Final Rule on July 5, 2023. Pursuant to the AEDT Law, an employer or employment agency that uses an automated employment decision tool (“AEDT”) in NYC to screen a candidate or employee for an employment decision must subject the tool to a bias audit within one year of the tool’s use, make information about the bias audit publicly available, and provide notice of the use of the tool to employees or job candidates.
As featured in #WorkforceWednesday: This week, we’re taking a closer look at ChatGPT, exploring the opportunities and risks associated with this artificial intelligence (AI) technology, and providing valuable insights for employers who are looking to stay ahead of the curve:
ChatGPT is set to become the next big thing for employers and beyond. What potential issues should employers be aware of? Epstein Becker Green attorney Brian G. Cesaratto explains how critical it is for employers to think through the workplace-related risks.
As featured in #WorkforceWednesday: This week, we’re mapping out how Local Law 144 applies to every employer with employees in New York City using artificial intelligence (AI), machine learning, or a similar computational process to screen candidates for employment or employees for promotion within the city.
On January 26, 2023, the National Institute of Standards and Technology (“NIST”) released guidance entitled Artificial Intelligence Risk Management Framework (AI RMF 1.0) (the “AI RMF”), intended to help organizations and individuals in the design, development, deployment, and use of AI systems. The AI RMF, like the White House’s recently published Blueprint for an AI Bill of Rights, is not legally binding. Nevertheless, as state and local regulators begin enforcing rules governing the use of AI systems, industry professionals will likely turn to NIST’s voluntary guidance when performing risk assessments of AI systems, negotiating contracts with vendors, performing audits on AI systems, and monitoring the use of AI systems.
As we recently reported, on December 9, 2022, the New York City Department of Consumer and Worker Protection (“DCWP”) announced that it was postponing enforcement of the Automated Employment Decision Tools (“AEDT”) law until April 15, 2023, due to the high volume of public comments it received regarding its proposed rules.
As we previously noted, New York City’s Automated Employment Decision Tools Law (“AEDT Law”), regulating employers’ use of automated employment decision tools, with the aim of curbing bias in hiring and promotions, had an effective date of January 1, 2023. In late September 2022, we reported about the New York City Department of Consumer and Worker Protection (“DCWP”) issuing a Notice of Public Hearing and Opportunity to Comment on Proposed Rules related to the AEDT law. The hearing subsequently took place on November 4, 2022, and dozens of organizations and individuals submitted comments, leaving many observers wondering whether the comments would impact the quickly approaching January 1, 2023 enforcement date and how the DCWP would interpret the law.
On October 31, 2022, the General Counsel of the National Labor Relations Board (“NLRB” or “Board”) released Memorandum GC 23-02 urging the Board to interpret existing Board law to adopt a new legal framework to find electronic monitoring and automated or algorithmic management practices illegal if such monitoring or management practices interfere with protected activities under Section 7 of the National Labor Relations Act (“Act”). The Board’s General Counsel stated in the Memorandum that “[c]lose, constant surveillance and management through electronic means threaten employees’ basic ability to exercise their rights,” and urged the Board to find that an employer violates the Act where the employer’s electronic monitoring and management practices, when viewed as a whole, would tend to “interfere with or prevent a reasonable employee from engaging in activity protected by the Act.” Given that position, it appears that the General Counsel believes that nearly all electronic monitoring and automated or algorithmic management practices violate the Act.
On Tuesday, October 4, 2022, the White House Office of Science and Technology Policy (“OSTP”) released a document entitled “Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People” (the “Blueprint”) together with a companion document, “From Principles to Practice: A Technical Companion to the Blueprint for an AI Bill of Rights” (the “Technical Companion”).
As featured in #WorkforceWednesday: This week, we look at the trend of “quiet quitting” and the legal and technology considerations employers should weigh when navigating the issue.
On Friday, September 23, 2022, the New York City Department of Consumer and Worker Protection (“DCWP”) released a Notice of Public Hearing and Opportunity to Comment on Proposed Rules related to its Automated Employment Decision Tool law (the “AEDT Law”), which goes into effect on January 1, 2023. As we previously wrote, the City passed the AEDT Law to regulate employers’ use of automated employment decision tools, with the aim of curbing bias in hiring and promotions; as written, however, it contains many ambiguities, which have left covered employers with open questions about compliance.
Over the past several years, workplace artificial intelligence (“AI”) tools have matured from novel to mainstream. Whether facilitating attracting, screening, hiring, and onboarding job applicants or charting the career path or promotability of current employees, workplace AI tools will likely become more prevalent. Legislators and administrative agencies have taken note and are in various stages of examining and regulating these tools, with the primary goal of ensuring that they do not violate federal and state laws prohibiting workplace discrimination.
Prompted by the widespread adoption and use of video-conferencing software following the COVID-19 pandemic, many employers have shifted toward video interviews to evaluate potential hires. Even as employers have begun to require in-office attendance, the widespread use of video interviewing has continued, because it is a convenient and efficient way to evaluate applicants. Some of the video interviewing tools used by employers incorporate the use of artificial intelligence (AI) in an effort to maximize the effectiveness of the interview process. Often, employers contract with third-party vendors to provide these AI-powered interviewing tools, as well as other tech-enhanced selection procedures.
Recruiting qualified applicants and hiring top talent have always been time-consuming endeavors that come with constant worry about making a wrong hire. Added to this, the COVID-19 pandemic effectively put a halt to employers’ ability to evaluate applicants in person. These factors, and others, have led many employers to adopt, or to consider adopting, artificial intelligence (AI) tools to optimize recruitment by introducing efficiencies, reaching a broader pool of applicants, increasing consistency and uniformity in the evaluation of applicants, and, in some cases, helping employers meet diversity, equity, and inclusion goals. Typically, employers opting to use AI contract with third-party vendors that offer AI-powered algorithms, which perform a variety of functions, such as cognitive assessments, personality tests, and video interviews.
A critical component of a successful employer-employee relationship is the employer’s fair and equitable treatment of employees, often embodied in the employer’s employee engagement, retention, and compensation practices. When it comes to compensation, U.S. employers must comply with federal and applicable state equal pay laws that prohibit discriminatory pay practices, and a myriad of state and local laws banning inquiries into, or the use of, prior salary history in setting pay. Yet, compensation bias and discrimination still exist and continue to be the subject of ...
As we have previously blogged, use of third-party digital hiring platforms to select job applicants using video interviews can present an array of potential legal issues. A recent Complaint filed with the Federal Trade Commission (“FTC”) by a consumer advocacy organization, Electronic Privacy Information Center (“EPIC”), illustrates some of those potential pitfalls. EPIC asks the FTC to investigate the recruiting technology company HireVue for alleged discriminatory screening of job applicants through its face-scanning software. HireVue asks job applicants to ...
We have long counseled employers using or contemplating using artificial intelligence (“AI”) algorithms in their employee selection processes to validate the AI-based selection procedure using an appropriate validation strategy approved by the Uniform Guidelines on Employee Selection Procedures (“Uniform Guidelines”). Our advice has been primarily based on minimizing legal risk and complying with best practices. A recently updated Frequently Asked Questions (“FAQ”) from the Office of Federal Contract Compliance Programs (“OFCCP”) provides further ...
This extended interview from Employment Law This Week will be of interest to many of our readers. Attorney and co-editor of this blog, Michelle Capezza explains how recent legal developments have prepared employers for their future workforce, which will include artificial intelligence technologies working alongside human employees. She also looks at the strategies employers should start to consider as artificial intelligence is incorporated into the workplace.
As I continue to follow developments regarding the future of work, I recently attended an event co-sponsored by Cornell/ILR’s Institute for Workplace Studies in NYC and the McKinsey Global Institute (MGI) addressing MGI’s report last fall entitled Independent Work: Choice, Necessity and the Gig Economy. The report examines the increasing numbers of self-employed, freelance, and temporary workers in the U.S. and Europe, who are currently estimated to comprise 30 percent of the working-age population and rising. The report notes that many workers have chosen this ...