Posts by Alexander J. Franchilli, Senior Counsel
Attorney Alexander Franchilli helps clients navigate the evolving landscape of laws and regulations related to consumer protection, privacy, cybersecurity, and employment. He is an experienced litigator in state and federal ...
The widespread availability of Artificial Intelligence (AI) tools has enabled the growing use of “deepfakes,” whereby the human voice and likeness can be replicated so seamlessly that impersonations are nearly impossible to detect with the naked eye (or ear). These deepfakes pose substantial new risks for commercial organizations. For example, deepfakes can threaten an organization’s brand, impersonate its leaders and financial officers, and enable access to its networks, communications, and sensitive information.
In 2023, the National Security Agency (NSA), Federal Bureau of Investigation (FBI), and Cybersecurity and Infrastructure Security Agency (CISA) released a Cybersecurity Information Sheet (the “Joint CSI”) entitled “Contextualizing Deepfake Threats to Organizations,” which outlines the risks to organizations posed by deepfakes and recommends steps that organizations, including national critical infrastructure companies (such as financial services, energy, healthcare, and manufacturing organizations), can take to protect themselves. Loosely defining deepfakes as “multimedia that have either been created (fully synthetic) or edited (partially synthetic) using some form of machine/deep learning (artificial intelligence),” the Joint CSI cautioned that the “market is now flooded with free, easily accessible tools” such that “fakes can be produced in a fraction of the time with limited or no technical expertise.” Thus, deepfake perpetrators could be mere amateur mischief makers or savvy, experienced cybercriminals.
Over the past several years, the number of states with comprehensive consumer data privacy laws has grown rapidly from just a handful—California, Colorado, Virginia, Connecticut, and Utah—to as many as twenty by some counts. Many of these state laws will go into effect between the fourth quarter of 2024 and the end of 2025.
We have previously written in more detail on New Jersey’s comprehensive data privacy law, which goes into effect January 15, 2025, and Tennessee’s comprehensive data privacy law, which goes into effect July 1, 2025. Some laws have already gone into effect, like the Texas Data Privacy and Security Act and the Oregon Consumer Privacy Act, both of which became effective in July 2024. Now is a good time to take stock of the current landscape as the next batch of state privacy laws goes into effect.
On July 12, 2024, in a keenly awaited decision, the U.S. District Court for the Northern District of California determined that Workday, Inc. (“Workday”), a provider of AI-infused human resources (HR) software, can be held liable under Title VII of the Civil Rights Act of 1964 (Title VII), the Age Discrimination in Employment Act of 1967 (ADEA), and the Americans with Disabilities Act (ADA) (collectively the “Anti-Discrimination Laws”) as an agent of the corporate clients that hire Workday to screen and source candidates for employment by utilizing its AI-infused decision-making tools. In noting that “[d]rawing an artificial distinction between software decisionmakers and human decisionmakers would potentially gut anti-discrimination laws in the modern era,” the court underscored the EEOC’s admonition, which we discussed in our previous post, that employers delegating their hiring protocols to AI must do so cognizant of the potential discriminatory impacts of such use. See Opinion at 10. Thus, the court allowed plaintiff Derek Mobley’s disparate impact claim to proceed, finding that Mobley’s allegations supported a plausible inference that Workday’s screening algorithms automatically rejected his applications based on protected characteristics rather than his qualifications.
Prior Proceedings
Mobley filed his initial complaint as a putative class action on February 21, 2023, alleging claims against Workday as an “employment agency” for disparate impact and intentional discrimination under the Anti-Discrimination Laws. His complaint centered on his allegation that he applied for “at least 80-100 positions that upon information and belief use Workday, Inc. as a screening tool for talent acquisition and/or hiring” and “has been denied employment each and every time.” Complaint at 10.
Is the developer of an AI resume-screening tool an “employment agency” or “agent” subject to liability under Title VII of the Civil Rights Act for its customers’ allegedly discriminatory employment decisions? According to the United States Equal Employment Opportunity Commission (“EEOC”), the answer is yes. On April 9, 2024, the EEOC filed a motion for leave to file a brief as amicus curiae, together with a brief, in Mobley v. Workday, Inc., Case No. 3:23-cv-00770-RFL, to support plaintiff Derek Mobley’s (“Mobley”) opposition to Workday’s motion to dismiss.
The EEOC’s action is ...
Recently, the Sixth Circuit found that the Fair Credit Reporting Act (“FCRA”) preempted a former employee’s state law defamation claim against his former employer. While the FCRA can impose burdensome requirements on the entities that fall within its scope, including consumer reporting agencies (“CRAs”), furnishers, and users of consumer reports, it can also serve as a shield against certain state law tort claims.
In McKenna v. Dillon Transportation, LLC, the plaintiff, a truck driver named Frank McKenna, sued his former employer, Dillon Transportation, LLC, for ...
A recent decision from the Northern District of Illinois highlights new legal hurdles for employers using AI-powered video interview technologies under Illinois’ Biometric Information Privacy Act (BIPA), 740 ILCS 14/15. In Deyerler v. HireVue, initially filed over two years ago in January 2022, a class of plaintiffs alleged that HireVue’s AI-powered facial expression and screening technology violated BIPA. According to the complaint, HireVue collected, used, disclosed, and profited from “biometric identifiers” without complying with the requirements of BIPA. ...
On December 11, 2023, the City of San Francisco released the San Francisco Generative AI Guidelines (“Guidelines”). The Guidelines set forth parameters for City employees, contractors, consultants, volunteers, and vendors who use generative artificial intelligence (AI) tools to perform work on behalf of the City.
Specifically, the Guidelines encourage City employees, contractors, consultants, volunteers, and vendors to use generative AI tools for purposes such as preparing initial drafts of documents, “translating” text into levels of formality or for a ...
On December 8, 2023, the California Privacy Protection Agency (“CPPA”) Board (the “Board”) held a public meeting to discuss, among other things, regulations addressing: (1) cybersecurity audits; (2) risk assessments; and (3) automated decisionmaking technology (“ADMT”). After years in the making, the December 8 Board meeting was another step towards the final rulemaking process for these regulations. The Board’s discussion of the draft regulations revealed their broad implications for businesses covered by the California Consumer Privacy Act ...
Given the “tendency of many to ‘overshare,’ documenting everything from their breakfast to their favorite Marvel™ villain” on social media, as recognized in at least one court opinion,[1] it is perhaps unsurprising that some employers consider social media a valuable source of insight about applicants and employees. Assembly Bill A836/Senate Bill S2518A (the “Personal Accounts Law”), signed into law by Governor Kathy Hochul on September 14, 2023, however, will soon place new limits on New York employers that seek access to an employee’s or ...
While recent public attention has largely focused on generative artificial intelligence (AI), the use of AI for recruitment and promotion screening in the employment context is already widespread. AI can help HR professionals make sense of data as the job posting and application process is increasingly conducted online. According to a survey conducted by the Society for Human Resource Management (SHRM),[1] nearly one in four organizations use automation and/or AI to support HR-related activities, such as recruitment, hiring, and promotion decisions, and that number is poised ...
The five-member Board of the California Privacy Protection Agency (the “CPPA”) held a public meeting on September 8, 2023, to discuss a range of topics, most notably, draft regulations relating to risk assessments and cybersecurity audits. Once the regulations are finalized and approved after a formal rulemaking process, they will impose additional obligations on many businesses covered by the California Consumer Privacy Act, as amended by the California Privacy Rights Act (“CCPA”). The Board’s discussion of these draft regulations is instructive for ...
On August 9, 2023, the U.S. Equal Employment Opportunity Commission (“EEOC”) and iTutorGroup, Inc. and related companies (collectively, “iTutorGroup”) filed a joint notice of settlement and a request for approval and execution of a consent decree, effectively settling claims that the EEOC brought last year against iTutorGroup regarding its application software. The EEOC claimed in its lawsuit that iTutorGroup violated the Age Discrimination in Employment Act (“ADEA”) by programming its application software to automatically reject hundreds of female applicants age 55 or older and male applicants age 60 or older.
After releasing an initial two-page “fact sheet,” Congress publicly posted the bill text of the No Robot Bosses Act (the “Proposed Act”), detailing proposed federal guardrails for the use of automated decision-making systems in the employment context. Senators Bob Casey (D-PA), Brian Schatz (D-HI), John Fetterman (D-PA), and Bernie Sanders (I-VT) currently cosponsor the Proposed Act.
On July 20, 2023, U.S. Senators Bob Casey (D-PA) and Brian Schatz (D-HI) introduced the “No Robot Bosses Act.” Aside from bringing to mind a catchy title for a dystopian science fiction novel, the bill aims to regulate the use of “automated decision systems” throughout the employment life cycle and, as such, appears broader in scope than New York City’s Local Law 144 of 2021, about which we have previously written and which New York City recently began enforcing. Although the text of the proposed federal legislation has not yet been widely circulated, a two-page fact sheet released by the sponsoring Senators outlines the bill’s pertinent provisions regarding an employer’s use of automated decision systems affecting employees. According to the fact sheet, the bill would:
California businesses, including employers, that have not already complied with their statutory data privacy obligations under the California Consumer Privacy Act (CCPA), as amended by the California Privacy Rights Act (CPRA), including as to employee and job applicant personal information, should be taking all necessary steps to do so. See No More Exceptions: What to Do When the California Privacy Exemptions for Employee, Applicant and B2B Data Expire on January 1, 2023. As background, a covered business is one that “does business” in California and either has annual gross revenues of over $25 million, annually buys, sells, or shares the personal information of 100,000 or more consumers or households, or derives 50 percent or more of its annual revenues from selling or sharing consumers’ personal information. The CCPA also applies, in certain circumstances, to entities that control or are controlled by a covered business, as well as to certain joint ventures. Covered businesses may be exempt from obligations under certain enumerated entity-level or information-level carve-outs.
On July 13, 2023, the White House issued the first iteration of its National Cybersecurity Strategy Implementation Plan (the “Implementation Plan”), which will be updated annually. The two overarching goals of the Implementation Plan are to address the need for more capable actors in cyberspace to bear more of the responsibility for cybersecurity and to increase incentives to make investments in long-term resilience. The Implementation Plan is structured around the five pillars laid out in the White House’s National Cybersecurity Strategy earlier this year, namely: (1) defend critical infrastructure; (2) disrupt and dismantle threat actors; (3) shape market forces to drive security and resilience; (4) invest in a resilient future; and (5) forge international partnerships to pursue shared goals. The Implementation Plan identifies strategic objectives and high-impact cybersecurity initiatives under each pillar and designates the federal agency responsible for leading the initiative to meet each objective. The following summarizes some of the key initiatives included in the Implementation Plan that will directly impact critical infrastructure organizations, including healthcare, energy, manufacturing, information technology and financial services.
As we previously reported, on July 5, 2023, the New York City Department of Consumer and Worker Protection (DCWP) began enforcing Local Law 144 of 2021 (the “Law”), which regulates the use of automated employment decision tools (AEDT). In preparation for the July 5 enforcement date, the DCWP last week published Frequently Asked Questions (FAQ) concerning the use of AEDTs on its fact page for the Law. The FAQ contains an overview of the Law and general information and guidance regarding bias audit requirements, data requirements, independent auditors, responsibility for bias audits, notice requirements, and complaints.
As explained in the FAQ, the Law applies to employers and employment agencies that use an AEDT:
On April 6, 2023, the New York City Department of Consumer and Worker Protection (“DCWP”) issued a Notice of Adoption of Final Rule to implement Local Law 144 of 2021, legislation regarding automated employment decision tools (“AEDT Law”). DCWP also announced that it will begin enforcement of the AEDT Law and Final Rule on July 5, 2023. Pursuant to the AEDT Law, an employer or employment agency that uses an automated employment decision tool (“AEDT”) in NYC to screen a candidate or employee for an employment decision must subject the tool to a bias audit within one year of the tool’s use, make information about the bias audit publicly available, and provide notice of the use of the tool to employees or job candidates.
On February 2, 2023, the Illinois Supreme Court filed an opinion in Jorome Tims v. Black Horse Carriers, Inc., holding that Illinois’ Biometric Information Privacy Act (BIPA) is subject to a single, five-year statute of limitations period.
The California Privacy Protection Agency Board (the “Board”) held a public meeting on February 3, 2023, adopting and approving the current set of draft rules (the “Draft Rules”), which implement and clarify the California Consumer Privacy Act of 2018 (“CCPA”) as amended by the California Privacy Rights Act of 2020 (“CPRA”). The Draft Rules cover many CCPA requirements, including restrictions on the collection and use of personal information, transparency obligations, consumer rights and responding to consumer requests, and service provider contract requirements. At the meeting, the Board also addressed additional proposed rulemaking processes concerning cybersecurity audits, risk assessments, and automated decision-making.
On February 1, 2023, the FTC announced a proposed $1.5 million settlement with GoodRx Holdings, based on alleged violations of the Federal Trade Commission Act (“FTC Act”) and Health Breach Notification Rule (“HBNR”) for using advertising technologies on its websites and mobile app that resulted in the unauthorized disclosure of consumers’ personal and health information to advertisers and other third parties. On the same day, the U.S. Department of Justice, acting on behalf of the FTC, filed a Complaint and Proposed Stipulated Order detailing the FTC’s allegations and the terms of the proposed settlement.
On January 26, 2023, the National Institute of Standards and Technology (“NIST”) released guidance entitled Artificial Intelligence Risk Management Framework (AI RMF 1.0) (the “AI RMF”), intended to help organizations and individuals in the design, development, deployment, and use of AI systems. The AI RMF, like the White House’s recently published Blueprint for an AI Bill of Rights, is not legally binding. Nevertheless, as state and local regulators begin enforcing rules governing the use of AI systems, industry professionals will likely turn to NIST’s voluntary guidance when performing risk assessments of AI systems, negotiating contracts with vendors, performing audits on AI systems, and monitoring the use of AI systems.
On Tuesday, October 4, 2022, the White House Office of Science and Technology Policy (“OSTP”) released a document entitled “Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People” (the “Blueprint”), together with a companion document, “From Principles to Practice: A Technical Companion to the Blueprint for an AI Bill of Rights” (the “Technical Companion”).
On Friday, September 23, 2022, the New York City Department of Consumer and Worker Protection (“DCWP”) released a Notice of Public Hearing and Opportunity to Comment on Proposed Rules related to its Automated Employment Decision Tool law (the “AEDT Law”), which goes into effect on January 1, 2023. As we previously wrote, the City passed the AEDT Law to regulate employers’ use of automated employment decision tools, with the aim of curbing bias in hiring and promotions; as written, however, it contains many ambiguities, which have left covered employers with open questions about compliance.
As reported in a June 3, 2022 press release from the House Committee on Energy and Commerce, U.S. Representatives Frank Pallone and Cathy McMorris Rodgers and U.S. Senator Roger Wicker released a “discussion draft” of a federal data privacy bill entitled the “American Data Privacy and Protection Act” (the “Draft Bill”), which would impact the data privacy and cybersecurity practices of virtually every business and not-for-profit organization in the United States.
As further described below, the Draft Bill’s highlights include: (i) a comprehensive nationwide data privacy framework; (ii) preemption of state data privacy laws, with some exceptions; (iii) a private right of action after four (4) years, subject to the individual’s prior notice to the Federal Trade Commission (“FTC”) and applicable state attorney general before commencement of a lawsuit; (iv) exemptions for covered entities that are in compliance with other federal privacy regimes, such as the Health Insurance Portability and Accountability Act of 1996 (“HIPAA”) and the Gramm-Leach-Bliley Act (“GLBA”), solely with respect to data covered by those statutes; (v) exclusions from the Act’s requirements for certain “employee data”; and (vi) a requirement for implementation of reasonable administrative, technical, and physical safeguards to protect covered data. The Draft Bill would be enforced by the FTC, with violations treated as unfair or deceptive trade practices under the Federal Trade Commission Act, as well as by state attorneys general.
Prompted by the widespread adoption and use of video-conferencing software following the COVID-19 pandemic, many employers have shifted toward video interviews to evaluate potential hires. Even as employers have begun to require in-office attendance, the widespread use of video interviewing has continued, because it is a convenient and efficient way to evaluate applicants. Some of the video interviewing tools used by employers incorporate the use of artificial intelligence (AI) in an effort to maximize the effectiveness of the interview process. Often, employers contract with third-party vendors to provide these AI-powered interviewing tools, as well as other tech-enhanced selection procedures.
On March 15, 2022, President Biden signed into law the 2022 Consolidated Appropriations Act containing the Cyber Incident Reporting for Critical Infrastructure Act of 2022 (the “Cyber Incident Reporting Act”). While President Biden’s remarks highlighted the $13.6 billion in funding “to address Russia’s invasion of Ukraine and the impact on surrounding countries,” the 2022 Consolidated Appropriations Act contained numerous other laws, including the Cyber Incident Reporting Act, which should not be overlooked. The Cyber Incident Reporting Act puts in motion important new cybersecurity reporting requirements that will likely apply to businesses in almost every major sector of the economy, including health care, financial services, energy, transportation and commercial facilities. Critical infrastructure entities should monitor the upcoming rule-making by the Cybersecurity and Infrastructure Security Agency (“CISA”), as the final regulations will clarify the scope and application of the new law.
The Cybersecurity & Infrastructure Security Agency (CISA) and the National Institute of Standards and Technology (NIST) jointly published a new resource as part of their ongoing efforts to promote awareness of, and help organizations defend against, supply chain risks. The publication, Defending Against Software Supply Chain Attacks, provides recommendations for software customers and vendors, as well as key steps for the prevention and mitigation of, and resilience against, software supply chain attacks.
Software supply chain attacks occur when a cyber threat actor infiltrates a software ...
A recently discovered security vulnerability potentially affecting at least 100 million Internet of Things (“IoT”) devices[1] highlights the importance of the newly enacted IoT Cybersecurity Improvement Act of 2020 (the “IoT Act”). Researchers at the security firms Forescout Research Labs and JSOF Research Labs have jointly published a report detailing a security vulnerability known as “NAME:WRECK.” This is exactly the type of issue that the new IoT Act is designed to address at the governmental level, because the vulnerability can detrimentally affect ...
Enacted on December 4, 2020, the Internet of Things Cybersecurity Improvement Act of 2020 (the “IoT Act”) is expected to dramatically improve the cybersecurity of ubiquitous IoT devices.[1] With the number of IoT devices on track to exceed 21.5 billion by 2025, the IoT Act mandates cybersecurity standards and guidelines for the federal government’s acquisition and use of IoT devices capable of connecting to the Internet. The IoT Act, and the accompanying standards and guidance being developed by the National Institute of Standards and Technology (NIST), will directly affect ...