- Posts by Adam S. Forman, Member of the Firm
Attorney Adam Forman* leverages his 25 years of experience representing employers in all manner of employment litigation and traditional labor matters to advise clients on emerging technologies and their impact on the workplace.
On September 24, 2024, the U.S. Department of Labor (“DOL”), collaborating with the Partnership on Employment & Accessible Technology (“PEAT”), a non-governmental organization the DOL funds and supports, announced the publication of the “AI & Inclusive Hiring Framework” (the “DOL’s Framework”). The DOL’s Framework, created in response to the Biden-Harris Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, helps employers create and maintain non-discriminatory artificial intelligence (“AI”) hiring procedures for job seekers with disabilities. (For more information on the Biden-Harris Executive Order, see our Workforce Bulletin.)
Establishing these procedures has become a top priority for employers as nearly 1 in 4 organizations have implemented AI tools in human resource departments, according to new research from SHRM.
AI-powered recruitment and selection tools can streamline the hiring process by identifying potential candidates or screening applicant resumes, but employers must ensure their AI hiring tools do not intentionally or unintentionally perpetuate discriminatory practices or create barriers for job seekers with disabilities. Employers may rely on the DOL’s Framework as a useful starting point when implementing AI hiring tools. Employers that have already implemented such tools should review the DOL’s Framework to ensure their practices do not create unwanted liability.
We previously wrote about a Michigan Supreme Court decision to reinstate two voter initiatives – the Wage Act and the Earned Sick Time Act (ESTA) – and state agency responses to that decision (the “Original Order”), which included the filing of a motion asking the court to clarify the Original Order. On September 18, 2024, the Michigan Supreme Court responded, granting the request for immediate consideration and issuing a thirteen-page Order (the “Clarification Order”).
New Details on Coming Adjustments to Michigan Wage Rates
Tip Credit Phase Out
The substantive portion of the Clarification Order rewrites a lengthy and important footnote in the Original Order, including an extension of the gradual phase-out of the tip credit and a clearer definition of the annually increasing percentage amount. Instead of merely saying “The tip credit will be [XX]% of minimum wage,” the Clarification Order provides that “tipped workers’ minimum hourly wage rate must be at least [XX]% of the general minimum wage rate, and the tip credit can be used to satisfy the balance owed to such workers.”
In other words, the Clarification Order spells out that, for example, “80%” means that tipped workers must be paid a base rate that is at least 80% of the general minimum hourly wage rate.
On August 22, 2024, the Michigan Department of Labor & Economic Opportunity (LEO) issued a press release on the heels of the Mothering Justice decision, about which we previously wrote, and which will drastically change the minimum wage, tip credit, and paid sick leave obligations for Michigan employers.
With respect to paid sick leave, LEO announced that it issued new guidance and FAQs on the Earned Sick Time Act, which goes into effect on February 21, 2025. We will be publishing an Insight shortly detailing all the mandatory changes.
With respect to the minimum wage and tip credit changes, on August 21, Michigan’s Attorney General, LEO, and the Department of Treasury asked the Michigan Supreme Court for clarification on how the Treasurer should calculate adjustments for inflation to set new minimum wage rates, as directed by the July 31 decision. The motion outlines a proposed schedule of new minimum wages based on one interpretation of the Supreme Court’s order, but suggests that ambiguity in the order leaves room for interpretation and therefore lays out five options:
On August 9, 2024, Illinois Governor J.B. Pritzker signed HB 3773 into law, amending the Illinois Human Rights Act (IHRA) to expressly regulate the use of artificial intelligence (AI) for employment decisions. HB 3773 is the second Illinois law that regulates workplace AI. As we previously reported, in August 2019, Illinois enacted the first-of-its-kind Artificial Intelligence Video Interview Act (AIVIA), which requires employers who use AI-enabled video interviewing technology to provide applicants advance notice of the use of the AI and information regarding how the AI works and the characteristics evaluated, and to obtain prior consent from applicants. And, while not necessarily directed exclusively at workplace AI tools, as we also previously reported, an employer’s use of AI-powered facial expression and screening technology could also implicate the requirements of the Illinois Biometric Information Privacy Act (BIPA).
HB 3773 has a potentially broader application than either AIVIA or BIPA. HB 3773 provides two new definitions:
Artificial Intelligence
A machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
Artificial intelligence also includes generative artificial intelligence.
Generative Artificial Intelligence
An automated computing system that, when prompted with human prompts, descriptions, or queries, can produce outputs that simulate human-produced content, including, but not limited to, the following:
The Michigan Supreme Court has written the latest, and perhaps last, chapter of an ongoing saga affecting most Michigan employers. In Mothering Justice v. Attorney General, the Michigan Supreme Court fully restored sweeping minimum wage and paid sick leave laws, bringing finality to a legal controversy that has been churning since the laws were first proposed in 2018. Pursuant to that decision, the laws will take full effect in their original form, about six months from now, on February 21, 2025.
How We Got Here
In 2018, labor advocacy groups presented the Michigan legislature with two voter initiatives related to minimum wage (the Improved Workforce Opportunity Wage Act (IWOWA)) and paid sick leave (the Earned Sick Time Act (ESTA)) through the state’s citizen initiative process. Michigan’s constitution allows voter initiatives to propose legislation, and the legislature may take one of these three actions: (1) adopt “without change or amendment”; (2) reject and place the proposed legislation on the ballot; or (3) reject and propose an amendment, placing both on the ballot. As we previously explained, the Legislature quickly enacted amended versions of the IWOWA (2018 PA 368) and the ESTA, which was renamed the Paid Medical Leave Act (PMLA) (2018 PA 369), with significant changes. As we detailed here, the amended versions of these laws were less burdensome to employers.
The legislature’s actions led the initiatives’ advocates to file a legal action challenging the lawmakers’ authority to modify a voter initiative so quickly and dramatically through a process labeled “adopt and amend.” That lawsuit has wended its way through Michigan’s courts, with the final outcome decided on July 31, 2024, echoing that of the initial holding issued in 2022: the Michigan legislature’s adoption-and-amendment of the two initiatives violated the State constitution’s provision on voter initiatives. Hence, those amendments are void as unconstitutional and the laws as originally conceived should take effect.
On July 12, 2024, in a keenly awaited decision, the U.S. District Court for the Northern District of California determined that Workday, Inc. (“Workday”), a provider of AI-infused human resources (HR) software, can be held liable under Title VII of the Civil Rights Act of 1964 (Title VII), the Age Discrimination in Employment Act of 1967 (ADEA), and the Americans with Disabilities Act (ADA) (collectively the “Anti-Discrimination Laws”) as an agent of the corporate clients that hire Workday to screen and source candidates for employment by utilizing its AI-infused decision-making tools. In noting that “[d]rawing an artificial distinction between software decisionmakers and human decisionmakers would potentially gut anti-discrimination laws in the modern era,” the court underscored the EEOC’s admonition, which we discussed in our previous post, that employers delegating their hiring protocols to AI must do so cognizant of the potential discriminatory impacts of such use. See Opinion at 10. Thus, the court allowed plaintiff Derek Mobley’s disparate impact claim to proceed, finding that Mobley’s allegations supported a plausible inference that Workday’s screening algorithms automatically rejected his applications based on protected characteristics rather than his qualifications.
Prior Proceedings
Mobley filed his initial complaint as a putative class action on February 21, 2023, alleging claims against Workday as an “employment agency” for disparate impact and intentional discrimination under the Anti-Discrimination Laws. His complaint centered on his allegation that he applied for “at least 80-100 positions that upon information and belief use Workday, Inc. as a screening tool for talent acquisition and/or hiring” and “has been denied employment each and every time.” Complaint at 10.
The Department of Labor's (DOL) May 16, 2024 guidance, Artificial Intelligence and Worker Well-Being: Principles for Developers and Employers, published in response to the mandates of Executive Order 14110 (EO 14110) (Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence), weighs the benefits and risks of an AI-augmented workplace and establishes Principles to follow that endeavor to ensure the responsible and transparent use of AI. The DOL’s publication of these Principles follows in the footsteps of the EEOC and the OFCCP’s recent guidance on AI in the workplace and mirrors, in significant respects, the letter and spirit of their pronouncements.
While not “exhaustive,” the Principles “should be considered during the whole lifecycle of AI,” from “design to development, testing, training, deployment and use, oversight, and auditing.” Although the DOL intends the Principles to apply to all business sectors, the guidance notes that not all Principles will apply to the same extent in every industry or workplace, and thus should be reviewed and customized based on organizational context and input from workers.
While not defined in the Principles, EO 14110 defines artificial intelligence as set forth in 15 U.S.C. 9401(3): “A machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. Artificial intelligence systems use machine- and human-based inputs to perceive real and virtual environments; abstract such perceptions into models through analysis in an automated manner; and use model inference to formulate options for information or action.”
In line with the mandates of President Biden’s Executive Order 14110, entitled “The Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” and its call for a coordinated U.S. government approach to ensure responsible and safe development and use of artificial intelligence (AI) systems, the Office of Federal Contract Compliance Programs (OFCCP) has published a Guide addressing federal contractors’ use of AI in the context of Equal Employment Opportunity (EEO).
As discussed below, the Guide comprises a set of common questions and answers about the intersection of AI and EEO, as well as so-called “promising practices” federal contractors should consider implementing in the development and deployment of AI in the EEO context. In addition, the new OFCCP “landing page” in which the new Guide appears includes a Joint Statement signed by nine other federal agencies and the OFCCP articulating their joint commitment to protect the public from unlawful bias in the use of AI and automated systems.
In response to President Biden’s Executive Order 14110 calling for a coordinated U.S. government approach to ensuring the responsible and safe development and use of AI, the U.S. Department of Labor Wage and Hour Division (WHD) issued Field Assistance Bulletin No. 2024-1 (the “Bulletin”). This Bulletin, published on April 29, 2024, provides guidance on the application of the Fair Labor Standards Act (FLSA) and other federal labor standards in the context of increasing use of artificial intelligence (AI) and automated systems in the workplace.
Importantly, reinforcing the DOL’s position expressed in the Joint Statement on Enforcement of Civil Rights, Fair Competition, Consumer Protection, and Equal Opportunity Laws in Automated Systems, the WHD confirms that the historical federal laws enforced by the WHD will continue to apply to new technological innovations, such as workplace AI. The WHD also notes that, although AI and automated systems may streamline tasks for employers, improve workplace efficiency and safety, and enhance workforce accountability, implementation of such tools without responsible human oversight may pose potential compliance challenges.
The Bulletin discusses multiple ways in which AI interacts with the Fair Labor Standards Act (“FLSA”), the Family and Medical Leave Act (“FMLA”), the Providing Urgent Maternal Protections for Nursing Mothers Act (“PUMP Act”), and the Employee Polygraph Protection Act (“EPPA”). The Bulletin makes the following pronouncements regarding the potential compliance issues that may arise due to the use of AI to perform wage-and-hour tasks:
On May 17, 2024, Colorado Governor Jared Polis signed into law SB 24-205—concerning consumer protections in interactions with artificial intelligence systems—after the Senate passed the bill on May 3, and the House of Representatives passed the bill on May 8. In a letter to the Colorado General Assembly, Governor Polis noted that he signed the bill into law with reservations, hoping to further the conversation on artificial intelligence (AI) and urging lawmakers to “significantly improve” on the law before it takes effect.
SB 24-205 will become effective on February 1 ...
Is the developer of an AI resume-screening tool an “employment agency” or “agent” subject to liability under Title VII of the Civil Rights Act for its customers’ allegedly discriminatory employment decisions? According to the United States Equal Employment Opportunity Commission (“EEOC”), the answer is yes. On April 9, 2024, the EEOC filed a motion for leave to file a brief as amicus curiae, together with a brief, in Mobley v. Workday, Inc., Case No. 3:23-cv-00770-RFL, to support plaintiff Derek Mobley’s (“Mobley”) opposition to Workday’s motion to dismiss.
The EEOC’s action is ...
Recently, the Sixth Circuit found that the Fair Credit Reporting Act (“FCRA”) preempted a former employee’s state law defamation claim against his former employer. While the FCRA can impose burdensome requirements on the entities that fall within its scope, including consumer reporting agencies (“CRAs”), furnishers, or users of consumer reports, the FCRA can also serve as a shield against certain state law tort claims.
In McKenna v. Dillon Transportation, LLC, the plaintiff, a truck driver named Frank McKenna, sued his former employer, Dillon Transportation, LLC, for ...
A recent decision from the Northern District of Illinois highlights new legal hurdles for employers using AI-powered video interview technologies under Illinois’ Biometric Information Privacy Act (BIPA), 740 ILCS 14/15. In Deyerler v. HireVue, initially filed over two years ago in January 2022, a class of plaintiffs alleged that HireVue’s AI-powered facial expression and screening technology violated BIPA. According to the complaint, HireVue collected, used, disclosed, and profited from “biometric identifiers” without complying with the requirements of BIPA. ...
On December 11, 2023, the City of San Francisco released the San Francisco Generative AI Guidelines (“Guidelines”). The Guidelines set forth parameters for City employees, contractors, consultants, volunteers, and vendors who use generative artificial intelligence (AI) tools to perform work on behalf of the City.
Specifically, the Guidelines encourage City employees, contractors, consultants, volunteers, and vendors to use generative AI tools for purposes such as preparing initial drafts of documents, “translating” text into levels of formality or for a ...
On October 30, 2023, President Joe Biden signed his Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (AI EO), which addresses artificial intelligence issues, including safety, security, privacy, civil rights, immigration, and health care. The White House also released a companion Fact Sheet summarizing the AI EO (the “Fact Sheet”). Later in the week, on November 1, 2023, the White House announced that the Office of Management and Budget will release for comment a new draft policy on Advancing Governance, Innovation, and ...
On August 9, 2023, the U.S. Equal Employment Opportunity Commission (“EEOC”) and iTutorGroup, Inc. and related companies (collectively, “iTutorGroup”) filed a joint notice of settlement and a request for approval and execution of a consent decree, effectively settling claims that the EEOC brought last year against iTutorGroup regarding its application software. The EEOC claimed in its lawsuit that iTutorGroup violated the Age Discrimination in Employment Act (“ADEA”) by programming its application software to automatically reject hundreds of female applicants age 55 or older and male applicants age 60 or older.
After releasing an initial two-page “fact sheet,” Congress publicly posted the bill text of the No Robot Bosses Act (the “Proposed Act”), detailing proposed federal guardrails for use of automated decision-making systems in the employment context. Robert Casey (D-PA), Brian Schatz (D-HI), John Fetterman (D-PA), and Bernie Sanders (I-VT) currently cosponsor the Proposed Act.
On July 20, 2023, U.S. Senators Bob Casey (D-PA) and Brian Schatz (D-HI) introduced the “No Robot Bosses Act.” Other than bringing to mind a catchy title for a dystopic science fiction novel, the bill aims to regulate the use of “automated decision systems” throughout the employment life cycle and, as such, appears broader in scope than New York City’s Local Law 144 of 2021, about which we have previously written, and which New York City recently began enforcing. Although the text of the proposed federal legislation has not yet been widely circulated, a two-page fact sheet released by the sponsoring Senators outlines the bill’s pertinent provisions regarding an employer’s use of automated decision systems affecting employees and would:
As we previously reported, on July 5, 2023, the New York City Department of Consumer and Worker Protection (DCWP) began enforcing Local Law 144 of 2021 (the “Law”) regulating the use of automated employment decision tools (AEDT). In preparation for the July 5 enforcement date, last week, the DCWP published Frequently Asked Questions (FAQs) concerning the use of AEDTs on its fact page for the Law. The FAQs contain an overview of the Law and general information and guidance regarding bias audit requirements, data requirements, independent auditors, responsibility for bias audits, notice requirements, and complaints.
As explained in the FAQ, the Law applies to employers and employment agencies that use AEDT:
Michigan is the latest state to expand its legal definition of race as a protected class to include hairstyle descriptors. As we recently explained, legislation with the acronym for “Creating a Respectful and Open World for Natural Hair” is intended to protect from discrimination individuals with hairstyles often associated with race.
On June 15, 2023, Governor Gretchen Whitmer signed Michigan’s version of the CROWN Act – S.B. 90 – into law, once again amending the state’s increasingly broad anti-discrimination statute, the Elliott-Larsen Civil Rights Act (“ELCRA”). The Michigan CROWN Act represents the third amendment to ELCRA this year: prohibitions on discrimination based on sexual orientation, gender identity, and gender expression were added in March, and protections for individuals who have an abortion were provided by amendments enacted in May.
On May 17, 2023, Michigan Governor Gretchen Whitmer signed SB 147 into law, amending the Elliott-Larsen Civil Rights Act (“ELCRA”) to expand its protections from workplace discrimination to those who have abortions. The law is expected to take effect on March 31, 2024, ninety-one days after final adjournment of the Michigan Legislature’s 2023 Regular Session, and will apply to any Michigan employer with one or more employees. This is the second time this year that the Michigan Legislature has amended ELCRA, joining SB 4 in early March 2023, which amended ELCRA to add protections for individuals based on their sexual orientation and gender identity or expression.
Since late October 2021, when the Equal Employment Opportunity Commission (EEOC) launched its Initiative on Artificial Intelligence (AI) and Algorithmic Fairness, the agency has taken several steps to ensure AI and other emerging tools used in hiring and other employment decisions comply with federal civil rights laws that the agency enforces, including Title VII of the Civil Rights Act of 1964 (Title VII), the Age Discrimination in Employment Act (ADEA), and the Americans with Disabilities Act (ADA). Among other things, the EEOC has hosted disability-focused listening and educational sessions, published technical assistance regarding the ADA and the use of AI and other technologies, and held a public hearing to examine the use of automated systems in employment decisions.
On Thursday May 4, 2023, the Biden-Harris Administration announced its plan to implement artificial intelligence (“AI”) safeguards to “protect people’s rights and safety.”
Given the rapid development of AI in workplaces, public health, education, and security, the Administration seeks to underscore related risks and opportunities. Vice President Kamala Harris and senior Administration officials have met with leaders at the forefront of AI innovation to call attention to “responsible, trustworthy, and ethical innovation with safeguards that mitigate risk and potential harms to individuals and our society.”
On Tuesday, April 25, 2023, the Equal Employment Opportunity Commission (“EEOC”), Consumer Financial Protection Bureau (“CFPB”), Justice Department’s Civil Rights Division (“DOJ”), and the Federal Trade Commission (“FTC”) issued a “Joint Statement on Enforcement Efforts Against Discrimination and Bias in Automated Systems” (“Joint Statement”). According to a press release from the EEOC, by the Joint Statement, the federal agencies pledged to uphold America’s commitment to the core principles of fairness, equality, and justice as emerging automated systems, including those sometimes marketed as “artificial intelligence,” or “AI,” become increasingly common in people’s daily lives – impacting civil rights, fair competition, consumer protection, and equal opportunity.
On April 6, 2023, the New York City Department of Consumer and Worker Protection (“DCWP”) issued a Notice of Adoption of Final Rule to implement Local Law 144 of 2021, legislation regarding automated employment decision tools (“AEDT Law”). DCWP also announced that it will begin enforcement of the AEDT Law and Final Rule on July 5, 2023. Pursuant to the AEDT Law, an employer or employment agency that uses an automated employment decision tool (“AEDT”) in NYC to screen a candidate or employee for an employment decision must subject the tool to a bias audit within one year of the tool’s use, make information about the bias audit publicly available, and provide notice of the use of the tool to employees or job candidates.
On March 8, 2023, the Michigan Legislature passed Senate Bill 4, amending the Elliott-Larsen Civil Rights Act (ELCRA) and adding protections for individuals based on their sexual orientation, gender identity or expression. Codifying the Michigan Supreme Court’s 2022 decision in Rouch World v. MI Department of Civil Rights, which held that discrimination on the basis of sexual orientation constitutes a violation of ELCRA as currently written, the amendment makes Michigan the 24th state to incorporate provisions for safeguarding individuals based on sexual orientation. The amendment, however, goes one step further to add protections for “gender identity or expression.”
On February 2, 2023, the Illinois Supreme Court filed an opinion in Jorome Tims v. Black Horse Carriers, Inc., holding that Illinois’ Biometric Information Privacy Act (BIPA) is subject to a single, five-year statute of limitations period.
On January 26, 2023, a Michigan appellate court panel in Mothering Justice v. Attorney General issued a ruling to halt changes to the State’s paid sick leave law and an increase to the State’s minimum wage for hourly workers that were set to go into effect on February 19, 2023. The ruling is the latest development in a saga that has been ongoing for more than four years.
On January 26, 2023, the National Institute of Standards and Technology (“NIST”) released guidance entitled Artificial Intelligence Risk Management Framework (AI RMF 1.0) (the “AI RMF”), intended to help organizations and individuals in the design, development, deployment, and use of AI systems. The AI RMF, like the White House’s recently published Blueprint for an AI Bill of Rights, is not legally binding. Nevertheless, as state and local regulators begin enforcing rules governing the use of AI systems, industry professionals will likely turn to NIST’s voluntary guidance when performing risk assessments of AI systems, negotiating contracts with vendors, performing audits on AI systems, and monitoring the use of AI systems.
As we recently reported, on December 9, 2022, the New York City Department of Consumer and Worker Protection (“DCWP”) announced that it was postponing enforcement of the Automated Employment Decision Tools (“AEDT”) law until April 15, 2023, due to the high volume of public comments it received regarding its proposed rules.
On December 21, 2022, the Michigan Supreme Court held that the Whistleblowers’ Protection Act (“WPA”) protects employees who report that their employer has violated “suspected” laws in a case called Janetsky v. County of Saginaw. In a first-of-its-kind ruling, the divided Court in Janetsky concluded that an assistant county prosecutor could bring WPA claims against her supervisor who she believed illegally offered a below-minimum plea deal.
As we previously noted, New York City’s Automated Employment Decision Tools Law (“AEDT Law”), regulating employers’ use of automated employment decision tools, with the aim of curbing bias in hiring and promotions, had an effective date of January 1, 2023. In late September 2022, we reported about the New York City Department of Consumer and Worker Protection (“DCWP”) issuing a Notice of Public Hearing and Opportunity to Comment on Proposed Rules related to the AEDT law. The hearing subsequently took place on November 4, 2022, and dozens of organizations and individuals submitted comments, leaving many observers wondering whether the comments would impact the quickly approaching January 1, 2023 enforcement date and how the DCWP would interpret the law.
On October 31, 2022, the General Counsel of the National Labor Relations Board (“NLRB” or “Board”) released Memorandum GC 23-02 urging the Board to interpret existing Board law to adopt a new legal framework to find electronic monitoring and automated or algorithmic management practices illegal if such monitoring or management practices interfere with protected activities under Section 7 of the National Labor Relations Act (“Act”). The Board’s General Counsel stated in the Memorandum that “[c]lose, constant surveillance and management through electronic means threaten employees’ basic ability to exercise their rights,” and urged the Board to find that an employer violates the Act where the employer’s electronic monitoring and management practices, when viewed as a whole, would tend to “interfere with or prevent a reasonable employee from engaging in activity protected by the Act.” Given that position, it appears that the General Counsel believes that nearly all electronic monitoring and automated or algorithmic management practices violate the Act.
On Tuesday October 4, 2022, the White House Office of Science and Technology Policy (“OSTP”) released a document entitled “Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People” (the “Blueprint”) together with a companion document “From Principles to Practice: A Technical Companion to the Blueprint for an AI Bill of Rights” (the “Technical Companion”).
On Friday, September 23, 2022, the New York City Department of Consumer and Worker Protection (“DCWP”) released a Notice of Public Hearing and Opportunity to Comment on Proposed Rules related to its Automated Employment Decision Tool law (the “AEDT Law”), which goes into effect on January 1, 2023. As we previously wrote, the City passed the AEDT Law to regulate employers’ use of automated employment decision tools, with the aim of curbing bias in hiring and promotions; as written, however, it contains many ambiguities, which has left covered employers with open questions about compliance.
Over the past several years, workplace artificial intelligence (“AI”) tools have matured from novel to mainstream. Whether facilitating attracting, screening, hiring, and onboarding job applicants or charting the career path or promotability of current employees, workplace AI tools will likely become more prevalent. Legislators and administrative agencies have taken note and are in various stages of examining and regulating these tools, with the primary goal of ensuring that they do not violate federal and state laws prohibiting workplace discrimination.
Next month, New Jersey private employers will need to start informing drivers before using GPS tracking devices in the vehicles they operate. A new state law that becomes effective April 18, 2022, requires employers to provide written notice to employees before using “electronic or mechanical devices” that are “designed or intended to be used for the sole purpose of tracking the movement of a vehicle, person, or device.” The notification requirement applies to both employer-owned or -leased and personal vehicles.
As of December 11, 2021, the Bill regulating employers’ use of automated employment decision tools has been enacted. Compliance with the Bill’s requirements begins January 1, 2023.
***
Joining Illinois and Maryland, on November 10, 2021, the New York City Council approved a measure, Int. 1894-2020A (the “Bill”), to regulate employers’ use of “automated employment decision tools” with the aim of curbing bias in hiring and promotions. The Bill, which is awaiting Mayor de Blasio’s signature, is to take effect on January 1, 2023. Should the Mayor not sign the Bill within thirty days of the Council’s approval (i.e., by December 10), it will become law absent a veto.
Recruiting qualified applicants and hiring top talent have always been time-consuming endeavors that come with constant worry about making a wrong hire. Added to this, the COVID-19 pandemic effectively put a halt to employers’ ability to evaluate applicants in person. These factors, and others, have led many employers to adopt, or to consider adopting, artificial intelligence (AI) tools to optimize recruitment by introducing efficiencies, reaching a broader pool of applicants, increasing consistency and uniformity in the evaluation of applicants, and, in some cases, helping employers meet diversity, equity, and inclusion goals. Typically, employers opting to use AI contract with third-party vendors that offer AI-powered algorithms, which perform a variety of functions, such as cognitive assessments, personality tests, and video interviews.
As of December 29, 2020, Michigan employers are no longer required to permit employees to self-quarantine for up to 14 days due to alleged close contact with an individual displaying COVID-19 symptoms. Recent amendments to Michigan’s Anti-Retaliation COVID-19 law reflect updated CDC guidance reducing the recommended length of quarantine for individuals who suspect exposure to COVID-19. Previous CDC guidance recommended that individuals quarantine for up to 14 days following close contact with an individual displaying COVID-19 symptoms. Now, the CDC recommends a 10-day ...
Michigan recently announced two COVID-19 developments that will impact employers and their workplaces. Most recently, the Michigan Department of Health and Human Services (MDHHS) issued new restrictions for business operations in the state that are set to take effect on November 18 and last through December 8, 2020 (the “Three Week Pause Order”). The Three Week Pause Order followed an announcement late last week by the Michigan Occupational Safety and Health Administration (MIOSHA) of a State Emphasis Program (SEP) focused on indoor activities and venues, including ...
October has brought a weekly flurry of changes to Michigan’s COVID-19 legal landscape. On Thursday, October 22, 2020, Governor Whitmer added to this recent activity by signing three bills into law that provide employers with significant liability protection and employees with job protections related to COVID-19.
Employer Protections: Liability Shield
Titled the “COVID-19 Response and Reopening Liability Assurance Act,” HB 6030 provides employers with immunity from liability for a “COVID-19 claim” as long as the employer acted in compliance with all federal ...
In a recent Bloomberg Law article, we reported on legislative developments regulating the use of artificial intelligence (“AI”) in employment law decisions. On May 11, 2020, one of the pieces of proposed legislation we discussed, Maryland’s H.B. 1202, became law without Governor Larry Hogan’s signature. As we reported, H.B. 1202 prohibits employers from using facial recognition technology during pre-employment job interviews without the applicant’s consent. To use facial recognition services when interviewing applicants, an employer must obtain an applicant’s ...
As Michigan businesses begin the process of reopening, they must comply with Governor Gretchen Whitmer’s Executive Order 2020-91 (“Order”) regarding “Safeguards to protect Michigan’s workers from COVID-19.” The Order includes detailed safety standards with which employers in construction, manufacturing, retail, research labs, offices, and restaurants must comply, with the stated goal of protecting workers and customers from the novel coronavirus.
Whereas the specific safety standards required by the Order differ by industry, all businesses or operations ...
Joining California, Delaware, Illinois, Louisiana, Massachusetts, New Jersey, New York, Ohio, as well as multiple counties and cities, on March 23, 2020, Michigan’s Governor Gretchen Whitmer issued Executive Order 2020-21 (COVID-19) (“Order”), ordering that all Michigan residents “shelter in place” in response to the novel coronavirus (“COVID-19”), effective 12:01 a.m. on Tuesday, March 24, 2020, and continuing through April 13, 2020, at 11:59 p.m.
Among other things, the Order prohibits an employer from requiring its workers to leave their homes, unless ...
We are pleased to present Workforce Bulletin, the newest blog from law firm Epstein Becker Green (EBG).
We've combined a decade of posts from five of the firm's well-regarded blogs, spanning employment law topics impacting employers in a range of industries and areas, including financial services, hospitality, OSHA, retail, technology, and more.
Workforce Bulletin will feature thought leadership from EBG attorneys on cutting-edge issues, such as sexual harassment, diversity and inclusion, pay equity, artificial intelligence in the workplace, cybersecurity, and the impact of the coronavirus outbreak on human resources. While individual posts will often address such issues in industry-specific contexts, this broader resource will give employers in all industries the benefit of discussions and information that might not have come to their attention through previous single-industry platforms.
As we have previously blogged, use of third-party digital hiring platforms to select job applicants using video interviews can present an array of potential legal issues. A recent Complaint filed with the Federal Trade Commission (“FTC”) by a consumer advocacy organization, Electronic Privacy Information Center (“EPIC”), illustrates some of those potential pitfalls. EPIC asks the FTC to investigate the recruiting technology company HireVue for alleged discriminatory screening of job applicants through its face-scanning software. HireVue asks job applicants to ...
We have long counseled employers using or contemplating using artificial intelligence (“AI”) algorithms in their employee selection processes to validate the AI-based selection procedure using an appropriate validation strategy approved by the Uniform Guidelines on Employee Selection Procedures (“Uniform Guidelines”). Our advice has been primarily based on minimizing legal risk and complying with best practices. A recently updated Frequently Asked Questions (“FAQ”) from the Office of Federal Contract Compliance Programs (“OFCCP”) provides further ...
Increasingly companies are using third-party digital hiring platforms to recruit and select job applicants. These products, explicitly or implicitly, promise to reduce or eliminate the bias of hiring managers in making selection decisions. Instead, the platforms grade applicants based on a variety of purportedly objective factors. For example, a platform may scan thousands of resumes and select applicants based on education level, work experience, or interests, or rank applicants based on their performance on an aptitude test – whatever data point(s) the platform has been ...
As we previously reported, since 2017 employees have filed dozens of employment class actions claiming violations of Illinois’ 2008 Biometric Information Privacy Act (“BIPA”). In short, BIPA protects the privacy rights of employees, customers, and others in Illinois against the improper collection, usage, storage, transmission, and destruction of biometric information, including biometric identifiers, such as retina or iris scans, fingerprints, voiceprints, and scans of face or hand geometry. Before collecting such biometric information, BIPA requires an ...
Employers continue to incorporate the use of biometric information for several employee management purposes, such as in systems managing time keeping and security access that use fingerprints, handprints, or facial scans. Recently, Illinois state courts have encountered a substantial increase in the amount of privacy class action complaints under the Illinois Biometric Information Privacy Act (“BIPA”), which requires employers to provide written notice and obtain consent from employees (as well as customers) prior to collecting and storing any biometric data. Under ...
Human Resources and Payroll should advise employees in their departments to be on the lookout for the latest tax season phishing scam designed to steal employees’ tax-related information and Social Security numbers. Given the regular frequency of these types of attacks, employers should be taking appropriate steps to safeguard employee Personally Identifiable Information (“PII”). At a minimum, Human Resources should have in place written policies regarding the handling of employee PII and provide training designed to protect employee PII against a data breach. Because ...
For years, companies have been struggling to understand the multitude of locations where their data resides. From traditional employment files with embedded Social Security numbers, to new-aged hiring software with videos of job applicants, and enterprise software used to facilitate employee communications, controlling employee, customer, and corporate data is, to say the least, a logistical challenge. One of the newest entries into the mix is the increased use of ShadowIT and cloud-based storage systems.
ShadowIT involves workers’ use of unsanctioned products and ...
Imagine that an employee asks to come to your office to address concerns about workplace harassment. Pursuant to the company’s open door and non-harassment policies, you promptly schedule a meeting. When the employee arrives, she sits down, sets her smartphone on the desk facing you, and turns on the video camera before beginning to speak. Can you instruct her to turn off the recording device? Can you stop the meeting if she refuses? Would the answer change if the recording was surreptitious?
The answers to questions like these have become blurrier since last year’s decision by the ...
Employers in the technology, media and telecommunications industry are faced with many workplace management and legal compliance challenges. Among these are trends in the shared economy and rise of the contingent workforce, data privacy and security, and use of social media in connection with recruitment, employee monitoring and termination. At the recent Epstein Becker Green 34th Annual Workforce Management Briefing held at the New York Hilton, members of the firm’s TMT Group including the authors of this post, along with in-house counsel speakers Rebecca Clar of AOL and ...