As featured in #WorkforceWednesday:  This week on our special podcast series, Employers and the New Administration, we look at what President Biden’s support for unions throughout his political career might mean for labor management relations.

In this episode, Glenn Spencer, Senior Vice President of the Employment Policy Division at the U.S. Chamber of Commerce, and attorney Steve Swirsky discuss what employers can expect from the NLRB under the Biden administration. Attorney David Garland leads the conversation.

See below for the video edition and the extended audio podcast:

Video: YouTube, Vimeo.

Extended Podcast: Apple Podcasts, Google Podcasts.

Today is “Equal Pay Day” in the United States, a symbolic date used to focus attention on the gap between men’s and women’s wages.  Current estimates show that women still earn only 82 cents for every dollar a man earns.  While there are various opinions about the pay gap and what it means, today is not a day to celebrate; rather, it is a day for honest reflection.  Ask the question: is your pay system equitable?

While reflecting on Equal Pay Day, also ask: what can employers do?  Treat pay equity as a business imperative and do a deep dive on pay systems and data.  Start with a privileged pay analysis, which provides maximum protection for both the process and the results.  A privileged pay analysis can provide the information necessary to identify disparities and diagnose why they exist, and it affords an opportunity to fix them.  There may be perfectly legitimate and lawful reasons for pay differences based on education, experience, job scope, or tenure, among many other factors.  Each employer values different elements; some weigh education more heavily than experience, and vice versa.  A pay analysis shines a light on disparities and points out areas requiring further examination.
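The kind of analysis described above typically pairs a raw gap calculation with a regression that controls for legitimate factors. The sketch below uses hypothetical, synthetic numbers (it is illustrative only, not a substitute for a privileged statistical analysis) to show how a sizable raw gap can disappear once a legitimate factor such as experience is controlled for:

```python
import numpy as np

# Hypothetical, synthetic pay data: pay here is driven entirely by
# years of experience (pay = 50,000 + 2,000 * experience).
experience = np.array([10, 8, 12, 4, 6, 2], dtype=float)
is_woman   = np.array([0, 0, 0, 1, 1, 1], dtype=float)
pay        = 50_000 + 2_000 * experience

# Raw (unadjusted) gap: simple difference in mean pay between groups.
raw_gap = pay[is_woman == 0].mean() - pay[is_woman == 1].mean()

# Adjusted gap: ordinary least squares controlling for experience.
# The coefficient on is_woman estimates the pay difference remaining
# after the legitimate factor is accounted for.
X = np.column_stack([np.ones_like(pay), experience, is_woman])
coefs, *_ = np.linalg.lstsq(X, pay, rcond=None)
adjusted_gap = coefs[2]

print(f"raw gap: {raw_gap:,.0f}")            # 12,000 in this synthetic data
print(f"adjusted gap: {adjusted_gap:,.2f}")  # ~0 once experience is controlled
```

In this contrived dataset the entire raw gap is explained by the experience mix; a real analysis would test many factors and, where the adjusted gap remains material, flag the group for further examination.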

Pay equity analysis results are often surprising, even in the best-run organizations.  If an analysis confirms equitable treatment, it may be advantageous for talent attraction and retention to share the findings with employees or the public. Good results may also have a positive impact on the organization’s reputation.

If, however, the pay analysis discloses issues, stay focused and start working to understand why.  What is the data saying?  Are there systemic reasons (starting salary, merit increases, gender channeling) causing the gaps?  Alternatively, are there simpler, non-systemic reasons, such as a few key outlier hires?  Remember, every hire, discharge, promotion, or demotion changes the pay data and the comparisons.

Organizations that decide to conduct a privileged analysis will be in complete control of any pay adjustments and their timing, rather than a government regulator, plaintiff’s class action law firm, shareholder activist, or employee group.  Do not underestimate the value and flexibility this provides as an incentive to act.  Those who believe their pay system and decisions are beyond reproach should be congratulated as one of the select few.  For everyone else, however, it is better to understand the landscape now, because that which is unknown can hurt.  Why not use Equal Pay Day as a starting point?

Epstein Becker Green has the experience and resources to help organizations on their pay equity journey.  Please reach out to the author or your Epstein Becker Green attorney for assistance.

President Biden’s January 21, 2021, Executive Order (EO) on COVID-19 tasked the Occupational Safety and Health Administration (OSHA) with launching a national enforcement program, reviewing and correcting any shortcomings in its prior enforcement strategies, and determining whether any Emergency Temporary Standard (ETS) was necessary and, if so, issuing an ETS by March 15, 2021.  The prior Administration had not issued an ETS and was severely criticized by Congress and labor unions.

On March 12, 2021, OSHA fulfilled some of the EO directives by publishing two COVID-19 initiatives to bolster safety enforcement during the remaining period of the pandemic, but it did not issue an ETS as expected.  While the original deadline has now passed, OSHA reportedly is preparing to issue the ETS within the next few weeks and is currently working with the White House on regulatory review.

The first announced initiative is a COVID-19 National Emphasis Program (NEP) Directive, whose goal is to significantly reduce or eliminate worker exposures to COVID-19.  The NEP will focus OSHA resources on target industries and worksites where employees may have a high frequency of close contact exposures. The NEP combines inspection targeting, employer outreach, and compliance assistance to promote safe workplaces.

Target or high-hazard industries include healthcare, meat and poultry processing, supermarkets, restaurants, discount department stores, general warehousing and storage facilities, and correctional institutions.  The NEP also includes an expansive secondary target industry list covering a myriad of manufacturing, construction, general merchandise stores, and transportation companies, among others.

For the NEP, each OSHA Region will dedicate a high percentage of inspections (at least 5% or 1,600 nationally) to COVID-19 until further notice. OSHA expects that the majority of the inspections will continue to occur in healthcare establishments, based on their enforcement data showing higher COVID-19-related complaints, referrals and severe incident reports at healthcare worksites.

The NEP will also target worksites previously inspected for COVID-19-related hazards with follow-up inspections to ensure effective abatement. It is likely that OSHA will revisit any establishment that received COVID-19 citations.  The NEP took effect immediately on March 12.

The second announced OSHA initiative is an update to its Interim Enforcement Response Plan that prioritizes the use of on-site workplace inspections where practical, or a combination of on-site and remote methods. OSHA will use remote-only inspections only if the agency determines that on-site inspections cannot be performed safely. On March 18, 2021, OSHA will rescind the May 26, 2020, memorandum on this topic, and the new guidance will take effect and remain in place until further notice.

The updated Interim Enforcement Response Plan relies heavily on Centers for Disease Control and Prevention (CDC) guidance on a wide range of issues, including: type of work activity, safe distancing, hygiene protocols, and the ability of workers to wear face coverings and appropriate personal protective equipment (PPE). During investigations, OSHA will consult current CDC guidance in assessing potential workplace hazards and evaluate the adequacy of an employer’s protective measures for workers.  Where the protective measures implemented by an employer are not as protective as those recommended by the CDC, OSHA will determine whether employees are exposed to a recognized hazard and whether there are feasible means to abate that hazard.  This could be difficult for employers due to the evolving nature of guidance issued by both agencies, as seen repeatedly during the course of the pandemic.

If OSHA issues an ETS as expected, all violations under the ETS will take precedence over general duty clause citations (the catch-all safety standard for OSHA). In all cases where the investigation determines that a condition exists warranting issuance of a general duty clause violation for an occupational exposure to COVID-19, the proposed citation will be reviewed with the OSHA Regional Administrator and the National Office prior to issuance.  In general duty clause cases, the Regional Offices shall also consult with their Regional Solicitor.  This higher-level review process indicates that OSHA wants a coordinated approach to COVID-related infractions of this type.

We will continue to monitor for relevant developments and update as needed. If you have any questions, please contact the author or your Epstein Becker Green attorney directly.

On March 12, 2021, the Equal Employment Opportunity Commission (EEOC) announced that the EEO-1 Component 1 data collection period will open at the end of April 2021 and close in July 2021.  Submission of the EEO-1 Report is required for employers with 100 or more employees, and for applicable federal government contractors with 50 or more employees and contracts of $50,000 or more. The agency has not announced an exact closing date, indicating:

The EEO-1 Component 1 data collection will open at the end of April 2021 and close in July 2021. The exact closing date will be posted when the data collection launches. Employers will be notified of additional details and how to access the online filing system in April.

The EEOC will collect EEO-1 data on the gender and race/ethnicity make-up of the workforce for 2019 and 2020. Pay data is not being collected.  Additional details, including how to access online filing, will be released by the EEOC by the end of April 2021, and information regarding EEO-1 Component 1 filing, including guidelines, will be available on the EEOC’s website.  Should you have questions or concerns, please contact the attorneys at Epstein Becker Green.

Recruiting qualified applicants and hiring top talent have always been time-consuming endeavors that come with constant worry about making a wrong hire. Added to this, the COVID-19 pandemic effectively put a halt to employers’ ability to evaluate applicants in person. These factors, and others, have led many employers to adopt, or to consider adopting, artificial intelligence (AI) tools to optimize recruitment by introducing efficiencies, reaching a broader pool of applicants, increasing consistency and uniformity in the evaluation of applicants, and, in some cases, helping employers meet diversity, equity, and inclusion goals. Typically, employers opting to use AI contract with third-party vendors that offer AI-powered algorithms, which perform a variety of functions, such as cognitive assessments, personality tests, and video interviews.

What does this “optimization” of talent acquisition look like in practice?

Consider the following hypothetical example:

A technology company uses a cognitive assessment test purchased from a vendor to screen potential candidates. The test is accessible only via a mobile device app. Historical data suggests that women, on average, score lower in certain attributes that are important to the technology company. An algorithm considers all of the information about an applicant, including their scores on that test, and makes recommendations as to which candidates should be interviewed in person. The algorithm was trained using internal company data (including resumes, application information, performance reviews, and cognitive assessment test results) from a group of incumbent employees that the company has identified as high performers. The vendor has provided the company with case studies from other clients and validation analyses to demonstrate the algorithm’s effectiveness and fairness.

What legal issues could this present?

Employers investing in AI to assist in recruiting and hiring should be careful not to simply “plug and play.” Although no federal laws expressly regulate the use of AI in employment decisions, its use is likely subject to several statutes, particularly laws against discrimination. Concerns about potential discriminatory bias in recruitment AI have the attention of federal regulators. For example, on December 8, 2020, a group of ten U.S. Senators (including then-Senator and current Vice President Kamala Harris) sent a letter to the Equal Employment Opportunity Commission asking for more information about the EEOC’s oversight authority over hiring technologies. While recognizing that hiring technologies may reduce the role of individual hiring managers’ biases, the senators expressed concern that AI “can also reproduce and deepen systemic patterns of discrimination reflected in today’s workforce data.” Questions posed by the senators in their letter included whether the EEOC has ever used its authority to investigate and/or enforce against discrimination related to the use of hiring technologies, and whether the EEOC intends to use its authority to study and investigate the development and design, use, and impacts of hiring technologies.

In addition, on January 1, 2021, Congress passed the National Defense Authorization Act, which included the National Artificial Intelligence Initiative Act of 2020 (NAIIA). The NAIIA will fund educational and training programs to prepare the workforce to interact with AI systems, and will require governmental agencies to reach out to diverse stakeholders, including civil rights and disability rights organizations, for input on AI initiatives. Meanwhile, in the absence of specific federal legislation governing the use of hiring technologies, several state and local governments are taking up the issue, with some, such as Illinois, having already passed laws addressing the use of AI in hiring decisions.

What should employers prepare for?

Given the increased use of hiring technologies by employers and the rising concerns about the potential for a perpetuation of discriminatory hiring through their use, employers should expect increased scrutiny in this area from federal and state regulators and legislators. In the above example, the technology company using a cognitive assessment test to screen applicants should carefully consider the test’s impact on protected groups. The algorithm, for instance, was trained on internal company data, which in part suggests that women score lower than men in important attributes. Carefully crafting a job assessment to ensure only key knowledge, skills, and abilities are taken into account, and considering potential reasons for disparities (e.g., comfort in using a mobile app for taking a test) are some ways the company could address this issue.

Relatedly, the company should also consider the test’s accessibility for persons with disabilities, and others. Will people with physical and mental disabilities be able to take the test and, if not, what reasonable accommodations might be offered? The fact that applicants can access the test only on a mobile device app may also negatively affect people with disabilities, as well as senior citizens or low-income applicants who do not own mobile devices.

In addition, the company cannot expect to validate its use of the test through case studies and validation analyses from the vendor’s other customers. Pursuant to the Uniform Guidelines on Employee Selection Procedures, which apply to all selection procedures used to make employment decisions (such as the cognitive assessment test in the hypothetical), employers must conduct a validity study in order to use any test that adversely impacts a protected group. Validity studies conducted on the employer’s own workforce are preferable, because those studies ensure the accuracy and fairness of the test as it relates to the affected applicants.
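For context, the adverse-impact screen under the Uniform Guidelines is commonly operationalized with the “four-fifths rule”: the selection rate for each group is compared to the rate for the most-selected group, and a ratio below 0.8 (80%) is generally regarded by enforcement agencies as evidence of adverse impact. A minimal sketch, using hypothetical pass rates on the assessment test:

```python
# Hypothetical pass rates on the cognitive assessment test.
men_passed, men_tested = 50, 100
women_passed, women_tested = 30, 100

men_rate = men_passed / men_tested        # 0.50
women_rate = women_passed / women_tested  # 0.30

# Four-fifths (80%) rule: the lower group's selection rate divided
# by the higher group's rate; a ratio below 0.8 suggests adverse impact.
impact_ratio = min(men_rate, women_rate) / max(men_rate, women_rate)

print(f"impact ratio: {impact_ratio:.2f}")  # 0.60 -> adverse impact indicated
```

In this hypothetical, the 0.60 ratio falls well below the 0.8 threshold, which is the point at which a validity study of the test would become necessary under the Guidelines.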

We will discuss these issues in more detail during our upcoming virtual briefing on bias in AI. To learn more about the legal risks of and solutions to bias in AI, please join us at Epstein Becker Green’s virtual briefing on Bias in Artificial Intelligence: Legal Risks and Solutions on March 23 from 1:00 – 4:00 p.m. (ET). To register, please click here.

As featured in #WorkforceWednesday:  This week, COVID-19 recovery and safety are top of mind as new stimulus funding, an Occupational Safety and Health Administration (“OSHA”) directive, and paid leave requirements are put in place.

Video: YouTube, Vimeo.

Enacted on December 4, 2020, the Internet of Things Cybersecurity Improvement Act of 2020 (the “IoT Act”) is expected to dramatically improve the cybersecurity of the ubiquitous IoT devices.[1] With IoT devices on track to exceed 21.5 billion by 2025, the IoT Act mandates cybersecurity standards and guidelines for the acquisition and use by the federal government of IoT devices capable of connecting to the Internet. The IoT Act, and the accompanying standards and guidance being developed by the National Institute of Standards and Technology (NIST) will directly affect government contractors who manufacture IoT devices for federal government use, or who provide services, software or information systems using IoT devices to the federal government.

There will also be a significant indirect effect on private sector organizations purchasing IoT devices, or systems using such devices, for corporate use. Indeed, Congress specifically intended a wide-ranging spillover effect on the private sector, with the expectation that the proverbial rising tide will lift all boats. Organizations will ultimately need to determine whether they will purchase and use IoT devices, software, and systems that meet the standards for federal use, or acquire insecure or less secure IoT devices and systems. Corporations that consume and use IoT devices and systems, including in manufacturing, logistics, healthcare, hospitality, and retail, should consider the impact the IoT Act will have on organizational cybersecurity. The IoT Act and the accompanying NIST standards will influence compliance under state and federal laws providing for the cybersecurity of protected information, such as personal or private information and protected health information (PHI).

Among other things, the IoT Act contains the following requirements:

  • NIST STANDARDS AND GUIDELINES FOR USE AND MANAGEMENT OF IoT DEVICES: NIST shall publish standards and guidelines for the federal government’s use of IoT devices, including minimum information security requirements for managing cybersecurity risks. The guidance shall address secure development, identity management, patching, and configuration management. NIST shall “consider relevant standards, guidelines and best practices developed by the private sector, agencies, and public-private partnerships.” As noted in the legislative history, there is presently no national standard to ensure the security of IoT devices; the inability to effectively patch these devices or to set secure device passwords, among other vulnerabilities, poses a significant threat to the nation’s infrastructure and security.
  • NIST GUIDELINES FOR THE DISCLOSURE AND RESOLUTION OF IoT DEVICE VULNERABILITIES: NIST shall also publish guidelines: (a) for the reporting and publishing of security vulnerabilities of information systems owned or controlled by a federal agency (including IoT devices owned or controlled by an agency), and the resolution of such vulnerabilities; and (b) for a contractor or subcontractor providing such systems receiving vulnerability information and dissemination of information about the resolution of such security vulnerability. Significantly, the guidelines are to include example content, on the vulnerability disclosures that should be “reported, coordinated, published or received” by a contractor, or any subcontractor thereof.
  • ISSUANCE OF FEDERAL AGENCY INFORMATION SECURITY POLICIES AND PRINCIPLES: The Director of the Office of Management and Budget shall review agency information security policies and principles based on the NIST standards and guidance, and issue policies and principles as necessary to align the policies and principles with NIST standards and guidelines.
  • REVISIONS TO THE FEDERAL ACQUISITION REGULATION: The Federal Acquisition Regulation shall be revised as necessary to implement the NIST standards and guidelines.
  • CONTRACTOR COMPLIANCE WITH NIST STANDARDS AND GUIDELINES: Federal agencies are prohibited from procuring, obtaining, renewing a contract to procure or obtain, or using an IoT device, if the Chief Information Officer (CIO) of the agency determines that the use of such device prevents compliance with the NIST standards and guidelines, subject to a waiver for certain devices. This prohibition takes effect in December 2022, effectively providing for a two-year ramp up for planning to meet the new standards.

NIST has published draft guidance on IoT device cybersecurity, for which the comment period ended on February 26, 2021. According to NIST, the guidance offers a suggested starting point for manufacturers who are building IoT devices for the federal government market, as well as guidance to federal agencies on what they should ask for when they acquire these devices. NIST has presented publicly on the guidance, received comments, and is in the process of finalizing it. See, e.g., NIST draft SP 800-213 and draft NISTIRs 8259B, 8259C, and 8259D, as well as the final NISTIR 8259 and NISTIR 8259A. These publications collectively discuss both technical and non-technical controls for securing federal IoT devices, including standards for manufacturing and acquiring these devices.

Organizations should do the following now to plan for the IoT Act taking effect in December 2022:

  • Manufacturers who produce IoT devices for use by the federal government should review the draft guidance and await the final NIST guidance and standards, and develop appropriate device level requirements and documentation. They will also need to plan to develop processes to publicly report and mitigate vulnerabilities in their devices.
  • Federal contractors, including software and service providers, should identify information systems that use IoT devices, and plan to meet the NIST IoT guidance and standards, including in their IoT device specifications, vendor selection and contractual requirements. Acquisition, purchasing and contracting decisions made in the coming months may impact the organization’s ability to be utilizing secure IoT devices as of December 2022.
  • Organizations that are not federal contractors should consider how NIST IoT standards and guidance may impact their compliance with cybersecurity laws requiring reasonable safeguards for protected information depending on the use cases (e.g., Gramm-Leach Bliley, Health Insurance Portability and Accountability Act (HIPAA); HR7898 as a defense or mitigation to HIPAA enforcement, NY SHIELD Act, California Civil Code §1781.5, Massachusetts data protection regulation, Illinois Personal Information Protection Act and Biometric Information Protection Act (BIPA)), including potential impact on risk assessments, risk management frameworks (including NIST frameworks – e.g., SP 800-53, NIST Cybersecurity Framework and other information security standards, such as ISO, OWASP), vendor selection, purchasing and contracting, RFP processes, supply chain risk and workforce training. The organization should identify IoT devices incorporated into its information systems and their usage in light of the NIST guidance. Chief Information Security Officers (CISOs) and Chief Technology Officers (CTOs) should determine whether voluntarily following the prohibition operable on their counterparts in federal agencies against using non-compliant IoT devices and systems furthers the organization’s compliance and risk reduction strategies, and the potential adverse consequences of not doing so. The potential impact of NIST IoT cybersecurity guidance on private sector compliance and risk reduction strategy should involve information technology, information security, compliance, personnel, and legal departments, as well as the individual business units responsible for the IoT device use.

EBG works closely, under attorney-client privilege, with organizations to conduct risk assessments and develop information security programs, manage supply chain risk and identify recognized security practices that may bolster practical security and improve compliance defensibility. Any questions may be directed to the authors or another member of EBG’s Privacy, Cybersecurity, and Data Asset Management Group. Brian G. Cesaratto is a Certified Information Systems Security Professional (CISSP) and Certified Ethical Hacker (CEH). Alexander Franchilli is an Associate in the Employment, Labor & Workforce Management and Litigation practices, in the New York office of Epstein Becker Green.

[1] IoT devices “have at least one transducer (sensor or actuator) for interacting directly with the physical world, have at least one network interface, and are not conventional Information Technology devices, such as smartphones and laptops, for which the identification and implementation of cybersecurity features is already well understood, and can function on their own and are not only able to function when acting as a component of another device, such as a processor.” The wide range of IoT devices that connect to the Internet includes security cameras and systems, geolocation trackers, smart appliances (e.g., TVs, refrigerators), fitness trackers and wearables, medical device sensors, driverless cars, industrial and home thermostats, biometric devices, manufacturing and industrial sensors, farming sensors, and other smart devices.

A critical component of a successful employer-employee relationship is the employer’s fair and equitable treatment of employees, often embodied in the employer’s employee engagement, retention, and compensation practices.  When it comes to compensation, U.S. employers must comply with federal and applicable state equal pay laws that prohibit discriminatory pay practices, and a myriad of state and local laws banning inquiries into, or the use of, prior salary history in setting pay.  Yet, compensation bias and discrimination still exist and continue to be the subject of government investigations, audits, and litigation.

With the growing use of artificial intelligence (AI) tools in every aspect of our lives, it is not surprising that companies are increasingly deploying AI and machine learning algorithms in various human resources functions, including compensation management and decision-making, as well as in the prediction of future employee performance.  Many argue that AI-based compensation tools can be used to close pay gaps and end discriminatory compensation practices.   The question arises, however, as to whether these tools can provide employers with a reliable method of providing objective, fair and precise compensation structures for their workforce, or whether they could intentionally (or unintentionally) cause an employer to violate applicable law and perpetuate bias in compensation structures.

Consider the following scenario:

A professional services firm has a large percentage of its employees working remotely; consequently, supervisors do not believe they have a good understanding of the contributions each employee makes.  So the firm collects metrics that are intended to reflect the value of each employee’s contribution to the company.  Because many metrics are subjective and hard to measure, the firm uses proxy metrics such as time spent in training, the type of training, and “electronic presence,” which is a measurement of the time each employee spends electronically active working at the computer.  The firm uses an algorithm that is trained on all the data collected for all workers, and that algorithm makes a compensation recommendation based on its analyses of the collected data points and its predictions of future performance.  The employee’s supervisor makes the ultimate compensation decision, informed by the algorithm’s recommendation as well as qualitative and subjective assessments of the employee’s contributions (such as quality of work product, timely completion of projects, responsiveness, enhancement of skills, and innovation).  The firm also collects market data to reflect, for each function at the company, the going market rate of pay as well as trends in compensation for certain skill sets, and uses the algorithm to assist supervisors in making salary recommendations for new hires and for purposes of promotions.  The supervisors are increasingly relying on the algorithms, especially when they do not have time to review each employee’s or candidate’s file completely.  The company believes the system is fair but hasn’t done any special testing to identify any particular biases.  The question arises: to what legal risks, if any, is the employer exposed?

In this scenario, the AI-based compensation tool is being used for multiple purposes, from setting pay for new hires, to determining promotions, and to assessing remote worker performance.  Despite these well-intended uses, there are potential legal risks depending on the nature and source of the data used to train the algorithm.

Datasets used to train the algorithms may be comprised of an employer’s existing internal data, data from external sources, or a combination of both.   Employers should evaluate the quality of the initial data collected, and monitor any evolution of the data, for discriminatory factors.  These datasets may include employee data for particular positions requiring a certain educational, skill or experience level with varying compensation levels.  The appropriate grouping of employees performing the same job functions or job types is also critical to the assessment. AI tools should be carefully calibrated to compare the proper categories of employees. Any errors in these datasets could skew the results produced by the AI tool in a manner that could adversely affect the employer’s compensation practices.
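One way to sketch the calibration point above, using hypothetical data and an illustrative review threshold (the 0.95 figure here is an assumption for demonstration, not a legal standard), is to compare the algorithm’s mean recommendation across groups within each job category and flag categories that warrant closer review:

```python
from collections import defaultdict

# Hypothetical algorithm output: (job category, group, recommended pay).
recommendations = [
    ("Engineer", "M", 100_000), ("Engineer", "F", 80_000),
    ("Engineer", "M", 100_000), ("Engineer", "F", 80_000),
    ("Analyst",  "M", 60_000),  ("Analyst",  "F", 60_000),
]

# Group recommendations by (job, group) so comparisons stay within the
# same job category -- comparing across categories would be invalid.
by_job_group = defaultdict(list)
for job, group, rec in recommendations:
    by_job_group[(job, group)].append(rec)

def mean(values):
    return sum(values) / len(values)

REVIEW_THRESHOLD = 0.95  # illustrative trigger for closer examination

flagged = []
for job in sorted({j for j, _, _ in recommendations}):
    group_means = [mean(by_job_group[(job, g)]) for g in ("M", "F")]
    ratio = min(group_means) / max(group_means)
    if ratio < REVIEW_THRESHOLD:
        flagged.append(job)

print(flagged)  # ['Engineer'] -- this category warrants further review
```

A flagged category is only a starting point for inquiry: the disparity may reflect legitimate factors such as experience or skills, which is why the grouping and the inputs feeding the algorithm both need scrutiny.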

In general, with these datasets, we must consider whether use of the AI tool can lead to compensation decisions that have a disparate impact on employees.  Further, could use of the AI tool potentially violate applicable laws that prohibit inquiries into prior salary histories?  Should the results from these AI tools be used by employers to make compensation decisions without additional input from supervisors?

These are among the questions that we’ll explore in our upcoming Labor & Employment-Recruiting and Compensation workshop.

To learn more about the legal risks of and solutions to bias in AI, please join us at Epstein Becker Green’s virtual briefing on Bias in Artificial Intelligence: Legal Risks and Solutions on March 23 from 1:00 – 4:00 p.m. (ET). To register, please click here.

As featured in #WorkforceWednesday:  In this episode, hear from EEOC Commissioner Keith Sonderling. As a sitting commissioner, Mr. Sonderling has a unique perspective on priorities, new initiatives, and the outlook for what employers can expect from the agency in 2021. Attorney David Garland leads the conversation.

Employers and the New Administration is a special podcast series from Employment Law This Week®, with analysis of the first 100 days of the Biden administration. Special podcast episodes air every other #WorkforceWednesday.

If you’d like to hear more from Commissioner Sonderling, he’ll also be speaking at our virtual briefing, Bias in Artificial Intelligence: Legal Risks and Solutions, on March 23. Register here.

See below for the video edition and the extended audio podcast.

Video: YouTube, Vimeo.

Extended Podcast: Apple Podcasts, Google Podcasts, Overcast, Spotify, Stitcher.

On February 22, 2021, Governor Murphy signed three separate cannabis reform bills into law that formally legalize the use and possession of recreational marijuana in the Garden State: (1) the “New Jersey Cannabis Regulatory, Enforcement Assistance, and Marketplace Modernization Act” (the “Cannabis Act”) (NJ A21), which legalizes the recreational use and possession of cannabis or cannabis products (collectively, “cannabis items”) by adults; (2) a decriminalization law (NJ A1897), which legalizes the possession of up to six ounces of cannabis and provides for certain criminal and civil justice reforms related to marijuana and hashish offenses; and (3) a “clean up” bill (NJ A5342/NJ S3454), which concerns penalties for underage cannabis offenses and dictates how police may interact with youth offenders. We summarize the relevant provisions for employers below.

Employment Provisions of the Cannabis Act

The Cannabis Act contains express workplace-related provisions that will have significant impact on many New Jersey employers.  It establishes nondiscrimination rules for recreational cannabis users or marijuana users, codifies that employers do not have a duty to accommodate cannabis use in the workplace, and establishes procedures for employer drug testing.

Although the Cannabis Act’s employment provisions are effective immediately, they do not become operative until the New Jersey Cannabis Regulatory Commission (“CRC”) adopts implementing regulations.  The CRC, which is composed of five members appointed by the governor, is responsible for overseeing the development, regulation, and enforcement of personal recreational and medical cannabis use.  Governor Murphy has appointed the CRC members, and rulemaking will commence within 180 days of the law’s enactment, i.e., by August 21, 2021.  Employers should also note, however, that the CRC’s initial rules will be in effect for only up to one year.  New Jersey employers should therefore be prepared for possible revisions next year.

Nondiscrimination Provisions

Under the Cannabis Act, employers cannot penalize an applicant or employee because that person does or does not smoke, vape, aerosolize or otherwise use cannabis items by:

  • refusing to hire the applicant;
  • terminating the employee; or
  • taking an adverse employment action against the employee with respect to compensation, terms, conditions, or other privileges of employment.

The Cannabis Act is unique in providing employment nondiscrimination protection to cannabis users, as most other state laws legalizing recreational marijuana do not provide any express protections to users.

Drug Testing

Although employers may not discriminate against employees who use cannabis, employers are under no obligation to permit recreational use or to accommodate medical use in the workplace.  The Cannabis Act, as well as New Jersey’s Compassionate Use of Medical Marijuana Act, emphasizes that employers have the right to maintain a drug- and alcohol-free workplace and may implement policies prohibiting the use of cannabis items, or intoxication, during work hours.  To that end, the Cannabis Act allows employers to drug test:

  • upon reasonable suspicion of an employee’s cannabis use at work;
  • upon finding observable signs of intoxication related to cannabis items at work;
  • as part of a work-related accident investigation;
  • randomly;
  • as part of pre-employment screening; or
  • as part of regular screening of current employees to determine use during work hours.

The law permits employers to use the results of any such drug test when determining the appropriate employment action concerning the employee, but a positive test result alone will not be sufficient to support an adverse job action, even for safety-sensitive positions.  The Cannabis Act requires that any employee drug test include “scientifically reliable objective testing methods and procedures,” such as testing of blood, urine, or saliva, plus a physical evaluation by a certified “Workplace Impairment Recognition Expert” (“WIRE”) to determine the employee’s state of impairment.  The statute directs the CRC to develop a certification program, in consultation with the state’s Police Training Commission, to train persons “in detecting an employee’s usage of, or impairment from, a cannabis item or other intoxicating substance, and for assisting in the investigation of workplace accidents.”  An existing employee or contractor may serve as the WIRE.

Employment Provisions of the Decriminalization Law

Section 15 of the decriminalization law sets out strict parameters for employer inquiries related to an applicant’s marijuana-related criminal history.  Specifically, the law provides that employers may not “rely on, require an applicant to disclose, or take any adverse action against an applicant on the basis of any arrest, charge, conviction, or adjudication of delinquency” related to cannabis manufacture, distribution, or possession when making an employment decision.  These provisions do not apply to positions in law enforcement, corrections, the judiciary, homeland security, or emergency management.

The Commissioner of the New Jersey Division of Labor and Workforce Development enforces civil penalties for violations.  Employers that violate Section 15 can be fined up to $1,000 for the first violation, $2,000 for the second violation, and $10,000 for each subsequent violation.

The employment provisions of the decriminalization law are set to take effect five months after the enactment of the Cannabis Act, i.e., on August 1, 2021.

Employers may anticipate that the CRC will propose regulations in the upcoming months, which will go through a comment period before final adoption.  In the meantime, employers should use this time to review their hiring, drug testing, and substance abuse policies and procedures, including whether to continue pre-hire drug testing for cannabis.

*          *         *

Please contact Nathaniel M. Glasser, Maxine Neuhauser, Anastasia A. Regne, Eric I. Emanuelson, Jr., or Jenna D. Russell for assistance with questions regarding workplace drug policies.