On August 9, 2024, Illinois Governor J.B. Pritzker signed HB 3773 into law, amending the Illinois Human Rights Act (IHRA) to expressly regulate the use of artificial intelligence (AI) in employment decisions. HB 3773 is the second Illinois law regulating workplace AI. As we previously reported, in August 2019, Illinois enacted the first-of-its-kind Artificial Intelligence Video Interview Act (AIVIA), which requires employers using AI-enabled video interviewing technology to provide applicants advance notice of the use of AI and information regarding how the AI works and the characteristics it evaluates, and to obtain applicants’ prior consent. And, while not necessarily directed exclusively at workplace AI tools, as we also previously reported, an employer’s use of AI-powered facial expression and screening technology could also implicate the requirements of the Illinois Biometric Information Privacy Act (BIPA).
HB 3773 has a potentially broader application than either AIVIA or BIPA. It adds two new definitions:
Artificial Intelligence
A machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
Artificial intelligence also includes generative artificial intelligence.
Generative Artificial Intelligence
An automated computing system that, when prompted with human prompts, descriptions, or queries, can produce outputs that simulate human-produced content, including, but not limited to, the following:
On July 11, 2024, after considering comments from insurers, trade associations, advisory firms, universities, and other stakeholders, the New York State Department of Financial Services (NYSDFS) issued its Final Circular Letter regarding the “Use of Artificial Intelligence Systems and External Consumer Data and Information Sources in Insurance Underwriting and Pricing” (“Final Letter”). By way of background, NYSDFS published its Proposed Circular Letter (“Proposed Letter”) on the subject in January 2024. As we noted in our February blog, the Proposed Letter called on insurers and others in the state of New York using external consumer data and information sources (“ECDIS”) and artificial intelligence systems (“AIS”) to assess and mitigate bias, inequality, and discriminatory decision-making or other adverse effects in the underwriting and pricing of insurance policies. While NYSDFS recognized the value of ECDIS and AI in simplifying and expediting the insurance underwriting process, the agency, following current trends, wanted to mitigate the potential for harm.
And if the opening section of the Final Letter is any indication, the agency did not back down. It continued to insist, for example, that senior management and boards of directors “have a responsibility for the overall outcomes of the use of ECDIS and AIS”; and that insurers should conduct “appropriate due diligence and oversight” with respect to third-party vendors. NYSDFS declined to define “unfair discrimination” or “unlawful discrimination,” noting that those definitions may be found in various state and federal laws dealing with insurance and insurers.
The past several years have witnessed a notable uptick in workplace artificial intelligence-related legislation and agency enforcement attention, specifically focused on the infusion of AI and so-called automated decision-making tools into employment practices. Colorado’s new Artificial Intelligence Act, for example, designates employment as a “high-risk” sector of AI applications and has heightened the concerns of lawmakers and corporate executives. Lawsuits such as Mobley v. Workday and Moffatt v. Air Canada underscore concerns about candidate screening, recruitment, and conversational AI. Most recently, the U.S. Equal Employment Opportunity Commission issued a Determination finding cause to believe an employer violated the Older Workers Benefit Protection Act by using AI in a reduction in force that adversely impacted older workers. A complaint in the Southern District of New York against IBM and its spinoff technology company, Kyndryl, promptly followed.
Perhaps not surprisingly, over the past few years, the State of New York (“NYS”), following the lead of New York City, has introduced several bills that would regulate the use of AI-infused decision-making tools. One such bill, the New York Workforce Stabilization Act (“NYWFSA”), was introduced in May 2024 by Senators Michelle Hinchey and Kristen Gonzalez. They will likely re-introduce the NYWFSA during the upcoming January 2025 legislative session, intending to “stabilize” New York’s labor market at a time when the deployment of AI may fundamentally alter the New York industrial landscape.
The Department of Labor’s (DOL) May 16, 2024 guidance, Artificial Intelligence and Worker Well-Being: Principles for Developers and Employers, published in response to the mandates of Executive Order 14110 (EO 14110), the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, weighs the benefits and risks of an AI-augmented workplace and establishes Principles that endeavor to ensure the responsible and transparent use of AI. The DOL’s publication of these Principles follows in the footsteps of the EEOC’s and the OFCCP’s recent guidance on AI in the workplace and mirrors, in significant respects, the letter and spirit of their pronouncements.
While not “exhaustive,” the Principles “should be considered during the whole lifecycle of AI,” from “design to development, testing, training, deployment and use, oversight, and auditing.” Although the DOL intends the Principles to apply to all business sectors, the guidance notes that not all Principles will apply to the same extent in every industry or workplace, and thus should be reviewed and customized based on organizational context and input from workers.
While not defined in the Principles, EO 14110 defines artificial intelligence as set forth in 15 U.S.C. 9401(3): “A machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. Artificial intelligence systems use machine- and human-based inputs to perceive real and virtual environments; abstract such perceptions into models through analysis in an automated manner; and use model inference to formulate options for information or action.”
In line with the mandates of President Biden’s Executive Order 14110, entitled “The Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” and its call for a coordinated U.S. government approach to ensure responsible and safe development and use of artificial intelligence (AI) systems, the Office of Federal Contract Compliance Programs (OFCCP) has published a Guide addressing federal contractors’ use of AI in the context of Equal Employment Opportunity (EEO).
As discussed below, the Guide comprises a set of common questions and answers about the intersection of AI and EEO, as well as so-called “promising practices” federal contractors should consider implementing in the development and deployment of AI in the EEO context. In addition, the new OFCCP “landing page” in which the new Guide appears includes a Joint Statement signed by nine other federal agencies and the OFCCP articulating their joint commitment to protect the public from unlawful bias in the use of AI and automated systems.
In response to President Biden’s Executive Order 14110 calling for a coordinated U.S. government approach to ensuring the responsible and safe development and use of AI, the U.S. Department of Labor Wage and Hour Division (WHD) issued Field Assistance Bulletin No. 2024-1 (the “Bulletin”). This Bulletin, published on April 29, 2024, provides guidance on the application of the Fair Labor Standards Act (FLSA) and other federal labor standards in the context of increasing use of artificial intelligence (AI) and automated systems in the workplace.
Importantly, reinforcing the DOL’s position expressed in the Joint Statement on Enforcement of Civil Rights, Fair Competition, Consumer Protection, and Equal Opportunity Laws in Automated Systems, the WHD confirms that the historical federal laws enforced by the WHD will continue to apply to new technological innovations, such as workplace AI. The WHD also notes that, although AI and automated systems may streamline tasks for employers, improve workplace efficiency and safety, and enhance workforce accountability, implementation of such tools without responsible human oversight may pose potential compliance challenges.
The Bulletin discusses multiple ways in which AI interacts with the FLSA, the Family and Medical Leave Act (“FMLA”), the Providing Urgent Maternal Protections for Nursing Mothers Act (“PUMP Act”), and the Employee Polygraph Protection Act (“EPPA”). The Bulletin makes the following pronouncements regarding the potential compliance issues that may arise from the use of AI to perform wage-and-hour tasks:
As featured in #WorkforceWednesday: This week, we’re highlighting recent updates across the state and federal employment landscapes, including the New Jersey Supreme Court’s non-disparagement ruling, the U.S. Department of Labor’s (DOL’s) new artificial intelligence (AI) guidelines, and the DOL’s restructuring of Occupational Safety and Health Administration’s (OSHA’s) regional operations.
Is the developer of an AI resume-screening tool an “employment agency” or “agent” subject to liability under Title VII of the Civil Rights Act for its customers’ allegedly discriminatory employment decisions? According to the United States Equal Employment Opportunity Commission (“EEOC”), the answer is yes. On April 9, 2024, the EEOC filed a motion for leave to file a brief as amicus curiae, together with a brief, in Mobley v. Workday, Inc., Case No. 3:23-cv-00770-RFL, to support plaintiff Derek Mobley’s (“Mobley”) opposition to Workday’s motion to dismiss.
The EEOC’s action is ...
As featured in #WorkforceWednesday: This week, we’re breaking down the U.S. Supreme Court’s (SCOTUS’s) new workplace discrimination decision, the Equal Employment Opportunity Commission’s (EEOC’s) final rule on the Pregnant Workers Fairness Act (PWFA), and how recent artificial intelligence (AI) hiring tools have violated federal anti-bias laws.
Since the dawn of digitalization, the collection and retention of personal and other business confidential data by employers has implicated security and privacy challenges—by amassing a treasure trove of data for bad actors (or unwitting/unauthorized employees) and drawing a roadmap for those seeking to breach the system. Adding artificial intelligence (AI) into the mix creates further areas of concern. A recent survey undertaken by the Society for Human Resource Management of more than 2,000 human resources professionals indicates that AI is being utilized by the majority of ...
A recent decision from the Northern District of Illinois highlights new legal hurdles for employers using AI-powered video interview technologies under Illinois’ Biometric Information Privacy Act (BIPA), 740 ILCS 14/15. In Deyerler v. HireVue, initially filed over two years ago in January 2022, a class of plaintiffs alleged that HireVue’s AI-powered facial expression and screening technology violated BIPA. According to the complaint, HireVue collected, used, disclosed, and profited from “biometric identifiers” without complying with the requirements of BIPA. ...
On December 11, 2023, the City of San Francisco released the San Francisco Generative AI Guidelines (“Guidelines”). The Guidelines set forth parameters for City employees, contractors, consultants, volunteers, and vendors who use generative artificial intelligence (AI) tools to perform work on behalf of the City.
Specifically, the Guidelines encourage City employees, contractors, consultants, volunteers, and vendors to use generative AI tools for purposes such as preparing initial drafts of documents, “translating” text into levels of formality or for a ...
As the implementation and integration of artificial intelligence and machine learning tools (AI) continue to affect nearly every industry, concerns over AI’s potentially discriminatory effects in the use of these tools continue to grow. The need for ethical, trustworthy, explainable, and transparent AI systems is gaining momentum and recognition among state and local regulatory agencies—and the insurance industry has not escaped their notice.
On January 17, 2024, the New York State Department of Financial Services (“NYSDFS”) took a further step towards imposing ...
As featured in #WorkforceWednesday: With such a tumultuous year of labor and employment updates behind us, it raises the question, “What lies ahead in 2024?”
In this special New Year's episode, Epstein Becker Green attorneys share insights and predictions for the 2024 labor and employment space, addressing important topics such as maintaining compliance, promoting mental health, navigating protected concerted activity policies, and staying abreast of the latest developments in artificial intelligence and non-compete guidance.
In this special year-end episode of Employment Law This Week, recorded live from our 42nd Annual Workforce Management Briefing in New York City, Epstein Becker Green attorneys discuss the biggest employment law trends and crucial workforce changes in 2023, covering everything from non-competes and National Labor Relations Board actions to union dynamics, cybersecurity, and the impacts of artificial intelligence.
***
Employment Law This Week® gives a rundown of the top developments in employment and ...
As featured in #WorkforceWednesday: This week, we’re detailing the National Labor Relations Board’s (NLRB’s) expanded “joint employer” definition, the recent confirmations of the Equal Employment Opportunity Commission’s (EEOC’s) General Counsel and the Department of Labor’s (DOL’s) Wage and Hour Administrator, and President Biden’s executive order on artificial intelligence (AI).
NLRB Expands Definition of “Joint Employer”
The NLRB recently published its long-awaited final rule, setting a new test for determining joint-employer ...
On October 30, 2023, President Joe Biden signed his Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (AI EO), which addresses artificial intelligence issues, including safety, security, privacy, civil rights, immigration, and health care. The White House also released a companion Fact Sheet summarizing the AI EO (the “Fact Sheet”). Later in the week, on November 1, 2023, the White House announced that the Office of Management and Budget will release for comment a new draft policy on Advancing Governance, Innovation, and ...
While recent public attention has largely focused on generative artificial intelligence (AI), the use of AI for recruitment and promotion screening in the employment context is already widespread. It can help HR professionals make sense of data as the job posting and application process is increasingly conducted online. According to a survey conducted by the Society for Human Resource Management (SHRM),[1] nearly one in four organizations use automation and/or AI to support HR-related activities, such as recruitment, hiring, and promotion decisions, and that number is poised ...
The five-member Board of the California Privacy Protection Agency (the “CPPA”) held a public meeting on September 8, 2023, to discuss a range of topics, most notably, draft regulations relating to risk assessments and cybersecurity audits. Once the regulations are finalized and approved after a formal rulemaking process, they will impose additional obligations on many businesses covered by the California Consumer Privacy Act, as amended by the California Privacy Rights Act (“CCPA”). The Board’s discussion of these draft regulations is instructive for ...
On August 9, 2023, the U.S. Equal Employment Opportunity Commission (“EEOC”) and iTutorGroup, Inc. and related companies (collectively, “iTutorGroup”) filed a joint notice of settlement and a request for approval and execution of a consent decree, effectively settling claims that the EEOC brought last year against iTutorGroup regarding its application software. The EEOC claimed in its lawsuit that iTutorGroup violated the Age Discrimination in Employment Act (“ADEA”) by programming its application software to automatically reject hundreds of female applicants age 55 or older and male applicants age 60 or older.
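To make the alleged mechanism concrete, the conduct described amounts to a hard-coded screening rule keyed directly to age and sex. The following is a minimal, hypothetical Python sketch of such a rule (the cutoffs track the EEOC’s allegations; the code itself is illustrative, not iTutorGroup’s actual software):

```python
# Hypothetical sketch of the kind of hard-coded screening rule the EEOC
# alleged (cutoffs mirror the allegations; this is NOT actual vendor code).
from dataclasses import dataclass

@dataclass
class Applicant:
    name: str
    age: int
    sex: str  # "female" or "male"

def auto_screen(applicant: Applicant) -> bool:
    """Return True if the applicant advances, False if auto-rejected.

    Because rejection turns expressly on age and sex, a rule like this
    is facially discriminatory under the ADEA and Title VII; no
    statistical analysis is needed to see the problem.
    """
    if applicant.sex == "female" and applicant.age >= 55:
        return False
    if applicant.sex == "male" and applicant.age >= 60:
        return False
    return True

# Example: both of these applicants would be rejected automatically.
print(auto_screen(Applicant("A", 56, "female")))  # False
print(auto_screen(Applicant("B", 61, "male")))    # False
```

Because the rejection decision turns expressly on protected characteristics, automation does not launder the rule into neutrality; it simply applies the discriminatory criterion at scale.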
After releasing an initial two-page “fact sheet,” Congress publicly posted the bill text of the No Robot Bosses Act (the “Proposed Act”), detailing proposed federal guardrails for the use of automated decision-making systems in the employment context. Senators Bob Casey (D-PA), Brian Schatz (D-HI), John Fetterman (D-PA), and Bernie Sanders (I-VT) currently cosponsor the Proposed Act.
On July 20, 2023, U.S. Senators Bob Casey (D-PA) and Brian Schatz (D-HI) introduced the “No Robot Bosses Act.” Other than bringing to mind a catchy title for a dystopic science fiction novel, the bill aims to regulate the use of “automated decision systems” throughout the employment life cycle and, as such, appears broader in scope than New York City’s Local Law 144 of 2021, about which we have previously written and which New York City recently began enforcing. Although the text of the proposed federal legislation has not yet been widely circulated, a two-page fact sheet released by the sponsoring Senators outlines the bill’s pertinent provisions regarding an employer’s use of automated decision systems affecting employees. According to the fact sheet, the bill would:
As featured in #WorkforceWednesday: This week, we analyze how employers can benefit from artificial intelligence (AI) innovations while remaining in compliance with federal regulations:
AI is evolving faster than ever before. How can employers prepare for the future of AI in the workplace? Epstein Becker Green attorneys Alexander J. Franchilli and J.T. Wilson III tell us how looming federal regulations and diversity, equity, and inclusion concerns are creating a turbulence of compliance and innovation.
Since late October 2021, when the Equal Employment Opportunity Commission (EEOC) launched its Initiative on Artificial Intelligence (AI) and Algorithmic Fairness, the agency has taken several steps to ensure AI and other emerging tools used in hiring and other employment decisions comply with federal civil rights laws that the agency enforces, including Title VII of the Civil Rights Act of 1964 (Title VII), the Age Discrimination in Employment Act (ADEA), and the Americans with Disabilities Act (ADA). Among other things, the EEOC has hosted disability-focused listening and educational sessions, published technical assistance regarding the ADA and the use of AI and other technologies, and held a public hearing to examine the use of automated systems in employment decisions.
On Thursday May 4, 2023, the Biden-Harris Administration announced its plan to implement artificial intelligence (“AI”) safeguards to “protect people’s rights and safety.”
Given the rapid development of AI in workplaces, public health, education, and security, the Administration seeks to underscore related risks and opportunities. Vice President Kamala Harris and senior Administration officials have met with leaders at the forefront of AI innovation to call attention to “responsible, trustworthy, and ethical innovation with safeguards that mitigate risk and potential harms to individuals and our society.”
On Tuesday, April 25, 2023, the Equal Employment Opportunity Commission (“EEOC”), the Consumer Financial Protection Bureau (“CFPB”), the Justice Department’s Civil Rights Division (“DOJ”), and the Federal Trade Commission (“FTC”) issued a “Joint Statement on Enforcement Efforts Against Discrimination and Bias in Automated Systems” (“Joint Statement”). According to a press release from the EEOC, by the Joint Statement, the federal agencies pledged to uphold America’s commitment to the core principles of fairness, equality, and justice as emerging automated systems, including those sometimes marketed as “artificial intelligence” or “AI,” become increasingly common in people’s daily lives, impacting civil rights, fair competition, consumer protection, and equal opportunity.
As featured in #WorkforceWednesday: This week, we’re taking a closer look at ChatGPT, exploring the opportunities and risks associated with this artificial intelligence (AI) technology, and providing valuable insights for employers who are looking to stay ahead of the curve:
ChatGPT is set to become the next big thing for employers and beyond. What potential issues should employers be aware of? Epstein Becker Green attorney Brian G. Cesaratto explains how critical it is for employers to think through the workplace-related risks.
On January 26, 2023, the National Institute of Standards and Technology (“NIST”) released guidance entitled Artificial Intelligence Risk Management Framework (AI RMF 1.0) (the “AI RMF”), intended to help organizations and individuals in the design, development, deployment, and use of AI systems. The AI RMF, like the White House’s recently published Blueprint for an AI Bill of Rights, is not legally binding. Nevertheless, as state and local regulators begin enforcing rules governing the use of AI systems, industry professionals will likely turn to NIST’s voluntary guidance when performing risk assessments of AI systems, negotiating contracts with vendors, performing audits on AI systems, and monitoring the use of AI systems.
As we recently reported, on December 9, 2022, the New York City Department of Consumer and Worker Protection (“DCWP”) announced that it was postponing enforcement of the Automated Employment Decision Tools (“AEDT”) law until April 15, 2023, due to the high volume of public comments it received regarding its proposed rules.
As featured in #WorkforceWednesday: This week, we’re recapping some of the most significant changes that impacted employers in 2022.
On October 31, 2022, the General Counsel of the National Labor Relations Board (“NLRB” or “Board”) released Memorandum GC 23-02 urging the Board to interpret existing Board law to adopt a new legal framework to find electronic monitoring and automated or algorithmic management practices illegal if such monitoring or management practices interfere with protected activities under Section 7 of the National Labor Relations Act (“Act”). The Board’s General Counsel stated in the Memorandum that “[c]lose, constant surveillance and management through electronic means threaten employees’ basic ability to exercise their rights,” and urged the Board to find that an employer violates the Act where the employer’s electronic monitoring and management practices, when viewed as a whole, would tend to “interfere with or prevent a reasonable employee from engaging in activity protected by the Act.” Given that position, it appears that the General Counsel believes that nearly all electronic monitoring and automated or algorithmic management practices violate the Act.
On Tuesday October 4, 2022, the White House Office of Science and Technology Policy (“OSTP”) released a document entitled “Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People” (the “Blueprint”) together with a companion document “From Principles to Practice: A Technical Companion to the Blueprint for an AI Bill of Rights” (the “Technical Companion”).
On Friday, September 23, 2022, the New York City Department of Consumer and Worker Protection (“DCWP”) released a Notice of Public Hearing and Opportunity to Comment on Proposed Rules related to its Automated Employment Decision Tool law (the “AEDT Law”), which goes into effect on January 1, 2023. As we previously wrote, the City passed the AEDT Law to regulate employers’ use of automated employment decision tools, with the aim of curbing bias in hiring and promotions; as written, however, it contains many ambiguities, which has left covered employers with open questions about compliance.
As featured in #WorkforceWednesday: This week, we focus on compliance and transparency when using artificial intelligence (AI) tools in employment decision-making.
Prompted by the widespread adoption and use of video-conferencing software following the COVID-19 pandemic, many employers have shifted toward video interviews to evaluate potential hires. Even as employers have begun to require in-office attendance, the widespread use of video interviewing has continued, because it is a convenient and efficient way to evaluate applicants. Some of the video interviewing tools used by employers incorporate the use of artificial intelligence (AI) in an effort to maximize the effectiveness of the interview process. Often, employers contract with third-party vendors to provide these AI-powered interviewing tools, as well as other tech-enhanced selection procedures.
Recruiting qualified applicants and hiring top talent have always been time-consuming endeavors that come with constant worry about making a wrong hire. Added to this, the COVID-19 pandemic effectively put a halt to employers’ ability to evaluate applicants in person. These factors, and others, have led many employers to adopt, or to consider adopting, artificial intelligence (AI) tools to optimize recruitment by introducing efficiencies, reaching a broader pool of applicants, increasing consistency and uniformity in the evaluation of applicants, and, in some cases, helping employers meet diversity, equity, and inclusion goals. Typically, employers opting to use AI contract with third-party vendors that offer AI-powered algorithms, which perform a variety of functions, such as cognitive assessments, personality tests, and video interviews.
A critical component of a successful employer-employee relationship is the employer’s fair and equitable treatment of employees, often embodied in the employer’s employee engagement, retention, and compensation practices. When it comes to compensation, U.S. employers must comply with federal and applicable state equal pay laws that prohibit discriminatory pay practices, and a myriad of state and local laws banning inquiries into, or the use of, prior salary history in setting pay. Yet, compensation bias and discrimination still exist and continue to be the subject of ...
As we have previously blogged, use of third-party digital hiring platforms to select job applicants using video interviews can present an array of potential legal issues. A recent Complaint filed with the Federal Trade Commission (“FTC”) by a consumer advocacy organization, Electronic Privacy Information Center (“EPIC”), illustrates some of those potential pitfalls. EPIC asks the FTC to investigate the recruiting technology company HireVue for alleged discriminatory screening of job applicants through its face-scanning software. HireVue asks job applicants to ...
We have long counseled employers using or contemplating using artificial intelligence (“AI”) algorithms in their employee selection processes to validate the AI-based selection procedure using an appropriate validation strategy approved by the Uniform Guidelines on Employee Selection Procedures (“Uniform Guidelines”). Our advice has been primarily based on minimizing legal risk and complying with best practices. A recently updated Frequently Asked Questions (“FAQ”) from the Office of Federal Contract Compliance Programs (“OFCCP”) provides further ...
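One quantitative check associated with the Uniform Guidelines (29 C.F.R. § 1607.4(D)) is the “four-fifths rule”: a selection rate for any group that is less than four-fifths of the rate for the group with the highest rate is generally regarded as evidence of adverse impact. A minimal Python sketch of that calculation, using hypothetical applicant counts:

```python
# Minimal "four-fifths rule" adverse impact check drawn from the Uniform
# Guidelines, 29 C.F.R. § 1607.4(D). All counts below are hypothetical.

def selection_rate(selected: int, applicants: int) -> float:
    """Selection rate = number selected / number who applied."""
    return selected / applicants

def four_fifths_flags(groups: dict) -> dict:
    """Flag each group whose selection rate is below four-fifths (80%)
    of the highest group's rate -- generally regarded as evidence of
    adverse impact under the Uniform Guidelines.

    `groups` maps a group label to a (selected, applicants) tuple.
    """
    rates = {g: selection_rate(s, n) for g, (s, n) in groups.items()}
    highest = max(rates.values())
    return {g: rate < 0.8 * highest for g, rate in rates.items()}

# Hypothetical example: group_a rate = 48/80 = 0.60; group_b = 24/60 = 0.40.
# 0.40 < 0.8 * 0.60 = 0.48, so group_b is flagged.
print(four_fifths_flags({"group_a": (48, 80), "group_b": (24, 60)}))
# -> {'group_a': False, 'group_b': True}
```

The four-fifths rule is a rule of thumb, not a safe harbor; validation under the Uniform Guidelines involves the broader showing that the selection procedure is job related and consistent with business necessity.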
This edition of Take 5 highlights compliance with cutting-edge issues—such as pay equity, workplace violence, and artificial intelligence (“AI”)—that have a significant impact on retailers. We also provide an update on National Labor Relations Act (“NLRA”) compliance and New York City drug testing to assist you in navigating an increasingly complex legal landscape.
Watercooler (and Bathroom) Conversations Among Co-Workers About Work-Related Matters Are Not Always Protected Concerted Activity Under the NLRA
Historically, a conversation ...
Increasingly companies are using third-party digital hiring platforms to recruit and select job applicants. These products, explicitly or implicitly, promise to reduce or eliminate the bias of hiring managers in making selection decisions. Instead, the platforms grade applicants based on a variety of purportedly objective factors. For example, a platform may scan thousands of resumes and select applicants based on education level, work experience, or interests, or rank applicants based on their performance on an aptitude test – whatever data point(s) the platform has been ...
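To illustrate the mechanics (and not any particular vendor’s product), a purportedly “objective” grader of this kind typically reduces to a scoring function over a handful of resume fields, with the resulting ranking only as neutral as the features and weights someone chose to encode. A hypothetical sketch:

```python
# Hypothetical resume-grading sketch -- not any real platform's logic.
# The "objectivity" lives entirely in the chosen features and weights.

WEIGHTS = {"education_level": 2.0, "years_experience": 1.0, "aptitude_score": 3.0}

def grade(candidate: dict) -> float:
    """Weighted sum of whatever data points the platform was built to use."""
    return sum(WEIGHTS[k] * candidate.get(k, 0) for k in WEIGHTS)

applicants = [
    {"name": "A", "education_level": 4, "years_experience": 6, "aptitude_score": 7},
    {"name": "B", "education_level": 3, "years_experience": 10, "aptitude_score": 8},
]
# Rank applicants by score, highest first.
for person in sorted(applicants, key=grade, reverse=True):
    print(person["name"], grade(person))
```

If the chosen features correlate with protected characteristics, a scoring function like this can reproduce the very bias it was marketed to eliminate, which is why these tools draw the regulatory attention discussed throughout this page.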
A recent WSJ article by Laura Cooper about a private equity firm using AI to source investment opportunities presages a larger challenge facing employees and employers: AI tools do “the work of ‘several dozen humans’” with “greater accuracy and at lower cost.” In the competitive and employee-dense financial services sector, AI tools can provide a competitive advantage.
Ms. Cooper cites San Francisco-based Pilot Growth Equity Partners, one of a growing number of equity investment firms to utilize AI. Pilot Growth has developed “NavPod,” a cloud-based ...
Please join Nathaniel M. Glasser, Elizabeth K. McManus, Jeremy M. Brown, and Joshua A. Stein for an engaging and informative discussion of topical labor and employment issues facing all retailers. The presenters will address cutting-edge employment matters and share best practices in a private forum in which all attendees can freely participate, exchange insights, and network with colleagues.
Topics will include:
Artificial Intelligence for Recruiting and Selection
We will discuss the legal and practical implications of the various types of artificial intelligence (AI ...