There is a visceral and palpable dynamic emerging in global workplaces: tension.
Tension between what is potentially knowable—and what is actually known. Tension between the present and the future state of work. Tension between what was, is, and what might become (and when). Tension between the nature, function, and limits of data and technology.
The present-future of work is being shaped daily, dynamically, and profoundly by a host of factors—led by the exponential proliferation of data, new technologies, and artificial intelligence (“AI”)—whose impact cannot be overstated. Modern employers have access to an unprecedented amount of data about their workforce, from data on trends and patterns in employee behavior, and the people analytics used in hiring, compensation, and employee benefits, to data analyzing the composition of the workforce itself. To be sure, AI will continue to disrupt how virtually every employer views its human capital model on an enterprise basis. On a micro level, employers are already analyzing which functions or groups of roles might be automated, augmented, or better aligned to meet their future business models.
And, yet, there is an equal, counterbalancing force at play—the increased demand for accountability, transparency, civility, and equity. We have already seen this force playing out in real time, most notably in the #MeToo, pay equity, and data privacy and security movements. We expect that these movements and trends will continue to gain traction and momentum in litigation, regulation, and international conversation into 2019 and beyond.
We have invited Epstein Becker Green attorneys from our Technology, Media & Telecommunications (“TMT”) service team to reflect and opine on the most significant developments of the year. In each piece, we endeavor to provide practical insights to enable employers to think strategically through these emergent tensions and business realities—to continue to deliver value to their organizations and safeguard their goodwill and reputation.
A New Approach to Civility: Training, Awareness, and Education in the #MeToo Era
It is hard to believe that it has been only a year since the birth of the #MeToo movement as we’ve come to know it. In that concentrated period, a spotlight has shone brightly on issues that, at their core, are (unfortunately) not very new. Indeed, the types of conduct and misconduct that we’ve heard and read about are not novel—nor are the underlying laws, policies, and procedures that prohibit discrimination and harassment in all of our workplaces. And yet, the issues of harassment and discrimination—some of them vestiges of a troubled but distant past and some as current as your most recent internet browser refresh—continue to plague society and workforces across the United States.
So, what is the solution? (Or at least, what is a critical first step toward the solution?)
The EEOC, in its prophetic wisdom, “called it,” even before the now famous tweet that launched a worldwide discussion. In its 2016 report of the Select Task Force on the Study of Harassment in the Workplace, the EEOC noted that “training must change” and that “new and different approaches to training should be explored”—a movement away from training aimed merely at avoiding legal liability and toward a more “holistic” approach, one grounded in principles of general civility that calls upon everyone, as bystanders, to be active participants in their workplaces. For our part, this call to action was the catalyst for Halting Harassment.
An unprecedented approach: straight talk.
Employee Benefits in the Future Workplace
As the nature of the employer-employee relationship continues to evolve, from expanded worker classifications and mobile arrangements to increased automation, artificial intelligence, and co-bot relationships, employers must evaluate how best to design their traditional employee benefits programs to account for these shifts. Employers must determine who will be eligible for their programs in the future workplace. In addition, employers must consider offering programs that provide not only retirement security and a means to meet health insurance needs, but also meaningful incentive compensation and non-traditional benefits—programs that will further set them apart, so that they attract, motivate, and retain their desired employees. Employers should also consider programs that train their workers to re-skill for future positions, as well as the need for any severance programs that may ease workers’ transition out of the organization. The use of technological tools to deliver and collect benefits information and to educate employees will also require attention, as will the protection of sensitive personal data.
Laws Protecting Employee Privacy Are On the Rise
With the news surrounding Facebook’s use of user data and its exploitation by Cambridge Analytica, privacy remains in the consciousness of people everywhere, including employees. The European Union’s General Data Protection Regulation (“GDPR”) is seen by privacy advocates as the gold standard, and yet enforcement has been haphazard, leaving companies in the U.S. with few clues as to whether they may become targets of enforcement. Inside the U.S., individual states are also starting to consider legislation to protect an individual’s personal data. California, for example, recently passed the California Consumer Privacy Act of 2018. Effective January 1, 2020, the law gives “consumers”—defined as natural persons who are California residents—four basic rights in relation to their personal information. While the act is ostensibly a consumer protection statute, its requirements, on their face, apply in the employment context. Illinois has had a Biometric Information Privacy Act on the books for about a decade, but litigation under that statute has surged of late. Employers should take active steps now to secure employees’ private information, rather than wait for the inevitable lawsuit.
Chatting with an HR Bot
Adam S. Forman and Nathaniel M. Glasser
Most of us have experienced a pop-up window on a favorite website asking whether we need any assistance. Like “Clippy,” Microsoft’s late-1990s interactive office assistant, the “person” on the other end of that message window is not human but a computer program designed to simulate conversation with human users. These computer programs, commonly referred to as “chatbots,” use AI to communicate without human assistance.
Over the past several years, human resources departments have begun delegating certain routine human resources tasks to chatbots. The goal is to direct employees to the AI-powered technology as a first step when raising questions about basic workplace issues, such as PTO, benefits, and leave. Some companies are also evaluating whether chatbots should replace humans in the intake of internal complaints of discrimination or harassment. While management and human resources may embrace this technology to increase efficiency and reduce subjectivity, companies should not adopt AI tools automatically. Instead, companies should consult with all stakeholders, including operations, human resources, and legal counsel, to ensure that chatbots have built-in processes to elevate certain matters for human review, to appropriately evaluate complex questions, and to confirm that the employee-user is satisfied with the experience. Thoughtful evaluation prior to implementation will mitigate legal risk.
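To make the “built-in processes to elevate certain matters for human review” concrete, the sketch below shows one minimal way such routing logic might work. It is purely illustrative: the keyword lists, thresholds, and function name are hypothetical, and a production HR chatbot would rely on far more robust intent detection than simple keyword matching.

```python
# Illustrative sketch only: a minimal rule for routing chatbot messages
# either to an automated reply or to a human reviewer. The topic and
# escalation keyword sets here are hypothetical examples.

# Basic workplace topics the bot may answer on its own.
SELF_SERVICE_TOPICS = {"pto", "benefits", "leave", "holiday schedule"}

# Terms that should always trigger human review rather than an
# automated reply (e.g., potential discrimination or harassment).
ESCALATION_TERMS = {"harassment", "discrimination", "retaliation", "unsafe"}

def route_message(message: str) -> str:
    """Return 'human' when a message needs human review, else 'bot'."""
    text = message.lower()
    # Sensitive matters are elevated for human review first.
    if any(term in text for term in ESCALATION_TERMS):
        return "human"
    # Routine questions can be handled by the chatbot.
    if any(topic in text for topic in SELF_SERVICE_TOPICS):
        return "bot"
    # Unrecognized or complex questions also go to a person.
    return "human"
```

The design choice the sketch highlights is the default: anything the system cannot confidently classify falls through to a human, rather than to an automated answer.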
Formalized Insider Threat Risk Assessment Processes and Programs Are Critical
Critical technologies are at risk from insiders, including both employees and the workers of trusted business partners. Employers should establish an insider threat risk assessment process and program to identify threats to their most critical technologies and systems and to address the specific risks identified. Firms should then consider whether to add to or strengthen their insider threat controls, consistent with the outcome of the risk assessment.