The widespread availability of Artificial Intelligence (AI) tools has enabled the growing use of “deepfakes,” whereby the human voice and likeness can be replicated seamlessly such that impersonations are impossible to detect with the naked eye (or ear). These deepfakes pose substantial new risks for commercial organizations. For example, deepfakes can threaten an organization’s brand, impersonate leaders and financial officers, and enable access to networks, communications, and sensitive information.
In 2023, the National Security Agency (NSA), Federal Bureau of Investigation (FBI), and Cybersecurity and Infrastructure Security Agency (CISA) released a Cybersecurity Information Sheet (the “Joint CSI”) entitled “Contextualizing Deepfake Threats to Organizations,” which outlines the risks that deepfakes pose to organizations and recommends steps that organizations, including national critical infrastructure companies (such as financial services, energy, healthcare, and manufacturing organizations), can take to protect themselves. Loosely defining deepfakes as “multimedia that have either been created (fully synthetic) or edited (partially synthetic) using some form of machine/deep learning (artificial intelligence),” the Joint CSI cautioned that the “market is now flooded with free, easily accessible tools” such that “fakes can be produced in a fraction of the time with limited or no technical expertise.” Thus, deepfake perpetrators may range from amateur mischief makers to savvy, experienced cybercriminals.