The widespread availability of Artificial Intelligence (AI) tools has enabled the growing use of “deepfakes,” whereby the human voice and likeness can be replicated seamlessly such that impersonations are impossible to detect with the naked eye (or ear). These deepfakes pose substantial new risks for commercial organizations. For example, deepfakes can threaten an organization’s brand, impersonate leaders and financial officers, and enable access to networks, communications, and sensitive information.
In 2023, the National Security Agency (NSA), Federal Bureau of Investigation (FBI), and Cybersecurity and Infrastructure Security Agency (CISA) released a Cybersecurity Information Sheet (the “Joint CSI”) entitled “Contextualizing Deepfake Threats to Organizations,” which outlines the risks that deepfakes pose to organizations and recommends steps that organizations, including national critical infrastructure companies (such as financial services, energy, healthcare, and manufacturing organizations), can take to protect themselves. Loosely defining deepfakes as “multimedia that have either been created (fully synthetic) or edited (partially synthetic) using some form of machine/deep learning (artificial intelligence),” the Joint CSI cautioned that the “market is now flooded with free, easily accessible tools” such that “fakes can be produced in a fraction of the time with limited or no technical expertise.” Thus, deepfake perpetrators could be mere amateur mischief makers or savvy, experienced cybercriminals.