The Intersection of AI and Employment Discrimination: A Legal Perspective
October 7, 2025 | by Miles Mediation and Arbitration
By Tanya Tate
The rapid evolution of technology is having a profound impact on the workplace. While artificial intelligence (AI) has emerged as a powerful tool for businesses, enhancing efficiency and decision-making, the proliferation of AI technology also raises significant concerns, particularly in the realm of employment discrimination and the increasing use of “deepfakes” in the workplace.
A deepfake is digital content created or manipulated using AI tools. With minimal effort, employees can use a deepfake to impersonate coworkers, executives, or clients and to fabricate purportedly discriminatory content. Increasingly, employers must navigate and address the use of fake videos, texts, and photos.
Recent cases, including a shocking incident in Baltimore County, highlight the potential for AI misuse to exacerbate discriminatory practices. Attorneys on both sides should be aware of this opportunity for the manipulation of information and should independently confirm authenticity whenever possible.
In the Baltimore County matter, a principal’s voice was cloned by a disgruntled subordinate, the school’s athletic director. The director used AI technology to create audio recordings that contained antisemitic and racist remarks. The AI-generated recording was then circulated by the disgruntled employee (who had recently learned that the principal would not be renewing his contract for the following school year) to other teachers in the school, to the superintendent, and on social media.
As a result, the principal was placed on leave and became the recipient of threats so severe that he had to hire a security detail at his home. While the athletic director was ultimately sentenced to four months in jail for his AI cloning activities, the targeted principal has stated, “[t]hough people later learned the recording was fake, my life [will] never be the same.” Not surprisingly, this incident sent shockwaves through the community and raised critical questions about the implications of AI in the workplace.
In another incident, a disgruntled employee at a tech company cloned the voice of his manager to create audio that included derogatory comments about a specific racial group. The employee later claimed that the company retaliated against him for reporting this discrimination, citing Title VII violations.
Evolving Legal Issues Related to AI and Deepfakes in Employment
These cases raise several questions, including (1) whether, in the employment setting, reporting activity that one knows to be false is protected activity under Title VII; and (2) whether employers can be liable when they rely upon deepfakes in making employment decisions. While Title VII’s anti-retaliation provision does protect employees who oppose discriminatory practices, its application is triggered only by the employee’s good faith, reasonable belief that the conduct was unlawful. That good faith belief is plainly absent when an employee uses AI to create deepfakes to support a Title VII claim, so such a report is unlikely to be deemed “protected activity” under the law.
However, there are other sources of potential liability of which employers should be aware. Some concerns that loom include:
- The potential liability of an employer under Title VII when employees use deepfakes in such a way that it could create a hostile work environment.
- The potential liability that employers may have to employees if employers rely upon deepfake “evidence” to make an employment decision, particularly one that would arguably constitute an “adverse employment action” under the law.
While this area of law is developing every day, it remains in its infancy, leaving employers constantly on edge. AI technology, while innovative, can be weaponized to manipulate perceptions, create false narratives, and fabricate evidence. Such misuse underscores the urgent need for legal frameworks to address the ethical dilemmas posed by AI in employment contexts.
Some efforts already in the works, however, may begin to add clarity to the topic, such as:
- The EEOC’s 2024–2028 Strategic Enforcement Plan, which emphasizes scrutiny of technology-driven discrimination and digital harassment.
- Proposed changes to Federal Rule of Evidence (“FRE”) 901 and the proposed creation of FRE 707, which would require parties to authenticate AI-generated evidence and meet expert witness standards for certain digital content, especially in cases involving deepfakes or algorithmic decision-making.
The rise of AI in the workplace presents both opportunities and challenges. While these technologies can enhance productivity and decision-making, they also carry the risk of exacerbating discrimination and bias. The Baltimore County incident serves as a stark reminder of the potential dangers that lie ahead.
Deepfakes represent a fast-evolving threat to workplace safety, dignity, and trust. Preemptive planning, such as ensuring that employee handbooks address synthetic media or manipulated content and training human resources personnel to evaluate and identify deepfakes in the workplace, can better equip employers to navigate these ever-shifting waters.
*Originally published in the Daily Report and reprinted with permission.
About Tanya Tate
A member of the National Academy of Distinguished Neutrals, Tanya Tate is a seasoned and highly effective mediator, having mediated hundreds of cases in her career, the majority of them disputes involving employment law and business-related matters. Tanya is also a skilled arbitrator, with significant experience serving as both a solo arbitrator and on three-arbitrator panels. Prior to beginning her practice as a mediator and arbitrator thirteen years ago, Tanya represented both plaintiffs and defendants in cases involving employment law, restrictive covenants, trade secrets, contracts, torts, business litigation, insurance coverage, school law, premises liability, and personal injuries.