Four federal agencies are prepared to throw cold water (and lawsuits) at employers who abuse artificial intelligence


While recognizing the prevalence of automated systems, including those sometimes marketed as “artificial intelligence” or “AI,” and the “insights and breakthroughs, increasing efficiencies and cost-savings” that AI can offer, four federal agencies recently announced in a joint statement that they are also ready to police “unlawful bias,” “unlawful discrimination,” and “other harmful outcomes.”
U.S. Equal Employment Opportunity Commission Chair Charlotte Burrows, U.S. Department of Justice Civil Rights Division Assistant Attorney General Kristen Clarke, Consumer Financial Protection Bureau Director Rohit Chopra, and Federal Trade Commission Chair Lina Khan released their joint statement outlining a commitment to enforce their respective laws and regulations to promote responsible innovation in automated systems.
Here’s more from the joint statement:

The Consumer Financial Protection Bureau (CFPB) supervises, sets rules for, and enforces numerous federal consumer financial laws, and it guards consumers in the financial marketplace from unfair, deceptive, or abusive acts or practices and from discrimination. The CFPB published a circular confirming that federal consumer financial laws and adverse action requirements apply regardless of the technology used. The circular also made clear that the fact that the technology used to make a credit decision is too complex, opaque, or new is not a defense for violating these laws.
The Department of Justice’s Civil Rights Division (Division) enforces constitutional provisions and federal statutes prohibiting discrimination across many facets of life, including in education, the criminal justice system, employment, housing, lending, and voting. Among the Division’s other work on issues related to AI and automated systems, the Division recently filed a statement of interest in federal court explaining that the Fair Housing Act applies to algorithm-based tenant screening services.
The Equal Employment Opportunity Commission (EEOC) enforces federal laws that make it illegal for an employer, union, or employment agency to discriminate against an applicant or employee due to a person’s race, color, religion, sex (including pregnancy, gender identity, and sexual orientation), national origin, age (40 or older), disability, or genetic information (including family medical history). In addition to the EEOC’s enforcement activities on discrimination related to AI and automated systems, the EEOC issued a technical assistance document explaining how the Americans with Disabilities Act applies to the use of software, algorithms, and AI to make employment-related decisions about job applicants and employees. For information about the EEOC’s AI initiative, visit its Artificial Intelligence and Algorithmic Fairness Initiative page.
The Federal Trade Commission (FTC) protects consumers from deceptive or unfair business practices and unfair methods of competition across most sectors of the U.S. economy by enforcing the FTC Act and numerous other laws and regulations. The FTC issued a report evaluating the use and impact of AI in combating online harms identified by Congress. The report outlines significant concerns that AI tools can be inaccurate, biased, and discriminatory by design, and that they can incentivize reliance on increasingly invasive forms of commercial surveillance. The FTC has also warned market participants that it may violate the FTC Act to use automated tools that have discriminatory impacts, to make claims about AI that are not substantiated, or to deploy AI before taking steps to assess and mitigate risks. Finally, the FTC has required firms to destroy algorithms or other work product that were trained on data that should not have been collected.

So, what types of AI use could get you sued for discrimination? Here are some examples:

Data and Datasets: Automated system outcomes can be skewed by unrepresentative or imbalanced datasets, datasets incorporating historical bias, or datasets containing other types of errors. Automated systems can also correlate data with protected classes, leading to discriminatory outcomes (see the illustrative sketch after this list).
Model Opacity and Access: Many automated systems are “black boxes” whose internal workings are unclear to most people and, in some cases, even the tool developer. This lack of transparency often makes it difficult for developers, businesses, and individuals to know whether an automated system is fair.
Design and Use: Developers do not always understand or account for the contexts in which private or public entities will use their automated systems. Developers may design a system based on flawed assumptions about its users, relevant context, or the underlying practices or procedures it may replace.
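
To make the dataset concern concrete, here is a minimal Python sketch, built on entirely hypothetical numbers, of the kind of disparate-impact screening the agencies describe: it compares a tool’s selection rates across demographic groups using the EEOC’s “four-fifths” rule of thumb. The group labels and records are illustrative assumptions, and the four-fifths rule is a screening heuristic, not a legal test or a safe harbor.

# Minimal sketch: screening an automated hiring tool's outcomes for
# adverse impact using the EEOC's "four-fifths" rule of thumb.
# The records below are hypothetical; a real audit would use actual
# applicant data and involve legal counsel and validation experts.
from collections import Counter

# Hypothetical (group, selected?) outcomes from an automated screener.
records = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

applicants = Counter(group for group, _ in records)
selected = Counter(group for group, hired in records if hired)

# Selection rate per group: number selected / number of applicants.
rates = {g: selected[g] / applicants[g] for g in applicants}
highest = max(rates.values())

for group, rate in rates.items():
    # A selection rate below 80% of the highest group's rate is
    # commonly treated as evidence of potential adverse impact.
    ratio = rate / highest
    flag = "POTENTIAL ADVERSE IMPACT" if ratio < 0.8 else "ok"
    print(f"{group}: rate={rate:.2f}, ratio={ratio:.2f} -> {flag}")

In this toy data, group_b’s selection rate (0.25) is one-third of group_a’s (0.75), far below the four-fifths threshold. That is exactly the sort of skew an unrepresentative or historically biased dataset can produce, and it is what these agencies say they will be looking for.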

If you’d like even more resources on AI and employment law, check these out:

Navigating Employment Discrimination in AI and Automated Systems: A New Civil Rights Frontier – video (January 31, 2023)
The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees
Tips for Workers: The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence
ASL video: Use of Artificial Intelligence in Making Job Decisions for People with Disabilities
Decoded: Can Technology Advance Equitable Recruiting and Hiring? – video (September 13, 2022)
Commission Meeting on Big Data in the Workplace  – transcript (October 13, 2016)