A California bill would regulate automated decisions. But is its definition of AI too narrow?

A new California bill seeks to put groundbreaking guardrails around how people use AI to make important decisions—but some advocates worry the legislation could stumble over imprecise definitions that have already hindered efforts elsewhere.

If passed, Assembly Bill 331 would create a set of new rules for anyone using an “automated decision tool” (a résumé scanner, for example) to make a “consequential decision” in areas including employment, education, housing, healthcare, lending, voting, and the criminal justice system. It would require anyone who creates or deploys such an automated decision tool (ADT, for short) to disclose its use to any affected parties and to regularly assess the tool for embedded biases. AB331 would also, for the first time, create a private right of action, allowing Californians to sue anyone who develops or uses an ADT that results in “algorithmic discrimination” against them.

Assemblymember Rebecca Bauer-Kahan, the East Bay Democrat who authored AB331, has framed it as a “landmark bill” modeled after the Biden administration’s own AI Bill of Rights (which is itself more of a blueprint than a set of rules). “We’ve seen example after example of automated decision tools that have bias built into them because of the human developers who develop them. And we need to be checking for that, correcting for it,” Bauer-Kahan said at a hearing for the bill last week. It has now cleared two committee votes despite opposition from a coalition of business, tech, finance, and realtor organizations, which have criticized AB331 for targeting an “inordinately broad” scope of “consequential decisions.” (Bauer-Kahan declined Fast Company’s request for comment.)

AB331 is part of a broader data rights push by state legislatures amid an increasingly conspicuous lack of a federal digital privacy framework. “The states are increasingly saying, ‘If Congress isn’t going to act, we’re going to take matters into our own hands to protect our constituents,’” says Keir Lamont, director of the U.S. legislation team at the Future of Privacy Forum, a D.C.-based think tank. But these state efforts have also fueled debates over the legal language around AI—and that’s where AB331 has already run into some controversy.

One of the thorniest questions revolves around how to define an ADT. Earlier this month, with Local Law 144, New York City became one of the first U.S. cities to regulate ADTs used in employment decisions; the law requires companies to test their tools for bias and to disclose their use in certain situations. But advocates have criticized that law as too narrow: to be considered an ADT under Local Law 144, a machine would essentially have to overrule human decision-makers in a hiring decision.

As currently drafted, the California bill could draw the same criticism. AB331 would allow affected parties to opt out of the use of an ADT, but only when a decision is based “solely” on the machine’s output. That is “too high of a standard,” says Matthew Scherer, a senior policy counsel at the nonprofit Center for Democracy and Technology. “That’s going to eliminate a great many tools that have a real impact on lives and livelihoods.”

The bill also defines ADTs as AI programs designed to be a “controlling factor” in a consequential decision, a standard that’s “definitely too narrow,” says Scherer, adding that companies usually claim their tools merely generate “recommendations” rather than outright decisions.

But where exactly would a regulator draw the line between recommending and controlling? It’s an “incredibly tricky” question, says Lamont, and states are trying different approaches. One notable example is Colorado, whose new comprehensive privacy law breaks ADTs down into three categories, each governed by different rules: solely automated processing, human-reviewed automated processing, and human-involved automated processing. AB331’s distinction between decisions made “solely” by machines and those in which machines contribute a “controlling factor,” by contrast, suggests at least two tiers, with the latter “applying potentially more broadly,” Lamont says.

The bill has already gone through some changes, but not the ones that rights advocates want. Appearing before a committee last week, Bauer-Kahan agreed to an amendment that would postpone Californians’ right to sue over algorithmic discrimination until 2026. “The bill is not to hinder innovation,” she emphasized at the hearing. “I think we can come to a place where we succeed in ensuring that enforcement works for our businesses.”