California is Ready to Lead on AI Regulation, with an Assist from Microsoft
Last year, California Assemblymember Rebecca Bauer-Kahan (D-Bay Area) introduced an aggressive and sweeping proposal to combat potential discrimination caused by AI-based automated decision tools (ADTs). Under her bill, known as AB331, automated decision tools that made, or assisted in making, so-called “consequential decisions” were subject to a host of requirements. The bill defined consequential decisions broadly to include any action that “has a legal, material, or similarly significant effect on an individual’s life” in areas including employment, education, housing, utilities, financial services, health care, the legal system, and more.
The aim of AB331 was to prevent ADTs from making consequential decisions with a discriminatory impact. It did this by requiring both the developer and the user (known as the deployer) of an ADT to conduct regular impact assessments on the technology and to develop policies and governance programs around its use. Developers of AI technology were required to provide deployers with notices describing how their systems worked along with their limitations, while deployers were required to notify Californians of the use of the ADT in decision-making processes and, in some cases, allow them to opt out of that use.
In terms of enforcement, AB331 empowered the attorney general and district attorneys to bring civil actions for violations against both developers and deployers, although companies could eliminate the possibility of injunctive relief by presenting a sworn statement that they had cured the alleged violation. More notably, AB331 gave Californians a private right of action against both developers and deployers, allowing the recovery of both damages and attorney’s fees for those who could prove they suffered discrimination through the use of an ADT.
AB331, however, faced significant opposition from business groups across a variety of industries. Most problematic for these groups was AB331’s private right of action. Business groups also expressed concerns about the $10,000-per-violation fines and the limited opportunity the bill provided to cure potential violations. Due in large part to this opposition, AB331 failed to gain enough traction to make it to the governor’s desk.
AI Regulation Gets New Life
In February 2024, Bauer-Kahan renewed her efforts to fight AI-based discrimination when she introduced AB2930. This time, however, she did so with the support of companies actively developing and using AI technology. The press release announcing AB2930 contains statements of support from both Microsoft and Workday, with Microsoft’s spokesperson going so far as to say that AB2930 aligns with the company’s values.
Why are businesses now on board? On a first read, it’s not so clear. In terms of the obligations placed on businesses, AB2930 and AB331 are substantially similar. They use the same definitions, cover the same conduct, and require the same notices. The measures are word-for-word the same in most sections.
The new bill also retains a number of the provisions business groups found objectionable in AB331. AB2930 not only keeps the $10,000 fines industry groups found so problematic but adds a $25,000 fine for instances of “algorithmic discrimination.” It also carries over the same limited cure provision found in AB331.
Assemblymember Bauer-Kahan’s new bill does, however, contain two business-friendly concessions, which I suspect are a large part of what got Microsoft and others on board. First, the 2024 proposal does not contain a private right of action, leaving enforcement to governmental bodies. Second, the new bill fully exempts “cyber-security related technology” from all of the obligations of the act. AB2930 does not define “cyber-security related technology,” presumably leaving it to the courts to determine how far the term sweeps.
Takeaways
While it’s too early to know the fate of AB2930, the fact that industry titans like Microsoft are partnering with progressive legislators like Assemblymember Bauer-Kahan suggests we are inching closer to more comprehensive regulation of AI. That means it’s time to start thinking about measures you can put in place to ensure you are well positioned when these regulations pass.
So what do you do? Here are three tips:
1. Start including AI enforcement actions and lawsuits in your indemnity clauses. For providers of AI technology, you’ll need to ensure you are not subject to liability for allegations of discrimination that are independent of your product. For deployers, you need to protect against flaws in AI products you had no way of knowing about. I would go so far as to treat this topic the way most treat infringement indemnities, i.e., don’t subject the indemnity to whatever limitation of liability otherwise exists.
2. For AI companies, start building anti-discrimination into your products and how you sell them. It may be a year or more before AI regulations take effect, but choices you make now could make compliance harder to achieve once it is required. In addition, legal departments are already concerned about privacy, infringement, and confidentiality when it comes to AI. Counsel will soon, to the extent they aren’t already, push to know how your products protect against discrimination (a minimal sketch of one such check appears after this list). Don’t wait.
3. Get involved and stay informed. AI regulation is happening now. If AI regulation could impact your business, reach out to your representatives and work with industry groups to make your views known. For those who lack the budget or time to do this, you should at a minimum pay close attention to pending legislation to get a sense of what you may be expected to do. If you sit on the sidelines and wait to see what the California legislature and major public companies work out amongst themselves, you might not be happy with the results.
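To make the second tip a bit more concrete, here is a minimal sketch of what one internal anti-discrimination check might look like: screening an ADT’s outcomes against the EEOC’s long-standing “four-fifths rule” for adverse impact. The group labels, data layout, and 0.8 threshold below are illustrative assumptions on my part; neither AB331 nor AB2930 prescribes this or any other specific statistical test.

```python
from collections import defaultdict

# Minimal sketch of an adverse-impact screen on an automated decision tool's
# outcomes, using the EEOC's "four-fifths rule" as a rough heuristic.
# The 0.8 threshold, group labels, and data layout are illustrative only;
# neither AB331 nor AB2930 prescribes a specific statistical test.

FOUR_FIFTHS_THRESHOLD = 0.8


def selection_rates(decisions):
    """Compute per-group selection rates from (group, selected) pairs."""
    totals = defaultdict(int)
    selected = defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {group: selected[group] / totals[group] for group in totals}


def adverse_impact_flags(decisions, threshold=FOUR_FIFTHS_THRESHOLD):
    """Flag any group whose selection rate falls below threshold * the highest rate."""
    rates = selection_rates(decisions)
    highest = max(rates.values())
    return {group: rate / highest < threshold for group, rate in rates.items()}


if __name__ == "__main__":
    # Hypothetical hiring-screen outcomes: (demographic group, advanced to interview)
    sample = ([("A", True)] * 40 + [("A", False)] * 10
              + [("B", True)] * 25 + [("B", False)] * 25)
    print(adverse_impact_flags(sample))  # {'A': False, 'B': True} -> group B flagged
```

A screening heuristic like this is no substitute for the documented, recurring impact assessments a bill like AB2930 would require, but building measurement hooks of this kind now is exactly the sort of step counsel will increasingly ask about.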
*This blog is intended to provide a general summary of best practices and does not constitute legal advice. You should consult with counsel to determine the exact legal requirements in a given situation.