Signs of (Artificial) Intelligence in the California Legislature?

Brownstein Client Alert, Nov. 7, 2023

As legislators in Washington, D.C., and in statehouses around the country turn their focus to artificial intelligence (AI), California is once again trying to assert itself as a national leader on issues of emerging technology. However, as with similar efforts around cryptocurrency, blockchain and NFTs, developing smart and effective laws in a rapidly changing technological landscape is more difficult than it seems. Spurred on by executive orders from President Biden and California Gov. Gavin Newsom, policymakers of all stripes are eyeing the potential impacts of AI while hoping to foster an innovation-friendly climate that attracts entrepreneurs in what could be the next tech boom. In this client alert we will explore past actions and future possibilities for AI regulation in California.

California Plants the GenAI Flag

On Sept. 6, Gov. Newsom issued an executive order to pave the path for California’s approach to the increased use of generative artificial intelligence (GenAI). More specifically, the executive order includes a number of provisions to study the development, use and risks of AI throughout the state and to develop a process for the evaluation and deployment of AI within state government. These provisions include:

  • Risk-Analysis Report: By March 2024, the California Cybersecurity Integration Center (Cal-CSIC) and the State Threat Assessment Center (STAC), both within the Governor’s Office of Emergency Services (OES), will provide a joint risk analysis of potential threats to, and vulnerabilities of, California’s critical energy infrastructure posed by the use of GenAI.
  • Procurement Blueprint: By January 2024, the Government Operations Agency (GovOps), the California Department of Technology (CDT) and Cal-CSIC will issue general guidelines for public sector procurement, uses and required training for application of GenAI—building on the White House’s Blueprint for an AI Bill of Rights and the National Institute of Standards and Technology’s AI Risk Management Framework. State agencies and departments will consider procurement and enterprise use opportunities where GenAI can improve the efficiency, effectiveness, accessibility and equity of government operations.
  • Beneficial Uses of GenAI Report: By Nov. 6, GovOps, CDT, the Office of Data and Innovation (ODI) and the Governor’s Office of Business and Economic Development (GO-Biz) must draft a report to the governor examining the most significant potential beneficial use cases and potential risks for using GenAI tools in state government.
  • Deployment and Analysis Framework: By July 2024, GovOps, CDT and ODI, working with other agencies, must develop guidelines for agencies and departments to analyze the impact that adopting GenAI tools may have on vulnerable communities. By March 2024, CDT must establish the infrastructure to conduct pilots of GenAI projects, including CDT-approved environments or “sandboxes” to test such projects.
  • State Employee Training: By July 2024, GovOps, CDT and the Labor and Workforce Development Agency must provide trainings for state government workers to use state-approved GenAI to achieve equitable outcomes, and will establish criteria to evaluate the impact of GenAI to the state government workforce.
  • GenAI Partnership and Symposium: Establish a formal partnership with the University of California, Berkeley and Stanford University to consider and evaluate the impacts of GenAI on California and what efforts the state should undertake to advance its leadership in this industry. The state and the institutions will develop and host a joint summit in 2024 to engage in meaningful discussions about the impacts of GenAI on California and its workforce.
  • Legislative Engagement: GovOps, the California Department of Human Resources, the California Department of General Services, CDT, ODI and Cal-CSIC shall engage with legislative partners and key stakeholders in a formal process to develop policy recommendations for responsible use of AI, including any guidelines, criteria, reports and/or training.
  • Evaluate Impacts of AI on an Ongoing Basis: State agencies and departments must periodically evaluate for potential impacts of GenAI on regulatory issues under the respective agency, department or board’s authority and recommend necessary updates as a result of this evolving technology.

Old Battle Lines Remain in Place

While GenAI tools like ChatGPT and DALL·E have dominated the headlines and the imaginations of policymakers this year, the fundamental issues that arise when human decision-making is outsourced to code have been simmering beneath the surface for years. Before GenAI, the topic du jour was “algorithmic bias,” the covert and overt ways in which automated decision tools (ADTs) perpetuate unfair or discriminatory treatment of individuals in protected classes. High-profile examples of ADT bias include risk-based bail bond decisions, determinations of eligibility for public benefits, and employment screening. In California, the tension between the desire to speed the provision of state services and the need to avoid disparate impacts on marginalized groups played out over several legislative sessions—with business and tech advocates engaging deftly from the sidelines.

The contours of the debate were illuminated by recent legislation authored by Bay Area legislator Rebecca Bauer-Kahan (D-San Ramon), who approached the issue head-on. AB 331 (Bauer-Kahan, 2023) sought to prohibit “algorithmic discrimination,” defined as a situation in which an ADT “contributes to unjustified differential treatment or impacts disfavoring people” based on membership in a protected class. Roughly following principles established by the Biden administration’s Blueprint for an AI Bill of Rights, AB 331 would have required ADT deployers to conduct impact assessments for their technology and report on safeguards, intended benefits and observed outcomes. Furthermore, the bill would have required ADT deployers to notify individuals any time an ADT was used to make a “consequential decision,” i.e., a decision that has a “legal, material, or similarly significant effect on an individual’s life.” Finally, the bill would have established a private right of civil action against an ADT deployer for perceived discriminatory outcomes. Plaintiffs would have been eligible for compensatory damages, declaratory relief and attorney’s fees.

The debate over the risks of algorithmic bias is not academic; there are numerous real-world examples of discriminatory impacts arising from public sector use of ADTs. In Detroit, an ADT used to allocate public housing subsidies was found to be directing funds away from poor and predominantly Black neighborhoods, while in Arkansas a state-implemented ADT used to assign access to Medicaid benefits inappropriately dropped beneficiaries, resulting in the loss of housing and medical care for hundreds of individuals. Perhaps most troubling, then-U.S. Attorney General Eric Holder raised the alarm in 2014 that ADT-driven risk assessment for bail determinations and criminal sentencing “may exacerbate unwarranted and unjust disparities that are already far too common in our criminal justice system and in our society.” A subsequent ProPublica study seemed to confirm these fears, finding that “Black defendants were still 77 percent more likely to be pegged as at higher risk of committing a future violent crime and 45 percent more likely to be predicted to commit a future crime of any kind.”

Despite these well-documented issues, the bill did not have an easy path through the California Legislature. A coalition of business and technology interests, including the California Chamber of Commerce, the California Apartment Association, and TechNet, mounted an opposition campaign focused on the unintended consequences that the law could have on a wide variety of customary or necessary business activities such as credit checks or medical treatment decisions. This concern was reflected in the breadth of opposition, which included banking, financial services, grocery and manufacturing interests. Furthermore, the opposition argued that the recently created California Privacy Protection Agency (CPPA) has the authority and jurisdiction to create regulations regarding ADTs.

To wit, the CPPA Board recently released a notice announcing a Dec. 8 board meeting at which it is expected to open formal rulemaking proceedings on several topics. Our staff will be monitoring the hearing and sharing a recap. This meeting comes after the agency solicited preliminary written comments via an Invitation for Preliminary Comments on Proposed Rulemaking on the following topics: Cybersecurity Audits, Risk Assessments and Automated Decision-making.

While AB 331 advanced through two policy committees on party-line votes, the bill was held in the Assembly Appropriations Committee. While this committee ostensibly reviews legislation for fiscal impacts, it is often used to quietly shelve legislation that faces significant political or policy challenges. No official reason was given for the bill’s demise, other than significant costs to implementing agencies. Nonetheless, the policy debate surrounding ADTs is instructive as we look toward the upcoming session and a new wave of AI bills.

Past California State Legislation—What Worked, What Didn’t and What We Are Likely to See Advance Next Year

While the California State Legislature has been referring to “artificial intelligence” in statute for several sessions now, many proposals, codified or not, have only dealt with AI in its application to algorithms used by major social media sites. Long before ChatGPT, social media companies used AI in their algorithms to keep users engaged in content—an approach that inadvertently reinforced harmful content for users, many of whom are minors. Legislators critical of what they felt was unregulated content began to reference AI in code with their platform and technology transparency and accountability bills (ABs 2826, 1545, 587 and SBs 1018, 1216 of the 2021-22 Legislative Session).

Since then, large language models like ChatGPT and its subsequent competitors have forced legislators to reckon with AI in a different capacity. In just the first year of this two-year 2023-24 legislative session, we saw the introduction of the following seven AI-regulation proposals, in addition to the now-standard transparency and accountability bills:


ACR 96 (Hoover) – 23 Asilomar AI Principles: This measure would express the support of the Legislature for the 23 Asilomar AI Principles as guiding values for the development of artificial intelligence and of related public policy. Currently paused in the Senate Judiciary Committee.

AJR 6 (Essayli) – Artificial Intelligence: This measure would urge the United States government to impose an immediate moratorium on the training of AI systems more powerful than GPT-4 for at least six months to allow time to develop much-needed AI governance systems. Currently paused in the Assembly Privacy & Consumer Protection Committee.

SB 294 (Wiener) – Artificial Intelligence Regulation: This bill would express the intent of the Legislature to enact legislation establishing standards and requirements for the safe development, secure deployment and responsible scaling of frontier AI models in the California market, including a framework of disclosure requirements for such models. Currently paused in the Senate Rules Committee.

SB 313 (Dodd) – California AI-ware Act: This bill would establish the Office of Artificial Intelligence within the Department of Technology and would grant the office the power and authority necessary to guide the design, use and deployment of automated systems by state agencies—ensuring that all AI systems are designed and deployed in a manner consistent with state and federal laws and regulations regarding privacy and civil liberties, that minimizes bias and that promotes equitable outcomes for all Californians. This bill was held on the Suspense File in the Senate Appropriations Committee and may be acted upon beginning January 2024.

SB 398 (Wahab) – Artificial Intelligence for California Research Act: This bill would require the Department of Technology to develop and implement a comprehensive research plan to study the feasibility of using advanced technology to improve state and local government services and provide a report to the Legislature on the findings of its research. This bill was held on the Suspense file in the Senate Appropriations Committee and may be acted upon beginning January 2024.

SB 721 (Becker) – California Interagency AI Working Group: This bill would create the California Interagency AI Working Group to deliver a report to the Legislature regarding artificial intelligence. The bill would require the working group members to be Californians with expertise in at least two of certain areas, including computer science, artificial intelligence and data privacy. This bill was made a two-year bill at the request of the author.

SCR 17 (Dodd) – Artificial Intelligence: This measure would affirm the California Legislature’s commitment to President Biden’s vision for safe AI and the principles outlined in the “Blueprint for an AI Bill of Rights” and would express the Legislature’s commitment to examining and implementing those principles in its legislation and policies related to the use and deployment of automated systems. This measure has been chaptered.

The Next “Gold Rush”?

There is a race to establish innovation and leadership on AI, and it is important that California insert itself in that race. Local governments with tech ecosystems like San Francisco are bullish on AI as cities throughout California still struggle with pandemic recovery, and vacant commercial property continues to stall downtown recoveries as remote work keeps much of the workforce on a limited in-office schedule. With companies in the AI space exploring and signing leases in downtown San Francisco, Mayor London Breed was quick to encourage the trend, declaring San Francisco the “AI capital of the world” and comparing the arrival of the industry to the city’s Gold Rush era. Silicon Valley will also look to players in the space to headquarter or expand within Santa Clara County. Mayor Matt Mahan in San Jose, a strong voice for innovation and outside-the-box approaches, worked with his City Council to draft a memo urging the city to explore incentives to attract AI companies to San Jose—including looking into discounted utility rates and expedited permitting processes. The memo also seeks to encourage innovators to explore AI solutions to civic problems like traffic safety, accountability and transportation, among others.

While the San Francisco Bay Area has planted a strong flag, other regions could look to incentivize AI growth in their backyards. “Silicon Beach” in Los Angeles encompasses the growing tech hubs of Santa Monica, Venice, Marina del Rey, Playa Vista, Playa del Rey and Manhattan Beach. Major technology companies have invested in office space in this ecosystem, which also boasts a very strong startup culture. San Diego, a major life science hub, could potentially see growth as well.


Given that most AI-related bills did not make it through the legislative process this year, California will undoubtedly see a significant increase in AI-regulation bills in 2024. The question dividing legislators is not whether regulation is needed, but rather what type of regulation is best. Some will seek to explore the feasibility of applying AI to improve state and local government services, while others have proposed an outright moratorium on AI development until regulators can catch up.

The pace of change in California’s emerging AI-regulatory landscape seems to be constantly accelerating. Businesses that have been integrating AI or AI-adjacent technologies into their products and services will be wary of efforts by the Legislature to place broad restrictions on emerging technology. Meanwhile, consumer advocates will likely continue to push the envelope, calling for stricter regulation in the name of consumer protection and the minimization of harm to marginalized groups. So far, the Legislature has shown a willingness to entertain the concerns of advocates, but only to a point. As AI becomes more central to economic success across wide swaths of the economy, legislators will face increasing pressure from their constituents (and donors) to act with a light touch.
