California Gov. Gavin Newsom made headlines Monday by signing landmark legislation to govern the development of frontier AI technology and mitigate its risks. But the Golden State this week also began enforcing new AI regulations on automated-decision systems (ADS) that apply to all employers in the state.
Newsom signed the Transparency in Frontier Artificial Intelligence Act, a first-of-its-kind law that oversees the development of major AI technology systems, requiring companies like OpenAI and Anthropic to disclose their safety measures and report any risks associated with their tech.
The measure also strengthens whistleblower protections, shielding workers at these companies from retaliation for flagging potential or apparent risks.
“California has proven that we can establish regulations to protect our communities while also ensuring that the growing AI industry continues to thrive. This legislation strikes that balance,” the governor said in a statement about the new law.
But wait, there’s more. While the signature new law doesn’t touch on the work of HR pros like yourself, other regs do: it isn’t the only major AI regulation in the state addressing the burgeoning technology as it’s deployed in homes and businesses across Cali and the rest of the country.
Taking effect Oct. 1, amendments to the state’s Fair Employment and Housing Act (FEHA) regulations now also prohibit businesses in California from using ADS that discriminate against applicants or employees based on protected statuses.
The changes require employers to follow specific record-keeping practices for any use of an ADS, meaning systems that use “artificial intelligence, machine-learning, algorithms, statistics, and/or other data processing techniques” to aid in employment decisions.
“One of the things that I’m telling my clients, and really starting to get out there, is that you’re already using AI, and chances are you’re exposing yourself to risk,” said Wende Knapp, who leads the employment law practice group at the Rochester, NY-based firm Woods Oviatt Gilman. Knapp’s practice closely aligns with an in-house counsel role for her clients.
Conducting bias audits can help businesses mitigate risk, and the measure outlines audits and record-keeping as possible affirmative defenses if properly executed.
“What this regulation does in a nutshell, it really forces employers to inventory all the AI tools that they’re using in HR. So take a deep look, understand what type of technology they’re using,” Knapp said.
The rule also outlines protections for those who require religious or disability accommodations.
Yup…there’s more still. The California Privacy Protection Agency (CPPA) in August finalized a separate set of regs aimed at the same sorts of tools; the agency refers to these AI-powered decision-making platforms as automated decisionmaking technology (ADMT).
“They don’t call it ADS, they call it ADMT, but it’s the same thing. [CPPA board members] talk about the processing of personal information, and so that’s their entry point into this,” said Niloy Ray, a shareholder at employment and labor law firm Littler.
These regulations update the state’s consumer privacy law to require a detailed risk assessment, notice of use, and observance of opt-out rights whenever these tools are used in employment decisions, according to an analysis from Littler.
“If both of these sets of regulations aren’t challenged, and I’m not sure that they will be,” Ray said, “then California employers will have to look closely at the CCPA [the state’s consumer privacy law] to see if their particular AI tool or use falls within it, and if so, they’ll have to do these things.” These same employers will also need to make sure any such systems adhere to California’s Fair Employment and Housing Act revisions via bias auditing and record-keeping.