We asked a labor lawyer what AI laws HR should look out for

As the compliance landscape evolves to cover how AI is used, HR needs to make sure it deploys the tech compliantly.

Mathisworks/Getty Images


Artificial intelligence tools are flooding the HR tech space, and as some regulations begin to take shape, deploying the tech compliantly could become challenging.

As we look ahead to 2024, and as jurisdictions across the country (and around the world) begin to consider regulating AI, HR Brew spoke with Natalie Pierce, partner at Gunderson Dettmer, and Niloy Ray, an attorney with the Littler law firm.

Pierce is a labor and employment attorney specializing in technology and the future of work. She’s been monitoring proposed and existing laws, regulations, and guidance to stay on top of the evolving legal landscape around automated decision-making tools. Ray spoke to HR Brew about the proposed regulations from the California Privacy Protection Agency following its December board meeting.

We asked Pierce and Ray to catch us up on a few things HR pros should be aware of when it comes to compliance and AI. Here’s what they said to look out for:

NYC’s Local Law 144. We know you know about Local Law 144, which America’s largest city began enforcing last summer. Companies cannot use automated recruitment tools unless certain criteria are met, including an annual audit whose results must be made public. Candidates also must be told that automated tools are being used and given an option to opt out.

Pierce suggested other localities might follow this ordinance as an outline for future regulations.

The EEOC. The Equal Employment Opportunity Commission last summer released guidance suggesting that if automated tech discriminates against protected groups in hiring, performance management, and monitoring, companies are on the hook for any disparate impacts. “The AI did it” is not a defense for violations of Title VII, according to the EEOC.

The Biden administration. President Biden issued his longest executive order to date, directing how federal agencies should begin to protect against the technology’s potential risks to the American public, including workers. The order builds on voluntary commitments the White House secured from some AI companies last summer.

“I think we’re just going to continue to see more of the same,” Pierce said, pointing to the EEOC’s guidance as an example. “But one newer call to action, I would say is: What are we going to do about the fact that we were going to have so much job displacement and so much need for new skills and the upskilling of the workplace and how are we going to make sure that we do that in a fair way.”

California. Draft regulations from the California Privacy Protection Agency were released in November. The agency’s board asked staff in December to revise its recommendations ahead of a formal rulemaking process. HR pros should consider paying attention to the Golden State in 2024 as it wrestles with how to regulate the tech.

“What I was really heartened by in all this is the agency’s thoroughness and thoughtfulness in putting out as many of the ideas, issues and stakes in the ground that they reasonably thought they needed to,” Ray said of a December CPPA board meeting exploring draft regulations. “And then recognizing that from amongst this mass of thoughts and ideas, something narrower and more feasible or practicable, needs to be crafted.”

Illinois. “Illinois has just been sort of an early leader” in protecting personal information, according to Pierce. The state championed a (bipartisan) biometric data privacy law back in 2008, and in 2020 began enforcing protections for candidates who participate in video interviews where AI is used. Mindful compliance pros should keep an eye on the Prairie State.

“The overall sense that we’re getting in this area of regulation in California, in New York, and in other places, including overseas, is that centrally, some greater level of transparency as to the use and potentially the impact of these automated decision making technologies is a desirable regulatory goal,” Ray said.

With these measures and others on the horizon in the coming year and beyond, HR pros should consider an internal AI policy that details how these technologies are procured and what considerations were made regarding fairness, equity, anti-bias impacts, and privacy, to protect against future litigation, he said.

“I am an AI optimist in terms of what it can do to really find the right talent and reduce bias in hiring selections,” Pierce said. “But it’s important for employers to create AI policies that protect the company.”
