AI governance really matters amid evolving compliance landscape
Lawmakers and regulators have struggled to keep up with emerging AI uses and risk.
There’s a famous saying you’ve probably heard about building the plane while flying it, but for AI governance pros, there’s no hangar in sight. Building AI governance systems, it seems, will continue to happen on the fly.
As AI tools in the workplace evolve from experimentation and beta testing into core everyday infrastructure, the pros charged with guiding deployment and use, and with managing the technology’s risk, face an ongoing challenge. While organizations push forward with AI tools and new processes, the legal and regulatory environment lags behind, remains fragmented, and is often in flux, making governance a complicated task.
“What our clients are dealing with is—in some ways—very similar to what they’ve been dealing with for the past three years, which is uncertainty,” said Proceptual founder and CEO John Rood, who helps companies with AI governance and compliance efforts. “Not only do we not know what government, at what level, will pass what legislation with any reasonable certainty, we also don’t know, if legislation is passed, whether it will actually be put into effect.”
Lagging. AI legislation and regulation lags significantly behind development and deployment, according to Rood.
State-level efforts in places like Illinois and Texas are continuing to evolve. Colorado’s marquee AI governance law has been undergoing changes and revisions since its adoption. The European Union AI Act has also faced delays and revisions ahead of enforcement.
The resulting persistent uncertainty means companies and their compliance and legal teams lack clarity on what rules will exist and how compliance and enforcement will be pursued.
Enforcement. Even where rules do exist, enforcement is far from settled. Rood pointed to a recent Cornell University study indicating abysmally low participation in New York City’s Local Law 144, which requires that employers using Automated Employment Decision Tools for hiring or promotions in NYC undergo bias audits, share the results publicly, and notify candidates of the tools’ use. Only 5% of NYC companies that were hiring listed audit results, and another 4% complied with the transparency notice requirements.
Rood suggested that even those results may be skewed towards compliance, noting that there’s been little enforcement momentum on the part of the city.
Vendors. As uncertainty persists, deployers are asking vendors to carry more weight. HR and enterprise customers are increasingly asking their vendors both to help them understand compliance and to provide stronger governance, transparency, and risk controls.
“There’s an evolving expectation in the vendor and in the vendor-implementer relationship, where the implementers or deployers of AI systems are pushing a little bit harder on vendors than they have in past years,” Rood said.
Lawsuits against vendors like Workday and Eightfold AI have also raised questions about accountability when AI systems potentially (and allegedly) produce biased or discriminatory outcomes.
What’s HR to do? Rood pointed to established frameworks from both the National Institute of Standards and Technology (NIST) and the International Organization for Standardization (ISO) as a good starting point for a compliance and governance strategy that can mitigate risk.
“What we advise clients on now…is to really think about a broad compliance program companies need to be implementing—either the NIST AI Risk Management Framework or ISO 42001 or both—because ultimately that’s going to capture 95% plus of any foreseeable regulation,” he said.
ISO 42001 is a certifiable international standard focused on formal management systems governing AI use. The NIST AI Risk Management Framework (AI RMF) is a voluntary, US-based framework that offers guidance but no formal certification. Both are aligned with where Rood suggests the eventual compliance landscape may land.
“The actual mechanisms of both the frameworks are like 90% the same,” he said. “ISO tends to be a little bit more process driven. Whereas NIST is more values driven. But functionally…there’s not a lot of meaningful distinctions that really change the way that an organization would implement their governance frameworks based on those differences.”
Governance aligned with either (or both) of the NIST AI RMF and ISO 42001 is a good first step, but Rood also recommended layering on standards and controls from prominent industry-specific frameworks, and incorporating company-specific risks as well as corporate and employer values.
About the author
Adam DeRose
Adam DeRose is a senior reporter for HR Brew covering tech and compliance.