
What HR should know about AI governance

A cross-functional policy document requires input from across departments, including HR.


As companies rush to deploy AI across their organizations—especially in areas related to their workforce—shaping policies and guardrails to safely and compliantly use the technology has become top of mind for many.

Information security experts told IT Brew during a live event on AI governance last month that this isn’t simply an IT problem. It’s one that requires cross-functional considerations.

“First of all, it’s a shared responsibility,” said Guru Sethupathy, founder and CEO of AI governance platform FairNow. “Ultimately, you still want someone who’s a decision maker, because you don’t want to get lost in paralysis by analysis…That could be a new role.”

In addition to IT leaders, legal and compliance teams also need to weigh in. Business units, too, have a stake in this. HR’s role is particularly important because of the sensitivity of decisions being made with these tools.

“You see AI in the entire kind of value chain of HR, especially in the talent acquisition space,” Sethupathy said. “Everything from identifying and sourcing candidates to scoring résumés and scoring candidates to actual even interviews…As you can imagine, similar to financial services and your health, your career is one of the most important things for people, and so decisions that are being made with these AI systems are considered high risk across almost all frameworks.”

The starting line. AI governance allows companies to focus scrutiny on high-risk use cases while creating looser guidelines for less risky applications of the technology.


Matt Saner, senior manager of security specialist solutions architecture at AWS, told IT Brew that decisions that can impact life or health and safety should require stricter guardrails, and emphasized the “human in the loop” approach. He said more complicated AI use cases will require more guidance than the “pencil inventory counting tool.”

“At the end of the day, the organization has to make a determination of what their risk appetite is, what they’re willing to take on,” Saner said, adding that regulation and legislation can mandate some policies, but most are business decisions tied to the business’s priorities.

Sethupathy pointed to established frameworks such as the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework, sector-specific guardrails, and emerging AI regulations like the EU AI Act as helpful references for organizations.

“What you want are guardrails. Where are good guardrails?” Sethupathy said. “You have to combine NIST with specific guardrails that apply to your sector, and so there might be specific standards there that already exist or are starting to emerge.”

But experts cautioned against chasing every new set of guidelines.

“Be wary of the noise as well,” Saner said. “There are hundreds and hundreds of these emerging regulations, emerging frameworks, emerging standards, and looking at what matters to you and your industry and your geography…are really important.”
