
Here’s your AI-at-work cheat sheet

ADP, Indeed, LinkedIn, and Workday team up to outline best AI practices for HR.


Sometimes when a task feels too big to tackle, we put it off. Addressing the responsible deployment of AI at work might feel like a giant undertaking, but if drafting an HR policy on AI is something you’ve been putting off, you’re in luck.

The Future of Privacy Forum has teamed up with some of the biggest HR software and tech companies to outline best practices for the responsible use of AI at work.

“I am very optimistic, but also very concerned about the unfolding use of AI that we’re seeing,” said Trey Causey, the head of responsible AI and senior director of data science at Indeed, during a November press conference. “We are very early in the AI age, despite the fact that AI is itself an old term...I am watching to see how we can protect ourselves against the worst outcomes…as well as encourage the best outcomes, especially for job-seekers,” Causey added.

The framework, released earlier this fall, is a collaboration with ADP, Indeed, LinkedIn, and Workday. It outlines six cornerstones to guide AI deployment and covers the responsibilities of both developers and those who deploy the tools.

“We really want to encourage employers and job-seekers who are using AI…to think about what [are] the ineffable human parts,” Causey said. “It’s that connection with the person who you think might be perfect for the job. Rather than reducing individuals to just a list of attributes and experiences, you can really get to that human connection.”

TL;DR. The best practices include:

  • Non-discrimination: AI tools, especially those used in hiring, should protect against bias, and both developers and the employers who use these tools should test them to ensure they operate ethically and in compliance with anti-discrimination laws.
  • Responsible AI governance: Outline rules and practices that oversee how AI will be used within your organization and who is responsible for operation and oversight of the tools.
  • Transparency: Employees should know when and to what extent AI tools are being used, along with any “consequential impacts.”
  • Data security and privacy: Protect personal information when using AI tools.
  • Human oversight: AI tools should be designed and used with human oversight.
  • Alternative review procedures: Developers should design AI tools with alternative review procedures in mind.

The guide also highlights relevant US law that HR pros should be mindful of when deploying AI. Local, state, and federal governments are racing to enact new laws regulating AI.

“What we’re seeing here is a lot of interest in the regulatory space. We’re seeing a lot of interest in protecting against some of the worst risks that might materialize, and rightfully so,” Causey told reporters. “We’re also seeing a lot of focus on how…we [can] do that responsibly while protecting innovation and being able to take advantage of all the opportunities that AI can afford.”
