Boston Consulting Group’s chief AI ethics officer shares his views on the risks and opportunities of AI

Steven Mills uses AI as a thought partner, but cautions that companies should know the potential risks that come with using the rapidly changing technology.

AI may be the topic du jour, but there’s still a lot of hesitancy around adopting the rapidly changing technology. More than one in three US workers are afraid that AI could displace them, and some HR leaders are concerned about its unknown effects on their roles and employees.

HR Brew recently sat down with Steven Mills, chief AI ethics officer at Boston Consulting Group, to demystify some of the risks and opportunities associated with AI.

This conversation has been edited for length and clarity.

How do you deal with workers’ AI hesitations and fears?

Once people start using the tech and realize the value it can bring them, they actually start using it more, and there’s a bit of a virtuous cycle. They report higher job satisfaction. They feel more efficient. They feel like they make better decisions.

That said, we also think it’s really important to educate people about the tech, including what it’s good at and what it’s not good at and shouldn’t be used for. Personally, I sit somewhere in the middle.

Where do you see the biggest risks with AI?

For us [BCG], if a use case falls into what we deem a high-risk area, there’s a whole review process to ask, “Are we even comfortable using AI in this way?”

Let’s assume we’re going to build the tech. The process systematically maps out all the risks, which could be things like, what if it gives a factually incorrect answer, or what if it inadvertently steers users toward a bad decision? And then, as we’re building the product, we decide what an acceptable level of risk is across these different dimensions.

Some people fear that incorrectly deployed AI could result in the technology learning to reinforce biases and create more potential for discrimination. How can we make sure that there’s a diversity of thought within LLMs?

We want to evaluate the product from input to output. Again, it goes to looking at the potential risks, which might be different types of bias, whether that’s bias against a protected group or things like urban versus rural. These things can exist in models. We talk a lot about responsible AI by design: it can’t be an afterthought. When you conceptualize the product, design it from the start, think about these things, and engage users in a meaningful way.

What do you hear from HR leaders about their feelings on AI transformation?

A lot of HR leaders are super excited about the productivity and value unlock of the tech, and they want to get it into the hands of their employees. The concern is making sure people are using the tech and feel empowered to use it, but are doing so in a responsible way.

I love to show fabulous failures of a system doing silly things that sort of make you chuckle, but it’s just a really good illustration that they’re not perfect at everything. And when people see that, it helps them realize, “I have to be thoughtful about how I’m using it.”

We work really hard with our people to make sure they understand that they can’t have AI do their work. Use it as a thought partner. Use it to help refine your points, but you need to own your work product at the end of the day.

How can smaller employers establish AI boundaries?

For small companies in particular, it can be as simple as leadership getting in a room and discussing where they’re comfortable using AI. Ultimately, some of this comes down to corporate values, so you need the senior leaders in an organization to engage in a dialogue. It doesn’t have to be fancy. It can literally be an informal document that says, “Here’s how it’s okay to use it. Here’s how you shouldn’t use it.”

Do you think AI could impact productivity requirements?

We want to make sure employees use AI for the productivity benefits, but we don’t want to approach it in a punitive way. It should be more like, if they’re not getting the benefits, it’s because we have failed. So then we’re enabling them, upskilling them, helping them see how to use the tools.

How do you use AI in your job?

I use it a ton as a thought partner…I might share the slide deck I’m going to use for a big meeting and say, “What questions would you have if you were the chief risk officer?” It’s just a way to help me prep. I also use it to give me counterpoints for arguments I have. It’s important that we still own our own ideas, but using this [AI] as a thought partner, something to challenge your thoughts, is pretty powerful in those cases.
