Could chief diversity officers tackle companies’ growing list of AI decisions?
Chief diversity officers are in the hot seat.
After a two-year boom in demand for CDOs to put diversity, equity and inclusion at the heart of their companies, the effectiveness of making a single role responsible for moving the needle on DE&I targets has come under fire.
The talent pool has also changed: diversity executives have exited as company priorities shifted, as The Wall Street Journal reported this summer. High-profile DE&I executives at companies including Netflix, Disney and Warner Bros. Discovery have resigned or been fired, thousands of diversity-focused workers have been laid off since last year, and some companies have scaled back racial justice commitments.
As organizations set their budgets for next year, there’s potential overlap between their investments in generative artificial intelligence and their DE&I goals, one that could renew the importance of a CDO.
As the new year approaches, the questions include: how can executives ensure AI is fair and unbiased, who should be at the table when organizations make tech decisions, which vendors should they support, and how can they use the technology in ways that don’t harm marginalized groups? Who’s more familiar with bias and fairness than a CDO?
What we know so far about how AI can impact marginalized groups
Most companies use some form of AI in their hiring processes. But these processes can still be prone to bias if the wrong tech is used, which can result in certain pools of candidates being overlooked — a focus for any CDO.
Choosing the right tech and overseeing the necessary audits could both be folded into the CDO’s remit, especially as legislation starts to enforce this process, such as New York City’s Local Law 144, which requires a bias audit of any automated employment decision tool before it is used.
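Those audits center on straightforward math: the selection rate for each demographic category, and an impact ratio comparing each category’s rate against the most-selected category’s. Here is a minimal sketch of that calculation with made-up candidate counts; the 0.8 flag is the EEOC’s “four-fifths” rule of thumb, not a threshold the law itself sets.

```python
# Illustrative sketch of the impact-ratio math behind a Local Law 144-style
# bias audit. Candidate counts are invented; a real audit uses the tool's
# historical data across the sex and race/ethnicity categories the law names.

# (candidates the tool screened in, total candidates) per category
results = {
    "Group A": (120, 400),
    "Group B": (45, 200),
    "Group C": (30, 150),
}

# Selection rate: the share of each category the automated tool advanced.
selection_rates = {g: selected / total for g, (selected, total) in results.items()}

# Impact ratio: each category's selection rate divided by the highest rate.
best = max(selection_rates.values())
for group, rate in selection_rates.items():
    ratio = rate / best
    flag = "  <-- review" if ratio < 0.8 else ""  # EEOC four-fifths rule of thumb
    print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f}{flag}")
```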
Selecting a proper AI vendor is just the start. Then there needs to be oversight of how employees use the tech, to ensure it’s being leveraged equally across age groups and genders.
New research from Charter, a media and services company focused on the future of work, found gaps in how AI has affected historically marginalized groups. The research, conducted in August, included a literature review, expert interviews and a 1,173-person survey, and it surfaced differences of opinion among respondents.
Charter’s data found that over half of Black respondents were concerned about AI replacing them in their jobs in the next five years, 14 percentage points higher than the share of white respondents. Female respondents (35%) were less likely than male respondents (48%) to be using generative AI tools in their jobs, and workers aged 18 to 44 were far more likely than colleagues 55 and older to have used generative AI in their work to date.
“As I reflect on this, there is a real watchout space around gender and ageism,” said Emily Goligoski, head of research at Charter. “I worry about the intersection of those two things, and what does it mean for those workers’ participation and mastery of generative AI tools?”
“I’m struck by the extent to which people in some of the historically marginalized groups that we’ve been talking about say ‘involve me in these processes,’” she added. “They’re saying they want to know what vendors are being chosen, how it will impact their job, what will happen if the technology doesn’t work as planned. That’s really encouraging.”
Is the chief diversity officer the right person to focus on AI?
Charter’s research further found that AI can decrease economic inequality by narrowing the productivity gap between high- and low-performing workers and by automating complex tasks, thereby lowering barriers to entry for some professions.
That’s something Jyl Feliciano, vp of diversity, equity, inclusion and belonging at sales enablement platform Highspot, has seen herself.
Feliciano is intimately involved with AI decisions at Highspot and also recently became a board advisor for AI-driven people analytics platform Included, for which she is compensated. She recommends that all CDOs not only think more about AI, but make it a focus of their work. Feliciano does have a tech background; before becoming a leader in DE&I, she was an operational engineer. But she says that even without that background, it’s something other DE&I leaders can and should get involved in.
“I became really curious about not only AI but how there was so much bias ingested in AI,” said Feliciano. “It made sense, because people who often have access to developing these innovations, they don’t look like us. They’re developing the models, and it’s inherently going to be biased.”
For example, AI can flag earlier when a department isn’t hiring a diverse workforce for new positions, surfacing trends that might otherwise go unnoticed until much later.
She uses Included daily; it helps her analyze employees’ histories, identify when someone should be up for a promotion, compare compensation packages, and track other key indicators that might put an employee at risk of leaving.
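Included’s analytics are proprietary, so as a purely hypothetical illustration, here is the kind of rule-based flag a people-analytics platform might surface; every field, threshold and record below is invented.

```python
# Hypothetical sketch only: not Included's actual logic. A people-analytics
# flag might combine tenure, time since promotion, and pay position to
# surface employees who merit a closer look.

from dataclasses import dataclass

@dataclass
class Employee:
    name: str
    years_in_role: float
    years_since_promotion: float
    pay_vs_band_midpoint: float  # 1.0 = at the midpoint of the comp band

def flags(e: Employee) -> list[str]:
    out = []
    # Thresholds below are invented for illustration.
    if e.years_since_promotion >= 3 and e.years_in_role >= 2:
        out.append("promotion review")
    if e.pay_vs_band_midpoint < 0.9:
        out.append("compensation review")
    if len(out) == 2:  # both signals together suggest attrition risk
        out.append("attrition risk")
    return out

for e in [Employee("A. Rivera", 4.0, 3.5, 0.85), Employee("B. Chen", 1.0, 1.0, 1.02)]:
    print(e.name, flags(e) or ["no flags"])
```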
AI can also be used to send surveys focused on inclusion and belonging, said Donald Knight, chief people officer at hiring operating system Greenhouse.
“What AI tools have not been developed yet that can help with those things?” said Knight. “DE&I can be a part of the AI innovation process and usher in conversation about what to bring to market.”
Collecting data for DE&I goals with AI
One thing workplace leaders say DE&I experts have failed at time and time again is using data to build a business case for DE&I.
“Data has escaped us a little bit,” said Cliff Jurkiewicz, vp, strategy at global HR tech company Phenom. “Every organization will be required to do these AI audits. I view this as the perfect opportunity for someone who is focused on inclusivity in organizations to leverage not only the selection of these tools, but the assessment. An individual that’s focused on inclusivity could have the responsibility to say what kind of testing methodology they want.”
Being involved in the entire process, from AI selection to assessment, can help CDOs create standards and show other executives the importance of this work in DE&I and beyond.
“Who better to lead that effort than someone focused on diversity?” said Jurkiewicz.
But what if the chief diversity officer isn’t an AI expert?
They don’t have to be. “DE&I trends move and change daily,” said Feliciano. “It’s that level of rigor and discipline and consultative approach that we need to take with AI.”
But Feliciano does have some tips for other CDOs looking to be included in decisions about which new technologies should be brought on board.
CDOs should focus on what type of algorithm the AI platform is using and whether it’s based on internal or external data. If it’s using external data, it will be inherently biased, stressed Feliciano. “Ours could be biased too, but it’s better and we have more integrity over it,” she said.
From there, she asks what ethical guidelines they use in their model to validate its findings, including whether they use predictive analysis and whether they test the model to identify bias.
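What such a bias test can look like in practice: below is a minimal sketch of one common check, comparing a model’s positive-recommendation rates across groups. The data, group labels and interpretation are illustrative assumptions, not Feliciano’s or any vendor’s method.

```python
# Minimal sketch of a demographic-parity check: does a model recommend
# candidates at similar rates across groups? Data here is invented.

predictions = [  # (group, model_recommended)
    ("group_x", True), ("group_x", True), ("group_x", False), ("group_x", True),
    ("group_y", True), ("group_y", False), ("group_y", False), ("group_y", False),
]

by_group: dict[str, list[bool]] = {}
for group, recommended in predictions:
    by_group.setdefault(group, []).append(recommended)

rates = {g: sum(v) / len(v) for g, v in by_group.items()}
gap = max(rates.values()) - min(rates.values())

print(rates)                     # {'group_x': 0.75, 'group_y': 0.25}
print(f"parity gap: {gap:.2f}")  # a large gap is a prompt to investigate, not proof of bias
```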
But one of the simplest questions can go the furthest: who is on the team developing these AI models? Ideally, there should be multiple people in that room to represent the different stakeholders who will be impacted.
“It’s a moral imperative that we are bringing people with different lived professional experiences to these committees, councils, and vendor selections because they will bring considerations that a group of leaders on their own simply would not,” said Goligoski. “Are we asking meaningful questions of your staff about how they want to develop their own careers with the use of new technologies?”