Technology // June 12, 2023 · 6 min read

Generative AI boom sets new challenges for company-employed AI ethicists

As artificial intelligence continues to explode, the conversation around responsible and ethical AI deepens. That’s why the role of the AI ethicist is more critical than ever.

AI ethicists have been around since machine learning gained traction over a decade ago, but Thomas Krendl Gilbert, the AI ethics lead at AI ethics platform daios, describes this new crop of ethicists as the 2.0 version.

Five to 10 years ago, ethicists were largely concerned with bias in data and the inaccuracies that could come at the expense of groups of people, usually defined by demographics or identities. Today, that’s still a big issue, but there are more layers to it, including how AI and humans will coexist in the future.

“AI ethics is at a turning point right now,” said Krendl Gilbert, who received his PhD in machine ethics and epistemology from UC Berkeley in 2021. “The issue is not just removing bias or ensuring that the model an algorithm generates from data is fair, accurate or transparent in a technical sense, but rather that AI systems themselves be made substantively good.”

In other words, the goal is no longer just to make individual components of AI unbiased, but to make the entire system align with human values and purposes.

“That’s a completely different frame through which you understand what is at stake with AI,” said Krendl Gilbert. “Why does automation matter? Who is going to get harmed? What are the types of harms that are at stake? It’s a very different landscape.”

Having someone navigate all these things can be a heavy lift, which is why some companies have dedicated an entire role to it instead of tacking the duties onto an existing worker’s remit. The question of AI ethics no longer centers on a strictly technical research agenda but on a larger conversation about its societal impact, one that is playing out in the public sphere, stressed Krendl Gilbert. And as AI gains new capabilities not only in generative text but in image and audio too, it raises additional questions for ethicists.

He gives an example: think of self-driving cars. Developers improve the algorithm so a car doesn’t hit objects and always recognizes stop signs. However, these cars could start doing things that make other drivers and pedestrians uncomfortable, in ways the developers don’t understand.


“It will make them less inclined to trust roads on which these systems are being deployed,” said Krendl Gilbert. “So what a lot of people in AI ethics right now are struggling with is how is it that even as we technically improve the system, we end up changing the world in ways that are bad or that people don’t like? It requires a different point of engagement.”

The job looks a little different at each company, but overall AI ethicists are consistently trying to call out where a system might be biased, open up discussions around ethical systems, and research the space. ZipRecruiter found that AI ethics roles peaked in 2021, with over 12,000 listings that year. In 2022 that number dipped to just over 11,000. This year, however, there are just 1,139 listings.

“Despite the heightened interest and increased investment in AI, demand for AI talent is not immune to the larger economic trends affecting the tech industry more broadly,” said Julia Pollak, chief economist at ZipRecruiter.

But that poses another challenge for AI ethicists who are already in the role. How exactly should they go about the job? What direction should they be moving in?

“The field is going through a change right now,” said Krendl Gilbert. “I think many different researchers are struggling to make sense of it. There are active debates. What happens during a paradigm shift? People have different interpretations of what’s going on and different agendas are being pursued.”

In early April the Future of Life Institute penned a letter titled “Pause Giant AI Experiments: An Open Letter,” in which it called on all AI labs to immediately pause the training of AI systems more powerful than GPT-4 for at least six months. It received over 30,000 signatures. The non-profit argued that “contemporary AI systems are now becoming human-competitive at general tasks,” which has indeed been an ongoing debate. Its suggestion was that “powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”

Megan Smith-Branch helps developers get to the root of some of these ethics questions. She is the deputy lead of the responsible AI team at Booz Allen Hamilton, a company that works with the military and government to solve complex problems, including navigating their use of AI. That means she works to help scale and operationalize responsible AI within the federal government.

“What we do is take responsible AI and try to make it tangible, practical and useful,” said Smith-Branch. “What we know and what we see in ethical and responsible AI areas is vague. People are working across these vague philosophical principles and trying to apply them to technology.”

That’s why the team uses an ethical ATO, or “authority to operate,” which is essentially an assessment that evaluates ethics risks, safety, and compliance. They consider what matters to the organization that owns and operates the systems and then evaluate it against a set of principles. From there, they identify potential risk factors for the legal ecosystem, the team ecosystem, the environmental ecosystem, and beyond. Alongside that, they provide recommendations and considerations to reduce that risk and review it together.


On a typical day she evaluates one of the AI systems using that assessment, works with a team to review it, and has conversations with internal teams about generative AI and what they should continue to monitor. She also works with numerous teams, both internal and external, across the defense and intelligence industries.

“What I’m noticing is that data scientists are very excited and very open to having other disciplines come in and assist them in mitigating these risks,” said Smith-Branch. “We’ve sort of pigeonholed it so it belongs in computer science, but in reality, different areas within the organization use tools throughout the day to monitor ethics at a human level. AI is trying to replicate human behavior, so if you have expertise in that, from philosophy or sociology or behavioral science, it’s super important to be a part of a responsible AI team.”

She says she is regularly reminded of the importance of having a team like this one, given risks that range from reputational damage to loss of intellectual property.

Besides pushing for AI to be built with ethics in mind, people are also trying to ensure that folks use the technology ethically. Asha Palmer, svp of compliance solutions at corporate digital learning company Skillsoft, focuses on putting the right infrastructure and guardrails around how people behave and how they use new AI technology, and on helping guide them.

“You can say here are the risks, here are the opportunities, now go forth and use it, but there’s got to be guardrails in place, a bridge to the employees who are actually using it,” said Palmer, who focuses on developing that learning, training and communication between AI ethicists and the employees using the technology.

Part of that is navigating what company policies should be put in place, how to prompt responsibly so that you aren’t sharing confidential company information, and more, depending on the industry.

“It’s an iterative process,” said Palmer. “You might not see all the risks at first or all the ways people can get around the rules. You’re not going to be able to understand the whole ecosystem of use or misuse at the outset. You have to learn and keep developing and redeveloping those guardrails and communicating those. We still don’t know all of the potentials and the consequences and the checks and balances being built.”