HR leaders race to keep up with employees’ AI use
Human resource professionals are scrambling to keep on top of how employees are using artificial intelligence on the job.
The Society for Human Resource Management is receiving between 30 and 50 calls a week from HR execs at companies across a variety of knowledge-based industries asking how to deal with employees using ChatGPT and other AI-powered tools at work. “This is one of the real hot topics that folks are interested in,” said Mark Smith, SHRM’s director of thought leadership.
Many companies are still working on their own policies guiding use of the nascent technology. But in many cases, the horse has already bolted — employees are already using generative AI to assist with work, regardless of whether their company has approved it.
That’s a concern for HR professionals, many of whom haven’t yet issued official guidelines on the best way to use the tech. Inaccurate or poor-quality work produced with the tools, along with privacy issues and potential legal troubles, are the top concerns HR leaders are seeking to better understand and build guardrails around as more people and organizations adopt AI.
“So many of my partners and my network are seeking guidance and insights from industry leaders and professional networks because they’re a little bit at a loss,” said Jess Elmquist, chief human resources officer at Phenom, an HR technology company.
But Elmquist urges employers to expedite creating and communicating new rules internally, so employees know what is company policy. “The damage that can be done is your teams are using it anyway,” said Elmquist.
“Having some bumpers and guidelines will help me go ‘OK, I can take a breath and I no longer have to actually act like I’m not using generative AI. Now the company is giving me some basic tenets to be able to do that effectively,’” he said.
There are three key guidelines around AI use that companies should understand and communicate through new policies, experts say.
The first is to make sure employees heavily scrutinize chatbot outputs for accuracy. One major flaw with ChatGPT and other AI tools is that they can hallucinate, or spit out answers that include incorrect or fabricated information.
Keeping proprietary information safe is the second most important guardrail employees should know about, as large language models digest that content and can relay it to others outside the organization. For example, last month Google warned its own employees not to enter private company information when using the company’s AI-powered chatbot, Bard.
The potential legal implications of AI use are the third most important point for employers to understand, create policies around and communicate to employees, especially those involved in hiring and recruiting.
For example, New York City’s new law requires a bias audit on any automated employment decision tool that scores or ranks applicants before it’s used, and companies must notify applicants that they’re using AI tools in the hiring process.
HR teams and others on the hiring side will have to know about those new regulations and follow them accordingly. Companies will also likely have to reevaluate and tweak their own AI policies as new laws and concerns arise with more time and greater use.
“The concerns now may be different than those six months from now,” Smith said.
Employers should also clearly explain why they’re putting certain rules in place, and shy away from blanket bans, “which may lead to employees doing it on the side and not telling people about it, and that’s not good for anyone,” he said.
HR teams can also start by leveraging existing frameworks to address employee AI use, Elmquist said. “What are your current data privacy, diversity and inclusion, employee well-being, rules and regulations and controls inside of your organization?” he said. “Take those and fundamentally use those right now as guidelines around access to generative AI.”