Technology   //   August 14, 2023

AI-adoption reality check, by the numbers  

Over the last six months, AI has gone from a talking point and toolkit for tech-focused employees and businesses to a mainstream work tool and a business priority for a large number of organizations.

But like many hyped-up topics, after the initial frenzy of interest, enthusiasm can taper. AI has been subject to its own hype cycles for decades. Experts have a term for the periods that follow a burst of excitement around the tech’s potential: AI winters. 

“AI generally, has been on the minds of people in tech for the last 30 years and there’s been a whole bunch of false starts,” said Nick Seeber, internet regulation global lead for Deloitte. He described these so-called AI winters as times when people thought a tech breakthrough would finally move the needle for the business world, but it never did.

Seeber believes the current AI hype cycle is different, however. “What we’re seeing now after all those false starts is something truly transformational. And people are talking about it like the invention of electricity,” he said.

And yet the initial fear and feverish excitement are giving way to a slightly more considered, cautious embrace of the tech, as all kinds of flaws and kinks surface (hallucinations, copyright infringement, data privacy issues) that could cause chaos if left unchecked.

So while this hype cycle may continue far beyond the typical expiration date, or at least until AI has become more seamlessly integrated into workplaces, it’s still important to have a regular reality check.

We’ve taken a look at AI’s adoption rate, by the numbers.

C-suite execs are all-in on generative AI 

It’s generative AI – courtesy of ChatGPT – that has boosted awareness of AI’s potential to disrupt workforces. A quarter of C-suite execs are personally using gen AI tools for work, and the topic is on a large number of companies’ board agendas, according to McKinsey’s State of Generative AI in the Workplace 2023 report, published in early August. And 40% of respondents say their organizations will increase their overall investment in AI because of advances in gen AI.

Meanwhile, other emerging evidence suggests that AI isn’t just a tool for speeding up tasks – it’s an ally in becoming a high-performing leader. The most effective leaders are already incorporating AI into their daily tasks. They understand the importance of delegation in driving success, and AI is helping them to achieve that at scale, according to a study of 6,000 high-performing executives from companies including Deloitte, Bank of Montreal and Royal Bank of Canada, conducted by The Work Innovation Lab by Asana and employee performance consultancy Wells Performance. 

The study calculated that CEOs who excel at delegating generate 33% higher revenue than the average senior leader. Rather than attempt (and fail) to accomplish everything alone, these execs position their teams to tackle tasks they’re confident they can achieve, in turn empowering employees, boosting morale and increasing productivity, per the report. And by using AI tools to offload routine and mundane tasks to smart systems, they free up energy and time for strategic decision making.

People are less afraid, more hungry to reskill in AI

The consensus is that talent and skills will shift to adapt to new remits once AI is more embedded within organizations. And people are getting bolder about experimenting: 79% of employees say they’ve had at least some exposure to gen AI, either for work or outside of work, and 22% say they are regularly using it in their own work, per McKinsey.

Meanwhile six in 10 workers want to use AI to empower their career growth, according to data from customer engagement platform Amdocs. Another 28% want AI itself to help them reskill, while 30% want an AI solution that can scan their resume and match them to new jobs within their existing company.

For all the talk of AI replacing jobs, there is ample excitement about how it can be used to boost productivity and efficiency. Two-thirds (66%) of full-time workers want their employers to offer AI solutions. And they’re interested in how AI can make their work lives easier and more fulfilling, per the Amdocs research.

Julia Hammond, president of advertising group Stagwell Global, said leaders across the organization are experimenting with AI and are often asked for guidance on how their clients should respond to it. “In order for leaders to have those relationships that feel like their value-add to client business, they need to be versed on this stuff, so they can’t reject it. And if you haven’t spent time on [AI] platforms, or been a little bit curious about it, you will absolutely get left behind,” she said. 

Stagwell has its own internal AI sandbox that pulls together all the available AI tools and technology and allows staff to experiment with them across its suite of internal products. “It just invites people to play with it, because if you aren’t experimenting, or at least curious, that’s what’s going to create a divide [in people’s skills and abilities],” said Hammond. People have to want to take the initiative to learn, across all age groups, and accept that change is inevitable, she stressed.

Marco Bertozzi, co-founder of the independent business consultant community The Zoo London, said that while AI’s integration within businesses may lead to the loss of some job roles, its potential to free up time for strategic thinking across teams is exciting and reminiscent of earlier automation cycles, such as the arrival of programmatic advertising, a development that also led to speculation about lost jobs.

“We saw it as a good thing because suddenly our talent didn’t need to waste time on loads of menial tasks. Instead we got them thinking about strategy and dealing with the clients. I feel like AI will be another similar version of that,” he said.

Lazy fact-checking will screw things up 

There are already a ton of red flags surrounding generative AI and the speed at which it has ripped through workplaces. ChatGPT’s false facts have gotten lawyers into serious hot water when they used it to inform research for cases, and generative AI tools in general are wreaking havoc when it comes to copyright infringement.

While most AI rhetoric currently touts the need for human supervision and sign-off for all things AI-generated, there are still clear signs that many people’s attitudes toward the false information AI can surface are worryingly laissez-faire. McKinsey’s report highlighted that fewer than half of the 1,684 respondents said their organizations are actually mitigating the risk of inaccuracy caused by generative AI being used for work.

HR execs have an AI help hotline already, as employees’ use of generative AI outpaces companies’ ability to put standards in place around how it should be used within organizations. And few companies seem prepared for the business risks these tools can bring – a worrying situation when so many employees are already using them.

Just 21% of respondents in McKinsey’s study who said their companies had adopted gen AI have seen their organizations establish policies governing employees’ use of gen AI technologies in their work. “It [gen AI] can get you 75% of the way there – it can help you create the first draft [of a piece of work or document] – but that remaining 25% needs to come from the human – and that’s something you can’t replace,” said Hammond.

Russell Marsh, CEO of consultancy BlueMozaic and a member of The Zoo London consulting community, regularly advises business leaders on AI implementation. He stressed the need for businesses to keep a close eye on AI regulation coming down the pike while they are training AI models. He believes companies will begin to ring-fence their data, so that in the future others’ training models won’t be able to scrape it from the web. That data free-for-all is leading to the current mess over copyright infringement and data privacy concerns. “If everyone’s trained all these algorithms on the same data, there’s no value,” he added. “It won’t be the algorithms that will differentiate – those will constantly evolve – it’s the data sets on which you train the algorithms that will be the critical pieces.”