Businesses are starting to panic over the artificial intelligence regulations coming down the pike in Europe. Perhaps for good reason.
Once the European Union’s AI Act comes into force, any business that flouts its guidelines around the responsible and ethical use of AI faces eye-watering fines that make even the hefty penalties brought by the General Data Protection Regulation (GDPR) seem like small potatoes. Companies that fail to adhere to the AI law will be on the hook for fines of up to 7% of global annual turnover (GDPR fines are capped at 4%) or up to €40 million ($44 million) (GDPR fines max out at €20 million).
A host of C-suite executives from large organizations including Airbus, Meta, Renault and Siemens have appealed to the European Parliament to water down the AI Act, on the grounds that it could stifle technology innovation across Europe. And OpenAI has also reportedly begun furtively lobbying to dilute the Act to reduce the regulatory burden on the company, according to Time.
Legal counsels believe they may have good grounds for doing so. “In the desire to be setting the world-leading tone of regulation I think the EU may have gone too far in a market that is still relatively nascent,” said Emma Wright, director of the Institute of AI and head of the tech, data and digital team for London-based law firm Harbottle and Lewis, which specializes in advising tech and media sectors. “The definitions [in the Act] are currently so broad that they will capture things that aren’t intended to be captured.”
Here are some things to know:
What does the EU AI Act seek to do?
The European Union’s AI Act marks the first major attempt by a cross-country regulatory body to put guardrails around the use of AI, to prevent any foreseen or unforeseen misuse of the tech that results in violations of individuals’ data privacy or discriminatory treatment of individuals. All companies (including U.S. ones) that operate in the E.U.’s 27 member countries (which no longer include the U.K. post-Brexit) and have employees or customers located there will have to adhere to the Act, or pay up. The Act will also be crucial to the U.K. AI industry, whether as exporters to the E.U. or as providers of AI-as-a-service.
AI’s tentacles are already a sprawling mess, with generative AI causing almost as many problems as benefits, whether by creating misinformation, fabricating facts (dubbed “hallucinations”), generating deepfakes or unleashing copyright infringement chaos. The EU AI Act will attempt to put some safeguards around this chaos by sorting how companies incorporate AI into four risk bands: unacceptable, high, limited and minimal.
Unacceptable risk: Social scoring, facial recognition, dark pattern AI, manipulation
High risk: Education, employment, administration of justice and democratic processes, immigration, law enforcement systems
Limited risk: Chat bots, deep fakes, emotion recognition systems
Minimal risk: Spam filters, video games
While some sectors, like financial services, will be impacted more deeply than others, any company that recruits employees (so everyone) can’t afford to stick its head in the sand.
Copyright infringement can of worms
Much of the current hand-wringing caused by the speed of generative AI adoption stems from the copyright infringement mess it has created. In the U.S., OpenAI – the creator of ChatGPT – is currently facing multiple lawsuits, filed last week by two law firms, for allegedly violating copyright laws.
While the copyright minefield won’t be easily cleared, the EU AI Act now intends to address it. Four weeks ago the proposal was amended to cover the copyright infringement issues created by generative AI. “This will ensure that generative AI providers will need to notify users when the content is AI generated, and then implement safeguards in training and design, ensure the lawfulness of generated content and make public a sufficiently detailed summary of copyrighted data used in model training,” said Wright.
Ideal though that sounds, the reality is likely to be messier. Say an artist visits a gallery, is inspired by the artwork of different painters, then leaves and produces their own portrait influenced by those painters: that’s deemed a new, original work of art. “That’s effectively what training data for AI systems does,” said Nick Seeber, partner and internet regulation lead at Deloitte. “It goes through looking at lots of examples of different types of works. It doesn’t store the works in the system. You can’t point back and say if we put in this prompt into the system, and it came up with this answer, this is the piece of copyright material which it used to do that. You can draw conclusions to say, well, this looks awfully similar to this sort of artist’s work, but it’s not a one-to-one relationship. And that makes it very difficult.”
While the EU AI Act may help with these problems, it won’t be the “panacea” that can fix everything, stressed Seeber. “We’re in an unprecedented position, because all sorts of IP and legal experts disagree about how you should treat generative AI and whether what it produces is truly a new work or whether it infringes on copyright,” he added.
Start preparing now
Most businesses will remember the misery of pre-GDPR preparation. Despite having a two-year grace period in which to comply, the regulation’s launch date caused chaos in industries like media and digital advertising as publishers, marketers, ad agencies and ad tech vendors all scrambled to ask consumers for consent to capture and use their data in order to continue running their services. (And they still haven’t nailed it: Meta got fined €1.2 billion – $1.3 billion – in May for breaching GDPR.)
Currently, 35% of businesses already use AI, and another 42% are considering implementing it in the near future, according to the latest stats. So given AI adoption’s relative nascency, the AI Act shouldn’t be the same nightmare GDPR represented for companies. But it makes sense not to put off preparing for it, even though enforcement is an estimated 18 months to two years away.
“The European Commission is bullish about getting the AI Act passed and has made some commitments about trying to get it done this year,” said Seeber. “Any company which is dependent on or has an AI element to its business model should be thinking about the implications of the AI Act now, and not necessarily making any hard and fast, irreversible decisions, because it’s not finalized. But equally, the direction of travel is pretty clear.”
Because of how quickly generative AI has taken off, and how companies are assessing what kind of AI policies they need, or how they should be using it, now is as good a time as any to start building that “AI asset registry,” added Wright. “What is there out there? What does recruitment have hidden away where someone’s procured it without anyone realising? Start to get that in order, because if you think about GDPR, that was such a mad rush to get a sense of people’s data flows and where all this data was going. But if you start putting the processes in now you should then be made aware as people adopt more AI in the business. Not only that – if you’re starting to make decisions around capital expenditure, then it’s worth looking at what the legislation is going to be in a couple of years to make sure you’re not investing in something the EU ultimately considers either banned or high risk.”
More transparency around the AI black box
One of the biggest issues is that no one truly understands how the models are trained, and that opacity is making the prospect of AI more threatening. “Right now many people just see it as a black box. And that’s really scary,” said Malin Buch, head of legal at Alva Labs, an AI-powered recruitment services provider that specializes in inclusive, bias-free recruitment.
While Alva Labs has been operating within its own strict ethical guidelines when it comes to recruitment, all businesses will benefit from having a blueprint of regulator-approved guidelines to work from, which should force more transparency around how many of these AI services work. “If we have the regulations on top of what we already have with our [Alva Labs’] validation studies, we can just add to the transparency and reliability. And I think that people might feel more secure about using AI tools, because there’s actually a framework surrounding that,” she added.
She also stressed that all businesses that went through the pain of becoming GDPR compliant will now have a strong foundation from which to comply with the EU AI Act.
“You could probably leverage a lot of the work that has been done within your GDPR and privacy work, and within the information security work that you have been doing, and just put the AI part into that: make sure that you document what processes you have in place, and put policies in place for how you use it internally and externally,” she said.
Wright agreed that those with a buttoned-up GDPR setup will be in a good position. “If you’ve put your governance framework and your principles in place, that will stand you in good stead, and also things like your data protection impact assessment – those tools will be able to be used when you’re making your own assessments under the [EU AI] act.”