The AI Act brings about substantial new obligations for both the developers and users of artificial intelligence. While the analogy is not precise, the Act can be seen as a type of ‘product safety’ legislation. As such, it leaves a wide range of topics to be dealt with in other EU and/or national laws, or by the parties involved in a specific transaction.
Although we can now rest assured that the AI Act will be adopted by the European Union, there is still work to be done. Most importantly, the Act needs to go through a final lawyer-linguist check and be formally endorsed by the Council. Once all this is done, it will enter into force on the 20th day following its publication in the Official Journal of the European Union.
As with the GDPR, the AI Act will not become applicable immediately. Instead, organisations are generally given 24 months to prepare for it. The AI Act will therefore likely become applicable in spring 2026, with some exceptions, most notably:

- the prohibitions on certain AI practices, which apply already six months after entry into force;
- the rules on general-purpose AI models, which apply after 12 months; and
- the obligations for high-risk AI systems embedded in products covered by existing EU product legislation, which apply only after 36 months.
Zooming in on the AI Act itself, here are some thoughts on what companies should be focusing on in 2024.
One of the legislators’ most challenging tasks was reaching consensus on which types of systems should be regulated as ‘artificial intelligence’. Although the definitions are broad, there are also some limitations to the Act’s scope of application.
The adopted text takes a risk-based approach to AI: the higher the risk, the more stringent the obligations. The AI Act even ended up banning certain AI systems altogether due to the unacceptable risk they are seen to pose to health, safety and fundamental rights. While low-risk systems are subject to rather lenient obligations which revolve primarily around transparency, high-risk systems must also comply with numerous other provisions regarding (among others):

- risk management;
- data and data governance;
- technical documentation and record-keeping;
- transparency and provision of information to deployers;
- human oversight; and
- accuracy, robustness and cybersecurity.
To comply with these new obligations, organisations need to assess their current level of compliance and perform a gap analysis to define a roadmap for meeting the new requirements. To do so, you must first know the risks involved in the AI you are using (or developing).
As mentioned above, regardless of what type of AI system you are using, you will most likely be subject to transparency obligations regarding your AI use. Therefore, prepare to communicate openly with your employees, customers and stakeholders on your use of AI technologies, and what effects such use has on those individuals.
You know what they say: ‘If it ain’t broke, don’t fix it’. Instead of creating completely new compliance processes for AI, it is often more efficient to adapt your existing ones. Data protection, procurement and data security processes in particular often form a sturdy foundation for the use of AI systems as well. When choosing the ‘building blocks’ of your internal AI compliance work, look especially to your policies, processes, training materials, training events, and monitoring and supervision activities to ensure they all take AI into account.
AI is often only as good as the data it relies on. To ensure your AI tools can be used to their full potential and that you stay compliant not only with the AI Act itself but also with all other applicable laws, it is essential to take an early look at what data you have at your disposal, evaluate the quality and content of that data, and ensure that you have sufficient rights to it.
Collaboration with your AI partners is key to ensuring compliance with the AI Act. In many situations, such collaboration is best supported by a clear contractual framework that sets out unambiguous obligations for each party.
The AI Act creates new business opportunities and calls for innovative new services in an EU-wide harmonised market. Are you up for the challenge?