In a world dominated by American tech giants, the largest legislative body sits in the EU, and it can enforce legislation inside and outside its member states. Who remembers that Zuckerberg threatened in 2022 to withdraw Instagram from Europe if the EU enforced the GDPR, the General Data Protection Regulation? Well, he didn't, did he? He just, very quietly, submitted to the GDPR. The same will happen with the EU AI-act: this legislation will have a global impact on all new and existing AI applications.
Is the AI-act enforceable legislation or just a set of recommendations?
In 2012 the EU set up a commission to investigate the regulation of AI. This has resulted in the current proposal for the AI-act, which the European Parliament's committees approved in May 2023. The next step is approval of the act by all member states. The AI-act is expected to become enforceable in 2026. Until then, the rules in the act are recommendations and sound practice for developers and business owners. That is, if they want to be able to operate their business in the EU from 2026 onwards.
Risk levels for AI-applications
AI can be both harmful and beneficial, just like fire, knives or cars. It is the way it is used that determines the level of risk. With fire you can light a candle or cook a delicious meal, but you should not toss a burning cigarette butt onto dry forest leaves. Likewise, AI can spread misinformation, but it can also detect abnormalities on medical scans far better than humans can.
In the new legislation, AI applications are divided into four risk levels. The higher the risk, the stricter the rules. The risk levels are:
level 1 – minimal or no risk
Examples are spam filters and AI in games. No additional rules apply, apart from existing legislation such as copyright law.
level 2 – limited risk
Examples are customer service chatbots. Only a basic level of transparency is required: it must be clear that AI is being used. For the developer, this can be as simple as adding a pop-up or notice telling the customer that they are interacting with AI, or that the content is AI-generated, as in the sketch below.
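As a rough illustration of how simple such a notice can be, here is a minimal sketch in TypeScript, assuming a web-based chat widget. The element id, the wording and the function name are hypothetical examples, not something prescribed by the AI-act itself.

```typescript
// Minimal sketch of an AI-disclosure notice for a web chat widget.
// The element id, the wording and the function name are illustrative
// assumptions; the point is only that the user is clearly told
// that AI is involved.

function showAiDisclosure(containerId: string): void {
  const container = document.getElementById(containerId);
  if (!container) {
    return; // nothing to do if the widget is not on the page
  }

  const notice = document.createElement("p");
  notice.setAttribute("role", "status"); // announced by screen readers
  notice.textContent =
    "You are chatting with an AI assistant. Replies are generated automatically.";

  // Show the notice before the first message is exchanged.
  container.prepend(notice);
}

// Example: call this once, when the chat widget is opened.
showAiDisclosure("chat-widget");
```

A plain, always-visible notice like this is usually preferable to a dismissible pop-up, because the disclosure then remains visible for the whole conversation.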
level 3 – high risk
High-risk applications are programs that can harm the health, well-being or safety of people or the environment. This level encompasses programs for education, healthcare or transportation, but also for critical infrastructure such as water supply, energy or the internet. A high level of transparency is therefore required: Where in the application is AI used? On what data is the AI trained, and how does it behave? Are any biometric data used?
level 4 – unacceptable risk
The risk of level 4 applications is unacceptably high and a danger to the freedom and self-expression of EU citizens. Applications for social scoring or automated decision-making about people are considered especially unsafe. From 2026 onwards, all level 4 AI applications will therefore be strictly forbidden in the EU. This legislation also gives us the tools to stop AI software that threatens the EU from outside our borders.
Steps to prepare for the AI-act
Private individuals will likely not be affected, but companies will. Don't wait until the last moment; start getting ready for 2026 right now. Take the following steps:
1. Understand the AI-act
Familiarize yourself with the AI-act and get a good grip on the different risk levels.
2. Take stock of all your programs
Make an inventory of all programs that you use professionally. This is the perfect time to update documentation and workflows.
3. Risk assessment
Use that documentation and those workflows to determine which programs use AI and into which risk category each application falls (see the sketch after these steps).
4. Don’t forget human oversight
Make someone responsible for overseeing AI in the company and for periodic inspections. This oversight should be carried out by a human, not by AI.
5. Communication and transparency
Set up rules and guidelines for communicating about the company's use of AI to staff, customers and auditors.
6. Train your staff
Make your staff familiar with AI in general, with the AI-act, and with their own tasks where AI is involved.
7. Stay informed or miss out
Lawyers of all 27 EU member states will now carefully check the proposed AI-act. Chances are the text will be tweaked a bit, but don't expect any major changes: the overall framework looks very solid. It is important, though, to stay informed: AI will bring new challenges, but also new opportunities.
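As mentioned under step 3, here is a minimal sketch in TypeScript of what such an inventory could look like. The risk labels mirror the four levels described above; the field names, the example entries and the vendor names are illustrative assumptions, not part of the AI-act.

```typescript
// Minimal sketch of an AI inventory for steps 2 and 3.
// The risk labels follow the four levels described above; the field
// names and example entries are illustrative assumptions.

type RiskLevel = "minimal" | "limited" | "high" | "unacceptable";

interface AiApplication {
  name: string;
  vendor: string;
  usesAi: boolean;
  riskLevel: RiskLevel;
  owner: string;        // who is responsible for oversight (step 4)
  lastReviewed: string; // date of the last periodic inspection
}

const inventory: AiApplication[] = [
  { name: "Spam filter", vendor: "ExampleMail", usesAi: true, riskLevel: "minimal", owner: "IT", lastReviewed: "2025-01-15" },
  { name: "Support chatbot", vendor: "ExampleChat", usesAi: true, riskLevel: "limited", owner: "Customer care", lastReviewed: "2025-01-15" },
];

// Applications that need the strictest documentation and transparency.
const highRisk = inventory.filter((app) => app.riskLevel === "high");
console.log(`${highRisk.length} high-risk application(s) need extra attention.`);
```

Even a simple, structured list like this makes the later steps, such as oversight, communication and training, much easier to organize.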
Here at Textmetrics, we always stay informed about whatever might concern our customers. Check our blogs to stay informed too!