On 12 May, the EU Commission published a catalogue of questions and answers (FAQ) on the practical implementation of Article 4 of the new EU AI Act. This provision, which has been in force since 2 February, requires all EU businesses using any kind of AI service to ensure that their employees have a sufficient level of ‘AI competence’.
Under the AI Act, a business that uses an AI service is considered a ‘deployer’ of an AI system, making it subject to the competence requirement. The Commission now recommends a risk-based and differentiated approach: the scope of AI competence training should be based, for example, on the different roles of the employees and the risks of the systems they use. As a minimum, employees should be taught a basic understanding of how AI works, the classification of risks posed by AI systems, and their individual roles with respect to AI compliance. Concerning AI risks, users should be informed about the tendency of tools such as ChatGPT to ‘hallucinate’ their responses, i.e. to provide incorrect information.
Relevance for businesses: Any EU business using AI services needs to familiarise itself with the AI competence requirements under the AI Act and develop and implement an appropriate training scheme. It should be noted that the national AI authorities will commence their supervisory activities on 2 August 2026. ePrivacy offers various training formats, including a certificate for businesses to document their teams’ AI competence.
One open question concerns the sanctions that would be imposed for violations of the competence requirement: neither the fines catalogue in the AI Act itself nor the existing implementing laws provide any indication on this point.
(Dr. Lukas Mezger, UNVERZAGT Rechtsanwälte)