The AI Regulation classifies AI systems according to their risk potential. AI systems used in recruiting generally fall into the category of ‘high-risk AI systems’ when they are used for the recruitment or selection of candidates. This includes, in particular, the placement of targeted job advertisements, the screening or filtering of applications, and the assessment of candidates.
Exemptions: When does AI in HR not fall under high risk?
The AI Regulation provides exemptions where there are no significant risks to health, safety or fundamental rights and one of the following conditions applies to the AI system:
- A narrowly defined procedural task is performed,
- The result of a previously completed human activity is improved,
- Decision-making patterns or deviations from them are detected, without a previously completed human assessment being replaced or influenced without proper human review, or
- A preparatory task for an assessment is performed.
Notwithstanding these exemptions, an AI system is always considered high-risk if it involves the profiling of natural persons.
The use of AI in the creation of job advertisements is therefore unlikely to fall under the high-risk category; however, the pre-selection of applicants based on personal characteristics generally does.
It is also important to check whether the planned AI recruiting system falls under the prohibited practices of the AI Regulation. These include, for example, AI-supported social scoring and systems that analyse applicants’ emotions. Such applications are prohibited under the AI Regulation and must not be used.
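For orientation, here is a minimal Python sketch that pulls the classification logic described above into one place. It is purely illustrative: the class, its fields and the conditions are simplified assumptions rather than the Regulation's legal definitions, and it is no substitute for a legal assessment of the individual case.

```python
from dataclasses import dataclass

@dataclass
class RecruitingAISystem:
    """Hypothetical description of a planned AI recruiting system."""
    performs_narrow_procedural_task: bool = False   # exemption condition 1
    improves_prior_human_result: bool = False       # exemption condition 2
    detects_decision_patterns_only: bool = False    # exemption condition 3
    prepares_an_assessment: bool = False            # exemption condition 4
    profiles_natural_persons: bool = False          # profiling override
    uses_social_scoring: bool = False               # prohibited practice
    analyses_emotions: bool = False                 # prohibited practice

def classify(system: RecruitingAISystem) -> str:
    # Prohibited practices rule out any use of the system.
    if system.uses_social_scoring or system.analyses_emotions:
        return "prohibited"
    # Profiling of natural persons always leads to a high-risk classification,
    # regardless of the exemptions.
    if system.profiles_natural_persons:
        return "high-risk"
    # One exemption condition is sufficient, provided there is no significant
    # risk to health, safety or fundamental rights (assumed here).
    exempt = (
        system.performs_narrow_procedural_task
        or system.improves_prior_human_result
        or system.detects_decision_patterns_only
        or system.prepares_an_assessment
    )
    return "not high-risk (exemption)" if exempt else "high-risk"

# Drafting job advertisements vs. pre-selection based on personal characteristics:
print(classify(RecruitingAISystem(performs_narrow_procedural_task=True)))  # not high-risk (exemption)
print(classify(RecruitingAISystem(profiles_natural_persons=True)))         # high-risk
```

The order of the checks mirrors the text above: prohibited practices exclude use entirely, profiling always triggers the high-risk classification, and only then can an exemption apply.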
Deployer obligations for high-risk AI systems under the AI Regulation
A company that uses AI recruiting acts as the deployer of a high-risk AI system and must meet the following obligations:
A. Control obligations
- Use in accordance with the instructions for use: The AI must be used in accordance with the instructions for use provided by the provider.
- Human oversight: A qualified person must monitor and control the AI system and be able to intervene in the event of malfunctions (see the sketch after this list).
- Data management: The input data must be fit for purpose, complete and representative.
- Operational monitoring: Ongoing operations must be continuously monitored to identify risks at an early stage.
- Suspension of operation in the event of risk: Operation must be suspended if a significant risk is suspected.
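The sketch referenced in the list above shows, under purely hypothetical names and data, how a deployer might gate AI pre-selection suggestions behind a qualified human reviewer and suspend operation when a significant risk is suspected. It is a sketch of one possible setup, not a prescribed implementation.

```python
from dataclasses import dataclass

@dataclass
class CandidateScore:
    candidate_id: str
    ai_score: float     # output of the (hypothetical) recruiting model
    explanation: str    # rationale shown to the human reviewer

class RecruitingReview:
    """Human-in-the-loop gate: no AI suggestion becomes a decision without
    review, and operation can be suspended when a risk is suspected."""

    def __init__(self) -> None:
        self.suspended = False

    def suspend(self, reason: str) -> None:
        # Suspension of operation in the event of a suspected significant risk.
        self.suspended = True
        print(f"AI pre-selection suspended: {reason}")

    def decide(self, score: CandidateScore, reviewer_approves: bool) -> str:
        if self.suspended:
            return f"{score.candidate_id}: manual process only (AI suspended)"
        # The AI output is only a recommendation; the qualified reviewer decides.
        if reviewer_approves:
            return f"{score.candidate_id}: shortlisted after human review"
        return f"{score.candidate_id}: AI suggestion overridden by reviewer"

review = RecruitingReview()
print(review.decide(CandidateScore("A-17", 0.91, "matches required skills"), reviewer_approves=True))
review.suspend("systematic score gap between applicant groups found during monitoring")
print(review.decide(CandidateScore("A-18", 0.87, "matches required skills"), reviewer_approves=True))
```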
B. Information obligations
- Towards the provider: Reporting of safety-related observations.
- Towards employees: Transparent information regarding the use of AI in the recruitment process.
- Towards applicants: Information regarding the use of AI in the selection process.
- In the event of incidents: Providers, distributors and authorities must be notified of serious risks or malfunctions.
- Cooperation with authorities: Deployers must actively cooperate with supervisory authorities.
C. Documentation requirements
- Retention of the logs automatically generated by the AI system for at least six months (see the sketch after this list).
- Where applicable, a fundamental rights impact assessment (FRIA): Certain groups of deployers must carry out a FRIA and notify the market surveillance authority of the results.
- Registration in the EU database, where applicable: relevant only for deployers that are public authorities or Union institutions, bodies, offices or agencies.
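The sketch referenced under the log-retention point above illustrates one simple way to keep the automatically generated logs for longer than the six-month minimum: entries are appended with a timestamp, and only entries older than a configurable retention period are purged. The file name, fields and retention value are assumptions chosen for illustration.

```python
import json
from datetime import datetime, timedelta, timezone
from pathlib import Path

LOG_FILE = Path("ai_recruiting_logs.jsonl")   # hypothetical log location
RETENTION = timedelta(days=270)               # ~9 months, above the 6-month minimum

def log_event(event: dict) -> None:
    """Append an automatically generated log entry with a UTC timestamp."""
    entry = {"timestamp": datetime.now(timezone.utc).isoformat(), **event}
    with LOG_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

def purge_expired() -> None:
    """Remove only entries older than the retention period; newer logs are kept."""
    if not LOG_FILE.exists():
        return
    cutoff = datetime.now(timezone.utc) - RETENTION
    kept = [
        line for line in LOG_FILE.read_text(encoding="utf-8").splitlines()
        if datetime.fromisoformat(json.loads(line)["timestamp"]) >= cutoff
    ]
    LOG_FILE.write_text("\n".join(kept) + ("\n" if kept else ""), encoding="utf-8")

log_event({"system": "cv-screening", "candidate_id": "A-17", "ai_score": 0.91})
purge_expired()
```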
In addition to the AI Regulation, the GDPR must also be complied with when using AI in recruitment, particularly regarding legal bases, information obligations, automated decision-making and the conduct of a data protection impact assessment. Before AI is used in recruitment, it should therefore be carefully assessed whether the planned system complies with the requirements of the AI Regulation. This is the only way to avoid legal risks and ensure compliance with data protection regulations.