An AI system is defined under the EU’s AI Act as a “[…] machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”
Put very simply, AI uses algorithms to make probability-based predictions: from the data it has been trained on, it selects the most likely answer. That answer is not necessarily correct. It depends on the system’s programming, the data used for training and, in the case of self-learning AI, the rules the system has learned. An AI’s output must therefore always be verified; otherwise serious risks can arise. In this article, we explore what those risks look like for HR professionals, particularly regarding discrimination and biased decision-making, before setting out some key tips for employers.
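To make this concrete, here is a minimal, purely illustrative sketch in Python (using scikit-learn, with invented toy data) of what “selecting the most likely answer” means, and why that answer is not guaranteed to be correct:

```python
# Purely illustrative sketch (invented toy data): a classifier estimates
# probabilities and simply picks the most likely class, which is not
# guaranteed to be the correct answer.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical feature: years of relevant experience; label: past decision
X = np.array([[1], [2], [3], [6], [7], [8]])
y = np.array([0, 0, 0, 1, 1, 1])  # 1 = shortlisted, 0 = rejected

model = LogisticRegression().fit(X, y)

probs = model.predict_proba([[4]])[0]  # probability for each class
print(probs)           # e.g. roughly [0.6, 0.4]: the model is noticeably unsure
print(probs.argmax())  # ...yet it still outputs a single "most likely" answer
```

Even when the underlying probabilities show considerable uncertainty, the system still produces a single output, which is precisely why human verification remains essential.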
In principle, computer systems, including AI, do not discriminate: they start out neutral and impartial. However, if an AI is trained on biased data, its future outputs can reflect those biases. Discrimination can also creep in as early as the development phase: if certain biases or discriminatory views are overemphasised during programming, even unconsciously, they can distort the AI’s outputs.
This challenge is compounded by the complexity of AI algorithms, which sometimes operate as ‘black boxes’, meaning their decisions can be opaque and difficult to understand. As a result, identifying and correcting the source of discriminatory patterns in AI decision-making can be particularly difficult.
Discrimination in AI systems often arises from biased training data. For example, if an AI system is trained on applicant data in which applicants with German or European-sounding names were consistently preferred over applicants with Arabic-sounding names, the AI is likely to continue that pattern. The same applies to all other protected characteristics.
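The mechanism can be shown with a deliberately simplified sketch (all data and encodings below are invented for illustration): a model trained on biased past decisions reproduces that bias even for equally qualified applicants.

```python
# Deliberately simplified sketch with invented data: a model trained on
# biased historical hiring decisions learns and reproduces the bias.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Encoded applicants: [qualification_score, name_group]
# name_group (illustrative encoding): 0 = German/European-sounding name,
#                                     1 = Arabic-sounding name
X = np.array([
    [6, 0], [7, 0], [8, 0], [9, 0],  # group historically preferred
    [6, 1], [7, 1], [8, 1], [9, 1],  # group historically rejected
])
y = np.array([1, 1, 1, 1, 0, 0, 0, 0])  # biased past decisions

model = DecisionTreeClassifier(random_state=0).fit(X, y)

# Two equally qualified applicants who differ only in name group:
print(model.predict([[8, 0], [8, 1]]))  # -> [1 0]: the historical bias persists
```

Because qualification is identical in both test cases, the only thing the model can base its differing decisions on is the name group, exactly the pattern it absorbed from the biased history.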
In the HR context, these sources of error can lead to employees or applicants being discriminated against on various grounds when AI is used. Discrimination based on race, ethnic origin, gender, age, religion, ideology, sexual identity, or disability is prohibited in Germany and can give rise to claims for damages.
Depending on how they are used, AI systems can have far-reaching consequences, which is why some are classified as ‘high-risk’ under the AI Act. These include AI systems that make, or directly influence, the (pre-)selection of applicants for hiring, promotion, or termination.
If an AI system is classified as high-risk, the employer must fulfil various obligations. Employers, as ‘deployers’ (the AI Act’s term for those who use an AI system under their own authority), are generally subject to transparency obligations. If an AI system is developed in-house or adapted to operational requirements, the employer may also qualify as a ‘provider’ under the AI Act. Some of the additional obligations apply during development, while others are intended to enable the deployer to use the AI correctly.
Fulfilling the obligations under the AI Act at least reduces the risk of discrimination and helps employers better assess the potential issues.
Although the use of AI systems can create risks, AI can also be used to prevent or reduce discrimination. AI systems are particularly good at recognising and applying patterns. This strength can be used to review previous selection processes (i.e. hiring, promotions, and other decisions) for unconscious patterns that indicate discrimination. For example, job descriptions could be analysed to see whether they contain wording that favours certain groups of people, or decisions about promotions or pay increases could be examined to see whether women who work part-time are regularly disadvantaged. Once identified, such patterns can be acted on by the employer. The results could, for example, be used to design fairer hiring policies or wage systems that promote equal pay and thus reduce the gender pay gap, which, according to the Federal Statistical Office of Germany, still stood at 16% in 2024 (6% when adjusted).
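As a hedged illustration of such a pattern review, the following sketch assumes a hypothetical HR data extract with the columns 'gender', 'part_time', 'promoted' and 'salary' (all figures invented); it compares promotion rates across groups and computes an unadjusted pay gap in the style of the headline statistic:

```python
# Hedged sketch of a simple pattern review on a hypothetical HR extract.
# Column names and all figures are invented for illustration.
import pandas as pd

df = pd.DataFrame({
    "gender":    ["f", "f", "f", "m", "m", "m"],
    "part_time": [True, True, False, False, False, True],
    "promoted":  [0, 0, 1, 1, 1, 0],
    "salary":    [42000, 40000, 55000, 60000, 58000, 45000],
})

# Promotion rates by gender and working-time model: large gaps are a
# signal to investigate further, not proof of discrimination in themselves.
print(df.groupby(["gender", "part_time"])["promoted"].mean())

# Unadjusted pay gap in the style of the headline statistic:
# (average male pay - average female pay) / average male pay
avg = df.groupby("gender")["salary"].mean()
print(f"unadjusted pay gap: {(avg['m'] - avg['f']) / avg['m']:.1%}")  # ~16.0%
```

Such an analysis only flags statistical disparities; interpreting them, and deciding how to respond, remains a human task for HR and management.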
In addition to carefully selecting the right AI solution for the company and observing the legal framework in Germany, employers are well advised to pay close attention to how any new AI solution is incorporated into their workplace.
To minimise potential risks, it is important to get things right from the outset. For example, employers should examine whether their IT infrastructure provides an appropriate basis for the use of AI. Employees must also be onboarded in a timely manner to allay their fears and train them in the use of AI. If employees such as managers or HR personnel can correctly interpret the AI’s output, this human supervision further reduces the risk of errors, including potential discrimination. It is also very useful to integrate new AI systems into an overall digitalisation strategy.