Learning algorithms are used not only to search for suitable applicants, but also for the initial contact with them. In this article, we highlight the legal problems of using AI in the application process and outline possible solutions.
There is no single definition of AI. In the context of human resources, AI can be described as a learning algorithm: a rule that enables a computer to solve a class of problems. Such an algorithm can independently analyze existing data sets and recognize patterns through matching. AI is thus able to make predictions or recommendations on targets defined by a human. 'Robot recruiting' describes the automation of parts of the application process with the help of such an algorithm.
Robot recruiting can be applied in many ways and can be useful in several phases of the application process. Job advertisements can be optimized with the help of AI systems so that potential applicants can find them as easily as possible. In addition, chatbots can be helpful in collecting frequently asked questions. As a next step, chatbots can answer these questions or forward them to another employee to answer. The influence of AI becomes even more intensive when essential data from resumes or online profiles is filtered as part of so-called 'resumé parsing' in order to build up a database of applicants. Here, the quality of incoming applications can be analyzed against defined criteria, automatically checking who best fits the job profile.
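To make the resumé-parsing step concrete, the following is a minimal sketch of how such a system might extract fields from a CV and score it against defined job criteria. The criteria, weights, and the sample CV are purely hypothetical, and real parsers are far more sophisticated:

```python
import re

# Hypothetical job criteria and weights (illustrative only)
CRITERIA = {"python": 2, "sql": 1, "team lead": 1}

def parse_resume(text):
    # Crude field extraction: pull an email address and keep a
    # lowercased body for keyword matching.
    email = re.search(r"[\w.+-]+@[\w-]+\.[A-Za-z]{2,}", text)
    return {"email": email.group(0) if email else None,
            "body": text.lower()}

def score(parsed):
    # Sum the weights of all criteria found in the resumé text.
    return sum(w for kw, w in CRITERIA.items() if kw in parsed["body"])

cv = "Jane Doe, jane@example.com. Team lead, 5 years Python and SQL."
parsed = parse_resume(cv)
print(parsed["email"], score(parsed))  # jane@example.com 4
```

Ranking applicants by such a score is what allows the automatic check of "who best fits the job profile"; the legal questions below arise from what those criteria correlate with.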
Due to the broad scope of robot recruiting, the use of AI can make the application process significantly more efficient. At the same time, AI gives the impression of increased objectivity simply because, in certain phases, no decisions are made by humans, who may (even unconsciously) incorporate stereotypes and discriminatory tendencies. The AI itself does not discriminate. Nevertheless, discrimination can also occur when AI is used, with legal consequences arising from statutory prohibitions of discrimination.
The reasons for this can be manifold. AI decisions and predictions are made on the basis of group probabilities. In this context, it is the link to one of the characteristics listed in anti-discrimination law that creates the risk of discriminatory effects. In addition, the classifications are made on the basis of correlations that are not recognizable to an outsider.
It becomes particularly critical when the AI makes decisions based on data sets that have discriminatory tendencies. For example, if more men than women have been hired in an input data set, the algorithm will transfer this pattern to its future decisions. In this case, women would automatically be given less consideration in percentage terms.
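The mechanism described above can be illustrated with a deliberately naive sketch (the numbers are invented): a model that learns hiring rates from historical decisions will simply reproduce any imbalance contained in them.

```python
# Hypothetical historical decisions: (group, hired) pairs.
# 70% of men but only 40% of women were hired in the past.
history = ([("m", True)] * 70 + [("m", False)] * 30
           + [("f", True)] * 40 + [("f", False)] * 60)

def hire_rate(group):
    # A naive "model" scoring applicants by their group's past
    # hire rate carries the historical imbalance straight into
    # future rankings.
    decisions = [hired for g, hired in history if g == group]
    return sum(decisions) / len(decisions)

print(hire_rate("m"))  # 0.7
print(hire_rate("f"))  # 0.4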
Employers should pay particular attention to avoiding discrimination (and the accompanying legal risk) when using AI. Algorithms should be checked for discriminatory patterns as early as the development stage. This is primarily the responsibility of the developers of the AI. Nevertheless, the employer using the AI can also address this issue by conducting test runs before actually using it in the application process. In addition, it can be helpful to have HR and IT employees advised by anti-discrimination bodies in order to prevent discriminatory selection criteria from being built into the algorithm. Co-determination rights of employee representatives must also be taken into account. Finally, use of AI in hiring raises aspects of data protection law, in particular the GDPR’s limits on the processing of data in this context. If these principles are observed, the use of AI in application processes can save resources and make the process more effective.