Dr. Asia Biega, MPI Security and Privacy and Cluster of Excellence CASA, Ruhr University Bochum
Algorithms trained to support the selection of suitable applicants have become ubiquitous in HR departments across industries, where they are expected to increase the efficiency, accuracy, and fairness of decision making.
However, algorithm-based models should not be mistaken for objective decision makers. On the contrary, training AI on pre-existing data can reinforce selection bias, simply because human bias is replicated. Professor Asia Biega and Dr. Alessandro Fabris from the Max Planck Institute for Security and Privacy will trace where social bias hides in algorithmic models of selection and hiring, explain how such bias can be detected, and introduce good practices for responsibility and accountability in the use of algorithmic hiring.