This talk is based on the following article:
“We examine the effectiveness of human oversight in counteracting discriminatory outcomes in AI-aided decision making for sensitive tasks. We employ a mixed-methods approach, pairing a quantitative behavioural experiment with qualitative analyses through interviews and workshops. The experiment involved 1400 professionals in Italy and Germany making hiring or lending decisions with either fair or biased AI-generated recommendations. We find that human overseers exhibited algorithm aversion, whereby they followed AI advice in only 55% of cases. They were equally unlikely to follow advice from a fair AI as from a biased AI. Fair AI decreased bias by applicant gender, but not bias by nationality. Decision makers were more willing to endorse AI suggestions that aligned with their own discriminatory preferences, which mainly consisted in favouring candidates similar to themselves. Qualitative insights reveal the background to these decisions in terms of bias awareness, fairness perception, and ethical norms. In conclusion, human oversight does not appear to prevent AI discrimination in our experimental setting. A comprehensive approach, integrating fair AI programming, norms to guide oversight, and decision-maker diversity, is essential to ensure that AI-enhanced decision-making processes result in less discriminatory outcomes.”
Authors: Alexia Gaudeul, Ottla Arrigoni, Marianna Baggio, Vasiliki Charisi, Marina Escobar-Planas, Isabelle Hupont-Torres
Let's keep in touch!