
Lund University Publications


The Artificial Recruiter: Risks of Discrimination in Employers’ Use of AI and Automated Decision-Making

Larsson, Stefan LU; White, James LU and Ingram Bogusz, Claire (2024) In Social Inclusion 12. p. 1–18
Abstract
Extant literature points to how the risk of discrimination is intrinsic to AI systems owing to the dependence on training data and the difficulty of post hoc algorithmic auditing. Transparency and auditability limitations are problematic both for companies’ prevention efforts and for government oversight, both in terms of how artificial intelligence (AI) systems function and how large-scale digital platforms support recruitment processes. This article explores the risks and users’ understandings of discrimination when using AI and automated decision-making (ADM) in worker recruitment. We rely on data in the form of 110 completed questionnaires with representatives from 10 of the 50 largest recruitment agencies in Sweden and representatives from 100 Swedish companies with more than 100 employees (“major employers”). In this study, we made use of an open definition of AI to accommodate differences in knowledge and opinion around how AI and ADM are understood by the respondents. The study shows a significant difference between direct and indirect AI and ADM use, which has implications for recruiters’ awareness of the potential for bias or discrimination in recruitment. All of those surveyed made use of large digital platforms like Facebook and LinkedIn for their recruitment, leading to concerns around transparency and accountability—not least because most respondents did not explicitly consider this to be AI or ADM use. We discuss the implications of direct and indirect use in recruitment in Sweden, primarily in terms of transparency and the allocation of accountability for bias and discrimination during recruitment processes.
Please use this URL to cite or link to this publication:
author
Larsson, Stefan LU; White, James LU and Ingram Bogusz, Claire
organization
publishing date
2024-04
type
Contribution to journal
publication status
published
subject
keywords
AI and risks of discrimination, ADM and risks of discrimination, AI and accountability, AI and transparency, AI platforms and discrimination, discrimination in recruitment, automated decision-making, indirect AI use
in
Social Inclusion
volume
12
article number
7471
pages
18 pages
publisher
Cogitatio
ISSN
2183-2803
DOI
10.17645/si.7471
project
The Automated Administration: Governance of ADM in the public sector
Mapping risks of discrimination in employers' AI use
AI Transparency and Consumer Trust
language
English
LU publication?
yes
id
0e7c3b93-9f73-4cc5-8258-4a61c3ed157e
date added to LUP
2024-02-27 10:27:54
date last changed
2024-04-19 08:57:37
@article{0e7c3b93-9f73-4cc5-8258-4a61c3ed157e,
  abstract     = {{Extant literature points to how the risk of discrimination is intrinsic to AI systems owing to the dependence on training data and the difficulty of post hoc algorithmic auditing. Transparency and auditability limitations are problematic both for companies’ prevention efforts and for government oversight, both in terms of how artificial intelligence (AI) systems function and how large-scale digital platforms support recruitment processes. This article explores the risks and users’ understandings of discrimination when using AI and automated decision-making (ADM) in worker recruitment. We rely on data in the form of 110 completed questionnaires with representatives from 10 of the 50 largest recruitment agencies in Sweden and representatives from 100 Swedish companies with more than 100 employees (“major employers”). In this study, we made use of an open definition of AI to accommodate differences in knowledge and opinion around how AI and ADM are understood by the respondents. The study shows a significant difference between direct and indirect AI and ADM use, which has implications for recruiters’ awareness of the potential for bias or discrimination in recruitment. All of those surveyed made use of large digital platforms like Facebook and LinkedIn for their recruitment, leading to concerns around transparency and accountability—not least because most respondents did not explicitly consider this to be AI or ADM use. We discuss the implications of direct and indirect use in recruitment in Sweden, primarily in terms of transparency and the allocation of accountability for bias and discrimination during recruitment processes.}},
  author       = {{Larsson, Stefan and White, James and Ingram Bogusz, Claire}},
  issn         = {{2183-2803}},
  keywords     = {{AI and risks of discrimination; ADM and risks of discrimination; AI and accountability; AI and transparency; AI platforms and discrimination; discrimination in recruitment; automated decision-making; indirect AI use}},
  language     = {{eng}},
  month        = {{04}},
  pages        = {{1--18}},
  publisher    = {{Cogitatio}},
  series       = {{Social Inclusion}},
  title        = {{The Artificial Recruiter: Risks of Discrimination in Employers’ Use of AI and Automated Decision-Making}},
  url          = {{http://dx.doi.org/10.17645/si.7471}},
  doi          = {{10.17645/si.7471}},
  volume       = {{12}},
  year         = {{2024}},
}