Predictive fairness
(2022) International Association for Computing and Philosophy. In Philosophical Studies Series 143, pp. 141-161.
- Abstract
- It has recently been argued that in normal decision circumstances no systematic decision method that predicts the likelihood that individuals possess some property can be fair. Either (i) the decision method correctly identifies the relevant property (e.g. recidivism) more often in one subgroup (e.g. black defendants) than another (e.g. white defendants); or (ii) the decision method systematically ascribes higher probabilities to individuals who have and/or individuals who lack the property in one group compared to the probabilities ascribed to individuals who have and/or individuals who lack the property in another group. Otherwise put, these decision methods seem inherently, and unavoidably, unfair. Besides introducing this problem to the philosophical community, this paper explores different possible responses to the problem and presents three principles that should be universally applied to promote fairness: (1) Dominance: a decision method that is better with respect to one of the dimensions of fairness and worse with respect to none is better overall; (2) Transparency: decision-makers who use these decision methods should be aware of the unintended differences in impact and also be transparent to the affected community about these unintended differences; and (3) Priority to the worse off: a decision method that is relatively better for members of a worse-off subgroup is preferable to a method that is relatively better for members of a better-off subgroup.
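The tension between (i) and (ii) can be made concrete with a few lines of arithmetic. The sketch below is not taken from the chapter; it is a minimal illustration using the confusion-matrix identity popularized by Chouldechova (2017), which implies that when two groups have different base rates, a predictor with equal positive predictive value (PPV) and equal false negative rate (FNR) in both groups must have unequal false positive rates (FPR). The group labels and all numbers are hypothetical.

```python
# Illustrative sketch (not from the chapter): the confusion-matrix identity
#
#   FPR = base_rate / (1 - base_rate) * (1 - PPV) / PPV * (1 - FNR)
#
# forces the false positive rate apart between two groups that share the
# same PPV and FNR but have different base rates. All numbers hypothetical.

def implied_fpr(base_rate: float, ppv: float, fnr: float) -> float:
    """False positive rate implied by a group's base rate, PPV and FNR."""
    return base_rate / (1 - base_rate) * (1 - ppv) / ppv * (1 - fnr)

ppv, fnr = 0.7, 0.3  # held equal across both groups
for group, base_rate in [("A", 0.5), ("B", 0.3)]:
    print(f"group {group}: base rate {base_rate:.0%} "
          f"-> FPR {implied_fpr(base_rate, ppv, fnr):.1%}")

# Output:
# group A: base rate 50% -> FPR 30.0%
# group B: base rate 30% -> FPR 12.9%
```

Equalizing the FPRs instead would force the PPVs or FNRs apart, which is exactly the dilemma between (i) and (ii) that the abstract describes.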
Please use this URL to cite or link to this publication:
https://lup.lub.lu.se/record/a2b7e3d9-3b7c-4cf0-8bc7-d67ae2140c24
- author
- Herlitz, Anders
- publishing date
- 2022
- type
- Chapter in Book/Report/Conference proceeding
- publication status
- published
- subject
- keywords
- Algorithmic fairness, Predictive fairness, Justice, Bias, Priority to the worse off, Impossibility theorem
- host publication
- Philosophy of Computing: Themes from IACAP 2019
- series title
- Philosophical Studies Series
- editor
- Lundgren, Björn and Nuñez Hernández, Nancy Abigail
- volume
- 143
- pages
- 141 - 161 (21 pages)
- publisher
- Springer
- conference name
- International Association for Computing and Philosophy
- conference location
- Mexico City, Mexico
- conference dates
- 2019-06-05 - 2019-06-07
- external identifiers
- scopus:85132655190
- ISSN
- 0921-8599
- 2542-8349
- ISBN
- 978-3-030-75266-8
- 978-3-030-75267-5
- DOI
- 10.1007/978-3-030-75267-5_5
- language
- English
- LU publication?
- no
- id
- a2b7e3d9-3b7c-4cf0-8bc7-d67ae2140c24
- date added to LUP
- 2023-10-27 10:17:52
- date last changed
- 2024-11-02 16:06:04
@inbook{a2b7e3d9-3b7c-4cf0-8bc7-d67ae2140c24,
  abstract  = {{It has recently been argued that in normal decision circumstances no systematic decision method that predicts the likelihood that individuals possess some property can be fair. Either (i) the decision method correctly identifies the relevant property (e.g. recidivism) more often in one subgroup (e.g. black defendants) than another (e.g. white defendants); or (ii) the decision method systematically ascribes higher probabilities to individuals who have and/or individuals who lack the property in one group compared to the probabilities ascribed to individuals who have and/or individuals who lack the property in another group. Otherwise put, these decision methods seem inherently, and unavoidably, unfair. Besides introducing this problem to the philosophical community, this paper explores different possible responses to the problem and presents three principles that should be universally applied to promote fairness: (1) Dominance: a decision method that is better with respect to one of the dimensions of fairness and worse with respect to none is better overall; (2) Transparency: decision-makers who use these decision methods should be aware of the unintended differences in impact and also be transparent to the affected community about these unintended differences; and (3) Priority to the worse off: a decision method that is relatively better for members of a worse-off subgroup is preferable to a method that is relatively better for members of a better-off subgroup.}},
  author    = {{Herlitz, Anders}},
  booktitle = {{Philosophy of Computing : Themes from IACAP 2019}},
  editor    = {{Lundgren, Björn and Nuñez Hernández, Nancy Abigail}},
  isbn      = {{978-3-030-75266-8}},
  issn      = {{0921-8599}},
  keywords  = {{Algorithmic fairness; Predictive fairness; Justice; Bias; Priority to the worse off; Impossibility theorem}},
  language  = {{eng}},
  pages     = {{141--161}},
  publisher = {{Springer}},
  series    = {{Philosophical Studies Series}},
  title     = {{Predictive fairness}},
  url       = {{http://dx.doi.org/10.1007/978-3-030-75267-5_5}},
  doi       = {{10.1007/978-3-030-75267-5_5}},
  volume    = {{143}},
  year      = {{2022}},
}