Lund University Publications

LUND UNIVERSITY LIBRARIES

The Participation Paradox in the Politics of AI

Strange, Michael; Tucker, Jason Edward; Haynie-Lavelle, Jess and Munetsi, Dennis (2022)
Abstract
AI systems are increasingly being used to shift decisions made by humans over to automated systems, potentially limiting the space for democratic participation. The risk that AI erodes democracy is exacerbated where most people are excluded from the ownership and production of AI technologies that will impact them.

AI learns through datasets but, very often, that data excludes key parts of the population. Where marginalized groups are considered, datasets often contain derogatory terms, or exclude explanatory contextual information, that is hard to accurately categorise in a format that AI can process. Resulting biases within AI design raise concerns as to the quality and representativeness of AI-based decisions and their impact on society.

There is very little two-way communication between the developers and users of AI technologies, such that the latter function only as personal data providers. Being largely excluded from the development of AI’s role in human decision-making, everyday individuals may feel more marginalized and uninterested in building a healthy and sustainable society.

Yet, AI’s capacity for seeing patterns in big data provides new ways to reach parts of the population excluded from traditional policymaking. It can serve to identify structural discrimination and include information from those otherwise ignored in important decisions. AI could enhance public participation by both providing decision-makers with better data and helping to communicate complex decisions – and their consequences – to wider parts of the population.
author
publishing date
type
Book/Report
publication status
published
subject
keywords
Participation, Decision making, Artificial intelligence (AI), Democracy, Health
pages
2 pages
publisher
WASP-HS
project
Politics of AI & Health: From Snake Oil to Social Good - Funded by Wallenberg AI, Autonomous Systems and Software Program – Humanity and Society (WASP-HS)
language
English
LU publication?
no
additional info
The report details the main findings of a workshop organised by the authors with a range of experts. Roundtable experts included (please note that the text does not necessarily reflect the views of everyone listed): Malvika Sharan, The Turing Institute, UK; Pedro Sanches, Umeå University, Sweden; Sunny Dosanjh, Deloitte MCS Limited, UK; Aleks Berditchevskaia, NESTA, UK; Henrik Björklund, Umeå University, Sweden; Birgit Schippers, University of Strathclyde Glasgow, UK; Rachel Foley, DeepMind, UK/USA; Ratidzo Njagu, Kunashe Foundation, Zimbabwe.
id
f1b47ffe-0a00-4d3e-ba59-a34e5b5ce093
alternative location
https://wasp-hs.org/wp-content/uploads/2023/09/WASP-HS-CRM-Challenges-and-Opportunities-of-Regulating-AI-brief_29.08.2022-1.pdf
date added to LUP
2024-09-12 11:23:04
date last changed
2024-09-17 10:16:05
@techreport{f1b47ffe-0a00-4d3e-ba59-a34e5b5ce093,
  abstract     = {{AI systems are increasingly being used to shift decisions made by humans over to automated systems, potentially limiting the space for democratic participation. The risk that AI erodes democracy is exacerbated where most people are excluded from the ownership and production of AI technologies that will impact them.<br/><br/>AI learns through datasets but, very often, that data excludes key parts of the population. Where marginalized groups are considered, datasets often contain derogatory terms, or exclude explanatory contextual information, that is hard to accurately categorise in a format that AI can process. Resulting biases within AI design raise concerns as to the quality and representativeness of AI-based decisions and their impact on society.<br/><br/>There is very little two-way communication between the developers and users of AI technologies, such that the latter function only as personal data providers. Being largely excluded from the development of AI’s role in human decision-making, everyday individuals may feel more marginalized and uninterested in building a healthy and sustainable society.<br/><br/>Yet, AI’s capacity for seeing patterns in big data provides new ways to reach parts of the population excluded from traditional policymaking. It can serve to identify structural discrimination and include information from those otherwise ignored in important decisions. AI could enhance public participation by both providing decision-makers with better data and helping to communicate complex decisions – and their consequences – to wider parts of the population.}},
  author       = {{Strange, Michael and Tucker, Jason Edward and Haynie-Lavelle, Jess and Munetsi, Dennis}},
  institution  = {{WASP-HS}},
  keywords     = {{Participation; Decision making; Artificial intelligence (AI); Democracy; Health}},
  language     = {{eng}},
  title        = {{The Participation Paradox in the Politics of AI}},
  url          = {{https://lup.lub.lu.se/search/files/195029192/COMMUNITY_REFERENCE_MEETING-_CHALLENGES_AND_OPPORTUNITIES_OF_REGULATING_AI_REPORT_August_2022.pdf}},
  year         = {{2022}},
}