Lund University Publications

AI alignment for ethical compliance and risk mitigation in industrial applications

Gupta, Rushali; Song, Qunying; Wagner, Matthias; Engström, Emelie; Söderberg, Emma; Borg, Markus and Runeson, Per (2025) In Lecture notes in computer science p. 20-35
Abstract
Context: AI technologies are increasingly embedded in products and software engineering processes of industrial IoT, autonomous systems, and cyber-physical systems. It is therefore essential to ensure alignment with safety, reliability, and ethical standards. However, practical software engineering methods for managing misalignment risks remain underdeveloped.

Objective: This study aims to explore industry awareness of misalignment risks and current practices for monitoring them within real-world software engineering contexts.

Method: We conducted seven interviews with industry professionals to examine perceptions of misalignment risks, gather insights into existing practices, and understand approaches to alignment across various industrial settings. Three recently proposed taxonomies guided our discussions: one on ethical guidelines for trustworthy AI published by the EU, another summarizing identified AI risks, and a third addressing “double-edged components” (aspects of AI systems that can simultaneously yield positive and negative effects).

Results: Our analysis identified common misalignment risks across these settings and revealed limited use of dedicated testing or monitoring for AI alignment. Most organizations rely on general oversight rather than specialized tools.

Conclusion: These findings highlight the need to develop tailored governance practices for alignment in industrial software engineering settings.
author
Gupta, Rushali; Song, Qunying; Wagner, Matthias; Engström, Emelie; Söderberg, Emma; Borg, Markus and Runeson, Per
organization
publishing date
2025-12
type
Chapter in Book/Report/Conference proceeding
publication status
published
subject
host publication
Product-Focused Software Process Improvement : 26th International Conference, PROFES 2025, Salerno, Italy, December 1–3, 2025, Proceedings
series title
Lecture notes in computer science
editor
Scanniello, Giuseppe; Lenarduzzi, Valentina; Romano, Simone; Vegas, Sira and Francese, Rita
issue
16361
pages
16 pages
publisher
Springer
external identifiers
  • scopus:105023328961
ISSN
0302-9743
ISBN
978-3-032-12089-2
978-3-032-12088-5
DOI
10.1007/978-3-032-12089-2_2
project
AI Alignment through Continuous Operational Testing
Next Generation Communication and Computational Infrastructures and Applications (NextG2Com)
language
English
LU publication?
yes
id
8f4d66bd-0438-4667-8081-773f101f73ca
date added to LUP
2025-12-09 11:03:00
date last changed
2025-12-12 03:46:48
@inproceedings{8f4d66bd-0438-4667-8081-773f101f73ca,
  abstract     = {{Context: AI technologies are increasingly embedded in products and software engineering processes of industrial IoT, autonomous systems, and cyber-physical systems. It is therefore essential to ensure alignment with safety, reliability, and ethical standards. However, practical software engineering methods for managing misalignment risks remain underdeveloped. <br/><br/>Objective: This study aims to explore industry awareness of misalignment risks and current practices for monitoring them within real-world software engineering contexts. <br/><br/>Method: We conducted seven interviews with industry professionals to examine perceptions of misalignment risks, gather insights into existing practices, and understand approaches to alignment across various industrial settings. Three recently proposed taxonomies guided our discussions: one on ethical guidelines for trustworthy AI published by the EU, another summarizing identified AI risks, and a third addressing “double-edged components” (aspects of AI systems that can simultaneously yield positive and negative effects). <br/><br/>Results: Our analysis identified common misalignment risks across these settings and revealed limited use of dedicated testing or monitoring for AI alignment. Most organizations rely on general oversight rather than specialized tools. <br/><br/>Conclusion: These findings highlight the need to develop tailored governance practices for alignment in industrial software engineering settings.}},
  author       = {{Gupta, Rushali and Song, Qunying and Wagner, Matthias and Engström, Emelie and Söderberg, Emma and Borg, Markus and Runeson, Per}},
  booktitle    = {{Product-Focused Software Process Improvement : 26th International Conference, PROFES 2025, Salerno, Italy, December 1–3, 2025, Proceedings}},
  editor       = {{Scanniello, Giuseppe and Lenarduzzi, Valentina and Romano, Simone and Vegas, Sira and Francese, Rita}},
  isbn         = {{978-3-032-12089-2}},
  issn         = {{0302-9743}},
  language     = {{eng}},
  month        = {{12}},
  number       = {{16361}},
  pages        = {{20--35}},
  publisher    = {{Springer}},
  series       = {{Lecture notes in computer science}},
  title        = {{AI alignment for ethical compliance and risk mitigation in industrial applications}},
  url          = {{http://dx.doi.org/10.1007/978-3-032-12089-2_2}},
  doi          = {{10.1007/978-3-032-12089-2_2}},
  year         = {{2025}},
}