Lund University Publications

Continuous Quality Assurance and ML Pipelines under the AI Act

Wagner, Matthias (2024). In Proceedings - 2024 IEEE/ACM 3rd International Conference on AI Engineering - Software Engineering for AI, CAIN 2024 (3rd International Conference on AI Engineering, CAIN 2024, co-located with the 46th International Conference on Software Engineering, ICSE 2024), pp. 247–249.
Abstract

More than ever, Machine Learning (ML) as a subfield of Artificial Intelligence (AI) is on the rise and is finding its way into safety-critical software applications. However, when it comes to quality assurance (QA) and trustworthiness, integrating ML models into software comes with challenges that may not be apparent at first glance. The European Union (EU) aims to tackle this problem with new regulatory requirements in the form of harmonized rules on AI (AI Act). It is a risk-based approach with extensive requirements for high-risk systems as well as for foundation models that can be used in various downstream AI systems. Reliable software engineering processes in the form of ML-enabled automated pipelines are likely to become a discerning factor for legally compliant ML systems. Our research project aims to contribute to the field by establishing an empirically grounded foundation on how to achieve trustworthy AI Act compliant ML systems. Both a literature review and an interview study are ongoing. At a later stage, concrete tools shall be developed, ideally in cooperation with an industry partner, possibly by utilizing the concept of regulatory sandboxes.

author: Wagner, Matthias
publishing date: 2024-04
type: Chapter in Book/Report/Conference proceeding
publication status: published
keywords: AI act, quality assurance, software engineering
host publication: Proceedings - 2024 IEEE/ACM 3rd International Conference on AI Engineering - Software Engineering for AI, CAIN 2024
series title: Proceedings - 2024 IEEE/ACM 3rd International Conference on AI Engineering - Software Engineering for AI, CAIN 2024
pages: 247–249 (3 pages)
publisher: Association for Computing Machinery (ACM)
conference name: 3rd International Conference on AI Engineering, CAIN 2024, co-located with the 46th International Conference on Software Engineering, ICSE 2024
conference location: Lisbon, Portugal
conference dates: 2024-04-14 to 2024-04-15
external identifiers: scopus:85196480508
ISBN: 9798400705915
DOI: 10.1145/3644815.3644973
language: English
LU publication?: yes
id: d1b6d2a3-dee2-47b6-87db-76098dcb50e4
date added to LUP: 2024-08-30 14:09:15
date last changed: 2024-08-30 14:09:36
@inproceedings{d1b6d2a3-dee2-47b6-87db-76098dcb50e4,
  abstract     = {{More than ever, Machine Learning (ML) as a subfield of Artificial Intelligence (AI) is on the rise and is finding its way into safety-critical software applications. However, when it comes to quality assurance (QA) and trustworthiness, integrating ML models into software comes with challenges that may not be apparent at first glance. The European Union (EU) aims to tackle this problem with new regulatory requirements in the form of harmonized rules on AI (AI Act). It is a risk-based approach with extensive requirements for high-risk systems as well as for foundation models that can be used in various downstream AI systems. Reliable software engineering processes in the form of ML-enabled automated pipelines are likely to become a discerning factor for legally compliant ML systems. Our research project aims to contribute to the field by establishing an empirically grounded foundation on how to achieve trustworthy AI Act compliant ML systems. Both a literature review and an interview study are ongoing. At a later stage, concrete tools shall be developed, ideally in cooperation with an industry partner, possibly by utilizing the concept of regulatory sandboxes.}},
  author       = {{Wagner, Matthias}},
  booktitle    = {{Proceedings - 2024 IEEE/ACM 3rd International Conference on AI Engineering - Software Engineering for AI, CAIN 2024}},
  isbn         = {{9798400705915}},
  keywords     = {{AI act; quality assurance; software engineering}},
  language     = {{eng}},
  month        = {{04}},
  pages        = {{247--249}},
  publisher    = {{Association for Computing Machinery (ACM)}},
  series       = {{Proceedings - 2024 IEEE/ACM 3rd International Conference on AI Engineering - Software Engineering for AI, CAIN 2024}},
  title        = {{Continuous Quality Assurance and ML Pipelines under the AI Act}},
  url          = {{http://dx.doi.org/10.1145/3644815.3644973}},
  doi          = {{10.1145/3644815.3644973}},
  year         = {{2024}},
}
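
The BibTeX record above can be dropped into a bibliography file and cited by its key. A minimal sketch, assuming the record is saved in a file named `references.bib` (the file name is illustrative; the citation key is taken verbatim from the entry):

```latex
% main.tex — cite the LUP record via its BibTeX key
\documentclass{article}
\begin{document}
Continuous quality assurance under the AI Act is
discussed by Wagner~\cite{d1b6d2a3-dee2-47b6-87db-76098dcb50e4}.
\bibliographystyle{plain}
\bibliography{references}  % references.bib holds the @inproceedings entry above
\end{document}
```

Running `pdflatex`, then `bibtex`, then `pdflatex` twice resolves the citation. Note that some styles lowercase the ISBN/DOI fields differently; biblatex with the `doi` option will render the DOI as a link automatically.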