Trust calibration in IDEs : paving the way for widespread adoption of AI refactoring
(2025) p.37-41
- Abstract
- In the software industry, the drive to add new features often overshadows the need to improve existing code. Large Language Models (LLMs) offer a new approach to improving codebases at an unprecedented scale through AI-assisted refactoring. However, LLMs come with inherent risks such as breaking changes and the introduction of security vulnerabilities. We advocate for encapsulating the interaction with the models in IDEs and validating refactoring attempts using trustworthy safeguards. Yet research on trust development is equally important for the uptake of AI refactoring. In this position paper, we ground our future work in established models from research on human factors in automation. We outline action research within CodeScene on the development of 1) novel LLM safeguards and 2) user interaction that conveys an appropriate level of trust. The industry collaboration enables large-scale repository analysis and A/B testing to continuously guide the design of our research interventions.
Please use this URL to cite or link to this publication:
https://lup.lub.lu.se/record/378e5130-c654-473e-afb7-d6fcc82ef847
- author
- Borg, Markus LU
- publishing date
- 2025-07-01
- type
- Chapter in Book/Report/Conference proceeding
- publication status
- published
- host publication
- 2025 IEEE/ACM Second IDE Workshop (IDE) : Proceedings
- editor
- O’Conner, Lisa
- pages
- 5 pages
- external identifiers
- scopus:105011086624
- ISBN
- 979-8-3315-0188-4
- DOI
- 10.1109/IDE66625.2025.00012
- language
- English
- LU publication?
- yes
- id
- 378e5130-c654-473e-afb7-d6fcc82ef847
- alternative location
- https://arxiv.org/abs/2412.15948
- https://doi.ieeecomputersociety.org/10.1109/IDE66625.2025.00012
- date added to LUP
- 2025-08-18 13:21:51
- date last changed
- 2025-10-14 11:18:16
@inproceedings{378e5130-c654-473e-afb7-d6fcc82ef847,
  abstract  = {{In the software industry, the drive to add new features often overshadows the need to improve existing code. Large Language Models (LLMs) offer a new approach to improving codebases at an unprecedented scale through AI-assisted refactoring. However, LLMs come with inherent risks such as breaking changes and the introduction of security vulnerabilities. We advocate for encapsulating the interaction with the models in IDEs and validating refactoring attempts using trustworthy safeguards. Yet research on trust development is equally important for the uptake of AI refactoring. In this position paper, we ground our future work in established models from research on human factors in automation. We outline action research within CodeScene on the development of 1) novel LLM safeguards and 2) user interaction that conveys an appropriate level of trust. The industry collaboration enables large-scale repository analysis and A/B testing to continuously guide the design of our research interventions.}},
  author    = {{Borg, Markus}},
  booktitle = {{2025 IEEE/ACM Second IDE Workshop (IDE) : Proceedings}},
  editor    = {{O’Conner, Lisa}},
  isbn      = {{979-8-3315-0188-4}},
  language  = {{eng}},
  month     = {{07}},
  pages     = {{37--41}},
  title     = {{Trust calibration in IDEs : paving the way for widespread adoption of AI refactoring}},
  url       = {{http://dx.doi.org/10.1109/IDE66625.2025.00012}},
  doi       = {{10.1109/IDE66625.2025.00012}},
  year      = {{2025}},
}