Lund University Publications

Resilient automatic model selection for mobility prediction

Al Atiiq, Syafiq LU ; Gehrmann, Christian LU ; Khalil, Karim LU ; Sternby, Jakob and Yuan, Yachao (2025) In Cluster Computing 28(16).
Abstract

To avoid extensive model selection and optimization work, Automated Machine Learning (AutoML) has emerged as a practical and efficient way to apply machine learning across many application areas. Data poisoning is a real threat to the accuracy of machine learning models, and recent studies have shown that AutoML can be even more sensitive to data poisoning than non-AutoML models. On the other hand, AutoML also has the potential to improve a model's robustness by adapting the model to adversarial patterns, so that good accuracy is maintained despite an attacker's efforts to poison the data. However, no previous studies have investigated these effects. In this paper, we examine the risks associated with adversarial trajectory attacks in mobile systems, focusing on mobility prediction problems. Using mobility data from two simulation frameworks, a simulator developed by Ericsson based on a real-world deployment of Airtel's open-network topology, and the ONE framework, we investigate three AutoML frameworks and how their mobility-prediction accuracy is affected by a mobile trajectory attack. Our results show that re-running AutoML at every retraining is vulnerable to adversarial mobility poisoning and exhibits high accuracy variance. By contrast, a single, well-chosen model from an initial AutoML search achieves more stable performance across adversarial conditions, even when the training set includes up to 10% adversarial mobility data.

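The abstract's central contrast, re-searching for a model at every retraining versus keeping one fixed model chosen by an initial AutoML search, can be illustrated with a toy sketch. Note that this is not the paper's experimental setup: the five-cell ring topology, the "always move backwards" poisoning rule, and the majority-vote next-cell predictor are all invented here purely to show how a small poisoned fraction of trajectories affects a simple mobility predictor.

```python
import random
from collections import Counter, defaultdict

random.seed(0)

CELLS = list(range(5))  # toy cell IDs in a hypothetical 5-cell ring topology


def clean_step(c):
    # benign users mostly hand over to the next cell in the ring
    return (c + 1) % 5 if random.random() < 0.9 else random.choice(CELLS)


def poisoned_step(c):
    # adversarial trajectories always move "backwards" in the ring
    return (c - 1) % 5


def make_dataset(n_traj, poison_frac):
    """Generate (current_cell, next_cell) pairs; a poison_frac share of
    trajectories follows the adversarial movement pattern."""
    data = []
    for _ in range(n_traj):
        step = poisoned_step if random.random() < poison_frac else clean_step
        c = random.choice(CELLS)
        for _ in range(20):
            nxt = step(c)
            data.append((c, nxt))
            c = nxt
    return data


def train(data):
    # majority-vote next-cell predictor: per cell, predict the most
    # frequently observed successor in the training data
    counts = defaultdict(Counter)
    for c, nxt in data:
        counts[c][nxt] += 1
    return {c: cnt.most_common(1)[0][0] for c, cnt in counts.items()}


def accuracy(model, data):
    return sum(model.get(c, c) == nxt for c, nxt in data) / len(data)


clean_test = make_dataset(200, 0.0)  # clean held-out trajectories
for frac in (0.0, 0.1, 0.3):
    model = train(make_dataset(200, frac))
    print(f"poison={frac:.0%}  clean-accuracy={accuracy(model, clean_test):.2f}")
```

In this toy setting the majority-vote rule stays stable under 10% poisoning because the benign "+1" transition still dominates every per-cell count, which loosely mirrors the paper's observation that a single well-chosen model can tolerate up to 10% adversarial mobility data; the paper's actual vulnerability finding concerns re-running the full AutoML search on poisoned data, which this sketch does not reproduce.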
author
Al Atiiq, Syafiq; Gehrmann, Christian; Khalil, Karim; Sternby, Jakob and Yuan, Yachao
organization
publishing date
2025
type
Contribution to journal
publication status
published
subject
keywords
5G, Adversarial mobility, NWDAF
in
Cluster Computing
volume
28
issue
16
article number
1043
publisher
Kluwer Academic Publishers
external identifiers
  • scopus:105019243865
ISSN
1386-7857
DOI
10.1007/s10586-025-05661-x
language
English
LU publication?
yes
additional info
Publisher Copyright: © The Author(s) 2025.
id
d0fcbc2f-735a-46a5-ba11-6c05f65bf90f
date added to LUP
2025-11-12 11:25:26
date last changed
2025-11-28 12:47:51
@article{d0fcbc2f-735a-46a5-ba11-6c05f65bf90f,
  abstract     = {{<p>To avoid extensive model selection and optimization work, Automated Machine Learning (AutoML) has emerged as a practical and efficient way to apply machine learning across many application areas. Data poisoning is a real threat to the accuracy of machine learning models, and recent studies have shown that AutoML can be even more sensitive to data poisoning than non-AutoML models. On the other hand, AutoML also has the potential to improve a model's robustness by adapting the model to adversarial patterns, so that good accuracy is maintained despite an attacker's efforts to poison the data. However, no previous studies have investigated these effects. In this paper, we examine the risks associated with adversarial trajectory attacks in mobile systems, focusing on mobility prediction problems. Using mobility data from two simulation frameworks, a simulator developed by Ericsson based on a real-world deployment of Airtel's open-network topology, and the ONE framework, we investigate three AutoML frameworks and how their mobility-prediction accuracy is affected by a mobile trajectory attack. Our results show that re-running AutoML at every retraining is vulnerable to adversarial mobility poisoning and exhibits high accuracy variance. By contrast, a single, well-chosen model from an initial AutoML search achieves more stable performance across adversarial conditions, even when the training set includes up to 10% adversarial mobility data.</p>}},
  author       = {{Al Atiiq, Syafiq and Gehrmann, Christian and Khalil, Karim and Sternby, Jakob and Yuan, Yachao}},
  issn         = {{1386-7857}},
  keywords     = {{5G; Adversarial mobility; NWDAF}},
  language     = {{eng}},
  number       = {{16}},
  publisher    = {{Kluwer Academic Publishers}},
  series       = {{Cluster Computing}},
  title        = {{Resilient automatic model selection for mobility prediction}},
  url          = {{http://dx.doi.org/10.1007/s10586-025-05661-x}},
  doi          = {{10.1007/s10586-025-05661-x}},
  volume       = {{28}},
  year         = {{2025}},
}