
Lund University Publications


Membership Inference Attack in Random Forests

Akbarian, Fatemeh and Aminifar, Amir (2025) ESANN 2025
Abstract
Machine Learning (ML) offers many opportunities, but its reliance on personal data raises privacy concerns. One such example is the Membership Inference Attack (MIA), which aims to determine whether a specific data point was part of a model’s training dataset. In this paper, we investigate this attack on Random Forests (RFs) and propose a method to quantify their vulnerability to MIA. We also demonstrate that in collaborative setups like federated learning, a client with access to the model and partial training dataset can establish MIA against other clients’ training data. The effectiveness of our method is validated through experiments.
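To illustrate the kind of attack the abstract describes, below is a minimal, hypothetical sketch of a confidence-threshold membership inference attack against a random forest. This is not the method proposed in the paper: it simply exploits the fact that an overfit model tends to be more confident on points it was trained on, guessing "member" whenever the top-class probability exceeds a threshold. All dataset parameters and the threshold value are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic data split into a training half (members) and a held-out half
# (non-members), playing the two populations the attacker must separate.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0
)

# Target model: a deliberately unconstrained (overfit) random forest.
model = RandomForestClassifier(n_estimators=100, max_depth=None, random_state=0)
model.fit(X_train, y_train)

def attack(model, X, threshold=0.9):
    """Guess 'member' when the model's top-class confidence exceeds the threshold."""
    confidence = model.predict_proba(X).max(axis=1)
    return confidence > threshold

# Members should be flagged far more often than non-members.
tpr = attack(model, X_train).mean()  # fraction of true members flagged
fpr = attack(model, X_test).mean()   # fraction of non-members wrongly flagged
print(f"member hit rate: {tpr:.2f}, non-member false alarm rate: {fpr:.2f}")
```

A gap between the two rates indicates membership leakage; the larger the gap, the more vulnerable the model, which is the intuition behind quantifying MIA vulnerability.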
author: Akbarian, Fatemeh and Aminifar, Amir
organization:
publishing date: 2025-04
type: Chapter in Book/Report/Conference proceeding
publication status: published
subject:
host publication: European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN 2025)
pages: 6 pages
publisher: European Symposium on Artificial Neural Networks
conference name: ESANN 2025
conference location: Bruges, Belgium
conference dates: 2025-04-23 - 2025-04-25
ISBN: 9782875870933
language: English
LU publication?: yes
id: 4d4d6520-f958-490b-a09d-41d8d6e6aff7
alternative location: https://www.esann.org/sites/default/files/proceedings/2025/ES2025-184.pdf
date added to LUP: 2025-11-24 16:15:18
date last changed: 2025-11-26 09:35:51
@inproceedings{4d4d6520-f958-490b-a09d-41d8d6e6aff7,
  abstract     = {{Machine Learning (ML) offers many opportunities, but its reliance on personal data raises privacy concerns. One such example is the Membership Inference Attack (MIA), which aims to determine whether a specific data point was part of a model’s training dataset. In this paper, we investigate this attack on Random Forests (RFs) and propose a method to quantify their vulnerability to MIA. We also demonstrate that in collaborative setups like federated learning, a client with access to the model and partial training dataset can establish MIA against other clients’ training data. The effectiveness of our method is validated through experiments.}},
  author       = {{Akbarian, Fatemeh and Aminifar, Amir}},
  booktitle    = {{European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN 2025)}},
  isbn         = {{9782875870933}},
  language     = {{eng}},
  month        = {{04}},
  publisher    = {{European Symposium on Artificial Neural Networks}},
  title        = {{Membership Inference Attack in Random Forests}},
  url          = {{https://lup.lub.lu.se/search/files/233843337/Membership_Inference_Attack_in_Random_Forests.pdf}},
  year         = {{2025}},
}