
Lund University Publications

LUND UNIVERSITY LIBRARIES

Privacy-Preserving Federated Interpretability

Abtahi Fahliani, Azra; Aminifar, Amin and Aminifar, Amir (2024) IEEE International Conference on Big Data, BigData 2024, p. 7592-7601
Abstract
Interpretability has become a crucial component in the Machine Learning (ML) domain. This is particularly important in the context of medical and health applications, where the underlying reasons behind how an ML model makes a certain decision are as important as the decision itself for the experts. However, interpreting an ML model based on limited local data may potentially lead to inaccurate conclusions. On the other hand, centralized decision making and interpretability, by transferring the data to a centralized server, may raise privacy concerns due to the sensitivity of personal/medical data in such applications.

In this paper, we propose a federated interpretability scheme based on SHAP (SHapley Additive exPlanations) value and DeepLIFT (Deep Learning Important FeaTures) to interpret ML models, without sharing sensitive data and in a privacy-preserving fashion. Our proposed federated interpretability scheme is a decentralized framework for interpreting ML models, where data remains on local devices, and only values that do not directly describe the raw data are aggregated in a privacy-preserving fashion to interpret the model.
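The aggregation idea the abstract describes can be sketched in Python. This is a minimal illustration under simplifying assumptions, not the paper's actual method: it assumes a linear model (whose exact SHAP values have a simple closed form) and uses plain weighted averaging in place of the paper's privacy-preserving aggregation; all function names below are hypothetical.

```python
import numpy as np

def local_attributions(model_weights, X):
    # Stand-in for SHAP/DeepLIFT: for a linear model with independent
    # features, the exact SHAP value of feature j for sample i is
    # w_j * (x_ij - mean_j), computed against a local baseline.
    baseline = X.mean(axis=0)
    return model_weights * (X - baseline)

def federated_importance(clients, model_weights):
    # Each client shares only its aggregated per-feature |attribution|
    # means and its sample count -- never the raw samples themselves.
    totals = np.zeros_like(model_weights)
    n_total = 0
    for X in clients:
        phi = np.abs(local_attributions(model_weights, X)).mean(axis=0)
        n = len(X)
        totals += phi * n
        n_total += n
    # Sample-weighted average across clients: global feature importance.
    return totals / n_total
```

In this sketch, what leaves each device is a single per-feature importance vector plus a sample count, which does not directly describe any raw data point; the paper additionally protects these aggregates with a privacy-preserving aggregation step, which is omitted here.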
author
Abtahi Fahliani, Azra; Aminifar, Amin and Aminifar, Amir
organization
publishing date
2024
type
Chapter in Book/Report/Conference proceeding
publication status
published
subject
keywords
explainable machine learning, privacy-preserving, federated learning, epilepsy, seizure prediction, seizure detection, EEG, ECG
host publication
Proceedings - 2024 IEEE International Conference on Big Data, BigData 2024
pages
7592 - 7601
publisher
IEEE - Institute of Electrical and Electronics Engineers Inc.
conference name
IEEE International Conference on Big Data, BigData 2024
conference location
Washington, DC, United States
conference dates
2024-12-15 - 2024-12-18
external identifiers
  • scopus:85217992949
ISBN
979-8-3503-6248-0
DOI
10.1109/BigData62323.2024.10825590
language
English
LU publication?
yes
id
6889b710-57f3-489f-b5d0-1c7fb1ae52f7
date added to LUP
2024-11-20 19:42:16
date last changed
2025-06-05 10:47:52
@inproceedings{6889b710-57f3-489f-b5d0-1c7fb1ae52f7,
  abstract     = {{Interpretability has become a crucial component in the Machine Learning (ML) domain. This is particularly important in the context of medical and health applications, where the underlying reasons behind how an ML model makes a certain decision are as important as the decision itself for the experts. However, interpreting an ML model based on limited local data may potentially lead to inaccurate conclusions. On the other hand, centralized decision making and interpretability, by transferring the data to a centralized server, may raise privacy concerns due to the sensitivity of personal/medical data in such applications.<br/><br/>In this paper, we propose a federated interpretability scheme based on SHAP (SHapley Additive exPlanations) value and DeepLIFT (Deep Learning Important FeaTures) to interpret ML models, without sharing sensitive data and in a privacy-preserving fashion. Our proposed federated interpretability scheme is a decentralized framework for interpreting ML models, where data remains on local devices, and only values that do not directly describe the raw data are aggregated in a privacy-preserving fashion to interpret the model.}},
  author       = {{Abtahi Fahliani, Azra and Aminifar, Amin and Aminifar, Amir}},
  booktitle    = {{Proceedings - 2024 IEEE International Conference on Big Data, BigData 2024}},
  isbn         = {{979-8-3503-6248-0}},
  keywords     = {{explainable machine learning; privacy-preserving; federated learning; epilepsy; seizure prediction; seizure detection; EEG; ECG}},
  language     = {{eng}},
  pages        = {{7592--7601}},
  publisher    = {{IEEE - Institute of Electrical and Electronics Engineers Inc.}},
  title        = {{Privacy-Preserving Federated Interpretability}},
  url          = {{https://lup.lub.lu.se/search/files/200283700/BigData2024_12_.pdf}},
  doi          = {{10.1109/BigData62323.2024.10825590}},
  year         = {{2024}},
}