Neural Fake Factor Estimation Using Data-Based Inference
(2025)
- Abstract
- In a high-energy physics data analysis, the term "fake" backgrounds refers to events that would formally not satisfy the (signal) process selection criteria, but are accepted nonetheless due to mis-reconstructed particles. This can occur, e.g., when leptons from secondary decays are incorrectly identified as originating from the hard-scatter interaction point (known as non-prompt leptons), or when other physics objects, such as hadronic jets, are mistakenly reconstructed as leptons (resulting in mis-identified leptons). These fake leptons are usually estimated using data-driven techniques, one of the most common being the Fake Factor method. This method relies on predicting the fake lepton contribution by reweighting data events using a scale factor (i.e. fake factor) function. Traditionally, fake factors have been estimated by histogramming and computing the ratio of two data distributions, typically as functions of a few relevant physics variables such as the transverse momentum pT and pseudorapidity η. In this work, we introduce a novel approach to fake factor calculation, based on density ratio estimation using neural networks trained directly on data in a higher-dimensional feature space. We show that our method enables the computation of a continuous, unbinned fake factor on a per-event basis, offering a more flexible, precise, and higher-dimensional alternative to the conventional method, making it applicable to a wide range of analyses. A simple LHC open data analysis we implemented confirms the feasibility of the method and demonstrates that the ML-based fake factor provides smoother, more stable estimates across the phase space than traditional methods, reducing binning artifacts and improving extrapolation to signal regions.
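As a rough illustration of the core idea described in the abstract (not the authors' implementation), the sketch below trains a small neural-network classifier to separate two toy control samples, one with leptons passing a tight identification (the numerator region) and one with leptons failing it (the denominator region). By the standard classifier-based likelihood-ratio trick, the score s(x) of a classifier trained with the natural class proportions gives a continuous, per-event fake factor s/(1-s). All sample shapes, feature names, and network settings are assumptions made for the example.

```python
# Minimal sketch of classifier-based density ratio estimation for per-event
# fake factors. Illustration only, NOT the paper's implementation: the toy
# Gaussian "pass" / "fail" samples and all names below are assumptions.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Toy events in three features (think pT, eta, isolation) for the two
# fake-factor control regions: leptons passing the tight ID (numerator)
# and leptons passing a loose but failing the tight ID (denominator).
n_num, n_den = 20_000, 50_000
X_num = rng.normal(loc=[35.0, 0.0, 0.1], scale=[15.0, 1.2, 0.05], size=(n_num, 3))
X_den = rng.normal(loc=[25.0, 0.0, 0.3], scale=[12.0, 1.5, 0.15], size=(n_den, 3))

X = np.vstack([X_num, X_den])
y = np.concatenate([np.ones(n_num), np.zeros(n_den)])  # 1 = numerator region

# Neural-network classifier trained directly on the two data samples,
# keeping the natural class proportions (no re-balancing).
clf = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=300, random_state=0)
clf.fit(X, y)

def fake_factor(x):
    """Continuous, unbinned fake factor F(x).

    With unbalanced training, s(x) ~ N_num p_num(x) / (N_num p_num(x) + N_den p_den(x)),
    so s/(1-s) approximates the ratio of differential event yields in the two
    regions, which is what a binned fake factor estimates per (pT, eta) bin.
    """
    s = clf.predict_proba(np.atleast_2d(x))[:, 1]
    return s / (1.0 - s)

# Reweight denominator-region data events to predict the fake contribution
# in a numerator-like (signal-region) selection.
weights = fake_factor(X_den)
print("per-event fake factors: mean = %.3f" % weights.mean())
```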
Please use this URL to cite or link to this publication:
https://lup.lub.lu.se/record/aff41959-8179-436a-b9f7-bfc853a4bae5
- author
- Gavranovic, Jan; Calic, Lara (LU); Debevc, Jernej; Lytken, Else (LU) and Kersevan, Borut Paul
- organization
- publishing date
- 2025-11-10
- type
- Working paper/Preprint
- publication status
- published
- subject
- keywords
- High-energy physics, Fake Factor method, Data-driven background estimation, Machine Learning
- pages
- 23 pages
- publisher
- arXiv.org
- DOI
- 10.48550/arXiv.2511.06972
- language
- English
- LU publication?
- yes
- id
- aff41959-8179-436a-b9f7-bfc853a4bae5
- date added to LUP
- 2025-12-25 11:46:39
- date last changed
- 2026-01-07 13:26:08
@misc{aff41959-8179-436a-b9f7-bfc853a4bae5,
abstract = {{In a high-energy physics data analysis, the term "fake" backgrounds refers to events that would formally not satisfy the (signal) process selection criteria, but are accepted nonetheless due to mis-reconstructed particles. This can occur, e.g., when leptons from secondary decays are incorrectly identified as originating from the hard-scatter interaction point (known as non-prompt leptons), or when other physics objects, such as hadronic jets, are mistakenly reconstructed as leptons (resulting in mis-identified leptons). These fake leptons are usually estimated using data-driven techniques, one of the most common being the Fake Factor method. This method relies on predicting the fake lepton contribution by reweighting data events using a scale factor (i.e. fake factor) function. Traditionally, fake factors have been estimated by histogramming and computing the ratio of two data distributions, typically as functions of a few relevant physics variables such as the transverse momentum pT and pseudorapidity η. In this work, we introduce a novel approach to fake factor calculation, based on density ratio estimation using neural networks trained directly on data in a higher-dimensional feature space. We show that our method enables the computation of a continuous, unbinned fake factor on a per-event basis, offering a more flexible, precise, and higher-dimensional alternative to the conventional method, making it applicable to a wide range of analyses. A simple LHC open data analysis we implemented confirms the feasibility of the method and demonstrates that the ML-based fake factor provides smoother, more stable estimates across the phase space than traditional methods, reducing binning artifacts and improving extrapolation to signal regions.}},
author = {{Gavranovic, Jan and Calic, Lara and Debevc, Jernej and Lytken, Else and Kersevan, Borut Paul}},
keywords = {{High-energy physics; Fake Factor method; Data-driven background estimation; Machine Learning}},
language = {{eng}},
month = {{11}},
note = {{Preprint}},
publisher = {{arXiv.org}},
title = {{Neural Fake Factor Estimation Using Data-Based Inference}},
url = {{http://dx.doi.org/10.48550/arXiv.2511.06972}},
doi = {{10.48550/arXiv.2511.06972}},
year = {{2025}},
}