
Lund University Publications


V-ir-Net : A Novel Neural Network for Pupil and Corneal Reflection Detection trained on Simulated Light Distributions

Maquiling, Virmarie ; Byrne, Sean Anthony ; Nyström, Marcus ; Kasneci, Enkelejda and Niehorster, Diederick C. (2023) p.1-7
Abstract
Deep learning has shown promise for gaze estimation in Virtual Reality (VR) and other head-mounted applications, but such models are hard to train due to a lack of available data. Here we introduce a novel method to train neural networks for gaze estimation using synthetic images that model the light distributions captured in a pupil-corneal reflection (P-CR) setup. We tested our model on a dataset of real eye images from a VR setup, achieving 76% accuracy, close to that of the state-of-the-art model, which was trained on the dataset itself. The localization error was 1.56 pixels for CRs and 2.02 pixels for the pupil, on par with the state of the art. Our approach allowed inference on the whole dataset without sacrificing data for model training. Our method provides a cost-efficient and lightweight training alternative, eliminating the need for hand-labeled data. It offers flexible customization, e.g. adapting to different illuminator configurations, with minimal code changes.
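The abstract's core idea is rendering synthetic eye images from simple light-distribution models rather than hand-labeling real frames. The paper's exact rendering pipeline is not given in this record; the following is a minimal sketch, assuming the pupil and corneal reflections can be approximated as 2D Gaussian intensity blobs (the function names, image size, and parameter values are illustrative assumptions, not the authors' implementation).

```python
import numpy as np

def gaussian_2d(h, w, cx, cy, sigma, amplitude):
    """Render a 2D Gaussian 'light distribution' centred at (cx, cy)."""
    ys, xs = np.mgrid[0:h, 0:w]
    return amplitude * np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))

def synth_eye_image(h=120, w=160, rng=None):
    """Compose a toy P-CR frame: one dark pupil blob plus bright CR glints.

    Returns the image and the ground-truth pupil/CR centres, which could
    serve as labels for a detection network. All sizes are assumptions.
    """
    rng = rng if rng is not None else np.random.default_rng()
    img = np.full((h, w), 0.5)  # mid-grey background
    # Random pupil centre, kept away from the image border.
    px, py = rng.uniform(40, w - 40), rng.uniform(30, h - 30)
    img -= gaussian_2d(h, w, px, py, sigma=12, amplitude=0.4)  # dark pupil
    crs = []
    for dx, dy in [(-15, 10), (15, 10)]:  # two hypothetical illuminators
        cx, cy = px + dx, py + dy
        img += gaussian_2d(h, w, cx, cy, sigma=2, amplitude=0.6)  # bright CR
        crs.append((cx, cy))
    return np.clip(img, 0.0, 1.0), (px, py), crs
```

Because every image is generated with its labels, an arbitrarily large training set can be produced on the fly, and the glint offsets can be changed to match a different illuminator configuration with a one-line edit.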
author
Maquiling, Virmarie ; Byrne, Sean Anthony ; Nyström, Marcus ; Kasneci, Enkelejda and Niehorster, Diederick C.
organization
publishing date
2023-09
type
Chapter in Book/Report/Conference proceeding
publication status
published
subject
host publication
MobileHCI '23 Companion : Proceedings of the 25th International Conference on Mobile Human-Computer Interaction
editor
Komninos, Andreas ; Santoro, Carmen ; Gavalas, Damianos ; Schoening, Johannes ; Matera, Maristella and Leiva, Luis A.
article number
23
pages
7 pages
publisher
Association for Computing Machinery (ACM)
external identifiers
  • scopus:85174318528
ISBN
978-1-4503-9924-1
DOI
10.1145/3565066.3608690
language
English
LU publication?
yes
id
eb3a8c08-a6c1-45b3-8c23-32afd9f23de0
alternative location
https://dl.acm.org/doi/10.1145/3565066.3608690
date added to LUP
2023-09-27 10:33:59
date last changed
2023-12-11 15:19:46
@inproceedings{eb3a8c08-a6c1-45b3-8c23-32afd9f23de0,
  abstract     = {{Deep learning has shown promise for gaze estimation in Virtual Reality (VR) and other head-mounted applications, but such models are hard to train due to a lack of available data. Here we introduce a novel method to train neural networks for gaze estimation using synthetic images that model the light distributions captured in a pupil-corneal reflection (P-CR) setup. We tested our model on a dataset of real eye images from a VR setup, achieving 76% accuracy, close to that of the state-of-the-art model, which was trained on the dataset itself. The localization error was 1.56 pixels for CRs and 2.02 pixels for the pupil, on par with the state of the art. Our approach allowed inference on the whole dataset without sacrificing data for model training. Our method provides a cost-efficient and lightweight training alternative, eliminating the need for hand-labeled data. It offers flexible customization, e.g. adapting to different illuminator configurations, with minimal code changes.}},
  author       = {{Maquiling, Virmarie and Byrne, Sean Anthony and Nyström, Marcus and Kasneci, Enkelejda and Niehorster, Diederick C.}},
  booktitle    = {{MobileHCI '23 Companion : Proceedings of the 25th International Conference on Mobile Human-Computer Interaction}},
  editor       = {{Komninos, Andreas and Santoro, Carmen and Gavalas, Damianos and Schoening, Johannes and Matera, Maristella and Leiva, Luis A.}},
  isbn         = {{978-1-4503-9924-1}},
  language     = {{eng}},
  month        = {{09}},
  pages        = {{1--7}},
  publisher    = {{Association for Computing Machinery (ACM)}},
  title        = {{V-ir-Net : A Novel Neural Network for Pupil and Corneal Reflection Detection trained on Simulated Light Distributions}},
  url          = {{http://dx.doi.org/10.1145/3565066.3608690}},
  doi          = {{10.1145/3565066.3608690}},
  year         = {{2023}},
}