Head Movement Compensation and Multi-Modal Event Detection in Eye-Tracking Data for Unconstrained Head Movements
(2016) In Journal of Neuroscience Methods 274, p. 13-26
- Abstract
- Background
The complexity of analyzing eye-tracking signals increases as eye-trackers become more mobile. The signals from a mobile eye-tracker are recorded relative to the head coordinate system; when the head and body move, these movements influence the recorded eye-tracking signal, which makes subsequent event detection difficult.
New method
The purpose of the present paper is to develop a method that performs robust event detection in signals recorded using a mobile eye-tracker. The proposed method compensates for head movements recorded using an inertial measurement unit and employs a multi-modal event detection algorithm. The event detection algorithm is based on the head-compensated eye-tracking signal combined with information about objects detected in the scene camera of the mobile eye-tracker.
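As a rough illustration of the compensation idea, the sketch below adds an IMU-derived head orientation to the eye-in-head gaze angle so that gaze is expressed in world coordinates. This is a minimal 1-D sketch under stated assumptions (Euler integration of the gyroscope, small-angle addition, distant targets), not the paper's implementation; all function names and the toy data are hypothetical.

```python
# Hedged 1-D sketch of IMU-based head-movement compensation for
# eye-in-head gaze angles. Assumptions: simple Euler integration of
# the gyroscope, additive angles (valid for distant targets, as in
# the paper's 2.6 m setup). Not the authors' actual implementation.

def integrate_gyro(gyro_deg_s, dt, theta0=0.0):
    """Integrate angular velocity (deg/s) into head yaw angles (deg)."""
    angles = []
    theta = theta0
    for omega in gyro_deg_s:
        theta += omega * dt          # Euler integration step
        angles.append(theta)
    return angles

def compensate(eye_in_head_deg, head_yaw_deg):
    """World-referenced gaze = head orientation + eye-in-head angle."""
    return [e + h for e, h in zip(eye_in_head_deg, head_yaw_deg)]

# Toy example: the head rotates at a constant 10 deg/s while the eye
# counter-rotates (as in the vestibulo-ocular reflex), so gaze stays
# on a distant target.
dt = 0.01                                # 100 Hz sampling
gyro = [10.0] * 100                      # constant head rotation
head = integrate_gyro(gyro, dt)
eye_in_head = [-h for h in head]         # perfect counter-rotation
gaze = compensate(eye_in_head, head)
# The compensated gaze is constant, so an event detector operating on
# it can correctly classify the interval as a fixation, whereas the
# raw eye-in-head signal drifts with the head movement.
```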
Results
The method is evaluated with participants seated 2.6 m in front of a large screen, and is therefore only valid for distant targets. The proposed head compensation decreases the standard deviation during fixation intervals from 8° to 3.3° for eye-tracking signals recorded during large head movements.
Comparison with existing methods
The multi-modal event detection algorithm outperforms both an existing algorithm (I-VDT) and the built-in algorithm of the mobile eye-tracker, with an average balanced accuracy, calculated over all types of eye movements, of 0.90, compared to 0.85 and 0.75, respectively, for the compared algorithms.
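The balanced accuracy reported above averages the per-class recall over the eye-movement classes, so that rare events (e.g. smooth pursuit) weigh as much as frequent ones. A minimal sketch, assuming this standard definition (the class labels and toy data below are illustrative, not from the paper):

```python
# Balanced accuracy = mean of per-class recalls. Standard definition;
# the labels "fix"/"sac"/"sp" (fixation, saccade, smooth pursuit) and
# the sample sequences are made up for illustration.
from collections import defaultdict

def balanced_accuracy(true_labels, pred_labels):
    correct = defaultdict(int)
    total = defaultdict(int)
    for t, p in zip(true_labels, pred_labels):
        total[t] += 1
        if t == p:
            correct[t] += 1
    recalls = [correct[c] / total[c] for c in total]
    return sum(recalls) / len(recalls)

true = ["fix", "fix", "sac", "sac", "sp", "sp"]
pred = ["fix", "fix", "sac", "fix", "sp", "sp"]  # one saccade missed
score = balanced_accuracy(true, pred)
# Recalls: fixation 1.0, saccade 0.5, pursuit 1.0 -> mean 0.833
```

Averaging recalls rather than raw accuracy prevents a detector that labels everything as "fixation" from scoring well just because fixations dominate the recording.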
Conclusions
The proposed event detector, which combines head movement compensation with information about detected objects in the scene video, enables improved classification of events in mobile eye-tracking data.
Please use this URL to cite or link to this publication:
https://lup.lub.lu.se/record/d1bcc5c1-8310-4d2b-9c64-17949d81995b
- author
- Larsson, Linnéa LU ; Schwaller, Andrea ; Nyström, Marcus LU and Stridh, Martin LU
- organization
- publishing date
- 2016
- type
- Contribution to journal
- publication status
- published
- subject
- in
- Journal of Neuroscience Methods
- volume
- 274
- pages
- 13 - 26
- publisher
- Elsevier
- external identifiers
- scopus:84989897153
- pmid:27693470
- wos:000389091800002
- ISSN
- 1872-678X
- DOI
- 10.1016/j.jneumeth.2016.09.005
- language
- English
- LU publication?
- yes
- id
- d1bcc5c1-8310-4d2b-9c64-17949d81995b
- date added to LUP
- 2016-09-19 20:28:00
- date last changed
- 2023-01-06 07:59:30
@article{d1bcc5c1-8310-4d2b-9c64-17949d81995b,
  abstract  = {{Background<br/><br/>The complexity of analyzing eye-tracking signals increases as eye-trackers become more mobile. The signals from a mobile eye-tracker are recorded in relation to the head coordinate system and when the head and body move, the recorded eye-tracking signal is influenced by these movements, which render the subsequent event detection difficult.<br/>New method<br/><br/>The purpose of the present paper is to develop a method that performs robust event detection in signals recorded using a mobile eye-tracker. The proposed method performs compensation of head movements recorded using an inertial measurement unit and employs a multi-modal event detection algorithm. The event detection algorithm is based on the head compensated eye-tracking signal combined with information about detected objects extracted from the scene camera of the mobile eye-tracker.<br/>Results<br/><br/>The method is evaluated when participants are seated 2.6 m in front of a big screen, and is therefore only valid for distant targets. The proposed method for head compensation decreases the standard deviation during intervals of fixations from 8° to 3.3° for eye-tracking signals recorded during large head movements.<br/>Comparison with existing methods<br/><br/>The multi-modal event detection algorithm outperforms both an existing algorithm (I-VDT) and the built-in-algorithm of the mobile eye-tracker with an average balanced accuracy, calculated over all types of eye movements, of 0.90, compared to 0.85 and 0.75, respectively for the compared algorithms.<br/>Conclusions<br/><br/>The proposed event detector that combines head movement compensation and information regarding detected objects in the scene video enables for improved classification of events in mobile eye-tracking data.}},
  author    = {{Larsson, Linnéa and Schwaller, Andrea and Nyström, Marcus and Stridh, Martin}},
  issn      = {{1872-678X}},
  language  = {{eng}},
  pages     = {{13--26}},
  publisher = {{Elsevier}},
  series    = {{Journal of Neuroscience Methods}},
  title     = {{Head Movement Compensation and Multi-Modal Event Detection in Eye-Tracking Data for Unconstrained Head Movements}},
  url       = {{http://dx.doi.org/10.1016/j.jneumeth.2016.09.005}},
  doi       = {{10.1016/j.jneumeth.2016.09.005}},
  volume    = {{274}},
  year      = {{2016}},
}