Object Tracking with Compressed Networks

Alfvén, Alexander (2016). In Master's Theses in Mathematical Sciences, FMA820 20161
Mathematics (Faculty of Engineering)
Abstract
With the goal of making object-tracking neural networks more suitable for phones and tablets, a quantisation process demonstrates considerable reductions in hardware requirements with surprisingly minimal accuracy loss.
Popular Abstract
While using machine learning for image-related tasks is not unusual, using it for object tracking is much less common. Machine learning for this task requires significant amounts of storage and memory, and as a result, neural networks are currently of limited viability on smaller devices such as smartphones and tablets.

Techniques for reducing the size of neural networks exist; however, because object tracking is a rarer type of application, the effects of these compression methods are not well known. For this thesis, a compression technique based on quantisation was tested, with very good results.

By quantising each layer of the neural network to lower-precision integers, the memory requirement of the network could be cut in half. With an original memory requirement of approximately 1.6 GiB, the reduction to roughly 800 MiB made deployment viable on many more devices. Furthermore, the technique simultaneously reduced the storage requirement of the network to just a quarter of the original size, from nearly 400 MiB to 100 MiB.
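
The record does not spell out the exact quantisation scheme, but a common per-layer approach maps each layer's floating-point weights to low-precision integers together with a per-layer scale and offset. The sketch below is a minimal NumPy illustration of that idea; the function names, the affine (min/max) mapping, and the 8-bit width are assumptions, not the thesis's actual implementation.

import numpy as np

def quantise_layer(weights, n_bits=8):
    # Hypothetical per-layer affine quantisation: map each layer's float32
    # weights onto integers in [0, 2**n_bits - 1] using the layer's own range.
    qmin, qmax = 0, 2 ** n_bits - 1
    w_min, w_max = float(weights.min()), float(weights.max())
    scale = (w_max - w_min) / (qmax - qmin)   # quantisation step for this layer
    codes = np.round((weights - w_min) / scale).astype(np.uint8)
    return codes, scale, w_min                # scale/offset are kept to dequantise

def dequantise_layer(codes, scale, w_min):
    # Recover a float approximation of the original weights.
    return codes.astype(np.float32) * scale + w_min

# Example: one hypothetical 4096x4096 fully connected layer.
w = np.random.randn(4096, 4096).astype(np.float32)   # 64 MiB as float32
codes, scale, w_min = quantise_layer(w)               # 16 MiB as uint8
w_hat = dequantise_layer(codes, scale, w_min)
print("max reconstruction error:", np.abs(w - w_hat).max())  # about scale/2

Storing 8-bit codes in place of 32-bit floats would account for a roughly four-fold storage saving, and expanding them to 16-bit values at run time would halve the resident memory, which would be consistent with the figures quoted above, though the record does not confirm this split.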

Despite the considerable savings in both memory and storage, the effect on accuracy was surprisingly small: after quantisation, the accuracy effectively remained the same. Due to technical issues, the computational speed was unfortunately significantly reduced. However, these slowdowns stemmed only from software compatibility issues which, if resolved, should yield faster computation than the original network.

These findings demonstrate that quantisation can effectively lower the hardware requirements of neural networks. The memory requirement is reduced to a level that smartphones and tablets can often handle, and the storage reduction significantly shrinks download and installation size. Finally, the potential speed increase may allow weaker devices to attain real-time performance.
author: Alfvén, Alexander
organization: Mathematics (Faculty of Engineering)
course: FMA820 20161
year: 2016
type: H2 - Master's Degree (Two Years)
publication/series: Master's Theses in Mathematical Sciences
report number: LUTFMA-3309-2016
ISSN: 1404-6342
other publication id: 2016:E53
language: English
id: 8897859
date added to LUP: 2017-01-30 15:59:29
date last changed: 2017-01-30 15:59:29
@misc{8897859,
  abstract     = {{With the goal of making object tracking neural networks more suitable for phones and tablets, a quantisation process demonstrates considerable reductions in hardware requirements at surprisingly minimal accuracy losses.}},
  author       = {{Alfvén, Alexander}},
  issn         = {{1404-6342}},
  language     = {{eng}},
  note         = {{Student Paper}},
  series       = {{Master's Theses in Mathematical Sciences}},
  title        = {{Object Tracking with Compressed Networks}},
  year         = {{2016}},
}