
Distributing a Neural Network on Axis Cameras

Ahlbeck, Axel and Jakobsson, Anton (2016) In LU-CS-EX 2016-41 EDA920 20161
Department of Computer Science
Abstract
This document describes the methods and results of our Master's Thesis, carried out at Axis Communications AB.
A central problem with deep neural networks is that they contain a large number of parameters and require heavy computations. To cope with this, our idea was to split the network into chunks large enough to warrant their own core, yet small enough not to violate our memory constraints.
The goal of the thesis is to investigate whether it is feasible to distribute and run a deep neural network on a network of cameras with tight constraints on bandwidth and memory capacity. This is done by performing experiments on existing cameras as well as Raspberry Pis, which serve as a stand-in for how the next generation of cameras might perform.
The first part of the thesis discusses how a neural network can be partitioned and describes the problems that may occur in doing so. The second part presents results and measurements from runs on cameras and Raspberry Pis, which are then discussed.
Optimizations and bottlenecks are then described and discussed; in particular, how the application benefits from hardware acceleration. Finally, a few unsolved problems are identified and presented as future work.
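The abstract describes splitting the network into contiguous chunks, each small enough to respect a per-device memory budget. As a minimal illustration of that idea (not the authors' actual partitioning method; the function name, the greedy strategy, and the example numbers are all assumptions for this sketch), a layer-wise greedy partition might look like:

```python
# Hedged sketch: greedy partitioning of a feed-forward network's layers
# into contiguous chunks, each fitting one device's memory budget.
# partition_layers and the numbers below are illustrative, not from the thesis.

def partition_layers(layer_sizes, memory_budget):
    """Split per-layer parameter sizes into contiguous chunks whose
    totals each stay within memory_budget (one chunk per device)."""
    if any(size > memory_budget for size in layer_sizes):
        raise ValueError("a single layer exceeds the device budget")
    chunks, current, used = [], [], 0
    for size in layer_sizes:
        if used + size > memory_budget:  # this layer would overflow: new device
            chunks.append(current)
            current, used = [], 0
        current.append(size)
        used += size
    if current:
        chunks.append(current)
    return chunks

# Example: per-layer parameter sizes (in MB) spread over 40 MB devices.
print(partition_layers([10, 25, 30, 5, 20], 40))
# → [[10, 25], [30, 5], [20]]
```

A greedy contiguous split like this minimizes nothing globally; it simply guarantees the memory constraint, which matches the feasibility question the thesis poses. Balancing compute per chunk, as the "large enough to warrant their own core" criterion suggests, would need a different objective.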
Popular Abstract (translated from Swedish)
Computer vision keeps getting better, but with its precision comes a computational cost. This work investigates how a larger number of weaker computers, such as cameras, can be used to implement today's powerful algorithms.
author
Ahlbeck, Axel and Jakobsson, Anton
supervisor
organization
alternative title
Distribuerat Neuralt Nätverk
course
EDA920 20161
year
2016
type
H3 - Professional qualifications (4 Years - )
subject
keywords
Machine learning, computer vision, neural networks, deep learning, embedded systems, distributed systems
publication/series
LU-CS-EX 2016-41
report number
LU-CS-EX 2016-41
ISSN
1650-2884
language
English
id
8893617
date added to LUP
2016-10-18 13:54:54
date last changed
2016-10-18 13:54:54
@misc{8893617,
  abstract     = {This document describes the methods and results of our Master's Thesis, carried out at Axis Communications AB.
A central problem with deep neural networks is that they contain a large number of parameters and require heavy computations. To cope with this, our idea was to split the network into chunks large enough to warrant their own core, yet small enough not to violate our memory constraints.
The goal of the thesis is to investigate whether it is feasible to distribute and run a deep neural network on a network of cameras with tight constraints on bandwidth and memory capacity. This is done by performing experiments on existing cameras as well as Raspberry Pis, which serve as a stand-in for how the next generation of cameras might perform.
The first part of the thesis discusses how a neural network can be partitioned and describes the problems that may occur in doing so. The second part presents results and measurements from runs on cameras and Raspberry Pis, which are then discussed.
Optimizations and bottlenecks are then described and discussed; in particular, how the application benefits from hardware acceleration. Finally, a few unsolved problems are identified and presented as future work.},
  author       = {Ahlbeck, Axel and Jakobsson, Anton},
  issn         = {1650-2884},
  keyword      = {Machine learning, computer vision, neural networks, deep learning, embedded systems, distributed systems},
  language     = {eng},
  note         = {Student Paper},
  series       = {LU-CS-EX 2016-41},
  title        = {Distributing a Neural Network on Axis Cameras},
  year         = {2016},
}