
Transferability of features from multiple depths of a deep convolutional neural network

Widell, Emil LU and Edström, Jesper LU (2018) In Master's Theses in Mathematical Sciences FMAM05 20181
Mathematics (Faculty of Engineering)
Abstract
Deep convolutional neural networks are effective at learning structures in signals and sequential data. Their performance has surpassed most non-convolutional algorithms on classical problems within the field of image analysis. One reason behind their success is that, even though these networks generally need a large number of examples to learn from, they can be adapted to smaller tasks through various transfer learning techniques. When only a small amount of data is available, a common approach is to remove the output layer and use the remaining network as a feature extractor. In this work we attempt to quantify how network layers transition from general to specific by extracting features from multiple depths. The transition was measured on several different object classification problems by training classifiers both directly on the feature vectors and on combinations of the vectors.

We concluded that the features from the very last layer of a deep convolutional network are highly specific to the source task, and using them to learn other classification problems is often sub-optimal. The best depth to extract features from depends on how similar the target problem is to the source task.
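The feature-extraction approach described above can be sketched in a few lines. This is a minimal, framework-free illustration, not the thesis's actual setup: the network, its random weights, the layer sizes, the toy data, and the nearest-centroid classifier are all hypothetical stand-ins chosen to show the idea of training a classifier on activations taken from each depth.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "pre-trained" network: three dense layers with ReLU.
# Weights are random here purely for illustration.
weights = [rng.standard_normal((8, 16)),
           rng.standard_normal((16, 16)),
           rng.standard_normal((16, 4))]

def forward_with_features(x):
    """Run the network and collect the activation at every depth."""
    feats = []
    h = x
    for W in weights:
        h = np.maximum(h @ W, 0.0)  # ReLU
        feats.append(h)
    return feats  # feats[-1] plays the role of the last-layer features

# Toy "target task" data: two classes of random vectors.
X = rng.standard_normal((20, 8))
y = np.array([0] * 10 + [1] * 10)

# Train a simple classifier (nearest centroid) on features from each depth.
for depth in range(len(weights)):
    F = np.stack([forward_with_features(x)[depth] for x in X])
    centroids = np.stack([F[y == c].mean(axis=0) for c in (0, 1)])
    dists = ((F[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
    pred = np.argmin(dists, axis=1)
    print(f"depth {depth}: train accuracy {(pred == y).mean():.2f}")
```

In a real experiment the random network would be replaced by a network pre-trained on the source task, and the per-depth accuracies on held-out target data would indicate which depth yields the most transferable features.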
Popular Abstract (translated from Swedish)
Deep convolutional networks have become enormously popular in recent years thanks to their ability to learn to understand the content of signals and images. These deep networks underpin modern technologies such as self-driving cars and advanced face recognition in mobile phones. By investigating how structures can be extracted from these networks, we can improve our ability to learn new problems.
author: Widell, Emil LU and Edström, Jesper LU
course: FMAM05 20181
type: H2 - Master's Degree (Two Years)
keywords: Deep Learning, Deep Neural Networks, Transfer Learning, Feature Extraction
publication/series: Master's Theses in Mathematical Sciences
report number: LUTFMA-3347-2018
ISSN: 1404-6342
other publication id: 2018:E24
language: English
id: 8943169
date added to LUP: 2018-05-30 17:36:29
date last changed: 2018-05-30 17:36:29
@misc{8943169,
  abstract     = {Deep convolutional neural networks are effective at learning structures in signals and sequential data. Their performance has surpassed most non-convolutional algorithms on classical problems within the field of image analysis. One reason behind their success is that, even though these networks generally need a large number of examples to learn from, they can be adapted to smaller tasks through various transfer learning techniques. When only a small amount of data is available, a common approach is to remove the output layer and use the remaining network as a feature extractor. In this work we attempt to quantify how network layers transition from general to specific by extracting features from multiple depths. The transition was measured on several different object classification problems by training classifiers both directly on the feature vectors and on combinations of the vectors.

We concluded that the features from the very last layer of a deep convolutional network are highly specific to the source task, and using them to learn other classification problems is often sub-optimal. The best depth to extract features from depends on how similar the target problem is to the source task.},
  author       = {Widell, Emil and Edström, Jesper},
  issn         = {1404-6342},
  keyword      = {Deep Learning, Deep Neural Networks, Transfer Learning, Feature Extraction},
  language     = {eng},
  note         = {Student Paper},
  series       = {Master's Theses in Mathematical Sciences},
  title        = {Transferability of features from multiple depths of a deep convolutional neural network},
  year         = {2018},
}