LUP Student Papers

LUND UNIVERSITY LIBRARIES

Transfer Learning in Autonomous Vehicles using Convolutional Networks

Lundberg, Gustav LU (2019) In Master Theses in Mathematical Sciences FMAM05 20182
Mathematics (Faculty of Engineering)
Abstract
This thesis has investigated the potential benefits of using transfer learning when training convolutional neural networks for the task of autonomously driving a car. Three transfer learning networks were trained and compared with a conventionally trained convolutional neural network. The first conclusion is that, as expected, training is considerably quicker when using transfer learning. More specifically, transfer learning builds on a pretrained network and requires only part of the network to be trained, so fewer weights have to be updated per epoch of training. Here, training was approximately twice as fast per epoch compared to training a non-transfer-learning network. The pretrained network used in this thesis also had to be trained, a step that in a practical transfer learning setting would already have been done. The data used to train the so-called pretrained network originates from a different, although similar, domain.

By keeping a varying number of convolutional layers from the pretrained network fixed, three networks were trained using transfer learning. In this way, knowledge is said to be transferred from a previously solved problem to the current one. The network called fine-tuning network 2 performed better than the other two transfer learning networks when tested in the simulator. After only 7 epochs of training, this network could drive around the entire track without going off-road, even at higher speeds. This is to be compared with the so-called conventional network, trained in a classic machine learning setting: not until it had been trained for 18 epochs did the performance of the conventional network match that of fine-tuning network 2, trained for merely 7 epochs. In other words, in less than half the number of epochs, and with each epoch training twice as fast, a network using transfer learning performed as well as a non-transfer-learning network.
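The per-epoch speed-up from freezing layers can be sketched with a toy parameter count. The layer names and sizes below are hypothetical, not taken from the thesis; they are chosen so that the frozen convolutional layers hold about half of the weights, in line with the roughly twofold per-epoch speed-up reported above.

```python
# Toy sketch of why freezing pretrained layers speeds up training.
# Layer names and parameter counts are hypothetical, not from the thesis.
layer_params = {
    "conv1": 500_000,
    "conv2": 400_000,
    "conv3": 300_000,
    "fc1": 900_000,
    "fc2": 300_000,
}

def trainable_fraction(frozen_layers):
    """Fraction of all weights still updated each epoch after freezing."""
    total = sum(layer_params.values())
    trainable = sum(count for name, count in layer_params.items()
                    if name not in frozen_layers)
    return trainable / total

# Freezing the first three convolutional layers halves the work per epoch:
print(f"{trainable_fraction({'conv1', 'conv2', 'conv3'}):.0%}")  # → 50%
```

Keeping more convolutional layers fixed shrinks this fraction further, which is exactly the trade-off the three fine-tuning networks in the thesis explore.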
Popular Abstract
The increase in computer power and access to large quantities of data are two important factors that have helped drive the rapid development of powerful machine learning applications. A special type of machine learning algorithm is inspired by the structure of the biological brain: artificial neural networks. The process of tuning the parameters of such a structure is called training the network; it can be time-consuming and often demands a lot of data to train on.

An interesting application of artificial neural networks is autonomous vehicles. Progress in this field has already come a long way, but smarter algorithms that need less training data and can be trained more quickly would still be very valuable. This thesis investigates a method referred to as transfer learning, and specifically its potential to reduce training time.

An artificial neural network, just like a biological neural network, is made up of neurons that are connected to each other. Commonly, the neurons in an artificial neural network are structured in layers. The goal of training is to find the right parameters to achieve some predetermined goal, such as staying in the lane for a self-driving car.

The idea of transfer learning is to transfer knowledge from an old problem when faced with a new, similar problem. This is inspired by the fact that humans learn a new concept more easily if we already know a similar one. In machine learning, imagine having trained a network on an old problem and then being faced with a new, similar problem. Instead of training a new neural network from scratch, transfer learning reuses parts of the old network and only partially trains the new one. Since fewer layers are trained, the desired outcome is less training time or better results.
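The reuse-and-partially-train idea can be illustrated with a small sketch. The layer layout, weight counts, and the stand-in update rule below are all hypothetical; a real implementation would compute gradients of a loss function instead.

```python
import random

random.seed(0)

def make_layer(n_weights):
    """A layer is just a list of weights in this toy model."""
    return [random.uniform(-1.0, 1.0) for _ in range(n_weights)]

# Layers reused from the previously solved problem (hypothetical sizes).
pretrained = {"conv1": make_layer(10), "conv2": make_layer(10)}
# New task-specific layer, trained from scratch.
network = {**pretrained, "fc": make_layer(5)}

def train_step(layers, frozen, lr=0.01):
    """Update only unfrozen layers; frozen ones keep pretrained weights.

    Returns how many weights were touched (a proxy for training cost).
    """
    updates = 0
    for name, weights in layers.items():
        if name in frozen:
            continue  # transfer learning: skip the reused layers
        for i, w in enumerate(weights):
            weights[i] = w - lr * w  # stand-in for a real gradient step
            updates += 1
    return updates

# Only the 5 weights of the new "fc" layer are updated per step.
print(train_step(network, frozen={"conv1", "conv2"}))  # → 5
```

Training from scratch would update all 25 weights every step; freezing the reused layers touches only 5, which is the source of the time savings described above.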

In this thesis we have seen that transfer learning can in fact give results as good as standard machine learning methods. The gain from using transfer learning was a decrease in training time: in about one fourth of the training time, the results from transfer learning were as good as those from standard machine learning methods.
author: Lundberg, Gustav LU
course: FMAM05 20182
year: 2019
type: H2 - Master's Degree (Two Years)
keywords: Convolutional Neural Networks, Transfer Learning, Machine Learning, Autonomous Vehicles
publication/series: Master Theses in Mathematical Sciences
report number: LUTFMA-3373-2019
ISSN: 1404-6342
other publication id: 2019:E1
language: English
id: 8974804
date added to LUP: 2019-07-15 10:33:43
date last changed: 2019-07-15 10:33:43
@misc{8974804,
  abstract     = {{This thesis has investigated the potential benefits of using transfer learning when training convolutional neural networks for the task of autonomously driving a car. Three transfer learning networks were trained and compared with a conventionally trained convolutional neural network. The first conclusion is that, as expected, training is considerably quicker when using transfer learning. More specifically, transfer learning builds on a pretrained network and requires only part of the network to be trained, so fewer weights have to be updated per epoch of training. Here, training was approximately twice as fast per epoch compared to training a non-transfer-learning network. The pretrained network used in this thesis also had to be trained, a step that in a practical transfer learning setting would already have been done. The data used to train the so-called pretrained network originates from a different, although similar, domain.

By keeping a varying number of convolutional layers from the pretrained network fixed, three networks were trained using transfer learning. In this way, knowledge is said to be transferred from a previously solved problem to the current one. The network called fine-tuning network 2 performed better than the other two transfer learning networks when tested in the simulator. After only 7 epochs of training, this network could drive around the entire track without going off-road, even at higher speeds. This is to be compared with the so-called conventional network, trained in a classic machine learning setting: not until it had been trained for 18 epochs did the performance of the conventional network match that of fine-tuning network 2, trained for merely 7 epochs. In other words, in less than half the number of epochs, and with each epoch training twice as fast, a network using transfer learning performed as well as a non-transfer-learning network.}},
  author       = {{Lundberg, Gustav}},
  issn         = {{1404-6342}},
  language     = {{eng}},
  note         = {{Student Paper}},
  series       = {{Master Theses in Mathematical Sciences}},
  title        = {{Transfer Learning in Autonomous Vehicles using Convolutional Networks}},
  year         = {{2019}},
}