Lund University Publications

Object Detector Differences when Using Synthetic and Real Training Data

Ljungqvist, Martin Georg; Nordander, Otto; Skans, Markus; Mildner, Arvid; Liu, Tony and Nugues, Pierre (2023) In SN Computer Science 4(3).
Abstract

To train well-performing, generalizing neural networks, sufficiently large and diverse datasets are needed. Collecting data while adhering to privacy legislation becomes increasingly difficult, and annotating these large datasets is both a resource-heavy and time-consuming task. An approach to overcome these difficulties is to use synthetic data, since it is inherently scalable and can be automatically annotated. However, how training on synthetic data affects the layers of a neural network is still unclear. In this paper, we train the YOLOv3 object detector on real and synthetic images from city environments. We perform a similarity analysis using Centered Kernel Alignment (CKA) to explore the effects of training on synthetic data on a layer-wise basis. The analysis captures the architecture of the detector while showing both different and similar patterns between different models. With this similarity analysis, we want to give insight into how training on synthetic data affects each layer and to provide a better understanding of the inner workings of complex neural networks. The results show that the largest similarity between a detector trained on real data and a detector trained on synthetic data was in the early layers, and the largest difference was in the head part. The results also show that no major difference in performance or similarity could be seen between a frozen and an unfrozen backbone.
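The record itself contains no code, but the layer-wise comparison the abstract describes can be illustrated with a minimal sketch of linear CKA between two activation matrices. The function name, array shapes, and the probe-batch setup below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear Centered Kernel Alignment between two activation matrices.

    X, Y: arrays of shape (n_examples, n_features); the feature
    dimensions of the two layers being compared may differ.
    """
    # Center each feature (column) so the implied Gram matrices are centered.
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)

    # Linear-kernel CKA: ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    cross = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    norm_x = np.linalg.norm(X.T @ X, ord="fro")
    norm_y = np.linalg.norm(Y.T @ Y, ord="fro")
    return cross / (norm_x * norm_y)

# Toy usage: compare flattened activations of the same layer from two
# detectors (e.g. one trained on real and one on synthetic images) over
# the same batch of probe images. Shapes and data here are placeholders.
rng = np.random.default_rng(0)
acts_real = rng.normal(size=(128, 256))    # 128 probe images, 256 features
acts_synth = rng.normal(size=(128, 256))
print(linear_cka(acts_real, acts_synth))   # similarity score in [0, 1]
```

Repeating such a comparison for every layer pair yields the kind of layer-wise similarity profile the abstract refers to, with values near 1 indicating highly similar representations.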

author: Ljungqvist, Martin Georg; Nordander, Otto; Skans, Markus; Mildner, Arvid; Liu, Tony and Nugues, Pierre
organization:
publishing date: 2023
type: Contribution to journal
publication status: published
subject:
keywords: Centered Kernel Alignment, Layer Similarity, Object Detection
in: SN Computer Science
volume: 4
issue: 3
article number: 302
publisher: Springer Nature
external identifiers: scopus:85151427115
ISSN: 2662-995X
DOI: 10.1007/s42979-023-01704-5
language: English
LU publication?: yes
id: 68c26cc8-08ba-46e3-95a3-bfbde241fa6c
date added to LUP: 2023-05-16 15:44:59
date last changed: 2023-05-16 15:44:59
@article{68c26cc8-08ba-46e3-95a3-bfbde241fa6c,
  abstract     = {{<p>To train well-performing generalizing neural networks, sufficiently large and diverse datasets are needed. Collecting data while adhering to privacy legislation becomes increasingly difficult and annotating these large datasets is both a resource-heavy and time-consuming task. An approach to overcome these difficulties is to use synthetic data since it is inherently scalable and can be automatically annotated. However, how training on synthetic data affects the layers of a neural network is still unclear. In this paper, we train the YOLOv3 object detector on real and synthetic images from city environments. We perform a similarity analysis using Centered Kernel Alignment (CKA) to explore the effects of training on synthetic data on a layer-wise basis. The analysis captures the architecture of the detector while showing both different and similar patterns between different models. With this similarity analysis, we want to give insights on how training synthetic data affects each layer and to give a better understanding of the inner workings of complex neural networks. The results show that the largest similarity between a detector trained on real data and a detector trained on synthetic data was in the early layers, and the largest difference was in the head part. The results also show that no major difference in performance or similarity could be seen between frozen and unfrozen backbone.</p>}},
  author       = {{Ljungqvist, Martin Georg and Nordander, Otto and Skans, Markus and Mildner, Arvid and Liu, Tony and Nugues, Pierre}},
  issn         = {{2662-995X}},
  keywords     = {{Centered Kernel Alignment; Layer Similarity; Object Detection}},
  language     = {{eng}},
  number       = {{3}},
  publisher    = {{Springer Nature}},
  series       = {{SN Computer Science}},
  title        = {{Object Detector Differences when Using Synthetic and Real Training Data}},
  url          = {{http://dx.doi.org/10.1007/s42979-023-01704-5}},
  doi          = {{10.1007/s42979-023-01704-5}},
  volume       = {{4}},
  year         = {{2023}},
}