
A factorial experimental evaluation of automated test input generation – Java platform testing in embedded devices

Runeson, Per; Heed, Per and Westrup, Alexander (2011) PROFES. In Product-Focused Software Process Improvement / Lecture Notes in Computer Science 6759. p. 217-231
Abstract
Background. When delivering an embedded product, such as a mobile phone, third party products, like games, are often bundled with it in the form of Java MIDlets. Verifying the compatibility between the runtime platform and the MIDlet is a labour-intensive task if input data must be manually generated for thousands of MIDlets. Aim. In order to make the verification more efficient, we investigate four different automated input generation methods which do not require extensive modeling: random and feedback-based, each with and without a constant startup sequence. Method. We evaluate the methods in a factorial design experiment with manual input generation as a reference. One original experiment is run, followed by a partial replication. Result. The results show that the startup sequence gives good code coverage values for the selected MIDlets. The feedback method gives somewhat better code coverage than the random method, but requires real-time code coverage measurements, which decreases the run speed of the tests. Conclusion. The random method with startup sequence is the best trade-off in the current setting.
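The "random with startup sequence" method described above can be sketched as follows. This is a minimal illustration, not the authors' actual tool: the class name, key labels, and event-list representation are assumptions; a fixed key sequence first drives the MIDlet past splash screens and menus, after which randomly chosen key events are appended.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Hypothetical sketch of random input generation with a constant startup
// sequence. Key names are illustrative placeholders, not MIDP key constants.
public class RandomInputGenerator {

    static final String[] KEYS = {"UP", "DOWN", "LEFT", "RIGHT", "FIRE"};

    // Returns the full input sequence: the fixed startup events followed by
    // randomCount randomly drawn key events (seeded for reproducibility).
    static List<String> generate(List<String> startup, int randomCount, long seed) {
        List<String> sequence = new ArrayList<>(startup);
        Random rng = new Random(seed);
        for (int i = 0; i < randomCount; i++) {
            sequence.add(KEYS[rng.nextInt(KEYS.length)]);
        }
        return sequence;
    }

    public static void main(String[] args) {
        // e.g. two FIRE presses to skip a splash screen and confirm a menu
        List<String> startup = List.of("FIRE", "FIRE");
        List<String> inputs = generate(startup, 10, 42L);
        System.out.println(inputs.size());          // 12 events in total
        System.out.println(inputs.subList(0, 2));   // the startup prefix
    }
}
```

The feedback-based variant would differ only in the loop body: instead of drawing keys uniformly, it would consult live code-coverage measurements to bias the next key choice, which is what the abstract identifies as the source of its run-speed penalty.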
author: Runeson, Per; Heed, Per; Westrup, Alexander
organization:
publishing date: 2011
type: Chapter in Book/Report/Conference proceeding
publication status: published
subject:
in: Product-Focused Software Process Improvement / Lecture Notes in Computer Science
editor: Caivano, Danilo; Oivo, Markku; Baldassarre, Maria Teresa; Visaggio, Giuseppe
volume: 6759
pages: 217-231
publisher: Springer
conference name: PROFES
external identifiers: Scopus:79960265199
ISBN: 978-3-642-21843-9
DOI: 10.1007/978-3-642-21843-9_18
project: EASE
language: English
LU publication?: yes
id: 104daad3-66a3-46be-820d-5c917b08472d (old id 2174112)
date added to LUP: 2011-10-17 11:30:21
date last changed: 2016-10-13 04:46:01
@misc{104daad3-66a3-46be-820d-5c917b08472d,
  abstract     = {Background. When delivering an embedded product, such as a mobile phone, third party products, like games, are often bundled with it in the form of Java MIDlets. Verifying the compatibility between the runtime platform and the MIDlet is a labour-intensive task if input data must be manually generated for thousands of MIDlets. Aim. In order to make the verification more efficient, we investigate four different automated input generation methods which do not require extensive modeling: random and feedback-based, each with and without a constant startup sequence. Method. We evaluate the methods in a factorial design experiment with manual input generation as a reference. One original experiment is run, followed by a partial replication. Result. The results show that the startup sequence gives good code coverage values for the selected MIDlets. The feedback method gives somewhat better code coverage than the random method, but requires real-time code coverage measurements, which decreases the run speed of the tests. Conclusion. The random method with startup sequence is the best trade-off in the current setting.},
  author       = {Runeson, Per and Heed, Per and Westrup, Alexander},
  editor       = {Caivano, Danilo and Oivo, Markku and Baldassarre, Maria Teresa and Visaggio, Giuseppe},
  isbn         = {978-3-642-21843-9},
  language     = {eng},
  pages        = {217--231},
  publisher    = {Springer},
  series       = {Product-Focused Software Process Improvement/Lecture Notes in Computer Science},
  title        = {A factorial experimental evaluation of automated test input generation – Java platform testing in embedded devices},
  url          = {http://dx.doi.org/10.1007/978-3-642-21843-9_18},
  volume       = {6759},
  year         = {2011},
}