
Lund University Publications


A factorial experimental evaluation of automated test input generation – Java platform testing in embedded devices

Runeson, Per ; Heed, Per and Westrup, Alexander (2011) PROFES 6759. p. 217-231
Abstract
Background. When delivering an embedded product, such as a mobile phone, third-party products, like games, are often bundled with it in the form of Java MIDlets. Verifying the compatibility between the runtime platform and the MIDlet is a labour-intensive task if input data must be generated manually for thousands of MIDlets. Aim. In order to make the verification more efficient, we investigate four automated input generation methods which do not require extensive modeling: random and feedback-based, each with and without a constant startup sequence. Method. We evaluate the methods in a factorial design experiment with manual input generation as a reference. One original experiment is run, together with a partial replication. Result. The results show that the startup sequence gives good code coverage values for the selected MIDlets. The feedback-based method gives somewhat better code coverage than the random method, but requires real-time code coverage measurements, which decreases the run speed of the tests. Conclusion. The random method with a startup sequence is the best trade-off in the current setting.
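The "random input with a constant startup sequence" idea described in the abstract can be pictured with a minimal sketch. Everything below (the KeySink interface, the key codes, the particular startup sequence) is a hypothetical illustration, not code from the paper: the harness first replays a fixed key sequence to get the MIDlet past its splash screens and menus, then injects random key events.

import java.util.List;
import java.util.Random;

// Hypothetical sketch of random input generation with a constant startup sequence.
// The KeySink abstraction and the key codes stand in for whatever harness actually
// drives the device; they are assumptions made for illustration only.
public class RandomInputGenerator {

    /** Abstraction over the test harness that delivers key events to the MIDlet under test. */
    public interface KeySink {
        void press(int keyCode);
    }

    private static final int[] KEY_CODES = {
        // Illustrative key codes (navigation keys, select/fire, soft keys).
        1, 2, 3, 4, 5, 6, 7
    };

    private final Random random;

    public RandomInputGenerator(long seed) {
        this.random = new Random(seed); // fixed seed keeps runs repeatable
    }

    /** Replays the constant startup sequence, then sends random key presses. */
    public void run(KeySink sink, List<Integer> startupSequence, int randomEvents) {
        for (int key : startupSequence) {
            sink.press(key);
        }
        for (int i = 0; i < randomEvents; i++) {
            sink.press(KEY_CODES[random.nextInt(KEY_CODES.length)]);
        }
    }

    public static void main(String[] args) {
        // Toy sink that logs events instead of driving a real device.
        KeySink logSink = keyCode -> System.out.println("key " + keyCode);
        new RandomInputGenerator(42L)
            .run(logSink, List.of(5, 5, 1), 20); // hypothetical startup: select, select, up
    }
}

A feedback-based variant would, as the abstract notes, additionally read real-time code coverage after each event and bias the next key choice accordingly, at the cost of slower test runs.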
author
Runeson, Per ; Heed, Per and Westrup, Alexander
organization
publishing date
2011
type
Chapter in Book/Report/Conference proceeding
publication status
published
subject
host publication
Product-Focused Software Process Improvement/Lecture Notes in Computer Science
editor
Caivano, Danilo ; Oivo, Markku ; Baldassarre, Maria Teresa and Visaggio, Giuseppe
volume
6759
pages
217 - 231
publisher
Springer
conference name
PROFES
conference location
Torre Canne, Italy
conference dates
2011-06-20 - 2011-06-22
external identifiers
  • scopus:79960265199
ISBN
978-3-642-21843-9
DOI
10.1007/978-3-642-21843-9_18
project
Embedded Applications Software Engineering
language
English
LU publication?
yes
id
104daad3-66a3-46be-820d-5c917b08472d (old id 2174112)
date added to LUP
2016-04-04 11:29:26
date last changed
2022-01-29 21:58:38
@inproceedings{104daad3-66a3-46be-820d-5c917b08472d,
  abstract     = {{Background. When delivering an embedded product, such as a mobile phone, third-party products, like games, are often bundled with it in the form of Java MIDlets. Verifying the compatibility between the runtime platform and the MIDlet is a labour-intensive task if input data must be generated manually for thousands of MIDlets. Aim. In order to make the verification more efficient, we investigate four automated input generation methods which do not require extensive modeling: random and feedback-based, each with and without a constant startup sequence. Method. We evaluate the methods in a factorial design experiment with manual input generation as a reference. One original experiment is run, together with a partial replication. Result. The results show that the startup sequence gives good code coverage values for the selected MIDlets. The feedback-based method gives somewhat better code coverage than the random method, but requires real-time code coverage measurements, which decreases the run speed of the tests. Conclusion. The random method with a startup sequence is the best trade-off in the current setting.}},
  author       = {{Runeson, Per and Heed, Per and Westrup, Alexander}},
  booktitle    = {{Product-Focused Software Process Improvement/Lecture Notes in Computer Science}},
  editor       = {{Caivano, Danilo and Oivo, Markku and Baldassarre, Maria Teresa and Visaggio, Giuseppe}},
  isbn         = {{978-3-642-21843-9}},
  language     = {{eng}},
  pages        = {{217--231}},
  publisher    = {{Springer}},
  title        = {{A factorial experimental evaluation of automated test input generation – Java platform testing in embedded devices}},
  url          = {{http://dx.doi.org/10.1007/978-3-642-21843-9_18}},
  doi          = {{10.1007/978-3-642-21843-9_18}},
  volume       = {{6759}},
  year         = {{2011}},
}