JBrainy: Micro-benchmarking Java Collections with Interference (Work in Progress Paper)
(2020) 11th ACM/SPEC International Conference on Performance Engineering, pp. 42-45
- Abstract
- Software developers use collection data structures extensively and are often faced with the task of picking which collection to use. Choosing an inappropriate collection can have a major negative impact on runtime performance. However, choosing the right collection can be difficult, since developers are faced with many possibilities, which often appear functionally equivalent. One approach to assist developers in this decision-making process is to micro-benchmark data structures in order to provide performance insights. In this paper, we present results from experiments on Java collections (maps, lists, and sets) using our tool JBrainy, which synthesises micro-benchmarks with sequences of random method calls. We compare our results to the results of a previous experiment on Java collections that uses a micro-benchmarking approach focused on single methods. Our results support previous results for lists, in that we found ArrayList to yield the best running time in 90% of our benchmarks. For sets, we found LinkedHashSet to yield the best performance in 78% of the benchmarks. In contrast to previous results, we found TreeMap and LinkedHashMap to yield better runtime performance than HashMap in 84% of cases.
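The abstract describes micro-benchmarks built from sequences of random method calls applied uniformly to competing collection implementations. The sketch below is a hypothetical illustration of that idea, not JBrainy's actual implementation: it generates one fixed random sequence of put/get/remove calls and replays it against several `Map` implementations so their timings can be compared on identical workloads.

```java
import java.util.*;
import java.util.function.Supplier;

// Hypothetical sketch of random-method-call micro-benchmarking
// (illustrative only; class and method names are assumptions).
public class RandomCallBenchmark {

    // Generate one seeded random call sequence so every map under test
    // receives exactly the same interleaving of operations.
    static int[][] randomCalls(long seed, int n) {
        Random rng = new Random(seed);
        int[][] calls = new int[n][2];
        for (int i = 0; i < n; i++) {
            calls[i][0] = rng.nextInt(3);    // 0 = put, 1 = get, 2 = remove
            calls[i][1] = rng.nextInt(1000); // key drawn from a bounded range
        }
        return calls;
    }

    // Replay the call sequence on a freshly built map and report elapsed time.
    static long timeMap(Supplier<Map<Integer, Integer>> factory, int[][] calls) {
        Map<Integer, Integer> map = factory.get();
        long start = System.nanoTime();
        for (int[] c : calls) {
            if (c[0] == 0) {
                map.put(c[1], c[1]);
            } else if (c[0] == 1) {
                map.get(c[1]);
            } else {
                map.remove(c[1]);
            }
        }
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        int[][] calls = randomCalls(42L, 100_000);
        System.out.printf("HashMap:       %d ns%n", timeMap(HashMap::new, calls));
        System.out.printf("LinkedHashMap: %d ns%n", timeMap(LinkedHashMap::new, calls));
        System.out.printf("TreeMap:       %d ns%n", timeMap(TreeMap::new, calls));
    }
}
```

Note that a rigorous harness would add warm-up iterations and statistical repetition (as JMH does); this sketch only conveys the core idea of interleaved random calls shared across implementations.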
Please use this url to cite or link to this publication:
https://lup.lub.lu.se/record/669d7402-1532-473c-8d10-ef44b9953e3c
- author
- Couderc, Noric; Söderberg, Emma and Reichenbach, Christoph
- organization
- publishing date
- 2020-02
- type
- Chapter in Book/Report/Conference proceeding
- publication status
- published
- subject
- host publication
- Proceedings of the 11th ACM/SPEC international conference on Performance Engineering
- pages
- 42 - 45
- publisher
- Association for Computing Machinery (ACM)
- conference name
- 11th ACM/SPEC International Conference on Performance Engineering
- conference location
- Edmonton, Canada
- conference dates
- 2020-04-20 - 2020-04-24
- external identifiers
- scopus:85086033137
- ISBN
- 978-1-4503-7109-4
- DOI
- 10.1145/3375555.3383760
- project
- Smart Modules
- WASP startup package Christoph Reichenbach
- language
- English
- LU publication?
- yes
- id
- 669d7402-1532-473c-8d10-ef44b9953e3c
- date added to LUP
- 2020-03-05 16:47:21
- date last changed
- 2023-04-10 10:33:55
@inproceedings{669d7402-1532-473c-8d10-ef44b9953e3c,
  abstract  = {{Software developers use collection data structures extensively and are often faced with the task of picking which collection to use. Choosing an inappropriate collection can have major negative impact on runtime performance. However, choosing the right collection can be difficult since developers are faced with many possibilities, which often appear functionally equivalent. One approach to assist developers in this decision-making process is to microbenchmark datastructures in order to provide performance insights. In this paper, we present results from experiments on Java collections (maps, lists, and sets) using our tool JBrainy, which synthesises micro-benchmarks with sequences of random method calls. We compare our results to the results of a previous experiment on Java collections that uses a micro-benchmarking approach focused on single methods. Our results support previous results for lists, in that we found ArrayList to yield the best running time in 90% of our benchmarks. For sets, we found LinkedHashSet to yield the best performance in 78% of the benchmarks. In contrast to previous results, we found TreeMap and LinkedHashMap to yield better runtime performance than HashMap in 84% of cases.}},
  author    = {{Couderc, Noric and Söderberg, Emma and Reichenbach, Christoph}},
  booktitle = {{Proceedings of the 11th ACM/SPEC international conference on Performance Engineering}},
  isbn      = {{978-1-4503-7109-4}},
  language  = {{eng}},
  pages     = {{42--45}},
  publisher = {{Association for Computing Machinery (ACM)}},
  title     = {{JBrainy: Micro-benchmarking Java Collections with Interference (Work in Progress Paper)}},
  url       = {{https://lup.lub.lu.se/search/files/76903722/jbrainy_icpe.pdf}},
  doi       = {{10.1145/3375555.3383760}},
  year      = {{2020}},
}