PEAS: A Performance Evaluation Framework for Auto-Scaling Strategies in Cloud Applications
(2016) In ACM Transactions on Modeling and Performance Evaluation of Computing Systems 1(4).
- Abstract
- Numerous auto-scaling strategies have been proposed in the past few years for improving various Quality of Service (QoS) indicators of cloud applications, for example, response time and throughput, by adapting the amount of resources assigned to the application to meet the workload demand. However, the evaluation of a proposed auto-scaler is usually achieved through experiments under specific conditions and seldom includes extensive testing to account for uncertainties in the workloads and unexpected behaviors of the system. Such tests by no means provide guarantees about the behavior of the system under general conditions. In this article, we present a Performance Evaluation framework for Auto-Scaling (PEAS) strategies in the presence of uncertainties. The evaluation is formulated as a chance constrained optimization problem, which is solved using scenario theory. The adoption of such a technique allows one to give probabilistic guarantees of the obtainable performance. Six different auto-scaling strategies have been selected from the literature for extensive test evaluation and compared using the proposed framework. We build a discrete event simulator and parameterize it based on real experiments. Using the simulator, each auto-scaler’s performance is evaluated using 796 distinct real workload traces from projects hosted on the Wikimedia Foundation’s servers, and their performance is compared using PEAS. The evaluation is carried out using different performance metrics, highlighting the flexibility of the framework, while providing probabilistic bounds on the evaluation and the performance of the algorithms. Our results highlight the problem of generalizing the conclusions of the original published studies and show that, depending on the evaluation criteria, a controller can be shown to be better than other controllers.
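The scenario-theory step described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the PEAS implementation: it uses the simplest scenario bound, which assumes a single scalar decision variable, so that the confidence parameter beta and the violation level eps are related by beta = (1 - eps)^N over N sampled scenarios (workload traces). All metric values below are synthetic stand-ins.

```python
import random


def scenario_performance_bound(metrics, beta=1e-3):
    """Scenario-theory bound for the simplest case (one scalar decision
    variable): take the worst performance metric observed over N
    independently sampled scenarios. Then, with confidence at least
    1 - beta, the probability that a fresh random scenario performs
    worse than that observed worst case is at most
    eps = 1 - beta ** (1 / N)."""
    n = len(metrics)
    worst = max(metrics)           # worst observed performance level
    eps = 1.0 - beta ** (1.0 / n)  # violation probability bound
    return worst, eps


# Illustrative usage with 796 synthetic per-trace metrics (hypothetical
# numbers standing in for, e.g., average response time per workload trace):
random.seed(0)
per_trace_metric = [random.uniform(0.1, 0.9) for _ in range(796)]
worst, eps = scenario_performance_bound(per_trace_metric)
```

With 796 traces and beta = 1e-3, eps works out to just under 0.01, i.e. the observed worst case holds for roughly 99% of future workloads drawn from the same distribution, with high confidence. This illustrates why evaluating over many traces, rather than a handful of hand-picked ones, permits probabilistic guarantees.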
Please use this url to cite or link to this publication:
https://lup.lub.lu.se/record/afa8eeca-b1cc-4f24-b2c2-5d97f5d8a856
- author
- Vittorio Papadopoulos, Alessandro ; Ali-Eldin, Ahmed ; Årzén, Karl-Erik ; Tordsson, Johan and Elmroth, Erik
- publishing date
- 2016-09
- type
- Contribution to journal
- publication status
- published
- in
- ACM Transactions on Modeling and Performance Evaluation of Computing Systems
- volume
- 1
- issue
- 4
- article number
- 15
- publisher
- Association for Computing Machinery (ACM)
- external identifiers
- scopus:85074674305
- ISSN
- 2376-3639
- DOI
- 10.1145/2930659
- language
- English
- LU publication?
- yes
- id
- afa8eeca-b1cc-4f24-b2c2-5d97f5d8a856
- date added to LUP
- 2016-04-29 14:16:54
- date last changed
- 2022-05-02 02:53:30
@article{afa8eeca-b1cc-4f24-b2c2-5d97f5d8a856,
  abstract  = {{Numerous auto-scaling strategies have been proposed in the past few years for improving various Quality of Service (QoS) indicators of cloud applications, for example, response time and throughput, by adapting the amount of resources assigned to the application to meet the workload demand. However, the evaluation of a proposed auto-scaler is usually achieved through experiments under specific conditions and seldom includes extensive testing to account for uncertainties in the workloads and unexpected behaviors of the system. These tests by no means can provide guarantees about the behavior of the system in general conditions. In this article, we present a Performance Evaluation framework for Auto-Scaling (PEAS) strategies in the presence of uncertainties. The evaluation is formulated as a chance constrained optimization problem, which is solved using scenario theory. The adoption of such a technique allows one to give probabilistic guarantees of the obtainable performance. Six different auto-scaling strategies have been selected from the literature for extensive test evaluation and compared using the proposed framework. We build a discrete event simulator and parameterize it based on real experiments. Using the simulator, each auto-scaler’s performance is evaluated using 796 distinct real workload traces from projects hosted on the Wikimedia foundations’ servers, and their performance is compared using PEAS. The evaluation is carried out using different performance metrics, highlighting the flexibility of the framework, while providing probabilistic bounds on the evaluation and the performance of the algorithms. Our results highlight the problem of generalizing the conclusions of the original published studies and show that based on the evaluation criteria, a controller can be shown to be better than other controllers.}},
  author    = {{Vittorio Papadopoulos, Alessandro and Ali-Eldin, Ahmed and Årzén, Karl-Erik and Tordsson, Johan and Elmroth, Erik}},
  issn      = {{2376-3639}},
  language  = {{eng}},
  number    = {{4}},
  publisher = {{Association for Computing Machinery (ACM)}},
  series    = {{ACM Transactions on Modeling and Performance Evaluation of Computing Systems}},
  title     = {{PEAS: A Performance Evaluation Framework for Auto-Scaling Strategies in Cloud Applications}},
  url       = {{http://dx.doi.org/10.1145/2930659}},
  doi       = {{10.1145/2930659}},
  volume    = {{1}},
  year      = {{2016}},
}