
Lund University Publications


Achieving predictable and low end-to-end latency for a cloud-robotics network

Millnert, Victor; Eker, Johan and Bini, Enrico (2018) In Technical Reports TFRT-7655
Abstract
To remain competitive in the field of manufacturing today, companies must make their industrial robots smarter and allow them to collaborate with one another more effectively. This is typically done by adding some form of learning, or artificial intelligence (AI), to the robots. These learning algorithms are often packaged as cloud functions in a remote computational center, since the amount of computational power they require is infeasible to provide at the same physical location as the robots.

Augmenting the robots with such cloud functions has usually not been possible, since the robots require a very low and predictable end-to-end latency, something that is difficult to achieve when cloud functions are involved. Moreover, different sets of robots will have different end-to-end latency requirements, despite using the same network of cloud functions. With the introduction of 5G and network function virtualization (NFV), however, it becomes possible to control the amount of resources allocated to the different cloud functions, and thereby to control the end-to-end latency. By controlling this in a smart way, a very low and predictable end-to-end latency can be achieved.

In this work we address this challenge by deriving a rigorous mathematical framework that models a general network of cloud functions, on top of which several applications are hosted. Using this framework we propose a generalized AutoSAC (automatic service- and admission controller) that builds on previous work by the authors, in which the system could only handle a single set of cloud functions with a single application hosted on top of it. With the contributions of this paper it becomes possible to host multiple applications on top of a larger, general network of cloud functions, and to let each application have its own end-to-end deadline requirement.

The contributions of this paper can be summarized in the following four parts:

a) Input prediction: To achieve a good prediction of the incoming traffic, we propose a communication scheme between the cloud functions (sketched after this list). This allows for a quicker reaction to changes in the traffic rates and, in the end, better utilization of the resources allocated to the cloud functions.

b) Service control: With a small theorem we show that the control law derived in the previous work can be simplified. This can be especially useful when controlling cloud functions that make use of a large number of virtual machines or containers.

c) Admission control: To ensure that the end-to-end latency is low and predictable, we equip every cloud function with an intermediary node deadline. To enforce the node deadlines we propose a novel admission controller (sketched after this list) that achieves the highest possible throughput while guaranteeing that every admitted packet meets its node deadline. Furthermore, we show that the necessary computation can be done in constant time, which makes it possible to enforce a time-varying node deadline.

d) Selection of node deadlines: The problem of assigning intermediary node deadlines so that the global end-to-end deadlines are enforced is addressed by investigating how different node deadlines affect the performance of the network. These insights are then used to set up a convex optimization problem for the deadline assignment (see the final sketch after this list).
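
As a minimal illustration of contribution a), the sketch below assumes that every cloud function periodically announces its predicted output rate to its downstream neighbours, and that a downstream function predicts its own input rate as the sum of those announcements. The class name, message format, and update logic are assumptions made for illustration; the report derives the actual prediction scheme formally.

```python
from collections import defaultdict

class RatePredictor:
    """Predict a cloud function's input rate from upstream announcements.

    Each upstream function is assumed to publish its predicted output
    rate; this node's predicted input rate is simply the sum of the
    latest announcements (illustrative sketch only).
    """

    def __init__(self):
        self.upstream_rates = defaultdict(float)  # upstream id -> announced rate

    def on_announcement(self, upstream_id: str, rate: float) -> None:
        # Called whenever an upstream cloud function announces a new output rate.
        self.upstream_rates[upstream_id] = rate

    def predicted_input_rate(self) -> float:
        # Sum of the most recent upstream announcements.
        return sum(self.upstream_rates.values())
```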
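
Contribution c) can be pictured as a constant-time test performed on every arriving packet: admit it only if, given the current backlog and the service capacity, it can still be served within the node deadline. The fluid backlog model and all names below are assumptions for illustration, not the report's actual controller.

```python
class NodeAdmissionController:
    """Admit a packet only if it can still meet the node deadline.

    Uses a simple fluid model: a packet admitted now waits behind the
    current backlog and is then served at the node's total capacity.
    The check is O(1), so the node deadline may vary over time.
    (Illustrative sketch under assumed modelling choices.)
    """

    def __init__(self, service_rate_per_machine: float):
        self.mu = service_rate_per_machine  # packets/s served by one machine
        self.backlog = 0.0                  # packets currently queued at this node

    def admit(self, node_deadline: float, machines: int) -> bool:
        capacity = machines * self.mu
        if capacity <= 0.0:
            return False
        # Estimated delay of the new packet if it were admitted now.
        estimated_delay = (self.backlog + 1.0) / capacity
        if estimated_delay <= node_deadline:
            self.backlog += 1.0
            return True
        return False  # reject: admitting would risk a node-deadline miss

    def record_service(self, packets_served: float) -> None:
        # Called by the service loop to drain the backlog.
        self.backlog = max(0.0, self.backlog - packets_served)
```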
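
Contribution d) can likewise be pictured as a small convex program: choose one deadline per cloud function such that, along every application's path, the node deadlines sum to at most that application's end-to-end deadline. The cost function (penalizing tight node deadlines, which would require more machines), the toy topology, and the use of cvxpy are assumptions for illustration; the report's actual formulation is derived from its performance analysis.

```python
import cvxpy as cp

# Hypothetical toy instance: 3 cloud functions, 2 applications.
# paths[k] lists the cloud functions traversed by application k,
# e2e_deadline[k] is its end-to-end deadline in seconds.
paths = [[0, 1, 2], [1, 2]]
e2e_deadline = [0.050, 0.030]

d = cp.Variable(3, pos=True)  # one node deadline per cloud function

# Each application's path must fit within its end-to-end deadline.
constraints = [sum(d[i] for i in path) <= D
               for path, D in zip(paths, e2e_deadline)]

# Illustrative convex cost: tighter node deadlines are assumed to be
# more expensive (more machines), so penalize their inverses.
objective = cp.Minimize(cp.sum(cp.inv_pos(d)))

cp.Problem(objective, constraints).solve()
print("node deadlines:", d.value)
```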
author
Millnert, Victor; Eker, Johan and Bini, Enrico
organization
publishing date
type
Book/Report
publication status
published
subject
keywords
5g, cloud, cloud robotics, AI, network function virtualization, NFV
in
Technical Reports TFRT-7655
pages
18 pages
publisher
Department of Automatic Control, Lund Institute of Technology, Lund University
project
Feedback Computing in Cyber-Physical Systems
WASP: Autonomous Cloud
language
English
LU publication?
yes
id
fe4a3a04-ee6d-49e2-b634-7f9917340641
date added to LUP
2018-04-04 09:40:44
date last changed
2023-02-22 11:35:34
@techreport{fe4a3a04-ee6d-49e2-b634-7f9917340641,
  abstract     = {{To remain competitive in the field of manufacturing today, companies must make their industrial robots smarter and allow them to collaborate with one another more effectively. This is typically done by adding some form of learning, or artificial intelligence (AI), to the robots. These learning algorithms are often packaged as cloud functions in a remote computational center, since the amount of computational power they require is infeasible to provide at the same physical location as the robots.<br/><br/>  Augmenting the robots with such cloud functions has usually not been possible, since the robots require a very low and predictable end-to-end latency, something that is difficult to achieve when cloud functions are involved. Moreover, different sets of robots will have different end-to-end latency requirements, despite using the same network of cloud functions. With the introduction of 5G and network function virtualization (NFV), however, it becomes possible to control the amount of resources allocated to the different cloud functions, and thereby to control the end-to-end latency. By controlling this in a smart way, a very low and predictable end-to-end latency can be achieved.<br/><br/>  In this work we address this challenge by deriving a rigorous mathematical framework that models a general network of cloud functions, on top of which several applications are hosted. Using this framework we propose a generalized AutoSAC (automatic service- and admission controller) that builds on previous work by the authors, in which the system could only handle a single set of cloud functions with a single application hosted on top of it. With the contributions of this paper it becomes possible to host multiple applications on top of a larger, general network of cloud functions, and to let each application have its own end-to-end deadline requirement.<br/><br/>  The contributions of this paper can be summarized in the following four parts:<br/><br/>  a) Input prediction: To achieve a good prediction of the incoming traffic, we propose a communication scheme between the cloud functions. This allows for a quicker reaction to changes in the traffic rates and, in the end, better utilization of the resources allocated to the cloud functions.<br/><br/>  b) Service control: With a small theorem we show that the control law derived in the previous work can be simplified. This can be especially useful when controlling cloud functions that make use of a large number of virtual machines or containers.<br/><br/>  c) Admission control: To ensure that the end-to-end latency is low and predictable, we equip every cloud function with an intermediary node deadline. To enforce the node deadlines we propose a novel admission controller that achieves the highest possible throughput while guaranteeing that every admitted packet meets its node deadline. Furthermore, we show that the necessary computation can be done in constant time, which makes it possible to enforce a time-varying node deadline.<br/><br/>  d) Selection of node deadlines: The problem of assigning intermediary node deadlines so that the global end-to-end deadlines are enforced is addressed by investigating how different node deadlines affect the performance of the network. These insights are then used to set up a convex optimization problem for the deadline assignment.<br/>}},
  author       = {{Millnert, Victor and Eker, Johan and Bini, Enrico}},
  institution  = {{Department of Automatic Control, Lund Institute of Technology, Lund University}},
  keywords     = {{5g; cloud; cloud robotics; AI; network function virtualization; NFV}},
  language     = {{eng}},
  month        = {{04}},
  series       = {{Technical Reports TFRT-7655}},
  title        = {{Achieving predictable and low end-to-end latency for a cloud-robotics network}},
  url          = {{https://lup.lub.lu.se/search/files/40996597/TechnicalReport.pdf}},
  year         = {{2018}},
}