Abstract
Edge Computing (EC) performs computation in close proximity to end devices and reduces dependency on the Cloud and the Internet. It also overcomes the Quality of Service (QoS) and latency issues that arise naturally from the best-effort behaviour of the Cloud. Although reduced dependency on the Internet enables EC to be applied to real-time applications, its benefits cannot be realised without a standard architecture. To this end, in our previous papers, we proposed a novel software reference architecture (SRA) for Edge Servers. The SRA allows developers to obtain the benefits of Edge Computing without worrying about setup or configuration, focusing only on software development. Regardless of the computing power of the Edge Servers, the architecture enables decentralised (real-time) task execution at the edge. Once a task request arrives, the receiving server determines whether to execute the task itself or offload it to one of the neighbouring servers by considering several parameters, such as latency and available resources. In this paper, we explain how servers decide where to execute a task without requiring a centralised load balancer. Moreover, we show how they share their resource information with each other to build common knowledge of their neighbouring servers.
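To make the decision step concrete, the following is a minimal sketch of the local-versus-offload choice described above. The data structures, field names, and selection rule (lowest-latency feasible neighbour) are illustrative assumptions for this sketch, not the actual interface of the proposed SRA.

```python
# Minimal sketch of the local-vs-offload decision outlined in the abstract.
# The neighbour record structure, thresholds, and selection rule are
# illustrative assumptions, not the architecture's actual API.
from dataclasses import dataclass


@dataclass
class ServerInfo:
    name: str
    available_cpu: float      # fraction of free CPU capacity (0.0-1.0)
    available_memory_mb: int  # free memory in MB
    latency_ms: float         # measured network latency to this server


@dataclass
class Task:
    required_cpu: float
    required_memory_mb: int
    deadline_ms: float        # real-time deadline for the task


def choose_executor(local: ServerInfo, neighbours: list[ServerInfo],
                    task: Task) -> ServerInfo:
    """Decide whether to run the task locally or offload it to a neighbour."""
    def can_run(server: ServerInfo) -> bool:
        return (server.available_cpu >= task.required_cpu
                and server.available_memory_mb >= task.required_memory_mb
                and server.latency_ms <= task.deadline_ms)

    # Prefer local execution when resources suffice (no offloading overhead).
    if can_run(local):
        return local

    # Otherwise pick the feasible neighbour with the lowest latency.
    candidates = [n for n in neighbours if can_run(n)]
    if candidates:
        return min(candidates, key=lambda n: n.latency_ms)

    # Fall back to local execution if no neighbour can take the task.
    return local


# Example usage with fabricated numbers, purely for illustration:
local = ServerInfo("edge-A", available_cpu=0.1, available_memory_mb=256, latency_ms=0.0)
neighbours = [
    ServerInfo("edge-B", available_cpu=0.6, available_memory_mb=2048, latency_ms=4.0),
    ServerInfo("edge-C", available_cpu=0.8, available_memory_mb=1024, latency_ms=9.0),
]
task = Task(required_cpu=0.5, required_memory_mb=512, deadline_ms=20.0)
print(choose_executor(local, neighbours, task).name)  # -> "edge-B"
```

In this sketch, the shared knowledge of neighbouring servers mentioned in the abstract is represented by the list of `ServerInfo` records; how that information is exchanged and kept up to date is covered in the paper itself.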
Index Terms—real-time computing, edge computing, task offloading, fog computing, decentralized load balancer.
Authors
- Volkan Gezer
- Achim Wagner