Authors
Shreshth Tuli, Shashikant Ilager, Kotagiri Ramamohanarao, Rajkumar Buyya
Publication date
2020/8/17
Journal
IEEE Transactions on Mobile Computing
Publisher
IEEE
Description
The ubiquitous adoption of Internet-of-Things (IoT) based applications has resulted in the emergence of the Fog computing paradigm, which allows seamlessly harnessing both mobile-edge and cloud resources. Efficient scheduling of application tasks in such environments is challenging due to constrained resource capabilities, mobility factors in IoT, resource heterogeneity, network hierarchy, and stochastic behaviors. Existing heuristics and Reinforcement Learning based approaches lack generalizability and quick adaptability, thus failing to tackle this problem optimally. They are also unable to utilize temporal workload patterns and are suitable only for centralized setups. However, asynchronous-advantage-actor-critic (A3C) learning is known to adapt quickly to dynamic scenarios with less data, and residual recurrent neural networks (R2N2) are known to quickly update model parameters. Thus, we propose an A3C …
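The sketch below is not the paper's implementation; it only illustrates, under assumed layer sizes and a toy state encoding, how an actor-critic scheduler with a residual recurrent core (in the spirit of A3C + R2N2) can map a sequence of workload/resource states to per-interval task-placement decisions. The class name, the use of a GRU, and the random rollout in the usage example are all assumptions for illustration.

```python
# Minimal sketch (assumed architecture, not the authors' code): actor-critic
# network with a residual skip over a recurrent block, trained with a simple
# advantage actor-critic loss on a toy rollout.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualRecurrentActorCritic(nn.Module):
    def __init__(self, state_dim: int, num_hosts: int, hidden: int = 64):
        super().__init__()
        self.encoder = nn.Linear(state_dim, hidden)       # embed scheduler state
        self.gru = nn.GRU(hidden, hidden, batch_first=True)
        self.policy_head = nn.Linear(hidden, num_hosts)   # actor: placement logits
        self.value_head = nn.Linear(hidden, 1)            # critic: value estimate

    def forward(self, states, h0=None):
        # states: (batch, time, state_dim) sequence of workload/resource snapshots
        x = F.relu(self.encoder(states))
        out, hn = self.gru(x, h0)
        out = out + x                                     # residual connection over the recurrent block
        logits = self.policy_head(out)                    # per-interval placement distribution
        value = self.value_head(out).squeeze(-1)          # per-interval state value
        return logits, value, hn

if __name__ == "__main__":
    # Toy usage: one gradient step on a random 8-interval rollout.
    model = ResidualRecurrentActorCritic(state_dim=16, num_hosts=4)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    states = torch.randn(1, 8, 16)                        # placeholder state sequence
    returns = torch.randn(1, 8)                           # placeholder discounted returns
    logits, values, _ = model(states)
    dist = torch.distributions.Categorical(logits=logits)
    actions = dist.sample()
    advantage = returns - values.detach()
    loss = -(dist.log_prob(actions) * advantage).mean() + F.mse_loss(values, returns)
    opt.zero_grad(); loss.backward(); opt.step()
```

In an asynchronous setup, several such workers would compute gradients on local rollouts and push them to a shared set of parameters; the snippet shows only a single synchronous update.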
Total citations
[Per-year citation chart, 2020–2024; individual counts not recoverable]