A scalable deep reinforcement learning approach for traffic engineering based on link control

P. Sun, J. Lan, J. Li, J. Zhang, Y. Hu, Z. Guo
IEEE Communications Letters, 2020 - ieeexplore.ieee.org
As modern communication networks grow more complicated and dynamic, designing a good Traffic Engineering (TE) policy becomes difficult due to the complexity of solving the optimal traffic scheduling problem. Deep Reinforcement Learning (DRL) offers a way to design a model-free TE scheme through machine learning. However, existing DRL-based TE solutions cannot be applied to large networks. In this article, we propose to combine control theory and DRL to design a TE scheme. Our proposed scheme, ScaleDRL, employs ideas from pinning control theory to select a subset of links in the network, which we name critical links. Based on the traffic distribution information, we use a DRL algorithm to dynamically adjust the link weights of the critical links. Through a weighted shortest-path algorithm, the forwarding paths of the flows can then be dynamically adjusted. Packet-level simulation shows that ScaleDRL reduces the average end-to-end transmission delay by up to 39% compared to the state of the art across different network topologies.
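The mechanism the abstract describes can be illustrated with a minimal sketch: the DRL agent's action is a vector of new weights for the pinned critical-link subset only, and forwarding paths are then recomputed with a weighted shortest-path algorithm. The topology, the critical-link choice, and the `drl_action` values below are hypothetical placeholders, not the paper's actual setup or agent.

```python
import heapq


def shortest_path(graph, weights, src, dst):
    """Dijkstra over directed links with the current weight assignment."""
    dist = {src: 0.0}
    prev = {}
    pq = [(0.0, src)]
    visited = set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in visited:
            continue
        visited.add(u)
        if u == dst:
            break
        for v in graph.get(u, []):
            nd = d + weights[(u, v)]
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    # Reconstruct the forwarding path from the predecessor map.
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1]


# Toy 4-node topology; every link starts with weight 1.
graph = {0: [1, 2], 1: [3], 2: [3], 3: []}
weights = {(0, 1): 1.0, (0, 2): 1.0, (1, 3): 1.0, (2, 3): 1.0}

# Hypothetical pinned subset and DRL-produced weight action:
# the agent only touches critical links, which keeps the action
# space small as the network scales.
critical_links = [(0, 1)]
drl_action = {(0, 1): 5.0}  # e.g. agent penalizes a congested link

before = shortest_path(graph, weights, 0, 3)  # [0, 1, 3]
for link in critical_links:
    weights[link] = drl_action[link]
after = shortest_path(graph, weights, 0, 3)   # [0, 2, 3]
```

Raising the weight of the pinned link steers flow 0→3 away from it on the next path computation, which is the lever the DRL policy exercises at each decision step.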