Distributed edge caching via reinforcement learning in fog radio access networks

L Lu, Y Jiang, M Bennis, Z Ding, FC Zheng, X You
2019 IEEE 89th Vehicular Technology Conference (VTC2019-Spring), 2019 - ieeexplore.ieee.org
In this paper, the distributed edge caching problem in fog radio access networks (F-RANs) is investigated. To account for the unknown spatio-temporal content popularity and user preference, a user request model based on a hidden Markov process is proposed to characterize the fluctuating spatio-temporal traffic demands in F-RANs. Then, a Q-learning method based on the reinforcement learning (RL) framework is put forth to seek the optimal caching policy in a distributed manner, which enables fog access points (F-APs) to learn and track the underlying dynamic process without extra communication cost. Furthermore, we propose a more efficient Q-learning method with value function approximation (Q-VFA-learning) to reduce complexity and accelerate convergence. Simulation results show that the performance of the proposed method is superior to that of traditional methods.
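
The abstract does not describe implementation details. As an illustration only, the sketch below shows Q-learning with linear value function approximation in the spirit of the Q-VFA-learning idea named above, applied to a single F-AP's caching decision. The content library size, cache capacity, feature construction, and hit/miss reward are all assumptions made for the example, not taken from the paper.

```python
import numpy as np

# Illustrative sketch (not the paper's implementation): Q-learning with linear
# value function approximation for one fog access point (F-AP) deciding which
# content to cache. State features and reward shaping are assumed.

N_CONTENTS = 50            # assumed content library size
CACHE_SIZE = 5             # assumed cache capacity of the F-AP
FEATURE_DIM = N_CONTENTS   # normalized request-count features (assumption)

alpha, gamma, epsilon = 0.05, 0.9, 0.1  # learning rate, discount, exploration

# Linear approximation Q(s, a) = w[a] . phi(s), one weight vector per action
w = np.zeros((N_CONTENTS, FEATURE_DIM))

def features(request_counts):
    """Normalized local request counts serve as the state features."""
    total = request_counts.sum()
    return request_counts / total if total > 0 else request_counts

def choose_action(phi):
    """Epsilon-greedy choice of which content to (re)cache."""
    if np.random.rand() < epsilon:
        return np.random.randint(N_CONTENTS)
    return int(np.argmax(w @ phi))

def update(phi, action, reward, phi_next):
    """One temporal-difference update on the linear weights."""
    td_target = reward + gamma * np.max(w @ phi_next)
    td_error = td_target - w[action] @ phi
    w[action] += alpha * td_error * phi

# Toy interaction loop: requests drawn from a hidden popularity distribution
# stand in for the hidden-Markov request model described in the abstract.
popularity = np.random.dirichlet(np.ones(N_CONTENTS))
cache = set(np.random.choice(N_CONTENTS, CACHE_SIZE, replace=False))
request_counts = np.zeros(N_CONTENTS)

for t in range(10000):
    req = np.random.choice(N_CONTENTS, p=popularity)
    request_counts[req] += 1
    phi = features(request_counts)
    action = choose_action(phi)             # content the F-AP decides to cache
    reward = 1.0 if req in cache else -1.0  # assumed cache hit/miss reward
    if action not in cache:                 # evict the least valuable item
        evict = min(cache, key=lambda c: w[c] @ phi)
        cache.remove(evict)
        cache.add(action)
    update(phi, action, reward, features(request_counts))
```

Because each F-AP updates its weights only from its own observed requests, a sketch like this runs independently per F-AP, which is consistent with the distributed, no-extra-communication setting the abstract describes.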