A novel deep reinforcement learning-based approach for task-offloading in vehicular networks

SMA Kazmi, S Otoum, R Hussain… - 2021 IEEE Global …, 2021 - ieeexplore.ieee.org
Next-generation vehicular networks will impose unprecedented computation demand due to the wide adoption of compute-intensive services with stringent latency requirements. The computational capacity of vehicular networks can be enhanced by integrating vehicular edge or fog computing; however, the growing popularity and massive adoption of novel services make edge resources insufficient. This challenge can be addressed by utilizing the onboard computation resources of neighboring vehicles that are not resource-constrained, alongside the edge computing resources. To fill this gap, in this paper we propose to solve the task-offloading problem by jointly considering the communication and computation resources in a mobile vehicular network. We formulate a non-linear problem to minimize energy consumption subject to network resource constraints. Furthermore, we consider a practical vehicular environment by taking into account the dynamics of mobile vehicular networks. The formulated problem is solved via a deep reinforcement learning (DRL) based approach. Finally, numerical evaluations are performed that demonstrate the effectiveness of our proposed scheme.
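The abstract only summarizes the approach, so the sketch below illustrates the general idea of a DRL-style agent choosing where to execute a task (locally, at an edge/fog server, or on a neighboring vehicle) so as to minimize energy. The environment `OffloadEnv`, its five-dimensional state (task size, required CPU cycles, channel gain, edge load, neighbor load), the energy coefficients, and the action set `ACTIONS` are illustrative assumptions, not the paper's actual system model or formulation.

```python
# Minimal DQN-style sketch (NOT the authors' implementation): an agent learns
# to pick an offloading target that minimizes an assumed energy cost.
import random
import numpy as np
import torch
import torch.nn as nn

ACTIONS = ["local", "edge", "neighbor_vehicle"]   # assumed offloading decisions

class OffloadEnv:
    """Toy vehicular offloading environment with assumed dynamics."""
    def reset(self):
        # state: [task size, CPU cycles needed, channel gain, edge load, neighbor load]
        self.state = np.random.rand(5).astype(np.float32)
        return self.state

    def step(self, action):
        size, cycles, gain, edge_load, nbr_load = self.state
        if action == 0:            # local execution: pure computation energy
            energy = 1.0 * cycles
        elif action == 1:          # edge server: transmission + queueing penalty
            energy = 0.4 * size / (gain + 0.1) + 0.3 * edge_load
        else:                      # neighbor vehicle: V2V transmission + its load
            energy = 0.3 * size / (gain + 0.1) + 0.3 * nbr_load
        reward = -float(energy)    # minimizing energy == maximizing reward
        return self.reset(), reward  # one task per step, then a fresh task arrives

# Small Q-network mapping the 5-D state to a Q-value per offloading action.
q_net = nn.Sequential(nn.Linear(5, 64), nn.ReLU(), nn.Linear(64, len(ACTIONS)))
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
env, gamma, epsilon = OffloadEnv(), 0.9, 0.1
state = env.reset()

for step in range(2000):
    s = torch.tensor(state).unsqueeze(0)
    if random.random() < epsilon:                  # epsilon-greedy exploration
        action = random.randrange(len(ACTIONS))
    else:
        action = int(q_net(s).argmax(dim=1))
    next_state, reward = env.step(action)
    with torch.no_grad():                          # bootstrapped TD target
        target = reward + gamma * q_net(torch.tensor(next_state).unsqueeze(0)).max()
    pred = q_net(s)[0, action]
    loss = nn.functional.mse_loss(pred, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    state = next_state
```

The reward is defined as negative energy so that the standard return-maximizing update effectively minimizes energy consumption; the paper's actual state space, resource constraints, and DRL algorithm may differ from this simplified sketch.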