Multiobjective load balancing for multiband downlink cellular networks: A meta-reinforcement learning approach

A Feriani, D Wu, YT Xu, J Li, S Jang… - IEEE Journal on …, 2022 - ieeexplore.ieee.org
Load balancing has become a key technique to handle the increasing traffic demand and
improve the user experience. It evenly distributes the traffic across network resources by …

Continual deep reinforcement learning with task-agnostic policy distillation

MB Hafez, K Erekmen - Scientific Reports, 2024 - nature.com
Central to the development of universal learning systems is the ability to solve multiple tasks
without retraining from scratch when new data arrives. This is crucial because each task …

Efficient Open-world Reinforcement Learning via Knowledge Distillation and Autonomous Rule Discovery

E Nikonova, C Xue, J Renz - arXiv preprint arXiv:2311.14270, 2023 - arxiv.org
Deep reinforcement learning suffers from catastrophic forgetting and sample inefficiency,
making it less applicable to the ever-changing real world. However, the ability to use …

Ensemble policy distillation with reduced data distribution mismatch

Y Sun, Q Zhang - … Joint Conference on Neural Networks (IJCNN …, 2022 - ieeexplore.ieee.org
Policy distillation is a method of model compression for deep reinforcement learning,
typically applied on mobile devices to reduce power consumption and inference time …

Model Compression for Deep Reinforcement Learning Through Mutual Information

J García-Ramírez, EF Morales, HJ Escalante - Ibero-American Conference …, 2022 - Springer
One of the most important limitations of deep learning and deep reinforcement learning is the
number of parameters in their models (dozens to hundreds of millions). Different model …

Online-S2T: A Lightweight Distributed Online Reinforcement Learning Training Framework For Resource-Constrained Devices

F Zhou, X Qiu, Z Cai, W Chen… - 2023 Asia Conference …, 2023 - ieeexplore.ieee.org
In deep reinforcement learning (DRL), the devices that interact with real-time dynamic
environments are often resource-constrained, yet many DRL models are computationally …

[PDF][PDF] Ensemble policy distillation in deep reinforcement learning

Y Sun, P Fazli - Workshop on Reinforcement Learning in Games, 2020 - pooyanfazli.com
Policy distillation in deep reinforcement learning transfers the knowledge learned by a large
teacher model to a compact student model, which reduces the inference time and power …
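
The training loop this line of distillation work shares is straightforward: regress a small student policy onto a large teacher's action distribution. Below is a minimal PyTorch sketch of that idea; the network sizes, the 8-dimensional states, the 4 discrete actions, and the use of random stand-in states are illustrative assumptions, not details taken from any of the papers above.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Illustrative sizes (assumptions): 8-dim states, 4 discrete actions.
    teacher = nn.Sequential(nn.Linear(8, 256), nn.ReLU(), nn.Linear(256, 4))  # large teacher policy
    student = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 4))    # compact student policy
    optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

    for step in range(1000):
        # Random stand-in for states that would be collected from the teacher's trajectories.
        states = torch.randn(64, 8)
        with torch.no_grad():
            teacher_probs = F.softmax(teacher(states), dim=-1)
        # Match the student's action distribution to the teacher's via KL divergence,
        # the usual policy-distillation loss.
        student_log_probs = F.log_softmax(student(states), dim=-1)
        loss = F.kl_div(student_log_probs, teacher_probs, reduction="batchmean")
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

After training, only the compact student is deployed, which is what yields the inference-time and power savings these papers target.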

[CITATION][C] Distilled PLASTIC-Policy: Distillation in Ad Hoc Teamwork

IL Vieira - 2022