Articles added in the past year, sorted by date

Learning Precoding Policy with Inductive Biases: Graph Neural Networks or Meta-Learning?

B Zhao, Y Ma, J Wu, C Yang - GLOBECOM 2023 - 2023 IEEE Global Communications Conference, 2023 - ieeexplore.ieee.org
Added 222 days ago
Deep learning has been introduced to optimize wireless policies such as precoding for enabling real-time implementation. Yet prevalent studies assume that training and test samples are drawn from the same distribution, which is not true in dynamic wireless environments. As a result, a well-trained deep neural network (DNN) may require retraining to adapt to new environments, incurring the overhead of data collection. The required training samples for adaptation can be reduced by introducing inductive biases into DNNs, which can be learned automatically by meta-learning or embedded in DNNs by designing graph neural networks (GNNs). Almost all previous works on meta-learning overlooked the prior-known permutation equivariance (PE) properties, which widely exist in wireless policies and can be harnessed to reduce the hypothesis space of a DNN. In this paper, we strive to answer the following question: which way of introducing inductive biases is more effective in reducing samples for retraining, GNNs or meta-learning? We take the sum-rate maximization precoding problem as an example to answer the question. Simulation results show that the GNNs are more efficient than meta-learning, and meta-learning for precoding cannot adapt to new scenarios where the number of users differs from the training scenario.
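The permutation equivariance (PE) property mentioned in the abstract can be made concrete with a small sketch. The paper itself gives no code; the layer below is a minimal, hypothetical example of a permutation-equivariant linear layer (DeepSets/GNN style), showing why relabeling users before the layer is the same as relabeling its outputs afterward, the structural prior the authors say GNNs embed and meta-learning typically ignores.

```python
import numpy as np

rng = np.random.default_rng(0)

def pe_layer(X, A, B):
    # Permutation-equivariant linear layer (DeepSets / GNN-style):
    # each user's output mixes its own features (weight A) with the
    # mean over all users (weight B). Because the mean is symmetric
    # in the users, relabeling rows of X relabels the outputs the
    # same way: pe_layer(P @ X) == P @ pe_layer(X).
    mean = X.mean(axis=0, keepdims=True)
    return X @ A + mean @ B

n_users, d_in, d_out = 4, 3, 2          # illustrative sizes only
X = rng.standard_normal((n_users, d_in))  # per-user features (e.g. channel rows)
A = rng.standard_normal((d_in, d_out))
B = rng.standard_normal((d_in, d_out))

# Random permutation matrix acting on the user dimension.
P = np.eye(n_users)[rng.permutation(n_users)]

lhs = pe_layer(P @ X, A, B)   # permute users, then apply the layer
rhs = P @ pe_layer(X, A, B)   # apply the layer, then permute outputs
assert np.allclose(lhs, rhs)  # equivariance holds exactly
```

Note that the weights `A` and `B` do not depend on `n_users`, which is why such a layer can in principle be applied unchanged when the number of users differs from the training scenario, the exact generalization axis on which the abstract reports meta-learned DNNs fail.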