Recently, Deep Reinforcement Learning (DRL) has achieved outstanding success on complex robot control tasks. However, existing DRL methods still suffer from shortcomings such as poor generalization, which makes policy performance sensitive to small variations in task settings. Moreover, retraining a new policy from scratch for each new task is time-consuming and computationally expensive, which restricts the application of DRL-based methods in the real world. In this work, we propose a novel DRL generalization method called GNN-embedding, which jointly models the robot hardware and the environment through a GNN-based policy network and learnable task embedding vectors. It can therefore learn a single unified policy for different robots under different environment conditions, improving the generalization performance of existing DRL robot policies. Multiple experiments on the Hopper-v2 robot demonstrate the effectiveness and efficiency of GNN-embedding on generalization, including multi-task learning and transfer learning problems.
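The core idea of combining a shared GNN policy with per-task embeddings can be illustrated by a minimal sketch. This is not the paper's actual architecture: the graph structure, dimensions, weight initialization, and the task names (`hopper_default`, `hopper_heavy_torso`) are all hypothetical, and real weights and embeddings would be learned by RL rather than randomly initialized.

```python
import numpy as np

# Hypothetical sketch: each graph node is one robot joint, and a learnable
# per-task embedding vector is concatenated to every node's observation so
# that a single shared network can serve many robot/environment variants.

rng = np.random.default_rng(0)

OBS_DIM, EMB_DIM, HID_DIM = 4, 3, 8  # assumed toy sizes

# Shared weights (random placeholders; in practice learned via DRL).
W_in = rng.normal(size=(OBS_DIM + EMB_DIM, HID_DIM))
W_msg = rng.normal(size=(HID_DIM, HID_DIM))
W_out = rng.normal(size=(HID_DIM, 1))

# One learnable embedding vector per task (hypothetical task names).
task_embeddings = {
    "hopper_default": rng.normal(size=EMB_DIM),
    "hopper_heavy_torso": rng.normal(size=EMB_DIM),
}

def gnn_policy(node_obs, adjacency, task_id):
    """Return one action per joint (graph node) for the given task."""
    emb = task_embeddings[task_id]
    # Concatenate the task embedding onto every node observation.
    x = np.concatenate([node_obs, np.tile(emb, (node_obs.shape[0], 1))], axis=1)
    h = np.tanh(x @ W_in)
    # One round of mean-aggregation message passing over the robot graph.
    deg = adjacency.sum(axis=1, keepdims=True).clip(min=1)
    h = np.tanh((adjacency @ h) / deg @ W_msg + h)
    return np.tanh(h @ W_out).ravel()  # one bounded torque per joint

# Hopper-like chain of 3 joints connected in a line.
adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
obs = rng.normal(size=(3, OBS_DIM))
actions = gnn_policy(obs, adj, "hopper_default")
```

Swapping the `task_id` changes only the embedding, not the shared network, so the same policy produces task-specific behavior; transferring to a new task would then amount to learning one new embedding vector rather than retraining the whole policy.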