Due to the extreme imbalance of training data between seen and unseen classes, most existing methods fail to achieve satisfactory results on the challenging task of Zero-Shot Learning (ZSL). To avoid the need for labelled data of unseen classes, in this paper we investigate how to synthesize visual features for the ZSL problem. The key challenge is capturing the realistic feature distribution of unseen classes without any training samples. To this end, we propose a hybrid model consisting of Random Attribute Selection (RAS) and a conditional Generative Adversarial Network (cGAN). RAS aims to learn realistic attribute generation by exploiting the natural correlations among attributes. To improve discrimination across a large number of classes, we add a reconstruction loss to the generative network, which alleviates the domain shift problem and significantly improves classification accuracy. Extensive experiments on four benchmarks demonstrate that our method outperforms the state-of-the-art methods. Qualitative results show that, compared with conventional generative models, our method captures a more realistic distribution and remarkably improves the variability of the synthesized data.
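To make the two components concrete, the sketch below illustrates one plausible reading of the pipeline in PyTorch: a generator conditioned on a randomly selected subset of class attributes (our reading of RAS), and a reconstruction branch that maps synthesized features back to those attributes. All layer sizes, module names (`Generator`, `Reconstructor`, `random_attribute_selection`), and the `keep_prob` parameter are illustrative assumptions, not the paper's exact architecture.

```python
# A minimal sketch, assuming a standard cGAN feature-synthesis setup:
# (1) condition the generator on a randomly masked subset of attributes, and
# (2) add a reconstruction loss that ties generated features to the
#     conditioning attributes. Dimensions below are hypothetical.
import torch
import torch.nn as nn

ATTR_DIM, NOISE_DIM, FEAT_DIM = 85, 64, 2048  # illustrative sizes

class Generator(nn.Module):
    """Synthesizes a visual feature from (attributes, noise)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(ATTR_DIM + NOISE_DIM, 1024), nn.LeakyReLU(0.2),
            nn.Linear(1024, FEAT_DIM), nn.ReLU())

    def forward(self, attrs, noise):
        return self.net(torch.cat([attrs, noise], dim=1))

class Reconstructor(nn.Module):
    """Maps a synthesized feature back to its attribute vector."""
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(FEAT_DIM, ATTR_DIM)

    def forward(self, feats):
        return self.net(feats)

def random_attribute_selection(attrs, keep_prob=0.7):
    # Our reading of RAS: per sample, randomly keep a subset of the
    # attribute entries and zero out the rest.
    mask = (torch.rand_like(attrs) < keep_prob).float()
    return attrs * mask

# One hypothetical generator update (discriminator omitted for brevity):
G, R = Generator(), Reconstructor()
attrs = torch.rand(32, ATTR_DIM)            # class attribute vectors
sel = random_attribute_selection(attrs)     # RAS-conditioned input
fake = G(sel, torch.randn(32, NOISE_DIM))   # synthesized visual features
recon_loss = nn.functional.mse_loss(R(fake), sel)
# the full generator objective would be: adversarial_loss + lambda * recon_loss
```

Under this reading, the reconstruction term penalizes generated features that do not preserve their conditioning attributes, which is one way such a loss could counteract domain shift between seen and unseen classes.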