Deep learning has attracted considerable attention and has been applied successfully in many domains, such as image recognition and medical diagnosis. Training such models generally requires large, representative datasets, which may be collected from a large number of users and contain sensitive information (e.g., users' photos and medical records). The collected data are stored and processed by service providers (SPs) or delegated to an untrusted cloud. Users can neither control how their data will be used nor know what will be learned from them, which makes the privacy concerns prominent and severe. One of the most popular approaches to these concerns is to encrypt users' data with their own public keys. However, this inevitably raises another challenge: how to train a model on data encrypted under multiple keys. In this paper, we propose a novel privacy-preserving deep learning model, namely PDLM, which applies deep learning to data encrypted under multiple keys. In PDLM, many users contribute their encrypted data to the SP to learn a specific model. We adopt an effective privacy-preserving calculation toolkit to carry out training based on stochastic gradient descent (SGD) in a privacy-preserving manner. We also prove that PDLM preserves users' privacy and analyze its efficiency theoretically. Finally, we evaluate PDLM on two real-world datasets, and the empirical results demonstrate that PDLM can train the model effectively and efficiently in a privacy-preserving way.
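To make the SGD-based training step concrete, the minimal sketch below shows a plaintext logistic-regression update of the kind a scheme like PDLM would instead evaluate over multi-key encrypted data via its privacy-preserving calculation toolkit. The model choice, learning rate, and synthetic data here are illustrative assumptions, not details taken from the paper.

```python
# Illustrative, plaintext-only sketch of an SGD update. In PDLM, the
# additions and multiplications below would be carried out on ciphertexts
# through a privacy-preserving calculation toolkit; the logistic model,
# learning rate, and toy data are assumptions made for illustration.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sgd_step(w, x, y, lr=0.1):
    """One SGD update on a single example (x, y) with label y in {0, 1}."""
    pred = sigmoid(np.dot(w, x))   # forward pass
    grad = (pred - y) * x          # gradient of the logistic loss w.r.t. w
    return w - lr * grad           # parameter update

# Toy usage on synthetic data, purely for illustration.
rng = np.random.default_rng(0)
w = np.zeros(5)
for _ in range(100):
    x = rng.normal(size=5)
    y = float(x.sum() > 0)         # synthetic label
    w = sgd_step(w, x, y)
print(w)
```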