A backdoor attack causes misrecognition in a deep neural network by training it on additional data containing a specific trigger. The network will correctly recognize normal samples (which lack the trigger) as their proper classes but will misrecognize backdoor samples (which contain the trigger) as target classes. In this paper, I propose a defense against backdoor attacks that uses a de-trigger autoencoder. In the proposed scheme, the de-trigger autoencoder removes the trigger from a backdoor sample, and the sample is detected from the resulting change in the classification result. Experiments were conducted on the MNIST, Fashion-MNIST, and CIFAR-10 datasets using TensorFlow as the machine learning library. For MNIST, Fashion-MNIST, and CIFAR-10, respectively, the proposed method detected 91.5%, 82.3%, and 90.9% of the backdoor samples and achieved 96.1%, 89.6%, and 91.2% accuracy on legitimate samples.
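
To make the detection step concrete, below is a minimal sketch of this scheme in TensorFlow. The autoencoder architecture, layer sizes, and names (`build_detrigger_autoencoder`, `detect_backdoor`) are illustrative assumptions rather than the paper's exact implementation; the underlying idea is that an autoencoder trained only on clean data tends to reconstruct an input without its trigger, so a label flip between the original input and its reconstruction flags a suspected backdoor sample.

```python
import tensorflow as tf

def build_detrigger_autoencoder(input_shape=(28, 28, 1)):
    """A small convolutional autoencoder (assumed architecture) trained on
    clean, trigger-free images so that reconstruction suppresses trigger
    patterns it never saw during training."""
    inputs = tf.keras.Input(shape=input_shape)
    x = tf.keras.layers.Conv2D(32, 3, activation="relu", padding="same")(inputs)
    x = tf.keras.layers.MaxPooling2D(2, padding="same")(x)
    x = tf.keras.layers.Conv2D(16, 3, activation="relu", padding="same")(x)
    encoded = tf.keras.layers.MaxPooling2D(2, padding="same")(x)
    x = tf.keras.layers.Conv2DTranspose(16, 3, strides=2,
                                        activation="relu", padding="same")(encoded)
    x = tf.keras.layers.Conv2DTranspose(32, 3, strides=2,
                                        activation="relu", padding="same")(x)
    outputs = tf.keras.layers.Conv2D(input_shape[-1], 3,
                                     activation="sigmoid", padding="same")(x)
    return tf.keras.Model(inputs, outputs)

def detect_backdoor(classifier, autoencoder, images):
    """Flag a sample as a backdoor sample when the class predicted for the
    original input differs from the class predicted for its de-triggered
    (autoencoder-reconstructed) version."""
    original_pred = tf.argmax(classifier(images), axis=1)
    detriggered = autoencoder(images)
    detriggered_pred = tf.argmax(classifier(detriggered), axis=1)
    return original_pred != detriggered_pred  # True => suspected backdoor

# Example usage (assuming x_clean holds trigger-free training images in [0, 1]
# and trained_classifier is the possibly-backdoored model under test):
# ae = build_detrigger_autoencoder()
# ae.compile(optimizer="adam", loss="mse")
# ae.fit(x_clean, x_clean, epochs=10, batch_size=128)
# suspected = detect_backdoor(trained_classifier, ae, test_images)
```

Because the autoencoder is trained only on legitimate samples, a clean input should be reconstructed faithfully and keep its predicted class, while a triggered input loses its trigger and reverts to its true class, producing the classification change the detector looks for.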