Model inversion (MI) attacks aim to infer and reconstruct private training data by abusing access to a model. MI attacks have raised concerns about the leaking of sensitive …
We propose a novel image transformation network for generating visually protected images for privacy-preserving deep neural networks (DNNs). The proposed transformation network …
R El Saj, E Sedgh Gooya, A Alfalou, M Khalil - Electronics, 2021 - mdpi.com
Privacy-preserving deep neural networks have become essential and have attracted the attention of many researchers due to the need to maintain the privacy and the confidentiality …
QX Huang, WL Yap, MY Chiu, HM Sun - IEEE Access, 2022 - ieeexplore.ieee.org
The need for cloud servers for training deep neural network (DNN) models is increasing as more complex architecture designs of DNN models are developed. Nevertheless, cloud …
In this paper, we propose a privacy-preserving image classification method that is based on the combined use of encrypted images and the vision transformer (ViT). The proposed …
In this paper, we propose a privacy-preserving semantic segmentation method that uses encrypted images and models with the vision transformer (ViT), called the segmentation …
H Kiya, R Iijima, A Maungmaung… - … on Information and …, 2023 - search.ieice.org
In this paper, we propose a combined use of transformed images and vision transformer (ViT) models transformed with a secret key. We show for the first time that models trained …
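The key-based transformations referenced in these snippets are commonly realized as block-wise operations whose block size matches the ViT patch size, so that a model trained on transformed images still aligns patches with transformed blocks. The sketch below illustrates the general idea with a key-seeded per-block pixel permutation; function and parameter names are hypothetical and this is not claimed to be the exact scheme of any cited paper.

```python
import numpy as np

def blockwise_shuffle(img, block=16, key=42, inverse=False):
    """Shuffle pixels inside each (block x block) patch with a permutation
    seeded by a secret key. Illustrative sketch only: names and defaults
    are assumptions, not the cited authors' exact method."""
    h, w, c = img.shape
    assert h % block == 0 and w % block == 0
    perm = np.random.default_rng(key).permutation(block * block)
    if inverse:
        # argsort of a permutation yields its inverse, recovering the input
        perm = np.argsort(perm)
    out = img.copy()
    for y in range(0, h, block):
        for x in range(0, w, block):
            patch = out[y:y + block, x:x + block].reshape(block * block, c)
            out[y:y + block, x:x + block] = patch[perm].reshape(block, block, c)
    return out
```

In this style of scheme, both training and test images are transformed with the same key, and only key holders can produce inputs the model classifies correctly; applying the transform with the wrong key, or not at all, degrades accuracy.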
Y Xiang, T Li, W Ren, T Zhu, KKR Choo - Engineering Applications of …, 2023 - Elsevier
The training of state-of-the-art deep learning models generally requires significant high-quality data, including personal and sensitive data. To ensure privacy of the sensitive data …
AP Maung Maung, H Kiya - Proceedings of the 2021 ACM Workshop on …, 2021 - dl.acm.org
In this paper, we propose a novel DNN watermarking method that utilizes a learnable image transformation method with a secret key. The proposed method embeds a watermark pattern …