B Wu, X Yang, S Pan, X Yuan - Proceedings of the 2022 ACM on Asia …, 2022 - dl.acm.org
Machine learning models have been shown to face a severe threat from model extraction attacks, where a well-trained private model owned by a service provider can be stolen by an attacker …
Graph Neural Networks (GNNs) have achieved promising performance in various real-world applications. However, recent studies have shown that GNNs are vulnerable to adversarial …
Graph Neural Networks (GNNs), which generalize traditional deep neural networks on graph data, have achieved state-of-the-art performance on several graph analytical tasks …
Node injection attack on Graph Neural Networks (GNNs) is an emerging and practical attack scenario in which the attacker injects malicious nodes rather than modifying original nodes or …
Much real-world data comes in the form of graphs, such as social networks and protein structures. To fully utilize the information contained in graph data, a new family of machine …
Graphs are an important data representation existing ubiquitously in the real world. However, analyzing graph data is computationally difficult due to its non-Euclidean nature. Graph …
J Xu, M Xue, S Picek - Proceedings of the 3rd ACM workshop on …, 2021 - dl.acm.org
Backdoor attacks represent a serious threat to neural network models. A backdoored model will misclassify the trigger-embedded inputs into an attacker-chosen target label while …
M Ju, Y Fan, C Zhang, Y Ye - Proceedings of the AAAI Conference on …, 2023 - ojs.aaai.org
Abstract Graph Neural Networks (GNNs) have drawn significant attention over the years and have been broadly applied to essential applications requiring solid robustness or vigorous …
X Wang, WH Wang - Proceedings of the 2022 ACM SIGSAC Conference …, 2022 - dl.acm.org
Recent research has shown that machine learning (ML) models are vulnerable to privacy attacks that leak information about the training data. In this work, we consider Graph Neural …