Adapting membership inference attacks to GNN for graph classification: Approaches and implications

B Wu, X Yang, S Pan, X Yuan - 2021 IEEE International …, 2021 - ieeexplore.ieee.org
In light of the wide application of Graph Neural Networks (GNNs), Membership Inference Attacks (MIAs) against GNNs raise severe privacy concerns, since training data can be leaked from trained GNN models. However, prior studies focus on inferring the membership of only the components of a graph, e.g., an individual node or edge. In this paper, we take the first step in MIA against GNNs for graph-level classification. Our objective is to infer whether a graph sample has been used to train a GNN model. We present and implement two types of attacks, i.e., training-based attacks and threshold-based attacks, corresponding to different adversarial capabilities. We perform comprehensive experiments to evaluate our attacks on seven real-world datasets using five representative GNN models. Both attacks are shown to be effective and achieve high performance, reaching attack F1 scores above 0.7 in most cases. Our findings also confirm that, unlike node-level classifiers, MIAs on graph-level classification tasks are more strongly correlated with the overfitting level of GNNs than with the statistical properties of their training graphs.
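To illustrate the general idea behind the threshold-based attack family the abstract mentions, here is a minimal, hypothetical sketch (not the paper's actual method): the adversary guesses that a graph was a training member when the target model's maximum softmax confidence exceeds a threshold `tau`, exploiting the fact that overfitted models tend to be more confident on their training data. The function name and threshold value are illustrative assumptions.

```python
import numpy as np

def threshold_attack(confidences, tau=0.9):
    """Hypothetical threshold-based membership inference sketch.

    confidences: (n_samples, n_classes) array of the target model's
    softmax outputs for the queried graph samples.
    Returns a boolean array: True = guessed training member.
    """
    # Overfitted models are typically more confident on training data,
    # so a high max-class confidence is taken as evidence of membership.
    return confidences.max(axis=1) > tau

# Toy illustration with made-up posteriors:
member_conf = np.array([[0.95, 0.05], [0.99, 0.01]])     # confident
nonmember_conf = np.array([[0.60, 0.40], [0.55, 0.45]])  # uncertain
print(threshold_attack(member_conf))     # [ True  True]
print(threshold_attack(nonmember_conf))  # [False False]
```

In practice the threshold would be calibrated, e.g., on a shadow model's outputs, and the attack's success depends on the overfitting gap the paper highlights.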