AUC maximization in the era of big data and AI: A survey

T Yang, Y Ying - ACM Computing Surveys, 2022 - dl.acm.org
Area under the ROC curve, aka AUC, is a measure of choice for assessing the performance
of a classifier for imbalanced data. AUC maximization refers to a learning paradigm that …

TrustLLM: Trustworthiness in large language models

Y Huang, L Sun, H Wang, S Wu, Q Zhang, Y Li… - arXiv preprint arXiv …, 2024 - arxiv.org
Large language models (LLMs), exemplified by ChatGPT, have gained considerable
attention for their excellent natural language processing capabilities. Nonetheless, these …

Unleashing the power of graph data augmentation on covariate distribution shift

Y Sui, Q Wu, J Wu, Q Cui, L Li, J Zhou… - Advances in Neural …, 2024 - proceedings.neurips.cc
The issue of distribution shifts is emerging as a critical concern in graph representation
learning. From the perspective of invariant learning and stable learning, a recently well …

Demystifying structural disparity in graph neural networks: Can one size fit all?

H Mao, Z Chen, W Jin, H Han, Y Ma… - Advances in Neural …, 2024 - proceedings.neurips.cc
Recent studies on Graph Neural Networks (GNNs) provide both empirical and
theoretical evidence supporting their effectiveness in capturing structural patterns on both …

Artificial intelligence for science in quantum, atomistic, and continuum systems

X Zhang, L Wang, J Helwig, Y Luo, C Fu, Y Xie… - arXiv preprint arXiv …, 2023 - arxiv.org
Advances in artificial intelligence (AI) are fueling a new paradigm of discoveries in natural
sciences. Today, AI has started to advance natural sciences by improving, accelerating, and …

Graph data augmentation for graph machine learning: A survey

T Zhao, W Jin, Y Liu, Y Wang, G Liu… - arXiv preprint arXiv …, 2022 - arxiv.org
Data augmentation has recently seen increased interest in graph machine learning given its
demonstrated ability to improve model performance and generalization by added training …

GLUE-X: Evaluating natural language understanding models from an out-of-distribution generalization perspective

L Yang, S Zhang, L Qin, Y Li, Y Wang, H Liu… - arXiv preprint arXiv …, 2022 - arxiv.org
Pre-trained language models (PLMs) are known to improve the generalization performance
of natural language understanding models by leveraging large amounts of data during the …

Out-of-distribution generalization on graphs: A survey

H Li, X Wang, Z Zhang, W Zhu - arXiv preprint arXiv:2202.07987, 2022 - arxiv.org
Graph machine learning has been extensively studied in both academia and industry.
Although booming with a vast number of emerging methods and techniques, most of the …

Empowering graph representation learning with test-time graph transformation

W Jin, T Zhao, J Ding, Y Liu, J Tang, N Shah - arXiv preprint arXiv …, 2022 - arxiv.org
As powerful tools for representation learning on graphs, graph neural networks (GNNs) have
facilitated various applications from drug discovery to recommender systems. Nevertheless …

Does invariant graph learning via environment augmentation learn invariance?

Y Chen, Y Bian, K Zhou, B Xie… - Advances in Neural …, 2024 - proceedings.neurips.cc
Invariant graph representation learning aims to learn the invariance among data from
different environments for out-of-distribution generalization on graphs. As the graph …