TrustLLM: Trustworthiness in large language models

L Sun, Y Huang, H Wang, S Wu, Q Zhang… - arXiv preprint arXiv …, 2024 - arxiv.org
Large language models (LLMs), exemplified by ChatGPT, have gained considerable
attention for their excellent natural language processing capabilities. Nonetheless, these …

Artificial intelligence for science in quantum, atomistic, and continuum systems

X Zhang, L Wang, J Helwig, Y Luo, C Fu, Y Xie… - arXiv preprint arXiv …, 2023 - arxiv.org
Advances in artificial intelligence (AI) are fueling a new paradigm of discoveries in natural
sciences. Today, AI has started to advance natural sciences by improving, accelerating, and …

Does invariant graph learning via environment augmentation learn invariance?

Y Chen, Y Bian, K Zhou, B Xie… - Advances in Neural …, 2024 - proceedings.neurips.cc
Invariant graph representation learning aims to learn the invariance among data from
different environments for out-of-distribution generalization on graphs. As the graph …

A survey of graph neural networks in real world: Imbalance, noise, privacy and OOD challenges

W Ju, S Yi, Y Wang, Z Xiao, Z Mao, H Li, Y Gu… - arXiv preprint arXiv …, 2024 - arxiv.org
Graph-structured data exhibits universality and widespread applicability across diverse
domains, such as social network analysis, biochemistry, financial fraud detection, and …

Graph structure and feature extrapolation for out-of-distribution generalization

X Li, S Gui, Y Luo, S Ji - arXiv preprint arXiv:2306.08076, 2023 - arxiv.org
Out-of-distribution (OOD) generalization deals with the prevalent learning scenario where
test distribution shifts from training distribution. With rising application demands and inherent …

Identifying Semantic Component for Robust Molecular Property Prediction

Z Li, Z Xu, R Cai, Z Yang, Y Yan, Z Hao, G Chen… - arXiv preprint arXiv …, 2023 - arxiv.org
Although graph neural networks have achieved great success in the task of molecular
property prediction in recent years, their generalization ability under out-of-distribution …

Position: TrustLLM: Trustworthiness in Large Language Models

Y Huang, L Sun, H Wang, S Wu… - International …, 2024 - proceedings.mlr.press
Large language models (LLMs) have gained considerable attention for their excellent
natural language processing capabilities. Nonetheless, these LLMs present many …

Spatio-temporal fluid dynamics modeling via physical-awareness and parameter diffusion guidance

H Wu, F Xu, Y Duan, Z Niu, W Wang, G Lu… - arXiv preprint arXiv …, 2024 - arxiv.org
This paper proposes a two-stage framework named ST-PAD for spatio-temporal fluid
dynamics modeling in the field of earth sciences, aiming to achieve high-precision …

FlowX: Towards explainable graph neural networks via message flows

S Gui, H Yuan, J Wang, Q Lao, K Li… - IEEE Transactions on …, 2023 - ieeexplore.ieee.org
We investigate the explainability of graph neural networks (GNNs) as a step toward
elucidating their working mechanisms. While most current methods focus on explaining …

Pairwise Alignment Improves Graph Domain Adaptation

S Liu, D Zou, H Zhao, P Li - arXiv preprint arXiv:2403.01092, 2024 - arxiv.org
Graph-based methods, pivotal for label inference over interconnected objects in many real-
world applications, often encounter generalization challenges, if the graph used for model …