Does a neural network really encode symbolic concepts?

M Li, Q Zhang - International conference on machine …, 2023 - proceedings.mlr.press
Recently, a series of studies have tried to extract interactions between input variables
modeled by a DNN and define such interactions as concepts encoded by the DNN …

Towards the difficulty for a deep neural network to learn concepts of different complexities

D Liu, H Deng, X Cheng, Q Ren… - Advances in Neural …, 2023 - proceedings.neurips.cc
This paper theoretically explains the intuition that simple concepts are more likely to be
learned by deep neural networks (DNNs) than complex concepts. In fact, recent studies …

Bayesian neural networks avoid encoding complex and perturbation-sensitive concepts

Q Ren, H Deng, Y Chen, S Lou… - … on Machine Learning, 2023 - proceedings.mlr.press
In this paper, we focus on mean-field variational Bayesian Neural Networks (BNNs) and
explore the representation capacity of such BNNs by investigating which types of concepts …

Defining and quantifying the emergence of sparse concepts in DNNs

J Ren, M Li, Q Chen, H Deng… - Proceedings of the …, 2023 - openaccess.thecvf.com
This paper aims to illustrate the concept-emerging phenomenon in a trained DNN.
Specifically, we find that the inference score of a DNN can be disentangled into the effects of …

Where we have arrived in proving the emergence of sparse symbolic concepts in AI models

Q Ren, J Gao, W Shen, Q Zhang - arXiv preprint arXiv:2305.01939, 2023 - arxiv.org
This paper aims to prove the emergence of symbolic concepts in well-trained AI models. We
prove that if (1) the high-order derivatives of the model output w.r.t. the input variables are all …

MLP Can Be A Good Transformer Learner

S Lin, P Lyu, D Liu, T Tang, X Liang… - Proceedings of the …, 2024 - openaccess.thecvf.com
The self-attention mechanism is the key component of the Transformer, but it is often criticized for its
computational demands. Previous token pruning works motivate their methods from the view of …

Can we faithfully represent masked states to compute Shapley values on a DNN?

J Ren, Z Zhou, Q Chen, Q Zhang - arXiv preprint arXiv:2105.10719, 2021 - arxiv.org
Masking some input variables of a deep neural network (DNN) and computing output
changes on the masked input sample represent a typical way to compute attributions of input …

Temporal Variations Dataset for Indoor Environmental Parameters in Northern Saudi Arabia

T Alshammari, RA Ramadan, A Ahmad - Applied Sciences, 2023 - mdpi.com
The advancement of the Internet of Things applications (technologies and enabling
platforms), consisting of software and hardware (e.g., sensors, actuators, etc.), allows …

TV-Net: Temporal-Variable feature harmonizing Network for multivariate time series classification and interpretation

J Yue, J Wang, S Zhang, Z Ma, Y Shi, Y Lin - Neural Networks, 2025 - Elsevier
Multivariate time series classification (MTSC), which identifies categories of multiple sensor
signals recorded in continuous time, is widely used in various fields such as transportation …

Can the Inference Logic of Large Language Models be Disentangled into Symbolic Concepts?

W Shen, L Cheng, Y Yang, M Li, Q Zhang - arXiv preprint arXiv …, 2023 - arxiv.org
In this paper, we explain the inference logic of large language models (LLMs) as a set of
symbolic concepts. Many recent studies have discovered that traditional DNNs usually …