This paper theoretically explains the intuition that simple concepts are more likely to be learned by deep neural networks (DNNs) than complex concepts. In fact, recent studies …
Q Ren, H Deng, Y Chen, S Lou… - … on Machine Learning, 2023 - proceedings.mlr.press
In this paper, we focus on mean-field variational Bayesian Neural Networks (BNNs) and explore the representation capacity of such BNNs by investigating which types of concepts …
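To make the mean-field variational family concrete, here is a minimal sketch of one such layer under an assumed PyTorch setting (the class name MeanFieldLinear and its parameterization are illustrative, not the paper's implementation):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MeanFieldLinear(nn.Module):
    """Linear layer with a fully factorized (mean-field) Gaussian posterior
    over its weights, sampled via the reparameterization trick.
    Illustrative sketch only, not the cited paper's code."""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.w_mu = nn.Parameter(torch.zeros(out_features, in_features))
        # rho is transformed through softplus to guarantee a positive std-dev
        self.w_rho = nn.Parameter(torch.full((out_features, in_features), -5.0))
        self.b = nn.Parameter(torch.zeros(out_features))

    def forward(self, x):
        w_sigma = F.softplus(self.w_rho)      # per-weight standard deviation
        eps = torch.randn_like(w_sigma)       # independent Gaussian noise
        w = self.w_mu + w_sigma * eps         # one posterior weight sample
        return F.linear(x, w, self.b)
```

Each forward pass draws a fresh weight sample, so averaging several calls gives a Monte Carlo estimate of the posterior predictive distribution.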
This paper aims to illustrate the concept-emerging phenomenon in a trained DNN. Specifically, we find that the inference score of a DNN can be disentangled into the effects of …
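For the shape of such a disentanglement, here is a brute-force sketch of the Harsanyi-style interaction decomposition this line of work builds on (the score function v and all names are illustrative; the enumeration is exponential in the number of variables, so it is shown only for tiny inputs):

```python
from itertools import chain, combinations

def subsets(s):
    """All subsets of the index tuple s, including the empty set."""
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

def harsanyi_interactions(v, n):
    """Interaction effects I(S) for a score function v over n input variables.

    v maps a frozenset of retained variable indices to a scalar model score
    (all other variables masked).  By Moebius inversion, the full-sample
    score v(N) equals the sum of I(S) over all subsets S, i.e. the
    inference score is disentangled into interaction effects."""
    N = tuple(range(n))
    I = {}
    for S in subsets(N):
        I[frozenset(S)] = sum(
            (-1) ** (len(S) - len(T)) * v(frozenset(T)) for T in subsets(S)
        )
    return I

# Toy check: a score carrying a pure pairwise interaction between inputs 0 and 1.
v = lambda S: 1.0 if {0, 1} <= S else 0.0
I = harsanyi_interactions(v, 3)
assert abs(sum(I.values()) - v(frozenset({0, 1, 2}))) < 1e-9
```

The assert checks the defining property: the inference score on the full sample equals the sum of all interaction effects.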
This paper aims to prove the emergence of symbolic concepts in well-trained AI models. We prove that if (1) the high-order derivatives of the model output w.r.t. the input variables are all …
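Sketched in notation assumed here rather than copied from the paper, the claimed emergence takes the following form: the score v(x_S) of any masked sample decomposes exactly into interaction effects I(T), and under the stated conditions only a sparse set Ω of them is non-negligible:

```latex
v(\mathbf{x}_S) \;=\; \sum_{T \subseteq S} I(T)
\;\approx\; \sum_{\substack{T \in \Omega \\ T \subseteq S}} I(T),
\qquad |\Omega| \ll 2^{n},
\quad \text{where } I(T) \;=\; \sum_{L \subseteq T} (-1)^{|T|-|L|}\, v(\mathbf{x}_L).
```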
The self-attention mechanism is the key component of the Transformer, but it is often criticized for its computational demands. Previous token-pruning works motivate their methods from the view of …
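As context for what token pruning does in practice, here is a generic sketch that keeps the tokens receiving the most attention mass (a common heuristic; not the method this entry proposes, and all names are assumptions):

```python
import torch

def prune_tokens(x, attn, keep_ratio=0.5):
    """Drop the least-attended tokens after a self-attention layer.

    x:    (batch, seq, dim) token representations
    attn: (batch, heads, seq, seq) attention weights from that layer
    Scores each token by the attention mass it receives, averaged over
    heads and query positions, and keeps the top-k tokens in order.
    Generic sketch of attention-based token pruning."""
    score = attn.mean(dim=1).mean(dim=1)                  # (batch, seq): mass received
    k = max(1, int(x.size(1) * keep_ratio))
    idx = score.topk(k, dim=1).indices.sort(dim=1).values # restore original order
    return x.gather(1, idx.unsqueeze(-1).expand(-1, -1, x.size(2)))
```

Pruning after attention shrinks the sequence length seen by every subsequent layer, which is where the quadratic-cost savings come from.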
Masking some input variables of a deep neural network (DNN) and computing the output change on the masked input sample is a typical way to compute attributions of input …
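A minimal sketch of that masking-based attribution, assuming a scalar-output model and a zero baseline (both assumptions; real methods differ in baseline choice and often mask subsets rather than single variables):

```python
import torch

def occlusion_attribution(model, x, baseline=None):
    """Attribute a model's scalar output to input variables by masking.

    Masks one variable at a time (replacing it with a baseline value) and
    records the resulting drop in the output score.
    x: (n_features,) single input sample."""
    if baseline is None:
        baseline = torch.zeros_like(x)
    full_score = model(x.unsqueeze(0)).squeeze()
    attributions = torch.empty_like(x)
    for i in range(x.numel()):
        masked = x.clone()
        masked[i] = baseline[i]         # occlude one variable
        attributions[i] = full_score - model(masked.unsqueeze(0)).squeeze()
    return attributions
```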
The advancement of Internet of Things (IoT) applications (technologies and enabling platforms), consisting of software and hardware (e.g., sensors, actuators), allows …
J Yue, J Wang, S Zhang, Z Ma, Y Shi, Y Lin - Neural Networks, 2025 - Elsevier
Multivariate time series classification (MTSC), which identifies categories of multiple sensor signals recorded in continuous time, is widely used in various fields such as transportation …
W Shen, L Cheng, Y Yang, M Li, Q Zhang - arXiv preprint arXiv …, 2023 - arxiv.org
In this paper, we explain the inference logic of large language models (LLMs) as a set of symbolic concepts. Many recent studies have discovered that traditional DNNs usually …