Information decomposition and the informational architecture of the brain

AI Luppi, FE Rosas, PAM Mediano, DK Menon… - Trends in Cognitive …, 2024 - cell.com
To explain how the brain orchestrates information-processing for cognition, we must
understand information itself. Importantly, information is not a monolithic entity. Information …
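
The "not a monolithic entity" claim points to partial information decomposition (PID). As a sketch in the standard Williams–Beer notation (an illustration, not this article's own formalism), the information two sources X_1, X_2 carry about a target Y splits into redundant, unique, and synergistic components:

    I(X_1, X_2; Y) = \mathrm{Red}(X_1, X_2; Y) + \mathrm{Unq}(X_1; Y) + \mathrm{Unq}(X_2; Y) + \mathrm{Syn}(X_1, X_2; Y)

Each component can, in principle, be mapped to distinct neural substrates, which is what the title's "informational architecture" refers to.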

6G networks: Beyond Shannon towards semantic and goal-oriented communications

EC Strinati, S Barbarossa - Computer Networks, 2021 - Elsevier
The goal of this paper is to promote the idea that including semantic and goal-oriented
aspects in future 6G networks can produce a significant leap forward in terms of system …
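
For context, the "Shannon" baseline being moved beyond is the technical level of communication, whose benchmark is channel capacity; for an AWGN channel of bandwidth B and signal-to-noise ratio S/N this is the classic

    C = B \log_2\!\left(1 + S/N\right) \quad \text{bits/s}

Semantic and goal-oriented communication instead judges success by whether meaning or task outcomes are preserved at the receiver, not by bit-level fidelity alone (this framing follows Weaver's levels of communication, not a formula from the paper itself).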

A survey on explainable reinforcement learning: Concepts, algorithms, challenges

Y Qing, S Liu, J Song, H Wang, M Song - arXiv preprint arXiv:2211.06665, 2022 - arxiv.org
Reinforcement Learning (RL) is a popular machine learning paradigm where intelligent
agents interact with the environment to fulfill a long-term goal. Driven by the resurgence of …
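
As a one-line formalization of the "long-term goal" (standard RL notation, not anything specific to this survey), the agent maximizes the expected discounted return

    G_t = \mathbb{E}\!\left[\sum_{k=0}^{\infty} \gamma^k R_{t+k+1}\right], \qquad \gamma \in [0, 1)

Explainable RL then asks why a learned policy selects the actions it does in pursuit of this objective.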

Task-specific fine-tuning via variational information bottleneck for weakly-supervised pathology whole slide image classification

H Li, C Zhu, Y Zhang, Y Sun, Z Shui… - Proceedings of the …, 2023 - openaccess.thecvf.com
While Multiple Instance Learning (MIL) has shown promising results in digital
pathology Whole Slide Image (WSI) analysis, such a paradigm still faces performance and …
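
The variational information bottleneck (VIB) component named in the title can be sketched compactly. Below is a minimal, generic PyTorch sketch of a VIB classification head, not the paper's actual architecture; the class name, dimensions, and beta value are illustrative assumptions. A bag-level feature h is encoded as a Gaussian code z, a KL penalty toward N(0, I) compresses it, and cross-entropy keeps it predictive.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class VIBHead(nn.Module):  # hypothetical name, for illustration only
        """Stochastic bottleneck head: trades task accuracy against compression."""
        def __init__(self, in_dim=512, z_dim=128, n_classes=2):
            super().__init__()
            self.mu = nn.Linear(in_dim, z_dim)       # mean of q(z|x)
            self.logvar = nn.Linear(in_dim, z_dim)   # log-variance of q(z|x)
            self.classifier = nn.Linear(z_dim, n_classes)

        def forward(self, h):
            mu, logvar = self.mu(h), self.logvar(h)
            z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
            return self.classifier(z), mu, logvar

    def vib_loss(logits, labels, mu, logvar, beta=1e-3):
        """Cross-entropy (task term) plus beta-weighted KL to N(0, I) (compression term)."""
        ce = F.cross_entropy(logits, labels)
        kl = -0.5 * torch.mean(torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1))
        return ce + beta * kl

In a weakly-supervised WSI setting, h would be the pooled bag representation from a MIL aggregator, with only bag-level labels supervising the loss.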

Farewell to mutual information: Variational distillation for cross-modal person re-identification

X Tian, Z Zhang, S Lin, Y Qu… - Proceedings of the …, 2021 - openaccess.thecvf.com
The Information Bottleneck (IB) provides an information-theoretic principle for
representation learning, by retaining all information relevant for predicting the label while …
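
Concretely, the IB principle referenced here seeks a stochastic encoding Z of the input X that is maximally predictive of the label Y while maximally compressed, typically written as the Lagrangian

    \min_{p(z \mid x)} \; I(X; Z) - \beta \, I(Z; Y)

The "farewell" in the title reflects the known difficulty of estimating these mutual-information terms in high dimensions, which the paper's variational distillation objective is designed to sidestep.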

How does information bottleneck help deep learning?

K Kawaguchi, Z Deng, X Ji… - … Conference on Machine …, 2023 - proceedings.mlr.press
Numerous deep learning algorithms have been inspired by and understood via the notion of
information bottleneck, where unnecessary information is (often implicitly) minimized while …

Feature learning in deep classifiers through intermediate neural collapse

A Rangamani, M Lindegaard… - International …, 2023 - proceedings.mlr.press
In this paper, we conduct an empirical study of the feature learning process in deep
classifiers. Recent research has identified a training phenomenon called Neural Collapse …
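
For reference, Neural Collapse (the standard description of the phenomenon, not this paper's empirical findings) says that late in training on a C-class problem, within-class variability of last-layer features vanishes and the centered, normalized class means form a simplex equiangular tight frame:

    \Sigma_W \to 0, \qquad \langle \tilde{\mu}_c, \tilde{\mu}_{c'} \rangle \to -\tfrac{1}{C-1} \;\; (c \neq c')

The title's "intermediate neural collapse" indicates the study extends this analysis from the last layer to intermediate layers.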

Explaining knowledge distillation by quantifying the knowledge

X Cheng, Z Rao, Y Chen… - Proceedings of the IEEE …, 2020 - openaccess.thecvf.com
This paper presents a method to interpret the success of knowledge distillation by
quantifying and analyzing task-relevant and task-irrelevant visual concepts that are encoded …

Why do better loss functions lead to less transferable features?

S Kornblith, T Chen, H Lee… - Advances in Neural …, 2021 - proceedings.neurips.cc
Previous work has proposed many new loss functions and regularizers that improve test
accuracy on image classification tasks. However, it is not clear whether these loss functions …
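
A common protocol for measuring how "transferable" features are is linear probing: freeze the pretrained backbone, extract features on a downstream dataset, and fit a linear classifier. A minimal sketch follows; the random arrays are placeholders standing in for real frozen-backbone features, and this is a generic protocol rather than this paper's exact evaluation pipeline.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def linear_probe_accuracy(f_train, y_train, f_test, y_test):
        """Fit a linear classifier on frozen features; higher test
        accuracy suggests more transferable representations."""
        clf = LogisticRegression(max_iter=1000)
        clf.fit(f_train, y_train)
        return clf.score(f_test, y_test)

    # Placeholder usage: random features stand in for a frozen backbone's outputs.
    rng = np.random.default_rng(0)
    f_tr, f_te = rng.normal(size=(200, 64)), rng.normal(size=(50, 64))
    y_tr, y_te = rng.integers(0, 5, size=200), rng.integers(0, 5, size=50)
    print(linear_probe_accuracy(f_tr, y_tr, f_te, y_te))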

Efficient knowledge distillation from model checkpoints

C Wang, Q Yang, R Huang, S Song… - Advances in Neural …, 2022 - proceedings.neurips.cc
Knowledge distillation is an effective approach to learning compact models (students)
with the supervision of large and strong models (teachers). As empirically there exists a …
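
The student-teacher supervision described here is classically implemented as a weighted sum of a hard-label loss and a temperature-softened KL term (the Hinton-style distillation loss; the alpha and T values below are illustrative, and this sketch is not the checkpoint-based method the paper itself proposes):

    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
        """Cross-entropy on hard labels plus temperature-scaled KL
        between teacher and student soft targets."""
        hard = F.cross_entropy(student_logits, labels)
        soft = F.kl_div(
            F.log_softmax(student_logits / T, dim=1),
            F.softmax(teacher_logits / T, dim=1),
            reduction="batchmean",
        ) * (T * T)  # T^2 keeps soft-target gradients on the same scale as the hard loss
        return alpha * hard + (1 - alpha) * soft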