Concept Bottleneck Models (CBMs) ground image classification on human-understandable concepts to allow for interpretable model decisions as well as human …
Concept-based learning improves a deep learning model's interpretability by explaining its predictions via human-understandable concepts. Deep learning models trained under this …
Concept Bottleneck Models (CBMs) have emerged as a promising interpretable method whose final prediction is based on intermediate, human-understandable concepts rather …
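To make the architecture these snippets describe concrete, below is a minimal sketch of a concept bottleneck model in PyTorch, assuming a toy CNN backbone, eight binary concepts, and joint training of the concept and label predictors; these are illustrative choices, not any one paper's setup. The image is first mapped to concept scores, and the class prediction is computed only from those scores.

```python
import torch
import torch.nn as nn

class ConceptBottleneckModel(nn.Module):
    """Minimal sketch: an image is mapped to human-understandable concept
    scores, and the final label is predicted only from those scores."""

    def __init__(self, n_concepts: int = 8, n_classes: int = 10):
        super().__init__()
        # Concept predictor g: image -> concept logits (toy CNN backbone).
        self.concept_predictor = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, n_concepts),
        )
        # Label predictor f: concept scores -> class logits (kept linear so the
        # decision is a readable function of the concepts).
        self.label_predictor = nn.Linear(n_concepts, n_classes)

    def forward(self, x):
        concept_logits = self.concept_predictor(x)
        concepts = torch.sigmoid(concept_logits)       # concept probabilities
        class_logits = self.label_predictor(concepts)  # uses only the concepts
        return concept_logits, class_logits


# Joint training step: supervise both the concepts and the final label.
model = ConceptBottleneckModel()
x = torch.randn(4, 3, 32, 32)            # dummy image batch
c = torch.randint(0, 2, (4, 8)).float()  # binary concept annotations
y = torch.randint(0, 10, (4,))           # class labels
concept_logits, class_logits = model(x)
loss = nn.functional.binary_cross_entropy_with_logits(concept_logits, c) \
     + nn.functional.cross_entropy(class_logits, y)
loss.backward()
```

Because the label predictor sees only the concept vector, a human can inspect the predicted concepts and, in principle, intervene by replacing a concept probability with a known ground-truth value before the final prediction is recomputed.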
The focus of recent research has shifted from merely increasing the performance of Deep Neural Networks (DNNs) on various tasks to making DNNs more interpretable to humans. The …
Advancements in deep learning have boosted the performance of anomaly detection. However, real-world and safety-critical applications demand a level of …
S Kim, BC Ko - Machine Learning: Science and Technology, 2024 - iopscience.iop.org
In fields requiring high accountability, it is necessary to understand how deep-learning models make their decisions so that the causes of an image classification can be analyzed. Concept-based …
Recent advances in reinforcement learning (RL) have predominantly leveraged neural network-based policies for decision-making, yet these models often lack interpretability …
Human feedback plays a critical role in learning and refining reward models for text-to-image generation, but the optimal form the feedback should take for learning an accurate …
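One common form such feedback takes is a pairwise preference between two candidate images for the same prompt, with the reward model fit by a Bradley-Terry style logistic loss. The sketch below illustrates that setup under stated assumptions; the pre-computed embeddings, small MLP head, and the helper name `pairwise_preference_loss` are all hypothetical, not the snippet's method.

```python
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """Toy reward model: maps a pre-computed (prompt, image) embedding
    to a scalar score; the feature extractor is abstracted away."""

    def __init__(self, dim: int = 64):
        super().__init__()
        self.head = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, feats):
        return self.head(feats).squeeze(-1)


def pairwise_preference_loss(reward_model, feats_preferred, feats_rejected):
    """Bradley-Terry style loss: push the score of the human-preferred
    sample above the score of the rejected one."""
    r_pos = reward_model(feats_preferred)
    r_neg = reward_model(feats_rejected)
    return -nn.functional.logsigmoid(r_pos - r_neg).mean()


# Dummy batch of 16 preference pairs over pre-computed embeddings.
model = RewardModel()
preferred = torch.randn(16, 64)
rejected = torch.randn(16, 64)
loss = pairwise_preference_loss(model, preferred, rejected)
loss.backward()
```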
ME Zarlenga, S Sankaranarayanan… - arXiv preprint arXiv …, 2024 - arxiv.org
Deep neural networks trained via empirical risk minimisation often exhibit significant performance disparities across groups, particularly when group and task labels are …
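Such disparities are typically quantified by comparing average accuracy with worst-group accuracy. A minimal sketch of that evaluation, assuming group labels are available at evaluation time and using random data purely for illustration:

```python
import torch

def group_accuracies(preds, labels, groups, n_groups):
    """Per-group and worst-group accuracy: the gap between the average and
    the worst group is the disparity the snippet refers to."""
    accs = []
    for g in range(n_groups):
        mask = groups == g
        if mask.any():  # skip empty groups
            accs.append((preds[mask] == labels[mask]).float().mean().item())
    return accs, min(accs)

# Dummy predictions for a model evaluated over 4 groups.
preds = torch.randint(0, 2, (1000,))
labels = torch.randint(0, 2, (1000,))
groups = torch.randint(0, 4, (1000,))
per_group, worst = group_accuracies(preds, labels, groups, n_groups=4)
print(per_group, worst)
```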