Aligning model representations with those of humans has been found to improve robustness and generalization. However, such methods often focus on standard observational data …
How can we build AI systems that can learn any set of individual human values both quickly and safely, without causing harm or violating societal standards of acceptable behavior …
A major challenge in studying robustness in deep learning is defining the set of “meaningless” perturbations to which a given Neural Network (NN) should be invariant. Most …
Z Chen, Y Lu, JX Hu, Q Xuan, Z Wang, X Yang - Neurocomputing, 2025 - Elsevier
Understanding the enigmatic black-box representations within Deep Neural Networks (DNNs) is an essential problem in the deep learning community. An initial step towards …
O Saisho, K Kashiwagi, S Kawai, K Iwahana… - Adjunct Proceedings of …, 2023 - dl.acm.org
This research deals with how to build reliable AI models using shared sensitive data. Confidential computing is gaining attention in AI services in the field of ubiquitous computing. It …
K Combs, TJ Bihl, A Gadre… - Proceedings of the 2024 …, 2024 - dl.acm.org
As generative artificial intelligence (AI) becomes more common in day-to-day life, AI-generated content (AIGC) needs to be accurate, relevant, and comprehensive. These …
Deep Learning (DL) models, especially with the rise of the so-called foundation models, are increasingly used in real-world applications either as autonomous systems (e.g., facial …
Measuring the human alignment of trained models is gaining traction because it is not clear to what extent artificial image representations are proper models of the visual brain …
Data has powered incredible advances in machine learning (ML). Yet, the data used for training often consist of hard labels aggregated across human annotators, which fail to …