The rise and potential of large language model based agents: A survey

Z Xi, W Chen, X Guo, W He, Y Ding, B Hong… - arXiv preprint arXiv …, 2023 - arxiv.org
For a long time, humanity has pursued artificial intelligence (AI) equivalent to or surpassing
the human level, with AI agents considered a promising vehicle for this pursuit. AI agents are …

Improving deep learning with prior knowledge and cognitive models: A survey on enhancing interpretability, adversarial robustness and zero-shot learning

F Mumuni, A Mumuni - Cognitive Systems Research, 2023 - Elsevier
We review current and emerging knowledge-informed and brain-inspired cognitive systems
for realizing adversarial defenses, eXplainable Artificial Intelligence (XAI), and zero-shot or …

Overlooked factors in concept-based explanations: Dataset choice, concept learnability, and human capability

VV Ramaswamy, SSY Kim, R Fong… - Proceedings of the …, 2023 - openaccess.thecvf.com
Concept-based interpretability methods aim to explain a deep neural network
model's components and predictions using a pre-defined set of semantic concepts. These …

Explainable predictive maintenance: a survey of current methods, challenges and opportunities

L Cummins, A Sommers, SB Ramezani, S Mittal… - IEEE …, 2024 - ieeexplore.ieee.org
Predictive maintenance is a well-studied collection of techniques that aims to prolong the life
of a mechanical system by using artificial intelligence and machine learning to predict the …

In search of verifiability: Explanations rarely enable complementary performance in AI-advised decision making

R Fok, DS Weld - arXiv preprint arXiv:2305.07722, 2023 - arxiv.org
The current literature on AI-advised decision making--involving explainable AI systems
advising human decision makers--presents a series of inconclusive and confounding …

Humans, AI, and context: Understanding end-users' trust in a real-world computer vision application

SSY Kim, EA Watkins, O Russakovsky, R Fong… - Proceedings of the …, 2023 - dl.acm.org
Trust is an important factor in people's interactions with AI systems. However, there is a lack
of empirical studies examining how real end-users trust or distrust the AI system they interact …

How do data analysts respond to AI assistance? A Wizard-of-Oz study

K Gu, M Grunde-McLaughlin, A McNutt, J Heer… - Proceedings of the CHI …, 2024 - dl.acm.org
Data analysis is challenging as analysts must navigate nuanced decisions that may yield
divergent conclusions. AI assistants have the potential to support analysts in planning their …

Towards AI Accountability Infrastructure: Gaps and Opportunities in AI Audit Tooling

V Ojewale, R Steed, B Vecchione, A Birhane… - arXiv preprint arXiv …, 2024 - arxiv.org
Audits are critical mechanisms for identifying the risks and limitations of deployed artificial
intelligence (AI) systems. However, the effective execution of AI audits remains incredibly …

Keep the faith: Faithful explanations in convolutional neural networks for case-based reasoning

TN Wolf, F Bongratz, AM Rickmann, S Pölsterl… - Proceedings of the …, 2024 - ojs.aaai.org
Explaining predictions of black-box neural networks is crucial when applied to decision-critical
tasks. Thus, attribution maps are commonly used to identify important image regions …

Concept-based explainable artificial intelligence: A survey

E Poeta, G Ciravegna, E Pastor, T Cerquitelli… - arXiv preprint arXiv …, 2023 - arxiv.org
The field of explainable artificial intelligence emerged in response to the growing need for
more transparent and reliable models. However, using raw features to provide explanations …