A Survey of NL2SQL with Large Language Models: Where are we, and where are we going?

X Liu, S Shen, B Li, P Ma, R Jiang, Y Zhang… - arXiv preprint arXiv …, 2024 - arxiv.org
Translating users' natural language queries (NL) into SQL queries (i.e., NL2SQL) can
significantly reduce barriers to accessing relational databases and support various …

Large language models and causal inference in collaboration: A comprehensive survey

X Liu, P Xu, J Wu, J Yuan, Y Yang, Y Zhou, F Liu… - arXiv preprint arXiv …, 2024 - arxiv.org
Causal inference has shown potential in enhancing the predictive accuracy, fairness,
robustness, and explainability of Natural Language Processing (NLP) models by capturing …

Language models don't always say what they think: Unfaithful explanations in chain-of-thought prompting

M Turpin, J Michael, E Perez… - Advances in Neural …, 2024 - proceedings.neurips.cc
Large Language Models (LLMs) can achieve strong performance on many tasks by
producing step-by-step reasoning before giving a final output, often referred to as chain-of-thought …

Explainability for large language models: A survey

H Zhao, H Chen, F Yang, N Liu, H Deng, H Cai… - ACM Transactions on …, 2024 - dl.acm.org
Large language models (LLMs) have demonstrated impressive capabilities in natural
language processing. However, their internal mechanisms are still unclear and this lack of …

Foundational challenges in assuring alignment and safety of large language models

U Anwar, A Saparov, J Rando, D Paleka… - arXiv preprint arXiv …, 2024 - arxiv.org
This work identifies 18 foundational challenges in assuring the alignment and safety of large
language models (LLMs). These challenges are organized into three different categories …

Rethinking interpretability in the era of large language models

C Singh, JP Inala, M Galley, R Caruana… - arXiv preprint arXiv …, 2024 - arxiv.org
Interpretable machine learning has exploded as an area of interest over the last decade,
sparked by the rise of increasingly large datasets and deep neural networks …

Can large language models explain themselves? A study of LLM-generated self-explanations

S Huang, S Mamidanna, S Jangam, Y Zhou… - arXiv preprint arXiv …, 2023 - arxiv.org
Large language models (LLMs) such as ChatGPT have demonstrated superior performance
on a variety of natural language processing (NLP) tasks including sentiment analysis …

Predicting text preference via structured comparative reasoning

JN Yan, T Liu, J Chiu, J Shen, Z Qin, Y Yu… - Proceedings of the …, 2024 - aclanthology.org
Comparative reasoning plays a crucial role in predicting text preferences; however, large
language models (LLMs) often demonstrate inconsistencies in their reasoning, leading to …

xTower: A multilingual LLM for explaining and correcting translation errors

M Treviso, NM Guerreiro, S Agrawal, R Rei… - arXiv preprint arXiv …, 2024 - arxiv.org
While machine translation (MT) systems are achieving increasingly strong performance on
benchmarks, they often produce translations with errors and anomalies. Understanding …

Trustworthy, responsible, and safe AI: A comprehensive architectural framework for AI safety with challenges and mitigations

C Chen, Z Liu, W Jiang, SQ Goh, KKY Lam - arXiv preprint arXiv …, 2024 - arxiv.org
AI Safety is an emerging area of critical importance to the safe adoption and deployment of
AI systems. With the rapid proliferation of AI and especially with the recent advancement of …