ChatGPT is a knowledgeable but inexperienced solver: An investigation of commonsense problem in large language models

N Bian, X Han, L Sun, H Lin, Y Lu, B He, S Jiang… - arXiv preprint arXiv …, 2023 - arxiv.org
Large language models (LLMs) have made significant progress in NLP. However, their
ability to memorize, represent, and leverage commonsense knowledge has been a well …

State-of-the-art generalisation research in NLP: a taxonomy and review

D Hupkes, M Giulianelli, V Dankers, M Artetxe… - arXiv preprint arXiv …, 2022 - arxiv.org
The ability to generalise well is one of the primary desiderata of natural language
processing (NLP). Yet, what 'good generalisation' entails and how it should be evaluated is …

Visualize before you write: Imagination-guided open-ended text generation

W Zhu, A Yan, Y Lu, W Xu, XE Wang… - arXiv preprint arXiv …, 2022 - arxiv.org
Recent advances in text-to-image synthesis make it possible to visualize machine
imaginations for a given context. On the other hand, when generating text, human writers are …

BRAINTEASER: Lateral Thinking Puzzles for Large Language Models

Y Jiang, F Ilievski, K Ma - arXiv preprint arXiv:2310.05057, 2023 - arxiv.org
The success of language models has inspired the NLP community to attend to tasks that
require implicit and complex reasoning, relying on human-like commonsense mechanisms …

Robust and explainable identification of logical fallacies in natural language arguments

Z Sourati, VPP Venkatesh, D Deshpande… - Knowledge-Based …, 2023 - Elsevier
The spread of misinformation, propaganda, and flawed argumentation has been amplified in
the Internet era. Given the volume of data and the subtlety of identifying violations of …

SemEval-2024 Task 9: BRAINTEASER: A novel task defying common sense

Y Jiang, F Ilievski, K Ma - arXiv preprint arXiv:2404.16068, 2024 - arxiv.org
While vertical thinking relies on logical and commonsense reasoning, lateral thinking
requires systems to defy commonsense associations and overwrite them through …

Coalescing global and local information for procedural text understanding

K Ma, F Ilievski, J Francis, E Nyberg… - arXiv preprint arXiv …, 2022 - arxiv.org
Procedural text understanding is a challenging language reasoning task that requires
models to track entity states across the development of a narrative. A complete procedural …

A study of situational reasoning for traffic understanding

J Zhang, F Ilievski, K Ma, A Kollaa, J Francis… - Proceedings of the 29th …, 2023 - dl.acm.org
Intelligent Traffic Monitoring (ITMo) technologies hold the potential for improving road
safety/security and for enabling smart city infrastructure. Understanding traffic situations …

Tree visualizations of protein sequence embedding space enable improved functional clustering of diverse protein superfamilies

W Yeung, Z Zhou, L Mathew, N Gravel… - Briefings in …, 2023 - academic.oup.com
Protein language models, trained on millions of biologically observed sequences, generate
feature-rich numerical representations of protein sequences. These representations, called …

A Study of Zero-shot Adaptation with Commonsense Knowledge.

J Zhang, F Ilievski, K Ma, J Francis, A Oltramari - AKBC, 2022 - akbc.ws
Self-supervision with synthetic training data built from knowledge graphs has been proven
useful to enhance the language model accuracy in zero-shot evaluation on commonsense …