ASER: A large-scale eventuality knowledge graph

H Zhang, X Liu, H Pan, Y Song… - Proceedings of the web …, 2020 - dl.acm.org
Understanding human language requires complex world knowledge. However, existing
large-scale knowledge graphs mainly focus on knowledge about entities while ignoring …

Decoding word embeddings with brain-based semantic features

E Chersoni, E Santus, CR Huang, A Lenci - Computational Linguistics, 2021 - arpi.unipi.it
Word embeddings are vectorial semantic representations built with either counting or
predicting techniques aimed at capturing shades of meaning from word co-occurrences …

TransOMCS: From linguistic graphs to commonsense knowledge

H Zhang, D Khashabi, Y Song, D Roth - arXiv preprint arXiv:2005.00206, 2020 - arxiv.org
Commonsense knowledge acquisition is a key problem for artificial intelligence.
Conventional methods of acquiring commonsense knowledge generally require laborious …

Back to square one: Artifact detection, training and commonsense disentanglement in the Winograd schema

Y Elazar, H Zhang, Y Goldberg, D Roth - arXiv preprint arXiv:2104.08161, 2021 - arxiv.org
The Winograd Schema (WS) has been proposed as a test for measuring commonsense
capabilities of models. Recently, pre-trained language model-based approaches have …

ASER: Towards large-scale commonsense knowledge acquisition via higher-order selectional preference over eventualities

H Zhang, X Liu, H Pan, H Ke, J Ou, T Fang, Y Song - Artificial Intelligence, 2022 - Elsevier
Commonsense knowledge acquisition and reasoning have long been a core artificial
intelligence problem. However, in the past, there has been a lack of scalable methods to …

WinoWhy: A deep diagnosis of essential commonsense knowledge for answering Winograd schema challenge

H Zhang, X Zhao, Y Song - arXiv preprint arXiv:2005.05763, 2020 - arxiv.org
In this paper, we present the first comprehensive categorization of essential commonsense
knowledge for answering the Winograd Schema Challenge (WSC). For each of the …

Improving the TENOR of Labeling: Re-evaluating Topic Models for Content Analysis

Z Li, A Mao, D Stephens, P Goel… - Proceedings of the …, 2024 - aclanthology.org
Topic models are a popular tool for understanding text collections, but their evaluation has
been a point of contention. Automated evaluation metrics such as coherence are often used …

[BOOK][B] Distributional semantics

A Lenci, M Sahlgren - 2023 - books.google.com
Distributional semantics develops theories and methods to represent the meaning of natural
language expressions, with vectors encoding their statistical distribution in linguistic …

Did the cat drink the coffee? Challenging transformers with generalized event knowledge

P Pedinotti, G Rambelli, E Chersoni, E Santus… - arXiv preprint arXiv …, 2021 - arxiv.org
Prior research has explored the ability of computational models to predict a word's semantic fit
with a given predicate. While much work has been devoted to modeling the typicality relation …

Identifying the driving factors of word co-occurrence: a perspective of semantic relations

Y Zhao, J Yin, J Zhang, L Wu - Scientometrics, 2023 - Springer
This study aims to investigate and identify the driving factors of word co-occurrence from the
perspective of semantic relations between frequently co-occurring words. Natural sentences …