Ten years of BabelNet: A survey

R Navigli, M Bevilacqua, S Conia, D Montagnini… - IJCAI, 2021 - iris.uniroma1.it
The intelligent manipulation of symbolic knowledge has been a long-sought goal of AI.
However, when it comes to Natural Language Processing (NLP), symbols have to be …

Semantic coherence markers: The contribution of perplexity metrics

D Colla, M Delsanto, M Agosto, B Vitiello… - Artificial Intelligence in …, 2022 - Elsevier
Devising automatic tools to assist specialists in the early detection of mental disturbances
and psychotic disorders remains a challenging scientific problem and a practically relevant …
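The perplexity metrics named in the title have a compact definition: for a token sequence whose language model assigns per-token probabilities, perplexity is the exponentiated negative mean log-probability, so more coherent (higher-probability) text scores lower. A minimal sketch; the probability values are illustrative, not taken from the paper's models:

```python
import math

def perplexity(token_probs):
    """Perplexity of a sequence given per-token model probabilities:
    exp of the negative mean log-probability."""
    n = len(token_probs)
    return math.exp(-sum(math.log(p) for p in token_probs) / n)

# A fluent, coherent sequence receives higher token probabilities,
# and hence lower perplexity, than an incoherent one.
coherent = [0.9, 0.8, 0.85, 0.9]
incoherent = [0.1, 0.05, 0.2, 0.1]
assert perplexity(coherent) < perplexity(incoherent)
```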

LMMS reloaded: Transformer-based sense embeddings for disambiguation and beyond

D Loureiro, AM Jorge, J Camacho-Collados - Artificial Intelligence, 2022 - Elsevier
Distributional semantics based on neural approaches is a cornerstone of Natural Language
Processing, with surprising connections to human meaning representation as well. Recent …

A computational analysis of transcribed speech of people living with dementia: The Anchise 2022 Corpus

F Sigona, DP Radicioni, BG Fivela, D Colla… - Computer Speech & …, 2025 - Elsevier
Automatic linguistic analysis can provide cost-effective, valuable clues to the
diagnosis of cognitive difficulties and to therapeutic practice, and hence impact positively on …

CONcreTEXT norms: Concreteness ratings for Italian and English words in context

M Montefinese, L Gregori, AA Ravelli, R Varvara… - Plos one, 2023 - journals.plos.org
Concreteness is a fundamental dimension of word semantic representation that has
attracted increasing interest, becoming one of the most studied variables in the …

Large scale substitution-based word sense induction

M Eyal, S Sadde, H Taub-Tabib, Y Goldberg - arXiv preprint arXiv …, 2021 - arxiv.org
We present a word-sense induction method based on pre-trained masked language models
(MLMs), which can cheaply scale to large vocabularies and large corpora. The result is a …
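The substitution-based idea in this snippet can be illustrated concretely: each occurrence of an ambiguous word is represented by the substitutes a masked language model proposes for it, and occurrences are grouped into induced senses by substitute overlap. A toy sketch with hand-written substitute sets standing in for MLM predictions; the greedy Jaccard grouping rule here is an illustrative simplification, not the paper's algorithm:

```python
def jaccard(a, b):
    """Jaccard overlap between two substitute sets."""
    return len(a & b) / len(a | b)

def induce_senses(substitute_sets, threshold=0.2):
    """Greedily group occurrences whose substitute sets overlap
    above `threshold`. Each occurrence is the set of substitutes
    proposed for it by a masked language model."""
    clusters = []  # lists of occurrence indices
    reps = []      # running union of substitutes per cluster
    for i, subs in enumerate(substitute_sets):
        for c, rep in enumerate(reps):
            if jaccard(subs, rep) >= threshold:
                clusters[c].append(i)
                rep |= subs
                break
        else:
            clusters.append([i])
            reps.append(set(subs))
    return clusters

# Occurrences of "bank": two financial, one river (toy substitutes).
occurrences = [
    {"lender", "institution", "firm"},
    {"lender", "institution", "company"},
    {"shore", "riverside", "edge"},
]
print(induce_senses(occurrences))  # → [[0, 1], [2]]
```

Because the grouping only compares substitute sets, it scales with vocabulary and corpus size in the way the snippet describes: no sense inventory is needed up front.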

Learning sense-specific static embeddings using contextualised word embeddings as a proxy

Y Zhou, D Bollegala - arXiv preprint arXiv:2110.02204, 2021 - arxiv.org
Contextualised word embeddings generated from Neural Language Models (NLMs), such
as BERT, represent a word with a vector that considers the semantics of the target word as …
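One common way to use contextualised embeddings as a proxy for sense-specific static embeddings, in the spirit of the snippet above, is to average the contextual vectors of occurrences tagged with the same sense. A minimal sketch with made-up 3-dimensional vectors in place of BERT outputs; this is an illustrative baseline, not the authors' exact procedure:

```python
def sense_centroids(occurrences):
    """Average contextualised vectors per sense tag to obtain one
    static vector per sense. `occurrences` is a list of
    (sense_tag, vector) pairs."""
    sums, counts = {}, {}
    for sense, vec in occurrences:
        acc = sums.setdefault(sense, [0.0] * len(vec))
        for k, x in enumerate(vec):
            acc[k] += x
        counts[sense] = counts.get(sense, 0) + 1
    return {s: [x / counts[s] for x in acc] for s, acc in sums.items()}

# Toy contextual vectors for two senses of "bank".
occurrences = [
    ("bank_finance", [1.0, 0.0, 0.0]),
    ("bank_finance", [0.0, 1.0, 0.0]),
    ("bank_river",   [0.0, 0.0, 1.0]),
]
print(sense_centroids(occurrences))
# → {'bank_finance': [0.5, 0.5, 0.0], 'bank_river': [0.0, 0.0, 1.0]}
```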

Novel metrics for computing semantic similarity with sense embeddings

D Colla, E Mensa, DP Radicioni - Knowledge-Based Systems, 2020 - Elsevier
In recent years, much effort has been devoted to building word embeddings, a representational
device in which word meanings are described through dense unit vectors of real numbers …
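When each word is associated with several sense embeddings rather than one vector, a word-level similarity must aggregate over sense pairs. The max-over-senses rule sketched below is a standard baseline strategy, shown here only for context; it is an assumption of this sketch, not necessarily one of the paper's novel metrics:

```python
import math

def cosine(u, v):
    """Cosine similarity between two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def max_sense_similarity(senses_w1, senses_w2):
    """Word similarity as the best-matching cross-word pair
    of sense embeddings."""
    return max(cosine(u, v) for u in senses_w1 for v in senses_w2)

# Toy sense vectors: "bank" (finance sense vs. river sense) vs. "money".
bank = [[0.9, 0.1], [0.1, 0.9]]
money = [[1.0, 0.0]]
print(max_sense_similarity(bank, money))
```

Taking the maximum lets the financial sense of "bank" dominate the comparison with "money", even though the river sense is nearly orthogonal to it.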

From Smart City to Smart Society: A quality-of-life ontological model for problem detection from user-generated content

C Periñán-Pascual - Applied Ontology, 2023 - content.iospress.com
Social-media platforms have become a global phenomenon of communication, where users
publish content in text, images, video, audio or a combination of them to convey opinions …

Temporal Word Meaning Disambiguation using TimeLMs

M Godbole, P Dandavate, A Kane - arXiv preprint arXiv:2210.08207, 2022 - arxiv.org
The meaning of words changes constantly in response to events in modern civilization. Large
Language Models use word embeddings, which are often static and thus cannot cope with …