Y Zhou, V Srikumar - arXiv preprint arXiv:2106.14282, 2021 - arxiv.org
Given the prevalence of pre-trained contextualized representations in today's NLP, there have been many efforts to understand what information they contain, and why they seem to …
One of the major outstanding questions in computational semantics is how humans integrate the meaning of individual words into a sentence in a way that enables understanding of …
Y Zhou, V Srikumar - arXiv preprint arXiv:2104.05904, 2021 - arxiv.org
Understanding how linguistic structures are encoded in contextualized embeddings could help explain their impressive performance across NLP. Existing approaches for probing …
Pre-trained transformer models shine in many natural language processing tasks and are therefore expected to bear a representation of the meaning of the input sentence or text …
Recent advances in NLP show that language models retain a discernible level of knowledge in deontological ethics and moral norms. However, existing works often treat morality as …
Obtaining meaning-rich representations of social media inputs, such as Tweets (unstructured and noisy text), from general-purpose pre-trained language models has …
Sentence encoders map sentences to real-valued vectors for use in downstream applications. To peek into these representations, e.g., to increase interpretability of their …
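Several of the excerpts above concern probing what sentence representations encode. As a hedged illustration of that general idea only (not the method of any paper listed here), the sketch below encodes sentences with a generic pre-trained sentence encoder and fits a linear probe for a toy property; the model name, the toy labeling task, and the scikit-learn probe are assumptions chosen purely for illustration.

```python
# Minimal probing sketch (illustrative assumptions, not any cited paper's setup):
# encode sentences into fixed vectors, then train a linear probe to test whether
# a simple property (here, sentence length) is linearly recoverable from them.
# Requires the sentence-transformers and scikit-learn packages.
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

sentences = [
    "Dogs bark.",
    "The committee postponed the vote until next week.",
    "It rained.",
    "Researchers probed the embeddings for syntactic structure.",
] * 25  # tiny toy corpus, repeated only to give the probe something to fit

# Toy probing target: is the sentence longer than four words?
labels = [int(len(s.split()) > 4) for s in sentences]

# 'all-MiniLM-L6-v2' is an arbitrary illustrative encoder choice.
model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(sentences)  # shape: (n_sentences, embedding_dim)

X_train, X_test, y_train, y_test = train_test_split(
    embeddings, labels, test_size=0.3, random_state=0
)
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("Probe accuracy:", probe.score(X_test, y_test))
```

High probe accuracy on held-out sentences is usually read as evidence that the property is encoded (at least linearly) in the representation, which is the kind of question the probing work excerpted above investigates with more careful controls.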
R Pandey - arXiv preprint arXiv:2305.16328, 2023 - arxiv.org
What is sentence meaning, and what is its ideal representation? Much of the expressive power of human language derives from semantic composition, the mind's ability to represent meaning …
In this paper, our focus is on the connection between language technologies and research in neurolinguistics, and on the influence of the former on the latter. We present a review of brain imaging-based neurolinguistic …