Deep learning for aspect-based sentiment analysis: a comparative review

HH Do, PWC Prasad, A Maag, A Alsadoon - Expert systems with …, 2019 - Elsevier
The increasing volume of user-generated content on the web has made sentiment analysis
an important tool for the extraction of information about the human emotional state. A current …

A survey on aspect-based sentiment classification

G Brauwers, F Frasincar - ACM Computing Surveys, 2022 - dl.acm.org
With the constantly growing number of reviews and other sentiment-bearing texts on the
Web, the demand for automatic sentiment analysis algorithms continues to expand. Aspect …

How close is ChatGPT to human experts? Comparison corpus, evaluation, and detection

B Guo, X Zhang, Z Wang, M Jiang, J Nie, Y Ding… - arXiv preprint arXiv …, 2023 - arxiv.org
The introduction of ChatGPT has garnered widespread attention in both academic and
industrial communities. ChatGPT is able to respond effectively to a wide range of human …

Text embeddings by weakly-supervised contrastive pre-training

L Wang, N Yang, X Huang, B Jiao, L Yang… - arXiv preprint arXiv …, 2022 - arxiv.org
This paper presents E5, a family of state-of-the-art text embeddings that transfer well to a
wide range of tasks. The model is trained in a contrastive manner with weak supervision …

ColBERTv2: Effective and efficient retrieval via lightweight late interaction

K Santhanam, O Khattab, J Saad-Falcon… - arXiv preprint arXiv …, 2021 - arxiv.org
Neural information retrieval (IR) has greatly advanced search and other knowledge-
intensive language tasks. While many neural IR methods encode queries and documents …

BEIR: A heterogenous benchmark for zero-shot evaluation of information retrieval models

N Thakur, N Reimers, A Rücklé, A Srivastava… - arXiv preprint arXiv …, 2021 - arxiv.org
Existing neural information retrieval (IR) models have often been studied in homogeneous
and narrow settings, which has considerably limited insights into their out-of-distribution …

Adapting large language models via reading comprehension

D Cheng, S Huang, F Wei - The Twelfth International Conference on …, 2023 - openreview.net
We explore how continued pre-training on domain-specific corpora influences large
language models, revealing that training on the raw corpora endows the model with domain …

Promptagator: Few-shot dense retrieval from 8 examples

Z Dai, VY Zhao, J Ma, Y Luan, J Ni, J Lu… - arXiv preprint arXiv …, 2022 - arxiv.org
Much recent research on information retrieval has focused on how to transfer from one task
(typically with abundant supervised data) to various other tasks where supervision is limited …

Dense text retrieval based on pretrained language models: A survey

WX Zhao, J Liu, R Ren, JR Wen - ACM Transactions on Information …, 2024 - dl.acm.org
Text retrieval is a long-standing research topic on information seeking, where a system is
required to return relevant information resources to users' queries in natural language. From …

FinGPT: Democratizing internet-scale data for financial large language models

XY Liu, G Wang, H Yang, D Zha - arXiv preprint arXiv:2307.10485, 2023 - arxiv.org
Large language models (LLMs) have demonstrated remarkable proficiency in
understanding and generating human-like texts, which may potentially revolutionize the …