Recent advances in natural language processing via large pre-trained language models: A survey

B Min, H Ross, E Sulem, APB Veyseh… - ACM Computing …, 2023 - dl.acm.org
Large, pre-trained language models (PLMs) such as BERT and GPT have drastically
changed the Natural Language Processing (NLP) field. For numerous NLP tasks …

Deep learning-based text classification: a comprehensive review

S Minaee, N Kalchbrenner, E Cambria… - ACM Computing …, 2021 - dl.acm.org
Deep learning-based models have surpassed classical machine learning-based
approaches in various text classification tasks, including sentiment analysis, news …

A large language model for electronic health records

X Yang, A Chen, N PourNejatian, HC Shin… - NPJ digital …, 2022 - nature.com
There is an increasing interest in developing artificial intelligence (AI) systems to process
and interpret electronic health records (EHRs). Natural language processing (NLP) powered …

SelfCheckGPT: Zero-resource black-box hallucination detection for generative large language models

P Manakul, A Liusie, MJF Gales - arXiv preprint arXiv:2303.08896, 2023 - arxiv.org
Generative Large Language Models (LLMs) such as GPT-3 are capable of generating highly
fluent responses to a wide variety of user prompts. However, LLMs are known to hallucinate …

A primer in BERTology: What we know about how BERT works

A Rogers, O Kovaleva, A Rumshisky - Transactions of the Association …, 2021 - direct.mit.edu
Transformer-based models have pushed the state of the art in many areas of NLP, but our
understanding of what is behind their success is still limited. This paper is the first survey of …

A survey of knowledge enhanced pre-trained language models

L Hu, Z Liu, Z Zhao, L Hou, L Nie… - IEEE Transactions on …, 2023 - ieeexplore.ieee.org
Pre-trained Language Models (PLMs), which are trained on large text corpora via self-supervised learning methods, have yielded promising performance on various tasks in …

Large pre-trained language models contain human-like biases of what is right and wrong to do

P Schramowski, C Turan, N Andersen… - Nature Machine …, 2022 - nature.com
Artificial writing is permeating our lives due to recent advances in large-scale, transformer-
based language models (LMs) such as BERT, GPT-2 and GPT-3. Using them as pre-trained …

TaBERT: Pretraining for joint understanding of textual and tabular data

P Yin, G Neubig, W Yih, S Riedel - arXiv preprint arXiv:2005.08314, 2020 - arxiv.org
Recent years have witnessed the burgeoning of pretrained language models (LMs) for text-
based natural language (NL) understanding tasks. Such models are typically trained on free …

Large language models for forecasting and anomaly detection: A systematic literature review

J Su, C Jiang, X Jin, Y Qiao, T Xiao, H Ma… - arXiv preprint arXiv …, 2024 - arxiv.org
This systematic literature review comprehensively examines the application of Large
Language Models (LLMs) in forecasting and anomaly detection, highlighting the current …

Retrospective reader for machine reading comprehension

Z Zhang, J Yang, H Zhao - Proceedings of the AAAI conference on …, 2021 - ojs.aaai.org
Machine reading comprehension (MRC) is an AI challenge that requires machines
to determine the correct answers to questions based on a given passage. MRC systems …