Opportunities and challenges for ChatGPT and large language models in biomedicine and health

S Tian, Q Jin, L Yeganova, PT Lai, Q Zhu… - Briefings in …, 2024 - academic.oup.com
ChatGPT has drawn considerable attention from both the general public and domain experts
with its remarkable text generation capabilities. This has subsequently led to the emergence …

A survey of knowledge enhanced pre-trained language models

L Hu, Z Liu, Z Zhao, L Hou, L Nie… - IEEE Transactions on …, 2023 - ieeexplore.ieee.org
Pre-trained Language Models (PLMs), which are trained on large text corpora via self-
supervised learning, have yielded promising performance on various tasks in …

Galactica: A large language model for science

R Taylor, M Kardas, G Cucurull, T Scialom… - arXiv preprint arXiv …, 2022 - arxiv.org
Information overload is a major obstacle to scientific progress. The explosive growth in
scientific literature and data has made it ever harder to discover useful insights in a large …

LinkBERT: Pretraining language models with document links

M Yasunaga, J Leskovec, P Liang - arXiv preprint arXiv:2203.15827, 2022 - arxiv.org
Language model (LM) pretraining can learn various knowledge from text corpora, helping
downstream tasks. However, existing methods such as BERT model a single document, and …

Domain-specific language model pretraining for biomedical natural language processing

Y Gu, R Tinn, H Cheng, M Lucas, N Usuyama… - ACM Transactions on …, 2021 - dl.acm.org
Pretraining large neural language models, such as BERT, has led to impressive gains on
many natural language processing (NLP) tasks. However, most pretraining efforts focus on …

SciBERT: A pretrained language model for scientific text

I Beltagy, K Lo, A Cohan - arXiv preprint arXiv:1903.10676, 2019 - arxiv.org
Obtaining large-scale annotated data for NLP tasks in the scientific domain is challenging
and expensive. We release SciBERT, a pretrained language model based on BERT (Devlin …

BioBERT: a pre-trained biomedical language representation model for biomedical text mining

J Lee, W Yoon, S Kim, D Kim, S Kim, CH So… - …, 2020 - academic.oup.com
Motivation Biomedical text mining is becoming increasingly important as the number of
biomedical documents rapidly grows. With the progress in natural language processing …

S2ORC: The semantic scholar open research corpus

K Lo, LL Wang, M Neumann, R Kinney… - arXiv preprint arXiv …, 2019 - arxiv.org
We introduce S2ORC, a large corpus of 81.1M English-language academic papers
spanning many academic disciplines. The corpus consists of rich metadata, paper abstracts …

A survey on recent advances in named entity recognition from deep learning models

V Yadav, S Bethard - arXiv preprint arXiv:1910.11470, 2019 - arxiv.org
Named Entity Recognition (NER) is a key component in NLP systems for question
answering, information retrieval, relation extraction, etc. NER systems have been studied …

ASRNN: A recurrent neural network with an attention model for sequence labeling

JCW Lin, Y Shao, Y Djenouri, U Yun - Knowledge-Based Systems, 2021 - Elsevier
Natural language processing (NLP) is useful for handling text and speech, and sequence
labeling plays an important role by automatically analyzing a sequence (text) to assign …