StaResGRU-CNN with CMedLMs: A stacked residual GRU-CNN with pre-trained biomedical language models for predictive intelligence

P Ni, G Li, PCK Hung, V Chang - Applied Soft Computing, 2021 - Elsevier
As a task that requires strong professional expertise, predictive biomedical
intelligence cannot be separated from the support of a large amount of external domain …

NCUEE at MEDIQA 2019: medical text inference using ensemble BERT-BiLSTM-attention model

LH Lee, Y Lu, PH Chen, PL Lee… - Proceedings of the 18th …, 2019 - aclanthology.org
This study describes the model design of the NCUEE system for the MEDIQA challenge at
the ACL-BioNLP 2019 workshop. We use the BERT (Bidirectional Encoder Representations …

MediBioDeBERTa: Biomedical Language Model with Continuous Learning and Intermediate Fine-Tuning

E Kim, Y Jeong, M Choi - IEEE Access, 2023 - ieeexplore.ieee.org
The emergence of large language models (LLMs) has marked a significant milestone in the
evolution of natural language processing. With the expanded use of LLMs in multiple fields …

A pre-trained BERT for Korean medical natural language processing

Y Kim, JH Kim, JM Lee, MJ Jang, YJ Yum, S Kim… - Scientific Reports, 2022 - nature.com
With advances in deep learning and natural language processing (NLP), the analysis of
medical texts is becoming increasingly important. Nonetheless, despite the importance of …

Incorporating domain knowledge into natural language inference on clinical texts

M Lu, Y Fang, F Yan, M Li - IEEE Access, 2019 - ieeexplore.ieee.org
Making inferences on clinical texts is a task that has not been fully studied. The newly
released, expert-annotated MedNLI dataset has given this task a boost. Compared with open …

ChiMed-GPT: A Chinese medical large language model with full training regime and better alignment to human preferences

Y Tian, R Gan, Y Song, J Zhang, Y Zhang - arXiv preprint arXiv …, 2023 - arxiv.org
Recently, the increasing demand for superior medical services has highlighted
discrepancies in the medical infrastructure. With big data, especially texts, forming the …

Qilin-Med: Multi-stage knowledge injection advanced medical large language model

Q Ye, J Liu, D Chong, P Zhou, Y Hua, A Liu - arXiv preprint arXiv …, 2023 - arxiv.org
Integrating large language models (LLMs) into healthcare presents potential but faces
challenges. Directly pre-training LLMs for domains like medicine is resource-heavy and …

Taiyi: a bilingual fine-tuned large language model for diverse biomedical tasks

L Luo, J Ning, Y Zhao, Z Wang, Z Ding… - Journal of the …, 2024 - academic.oup.com
Objective Most existing fine-tuned biomedical large language models (LLMs) focus on
enhancing performance in monolingual biomedical question answering and conversation …

BioBERTurk: Exploring Turkish Biomedical Language Model Development Strategies in Low-Resource Setting

H Türkmen, O Dikenelli, C Eraslan, MC Callı… - Journal of Healthcare …, 2023 - Springer
Pretrained language models augmented with in-domain corpora show impressive results in
biomedicine and clinical Natural Language Processing (NLP) tasks in English. However …

MRC-based Medical NER with Multi-task Learning and Multi-strategies

X Du, Y Jia, H Zan - China National Conference on Chinese …, 2022 - Springer
Medical named entity recognition (NER), a fundamental task of medical information
extraction, is crucial for medical knowledge graph construction, medical question answering …