Pre-trained language models in biomedical domain: A systematic survey

B Wang, Q Xie, J Pei, Z Chen, P Tiwari, Z Li… - ACM Computing …, 2023 - dl.acm.org
Pre-trained language models (PLMs) have been the de facto paradigm for most natural
language processing tasks. This also benefits the biomedical domain: researchers from …

Faithful AI in medicine: a systematic review with large language models and beyond

Q Xie, EJ Schenck, HS Yang, Y Chen, Y Peng, F Wang - MedRxiv, 2023 - ncbi.nlm.nih.gov
Artificial intelligence (AI), especially the most recent large language models (LLMs), holds
great promise in healthcare and medicine, with applications spanning from biological …

Pre-training multi-task contrastive learning models for scientific literature understanding

Y Zhang, H Cheng, Z Shen, X Liu, YY Wang… - arXiv preprint arXiv …, 2023 - arxiv.org
Scientific literature understanding tasks have gained significant attention due to their
potential to accelerate scientific discovery. Pre-trained language models (LMs) have shown …

A survey for biomedical text summarization: From pre-trained to large language models

Q Xie, Z Luo, B Wang, S Ananiadou - arXiv preprint arXiv:2304.08763, 2023 - arxiv.org
The exponential growth of biomedical texts, such as biomedical literature and electronic
health records (EHRs), poses a significant challenge for clinicians and researchers to …

ChatGPT based contrastive learning for radiology report summarization

Z Luo, Z Jiang, M Wang, X Cai, D Gao… - Expert Systems with …, 2025 - Elsevier
Automatic Impression Generation (AIG) can condense the essential information of the
“Findings” section, thus facilitating more effective communication between radiographers …
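
The task framing here, producing an "Impression" from a report's "Findings" section with an instruction-following LLM, can be sketched as below. The client library, model name, and prompt wording are illustrative assumptions, not the pipeline described in the paper.

```python
# Minimal sketch of the AIG task framing: generate an "Impression" from the
# "Findings" section of a radiology report with an instruction-following LLM.
# Model name and prompt wording are placeholder assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def generate_impression(findings: str) -> str:
    """Condense a Findings section into a short Impression."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {"role": "system",
             "content": "You are a radiology assistant. Summarize the key "
                        "diagnostic conclusions of the Findings as a concise Impression."},
            {"role": "user", "content": findings},
        ],
        temperature=0.2,
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    sample_findings = ("The lungs are clear. No pleural effusion or pneumothorax. "
                       "Heart size is within normal limits.")
    print(generate_impression(sample_findings))
```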

Graph contrastive topic model

Z Luo, L Liu, S Ananiadou, Q Xie - Expert Systems with Applications, 2024 - Elsevier
Contrastive learning has recently been introduced into neural topic models (NTMs) to
improve latent semantic discovery, but existing methods suffer from the sample bias problem …

Enhancing abstractive summarization of scientific papers using structure information

T Bao, H Zhang, C Zhang - Expert Systems with Applications, 2025 - Elsevier
Abstractive summarization of scientific papers has always been a research focus, yet
existing methods face two main challenges. First, most summarization models rely on …

LitFM: A Retrieval Augmented Structure-aware Foundation Model For Citation Graphs

J Zhang, J Chen, A Maatouk, N Bui, Q Xie… - arXiv preprint arXiv …, 2024 - arxiv.org
With the advent of large language models (LLMs), managing scientific literature with them
has become a promising research direction. However, existing approaches often overlook …
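
The retrieval-augmented, structure-aware idea named in the title can be sketched as follows: candidate context papers are restricted to a query paper's citation neighborhood and ranked by embedding similarity before being placed into the prompt. The toy graph, embeddings, and prompt format are assumptions for illustration, not LitFM's actual architecture.

```python
# Hedged sketch of retrieval-augmented prompting over a citation graph:
# context candidates come from the query paper's 1-hop citation neighborhood
# and are ranked by embedding similarity. Toy data throughout.
import numpy as np
import networkx as nx


def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))


def retrieve_neighbors(graph: nx.DiGraph, embeddings: dict, query: str, k: int = 2):
    """Rank papers in the query's citation neighborhood by embedding similarity."""
    neighborhood = set(graph.successors(query)) | set(graph.predecessors(query))
    scored = [(cosine(embeddings[query], embeddings[p]), p) for p in neighborhood]
    return [p for _, p in sorted(scored, reverse=True)[:k]]


def build_prompt(query_title: str, retrieved: list, abstracts: dict) -> str:
    context = "\n".join(f"- {p}: {abstracts[p]}" for p in retrieved)
    return (f"Related work retrieved from the citation graph:\n{context}\n\n"
            f"Task: summarize how '{query_title}' relates to these papers.")


if __name__ == "__main__":
    G = nx.DiGraph()
    G.add_edge("paper_A", "paper_B")   # A cites B
    G.add_edge("paper_C", "paper_A")   # C cites A
    rng = np.random.default_rng(0)
    emb = {p: rng.normal(size=8) for p in G.nodes}
    abstracts = {p: f"abstract of {p}" for p in G.nodes}
    hits = retrieve_neighbors(G, emb, "paper_A")
    print(build_prompt("paper_A", hits, abstracts))
```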

Graph Contrastive Topic Model

Z Luo, L Liu, Q Xie, S Ananiadou - arXiv preprint arXiv:2307.02078, 2023 - arxiv.org
Existing NTMs with contrastive learning suffer from the sample bias problem owing to the
word frequency-based sampling strategy, which may result in false negative samples with …
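
The sample bias problem described in both versions of this entry can be illustrated with a generic InfoNCE-style objective in which negatives are drawn by a word-frequency heuristic: when high-frequency words are shared by topically related documents, such "negatives" may in fact be false negatives. The encoder, sampler, and loss below are a minimal sketch of that failure mode, not the graph contrastive topic model proposed in the paper.

```python
# Hedged sketch of an InfoNCE-style contrastive objective over document
# representations, with negatives chosen by a word-frequency heuristic.
# Illustrates the sample-bias issue only; all components are toy stand-ins.
import torch
import torch.nn.functional as F


def contrastive_loss(anchor, positive, negatives, temperature: float = 0.1):
    """InfoNCE: pull anchor toward its positive, push away sampled negatives."""
    pos_sim = F.cosine_similarity(anchor, positive, dim=-1) / temperature        # (B,)
    neg_sim = F.cosine_similarity(anchor.unsqueeze(1), negatives, dim=-1) / temperature  # (B, K)
    logits = torch.cat([pos_sim.unsqueeze(1), neg_sim], dim=1)                   # (B, 1+K)
    labels = torch.zeros(anchor.size(0), dtype=torch.long)                       # positive at index 0
    return F.cross_entropy(logits, labels)


def frequency_based_negatives(bow, k: int = 3):
    """Toy negative sampler: zero out each document's k most frequent words.
    If those words are shared across topically similar documents, the resulting
    'negatives' can remain semantically close to the anchor (false negatives)."""
    top = bow.topk(k, dim=-1).indices
    perturbed = bow.clone()
    perturbed.scatter_(-1, top, 0.0)
    return perturbed


if __name__ == "__main__":
    torch.manual_seed(0)
    B, V, K, D = 4, 50, 5, 16
    encoder = torch.nn.Linear(V, D)                      # stand-in for a topic encoder
    bow = torch.rand(B, V)                               # toy bag-of-words inputs
    anchor = encoder(bow)
    positive = encoder(bow + 0.01 * torch.rand(B, V))    # lightly augmented view
    neg_bow = frequency_based_negatives(bow).unsqueeze(1) + 0.01 * torch.rand(B, K, V)
    negatives = encoder(neg_bow)                         # (B, K, D)
    print(contrastive_loss(anchor, positive, negatives).item())
```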

Leveraging Knowledge-aware Methodologies for Multi-document Summarization

Y Qu - Companion Proceedings of the ACM on Web …, 2024 - dl.acm.org
With the development of information technology, large volumes of information and corpora
have been continuously generated on the Web, stimulating an increasingly high demand for …