Survey on factuality in large language models: Knowledge, retrieval and domain-specificity

C Wang, X Liu, Y Yue, X Tang, T Zhang… - arXiv preprint arXiv …, 2023 - arxiv.org
This survey addresses the crucial issue of factuality in Large Language Models (LLMs). As
LLMs find applications across diverse domains, the reliability and accuracy of their outputs …

Zhongjing: Enhancing the Chinese medical capabilities of large language model through expert feedback and real-world multi-turn dialogue

S Yang, H Zhao, S Zhu, G Zhou, H Xu, Y Jia… - Proceedings of the AAAI …, 2024 - ojs.aaai.org
Recent advances in Large Language Models (LLMs) have achieved remarkable
breakthroughs in understanding and responding to user intents. However, their performance …

How close is ChatGPT to human experts? Comparison corpus, evaluation, and detection

B Guo, X Zhang, Z Wang, M Jiang, J Nie, Y Ding… - arXiv preprint arXiv …, 2023 - arxiv.org
The introduction of ChatGPT has garnered widespread attention in both academic and
industrial communities. ChatGPT is able to respond effectively to a wide range of human …

InternVL: Scaling up vision foundation models and aligning for generic visual-linguistic tasks

Z Chen, J Wu, W Wang, W Su, G Chen… - Proceedings of the …, 2024 - openaccess.thecvf.com
The exponential growth of large language models (LLMs) has opened up numerous
possibilities for multi-modal AGI systems. However, the progress in vision and vision …

Large language models in medical and healthcare fields: applications, advances, and challenges

D Wang, S Zhang - Artificial Intelligence Review, 2024 - Springer
Large language models (LLMs) are increasingly recognized for their advanced language
capabilities, offering significant assistance in diverse areas like medical communication …

M3IT: A Large-Scale Dataset towards Multi-Modal Multilingual Instruction Tuning

L Li, Y Yin, S Li, L Chen, P Wang, S Ren, M Li… - arXiv preprint arXiv …, 2023 - arxiv.org
Instruction tuning has significantly advanced large language models (LLMs) such as
ChatGPT, enabling them to align with human instructions across diverse tasks. However …

AltCLIP: Altering the language encoder in CLIP for extended language capabilities

Z Chen, G Liu, BW Zhang, F Ye, Q Yang… - arXiv preprint arXiv …, 2022 - arxiv.org
In this work, we present a conceptually simple and effective method to train a strong
bilingual/multilingual multimodal representation model. Starting from the pre-trained …

Radiology-Llama2: Best-in-class large language model for radiology

Z Liu, Y Li, P Shu, A Zhong, L Yang, C Ju, Z Wu… - arXiv preprint arXiv …, 2023 - arxiv.org
This paper introduces Radiology-Llama2, a large language model specialized for radiology
through a process known as instruction tuning. Radiology-Llama2 is based on the Llama2 …

CMB: A comprehensive medical benchmark in Chinese

X Wang, GH Chen, D Song, Z Zhang, Z Chen… - arXiv preprint arXiv …, 2023 - arxiv.org
Large Language Models (LLMs) offer the possibility of a major breakthrough in
medicine. The establishment of a standardized medical benchmark thus becomes a fundamental …

Few-shot adaptation of multi-modal foundation models: A survey

F Liu, T Zhang, W Dai, C Zhang, W Cai, X Zhou… - Artificial Intelligence …, 2024 - Springer
Multi-modal (vision-language) models, such as CLIP, are replacing traditional
supervised pre-training models (e.g., ImageNet-based pre-training) as the new generation of …