Pre-trained language models (PLMs) have been the de facto paradigm for most natural language processing tasks. This also benefits the biomedical domain: researchers from …
Y Shen, L Heacock, J Elias, KD Hentel, B Reig, G Shih… - Radiology, 2023 - pubs.rsna.org
rules integrated into its technology. Additionally, users must carefully craft questions or prompts, providing specific information about a clinical scenario and potential …
Image diffusion models such as DALL-E 2, Imagen, and Stable Diffusion have attracted significant attention due to their ability to generate high-quality synthetic images. In this work …
The stunning qualitative improvement of text-to-image models has led to their widespread attention and adoption. However, we lack a comprehensive quantitative understanding of …
An increasing number of public datasets have shown a marked impact on automated organ segmentation and tumor detection. However, due to the small size and partially labeled …
Prompt engineering is a critical technique in the field of natural language processing that involves designing and optimizing the prompts used to input information into models, aiming …
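The snippet above describes prompt engineering only in general terms. As a minimal illustrative sketch (the template wording, field names, and the clinical framing are assumptions for demonstration, not taken from any cited work), designing a prompt can be as simple as assembling task context and constraints into a structured template:

```python
# Illustrative prompt-engineering sketch: a template that injects a
# task-specific scenario and an explicit output constraint into the
# text handed to a language model. All wording here is a hypothetical
# example, not the format of any particular system.

def build_prompt(scenario: str, question: str,
                 output_format: str = "a short, direct answer") -> str:
    """Assemble a structured prompt from a scenario and a question."""
    return (
        "You are a careful assistant.\n"
        f"Scenario: {scenario}\n"
        f"Question: {question}\n"
        f"Respond with {output_format}."
    )

prompt = build_prompt(
    scenario="55-year-old patient with an incidental imaging finding",
    question="What follow-up is typically recommended?",
)
print(prompt)
```

The point of such a template is the one the snippets make: model output quality depends heavily on how specifically the scenario and the expected answer format are spelled out in the prompt, so those parts are made explicit, named slots rather than free-form text.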
Multimodal models trained on large natural image-text pair datasets have exhibited astounding abilities in generating high-quality images. Medical imaging data is …
C Chen, J Fu, L Lyu - arXiv preprint arXiv:2303.01325, 2023 - arxiv.org
AI-Generated Content (AIGC) has received tremendous attention within the past few years, with content generated in formats including image, text, audio, and video. Meanwhile, AIGC has …
H Bansal, A Grover - arXiv preprint arXiv:2302.02503, 2023 - arxiv.org
Recent research on robustness has revealed significant performance gaps between neural image classifiers trained on datasets similar to the test set and those trained on datasets from a …