Delving into masked autoencoders for multi-label thorax disease classification

J Xiao, Y Bai, A Yuille, Z Zhou - Proceedings of the IEEE …, 2023 - openaccess.thecvf.com
Abstract Vision Transformer (ViT) has become one of the most popular neural architectures
due to its simplicity, scalability, and compelling performance in multiple vision tasks …

Seeking an optimal approach for Computer-aided Diagnosis of Pulmonary Embolism

NU Islam, Z Zhou, S Gehlot, MB Gotway, J Liang - Medical image analysis, 2024 - Elsevier
Pulmonary Embolism (PE) represents a thrombus (“blood clot”), usually originating from a
lower extremity vein, that travels to the blood vessels in the lung, causing vascular …

A survey of the impact of self-supervised pretraining for diagnostic tasks in medical X-ray, CT, MRI, and ultrasound

B VanBerlo, J Hoey, A Wong - BMC Medical Imaging, 2024 - Springer
Self-supervised pretraining has been observed to be effective at improving feature
representations for transfer learning, leveraging large amounts of unlabelled data. This …

Self-supervised learning for medical image analysis: Discriminative, restorative, or adversarial?

F Haghighi, MRH Taher, MB Gotway, J Liang - Medical Image Analysis, 2024 - Elsevier
Discriminative, restorative, and adversarial learning have proven beneficial for self-
supervised learning schemes in computer vision and medical imaging. Existing efforts …

Representing Part-Whole Hierarchies in Foundation Models by Learning Localizability, Composability, and Decomposability from Anatomy via Self-Supervision

MRH Taher, MB Gotway… - Proceedings of the IEEE …, 2024 - openaccess.thecvf.com
Humans effortlessly interpret images by parsing them into part-whole hierarchies; deep
learning models excel at learning multi-level feature spaces, but they often lack explicit coding of …

MeSa: Masked, geometric, and supervised pre-training for monocular depth estimation

MO Khan, J Liang, CK Wang, S Yang, Y Lou - arXiv preprint arXiv …, 2023 - arxiv.org
Pre-training has been an important ingredient in developing strong monocular depth
estimation models in recent years. For instance, self-supervised learning (SSL) is …

A survey of the impact of self-supervised pretraining for diagnostic tasks with radiological images

B VanBerlo, J Hoey, A Wong - arXiv preprint arXiv:2309.02555, 2023 - arxiv.org
Self-supervised pretraining has been observed to be effective at improving feature
representations for transfer learning, leveraging large amounts of unlabelled data. This …

Revisiting fine-tuning strategies for self-supervised medical imaging analysis

MO Khan, Y Fang - arXiv preprint arXiv:2307.10915, 2023 - arxiv.org
Despite the rapid progress in self-supervised learning (SSL), end-to-end fine-tuning
remains the dominant fine-tuning strategy for medical imaging analysis. However, it remains …