Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning

F Liu, K Lin, L Li, J Wang, Y Yacoob, L Wang - 2023 - researchgate.net
Despite the promising progress in multi-modal tasks, current large multi-modal models
(LMMs) are prone to hallucinating inconsistent descriptions with respect to the associated …
