K Huang, F Mo, H Li, Y Li, Y Zhang, W Yi, Y Mao… - arXiv preprint arXiv …, 2024 - arxiv.org
The rapid development of Large Language Models (LLMs) has demonstrated remarkable multilingual capabilities in natural language processing, attracting global attention in both …
With the advancement of language models (LMs), their exposure to private data is increasingly inevitable, and their deployment (especially of smaller models) on personal …
K Marchisio, WY Ko, A Bérard, T Dehaze… - arXiv preprint arXiv …, 2024 - arxiv.org
We investigate a surprising limitation of LLMs: their inability to consistently generate text in a user's desired language. We create the Language Confusion Benchmark (LCB) to evaluate …
X Wang, J Pan, L Ding, C Biemann - arXiv preprint arXiv:2403.18715, 2024 - arxiv.org
Large Vision-Language Models (LVLMs) are increasingly adept at generating contextually detailed and coherent responses from visual inputs. However, their application in …
T Tang, W Luo, H Huang, D Zhang, X Wang… - arXiv preprint arXiv …, 2024 - arxiv.org
Large language models (LLMs) demonstrate remarkable multilingual capabilities without being pre-trained on specially curated multilingual parallel corpora. It remains a challenging …
S Yu, PH Seo, J Son - arXiv preprint arXiv:2407.07412, 2024 - arxiv.org
We propose a new framework that automatically generates high-quality segmentation masks with their referring expressions as pseudo-supervision for referring image segmentation …
Zero-shot in-context learning is the phenomenon where models can perform a task given only its instructions. However, pre-trained large language models are known to be poorly …
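The distinction the snippet above draws can be made concrete: zero-shot in-context learning supplies the model only a task instruction, while few-shot prompting prepends worked demonstrations. A minimal sketch, assuming illustrative helper names not taken from the cited work:

```python
# Sketch of zero-shot vs. few-shot prompt construction (illustrative only;
# function names and prompt format are assumptions, not from the paper).

def build_zero_shot_prompt(instruction: str, query: str) -> str:
    """Zero-shot: the instruction and the query, with no demonstrations."""
    return f"{instruction}\n\nInput: {query}\nOutput:"

def build_few_shot_prompt(instruction: str,
                          examples: list[tuple[str, str]],
                          query: str) -> str:
    """Few-shot: the same instruction, but with (input, output) demos prepended."""
    demos = "\n".join(f"Input: {x}\nOutput: {y}" for x, y in examples)
    return f"{instruction}\n\n{demos}\n\nInput: {query}\nOutput:"

instruction = "Classify the sentiment as positive or negative."
zero = build_zero_shot_prompt(instruction, "I loved this film.")
few = build_few_shot_prompt(
    instruction,
    [("Terrible plot.", "negative"), ("Great acting!", "positive")],
    "I loved this film.",
)
```

The only difference between the two prompts is the block of demonstrations; the snippet's observation is that pre-trained models often handle the zero-shot form poorly without further tuning.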
Hallucinated translations pose significant threats and safety concerns when it comes to the practical deployment of machine translation systems. Previous research works have …
Large Language Models (LLMs) demonstrate impressive performance in diverse applications, yet they face significant drawbacks, including high inference latency …