Large language models (LLMs) have demonstrated great success in various fields, benefiting from the vast number of parameters that store knowledge. However, LLMs still …
In-context learning (ICL), teaching a large language model (LLM) to perform a task with few-shot demonstrations rather than adjusting the model parameters, has emerged as a strong …
S Zhang, X Feng, W Fan, W Fang, F Feng… - Proceedings of the …, 2023 - ojs.aaai.org
Existing video-audio understanding models are trained and evaluated in an intra-domain setting, suffering performance degradation in real-world applications where multiple domains …
H Dao, Y Deng, DD Le, L Liao - … of the 47th International ACM SIGIR …, 2024 - dl.acm.org
Conversational Recommender Systems (CRSs) leverage natural language dialogues to provide tailored recommendations. Traditional methods in this field primarily focus on …
Large Language Models (LLMs) have shown extraordinary capabilities in understanding and generating text that closely mirrors human communication. However, a primary …
Reasoning over knowledge graphs (KGs) is a challenging task that requires a deep understanding of the complex relationships between entities and the underlying logic of their …
SJ Wang, K Pei, J Yang - 2024 IEEE Symposium on Security and …, 2024 - computer.org
Smart contracts are software programs that enable diverse business activities on the blockchain. Recent research has identified new classes of “machine un-auditable” bugs that …
The rise of code pre-trained models has significantly enhanced various coding tasks, such as code completion, and tools like GitHub Copilot. However, the substantial size of these …
Recent advances in fine-tuning large language models (LLMs) have greatly enhanced their usage in domain-specific tasks. Despite the success, fine-tuning continues to rely on …