Large Language Models (LLMs) have unlocked new capabilities and applications; however, evaluating their alignment with human preferences still poses significant challenges. To …
The surprising ability of Large Language Models (LLMs) to perform well on complex reasoning with only few-shot chain-of-thought prompts is believed to emerge only in very …
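For context, a minimal sketch of what a few-shot chain-of-thought prompt looks like, assuming a generic text-completion LLM; the exemplars and the `build_cot_prompt` helper are illustrative placeholders, not taken from the cited paper:

```python
# Minimal sketch of a few-shot chain-of-thought (CoT) prompt.
# The worked exemplars are hand-written; the resulting string would be
# sent to whatever completion API is actually in use.

FEW_SHOT_COT = """\
Q: Roger has 5 tennis balls. He buys 2 cans of 3 balls each. How many balls does he have now?
A: Roger starts with 5 balls. 2 cans of 3 balls is 6 balls. 5 + 6 = 11. The answer is 11.

Q: A cafeteria had 23 apples. They used 20 and bought 6 more. How many apples do they have?
A: They had 23 apples and used 20, leaving 3. They bought 6 more, so 3 + 6 = 9. The answer is 9.
"""

def build_cot_prompt(question: str) -> str:
    """Prepend step-by-step exemplars so the model imitates the reasoning format."""
    return f"{FEW_SHOT_COT}\nQ: {question}\nA:"

if __name__ == "__main__":
    print(build_cot_prompt("If there are 3 cars and each car holds 4 people, how many people fit?"))
```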
Y Huang, L Sun, H Wang, S Wu… - International …, 2024 - proceedings.mlr.press
Large language models (LLMs) have gained considerable attention for their excellent natural language processing capabilities. Nonetheless, these LLMs present many …
Prompting a pretrained language model with natural language patterns has proven effective for natural language understanding (NLU). However, our preliminary study reveals …
Guided sampling, which embeds human-defined guidance in the sampling procedure, is a vital approach for applying diffusion models to real-world tasks. This paper considers a …
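As one concrete instance of embedding guidance into the sampling loop, here is a minimal sketch of classifier-free guidance (a common form of guided sampling, not necessarily the method of the cited paper); `eps_cond`, `eps_uncond`, and the guidance weight `w` are illustrative stand-ins for network outputs and a user-chosen scale:

```python
import numpy as np

def guided_noise(eps_cond: np.ndarray, eps_uncond: np.ndarray, w: float) -> np.ndarray:
    """Classifier-free guidance: push the denoiser's prediction toward the condition.

    w = 0 recovers unconditional sampling, w = 1 plain conditional sampling,
    and w > 1 amplifies the human-defined condition.
    """
    return eps_uncond + w * (eps_cond - eps_uncond)

if __name__ == "__main__":
    # Toy noise predictions at one sampling step (stand-ins for model outputs).
    eps_uncond = np.zeros(4)
    eps_cond = np.ones(4)
    print(guided_noise(eps_cond, eps_uncond, w=2.5))  # -> [2.5 2.5 2.5 2.5]
```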
Model-agnostic meta-learning (MAML) is currently one of the dominant approaches to few-shot meta-learning. Despite its effectiveness, the optimization of MAML …
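For orientation, a minimal first-order sketch of MAML's two-level optimization on a toy 1-D regression family; the task sampler, learning rates, and the first-order approximation of the meta-gradient (FOMAML, avoiding second-order derivatives) are simplifying assumptions, not the cited paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    """Toy task family: y = a * x with a task-specific slope a."""
    a = rng.uniform(-2.0, 2.0)
    x_support, x_query = rng.normal(size=10), rng.normal(size=10)
    return (x_support, a * x_support), (x_query, a * x_query)

def mse_grad(w, x, y):
    """Gradient of mean squared error for the linear model y_hat = w * x."""
    return 2.0 * np.mean((w * x - y) * x)

w_meta, inner_lr, outer_lr = 0.0, 0.1, 0.01
for step in range(2000):
    (xs, ys), (xq, yq) = sample_task()
    # Inner loop: one gradient step of task-specific adaptation from the meta-parameters.
    w_adapted = w_meta - inner_lr * mse_grad(w_meta, xs, ys)
    # Outer loop (first-order approximation): update the meta-parameters with the
    # query-set gradient evaluated at the adapted parameters.
    w_meta -= outer_lr * mse_grad(w_adapted, xq, yq)

print("meta-initialization w:", w_meta)  # settles near the mean of the slope distribution
```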
J Lee, D Jung, J Yim, S Yoon - International conference on …, 2022 - proceedings.mlr.press
Source-free unsupervised domain adaptation (SFUDA) aims to obtain high performance in the unlabeled target domain using only the pre-trained source model, without the source data …
H Yu, B Shen, D Ran, J Zhang, Q Zhang, Y Ma… - Proceedings of the 46th …, 2024 - dl.acm.org
Code generation models based on the pre-training and fine-tuning paradigm have been increasingly explored by both academia and industry, resulting in well-known industrial …
DL García, À Nebot, A Vellido - Knowledge and Information Systems, 2017 - Springer
Globalization processes and market deregulation policies are rapidly changing the competitive environments of many economic sectors. The appearance of new competitors …