A survey on stability of learning with limited labelled data and its sensitivity to the effects of randomness

B Pecher, I Srba, M Bielikova - ACM Computing Surveys, 2024 - dl.acm.org
Learning with limited labelled data, such as prompting, in-context learning, fine-tuning, meta-
learning, or few-shot learning, aims to effectively train a model using only a small amount of …

Generalized logit adjustment: Calibrating fine-tuned models by removing label bias in foundation models

B Zhu, K Tang, Q Sun, H Zhang - Advances in Neural …, 2024 - proceedings.neurips.cc
Foundation models like CLIP allow zero-shot transfer on various tasks without additional
training data. Yet, the zero-shot performance is less competitive than a fully supervised one …
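The label-bias idea behind this entry can be illustrated with classic logit adjustment (the paper proposes a generalized variant; the code below is a minimal background sketch, not the authors' method, and the toy numbers are invented for illustration):

```python
import numpy as np

def logit_adjust(logits, class_prior, tau=1.0):
    """Subtract the (scaled) log class prior from raw logits.

    Classic logit adjustment: classes the model favours merely because
    they are frequent in the label distribution get their scores
    pushed down, so rare classes can compete.
    """
    return logits - tau * np.log(class_prior)

# Toy example: a biased model leans toward the frequent class 0.
logits = np.array([2.0, 1.9, 0.5])    # raw scores
prior = np.array([0.90, 0.05, 0.05])  # estimated label bias
adjusted = logit_adjust(logits, prior)
print(adjusted.argmax())  # → 1 (the rare class now wins)
```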

Distribution alignment optimization through neural collapse for long-tailed classification

J Gao, H Zhao, D Guo, H Zha - Forty-first International …, 2024 - openreview.net
A well-trained deep neural network on balanced datasets usually exhibits the Neural
Collapse (NC) phenomenon, which is an informative indicator of the model achieving good …
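The Neural Collapse geometry the snippet refers to predicts that centered class means converge to a simplex equiangular tight frame (ETF): equal norms and equal pairwise cosine similarity of -1/(C-1). A minimal NumPy sketch of that target structure (an illustration of the known NC geometry, not this paper's optimization method):

```python
import numpy as np

def simplex_etf(C, d):
    """Construct a C-vertex simplex ETF in R^d (requires d >= C).

    Neural Collapse predicts centered class means converge to this
    geometry: unit norms and pairwise cosine similarity -1/(C-1).
    """
    U = np.linalg.qr(np.random.randn(d, C))[0]  # orthonormal columns
    M = np.sqrt(C / (C - 1)) * U @ (np.eye(C) - np.ones((C, C)) / C)
    return M  # columns are the C vertices

C, d = 4, 16
M = simplex_etf(C, d)
G = M.T @ M  # Gram matrix: diagonal ≈ 1, off-diagonal ≈ -1/(C-1)
print(np.round(G, 3))
```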

A systematic assessment of OpenAI o1-preview for higher order thinking in education

E Latif, Y Zhou, S Guo, Y Gao, L Shi… - arXiv preprint arXiv …, 2024 - arxiv.org
As artificial intelligence (AI) continues to advance, it demonstrates capabilities comparable
to human intelligence, with significant potential to transform education and workforce …

Large language models in cryptocurrency securities cases: Can ChatGPT replace lawyers?

A Trozze, T Davies, B Kleinberg - arXiv preprint arXiv:2308.06032, 2023 - arxiv.org
Large Language Models (LLMs) could enhance access to the legal system. However,
empirical research on their effectiveness in conducting legal tasks is scant. We study …

A Simple Recipe for Language-guided Domain Generalized Segmentation

M Fahes, TH Vu, A Bursuc, P Pérez… - Proceedings of the …, 2024 - openaccess.thecvf.com
Generalization to new domains not seen during training is one of the long-standing
challenges in deploying neural networks in real-world applications. Existing generalization …

Descriptor and Word Soups: Overcoming the Parameter Efficiency Accuracy Tradeoff for Out-of-Distribution Few-shot Learning

C Liao, T Tsiligkaridis, B Kulis - Proceedings of the IEEE …, 2024 - openaccess.thecvf.com
Over the past year a large body of multimodal research has emerged around zero-shot
evaluation using GPT descriptors. These studies boost the zero-shot accuracy of pretrained …

Large language models in cryptocurrency securities cases: can a GPT model meaningfully assist lawyers?

A Trozze, T Davies, B Kleinberg - Artificial Intelligence and Law, 2024 - Springer
Abstract Large Language Models (LLMs) could be a useful tool for lawyers. However,
empirical research on their effectiveness in conducting legal tasks is scant. We study …

Investigating the limitation of clip models: The worst-performing categories

JJ Shao, JX Shi, XW Yang, LZ Guo, YF Li - arXiv preprint arXiv …, 2023 - arxiv.org
Contrastive Language-Image Pre-training (CLIP) provides a foundation model by integrating
natural language into visual concepts, enabling zero-shot recognition on downstream tasks …
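The zero-shot recognition this entry analyses reduces to cosine similarity between an image embedding and per-class text embeddings; per-class accuracy then exposes the worst-performing categories the paper studies. A sketch with synthetic stand-ins for CLIP features (real usage would embed images and class prompts with a CLIP encoder; the random features here are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def zero_shot_predict(image_feats, text_feats):
    """CLIP-style zero-shot classification: L2-normalise both sides,
    then pick the class whose text embedding is most cosine-similar."""
    img = image_feats / np.linalg.norm(image_feats, axis=1, keepdims=True)
    txt = text_feats / np.linalg.norm(text_feats, axis=1, keepdims=True)
    return (img @ txt.T).argmax(axis=1)

# Synthetic stand-ins for CLIP embeddings: 3 classes, 10 images each.
C, d = 3, 64
text_feats = rng.normal(size=(C, d))
labels = np.repeat(np.arange(C), 10)
image_feats = text_feats[labels] + 0.1 * rng.normal(size=(30, d))

preds = zero_shot_predict(image_feats, text_feats)
# Per-class accuracy highlights the worst-performing category.
per_class = [(preds[labels == c] == c).mean() for c in range(C)]
print(per_class)
```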

Prompt Learning via Meta-Regularization

J Park, J Ko, HJ Kim - … of the IEEE/CVF Conference on …, 2024 - openaccess.thecvf.com
Pre-trained vision-language models have shown impressive success on various computer
vision tasks with their zero-shot generalizability. Recently prompt learning approaches have …