Larger language models do in-context learning differently

J Wei, J Wei, Y Tay, D Tran, A Webson, Y Lu… - arXiv preprint arXiv …, 2023 - arxiv.org
We study how in-context learning (ICL) in language models is affected by semantic priors
versus input-label mappings. We investigate two setups: ICL with flipped labels and ICL with …
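As a hedged illustration of the flipped-label setup named in the abstract (the task, label names, and prompt format below are hypothetical, not the paper's):

```python
# Sketch of a flipped-label ICL prompt: in-context exemplars carry
# deliberately inverted labels, testing whether the model follows the
# input-label mapping or its semantic priors. Examples are illustrative.
examples = [
    ("the movie was wonderful", "positive"),
    ("a dull, lifeless plot", "negative"),
]
flip = {"positive": "negative", "negative": "positive"}

prompt = ""
for text, label in examples:
    prompt += f"Input: {text}\nLabel: {flip[label]}\n\n"  # label flipped
prompt += "Input: an absolute delight\nLabel:"
print(prompt)
```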

Reliable visual question answering: Abstain rather than answer incorrectly

S Whitehead, S Petryk, V Shakib, J Gonzalez… - … on Computer Vision, 2022 - Springer
Machine learning has advanced dramatically, narrowing the accuracy gap to
humans in multimodal tasks like visual question answering (VQA). However, while humans …
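The abstention idea can be sketched as selective prediction: answer only when model confidence clears a threshold. A minimal illustration assuming softmax outputs (the threshold value and NumPy interface are illustrative, not the paper's method):

```python
import numpy as np

def selective_predict(probs, threshold=0.8):
    # Predict the top class, but flag the example for abstention
    # whenever the top-class probability falls below the threshold.
    conf = probs.max(axis=-1)
    preds = probs.argmax(axis=-1)
    return preds, conf < threshold

probs = np.array([[0.90, 0.10], [0.55, 0.45]])
preds, abstain = selective_predict(probs)  # preds=[0, 0], abstain=[False, True]
```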

Three towers: Flexible contrastive learning with pretrained image models

J Kossen, M Collier, B Mustafa… - Advances in …, 2024 - proceedings.neurips.cc
We introduce Three Towers (3T), a flexible method to improve the contrastive
learning of vision-language models by incorporating pretrained image classifiers. While …

On the optimal combination of cross-entropy and soft dice losses for lesion segmentation with out-of-distribution robustness

A Galdran, G Carneiro, MAG Ballester - Diabetic Foot Ulcers Grand …, 2022 - Springer
We study the impact of different loss functions on lesion segmentation from medical images.
Although the Cross-Entropy (CE) loss is the most popular option when dealing with natural …
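A common form of such a combination is a convex mix of CE and soft Dice; below is a minimal PyTorch-style sketch for binary segmentation (the weight alpha and this exact Dice formulation are illustrative, not necessarily the paper's optimal combination):

```python
import torch
import torch.nn.functional as F

def soft_dice_loss(logits, targets, eps=1e-6):
    # Soft Dice over foreground probabilities: 1 - 2|P∩T| / (|P| + |T|).
    probs = torch.sigmoid(logits)
    num = 2 * (probs * targets).sum() + eps
    den = probs.sum() + targets.sum() + eps
    return 1 - num / den

def combined_loss(logits, targets, alpha=0.5):
    # Convex combination of binary cross-entropy and soft Dice.
    ce = F.binary_cross_entropy_with_logits(logits, targets)
    return alpha * ce + (1 - alpha) * soft_dice_loss(logits, targets)
```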

Reliability benchmarks for image segmentation

EK Buchanan, MW Dusenberry, J Ren… - … 2022 Workshop on …, 2022 - openreview.net
Recent work has shown the importance of reliability, where model performance is assessed
under stress conditions pervasive in real-world deployment. In this work, we examine …

Attacking Bayes: Are Bayesian Neural Networks Inherently Robust?

Y Feng, TGJ Rudner, N Tsilivis, J Kempe - Fifth Symposium on Advances … - openreview.net
This work examines the claim in recent work that Bayesian neural networks (BNNs) are
inherently robust to adversarial perturbations. To study this question, we investigate whether …
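For context, a standard one-step adversarial perturbation of the kind such robustness studies consider is FGSM; the sketch below is a generic example, not necessarily the attack used in this paper:

```python
import torch

def fgsm(model, x, y, eps=0.03):
    # Fast Gradient Sign Method: one gradient-sign step on the input.
    x = x.clone().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()

# Usage with a toy model (shapes are illustrative):
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(784, 10))
x, y = torch.rand(4, 1, 28, 28), torch.randint(0, 10, (4,))
x_adv = fgsm(model, x, y)
```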

[PDF] Improving Machine Learning Systems by Eliciting and Incorporating Additional Human Knowledge

KM Collins, U Bhatt - mlmi.eng.cam.ac.uk
Data has powered incredible advances in machine learning (ML). Yet, the kinds of data
used for training are often hard labels aggregated over humans' annotations, which fail to …
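One alternative to majority-vote hard labels is to keep the full annotator distribution as a soft label; a minimal sketch (the class names and helper function are hypothetical, for illustration only):

```python
from collections import Counter

def soft_label(annotations, classes):
    # Per-class fraction of annotators, instead of collapsing the
    # annotations to a single majority-vote hard label.
    counts = Counter(annotations)
    return [counts.get(c, 0) / len(annotations) for c in classes]

print(soft_label(["cat", "cat", "dog"], ["cat", "dog"]))  # ≈ [0.67, 0.33]
```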