Transfer learning: a friendly introduction

A Hosna, E Merry, J Gyalmo, Z Alom, Z Aung… - Journal of Big Data, 2022 - Springer
Countless real-world applications use Machine Learning (ML) techniques to
make the best possible use of the data available to users. Transfer learning (TL), one of the …
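Since the snippet is truncated, a minimal sketch may help illustrate the core transfer-learning recipe such surveys cover: reuse a pretrained backbone and train only a small task-specific head on the new data. The ResNet-18 backbone, class count, and dummy batch below are illustrative assumptions, not details from the paper.

```python
# A minimal transfer-learning sketch, assuming a torchvision ResNet-18
# pretrained on ImageNet; only the new classification head is trained.
import torch
import torch.nn as nn
from torchvision.models import resnet18, ResNet18_Weights

num_classes = 5  # hypothetical target task

model = resnet18(weights=ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False  # freeze the pretrained feature extractor
model.fc = nn.Linear(model.fc.in_features, num_classes)  # new trainable head

opt = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Dummy batch standing in for the (typically small) labeled target dataset.
x = torch.randn(8, 3, 224, 224)
y = torch.randint(0, num_classes, (8,))

model.train()
opt.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
opt.step()
```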

Navigating the pitfalls of applying machine learning in genomics

S Whalen, J Schreiber, WS Noble… - Nature Reviews Genetics, 2022 - nature.com
The scale of genetic, epigenomic, transcriptomic, cheminformatic and proteomic data
available today, coupled with easy-to-use machine learning (ML) toolkits, has propelled the …

TrustLLM: Trustworthiness in large language models

Y Huang, L Sun, H Wang, S Wu, Q Zhang, Y Li… - arXiv preprint arXiv …, 2024 - arxiv.org
Large language models (LLMs), exemplified by ChatGPT, have gained considerable
attention for their excellent natural language processing capabilities. Nonetheless, these …

A substrate-less nanomesh receptor with meta-learning for rapid hand task recognition

KK Kim, M Kim, K Pyun, J Kim, J Min, S Koh… - Nature …, 2023 - nature.com
With the help of machine learning, electronic devices—including electronic gloves and
electronic skins—can track the movement of human hands and perform tasks such as object …

Position: TrustLLM: Trustworthiness in large language models

Y Huang, L Sun, H Wang, S Wu… - International …, 2024 - proceedings.mlr.press
Large language models (LLMs) have gained considerable attention for their excellent
natural language processing capabilities. Nonetheless, these LLMs present many …

Data selection for language models via importance resampling

SM Xie, S Santurkar, T Ma… - Advances in Neural …, 2023 - proceedings.neurips.cc
Selecting a suitable pretraining dataset is crucial for both general-domain (e.g., GPT-3) and
domain-specific (e.g., Codex) language models (LMs). We formalize this problem as selecting …
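A toy sketch of the data-selection idea the snippet describes, under simplifying assumptions: importance weights are estimated with add-one-smoothed unigram models rather than the paper's hashed n-gram features, and the corpora below are invented stand-ins.

```python
# Toy data selection via importance resampling: score each raw example by
# log p_target(x) - log p_raw(x) under simple unigram models, then resample
# raw examples in proportion to those importance weights.
import numpy as np
from collections import Counter

def unigram_logprob(text, counts, total, vocab):
    # add-one smoothed unigram log-probability of a whitespace-tokenized text
    return sum(np.log((counts[w] + 1) / (total + vocab)) for w in text.split())

raw = ["stock prices fell sharply", "def main(): print('hi')",
       "the cat sat on the mat", "import numpy as np"]
target = ["import torch", "def train(model): pass"]  # desired domain: code

def fit(corpus):
    c = Counter(w for t in corpus for w in t.split())
    return c, sum(c.values())

raw_c, raw_n = fit(raw)
tgt_c, tgt_n = fit(target)
vocab = len(raw_c | tgt_c)

log_w = np.array([unigram_logprob(t, tgt_c, tgt_n, vocab)
                  - unigram_logprob(t, raw_c, raw_n, vocab) for t in raw])
probs = np.exp(log_w - log_w.max())
probs /= probs.sum()

rng = np.random.default_rng(0)
k = 2  # number of examples to keep
selected = rng.choice(len(raw), size=k, replace=False, p=probs)
print([raw[i] for i in selected])  # code-like examples should dominate
```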

[HTML][HTML] Pre-trained models: Past, present and future

X Han, Z Zhang, N Ding, Y Gu, X Liu, Y Huo, J Qiu… - AI Open, 2021 - Elsevier
Large-scale pre-trained models (PTMs) such as BERT and GPT have recently achieved
great success and become a milestone in the field of artificial intelligence (AI). Owing to …

MEMO: Test time robustness via adaptation and augmentation

M Zhang, S Levine, C Finn - Advances in neural information …, 2022 - proceedings.neurips.cc
While deep neural networks can attain good accuracy on in-distribution test points, many
applications require robustness even in the face of unexpected perturbations in the input …
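A minimal sketch of the test-time adaptation idea behind this entry, under assumptions: a placeholder linear model, additive noise standing in for AugMix-style augmentations, and a single entropy-minimization step on the marginal prediction over augmented copies.

```python
# MEMO-style test-time adaptation sketch: for one unlabeled test input,
# minimize the entropy of the model's marginal prediction over several
# augmented copies, then predict with the adapted model.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # stand-in model
opt = torch.optim.SGD(model.parameters(), lr=1e-3)

def augment(x, n=8):
    # toy augmentation: additive noise; the paper uses AugMix-style transforms
    return x.repeat(n, 1, 1, 1) + 0.05 * torch.randn(n, *x.shape[1:])

x_test = torch.randn(1, 3, 32, 32)  # a single unlabeled test point

model.train()
opt.zero_grad()
probs = F.softmax(model(augment(x_test)), dim=-1).mean(0)  # marginal prediction
entropy = -(probs * probs.clamp_min(1e-12).log()).sum()
entropy.backward()
opt.step()  # one adaptation step on this test point alone

model.eval()
pred = model(x_test).argmax(-1)
```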

From concept drift to model degradation: An overview on performance-aware drift detectors

F Bayram, BS Ahmed, A Kassler - Knowledge-Based Systems, 2022 - Elsevier
The dynamicity of real-world systems poses a significant challenge to deployed predictive
machine learning (ML) models. Changes in the system on which the ML model has been …
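A compact sketch of one performance-aware drift detector of the kind this overview covers (a DDM-style error-rate monitor). The 30-sample warm-up and 2-sigma/3-sigma thresholds follow the common DDM recipe; the simulated stream is hypothetical.

```python
# DDM-style drift detector: track the streaming error rate of a deployed
# model and flag drift when it rises significantly above its historical minimum.
import math

class DDM:
    def __init__(self):
        self.n = 0
        self.p = 0.0  # running error rate
        self.p_min, self.s_min = float("inf"), float("inf")

    def update(self, error: bool) -> str:
        self.n += 1
        self.p += (int(error) - self.p) / self.n
        s = math.sqrt(self.p * (1 - self.p) / self.n)
        if self.n > 30 and self.p + s < self.p_min + self.s_min:
            self.p_min, self.s_min = self.p, s  # new best operating point
        if self.n > 30 and self.p + s > self.p_min + 3 * self.s_min:
            return "drift"
        if self.n > 30 and self.p + s > self.p_min + 2 * self.s_min:
            return "warning"
        return "stable"

# Usage: feed per-example correctness of the deployed model's predictions.
detector = DDM()
for err in [False] * 200 + [True] * 50:  # simulated degradation
    status = detector.update(err)
print(status)  # "drift" once the error burst pushes the rate past the threshold
```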

A brief review of domain adaptation

A Farahani, S Voghoei, K Rasheed… - Advances in data science …, 2021 - Springer
Classical machine learning assumes that the training and test sets come from the same
distributions. Therefore, a model learned from the labeled training data is expected to …
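A minimal sketch of one classical domain-adaptation strategy consistent with the snippet's premise: when training and test distributions differ (covariate shift), reweight source examples by an estimated density ratio. The synthetic Gaussian data and logistic-regression domain classifier below are assumptions for illustration, not the review's specific method.

```python
# Unsupervised domain adaptation via importance weighting: a domain classifier
# estimates how target-like each source example is, and those weights reweight
# the task loss so the source-trained model better matches the target domain.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_src = rng.normal(0.0, 1.0, size=(500, 2))          # labeled source domain
y_src = (X_src[:, 0] + X_src[:, 1] > 0).astype(int)
X_tgt = rng.normal(1.0, 1.0, size=(500, 2))          # unlabeled, shifted target

# 1) Domain classifier: distinguish source (0) from target (1).
X_dom = np.vstack([X_src, X_tgt])
y_dom = np.concatenate([np.zeros(len(X_src)), np.ones(len(X_tgt))])
dom_clf = LogisticRegression().fit(X_dom, y_dom)

# 2) Importance weights w(x) ~ p_target(x) / p_source(x) on source examples.
p_tgt = dom_clf.predict_proba(X_src)[:, 1]
w = p_tgt / np.clip(1.0 - p_tgt, 1e-6, None)

# 3) Task model trained on source labels, reweighted toward the target domain.
task_clf = LogisticRegression().fit(X_src, y_src, sample_weight=w)
```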