Re-thinking data strategy and integration for artificial intelligence: concepts, opportunities, and challenges

A Aldoseri, KN Al-Khalifa, AM Hamouda - Applied Sciences, 2023 - mdpi.com
The use of artificial intelligence (AI) is becoming more prevalent across industries such as
healthcare, finance, and transportation. Artificial intelligence is based on the analysis of …

A review on fairness in machine learning

D Pessach, E Shmueli - ACM Computing Surveys (CSUR), 2022 - dl.acm.org
An increasing number of decisions regarding the daily lives of human beings are being
controlled by artificial intelligence and machine learning (ML) algorithms in spheres ranging …

ChatGPT and a new academic reality: Artificial Intelligence‐written research papers and the ethics of the large language models in scholarly publishing

BD Lund, T Wang, NR Mannuru, B Nie… - Journal of the …, 2023 - Wiley Online Library
This article discusses OpenAI's ChatGPT, a generative pre‐trained transformer, which uses
natural language processing to fulfill text‐based user requests (i.e., a “chatbot”). The history …

Toxicity in ChatGPT: Analyzing persona-assigned language models

A Deshpande, V Murahari, T Rajpurohit… - arXiv preprint arXiv …, 2023 - arxiv.org
Large language models (LLMs) have shown incredible capabilities and transcended the
natural language processing (NLP) community, with adoption throughout many services like …

LaMDA: Language models for dialog applications

R Thoppilan, D De Freitas, J Hall, N Shazeer… - arXiv preprint arXiv …, 2022 - arxiv.org
We present LaMDA: Language Models for Dialog Applications. LaMDA is a family of
Transformer-based neural language models specialized for dialog, which have up to 137B …

On the opportunities and risks of foundation models

R Bommasani, DA Hudson, E Adeli, R Altman… - arXiv preprint arXiv …, 2021 - arxiv.org
AI is undergoing a paradigm shift with the rise of models (e.g., BERT, DALL-E, GPT-3) that are
trained on broad data at scale and are adaptable to a wide range of downstream tasks. We …

On the dangers of stochastic parrots: Can language models be too big? 🦜

EM Bender, T Gebru, A McMillan-Major… - Proceedings of the 2021 …, 2021 - dl.acm.org
The past 3 years of work in NLP have been characterized by the development and
deployment of ever larger language models, especially for English. BERT, its variants, GPT …

Auto-Debias: Debiasing masked language models with automated biased prompts

Y Guo, Y Yang, A Abbasi - … of the 60th Annual Meeting of the …, 2022 - aclanthology.org
Human-like biases and undesired social stereotypes exist in large pretrained language
models. Given the wide adoption of these models in real-world applications, mitigating such …

RealToxicityPrompts: Evaluating neural toxic degeneration in language models

S Gehman, S Gururangan, M Sap, Y Choi… - arXiv preprint arXiv …, 2020 - arxiv.org
Pretrained neural language models (LMs) are prone to generating racist, sexist, or otherwise
toxic language which hinders their safe deployment. We investigate the extent to which …

Five sources of bias in natural language processing

D Hovy, S Prabhumoye - Language and Linguistics Compass, 2021 - Wiley Online Library
Recently, there has been an increased interest in demographically grounded bias in natural
language processing (NLP) applications. Much of the recent work has focused on describing …