A call to action on assessing and mitigating bias in artificial intelligence applications for mental health

AC Timmons, JB Duong, N Simo Fiallo… - Perspectives on …, 2023 - journals.sagepub.com
Advances in computer science and data-analytic methods are driving a new era in mental
health research and application. Artificial intelligence (AI) technologies hold the potential to …

Attribution and obfuscation of neural text authorship: A data mining perspective

A Uchendu, T Le, D Lee - ACM SIGKDD Explorations Newsletter, 2023 - dl.acm.org
Two interlocking research questions of growing interest and importance in privacy research
are Authorship Attribution (AA) and Authorship Obfuscation (AO). Given an artifact …

GPT-4 technical report

J Achiam, S Adler, S Agarwal, L Ahmad… - arXiv preprint arXiv …, 2023 - arxiv.org
We report the development of GPT-4, a large-scale, multimodal model which can accept
image and text inputs and produce text outputs. While less capable than humans in many …

Jailbroken: How does LLM safety training fail?

A Wei, N Haghtalab… - Advances in Neural …, 2024 - proceedings.neurips.cc
Large language models trained for safety and harmlessness remain susceptible to
adversarial misuse, as evidenced by the prevalence of “jailbreak” attacks on early releases …

Comparing scientific abstracts generated by ChatGPT to real abstracts with detectors and blinded human reviewers

CA Gao, FM Howard, NS Markov, EC Dyer… - NPJ Digital …, 2023 - nature.com
Large language models such as ChatGPT can produce increasingly realistic text, with
unknown information on the accuracy and integrity of using these models in scientific writing …

Synthetic lies: Understanding AI-generated misinformation and evaluating algorithmic and human solutions

J Zhou, Y Zhang, Q Luo, AG Parker… - Proceedings of the 2023 …, 2023 - dl.acm.org
Large language models can generate high volumes of human-like text and can be used to
produce persuasive misinformation. However, the risks remain under-explored. To …

Generative language models and automated influence operations: Emerging threats and potential mitigations

JA Goldstein, G Sastry, M Musser, R DiResta… - arXiv preprint arXiv …, 2023 - arxiv.org
Generative language models have improved drastically, and can now produce realistic text
outputs that are difficult to distinguish from human-written content. For malicious actors …

Beyond the imitation game: Quantifying and extrapolating the capabilities of language models

A Srivastava, A Rastogi, A Rao, AAM Shoeb… - arXiv preprint arXiv …, 2022 - arxiv.org
Language models demonstrate both quantitative improvement and new qualitative
capabilities with increasing scale. Despite their potentially transformative impact, these new …

Generative AI

S Feuerriegel, J Hartmann, C Janiesch… - Business & Information …, 2024 - Springer
Tom Freston is credited with saying “Innovation is taking two things that exist and putting
them together in a new way”. For a long time in history, it has been the prevailing …

Can linguists distinguish between ChatGPT/AI and human writing?: A study of research ethics and academic publishing

JE Casal, M Kessler - Research Methods in Applied Linguistics, 2023 - Elsevier
There has been considerable intrigue surrounding the use of Large Language Model-powered
AI chatbots such as ChatGPT in research, educational contexts, and beyond …