A survey of controllable text generation using transformer-based pre-trained language models

H Zhang, H Song, S Li, M Zhou, D Song - ACM Computing Surveys, 2023 - dl.acm.org
Controllable Text Generation (CTG) is an emerging area in the field of natural language
generation (NLG). It is regarded as crucial for the development of advanced text generation …

Fairness in deep learning: A survey on vision and language research

O Parraga, MD More, CM Oliveira, NS Gavenski… - ACM Computing …, 2023 - dl.acm.org
Despite being responsible for state-of-the-art results in several computer vision and natural
language processing tasks, neural networks have faced harsh criticism due to some of their …

Deep bidirectional language-knowledge graph pretraining

M Yasunaga, A Bosselut, H Ren… - Advances in …, 2022 - proceedings.neurips.cc
Pretraining a language model (LM) on text has been shown to help various downstream
NLP tasks. Recent works show that a knowledge graph (KG) can complement text data …

LinkBERT: Pretraining language models with document links

M Yasunaga, J Leskovec, P Liang - arXiv preprint arXiv:2203.15827, 2022 - arxiv.org
Language model (LM) pretraining can capture a wide range of knowledge from text corpora, helping
downstream tasks. However, existing methods such as BERT model a single document, and …

Towards understanding and mitigating social biases in language models

PP Liang, C Wu, LP Morency… - … on Machine Learning, 2021 - proceedings.mlr.press
As machine learning methods are deployed in real-world settings such as healthcare, legal
systems, and social science, it is crucial to recognize how they shape social biases and …

Gender and representation bias in GPT-3 generated stories

L Lucy, D Bamman - Proceedings of the third workshop on …, 2021 - aclanthology.org
Using topic modeling and lexicon-based word similarity, we find that stories generated by
GPT-3 exhibit many known gender stereotypes. Generated stories depict different topics and …

GreaseLM: Graph reasoning enhanced language models for question answering

X Zhang, A Bosselut, M Yasunaga, H Ren… - arXiv preprint arXiv …, 2022 - arxiv.org
Answering complex questions about textual narratives requires reasoning over both stated
context and the world knowledge that underlies it. However, pretrained language models …

Language (technology) is power: A critical survey of "bias" in NLP

SL Blodgett, S Barocas, H Daumé III… - arXiv preprint arXiv …, 2020 - arxiv.org
We survey 146 papers analyzing "bias" in NLP systems, finding that their motivations are
often vague, inconsistent, and lacking in normative reasoning, despite the fact that …

DExperts: Decoding-time controlled text generation with experts and anti-experts

A Liu, M Sap, X Lu, S Swayamdipta… - arXiv preprint arXiv …, 2021 - arxiv.org
Despite recent advances in natural language generation, it remains challenging to control
attributes of generated text. We propose DExperts: Decoding-time Experts, a decoding-time …
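
For context on the DExperts entry above: the paper's core mechanism is ensembling a base LM with an "expert" (fine-tuned on text with the desired attribute) and an "anti-expert" (fine-tuned on text with the undesired attribute) at decoding time. The sketch below illustrates that logit combination only; the function name, alpha value, and the random stand-in logits are illustrative assumptions, not the authors' reference implementation.

import torch

def steer_next_token_logits(base_logits: torch.Tensor,
                            expert_logits: torch.Tensor,
                            anti_expert_logits: torch.Tensor,
                            alpha: float = 2.0) -> torch.Tensor:
    # DExperts-style steering: shift the base LM's next-token logits toward
    # the expert and away from the anti-expert. All three tensors are
    # vocabulary-sized logits for the same prefix; alpha (assumed value here)
    # controls the steering strength.
    return base_logits + alpha * (expert_logits - anti_expert_logits)

# Illustrative usage: random logits stand in for three LM forward passes.
vocab_size = 50257
base = torch.randn(vocab_size)
expert = torch.randn(vocab_size)
anti_expert = torch.randn(vocab_size)
probs = torch.softmax(steer_next_token_logits(base, expert, anti_expert), dim=-1)
next_token = torch.multinomial(probs, num_samples=1)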

REFINER: Reasoning feedback on intermediate representations

D Paul, M Ismayilzada, M Peyrard, B Borges… - arXiv preprint arXiv …, 2023 - arxiv.org
Language models (LMs) have recently shown remarkable performance on reasoning tasks
by explicitly generating intermediate inferences, e.g., chain-of-thought prompting. However …