TaleBrush: Sketching stories with generative pretrained language models

JJY Chung, W Kim, KM Yoo, H Lee, E Adar… - Proceedings of the 2022 …, 2022 - dl.acm.org
While advanced text generation algorithms (e.g., GPT-3) have enabled writers to co-create
stories with an AI, guiding the narrative remains a challenge. Existing systems often …

Visual abductive reasoning

C Liang, W Wang, T Zhou… - Proceedings of the IEEE …, 2022 - openaccess.thecvf.com
Abductive reasoning seeks the likeliest possible explanation for partial observations.
Although abduction is frequently employed in human daily reasoning, it is rarely explored in …

PLANET: Dynamic content planning in autoregressive transformers for long-form text generation

Z Hu, HP Chan, J Liu, X Xiao, H Wu… - arXiv preprint arXiv …, 2022 - arxiv.org
Despite the recent progress of pre-trained language models in generating fluent text, existing
methods still suffer from incoherence problems in long-form text generation tasks that …

Parallel refinements for lexically constrained text generation with BART

X He - arXiv preprint arXiv:2109.12487, 2021 - arxiv.org
Lexically constrained text generation aims to control the generated text by incorporating
some pre-specified keywords into the output. Previous work injects lexical constraints into …

Sentence bottleneck autoencoders from transformer language models

I Montero, N Pappas, NA Smith - arXiv preprint arXiv:2109.00055, 2021 - arxiv.org
Representation learning for text via pretraining a language model on a large corpus has
become a standard starting point for building NLP systems. This approach stands in contrast …

Computational storytelling and emotions: A survey

Y Mori, H Yamane, Y Mukuta, T Harada - arXiv preprint arXiv:2205.10967, 2022 - arxiv.org
Storytelling has always been vital to human nature. Since ancient times, humans have used
stories for many purposes, including entertainment, advertisement, and education. Various …

Variable-length music score infilling via XLNet and musically specialized positional encoding

CJ Chang, CY Lee, YH Yang - arXiv preprint arXiv:2108.05064, 2021 - arxiv.org
This paper proposes a new self-attention-based model for music score infilling, i.e., to
generate a polyphonic music sequence that fills in the gap between given past and future …

Inspiration through observation: Demonstrating the influence of automatically generated text on creative writing

M Roemmele - arXiv preprint arXiv:2107.04007, 2021 - arxiv.org
Getting machines to generate text perceived as creative is a long-pursued goal. A growing
body of research directs this goal towards augmenting the creative writing abilities of human …

Multimodal text style transfer for outdoor vision-and-language navigation

W Zhu, XE Wang, TJ Fu, A Yan, P Narayana… - arXiv preprint arXiv …, 2020 - arxiv.org
One of the most challenging topics in Natural Language Processing (NLP) is visually
grounded language understanding and reasoning. Outdoor vision-and-language navigation …

TaleStream: Supporting Story Ideation with Trope Knowledge

JP Chou, AF Siu, N Lipka, R Rossi… - Proceedings of the 36th …, 2023 - dl.acm.org
Story ideation is a critical part of the story-writing process. It is challenging to support
computationally due to its exploratory and subjective nature. Tropes, which are recurring …