Attacks against Abstractive Text Summarization Models through Lead Bias and Influence Functions

P Thota, S Nilizadeh - arXiv preprint arXiv:2410.20019, 2024 - arxiv.org
Large Language Models have introduced novel opportunities for text comprehension and
generation. Yet they remain vulnerable to adversarial perturbations and data poisoning attacks …