Cross-Task Defense: Instruction Tuning LLMs for Content Safety

Y. Fu, W. Xiao, J. Chen, J. Li, E. Papalexakis, et al. - arXiv preprint arXiv:2405.15202, 2024 - arxiv.org
Recent studies reveal that Large Language Models (LLMs) face challenges in balancing safety with utility, particularly when processing long texts for NLP tasks like summarization and translation. Despite defenses against malicious short questions, the ability of LLMs to safely handle dangerous long content, such as manuals teaching illicit activities, remains unclear. Our work aims to develop robust defenses for LLMs in processing malicious documents alongside benign NLP task queries. We introduce a defense dataset comprising safety-related examples and propose single-task and mixed-task losses for instruction tuning. Our empirical results demonstrate that LLMs can significantly enhance their capacity to safely manage dangerous content with appropriate instruction tuning. Additionally, strengthening the defenses of the tasks most susceptible to misuse is effective in protecting LLMs against processing harmful information. We also observe that trade-offs between utility and safety exist in defense strategies, where Llama2, using our proposed approach, achieves a significantly better balance than Llama1.
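The abstract's mixed-task objective (combining safety-defense examples with benign NLP task examples during instruction tuning) might be sketched as follows. This is a minimal illustration, not the paper's implementation; the function name, the per-task weighting scheme, and the uniform default are all assumptions.

```python
def mixed_task_loss(task_losses, weights=None):
    """Combine per-task instruction-tuning losses into one objective.

    task_losses: dict mapping task name (e.g. "summarization",
        "safety_defense") to that task's scalar batch loss.
    weights: optional dict of per-task mixing weights; defaults to
        uniform weighting across tasks.
    Returns the weighted average loss to backpropagate through.
    """
    if weights is None:
        weights = {task: 1.0 for task in task_losses}
    total_weight = sum(weights[task] for task in task_losses)
    weighted_sum = sum(weights[task] * loss for task, loss in task_losses.items())
    return weighted_sum / total_weight
```

In a training loop, the single-task variant would correspond to passing a dict with one entry, while the mixed-task variant averages losses over the safety and utility tasks sampled into each batch.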