Prompt as triggers for backdoor attack: Examining the vulnerability in language models

S Zhao, J Wen, LA Tuan, J Zhao, J Fu - arXiv preprint arXiv:2305.01219, 2023 - arxiv.org
The prompt-based learning paradigm, which bridges the gap between pre-training and fine-
tuning, achieves state-of-the-art performance on several NLP tasks, particularly in few-shot …

Read-only prompt optimization for vision-language few-shot learning

D Lee, S Song, J Suh, J Choi… - Proceedings of the …, 2023 - openaccess.thecvf.com
In recent years, prompt tuning has proven effective in adapting pre-trained vision-language
models to downstream tasks. These methods aim to adapt the pre-trained models by …

Zero-shot rumor detection with propagation structure via prompt learning

H Lin, P Yi, J Ma, H Jiang, Z Luo, S Shi… - Proceedings of the AAAI …, 2023 - ojs.aaai.org
The spread of rumors along with breaking events seriously hinders the truth in the era of
social media. Previous studies reveal that due to the lack of annotated resources, rumors …

InfoPrompt: Information-theoretic soft prompt tuning for natural language understanding

J Wu, T Yu, R Wang, Z Song, R Zhang… - Advances in …, 2024 - proceedings.neurips.cc
Soft prompt tuning achieves superior performances across a wide range of few-shot tasks.
However, the performances of prompt tuning can be highly sensitive to the initialization of …

SEQZERO: Few-shot compositional semantic parsing with sequential prompts and zero-shot models

J Yang, H Jiang, Q Yin, D Zhang, B Yin… - arXiv preprint arXiv …, 2022 - arxiv.org
Recent research showed promising results on combining pretrained language models (LMs)
with canonical utterance for few-shot semantic parsing. The canonical utterance is often …

Coupling large language models with logic programming for robust and general reasoning from text

Z Yang, A Ishay, J Lee - arXiv preprint arXiv:2307.07696, 2023 - arxiv.org
While large language models (LLMs), such as GPT-3, appear to be robust and general, their
reasoning ability is not at a level to compete with the best models trained for specific natural …

Few-shot text-to-sql translation using structure and content prompt learning

Z Gu, J Fan, N Tang, L Cao, B Jia, S Madden… - Proceedings of the ACM …, 2023 - dl.acm.org
A common problem with adopting Text-to-SQL translation in database systems is poor
generalization. Specifically, when there is limited training data on new datasets, existing few …

BenchCLAMP: A benchmark for evaluating language models on syntactic and semantic parsing

S Roy, S Thomson, T Chen, R Shin… - Advances in …, 2024 - proceedings.neurips.cc
Recent work has shown that generation from a prompted or fine-tuned language model can
perform well at semantic parsing when the output is constrained to be a valid semantic …

Knowledge base question answering: A semantic parsing perspective

Y Gu, V Pahuja, G Cheng, Y Su - arXiv preprint arXiv:2209.04994, 2022 - arxiv.org
Recent advances in deep learning have greatly propelled the research on semantic parsing.
Improvement has since been made in many downstream tasks, including natural language …

A parse-then-place approach for generating graphic layouts from textual descriptions

J Lin, J Guo, S Sun, W Xu, T Liu… - Proceedings of the …, 2023 - openaccess.thecvf.com
Creating layouts is a fundamental step in graphic design. In this work, we propose to use text
as the guidance to create graphic layouts, i.e., Text-to-Layout, aiming to lower the design …