Aligning large language models with human preferences through representation engineering

W Liu, X Wang, M Wu, T Li, C Lv, Z Ling, J Zhu… - arXiv preprint arXiv …, 2023 - arxiv.org
Aligning large language models (LLMs) with human preferences is crucial for enhancing
their utility in terms of helpfulness, truthfulness, safety, harmlessness, and interestingness …

Advancing parameter efficiency in fine-tuning via representation editing

M Wu, W Liu, X Wang, T Li, C Lv, Z Ling, J Zhu… - arXiv preprint arXiv …, 2024 - arxiv.org
Parameter-Efficient Fine-Tuning (PEFT) has gained significant attention for its ability to
achieve competitive results while updating only a small subset of trainable parameters …

ACCEPT: Adaptive Codebook for Composite and Efficient Prompt Tuning

YC Lin, WH Li, JC Chen, CS Chen - arXiv preprint arXiv:2410.12847, 2024 - arxiv.org
Prompt Tuning has been a popular Parameter-Efficient Fine-Tuning method, owing to its
remarkable performance with few updated parameters on various large-scale pretrained …