Wider and Deeper LLM Networks are Fairer LLM Evaluators

X Zhang, B Yu, H Yu, Y Lv, T Liu, F Huang… - arXiv preprint arXiv …, 2023 - arxiv.org
Measuring the quality of responses generated by LLMs is a challenging task, particularly
when it comes to evaluating whether the response is aligned with human preference. A …

Amortizing intractable inference in large language models

EJ Hu, M Jain, E Elmoznino, Y Kaddar, G Lajoie… - arXiv preprint arXiv …, 2023 - arxiv.org
Autoregressive large language models (LLMs) compress knowledge from their training data
through next-token conditional distributions. This limits tractable querying of this knowledge …

DP-OPT: Make Large Language Model Your Privacy-Preserving Prompt Engineer

J Hong, JT Wang, C Zhang, Z Li, B Li… - arXiv preprint arXiv …, 2023 - arxiv.org
Large Language Models (LLMs) have emerged as dominant tools for various tasks,
particularly when tailored for a specific target by prompt tuning. Nevertheless, concerns …

Privacy preserving prompt engineering: A survey

K Edemacu, X Wu - arXiv preprint arXiv:2404.06001, 2024 - arxiv.org
Pre-trained language models (PLMs) have demonstrated significant proficiency in solving a
wide range of general natural language processing (NLP) tasks. Researchers have …

Guiding language model reasoning with planning tokens

X Wang, L Caccia, O Ostapenko, X Yuan… - arXiv preprint arXiv …, 2023 - arxiv.org
Large language models (LLMs) have recently attracted considerable interest for their ability
to perform complex reasoning tasks, such as chain-of-thought (CoT) reasoning. However …

Ask more, know better: Reinforce-Learned Prompt Questions for Decision Making with Large Language Models

X Yan, Y Song, X Cui, F Christianos, H Zhang… - arXiv preprint arXiv …, 2023 - arxiv.org
Large language models (LLMs) demonstrate their promise in tackling complicated practical
challenges by combining action-based policies with chain of thought (CoT) reasoning …

DP-TabICL: In-Context Learning with Differentially Private Tabular Data

AN Carey, K Bhaila, K Edemacu, X Wu - arXiv preprint arXiv:2403.05681, 2024 - arxiv.org
In-context learning (ICL) enables large language models (LLMs) to adapt to new tasks by
conditioning on demonstrations of question-answer pairs and it has been shown to have …

Harnessing Multi-Role Capabilities of Large Language Models for Open-Domain Question Answering

H Sun, Y Liu, C Wu, H Yan, C Tai, X Gao… - Proceedings of the …, 2024 - dl.acm.org
Open-domain question answering (ODQA) has emerged as a pivotal research spotlight in
information systems. Existing methods follow two main paradigms to collect evidence: (1) …

Open LLMs are Necessary for Current Private Adaptations and Outperform their Closed Alternatives

V Hanke, T Blanchard, F Boenisch, IE Olatunji… - arXiv preprint arXiv …, 2024 - arxiv.org
While open Large Language Models (LLMs) have made significant progress, they still fall
short of matching the performance of their closed, proprietary counterparts, making the latter …

CUE-M: Contextual Understanding and Enhanced Search with Multimodal Large Language Model

D Go, T Whang, C Lee, H Kim, S Park, S Ji… - arXiv preprint arXiv …, 2024 - arxiv.org
The integration of Retrieval-Augmented Generation (RAG) with Multimodal Large Language
Models (MLLMs) has expanded the scope of multimodal query resolution. However, current …