Privacy preserving prompt engineering: A survey

K Edemacu, X Wu - arXiv preprint arXiv:2404.06001, 2024 - arxiv.org
Pre-trained language models (PLMs) have demonstrated significant proficiency in solving a
wide range of general natural language processing (NLP) tasks. Researchers have …

Extracting training data from document-based VQA models

F Pinto, N Rauschmayr, F Tramèr, P Torr… - arXiv preprint arXiv …, 2024 - arxiv.org
Vision-Language Models (VLMs) have made remarkable progress in document-based
Visual Question Answering (i.e., responding to queries about the contents of an input …

Open LLMs are Necessary for Current Private Adaptations and Outperform their Closed Alternatives

V Hanke, T Blanchard, F Boenisch, IE Olatunji… - arXiv preprint arXiv …, 2024 - arxiv.org
While open Large Language Models (LLMs) have made significant progress, they still fall
short of matching the performance of their closed, proprietary counterparts, making the latter …

LoRA-Contextualizing Adaptation of Large Multimodal Models for Long Document Understanding

J Chen, R Zhang, Y Zhou, T Yu, F Dernoncourt… - arXiv preprint arXiv …, 2024 - arxiv.org
Large multimodal models (LMMs) have recently shown great progress in text-rich image
understanding, yet they still struggle with complex, multi-page, visually-rich documents …

Federated Document Visual Question Answering: A Pilot Study

K Nguyen, D Karatzas - International Conference on Document Analysis …, 2024 - Springer
An important handicap of document analysis research is that documents tend to be
copyrighted or contain private information, which prohibits their open publication and the …

GeoContrastNet: Contrastive Key-Value Edge Learning for Language-Agnostic Document Understanding

N Biescas, C Boned, J Lladós, S Biswas - International Conference on …, 2024 - Springer
This paper presents GeoContrastNet, a language-agnostic framework for structured
document understanding (DU) that integrates a contrastive learning objective with graph …