Y Ma, H Yang - Journal of Machine Learning Research, 2024 - jmlr.org
In this work, we investigate the problem of public-data-assisted non-interactive Locally Differentially Private (LDP) learning with a focus on non-parametric classification. Under the …
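As illustration of the non-interactive LDP primitive such classification schemes typically build on, here is a minimal sketch in which each user perturbs a binary label once via randomized response and the server debiases the aggregate; the binary-label setting and the estimator below are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def randomized_response(label, epsilon, rng):
    """One-shot (non-interactive) LDP release of a binary label via randomized response."""
    p_keep = np.exp(epsilon) / (np.exp(epsilon) + 1.0)
    return label if rng.random() < p_keep else 1 - label

def debias_mean(reports, epsilon):
    """Server-side unbiased estimate of the true label mean from perturbed reports."""
    p = np.exp(epsilon) / (np.exp(epsilon) + 1.0)
    return (np.mean(reports) - (1.0 - p)) / (2.0 * p - 1.0)

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=10_000)                 # true binary labels held by users
reports = [randomized_response(y, 1.0, rng) for y in labels]
print(debias_mean(reports, 1.0), labels.mean())          # debiased estimate vs. true mean
```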
With the rise of large language models (LLMs), ensuring that they embody the principles of being helpful, honest, and harmless (3H), known as Human Alignment, has become crucial. While …
B Wang, D Feng, J Su, S Song - Mathematics, 2024 - mdpi.com
The proliferation of data across multiple domains necessitates the adoption of machine learning models that respect user privacy and data security, particularly in sensitive …
Deep learning models for NLP tasks are prone to variants of privacy attacks. To prevent privacy leakage, researchers have investigated word-level perturbations, relying on the …
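For reference, a word-level perturbation in the exponential-mechanism style swaps a word for a neighbor with probability decaying in embedding distance. The toy vocabulary, embeddings, and scoring below are hypothetical; practical schemes use pretrained embeddings and carefully calibrated metric-DP mechanisms.

```python
import numpy as np

# Toy vocabulary and 2-D embeddings (hypothetical; real systems use pretrained embeddings).
vocab = ["good", "great", "fine", "bad", "poor"]
emb = np.array([[0.9, 0.1], [1.0, 0.2], [0.7, 0.3], [-0.8, 0.1], [-0.9, 0.2]])

def perturb_word(word, epsilon, rng):
    """Replace a word with a neighbor sampled with probability proportional to
    exp(-epsilon * embedding distance): an exponential-mechanism-style perturbation."""
    i = vocab.index(word)
    dists = np.linalg.norm(emb - emb[i], axis=1)
    scores = np.exp(-epsilon * dists)
    probs = scores / scores.sum()
    return vocab[rng.choice(len(vocab), p=probs)]

rng = np.random.default_rng(0)
print([perturb_word("good", 2.0, rng) for _ in range(5)])
```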
L Zhu, A Manseur, M Ding, J Liu, J Xu… - arXiv preprint arXiv …, 2024 - arxiv.org
We study the problem of fitting the high-dimensional sparse linear regression model with sub-Gaussian covariates and responses, where the data are provided by strategic or self …
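The underlying statistical model here is standard sparse linear regression; a hedged sketch with a plain Lasso fit (ignoring the strategic-agent aspect, which is the paper's focus, and assuming scikit-learn is available) might look like:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, d, s = 200, 500, 5                      # n samples, d features, s-sparse truth
X = rng.normal(size=(n, d))                # sub-Gaussian covariates
beta = np.zeros(d)
beta[:s] = 1.0                             # sparse ground-truth coefficients
y = X @ beta + 0.1 * rng.normal(size=n)

model = Lasso(alpha=0.05).fit(X, y)        # l1 penalty recovers the sparse support
print(np.nonzero(model.coef_)[0][:10])
```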
We explore the use of distributed differentially private computations across multiple servers, balancing the tradeoff between the error introduced by the differentially private mechanism …
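As a sketch of the tradeoff described here: each server privatizes its local aggregate with Laplace noise before sharing, so the total error grows with the number of noisy releases and shrinks as the privacy budget epsilon grows. The helper below is illustrative, not the paper's protocol.

```python
import numpy as np

def noisy_local_sum(values, epsilon, sensitivity, rng):
    """Each server releases its local sum plus Laplace noise scaled to sensitivity/epsilon."""
    return values.sum() + rng.laplace(scale=sensitivity / epsilon)

rng = np.random.default_rng(0)
servers = [rng.random(1_000) for _ in range(5)]   # values in [0, 1], so sensitivity 1
epsilon = 0.5
estimate = sum(noisy_local_sum(v, epsilon, 1.0, rng) for v in servers)
true_total = sum(v.sum() for v in servers)
print(estimate, true_total)                       # error shrinks as epsilon grows
```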
J Su, J Xu, D Wang - Journal of Computer and System Sciences, 2024 - Elsevier
In this paper, we study the problem of PAC learning halfspaces in the non-interactive local differential privacy model (NLDP). To breach the barrier of exponential sample complexity …
In this paper, we study the Differentially Private Empirical Risk Minimization (DP-ERM) problem, considering both convex and non-convex loss functions. For cases where DP-ERM …
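A common baseline for convex DP-ERM is DP-SGD: clip per-example gradients and add Gaussian noise before each update. The sketch below uses logistic loss and illustrative hyperparameters; it is not the algorithm proposed in the cited work.

```python
import numpy as np

def dp_sgd_step(w, X, y, lr, clip, noise_mult, rng):
    """One DP-SGD step for logistic loss: clip per-example gradients, add Gaussian noise."""
    grads = []
    for xi, yi in zip(X, y):
        p = 1.0 / (1.0 + np.exp(-xi @ w))
        g = (p - yi) * xi                             # per-example gradient
        g = g / max(1.0, np.linalg.norm(g) / clip)    # clip to norm <= clip
        grads.append(g)
    noise = rng.normal(scale=noise_mult * clip, size=w.shape)
    return w - lr * (np.sum(grads, axis=0) + noise) / len(X)

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 5))
y = (X @ np.ones(5) + rng.normal(size=256) > 0).astype(float)
w = np.zeros(5)
for _ in range(200):
    w = dp_sgd_step(w, X, y, lr=0.5, clip=1.0, noise_mult=1.1, rng=rng)
print(w)
```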
J Li, D Simchi-Levi, Y Wang - arXiv preprint arXiv:2404.09413, 2024 - arxiv.org
The contextual bandit with linear reward functions is one of the most extensively studied models in bandit and online learning research. Recently, there has been increasing interest …
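For context, the textbook LinUCB approach to this model keeps a ridge-regression estimate per arm and adds an exploration bonus from the inverse covariance; a compact sketch (not the algorithm proposed in the cited work):

```python
import numpy as np

class LinUCB:
    """Linear contextual bandit: per-arm ridge-regression estimate plus a UCB exploration bonus."""
    def __init__(self, n_arms, dim, alpha=1.0):
        self.A = [np.eye(dim) for _ in range(n_arms)]    # X^T X + I per arm
        self.b = [np.zeros(dim) for _ in range(n_arms)]  # X^T r per arm
        self.alpha = alpha

    def choose(self, x):
        ucbs = []
        for A, b in zip(self.A, self.b):
            A_inv = np.linalg.inv(A)
            theta = A_inv @ b
            ucbs.append(theta @ x + self.alpha * np.sqrt(x @ A_inv @ x))
        return int(np.argmax(ucbs))

    def update(self, arm, x, reward):
        self.A[arm] += np.outer(x, x)
        self.b[arm] += reward * x

bandit = LinUCB(n_arms=3, dim=4)   # example instantiation with hypothetical dimensions
```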