Authors
Xinyang Zhang, Shouling Ji, Hui Wang, Ting Wang
Publication date
2017/6/5
Conference paper
2017 IEEE 37th International Conference on Distributed Computing Systems (ICDCS)
Pages
1442-1452
Publisher
IEEE
Description
In this paper, we consider the problem of multiparty deep learning (MDL), wherein autonomous data owners jointly train accurate deep neural network models without sharing their private data. We design, implement, and evaluate ∝MDL, a new MDL paradigm built upon three primitives: asynchronous optimization, lightweight homomorphic encryption, and threshold secret sharing. Compared with prior work, ∝MDL departs in significant ways: a) besides providing explicit privacy guarantee, it retains desirable model utility, which is paramount for accuracy-critical domains; b) it provides an intuitive handle for the operator to gracefully balance model utility and training efficiency; c) moreover, it supports delicate control over communication and computational costs by offering two variants, operating under loose and tight coordination respectively, thus optimizable for given system settings (e.g., limited versus sufficient …
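Of the three primitives named above, threshold secret sharing is the most self-contained to illustrate. The sketch below is a minimal Shamir-style (t, n)-threshold scheme over a prime field, not the paper's actual ∝MDL protocol: a secret is split into n shares so that any t of them reconstruct it, while fewer reveal nothing. The field modulus and parameter choices are illustrative assumptions.

```python
# Minimal Shamir (t, n)-threshold secret sharing sketch (illustrative only,
# not the protocol from the paper): split an integer secret into n shares
# such that any t shares reconstruct it via Lagrange interpolation.
import random

PRIME = 2**127 - 1  # demo field modulus; must exceed any secret shared


def split_secret(secret: int, t: int, n: int) -> list[tuple[int, int]]:
    """Return n points (x, y) on a random degree-(t-1) polynomial
    whose constant term is the secret."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(t - 1)]

    def eval_poly(x: int) -> int:
        acc = 0
        for c in reversed(coeffs):  # Horner's rule
            acc = (acc * x + c) % PRIME
        return acc

    return [(x, eval_poly(x)) for x in range(1, n + 1)]


def reconstruct(shares: list[tuple[int, int]]) -> int:
    """Lagrange-interpolate the polynomial at x = 0 over the prime field."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret


shares = split_secret(123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789   # any 3 shares suffice
assert reconstruct(shares[2:]) == 123456789
```

Any subset of at least t shares recovers the secret, which is what lets a multiparty system tolerate dropped or slow participants while keeping individual contributions private.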
Total citations
Scholar articles
X Zhang, S Ji, H Wang, T Wang - 2017 IEEE 37th International Conference on …, 2017