The alternating direction method of multipliers (ADMM) is a popular method for designing distributed versions of machine learning algorithms, whereby local computations are performed on local data and the outputs are exchanged among neighbors in an iterative fashion. During this iterative process, private information contained in the local data can leak through the exchanged results.
This paper proposes a differentially private Alternating Direction Method of Multipliers algorithm (P-ADMM) that keeps data private during the iterative process in which, in distributed machine learning, each agent exchanges the results computed on its local dataset with neighboring agents. We prove that the algorithm achieves the same convergence rate as its non-private counterpart, and experimental results show that P-ADMM performs best among existing ADMM-based differentially private algorithms.
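The pattern of exchanging noise-perturbed local results among agents can be sketched on a toy consensus problem. The snippet below is an illustrative sketch only, not the paper's P-ADMM algorithm: each of `N` agents holds a private scalar, they run standard consensus ADMM on quadratic local objectives, and Gaussian noise is added to each shared message before aggregation (a common perturbation mechanism in differentially private ADMM variants). All variable names and parameter values here are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Each of N agents holds a private local value a_i; the consensus
# optimum of sum_i 0.5*(x - a_i)^2 is the mean of the a_i.
N = 5
a = rng.normal(loc=3.0, scale=1.0, size=N)

rho = 1.0      # ADMM penalty parameter (illustrative choice)
sigma = 0.05   # std of the Gaussian perturbation added to shared messages
T = 200        # number of ADMM iterations

x = np.zeros(N)   # local primal variables
u = np.zeros(N)   # scaled dual variables
z = 0.0           # global consensus variable

for t in range(T):
    # Local x-update: closed form for the quadratic local objective
    x = (a + rho * (z - u)) / (1.0 + rho)
    # Agents share x_i + u_i, perturbed with Gaussian noise before exchange
    shared = x + u + rng.normal(scale=sigma, size=N)
    z = shared.mean()
    # Dual update on each agent
    u = u + x - z

print(f"consensus estimate: {z:.3f}, true mean: {a.mean():.3f}")
```

With the noise, the iterates fluctuate around the non-private solution instead of converging exactly to it; the tradeoff between the noise scale (privacy) and this residual error is exactly what differentially private ADMM analyses, including the convergence-rate result claimed above, quantify.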