Application of the particle swarm algorithm in defending against membership inference attacks
First published: 2020-03-25
Abstract: Recently, with the continued spread and development of big data and machine learning technologies, privacy leakage of training data has become an increasingly serious problem. A membership inference attack, given a data record and only query access to a model's API (with no access to its internal structure or parameters), can determine whether that record was part of the model's training set. To defend against this attack, this paper starts from the data itself: building on a thorough analysis of the attack's characteristics, it draws on the migration form of the particle swarm algorithm and proposes a particle swarm sample migration algorithm that generates a new dataset from the original one. The model is then trained on the new dataset, so that it never directly touches the original data, thereby protecting data privacy. On the MNIST dataset, the proposed method defends well against membership inference attacks at a controlled accuracy loss of 3%.
Keywords: machine learning; privacy protection; membership inference attack; particle swarm optimization
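The paper itself does not spell out the update rule in this abstract, but the idea it describes, using a PSO-style migration step to move each training sample toward aggregate class statistics and away from its exact original values, can be sketched as below. All names (`pso_sample_migration`) and the particular choice of "bests" (class centroid as the global attractor, a shuffled same-class peer as the personal attractor) are illustrative assumptions, not the authors' published algorithm.

```python
import numpy as np

def pso_sample_migration(X, y, w=0.5, c1=1.5, c2=1.5, steps=5, seed=0):
    """Illustrative sketch (assumed form, not the paper's exact method):
    treat each sample as a particle and apply a PSO-style velocity update
    that pulls it toward a random same-class peer and the class centroid,
    producing surrogate training data that differs from the originals."""
    rng = np.random.default_rng(seed)
    X_new = X.astype(float).copy()
    v = np.zeros_like(X_new)          # per-sample "velocity"
    for _ in range(steps):
        for cls in np.unique(y):
            idx = np.where(y == cls)[0]
            centroid = X_new[idx].mean(axis=0)        # class-level attractor
            peers = X_new[rng.permutation(idx)]       # shuffled same-class peers
            r1, r2 = rng.random(2)
            v[idx] = (w * v[idx]
                      + c1 * r1 * (peers - X_new[idx])
                      + c2 * r2 * (centroid - X_new[idx]))
            X_new[idx] += v[idx]
    return X_new
```

A downstream model would then be trained on `pso_sample_migration(X_train, y_train)` instead of `X_train`, so that membership queries against the model probe the migrated surrogates rather than the original records.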