
Security and Privacy in Machine Learning PDF

99 Pages · 2017 · 6.77 MB · English
by Nicolas Papernot

Preview: Security and Privacy in Machine Learning

Slide 2: Security and Privacy in Machine Learning
Nicolas Papernot (@NicolasPapernot)
Research done at Pennsylvania State University & Google Brain
December 2017 - Tutorial at IEEE WIFS, Rennes, France

Thank you to my collaborators: Patrick McDaniel (Penn State), Martín Abadi (Google Brain), Alexey Kurakin (Google Brain), Pieter Abbeel (Berkeley), Praveen Manoharan (CISPA), Michael Backes (CISPA), Ilya Mironov (Google Brain), Dan Boneh (Stanford), Ananth Raghunathan (Google Brain), Z. Berkay Celik (Penn State), Arunesh Sinha (U of Michigan), Yan Duan (OpenAI), Shuang Song (UCSD), Úlfar Erlingsson (Google Brain), Ananthram Swami (US ARL), Matt Fredrikson (CMU), Kunal Talwar (Google Brain), Ian Goodfellow (Google Brain), Kathrin Grosse (CISPA), Florian Tramèr (Stanford), Sandy Huang (Berkeley), Michael Wellman (U of Michigan), Somesh Jha (U of Wisconsin), Xi Wu (Google)

Slide 3: The attack surface

Slide 4: Machine Learning
[Figure: an input x passes through a classifier f(x,θ) and comes out as a probability vector [p(0|x,θ), p(1|x,θ), ..., p(9|x,θ)], e.g. [0.01, 0.84, 0.02, 0.01, 0.01, 0.01, 0.05, 0.01, 0.03, 0.01].]
Classifier: map inputs to one class among a predefined set.

Slide 5: Machine Learning
[Figure: training examples paired with one-hot label vectors such as [0 1 0 0 0 0 0 0 0 0].]
Learning: find internal classifier parameters θ that minimize a cost/loss function (~model error).
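Slides 4 and 5 define the two ingredients the rest of the tutorial builds on: a classifier f(x,θ) that maps an input to a probability vector over classes, and learning as the search for parameters θ that minimize a cost/loss function. The sketch below is not from the tutorial; it is a minimal NumPy illustration of that setup, assuming a linear softmax classifier trained by plain gradient descent on hypothetical toy data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: three Gaussian blobs in R^2, one per class.
n_per_class, n_classes = 100, 3
centers = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 3.0]])
X = np.vstack([rng.normal(c, 0.5, size=(n_per_class, 2)) for c in centers])
y = np.repeat(np.arange(n_classes), n_per_class)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def f(X, theta):
    """Classifier f(x, theta): returns p(class | x, theta) for each input."""
    W, b = theta
    return softmax(X @ W + b)

def cross_entropy(probs, y):
    """Cost/loss function: average negative log-likelihood of the true class."""
    return -np.log(probs[np.arange(len(y)), y]).mean()

# Learning: find theta that (approximately) minimizes the loss via gradient descent.
W = np.zeros((2, n_classes))
b = np.zeros(n_classes)
lr = 0.5
for step in range(200):
    probs = f(X, (W, b))
    # Gradient of cross-entropy w.r.t. the logits is (probs - one_hot(y)) / N.
    grad_logits = probs.copy()
    grad_logits[np.arange(len(y)), y] -= 1.0
    grad_logits /= len(y)
    W -= lr * (X.T @ grad_logits)
    b -= lr * grad_logits.sum(axis=0)

print("final loss:    ", cross_entropy(f(X, (W, b)), y))
print("train accuracy:", (f(X, (W, b)).argmax(axis=1) == y).mean())
```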
Slide 6: Adversarial goals
Confidentiality and Privacy
● Confidentiality of the model itself (e.g., intellectual property)
  ○ Tramèr et al. Stealing Machine Learning Models via Prediction APIs
● Privacy of the training or test data (e.g., medical records)
  ○ Fredrikson et al. Model Inversion Attacks that Exploit Confidence Information
  ○ Shokri et al. Membership Inference Attacks Against Machine Learning Models
  ○ Chaudhuri et al. Privacy-preserving Logistic Regression
  ○ Song et al. Stochastic Gradient Descent with Differentially Private Updates
  ○ Abadi et al. Deep Learning with Differential Privacy
  ○ Papernot et al. Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data
Integrity and Availability
● Integrity of the predictions (w.r.t. the expected outcome)
● Availability of the system deploying machine learning

Slide 7: Adversarial capabilities for integrity attacks
Training time
  ○ Kearns et al. Learning in the Presence of Malicious Errors
  ○ Biggio et al. Support Vector Machines under Adversarial Label Noise
  ○ Kloft et al. Online Anomaly Detection under Adversarial Impact
Inference time
● White-box attacker:
  ○ Szegedy et al. Intriguing Properties of Neural Networks
  ○ Biggio et al. Evasion Attacks against Machine Learning at Test Time
  ○ Goodfellow et al. Explaining and Harnessing Adversarial Examples
● Black-box attacker:
  ○ Dalvi et al. Adversarial Learning
  ○ Szegedy et al. Intriguing Properties of Neural Networks
  ○ Xu et al. Automatically Evading Classifiers: A Case Study on PDF Malware Classifiers
(Slide 8 repeats slide 7.)

Slide 9: Part I - Security in machine learning

Slide 10: Adversarial examples (white-box attacks)
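Slide 10 opens the discussion of white-box adversarial examples, and slide 7 cites Goodfellow et al., whose fast gradient sign method perturbs an input along the sign of the gradient of the loss with respect to that input: x_adv = x + eps * sign(grad_x L(x, y)). The following is a minimal sketch of that idea, not code from the tutorial; it assumes a hypothetical linear softmax model whose parameters the (white-box) attacker knows, and the value of eps is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# Hypothetical white-box setting: the attacker knows the model parameters.
# Here the "model" is a linear softmax classifier with random weights.
d, n_classes = 10, 3
W = rng.normal(size=(d, n_classes))
b = rng.normal(size=n_classes)

def predict_proba(x):
    return softmax(x @ W + b)

def loss_grad_wrt_input(x, y_true):
    """Gradient of the cross-entropy loss w.r.t. the input x (chain rule through the logits)."""
    grad_logits = predict_proba(x)
    grad_logits[y_true] -= 1.0
    return W @ grad_logits

def fgsm(x, y_true, eps=0.5):
    """Fast gradient sign method: x_adv = x + eps * sign(grad_x loss)."""
    return x + eps * np.sign(loss_grad_wrt_input(x, y_true))

x = rng.normal(size=d)
y_true = int(np.argmax(predict_proba(x)))  # treat the model's own label as ground truth
x_adv = fgsm(x, y_true)

print("clean prediction:      ", y_true)
print("adversarial prediction:", int(np.argmax(predict_proba(x_adv))))
```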

Description:
From slide 37, "Attacking remotely hosted black-box models": the figure shows a remote ML system, a local substitute model, and a "yield sign" input. (4) The adversary then uses the local substitute to craft adversarial examples, which are misclassified by the remote ML system because of transferability.
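The excerpt above shows only step (4) of the black-box attack; the earlier steps (querying the remote system to label synthetic inputs and training the local substitute on those labels) are assumed here from the transferability idea the description names. In the sketch below, remote_predict is a hypothetical stand-in for a remotely hosted prediction API, the substitute is a scikit-learn logistic regression, and the final step is an FGSM-style perturbation computed on the substitute; none of this is code from the tutorial.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical black-box oracle: a hidden linear decision rule the attacker
# cannot inspect. Only hard labels come back, no gradients and no parameters.
d = 20
_w_hidden = rng.normal(size=d)

def remote_predict(X):
    """Stand-in for a remote prediction API (hypothetical)."""
    return (X @ _w_hidden > 0).astype(int)

# Query the remote system on synthetic inputs, then train a local substitute
# on the returned labels.
X_query = rng.normal(size=(500, d))
y_query = remote_predict(X_query)
substitute = LogisticRegression(max_iter=1000).fit(X_query, y_query)

def fgsm_on_substitute(x, eps=0.5):
    """Craft an adversarial example on the local substitute (step 4).

    For a linear substitute, the FGSM direction reduces to moving against the
    sign of the weight vector when the predicted label is 1, and with it when
    the predicted label is 0, i.e. pushing the point across the boundary.
    """
    w = substitute.coef_[0]
    y = substitute.predict(x.reshape(1, -1))[0]
    direction = -np.sign(w) if y == 1 else np.sign(w)
    return x + eps * direction

x = rng.normal(size=d)
x_adv = fgsm_on_substitute(x)

# If the perturbation transfers, the remote system misclassifies x_adv too.
print("remote label (clean):      ", remote_predict(x.reshape(1, -1))[0])
print("remote label (adversarial):", remote_predict(x_adv.reshape(1, -1))[0])
```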