
David Evans ARO Workshop on Adversarial Learning Stanford, 14 Sept 2017 PDF

54 Pages · 2017 · 25.41 MB · English

Preview

evadeML.org
Shrinking and Exploring Adversarial Search Spaces
David Evans, University of Virginia
with Weilin Xu and Yanjun Qi
ARO Workshop on Adversarial Learning, Stanford, 14 Sept 2017

Machine Learning is Eating Computer Science

Security State-of-the-Art
- Cryptography: random-guessing attack success probability 2^-128; threat models: information-theoretic, resource-bounded; proofs common.
- System Security: 2^-32; threat models: capabilities, motivations, rationality.
- Adversarial Machine Learning: 2^-11 to 2^-6; threat models: white-box, black-box; proofs rare!

Adversarial Examples
"panda" + 0.007 × [noise] = "gibbon"
Example from: Ian J. Goodfellow, Jonathon Shlens, Christian Szegedy. Explaining and Harnessing Adversarial Examples. 2014.

Adversarial Examples Game
Given a seed sample x, find x' where:
  f(x') ≠ f(x)      class is different (untargeted)
  f(x') = t         class is t (targeted)
  Δ(x, x') ≤ δ      difference below threshold
Δ(x, x') is defined in some (simple!) metric space: the L0 norm (# of features changed), the L1 norm, the L2 norm ("Euclidean"), or the L∞ norm.

Detecting Adversarial Examples ("Feature Squeezing")
The input is fed to the model directly (prediction 0) and through squeezers 1..k, each followed by a model (predictions 1..k). A function f(pred_0, pred_1, ..., pred_k) decides: adversarial (yes) or legitimate (no).

Feature Squeezing
  x  = [0.054, 0.4894, 0.9258, 0.0116, 0.2898, 0.5222, 0.5074, ...]
  x' = [0.0491, 0.4903, 0.9292, 0.009, 0.2942, 0.5243, 0.5078, ...]
Squeeze: f_i = round(f_i × 4)/4
Both squeeze to [0.0, 0.5, 1.0, 0.0, 0.25, 0.5, 0.5, ...], so
  squeeze(x) ≈ squeeze(x') ⟹ f(squeeze(x)) ≈ f(squeeze(x'))

Example Squeezers
- Reduce color depth (e.g., 8-bit color to 1-bit monochrome)
- Median smoothing (3×3): replace each pixel with the median of it and its neighbors

Simple Instantiation
Model: 7-layer CNN, with two squeezers: bit-depth reduction (prediction 1) and median smoothing (prediction 2).
Flag the input as adversarial when max(||p_0 − p_1||_1, ||p_0 − p_2||_1) > t.
[Histogram: number of legitimate and adversarial examples vs. maximum L1 distance between original and squeezed input, 0.0 to 2.0]
threshold = 0.0029; detection: 98.2%, FP < 4%
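The four norms listed on the "Adversarial Examples Game" slide can be computed directly; a minimal sketch with hypothetical feature vectors (the values below are illustrative, not from the slides):

```python
import numpy as np

# A seed sample x and a candidate adversarial sample x_adv (hypothetical values).
x = np.array([0.2, 0.5, 0.9, 0.1])
x_adv = np.array([0.2, 0.6, 0.7, 0.1])
diff = x_adv - x

l0 = np.count_nonzero(diff)       # L0: number of features changed
l1 = np.abs(diff).sum()           # L1: total absolute change
l2 = np.sqrt((diff ** 2).sum())   # L2: Euclidean distance
linf = np.abs(diff).max()         # L-infinity: largest single-feature change

print(l0, round(l1, 4), round(l2, 4), round(linf, 4))
```

The untargeted game is then: find x_adv with f(x_adv) ≠ f(x) while keeping one of these distances below the budget δ.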
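The bit-depth squeezer on the "Feature Squeezing" slide, f_i = round(f_i × 4)/4, maps both of the slide's nearby inputs to the same squeezed vector. A minimal sketch (the function name and the generalizing `levels` parameter are my own):

```python
import numpy as np

def squeeze_bit_depth(x, levels=4):
    # Quantize each feature in [0, 1] to the nearest multiple of 1/levels,
    # matching the slide's squeezer: f -> round(f * 4) / 4.
    return np.round(np.asarray(x) * levels) / levels

# The two nearby inputs from the slide:
x = np.array([0.054, 0.4894, 0.9258, 0.0116, 0.2898, 0.5222, 0.5074])
x_adv = np.array([0.0491, 0.4903, 0.9292, 0.009, 0.2942, 0.5243, 0.5078])

print(squeeze_bit_depth(x))      # [0.   0.5  1.   0.   0.25 0.5  0.5 ]
print(np.array_equal(squeeze_bit_depth(x), squeeze_bit_depth(x_adv)))  # True
```

Since both inputs squeeze to the identical vector, any model sees the same squeezed input for both, which is the premise of the detection scheme.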
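The second squeezer, 3×3 median smoothing, replaces each pixel with the median of itself and its neighbors. A minimal sketch assuming a 2-D grayscale image with edge-replicated borders (border handling is my assumption; the slide does not specify it):

```python
import numpy as np

def median_smooth(img, k=3):
    # Replace each pixel with the median of the k x k window around it,
    # padding the borders by edge replication.
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out

img = np.array([[0.0, 0.0, 0.0],
                [0.0, 1.0, 0.0],   # isolated "salt" pixel
                [0.0, 0.0, 0.0]])
print(median_smooth(img))  # the lone 1.0 is smoothed away to all zeros
```

In practice the same operation is available as `scipy.ndimage.median_filter`; the point of the squeezer is that small adversarial perturbations behave like the salt-and-pepper noise this filter removes.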
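The detection rule from the "Simple Instantiation" slide compares the model's prediction on the raw input against its predictions on the squeezed versions. A minimal sketch using the slide's threshold of 0.0029; the softmax vectors below are hypothetical placeholders, not the slide's data:

```python
import numpy as np

def is_adversarial(p0, squeezed_preds, threshold=0.0029):
    # Flag the input as adversarial when the maximum L1 distance between the
    # prediction on the raw input (p0) and the prediction on any squeezed
    # version exceeds the threshold (0.0029 on the slide's histogram).
    score = max(np.abs(p0 - p).sum() for p in squeezed_preds)
    return score > threshold

# Hypothetical softmax outputs for illustration:
p0_legit = np.array([0.98, 0.02])
p1_legit = np.array([0.979, 0.021])   # squeezing barely moves the prediction
p0_adv = np.array([0.30, 0.70])
p1_adv = np.array([0.95, 0.05])       # squeezing flips the prediction back

print(is_adversarial(p0_legit, [p1_legit]))  # False
print(is_adversarial(p0_adv, [p1_adv]))      # True
```

The intuition matches the histogram on the slide: legitimate inputs cluster near zero L1 distance, adversarial ones spread far above the threshold, giving 98.2% detection at under 4% false positives.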

Description:
David Evans, University of Virginia. ARO Workshop on Adversarial Learning, Stanford, 14 Sept 2017. Weilin Xu, Yanjun Qi. evadeML.org. [Chart: detection performance, 10%–100%, for the attacks JSMA (LL), JSMA (Next), CW0 (LL), CW0 (Next), CW2 (LL), CW2 (Next)]