
Adversarial Machine Learning: Attack Surfaces, Defence Mechanisms, Learning Theories in Artificial Intelligence PDF

314 Pages·2023·2.556 MB·English


Adversarial Machine Learning

Aneesh Sreevallabh Chivukula • Xinghao Yang • Bo Liu • Wei Liu • Wanlei Zhou

Adversarial Machine Learning: Attack Surfaces, Defence Mechanisms, Learning Theories in Artificial Intelligence

Aneesh Sreevallabh Chivukula
Department of Computer Science & Information Systems
BITS Pilani Hyderabad Campus
Secunderabad, Hyderabad, Telangana, India

Xinghao Yang
Computer Science
University of Technology Sydney
Sydney, NSW, Australia

Bo Liu
Computer Science
University of Technology Sydney
Sydney, NSW, Australia

Wei Liu
Computer Science
University of Technology Sydney
Sydney, NSW, Australia

Wanlei Zhou
Computer Science
University of Technology Sydney
Sydney, NSW, Australia

ISBN 978-3-030-99771-7
ISBN 978-3-030-99772-4 (eBook)
https://doi.org/10.1007/978-3-030-99772-4

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2023

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
This Springer imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.

Preface

A significant robustness gap exists between machine intelligence and human perception despite recent advances in deep learning. Deep learning is not provably secure. A critical challenge in deep learning is the vulnerability of deep learning networks to security attacks from malicious adversaries. Even innocuous perturbations to the training data can be used to manipulate the behavior of the deep network in unintended ways. For example, autonomous AI agents in unmanned autonomous systems such as self-driving vehicles can play multistage cyber deception games with the learning algorithms. Adversarial deep learning algorithms are specifically designed to exploit such vulnerabilities in deep networks. These vulnerabilities are simulated by training the learning algorithm under various attack scenarios. The attack scenarios are assumed to be formulated by an intelligent adversary. The optimal attack policy is formulated by solving optimization problems. The attack scenarios have led to the development of adversarial attack technologies in computer vision, natural language processing, and cybersecurity, on multidimensional, textual, image, sequence, and spatial data.

In discriminative learning models, adversarial learning problems are formulated with deep neural networks computing statistical divergence metrics between training data features and adversarial data features. Latent space on high-dimensional training data can also be searched by deep networks to construct adversarial examples. Depending on the goal, knowledge, and capability of an adversary, adversarial examples can be crafted by prior knowledge, observation, and experimentation on the loss functions in deep learning. Adversarial examples are known to transfer between data-specific manifolds of deep learning models. Thus the predictive performance of deep learning models under attack is an interesting area for research.
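The crafting of adversarial examples by experimentation on loss functions can be pictured with a minimal sketch of the standard fast gradient sign method (FGSM) against a toy logistic classifier. The weights, input, and perturbation budget eps below are illustrative assumptions, not values from the book:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

def fgsm_perturb(x, y, w, b, eps):
    """Craft x' = x + eps * sign(grad_x L) for the logistic loss
    L = -[y log p + (1 - y) log(1 - p)] with p = sigmoid(w.x + b);
    the gradient of L w.r.t. input feature x_i is (p - y) * w_i."""
    p = sigmoid(dot(w, x) + b)
    return [xi + eps * sign((p - y) * wi) for xi, wi in zip(x, w)]

# Toy classifier and a correctly classified input (illustrative values).
w, b = [2.0, -1.0], 0.0
x, y = [0.5, -0.5], 1.0              # decision score 1.5 -> class 1
x_adv = fgsm_perturb(x, y, w, b, eps=1.0)
print(dot(w, x) + b > 0)             # True: original classified as class 1
print(dot(w, x_adv) + b > 0)         # False: the perturbation flips the label
```

The same one-step recipe underlies attacks on deep networks, where the input gradient is obtained by backpropagation instead of the closed form used here.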
Randomized adversarial algorithms for discrimination can be extended with efficiency, complexity, reliability, learnability, etc. tradeoffs in the game theoretical optimization. The resultant convergence properties of game theoretical optima can be investigated with adaptive dynamic programming to produce numerical computational methods for adversarial deep learning.

The existing adversarial learning algorithms differ in design assumptions regarding adversary's knowledge, attack strategies, attack influence, and security violation. In this book, we conduct a literature review to provide new insights on the relation between adversarial learning and cyber attacks. We contrast the adversarial threats found in the learning assumptions of machine learning models as well as attack vectors in deep learning models. We also seek to survey and summarize non-stationary data representations and concept classes learnt by adversarial deep learning networks with respect to the sensitivity landscape and loss functions in each application domain. The robustness of the adversarial deep learning networks has been surveyed to produce a taxonomy of adversarial examples characterizing the defense of learning systems with game theoretical adversarial learning algorithms. The game theoretic learning profiles analyze adversarial robustness of the learning system with respect to adversary's objectives, assumptions, models, etc. in a dynamic optimization of the learning robustness and its solution stability over a changing fitness landscape.

We then review the use of game theory, convex optimization, and stochastic optimization in securing the adversarial deep learning formulations by providing algorithmic comparisons summarizing the theories and applications of game theoretical adversarial deep learning. Another interesting study is that of defence mechanisms available for deep learning models deployed in real-world environments.
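The convergence of game theoretical optima can be illustrated on the simplest zero-sum game, f(x, y) = x·y, where the learner minimizes over x and the adversary maximizes over y. The sketch below uses the extragradient method (a standard technique, not necessarily the book's algorithm); the step size and iteration count are illustrative assumptions:

```python
def extragradient_saddle(x, y, eta=0.5, steps=200):
    """Approximate the saddle point of f(x, y) = x * y.

    Plain simultaneous gradient play cycles forever on this bilinear game;
    the extragradient lookahead step makes the iterates contract toward
    the unique equilibrium (0, 0)."""
    for _ in range(steps):
        # Lookahead (half) step from the current point.
        x_h = x - eta * y          # learner descends df/dx = y
        y_h = y + eta * x          # adversary ascends df/dy = x
        # Full step using gradients evaluated at the lookahead point.
        x, y = x - eta * y_h, y + eta * x_h
    return x, y

x_star, y_star = extragradient_saddle(1.0, 1.0)
print(abs(x_star) < 1e-3 and abs(y_star) < 1e-3)  # converged near (0, 0)
```

The lookahead is what distinguishes this from naive gradient descent-ascent; it is one of the numerical devices used to stabilize game theoretical training dynamics.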
We propose future research directions in adversarial learning applications specialized to data analytics models applicable to cybersecurity, deep learning, and artificial intelligence. They can realize the practical verifications, numerical approximations, and formal specifications of adversarial deep learning integrated into complex systems. Computational intelligence techniques such as multitask learning and multiobjective optimization relevant to the adversarial loss design are also summarized. We thus propose to bound the attacker's gain under an optimal policy with respect to formal as well as empirical verification in the game theoretic models extensible into learning system design techniques.

From a data privacy perspective, we review the cybersecurity risks, threats, and vulnerabilities in privacy preservation and physical world attacks. Detection and response options are provided for specific deep learning algorithms, attacks and threats in complex learning systems, adversarial deep learning, robust optimization, and intelligent control. Such research themes are applicable to resilient systems design with privacy preserving data mining to analyze the threat data, metadata, and attack patterns. They can also be used in the study of data quality and provenance of shared information in adversarial data mining to produce IoT systems with defences in the learning algorithms that combine security algorithmics with privacy systemics to produce cybersecurity capacities. Then security orchestrations can provision cybersecurity solutions as a service on the Internet for reliable access to real-world machine learning systems.

Further, we contrast the existing literature with recent research into game theoretical adversarial deep learning. We have studied adversarial attacks on game-theoretic learning models involving evolutionary adversaries, stochastic adversaries, and variational adversaries targeting the misclassification performance of deep neural networks and convolutional neural networks.
Our game theoretical adversarial deep learning is applicable to cyberspace security classification problems in the training stage and testing stage. Such learning problems study feature manipulations, misclassification costs, and distributional robustness in adversarial learning applications. The adversarial loss functions and training procedures in recent research are applicable to the study of trustworthiness of deep learning in deployment. They can simulate the cyberspace security safeguards, risks, and challenges in cyber-physical systems as computational algorithm design and statistical inference analysis problems.

This book is relevant for adversarial machine learning practitioners and adversarial artificial intelligence researchers working in the design and application of adversarial deep learning theories in machine learning, deep learning, data mining, and knowledge discovery algorithm design. Particular emphasis is placed on the real-world application domains of adversarial deep learning in the development of data science, big data analytics, and cybersecurity solutions. The adversarial deep learning theories are summarized with reference to capabilities of computational algorithms in pattern recognition, game theory, computational mathematics, and numerical analysis. The resultant analytics algorithmics, deep neural networks, and adversarial loss functions review the state of the art in the implementation of adversarial algorithms, their attack surfaces, concepts, and methods from the perspective of game theoretical machine learning. The book explores the systems theoretic dependence between randomization in adversarial manipulations and generalizability in black-box optimizations of the game theoretical adversarial deep learning. It aids future research, design, development, and innovations in the game theoretical adversarial deep learning algorithms applicable to cyberspace security data mining problems.
The book also serves as a reference on the existing literature that can be implemented by researchers as baseline models to empirically compare the relevant attack scenarios and defense mechanisms for adversarial deep learning. The known invasive techniques and their countermeasures to develop future cybersecurity capabilities are reviewed. The security issues and vulnerabilities in the machine/deep learning solutions are mainly located within the deep layers of mathematical formulation and mechanism of the learning methods. The game theoretical formulations of the adversarial learning in the book leverage deep learning and big data to solve for adversarial samples that effect data manipulation on the learnt discriminative learning baselines. Several such learning baselines must be built to generate an adversary's attack hypothesis and consequent defense mechanisms available for adjusting the decision boundaries in discriminative learning. Thus the research questions covered in the book can set the stage for strategies and expectations in the adversarial deep learning capabilities offered around cyber adversaries' Tools, Tactics, Techniques, and Procedures (TTPs) in the cyber kill chain. They can assess, prioritize, and select the high-risk use case scenarios of cyber threats targeting deep learning models in security detection/prevention layers.

One significant barrier to the widespread adoption of deep learning methods is their complexity in both learning and reasoning phases, which makes it difficult to understand and test the potential vulnerabilities and also suitable mitigations. Learning from data for decision making within the cyberspace domain is still a current and important challenge due to its complexity in design and development. This challenge is also interwoven with complexities from adversarial attacks targeting manipulated results for machine/deep learning models.
The resilience of the machine learning models is a critical component for trustworthy systems in cybersecurity and artificial intelligence, but one that is poorly understood and investigated by the mainstream security research and industry community. The book provides a survey of the security evaluation of machine learning algorithms with the design-for-security paradigm of adversarial learning to complement the classical design-for-performance paradigm of machine learning. The security evaluation is useful for the purpose of alleviating prediction bias in machine learning systems according to the security attributes defined for a given adversarial learning model's algorithmics operational in dynamic learning environments. Formalized adversarial learning assumptions around the attack surface then construct adversarial deep learning designs with reference to signal processing characteristics in the robustness properties of machine learning systems' TTPs.

This book begins with a review of adversarial machine learning in Chap. 1 along with a comparison of new versus existing approaches to game theoretical adversarial machine learning. Chapter 2 positions our research contributions in contrast to related literature on (i) adversarial security mechanisms and generative adversarial networks, (ii) adversarial examples for misleading deep classifiers and game theoretical adversarial deep learning models, and (iii) adversarial examples in transfer learning and domain adaptation for cybersecurity.

The adversarial attack surfaces for the security and privacy tradeoffs in adversarial deep learning are given in Chap. 3. They summarize the cyber, physical, active, and passive attack surfaces in interdependent, interconnected, and interactive security-critical environments for learning systems. Such attack surfaces are increasing vertically in numbers and volumes, and horizontally in types and functionality, over the Internet, social networks, smartphones, and IoT devices.
Autonomic security in self-protecting and self-healing threat mitigation strategies must consider such attack surfaces in control mechanisms of the networking domains to identify threats and choose appropriate machine learning and data mining methods for adversarial learning.

Chapter 4 describes game theoretical adversarial deep learning. The computational algorithms in our research are contrasted with stochastic optimization techniques in the game theory literature. Several game formulations are illustrated with examples to construct cost-sensitive adversaries for adversarial data mining. Proper quantification of the hypothesis set in decision problems of this research leads us into various functional problems, oracular problems, sampling tasks, and optimization problems in the game theoretical adversarial learning. We can then develop a theory of sample complexity, formal verification, and fuzzy automata in the adversarial models with reliable guarantees. The resultant sampling dynamics are applicable to the Adversarial Signal Processing of soft matching patterns and their feature embeddings in cybersecurity attack scenarios and defense mechanisms. In terms of information-theoretic efficiency of machine learning, this is a study of the sample complexity of the function classes in adversarial learning games to devise each attack scenario as a black-box attack where the adversaries have no prior knowledge of the deep learning training processes and their best response strategies.

Chapter 5 presents theories and algorithms for adversarial deep learning. These algorithmics can also be used to check the learning system specifications for consistency and applicability to merge the attack data and harden the specifications into a new adversarial learning model with vulnerability assessment metrics, protocols, and countermeasure fusions. Example applications of the adversarial attacks due to game theoretical adversarial deep learning proposed in our research are presented in Chap. 6. We work in the context of statistical spam and autonomous systems applications with images and videos.
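The black-box setting described for Chap. 4, where the adversary has no prior knowledge of the training process and can only query predictions, can be sketched minimally as query-only random search for an evasive input. The linear target model, perturbation bound, and query budget below are illustrative assumptions, not from the book:

```python
import random

def blackbox_random_search(predict, x, y_true, eps=0.5, queries=500, seed=0):
    """Query-only (black-box) evasion attack: sample perturbations of x
    inside an L-infinity ball of radius eps and keep the first trial that
    the model misclassifies. The adversary never sees gradients or weights,
    only the predictions returned by `predict`."""
    rng = random.Random(seed)
    for _ in range(queries):
        trial = [xi + rng.uniform(-eps, eps) for xi in x]
        if predict(trial) != y_true:   # success: the prediction flipped
            return trial
    return None                        # query budget exhausted

# Illustrative target: a fixed linear classifier the attacker cannot open.
def predict(x):
    return 1 if 2.0 * x[0] - 1.0 * x[1] > 0 else 0

x = [0.2, 0.1]                         # decision score 0.3 -> class 1
x_adv = blackbox_random_search(predict, x, y_true=1)
print(x_adv is not None and predict(x_adv) == 0)  # attack found in budget
```

Practical black-box attacks replace blind random search with zeroth-order gradient estimates or transfer from surrogate models, but the query-only threat model is the same.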
But we have found literature in several cybersecurity analytics applications for adversarial deep learning in real-world domains. For instance, it is applicable in cryptanalysis, steganalysis, IoT malware, synthetic data generators, network security, biometrics recognition, object detection, virtual assistants, cyber-physical control systems, phishing detection, computational red teaming, natural language generation, etc. But the data analytics results from adversarial data mining are not always formulated in terms of game theoretical modelling and optimization, although game theory provides an excellent abstraction for generative-discriminative modelling in adversarial deep learning that is intractable in shallow architectures for machine learning.

Chapter 7 develops a discussion on the utilization of adversarial learning in privacy enhancing technologies. By defining the trust, resilience, and agility ontologies for each threat agent, the privacy preserving data mining techniques can extend our research in game theoretical adversarial deep learning to operate in accordance with the privacy-by-design paradigm for contractual, statutory, and regulatory requirements regarding the use of computing and internet technologies in machine learning. We can produce security and dependability metrics ontologies to reflect the quality of an adversarial system with respect to its privacy functionality, performance, dependability, coupled with security costs and complexities, transparency and fairness, interpretability, and explainability in modelling the adversarial AI agents within multivector, multistage, and hybrid kill-chain strategies for cyberattacks. Computational difficulties for measuring utility and associated information loss can be addressed in game theory to provision security service offerings satisfying lightweightness, heterogeneity, early detection of attacks, high availability, high accuracy, high reliability, fault tolerance, resilience, robustness, scalability, and energy efficiency.
Such adversarial AI agents can discover new attacks and learn over time to respond better to threats in cybersecurity, as seen in intelligent scanners, firewalls, anti-malware, intelligent espionage tools, and autonomous weapons.

Secunderabad, Hyderabad, Telangana, India    Aneesh Sreevallabh Chivukula
Qingdao, Shandong, China    Xinghao Yang
Sydney, NSW, Australia    Bo Liu
Sydney, NSW, Australia    Wei Liu
Sydney, NSW, Australia    Wanlei Zhou

Contents

1 Adversarial Machine Learning
  1.1 Adversarial Learning Frameworks
    1.1.1 Adversarial Algorithms Comparisons
  1.2 Adversarial Security Mechanisms
    1.2.1 Adversarial Examples Taxonomies
  1.3 Stochastic Game Illustration in Adversarial Deep Learning
2 Adversarial Deep Learning
  2.1 Learning Curve Analysis for Supervised Machine Learning
  2.2 Adversarial Loss Functions for Discriminative Learning
  2.3 Adversarial Examples in Deep Networks
  2.4 Adversarial Examples for Misleading Classifiers
  2.5 Generative Adversarial Networks
  2.6 Generative Adversarial Networks for Adversarial Learning
    2.6.1 Causal Feature Learning and Adversarial Machine Learning
    2.6.2 Explainable Artificial Intelligence and Adversarial Machine Learning
    2.6.3 Stackelberg Game Illustration in Adversarial Deep Learning
  2.7 Transfer Learning for Domain Adaptation
    2.7.1 Adversarial Examples in Transfer Learning
    2.7.2 Adversarial Examples in Domain Adaptation
    2.7.3 Adversarial Examples in Cybersecurity Domains
3 Adversarial Attack Surfaces
  3.1 Security and Privacy in Adversarial Learning
    3.1.1 Linear Classifier Attacks
  3.2 Feature Weighting Attacks
  3.3 Poisoning Support Vector Machines
  3.4 Robust Classifier Ensembles
  3.5 Robust Clustering Models
