
Intrusion Alert Analysis Framework Using Semantic Correlation, by Sherif Saad Mohamed Ahmed (PDF)

206 Pages · 2014 · 3.5 MB · English


Intrusion Alert Analysis Framework Using Semantic Correlation

by

Sherif Saad Mohamed Ahmed
B.Sc., Helwan University, 2003
M.Sc., Arab Academy for Science, Technology and Maritime Transport, 2007

A Dissertation Submitted in Partial Fulfillment of the Requirements for the Degree of DOCTOR OF PHILOSOPHY in the Department of Electrical and Computer Engineering

© Sherif Saad Mohamed Ahmed, 2014
University of Victoria

All rights reserved. This dissertation may not be reproduced in whole or in part, by photocopying or other means, without the permission of the author.

Supervisory Committee

Dr. Issa Traoré, Supervisor (Department of Electrical and Computer Engineering)
Dr. Kin Fun Li, Department Member (Department of Electrical and Computer Engineering)
Dr. Jens Weber, Outside Member (Department of Computer Science)

ABSTRACT

In the last several years the number of computer network attacks has increased rapidly, while at the same time the attacks have become more complex and sophisticated. Intrusion detection systems (IDSs) have become essential security appliances for detecting and reporting these complex and sophisticated attacks. Security officers and analysts need to analyze intrusion alerts in order to extract the underlying attack scenarios and attack intelligence, which allow them to take appropriate responses and design adequate defensive or prevention strategies. Intrusion analysis is a resource-intensive, complex, and expensive process for any organization.
The current generation of IDSs generates low-level intrusion alerts that describe individual attack events. In addition, existing IDSs tend to generate massive amounts of alerts with high rates of redundancies and false positives. Typical IDS sensors report attacks independently and are not designed to recognize attack plans or discover multistage attack scenarios. Moreover, not all the attacks executed against the target network will be detected by the IDS. False negatives, which correspond to the attacks missed by the IDS, will either make the reconstruction of the attack scenario impossible or lead to an incomplete attack scenario. For the above reasons, intrusion analysis is a challenging task that relies mainly on the analyst's experience and requires manual investigation.

In this dissertation, we address the above-mentioned challenges by proposing a new framework that allows automatic intrusion analysis and attack intelligence extraction by analyzing the alert and attack semantics using both machine learning and knowledge-representation approaches. In particular, we use ontological engineering, semantic correlation, and clustering methods to design a new automated intrusion analysis framework. The proposed alert analysis approach addresses many of the gaps observed in existing intrusion analysis techniques, and introduces, where needed, new metrics to measure the quality of the alert analysis process. We evaluated our framework experimentally using different benchmark intrusion detection datasets, yielding excellent performance results.

Contents

Supervisory Committee
Abstract
Table of Contents
List of Tables
List of Figures
Acknowledgements
Dedication

1 Introduction
  1.1 Context
  1.2 Limitations of Intrusion Detection Systems
    1.2.1 Alerts Flooding
    1.2.2 False Positives
    1.2.3 Interoperability Challenge
    1.2.4 Isolation
  1.3 Intrusion Alert Analysis
  1.4 Research Problem
  1.5 General Approach
  1.6 Research Contributions
  1.7 Dissertation Organization

2 Related Work
  2.1 Alert Verification
    2.1.1 Techniques based on Environmental Awareness
    2.1.2 Techniques based on Heuristics and Statistical Analysis
    2.1.3 Limitations of Existing Alert Verification Techniques
  2.2 Alert Aggregation
    2.2.1 Single-Sensor Alerts Aggregation
    2.2.2 Multi-Sensor Alerts Aggregation
    2.2.3 Limitations of Existing Alert Aggregation
  2.3 Attack Scenario Reconstruction
    2.3.1 Similarity and Data Mining Techniques
    2.3.2 Machine Learning Techniques
    2.3.3 Knowledge-based Techniques
    2.3.4 Limitations of Existing Alert Correlation Techniques
  2.4 Summary

3 Intrusion Alert Analysis
  3.1 Terminology
  3.2 IDS Alert Analysis Challenges
    3.2.1 Alert Analysis Correctness Challenges
    3.2.2 Alert Analysis Automation Challenges
  3.3 Proposed Alert Analysis Framework
  3.4 Alert Analysis Evaluation
  3.5 Summary

4 Knowledge-Based Alert Analysis
  4.1 Knowledge-Based System
  4.2 Ontology and Ontology Engineering
    4.2.1 What is an Ontology
    4.2.2 Ontology Engineering
  4.3 Proposed Intrusion Analysis Ontology
    4.3.1 Specification
    4.3.2 Conceptualization
    4.3.3 Formalization
    4.3.4 Validation
  4.4 Reasoning with Ontology
    4.4.1 Deductive Reasoning
    4.4.2 Inductive Reasoning
    4.4.3 Abductive Reasoning
  4.5 Semantic Analysis and Correlation
    4.5.1 Ontology-based Semantic Similarity
    4.5.2 Ontology-based Semantic Relevance
  4.6 Summary

5 Novel Alert Analysis Techniques
  5.1 Target Network Example
  5.2 IDS Alert Verification
    5.2.1 Alerts Context
    5.2.2 Alert Verification Using Nearest Neighbors Algorithm
    5.2.3 Alert Verification Using Rule Induction
  5.3 IDS Alert Aggregation
    5.3.1 A Lightweight Alert Aggregation Method
    5.3.2 Alerts Aggregation Using Semantic Similarity
    5.3.3 Information Loss Metric
  5.4 Attack Scenario Reconstruction
    5.4.1 Semantic-based Alerts Clustering
    5.4.2 Attack Causality Analysis
    5.4.3 Identifying Missing Attacks and False Negatives
  5.5 Summary

6 Experiments
  6.1 Benchmark IDS Datasets
  6.2 Evaluation Results
    6.2.1 Handling Massive IDS Alerts
    6.2.2 Performance Comparison Using DARPA IDS Dataset
  6.3 Summary

7 Conclusion
  7.1 Work Summary
  7.2 Future Work

Bibliography

List of Tables

Table 4.1 Taxonomic Relations and their Properties
Table 4.2 Example of Entry in the Relations Dictionary
Table 4.3 Example of Entry in the Class Dictionary
Table 4.4 Predicate Examples
Table 5.1 Description of the hosts in the target network
Table 5.2 Attack Semantic Features
Table 5.3 Target Semantic Features
Table 5.4 Example of labeled raw IDS alerts
Table 5.5 Example of unlabeled raw IDS alerts
Table 5.6 Semantic distances between unlabeled alert 1 in Table 5.5 and each labeled alert in Table 5.4
Table 5.7 Example of alert training set for rule induction
Table 5.8 Alert training set after applying the OBRI technique
Table 5.9 Unlabeled novel alert example
Table 5.10 Mapping Between Alert Verification and Immune System
Table 5.11 Raw IDS Alerts before Aggregation
Table 5.12 Summarizing Raw Alerts Using One Hybrid Alert
Table 5.13 Summarizing Raw Alerts Using Two Hybrid Alerts
Table 5.14 Alerts Examples
Table 5.15 IDS alerts generated by an FTP vulnerability exploitation attempt
Table 6.1 ISCX Intrusions Properties
Table 6.2 Numbers of false positives versus true positives in the ISCX dataset
Table 6.3 Alert Verification Using KNN and Semantic Similarity
Table 6.4 Alerts Verification Using Ontology and Rule Induction
Table 6.5 A Lightweight Alerts Aggregation Using Hill Climbing Approach
Table 6.6 Alerts Aggregation Based on Alerts Semantic Similarity
Table 6.7 DARPA 2000 DOS1.0 Dataset Statistics
Table 6.8 Semantic Similarity Threshold Vectors
Table 6.9 DARPA Semantic-based Alert Aggregation Results
Table 6.10 Comparison of alerts aggregation approaches using the DARPA 2000 dataset in their evaluation
Table 6.11 DARPA dataset preprocessing statistics
Table 6.12 Multi-sensor Alerts Aggregation Evaluation Results
Table 6.13 Comparison of Attack Scenario Reconstruction Approaches Using the LLDDOS1.0 Dataset
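Section 5.2.2 applies a nearest-neighbors vote over semantic alert features to verify alerts. The sketch below is only an illustration of that general idea, not the dissertation's implementation: the feature names, the mismatch-count distance, and the example alerts are all assumptions chosen for demonstration.

```python
# Illustrative sketch of nearest-neighbors alert verification over
# semantic features (not the dissertation's actual implementation).
from collections import Counter

def semantic_distance(a, b, features=("attack_type", "service", "target_os")):
    """Count mismatching semantic features; 0 means semantically identical."""
    return sum(1 for f in features if a.get(f) != b.get(f))

def knn_verify(alert, labeled_alerts, k=3):
    """Label an unlabeled alert by majority vote among its k semantically
    nearest labeled alerts."""
    neighbors = sorted(labeled_alerts,
                       key=lambda la: semantic_distance(alert, la))[:k]
    votes = Counter(la["label"] for la in neighbors)
    return votes.most_common(1)[0][0]

# Hypothetical labeled training alerts (cf. Tables 5.4 and 5.5).
labeled = [
    {"attack_type": "ftp_exploit", "service": "ftp", "target_os": "linux",
     "label": "true_positive"},
    {"attack_type": "ftp_exploit", "service": "ftp", "target_os": "windows",
     "label": "true_positive"},
    {"attack_type": "port_scan", "service": "any", "target_os": "linux",
     "label": "false_positive"},
]
new_alert = {"attack_type": "ftp_exploit", "service": "ftp", "target_os": "linux"}
print(knn_verify(new_alert, labeled, k=3))  # majority of neighbors are true positives
```

The same distance could feed the semantic alert aggregation of Section 5.3.2 by merging alerts whose pairwise distance falls below a threshold; the dissertation's actual features are ontology-derived rather than flat attribute comparisons.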


