Performance Comparison of Apache Spark MLlib (preview)


PERFORMANCE COMPARISON OF APACHE SPARK MLLIB

A Paper Submitted to the Graduate Faculty of the North Dakota State University of Agriculture and Applied Science

By
Pallavi Sharma

In Partial Fulfillment of the Requirements for the Degree of
MASTER OF SCIENCE

Major Department: Computer Science

August 2018
Fargo, North Dakota

North Dakota State University Graduate School

Title: PERFORMANCE COMPARISON OF APACHE SPARK MLLIB
By Pallavi Sharma

The Supervisory Committee certifies that this disquisition complies with North Dakota State University's regulations and meets the accepted standards for the degree of MASTER OF SCIENCE.

SUPERVISORY COMMITTEE:
Dr. Simone Ludwig (Chair)
Dr. Kendall Nygard
Dr. Ravi Kiran Yellavajjala

Approved: 08/17/2018, Dr. Kendall Nygard, Department Chair

ABSTRACT

This study examines the performance of Apache Spark and its MLlib platform. To this end, an Apache Spark cluster computing system was set up and five supervised machine learning algorithms (Naïve-Bayes, Decision Tree, Random Forest, Support Vector Machine, and Logistic Regression) were investigated. Of the available cluster modes, the algorithms were run in two: Local mode and GPU Cluster mode. Classification accuracy, area under ROC, and area under PR were measured for each algorithm on three datasets. The study concludes that the algorithms are computed in parallel in both modes, with GPU Cluster mode completing faster than Local mode for all algorithms. The classification metrics themselves, however, did not differ between the two modes, suggesting that parallel computation plays no major role in determining them. (A minimal code sketch of this training and evaluation pipeline is given after the front-matter lists below.)

ACKNOWLEDGEMENTS

I would like to thank my advisor, Dr. Simone Ludwig, for her constant guidance throughout the project. Her timely feedback and encouraging words helped me complete the project on time. I would also like to thank my graduate committee members, Dr. Kendall Nygard and Dr. Ravi Kiran Yellavajjala, for their support with the study. A special thanks to the Computer Science department System Administrator, Guy Hokanson, for his time and patience in answering my queries. I would also like to extend my gratitude to the Computer Science IT department for assistance with technical problems. I am greatly indebted to Dr. Dayakar Naik Lavadiya for his valuable comments as the second reader of this report. Lastly, I wish to acknowledge my family and everyone who has been a part of my graduate life at North Dakota State University for their unfailing support.

DEDICATION

I would like to dedicate this project to my mother, father, and brother, who have encouraged and supported me through every decision in my life.

TABLE OF CONTENTS

ABSTRACT
ACKNOWLEDGEMENTS
DEDICATION
LIST OF TABLES
LIST OF FIGURES
LIST OF APPENDIX TABLES
LIST OF APPENDIX FIGURES
1. INTRODUCTION AND RELATED WORK
2. MACHINE LEARNING AND APACHE SPARK
  2.1. Machine Learning Algorithms
  2.2. Supervised Learning Algorithms
    2.2.1. Naïve-Bayes Classification
    2.2.2. Decision Tree
    2.2.3. Random Forest
    2.2.4. Support Vector Machine
    2.2.5. Logistic Regression
  2.3. Apache Spark
    2.3.1. Architecture
    2.3.2. Resilient Distributed Dataset (RDD)
    2.3.3. Spark Context
    2.3.4. Modes of Cluster Operation
3. METHODOLOGY
  3.1. Data Description
    3.1.1. Data Acquisition
    3.1.2. Data Cleaning
    3.1.3. Descriptive Statistics
    3.1.4. Data Conversion
  3.2. Prerequisite Installation
  3.3. Data Splitting and Training
  3.4. Performance Metrics
4. EXPERIMENTS
5. RESULT
  5.1. Descriptive Statistics
  5.2. Performance Metrics
6. CONCLUSION AND FUTURE WORK
REFERENCES
APPENDIX A
APPENDIX B

LIST OF TABLES

1. Parameters used for Naïve-Bayes Classification algorithm
2. Parameters used for Decision Tree Classification algorithm
3. Parameters used for Random Forest Classification algorithm
4. Parameters used for SVM Classification algorithm
5. Parameters used for Logistic Regression algorithm
6. Summary measures for IRIS dataset
7. Summary measures for SUSY dataset
8. Performance metric of Naïve Bayes algorithm in Local Cluster mode for different nodes for IRIS dataset
9. Performance metric of Naïve Bayes algorithm in GPU Cluster mode for different nodes for IRIS dataset
10. Time taken (in minutes) by machine learning algorithms in Local Cluster mode using different nodes for SUSY dataset
11. Time taken (in minutes) by machine learning algorithms in GPU Cluster mode using different nodes for SUSY dataset
12. Accuracy (%) of machine learning algorithms in Local Cluster mode using different nodes for SUSY dataset
13. Accuracy (%) of machine learning algorithms in GPU Cluster mode using different nodes for SUSY dataset
14. Area under ROC of machine learning algorithms in Local Cluster mode using different nodes for SUSY dataset
15. Area under ROC of machine learning algorithms in GPU Cluster mode using different nodes for SUSY dataset
16. Area under PR of machine learning algorithms in Local Cluster mode using different nodes for SUSY dataset
17. Area under PR of machine learning algorithms in GPU Cluster mode using different nodes for SUSY dataset

LIST OF FIGURES

1. Apache Spark Ecosystem
2. Relationship between driver application, Cluster Manager and executors
3. An example of libsvm format
4. Boxplots for IRIS dataset
5. QQplots for IRIS dataset
6. Boxplots for SUSY dataset
7. QQplots for SUSY dataset
8. Running time comparison of algorithms for SUSY dataset

LIST OF APPENDIX TABLES

A.1. Summary measures for HIGGS dataset
A.2. Time taken by machine learning algorithms in Local Cluster mode using different nodes for HIGGS dataset
A.3. Time taken by machine learning algorithms in GPU Cluster mode using different nodes for HIGGS dataset
A.4. Accuracy of machine learning algorithms in Local Cluster mode using different nodes for HIGGS dataset
A.5. Accuracy of machine learning algorithms in GPU Cluster mode using different nodes for HIGGS dataset
A.6. Area under ROC of machine learning algorithms in Local Cluster mode using different nodes for HIGGS dataset
A.7. Area under ROC of machine learning algorithms in GPU Cluster mode using different nodes for HIGGS dataset
A.8. Area under PR of machine learning algorithms in Local Cluster mode using different nodes for HIGGS dataset
A.9. Area under PR of machine learning algorithms in GPU Cluster mode using different nodes for HIGGS dataset
A.10. Summary measures for Hepmass dataset
A.11. Time taken by machine learning algorithms in Local Cluster mode using different nodes for Hepmass dataset
A.12. Time taken by machine learning algorithms in GPU Cluster mode using different nodes for Hepmass dataset
A.13. Accuracy of machine learning algorithms in Local Cluster mode using different nodes for Hepmass dataset
A.14. Accuracy of machine learning algorithms in GPU Cluster mode using different nodes for Hepmass dataset
A.15. Area under ROC of machine learning algorithms in Local Cluster mode using different nodes for Hepmass dataset
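
The abstract describes the core experimental loop: load a dataset, split it into training and test sets, fit one of the five MLlib classifiers, and record the running time together with accuracy, area under ROC, and area under PR. The sketch below illustrates that loop with the DataFrame-based spark.ml API in Scala; it is not the author's code. The input path (data/susy.libsvm), the choice of Logistic Regression as the stand-in classifier, and the 70/30 split are assumptions for illustration only.

import org.apache.spark.sql.SparkSession
import org.apache.spark.ml.classification.LogisticRegression
import org.apache.spark.ml.evaluation.{BinaryClassificationEvaluator, MulticlassClassificationEvaluator}

object MllibBenchmarkSketch {
  def main(args: Array[String]): Unit = {
    // "local[*]" corresponds to a Local-mode run; pointing the master at a
    // standalone cluster URL (e.g. spark://host:7077) corresponds to a cluster deployment.
    val spark = SparkSession.builder()
      .appName("MLlibPerformanceSketch")
      .master("local[*]")
      .getOrCreate()

    // Data in libsvm format, as used in the paper; the path here is hypothetical.
    val data = spark.read.format("libsvm").load("data/susy.libsvm")

    // Random split into training and test sets (assumed 70/30).
    val Array(train, test) = data.randomSplit(Array(0.7, 0.3), seed = 42L)

    // Logistic Regression stands in for any of the five classifiers studied.
    val lr = new LogisticRegression().setMaxIter(100)

    // Time the training step, since running time is one of the reported measures.
    val t0 = System.nanoTime()
    val model = lr.fit(train)
    val minutes = (System.nanoTime() - t0) / 1e9 / 60.0

    val predictions = model.transform(test)

    // The three classification metrics named in the abstract.
    val accuracy = new MulticlassClassificationEvaluator()
      .setMetricName("accuracy").evaluate(predictions)
    val auroc = new BinaryClassificationEvaluator()
      .setMetricName("areaUnderROC").evaluate(predictions)
    val aupr = new BinaryClassificationEvaluator()
      .setMetricName("areaUnderPR").evaluate(predictions)

    println(f"time=$minutes%.2f min  accuracy=$accuracy%.4f  areaUnderROC=$auroc%.4f  areaUnderPR=$aupr%.4f")
    spark.stop()
  }
}

Replacing LogisticRegression with NaiveBayes, DecisionTreeClassifier, RandomForestClassifier, or LinearSVC from the same org.apache.spark.ml.classification package would cover the other four algorithms; the evaluation code stays the same because all of these models emit the rawPrediction column the binary evaluator expects.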


