
Web Crawling and Data Mining with Apache Nutch PDF

136 pages · 2013 · 3.19 MB · English

Preview: Web Crawling and Data Mining with Apache Nutch

Web Crawling and Data Mining with Apache Nutch

Perform web crawling and apply data mining in your application

Dr. Zakir Laliwala
Abdulbasit Shaikh

BIRMINGHAM - MUMBAI

Web Crawling and Data Mining with Apache Nutch

Copyright © 2013 Packt Publishing

All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.

Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the authors, nor Packt Publishing, nor its dealers and distributors will be held liable for any damages caused or alleged to be caused directly or indirectly by this book.

Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.

First published: December 2013
Production Reference: 1171213

Published by Packt Publishing Ltd.
Livery Place
35 Livery Street
Birmingham B3 2PB, UK.

ISBN 978-1-78328-685-0

www.packtpub.com

Cover Image by Jarek Blaminsky ([email protected])

Credits

Authors: Dr. Zakir Laliwala, Abdulbasit Shaikh
Reviewers: Mark Kerzner, Shriram Sridharan
Acquisition Editors: Neha Nagwekar, Vinay V. Argekar
Commissioning Editor: Deepika Singh
Technical Editors: Vrinda Nitesh Bhosale, Anita Nayak, Harshad Vairat
Copy Editors: Roshni Banerjee, Mradula Hegde, Sayanee Mukherjee, Deepa Nambiar
Project Coordinator: Ankita Goenka
Proofreaders: Ameesha Green, Bernadette Watkins
Indexer: Mariammal Chettiyar
Graphics: Disha Haria
Production Coordinator: Conidon Miranda
Cover Work: Conidon Miranda

About the Authors

Dr. Zakir Laliwala is an entrepreneur, an open source specialist, and a hands-on CTO at Attune Infocom. Attune Infocom provides enterprise open source solutions and services for SOA, BPM, ESB, Portal, cloud computing, and ECM. At Attune Infocom, he is responsible for product development and the delivery of solutions and services. He explores new enterprise open source technologies and defines architecture, roadmaps, and best practices. He has provided consultations and training to corporations around the world on various open source technologies such as Mule ESB, Activiti BPM, JBoss jBPM and Drools, Liferay Portal, Alfresco ECM, JBoss SOA, and cloud computing.

He received a Ph.D. in Information and Communication Technology from the Dhirubhai Ambani Institute of Information and Communication Technology (DA-IICT). He was an adjunct faculty member at DA-IICT, and he taught Master's degree students at CEPT. He has published many research papers on web services, SOA, grid computing, and the semantic web in IEEE, and has participated in ACM International Conferences. He serves as a reviewer at various international conferences and journals. He has also published book chapters and written books on open source technologies. He was a co-author of the books Mule ESB Cookbook and Activiti5 Business Process Management Beginner's Guide, Packt Publishing.

Abdulbasit Shaikh has more than two years of experience in the IT industry. He completed his Master's degree from the Dhirubhai Ambani Institute of Information and Communication Technology (DA-IICT).
He has extensive experience in open source technologies and has worked on a number of them, such as Apache Hadoop, Apache Solr, Apache ZooKeeper, Apache Mahout, Apache Nutch, and Liferay. He has provided training on Apache Nutch, Apache Hadoop, Apache Mahout, and AWS architecture, and is currently working on OpenStack. He has also delivered projects and training on open source technologies. He has very good knowledge of cloud platforms such as AWS and Microsoft Azure, and has successfully delivered many cloud computing projects. He is a very enthusiastic and active person when working on or delivering a project. Currently, he is working as a Java developer at Attune Infocom Pvt. Ltd. He is totally focused on open source technologies, and he is very much interested in sharing his knowledge with the open source community.

About the Reviewers

Mark Kerzner holds degrees in Law, Mathematics, and Computer Science. He has been designing software for many years and Hadoop-based systems since 2008. He is the President of SHMsoft, a provider of Hadoop applications for various verticals. He is a co-founder of the Hadoop Illuminated training and consulting firm, and the co-author of the open source Hadoop Illuminated book. He has authored and co-authored a number of books and patents.

I would like to acknowledge the help of my colleagues, in particular Sujee Maniyam, and last but not least, my multitalented family.

Shriram Sridharan is a student at the University of Wisconsin-Madison, pursuing his Master's degree in Computer Science. He is currently working in Prof. Jignesh Patel's research group. His current interests lie in the areas of databases and distributed systems. He received his Bachelor's degree from the College of Engineering Guindy, Anna University, Chennai, and has two years of work experience. You can contact him at [email protected].

www.PacktPub.com

Support files, eBooks, discount offers, and more

You might want to visit www.PacktPub.com for support files and downloads related to your book.

Did you know that Packt offers eBook versions of every book published, with PDF and ePub files available? You can upgrade to the eBook version at www.PacktPub.com and, as a print book customer, you are entitled to a discount on the eBook copy. Get in touch with us at [email protected] for more details.

At www.PacktPub.com, you can also read a collection of free technical articles, sign up for a range of free newsletters, and receive exclusive discounts and offers on Packt books and eBooks.

http://PacktLib.PacktPub.com

Do you need instant solutions to your IT questions? PacktLib is Packt's online digital book library. Here, you can access, read, and search across Packt's entire library of books.

Why Subscribe?

• Fully searchable across every book published by Packt
• Copy and paste, print, and bookmark content
• On demand and accessible via web browser

Free Access for Packt account holders

If you have an account with Packt at www.PacktPub.com, you can use this to access PacktLib today and view nine entirely free books. Simply use your login credentials for immediate access.
Table of Contents

Preface 1
Chapter 1: Getting Started with Apache Nutch 7
  Introduction to Apache Nutch 8
  Installing and configuring Apache Nutch 8
  Installation dependencies 8
  Verifying your Apache Nutch installation 13
  Crawling your first website 14
  Installing Apache Solr 15
  Integration of Solr with Nutch 17
  Crawling your website using the crawl script 17
  Crawling the Web, the CrawlDb, and URL filters 19
  InjectorJob 20
  GeneratorJob 21
  FetcherJob 21
  ParserJob 21
  DbUpdaterJob 21
  Invertlinks 22
  Indexing with Apache Solr 22
  Parsing and parse filters 22
  Webgraph 23
  Loops 24
  LinkRank 24
  ScoreUpdater 25
  A scoring example 25
  The Apache Nutch plugin 27
  The Apache Nutch plugin example 27
  Modifying plugin.xml 28
  Describing dependencies with the ivy module 29
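For orientation, the crawl workflow outlined under Chapter 1 (inject, generate, fetch, parse, update the CrawlDb, invert links, and index into Solr) maps onto a sequence of bin/nutch commands. The sketch below follows the classic Nutch 1.x command-line style; the exact sub-commands, arguments, and paths (crawl/crawldb, crawl/segments, the Solr URL) are illustrative assumptions and differ between Nutch 1.x and 2.x, so treat this as a rough guide rather than the book's exact steps.

    # seed the CrawlDb with the URLs listed in the urls/ directory
    bin/nutch inject crawl/crawldb urls
    # create a new segment of URLs due for fetching
    bin/nutch generate crawl/crawldb crawl/segments
    # pick the segment that was just generated
    s1=$(ls -d crawl/segments/2* | tail -1)
    # fetch the pages in that segment
    bin/nutch fetch $s1
    # parse the fetched content
    bin/nutch parse $s1
    # fold the parse results back into the CrawlDb
    bin/nutch updatedb crawl/crawldb $s1
    # build the LinkDb (inverted link structure) from all segments
    bin/nutch invertlinks crawl/linkdb -dir crawl/segments
    # push the crawled, parsed pages into a running Solr instance
    bin/nutch solrindex http://localhost:8983/solr/ crawl/crawldb -linkdb crawl/linkdb crawl/segments/*

Chapter 1's crawl script bundles these individual steps into a single loop, which is why the table of contents lists both the script and the per-job entries (InjectorJob, GeneratorJob, FetcherJob, and so on).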

Description:
Perform web crawling and apply data mining in your application.

Overview:
• Learn to run your application on single as well as multiple machines
• Customize search in your application as per your requirements
• Acquaint yourself with storing crawled webpages in a database and using them according to your need

