NASA Technical Reports Server (NTRS) 20080010084: Data Mining SIAM Presentation PDF

Source of Acquisition: NASA Ames Research Center

Text Mining SIAM Presentation
Recurring Anomaly Detection System (ReADS)
Ashok Srivastava, Ph.D.; Dawn McIntosh; Pat Castle; Manos Pontikakis; Vesselin Diev; Brett Zane-Ulman; Eugene Turkov; Ram Akella, Ph.D.; Zuobing Xu; Sakthi Preethi Kumaresan
Advance Engineering Network Team (NASA Ames Research Center)

Ideas
- Needle-in-a-haystack problem
- Sampling the data does not work (a sample may not capture the entire needle)

Outline
- Problem
- Approach: supervised, unsupervised, and semi-supervised methods
- New similarity measures
- Kernel methods: PCA and MDS with kernels

Problem Introduction
- NASA programs have large numbers (and types) of problem reports:
  * ISS PRACA: 3,000+ records, 1-4 pages each
  * ISS SCR: 28,000+ records, 1-4 pages each
  * Shuttle CARs: 7,000+ records, 1-4 pages each
  * ASRS: 27,000+ records, 1 paragraph each
- These free-text reports are written by many different people, so the emphasis and wording vary considerably.
- With so much data to sift through, analysts (subject experts) need help identifying possible safety issues or concerns and confirming that they have not missed important problems.
- Unsupervised clustering is the initial step to accomplish this; we think we can go much further and, specifically, identify possible recurring anomalies.
- Recurring anomalies may be indicators of larger systemic problems.

Text Mining Solution
- The Recurring Anomaly Detection System (ReADS) is an integrated system to analyze text reports, such as anomaly and maintenance records.
- Text clustering algorithms group large quantities of reports and documents.
- Reduces human error and fatigue.
- Automates the discovery of unknown recurring anomalies.
- Identifies interconnected reports.
- Provides a visualization of the clusters and recurring anomalies.

Recurring Anomaly "Fingerprints"
- Recurrent failures
- Problems that cross traditional system boundaries, so failure effects are not fully recognized
- Evidence of unconfirmed or random failures
- Problems that have been accepted by repeated waivers
- Discrepant conditions repeatedly accepted by routine analysis
- Problems that are the focus of alternative opinions within the engineering community

ReADS Text Mining Algorithms
- Unsupervised clustering: spherical k-means → modified von Mises-Fisher (sketched below).
- Recurring anomaly detection:
  1. Identify reports that mention other reports as a recurring anomaly.
  2. Detect recurring anomalies:
     a. find the similarity between documents using the cosine similarity measure;
     b. then, according to the similarity measure, run the hierarchical clustering algorithm to cluster the recurring anomalies.

Similarity between Reports
- Cosine similarity measure: calculate the inner product of the normalized term frequency vectors,
  $\mathrm{sim}(d_i, d_j) = \frac{d_i \cdot d_j}{\|d_i\| \, \|d_j\|} = \cos\theta_{ij}$

Hierarchical Clustering of Recurring Anomalies
- After calculating the distance between each pair of documents, the algorithm applies single linkage, i.e., nearest neighbor, to create a hierarchical tree representing connections between documents.
- It also generates an "inconsistency coefficient," a measure of the relative consistency of each link in the tree.
- The hierarchical tree is partitioned into clusters by setting a threshold on the inconsistency coefficient.
- A high inconsistency coefficient implies that the reports could be very different and still be sorted into the same cluster.
- Currently the inconsistency coefficient threshold is set very low, which returns many smaller clusters of very similar reports.
- Clusters of single documents are excluded from the recurring anomalies.
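As a minimal sketch of this detection step (not the ReADS implementation, which the slides do not show), assume X is a documents-by-terms frequency matrix as a NumPy array; SciPy's hierarchical clustering happens to expose both single linkage and the inconsistency-coefficient criterion described above, and the threshold value here is illustrative:

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

def cluster_recurring_anomalies(X, threshold=0.5):
    """Group reports into candidate recurring-anomaly clusters."""
    # Normalize term-frequency vectors so the inner product equals cosine similarity.
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    # Pairwise cosine distance (1 - cosine similarity) between all reports.
    dist = pdist(Xn, metric="cosine")
    # Single linkage (nearest neighbor) builds the hierarchical tree.
    tree = linkage(dist, method="single")
    # Partition the tree by thresholding the inconsistency coefficient of each link.
    return fcluster(tree, t=threshold, criterion="inconsistent")

Clusters that end up containing a single document would then be dropped, per the slide above, leaving multi-report clusters as candidate recurring anomalies.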
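The unsupervised clustering step above names spherical k-means, which is essentially k-means on unit-length term-frequency vectors with cosine similarity as the assignment rule. A bare-bones version, again a sketch rather than the ReADS code, and omitting the modified von Mises-Fisher refinement the slide mentions, could look like:

import numpy as np

def spherical_kmeans(X, k, n_iter=50, seed=0):
    # Unit-normalize so cluster assignment reduces to cosine similarity (dot product).
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    rng = np.random.default_rng(seed)
    # Initialize centers from k randomly chosen documents.
    centers = Xn[rng.choice(len(Xn), size=k, replace=False)]
    for _ in range(n_iter):
        # Assign each document to its most similar center.
        labels = (Xn @ centers.T).argmax(axis=1)
        # Move each center to the normalized mean direction of its members.
        for j in range(k):
            members = Xn[labels == j]
            if len(members):
                mean = members.sum(axis=0)
                centers[j] = mean / np.linalg.norm(mean)
    return labels, centers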
ReADS System & Intro
- In an attempt to quantify any improvement Natural Language Processing (NLP) and text normalization have on text classification using Support Vector Machines (SVM) and Naive Bayes, we did a direct comparison of the classification rates of: (1) documents processed using an NLP tool and a text normalization tool, PLADS, and (2) the same documents with no preprocessing.
- Specifically, we measured the difference in Precision, Recall, and F-Measure, applied to 60 anomaly classifications (a sketch of this comparison follows below).
- This is not meant to be an optimum classifier technique: Precision and Recall results for the different preprocessing methods were compared, and no work was done to improve either.

Dataset used: Aviation Safety Reporting System (ASRS)
- ASRS is classified by anomalies. Reports are classified into over 100 anomalies, and each report may be classified in multiple anomaly classes: 30% are in only one anomaly class, and 50% are in 3 anomaly classes.
- Documents are short, approximately 6 sentences; 27,596 documents in total.
- Training dataset: 20,000 docs dedicated to training, 4,000 selected.
- Test dataset: 7,000 docs dedicated to testing, 2,000 selected.
- Tools used: MATLAB for preprocessing; Weka implemented for SVM and Naive Bayes classification.

Sample PLADS Term Reduction
[Slide shows a raw upper-case ASRS report, describing a crew told by LAX tower to go around just prior to touchdown, who misunderstood the tower's radio call and landed, processed through the PLADS steps: expand acronyms, simplify punctuation and stemming, remove non-informative terms, phrasing. The output retains only the normalized, informative terms of the report.]

Raw Text & PLADS Comparison
- In order to classify the documents, they are first formatted into a document-term frequency matrix; the cells of the matrix are the frequency counts of the terms that appear in each document.
- PLADS reduced the total number of terms in 27,000 documents from 44,940 to 31,701.
- PLADS reduced classification computation time by 0%-10%.
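PLADS itself is a NASA tool not shown in the slides; as a rough illustration of that kind of preprocessing followed by the document-term frequency matrix just described, here is a sketch. The acronym table and stop-term list are toy stand-ins (not PLADS's actual resources), stemming is omitted for brevity, and the sample report line is paraphrased from the slide above:

import re
from sklearn.feature_extraction.text import CountVectorizer

# Toy stand-ins for PLADS's acronym dictionary and non-informative term list.
ACRONYMS = {"twr": "tower", "rwy": "runway", "acft": "aircraft", "apch": "approach"}
STOP_TERMS = {"the", "of", "to", "and", "a", "in", "we", "us"}

def normalize(report):
    text = report.lower()
    # Expand domain acronyms before tokenizing.
    for abbr, full in ACRONYMS.items():
        text = re.sub(rf"\b{abbr}\b", full, text)
    # Simplify punctuation to whitespace.
    text = re.sub(r"[^a-z0-9\s]", " ", text)
    # Remove non-informative terms.
    return " ".join(t for t in text.split() if t not in STOP_TERMS)

reports = ["JUST PRIOR TO TOUCHDOWN, LAX TWR TOLD US TO GO AROUND."]
# Rows are documents, columns are terms, cells are term-frequency counts.
X = CountVectorizer().fit_transform(normalize(r) for r in reports)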
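The comparison methodology itself, training SVM and Naive Bayes on raw versus PLADS-processed matrices and measuring precision, recall, and F-measure, can be sketched as below. The slides used Weka; scikit-learn stands in here, X_raw/X_plads and the labels y are assumed to exist, and the sketch treats classification as single-label for simplicity even though ASRS reports can carry multiple anomaly labels (which would call for a one-vs-rest setup):

from sklearn.svm import LinearSVC
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_recall_fscore_support

def compare(X, y):
    """Fit both classifiers and report macro-averaged precision/recall/F-measure."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    results = {}
    for name, clf in (("SVM", LinearSVC()), ("NaiveBayes", MultinomialNB())):
        pred = clf.fit(X_tr, y_tr).predict(X_te)
        p, r, f, _ = precision_recall_fscore_support(y_te, pred, average="macro")
        results[name] = {"precision": p, "recall": r, "f_measure": f}
    return results

# raw = compare(X_raw, y); plads = compare(X_plads, y)
# The differences between the two runs give the "difference charts" that follow.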
Comparison of Raw Text vs. PLADS using SVM
[Difference chart: SVM, raw text vs. PLADS]

Comparison of Raw Text vs. PLADS using Naive Bayes
[Difference chart: Naive Bayes]
- All terms used; no additional term reduction applied.
- PLADS improves Naive Bayes precision 1% on average.
- PLADS improves Naive Bayes recall on average.

Comparison of Raw Text vs. PLADS, with Term Selection
[Difference chart: SVM w/ term selection]
- 1,000 terms selected using Information Gain.
- PLADS improves precision 2% on average.
- PLADS improves recall 3% on average.

Comparison of Raw Text vs. NLP with Term Selection
[Difference chart: SVM w/ NLP]
- 500 terms selected using Information Gain.
- NLP improves F-measure 3% on average.
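For the term-selection runs, information gain of a term with respect to the class label is its mutual information, so, as a sketch continuing from the comparison code above (X and y assumed from there, and scikit-learn again standing in for the tools the slides used):

from sklearn.feature_selection import SelectKBest, mutual_info_classif

# Keep the 1,000 highest-information-gain terms (500 for the NLP run).
selector = SelectKBest(mutual_info_classif, k=1000)
X_top = selector.fit_transform(X, y)

Classification would then run on X_top instead of the full document-term matrix.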
