VIRGINIA:

IN THE CIRCUIT COURT FOR LOUDOUN COUNTY

GLOBAL AEROSPACE INC., et al.,
          Plaintiff,
v.
LANDOW AVIATION, L.P. d/b/a Dulles Jet Center, et al.,
          Defendants.

CONSOLIDATED CASE NO. CL 61040

CASES AFFECTED:
Global Aerospace Inc., et al. v. Landow Aviation, L.P. d/b/a Dulles Jet Center, et al. (Case No. CL 61040)
BAE Systems Survivability Systems, LLC v. Landow Aviation, L.P., et al. (Case No. CL 61991)
La Réunion Aérienne v. Landow Aviation, L.P. d/b/a Dulles Jet Center, et al. (Case No. CL 64475)
United States Aviation Underwriters, Inc. v. Landow Aviation, L.P., et al. (Case No. CL 63795)
Chartis Aerospace Adjustment Services, Inc. v. Landow Builders Inc., et al. (Case No. CL 63190)
Factory Mutual Insurance Company v. Landow Builders Inc., et al. (Case No. CL 63575)
The Travelers Indemnity Company, as subrogee of Landow Aviation Limited Partnership v. Bascon, Inc., et al. (Case No. CL 61909)
Global Aerospace, Inc. v. J. H. Brandt and Associates, Inc., et al. (Case No. CL 61712)
M.I.C. Industries, Inc. v. Landow Aviation, L.P., et al. (Case No. 71633)

MEMORANDUM IN SUPPORT OF MOTION FOR PROTECTIVE ORDER APPROVING THE USE OF PREDICTIVE CODING

I. INTRODUCTION

Landow Aviation Limited Partnership, Landow Aviation I, Inc., and Landow & Company Builders, Inc. (collectively “Landow”) have moved the Court for a protective order because counsel for a number of parties have objected to Landow’s proposed use of “predictive coding” to retrieve potentially relevant documents from a massive collection of electronically stored information (“ESI”). The ESI retrieved by predictive coding would be reviewed by lawyers or paralegals and, if responsive and not privileged or otherwise immune from discovery, produced to the parties. The use of predictive coding is a reasonable means of locating and retrieving documents that may be responsive to requests for production and, therefore, satisfies the Rules of the Supreme Court of Virginia and should be approved in this case to avoid the undue burden and expense associated with the alternative means of culling the Landow ESI collection.

Landow has an estimated 250 gigabytes (GB) of reviewable ESI from its computer systems, which could easily equate to more than two million documents. At average costs, review rates, and levels of effectiveness, a linear first-pass review would take 20,000 man-hours, cost two million dollars, and locate only sixty percent of the potentially relevant documents. As one alternative, keyword searching might be more cost-effective but likely would retrieve only twenty percent of the potentially relevant documents and would require Landow to incur substantial unnecessary costs for document review. Predictive coding, on the other hand, is capable of locating upwards of seventy-five percent of the potentially relevant documents and can be implemented at a fraction of the cost, and in a fraction of the time, of linear review or keyword searching. Further, by including a statistically sound validation protocol, Landow’s counsel will thoroughly discharge the “reasonable inquiry” obligations of Rule 4:1(g).
Therefore, this Honorable Court should enter an Order approving the use of predictive coding as set forth herein, thereby allowing Landow to locate more potentially relevant documents while avoiding the undue burden and expense associated with other means of culling the Landow ESI.[1]

[1] Given their opposition to the implementation of a more economical and effective means of culling the ESI, Landow respectfully requests that, if this Honorable Court is not inclined to approve the use of predictive coding, it shift any incremental costs associated with a more expensive alternative to the opposing parties.

II. BACKGROUND

The Court is well aware of the genesis of this litigation, which stems from the collapse of three hangars at the Dulles Jet Center (“DJC”) during a major snow storm on February 6, 2010. The parties have exchanged substantial discovery requests addressing both liability and damages. The liability discovery is directed largely at responsibility for the collapse and, more specifically, the existence of any design or construction deficiencies that may have contributed to the failure. Pursuant to the Rules of the Supreme Court of Virginia, the discovery includes requests for ESI.

A. The Landow ESI Collection

Landow took several steps to promptly collect and preserve ESI, resulting in a collection of more than eight terabytes (TB) (8,000 GB) of forensic electronic images within a few months of the collapse.[2] Landow then retained JurInnov, Ltd. to consolidate the images into a more manageable collection of reviewable ESI. JurInnov first conformed and exported all of the email files. Then JurInnov removed all of the duplicate files and the common application/system files. Finally, JurInnov filtered the collection to segregate and eliminate any non-data file types from commonly recognized data file types (such as Microsoft Word files, which are doc or docx file types). This processing step reduced the ESI images to a collection of roughly 200 GB of reviewable data – 128 GB of email files and 71.5 GB of native data files. In order to collect and preserve any subsequently generated ESI, Landow engaged JurInnov to perform another data collection from both locations in April of 2011. Although this new ESI has not been fully processed, JurInnov estimates that it will generate an additional 32.5 GB of email files and 18 GB of native files.

[2] Landow maintained computer systems that may contain ESI relating to this litigation at two locations – the corporate offices in Bethesda, Maryland, and the DJC location at the Dulles International Airport. Shortly after the collapse, Landow began to collect ESI. On February 17, 2010, Landow collected Norton Ghost backups from most of the operating computers at the Bethesda office. This resulted in the collection of 3.5 TB of data. On March 10, 2010, Simone Forensics Consulting, LLC collected forensic images of all of the personal computers at the DJC location, as well as a forensic image of the data located on the DJC server, resulting in an additional 1.05 TB of data. Then, on April 22 and 27, 2010, Simone returned to the Bethesda office to collect, as a complement to the Ghost backups, forensic images of the hard drives of all the personal computers, as well as the data located on the two operating servers. This effort resulted in the collection of an additional 3.6 TB of data, bringing the entire image collection to just over eight terabytes of data.
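For illustration only, the de-duplication and file-type filtering steps described above can be sketched in Python as follows. The extension list, directory layout, and hash-based de-duplication are assumptions made for the sketch; they do not describe JurInnov’s actual tools or process.

    # Sketch: cull a raw ESI collection by dropping non-data file types and
    # exact duplicates (hash-based de-duplication). Illustrative only; the
    # extension list and paths are assumptions, not JurInnov's actual process.
    import hashlib
    from pathlib import Path

    KEEP_EXTENSIONS = {".doc", ".docx", ".xls", ".xlsx", ".pdf", ".msg", ".txt"}  # assumed list

    def cull(collection_root: str) -> list[Path]:
        seen_hashes = set()
        reviewable = []
        for path in Path(collection_root).rglob("*"):
            if not path.is_file() or path.suffix.lower() not in KEEP_EXTENSIONS:
                continue  # drop application/system and other non-data file types
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            if digest in seen_hashes:
                continue  # drop exact duplicate files
            seen_hashes.add(digest)
            reviewable.append(path)
        return reviewable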
Based on values typically seen in electronic discovery, this estimated 250 gigabyte collection of reviewable ESI could easily comprise more than two million documents, covering every aspect of the Landow operations for a period of several years. In order to assess the extent to which the documents in the ESI collection might pertain to this case, Landow conducted a cursory preliminary review of the data, from which it became apparent that a significant portion of the Landow ESI is wholly unrelated to the DJC project. JurInnov loaded the ESI from three Landow personnel who were involved in the DJC project into an e-Discovery tool for review and analysis of the documents. The software automatically separated the documents into clusters, each cluster representing a distinct concept common among its constituent documents. It was readily apparent that the majority of the clusters reflected concepts entirely unrelated to the DJC project, such as email virus warnings and disclaimers. Even where the broad concepts might be pertinent to the DJC, it was obvious that only a fraction of the documents in a cluster actually related to the construction, operation, or collapse of the DJC. The preliminary review thus made apparent that it would be necessary to cull the collected ESI in order to reduce the burden of document review and improve accuracy by eliminating documents having nothing to do with the DJC, generating a much smaller set of documents potentially relevant to the case that could then be reviewed for discovery purposes.
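The concept clustering used in the preliminary review can be illustrated, in rough outline, with standard text-clustering techniques. The record does not identify the e-Discovery tool or its settings; the scikit-learn calls below are assumptions used only to show the general idea of grouping documents by shared vocabulary.

    # Sketch: group documents into concept clusters based on shared vocabulary.
    # Illustrative only; not the actual e-Discovery tool used by JurInnov.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.cluster import KMeans

    def cluster_documents(texts, n_clusters=50):
        vectors = TfidfVectorizer(stop_words="english").fit_transform(texts)
        labels = KMeans(n_clusters=n_clusters, random_state=0).fit_predict(vectors)
        return labels  # labels[i] is the cluster assigned to texts[i]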
B. Alternatives For Culling The Landow ESI Collection

There are generally three ways to cull an ESI collection to locate potentially relevant documents, although they are not equally effective. Historically, documents were reviewed one-by-one by a team of human reviewers. This first-pass linear review, however, is very time-consuming and expensive, and it is not particularly effective. More recently, ESI collections have been culled by searching for keywords designed to locate documents containing select words expected to be pertinent to the litigation. Keyword searching typically is less expensive than linear review, but it is generally not very effective at finding relevant documents. Today, the most effective and economical means of reviewing large ESI collections is a technology known as predictive coding.[3]

[3] Predictive coding uses direct input from an attorney reviewing a small subset of the collection to generate a mathematical model of relevant documents. The model is then used to identify documents in the balance of the collection that are relevant and segregate them from documents that are not relevant. Predictive coding can be orders of magnitude faster (and less expensive) than linear review, and it is much more effective than both linear review and keyword searching at locating the relevant documents and eliminating the documents that are not relevant.

Given the size of the Landow ESI collection, first-pass linear review would be extraordinarily expensive and time-consuming. With more than 2 million documents to review, albeit for potential relevance alone, it would take reviewers more than 20,000 hours to review each document individually – that is ten (10) man-years of billable time. Even at modest contract review rates, a linear review of this magnitude would almost certainly cost more than two million dollars just to identify potentially relevant documents to be reviewed by Landow during discovery.

Beyond the sheer magnitude of such a project, the inherent problem with linear review is that it is neither consistent nor particularly effective at identifying and segregating relevant documents from those that are not relevant. See, e.g., Maura R. Grossman & Gordon V. Cormack, Technology-Assisted Review in E-Discovery Can Be More Effective and More Efficient Than Exhaustive Manual Review, XVII Rich. J.L. & Tech. 11 (2011).[4] There are two exemplary studies evaluating the consistency among human reviewers, one by Ellen Voorhees and another by the team of Dr. Herbert Roitblat, Anne Kershaw, and Patrick Oot. Id. at 10-13. Voorhees asked three teams of professional information retrieval experts to identify relevant documents in response to several information requests. Id. at 10-11. Voorhees found that even experienced information retrieval experts agreed, at most, only 49.4% of the time. Id. Roitblat, et al., similarly asked two teams of lawyers to identify relevant documents in response to a Department of Justice Information Request concerning the Verizon acquisition of MCI. Id. at 13. In the Roitblat study, the two teams of lawyers agreed on only 28.1% of the responsive documents. Id. Thus, “[i]t is well established that human assessors will disagree in a substantial number of cases as to whether a document is relevant, regardless of the information need or the assessors’ expertise and diligence.” Id. at 9.

[4] A true and correct copy of the Grossman-Cormack article appearing in the Richmond Journal of Law and Technology (hereafter “Technology-Assisted Review”) is attached hereto as Exhibit A.

Consistency aside, linear review simply is not very effective. There are two measures of the effectiveness of information retrieval – recall and precision. Recall is the percentage of the relevant documents in the collection that are found by the reviewer. Technology-Assisted Review, p. 8. Thus, a recall of 100% means that a reviewer retrieved every relevant document from the collection. Precision, on the other hand, is the percentage of documents pulled by the reviewer that are actually relevant. Id. The balance of the documents selected by the reviewer would be irrelevant. Therefore, a precision of 70% means that 30% of the documents pulled by the reviewer would be irrelevant. Across both studies, Voorhees and Roitblat, et al., determined that reviewer recall ranged from 52.8% to 83.6%, and precision ranged from 55.5% to 81.9%. Id. at 15-17. Grossman and Cormack analyzed data from the Text Retrieval Conference (TREC) and found that recall ranged from 25% to 80% (59.3% on average), while precision varied from an abysmal 5% to an unusually high 89% (31.7% on average). Technology-Assisted Review, p. 37, Table 7. In a discovery context, this means that linear review misses, on average, 40% of the relevant documents, and that the documents pulled by human reviewers are nearly 70% irrelevant.
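A short worked example may make the two measures concrete. The numbers below are hypothetical round figures, not figures from this case: assume a collection containing 1,000 truly relevant documents, from which a reviewer pulls 700 documents, 490 of them actually relevant.

    # Worked example of recall and precision using assumed round numbers.
    relevant_in_collection = 1000   # assumed
    pulled = 700                    # assumed
    pulled_and_relevant = 490       # assumed

    recall = pulled_and_relevant / relevant_in_collection   # 0.49 -> 49% of the relevant documents were found
    precision = pulled_and_relevant / pulled                # 0.70 -> 30% of the pulled documents are irrelevant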
In general, keyword searching is a much less expensive means of culling a collection set. There will be technical costs associated with preparing the data and the indices necessary to conduct an effective keyword search, and costs will escalate as the complexity of the search increases. In addition, there will be legal costs associated with negotiating the appropriate keyword list to use, which often is not a simple, straightforward exercise. And legal costs will similarly increase if iterative keyword searches are used to refine the selection of relevant documents from an ESI collection, as additional document review and negotiation will be necessary.

Keyword searching, however, is simply not an effective means of separating the wheat from the chaff in an effort to locate relevant documents. The preeminent discussion of the effectiveness of keyword searching was a study by Blair & Maron in 1985. Technology-Assisted Review, pp. 18-19.[5] With no constraints on the ability to conduct keyword searches, the average recall was only twenty percent (20%), which means that the keyword searches missed 80% of the relevant documents. Technology-Assisted Review, p. 18. Although the precision was reasonably high (at 79%), that is not always the case. Indeed, in one case before the United States District Court for the Western District of Pennsylvania, only seven percent (7%) of the documents found using keyword searching were ultimately relevant. Hodczak v. Latrobe Specialty Steel Co., 761 F. Supp. 2d 261, 279 (W.D. Pa. 2010). In other words, ninety-three percent (93%) of the documents identified using keyword searches were irrelevant to the litigation.

[5] Blair & Maron asked skilled reviewers to compose keyword searches to retrieve at least 75% of the relevant documents in response to a number of document requests derived from a BART train accident in the San Francisco Bay Area. Technology-Assisted Review, pp. 18-19. See also The Sedona Conference Best Practices Commentary on the Use of Search & Information Retrieval Methods in E-Discovery, 8 The Sedona Conf. Journal, Fall 2007, at p. 189. A true and correct copy of the Sedona Conference materials (hereafter “The Sedona Conference Best Practices Commentary”) is attached hereto as Exhibit B.

The preliminary review of the Landow ESI suggests that keyword searching would be similarly ineffective in this case. In the review discussed above, the documents were summarily classified as either relevant or irrelevant to get a sense of the likely distribution of electronic documents. JurInnov then compiled a list of dominant keywords contained within the documents, indicating the number of hits in both the relevant and irrelevant sets. In evaluating proposed keywords,[6] Landow determined that many of the keywords are likely to generate more documents that are not relevant than documents that are. For example, the terms Dulles Jet, Sergio, Plaza, and Curro (terms requested by non-Landow counsel) were not found to be predominant words in the relevant document set but were found in the irrelevant set. Other terms showed a similar pattern, with significant percentages of documents coming from the irrelevant set: Jet Center (64%), hangar (33%), Mickey (71%), column (53%), and inspection (85%). This was by no means an exhaustive review. Rather, it illustrates two problems that would be encountered if keyword searches were used to cull the Landow ESI: (1) a significant number of irrelevant documents would be included in the result; and (2) it would be difficult to determine from the result why certain words were, or were not, contained in either the relevant or irrelevant set, and the implications of that distribution.

[6] The most recent communication on proposed keywords was a letter from Jonathan Berman dated March 1, 2012, a true and correct copy of which is attached hereto as Exhibit C.
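The keyword evaluation described above amounts to counting, for each proposed term, how its hits are split between the documents classified as relevant and those classified as irrelevant in the preliminary review. The function below is a sketch of that tabulation; the matching rule and inputs are assumptions for illustration, not the actual JurInnov analysis.

    # Sketch: for each proposed search term, count the relevant and irrelevant
    # documents containing it and report the share of its hits that are irrelevant.
    def keyword_hit_distribution(terms, relevant_docs, irrelevant_docs):
        report = {}
        for term in terms:
            rel_hits = sum(term.lower() in doc.lower() for doc in relevant_docs)
            irr_hits = sum(term.lower() in doc.lower() for doc in irrelevant_docs)
            total = rel_hits + irr_hits
            pct_irrelevant = 100 * irr_hits / total if total else 0.0
            report[term] = (rel_hits, irr_hits, pct_irrelevant)
        return report  # e.g., a term with 64% of its hits falling in the irrelevant set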
C. Predictive Coding

Predictive coding is an economical, efficient, and effective alternative to both linear review and keyword searching. Predictive coding will retrieve more of the relevant, and fewer of the irrelevant, documents than the other two culling methods, and it will do so more quickly and at a lower cost. The technology underlying predictive coding has been in existence for many years. For example, some predictive coding technologies employ Bayesian probability systems that “set[ ] up a formula that places a value on words, their interrelationships, proximity and frequency.” The Sedona Conference Best Practices Commentary, p. 218. These Bayesian systems are based on a theorem developed by a British mathematician in the eighteenth century. Id. Basic predictive coding technology is so prevalent that virtually everyone uses it today. It is the same technology that underlies spam filters, which are used to prevent unwanted emails from flooding our inboxes. Technology-Assisted Review, p. 22.

Although technologies differ somewhat, the general operation of predictive coding tools is conceptually similar. First, ESI is loaded into the tool, just as it would be loaded into a review tool for either linear review or keyword searching. Then, an experienced reviewing attorney “trains” the tool to recognize relevant documents and differentiate them from documents that are not relevant.[7] Once the tool has stabilized, training ceases and the tool applies the developed mathematical model to segregate or prioritize (depending on the particular tool) relevant and irrelevant documents.

Predictive coding tools can leverage an attorney’s review of a fraction of the ESI documents into the categorization of millions of documents as either relevant or irrelevant. Because a reviewing attorney has to review and code only a fraction of the collection set, the cost associated with predictive coding can be orders of magnitude less than the cost of linear review, and it will take far less time to identify the relevant documents from among the collected ESI. Indeed, if the predictive coding tool is stabilized by coding 3,000 or fewer documents, it would take less than two weeks to cull the relevant documents from a multi-million document set at roughly 1/100th of the cost of linear review. Similarly, predictive coding can be accomplished in less time and at less expense than an involved iterative keyword search program that requires an attorney to review each set of documents to derive improvements to the keyword search criteria.
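The general workflow described above (load the ESI, train on an attorney’s coding decisions, then score the balance of the collection) can be sketched with a simple Bayesian-style text classifier. Commercial predictive coding tools differ in their models and workflows; the scikit-learn classifier below is only an illustration of the concept, not the tool Landow proposes to use.

    # Sketch: train a Bayesian-style text classifier on a small, attorney-coded
    # subset and score the remaining documents for relevance. Illustrative only.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.naive_bayes import MultinomialNB

    def predict_relevance(training_texts, training_labels, remaining_texts):
        # training_labels: 1 = coded relevant by the attorney, 0 = coded not relevant
        vectorizer = TfidfVectorizer(stop_words="english")
        model = MultinomialNB()
        model.fit(vectorizer.fit_transform(training_texts), training_labels)
        scores = model.predict_proba(vectorizer.transform(remaining_texts))[:, 1]
        return scores  # estimated probability of relevance for each unreviewed document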
[7] There are inherent differences in the manner of training predictive coding tools. Certain tools present the attorney with small random or nearly random sets of training documents from the collection set. Other tools depend on the ability to locate and introduce a “seed set” of clearly relevant documents to prompt the tool to select the training documents. The attorney decides whether each document is relevant or not and codes that decision into the tool. As coding decisions are made, the tool processes those decisions and, by developing an evolving mathematical model of the attorney’s decisions, learns to recognize the difference between relevant and irrelevant documents. When the tool retrieves the next set of training documents, it “predicts” whether each document will be coded as relevant or irrelevant. When the attorney codes the document, the tool tracks the extent of agreement. As the iterations proceed, the tool becomes more effective at predicting relevance, until it has stabilized and can accurately predict relevance. With random training, the attorney typically has to review only 2,000 to 3,000 documents to stabilize the tool; other tools may require review of ten to thirty percent of the collection before stabilizing.
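The training-and-stabilization cycle described in footnote 7 can also be sketched in Python. The batch size, agreement threshold, and the tool and attorney interfaces below are hypothetical; actual tools implement their own selection methods and stabilization criteria.

    # Sketch of the iterative training loop: the tool proposes a training batch,
    # predicts relevance, the attorney codes the batch, the model is updated, and
    # the running agreement between predictions and coding decisions is tracked.
    # Training stops once agreement stays above a target level ("stabilization").
    # The `tool` and `attorney_review` interfaces are hypothetical.
    def train_until_stable(tool, attorney_review, batch_size=100, target_agreement=0.90, patience=3):
        stable_rounds = 0
        while stable_rounds < patience:
            batch = tool.next_training_batch(batch_size)         # random or seed-set driven selection
            predictions = [tool.predict(doc) for doc in batch]   # tool's prediction before coding
            decisions = [attorney_review(doc) for doc in batch]  # attorney codes relevant / not relevant
            tool.update(batch, decisions)                        # model updated with the new coding decisions
            agreement = sum(p == d for p, d in zip(predictions, decisions)) / len(batch)
            stable_rounds = stable_rounds + 1 if agreement >= target_agreement else 0
        return tool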
