Beginning with the court’s approval of a technology-assisted review (TAR) protocol in Da Silva Moore in early 2012, it seemed TAR was about to revolutionize discovery. In a world of exponentially growing data and corresponding document review costs, TAR was the answer that would save us all! Document review costs could stop soaring into the multi-millions, exhausted attorneys could focus on the law instead of document review, and the accuracy of the produced set would be higher. Yet, as we wrap up 2014, the vast majority of cases in litigation still rely on brute-force document review techniques rather than TAR.
The legal industry’s hesitancy to adopt TAR has many explanations. Three of the most common concerns, and why each is wrong, are addressed below.
The first concern is that TAR is a black box that cannot be explained to the court. Keywords are simple to understand and explain: a document does or does not contain a specific word. TAR, by contrast, makes decisions that are difficult to explain, and attorneys worry about whether they could defend their process to a judge if challenged. This concern is misguided, however, for at least four reasons.
First, while TAR does involve complex mathematical algorithms, the underlying method can be understood even by non-calculus-loving attorneys. The technology establishes rules for relevancy based on input from subject matter experts and on word relationships (much like search terms). For a not-too-technical explanation of TAR, see the webinar, The Flavors of Data Analytics, Served Up for Attorneys.
Second, when considering defensibility, results matter more than process. You still need a good process to get good results, of course, but the metrics used to validate TAR results demonstrate statistically whether your process worked. So, while TAR’s process requires more effort to explain than keyword search and manual review, the results alone should prove that it worked. In many traditional workflows, attorneys never test their results in a way that demonstrates accuracy statistically. If you are not testing the results of keyword searches and manual review against statistically valid metrics, you have not established their defensibility; when you do test them, you may find the results of your traditional reviews are not as easily defended as you thought.
Third, cooperation can help ensure defensibility. If you propose a process to the opposing side and consider their input, or can even stipulate to a TAR protocol, you can avoid having to defend it later. Many parties do this with search terms today for the same reason. While this requires transparency regarding your process, and sometimes even attorney decisions on responsive and nonresponsive documents, the cooperation is often worth it to avoid trouble later.
Finally, beyond defending your process to a court, you will likely have to defend your process to your client at some point (like when they get the bill!). TAR is not a good fit for every case, but it can save millions of dollars for the right ones by eliminating significant amounts of review. You should be ready to explain to your client the options you considered, and how your decision-making process weighed the risks and benefits of each.
The second concern is accuracy: that TAR will miss responsive documents or produce privileged ones. This is a legitimate concern, but you are wrong to let it keep you from a TAR workflow. When faced with large numbers of documents, it is a certainty that you will not achieve 100% accuracy with any method. In fact, when the TREC Legal Track studied the accuracy of TAR in 2011, the organizers faced a familiar conundrum: how could a gold standard be set to gauge accuracy when human review was also imperfect? The answer was to sample smaller sets of documents and extrapolate those results to the larger set; this is also how we validate results from other technology-assisted and traditional reviews.
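To make the sampling-and-extrapolation idea concrete, here is a minimal sketch using hypothetical numbers (the figures, the function name, and the simple normal-approximation confidence interval are all illustrative assumptions, not any court-approved protocol):

```python
import math

def estimate_rate(hits, sample_size, z=1.96):
    """Estimate a proportion (e.g., the error rate in a set of documents)
    from a random sample, with a normal-approximation 95% confidence
    interval. Returns (point estimate, lower bound, upper bound)."""
    p = hits / sample_size
    margin = z * math.sqrt(p * (1 - p) / sample_size)
    return p, max(0.0, p - margin), min(1.0, p + margin)

# Hypothetical validation sample: 400 documents drawn at random from the
# set the tool marked nonresponsive; human re-review found 12 of them
# were actually responsive.
rate, low, high = estimate_rate(12, 400)
print(f"missed-document rate: {rate:.1%} (95% CI {low:.1%} to {high:.1%})")
# -> missed-document rate: 3.0% (95% CI 1.3% to 4.7%)
```

The point is that a few hundred sampled documents can bound the error rate of a much larger set, which is exactly how imperfect human review can still serve as a measuring stick.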
In document review, as in all other parts of discovery, it is essential to accept that reasonableness, not perfection, is the correct standard. Your process will be imperfect regardless of what method you use. Perfection is impossible, and expecting it interferes with the creation of a reasonable workflow by denying imperfection rather than seeking to control it. A reasonable process requires quality controls and statistical validation to ensure that acceptable accuracy levels are met, and adjusts processes as needed to attain them. When used correctly, TAR can achieve higher accuracy levels than traditional keyword search and review, at a lower cost.
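A useful corollary for planning validation is that the sample size needed for a given accuracy target depends on the desired margin of error, not on the size of the document population. A sketch using the standard sample-size formula for estimating a proportion (an illustration only; the function name and parameters are assumptions):

```python
import math

def sample_size(margin_of_error, confidence_z=1.96, p=0.5):
    """Minimum random-sample size to estimate a proportion within the
    given margin of error at ~95% confidence (z=1.96). Using p=0.5 is
    the most conservative assumption about the underlying rate."""
    n = (confidence_z ** 2) * p * (1 - p) / (margin_of_error ** 2)
    return math.ceil(n)

# Roughly 2,401 sampled documents give a +/-2% margin at 95% confidence,
# whether the full collection holds 100,000 documents or 10 million.
print(sample_size(0.02))  # -> 2401
print(sample_size(0.05))  # -> 385
```

This is why statistical validation stays affordable even as collections grow: the cost of measuring accuracy is essentially flat.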
Production of privileged documents can and should be managed in several ways. First, prevent privilege waiver with a FRE 502(d) order, or a similar protective order in state court. Second, TAR can be used to eliminate nonresponsive documents while still allowing for manual review of every document identified as relevant. Or, if you do not wish to review all of those documents, you can combine keyword search with TAR and ensure a more rigorous review of documents containing privilege terms. In that case, the only privileged documents that would not be manually reviewed are those containing no privilege terms, and, given the cost tradeoff, it is noteworthy that those are also the documents most likely to slip through a manual review anyway.
The third concern is often raised by attorneys who thought they wanted to use TAR but changed their minds when they realized it requires a significant amount of up-front document review by a subject matter expert. TAR does not work on its own; it requires training (would you trust it otherwise?). Especially in cases with tight deadlines, TAR may be impractical given conflicting demands on senior attorneys. However, this problem can be avoided with proper planning. When enough lead time is allowed, TAR often requires less review by senior attorneys than manual review, while also eliminating most of the expense of junior reviewers.
TAR can require more ramp-up time than traditional review. Subject matter experts must establish their workflow and invest time to train the tool, then validate results. Training the tool takes more time than training junior reviewers, so this may be impractical when time is tight. Even more time is required if you are cooperating with the other side and modify your process based on their feedback.
However, traditional review can involve similar investments of senior attorney time, assuming the process includes quality control validation. In a traditional review, senior reviewers sample documents to test and validate their search terms, and must then also supervise junior reviewers during the manual review. Supervision includes training, quality checking, answering questions, and providing feedback. That supervision is usually less intense than training a TAR system, but it continues for much longer and requires more hours in the long run.
TAR is Just Another Discovery Tool
As discussed above, TAR is not appropriate for every case. It is just another tool for tackling unwieldy volumes of documents. When it is appropriate, it is often best used in combination with other tools, including some keyword search and manual review. If your approach to discovery is to never use TAR, you are missing a tool that becomes only more important as document volumes continue to grow.
 Da Silva Moore, et al. v. Publicis Groupe, No. 11 Civ. 1279 (ALC)(AJP), 2012 WL 607412 (S.D.N.Y. Feb. 24, 2012)(Judge Peck), aff’d, 2012 WL 1446534 (S.D.N.Y. Apr. 26, 2012)(Judge Carter).
Maura R. Grossman & Gordon V. Cormack, Technology-Assisted Review in E-Discovery Can Be More Effective and More Efficient Than Exhaustive Manual Review, XVII RICH. J.L. & TECH. 11 (2011), http://jolt.richmond.edu/v17i3/article11.pdf. The study found that TAR’s accuracy rates exceeded those of manual review.