“Technology has rendered impracticable the traditional approach to responding to a document request. The traditional method is labor intensive, with people reviewing documents to discern what is (or is not) responsive, with the responsive documents then reviewed for privilege, and with the responsive and non-privileged documents being produced. When reviewing documents in the dozens, hundreds, or low thousands, this worked fine. But with the advent of electronic recordkeeping, documents no longer number in the mere thousands, and various electronic search methods have developed. When electronic records are involved, perhaps the most common technique is to begin by applying keyword or Boolean searches to a defined universe of documents. Then, the responding party typically reviews the results of those searches to identify what, in fact, is responsive to the request. Implicit in this approach is the fact that some of the documents that match the keyword or Boolean search are responsive, while others are not. An emerging approach, and the approach authorized in this case…is to use predictive coding to identify those documents that are responsive. A few key points…are worth highlighting. First, the Court authorized the responding party to use predictive coding, but the Court did not…mandate how the parties proceed from that point…. Second, the Court held open the issue of whether the resulting document production would be sufficient, expressly stating ‘If, after reviewing the results, respondent believes that the response to the discovery request is incomplete, he may file a motion to compel at that time.’ To state the obvious, (1) it is the obligation of the responding party to respond to the discovery, and (2) if the requesting party can articulate a meaningful shortcoming in that response, then the requesting party can seek relief….
“Before moving on, it is helpful to define two concepts relevant to searching and retrieving documents: recall and precision.
“‘A search method’s precision is defined as the percentage of documents retrieved by the methods that are relevant. The higher a search’s precision, the fewer “false positives” there are. A search method’s recall is defined as the percentage of all relevant documents in the search universe that are retrieved by that search method. The higher the recall, the fewer “false negatives” (i.e., relevant but unretrieved documents) there are. Often, there is a trade-off between precision and recall—a broad search that misses few relevant documents will usually capture a lot of irrelevant documents, while a narrower search that minimizes “false positives” will be more likely to miss some relevant documents.’
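The trade-off the court describes can be made concrete with a small numeric sketch. The following Python snippet (not part of the opinion; the document IDs and the helper function are hypothetical, for illustration only) computes precision and recall for a broad search and a narrow search over the same universe of documents:

```python
# Illustrative only: hypothetical document IDs, not drawn from the case.

def precision_recall(retrieved: set, relevant: set) -> tuple:
    """Return (precision, recall) for one search over a document universe."""
    true_positives = retrieved & relevant             # relevant docs that were actually retrieved
    precision = len(true_positives) / len(retrieved)  # fraction of retrieved docs that are relevant
    recall = len(true_positives) / len(relevant)      # fraction of relevant docs that were retrieved
    return precision, recall

relevant_docs = {1, 2, 3, 4}                          # the four documents that are actually relevant

# Broad search: catches every relevant document but also four false positives.
broad = precision_recall(retrieved={1, 2, 3, 4, 5, 6, 7, 8}, relevant=relevant_docs)

# Narrow search: no false positives, but two relevant documents are missed.
narrow = precision_recall(retrieved={1, 2}, relevant=relevant_docs)

print(broad)   # (0.5, 1.0) -> low precision, high recall
print(narrow)  # (1.0, 0.5) -> high precision, low recall
```

The broad search has perfect recall but only 50% precision; the narrow search inverts those numbers, which is exactly the trade-off described in the quoted passage.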
“Respondent (in effect) argues that some of the documents found by the Boolean search are absent from the body of retained documents, and that this absence shows the predictive coding response was flawed, or, using the terms just defined, that its level of recall was too low. We will assume that it was flawed, but the question remains whether any relief should be afforded.
“Respondent’s motion is predicated on two myths.
“The first is the myth of human review. As noted in Sedona: ‘It is not possible to discuss this issue without noting that there appears to be a myth that manual review by humans of large amounts of information is as accurate and complete as possible—perhaps even perfect—and constitutes the gold standard by which all searches should be measured.’ 15 Sedona Conf. J. 214, 230 (2014). This myth of human review is exactly that: a myth.
“The second myth is the myth of a perfect response. The Commissioner is seeking a perfect response to his discovery request, but our Rules do not require a perfect response…. Likewise, ‘the Federal Rules of Civil Procedure do not require perfection.’ Like the Tax Court Rules, Federal Rule of Civil Procedure 26(g) requires only that a party make a ‘reasonable inquiry’ when making discovery responses. The fact that a responding party uses predictive coding to respond to a request for production does not change the standard for measuring the completeness of the response…. ‘One point must be stressed—it is inappropriate to hold TAR [technology assisted review] to a higher standard than keywords or manual review. Doing so discourages parties from using TAR for fear of spending more in motion practice than the savings from using TAR for review.’”
Dynamo Holdings v. Commissioner, No. 2685-11, 2016 WL 4204067 (T.C. July 13, 2016).