iCONECT Xperts

Technology Adoption in Legal Is More Like Turning a Tanker Than a Cigarette Boat

Written by Lynn Frances Jae | July 21, 2017 at 5:07 PM

I had the pleasure of participating on a panel about advances in the eDiscovery technology world. I'll share the highlights of my talk. But first, let me give some accolades to the organizers. 

The Legal Technology Showcase and Conference at South Texas College of Law in Houston provided excellent educational and networking opportunities for a market that is otherwise under-served when it comes to eDiscovery and legal technology information. The Houston Chapter of Women in eDiscovery organized the entire event with the volunteer efforts of their board and members. Quite an undertaking. And what a success! I'd love to see more chapters in second- and third-tier markets follow Houston's lead.

Along with the keynote by Casey Flaherty, former in-house counsel at Kia Motors, there were three panel sessions. I was invited to participate in the opening session, "State of the Industry: Data and Insight from Industry Leaders on the Latest Trends and Advances in Technology."

We covered some of the key issues of the day:

  • Technology Assisted Review (TAR)
  • The Cloud and Automation in eDiscovery
  • Cybersecurity and its impact on eDiscovery

During the TAR conversation, the moderator asked me about the challenges we have seen to the adoption of predictive coding technology. I discussed four main challenges that TAR 1.0 created and how TAR 2.0 addresses them. These were my main points.

TAR 1.0 flipped the workflow

It required a high-level reviewer to train the system. That meant she was the first person into the dataset and spent much of that training time reviewing non-responsive documents. This was someone accustomed to receiving only the most useful documents from her team after the review. The response: "You want me to look at how many non-responsive documents? Yeah... no." Old habits (workflows) die hard.

With Continuous Active Learning (TAR 2.0), legal teams can follow their traditional workflows, and with the assistance of active machine learning, they end up reviewing the most relevant documents first. Eventually they hit a point where the prevalence of relevant documents in the unreviewed set is so low that the logical next step is to stop the human review and run QC. A rough sketch of that loop appears below.
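Since this rank-retrain-review loop is the crux of the difference between TAR 1.0 and TAR 2.0, here is a minimal sketch of continuous active learning in Python. To be clear, this is not iCONECT's implementation: the scikit-learn classifier, the batch size, and the stopping threshold are all illustrative assumptions.

# A minimal continuous active learning (CAL) sketch. The model choice,
# batch size, and stopping threshold are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

BATCH_SIZE = 50          # documents served to reviewers per round (assumed)
STOP_PREVALENCE = 0.05   # stop when under 5% of a batch is relevant (assumed)

def cal_review(seed_docs, seed_labels, unreviewed_docs, ask_reviewer):
    """Rank-and-review loop. seed_labels must include at least one
    relevant (1) and one non-responsive (0) example so the model can train."""
    vectorizer = TfidfVectorizer().fit(seed_docs + unreviewed_docs)
    reviewed, labels = list(seed_docs), list(seed_labels)
    remaining = list(unreviewed_docs)

    while remaining:
        # Retrain on everything reviewed so far.
        model = LogisticRegression(max_iter=1000)
        model.fit(vectorizer.transform(reviewed), labels)

        # Serve the most-likely-relevant unreviewed documents first.
        scores = model.predict_proba(vectorizer.transform(remaining))[:, 1]
        ranked = sorted(zip(scores, remaining), key=lambda p: p[0], reverse=True)
        batch = [doc for _, doc in ranked[:BATCH_SIZE]]
        remaining = [doc for _, doc in ranked[BATCH_SIZE:]]

        # ask_reviewer stands in for a human coding call: 1 responsive, 0 not.
        batch_labels = [ask_reviewer(doc) for doc in batch]
        reviewed += batch
        labels += batch_labels

        # If even the richest remaining batch is mostly non-responsive,
        # stop the human review and move to QC/validation.
        if sum(batch_labels) / len(batch_labels) < STOP_PREVALENCE:
            break

    return reviewed, labels

The key property is in the ranking step: reviewers always see the documents the model currently scores as most likely relevant, which is why the relevant material surfaces first and the prevalence in each successive batch eventually falls off.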

TAR 1.0 flipped the budget

In traditional linear review, the lowest-paid members of the legal team logged lots of hours at the beginning of the project. With TAR 1.0, the highest-paid member of the team logged lots of hours at the beginning. GCs worried that they would pay for the training, that it wouldn't work, and that they'd still have to pay for the entire linear review. It felt like too much of a gamble.

By more closely mirroring the traditional linear workflow, TAR 2.0 removes the element of chance from the budget. Reviewers spend a higher percentage of their time looking at relevant documents and the machine reviews the non-responsive documents.

Attorneys didn't trust that the technology would be effective

The accepted truth in the legal industry, which was really a fallacy, was that human review was the gold standard. Attorneys assumed that they and their colleagues were better at finding relevant information than they really were. They couldn't imagine that the machines would be accurate and efficient.

This challenge wasn't solved by TAR 2.0 as much as it was solved by education. It took years of re-education to get attorneys to admit that the computer might provide more consistency across a large data set.

TAR 1.0 created disagreements about transparency

Opposing counsel, the courts, and even the attorneys using the technology had no idea how the computer did what it did. The "black box" idea created skepticism. Requesting parties wanted to see the training set, which included non-responsive documents, and producing parties had no interest in producing non-responsive documents. In his Da Silva Moore decision, Judge Peck recommended that the training set be produced. He reversed that recommendation three years later in Rio Tinto. By then, however, many had shut the door on TAR and weren't looking back.

TAR 2.0 easily provides transparency, as the training set equals the production set (minus hold-backs).

Many litigators have come around and do see the advantages of using machine learning to assist their reviews. But making that shift has been more like turning a tanker than a cigarette boat.