
Empowering Lawyers with Generative AI: Exploring the Future of Legal Technology

Editor's Desk

Interview with Rawia Ashraf of Thomson Reuters

Rawia Ashraf, Vice President of Legal Practice and Productivity at Thomson Reuters, develops customer-driven legal software that helps attorneys successfully do their jobs with a particular focus on innovative technologies and AI. She is a graduate of Cornell Law School and the University of Maryland, and before joining Thomson Reuters, she practiced antitrust law at Simpson Thacher & Bartlett in New York and DC. Besides her work in the legal industry, she is active with Food Bank for New York City and other nonprofits focusing on women and income inequality.

Today’s General Counsel recently interviewed Ms. Ashraf:

How do you see companies incorporating generative AI capabilities in the next 12 months?

I see a few ways legal technology companies might adopt generative AI. First, there will be some who simply incorporate it for the sake of having it, maybe to boast about having the latest tech or to ride the buzz. But it’ll be interesting to see how that pans out.

The companies that truly succeed with generative AI will be the ones who use it to tackle existing problems in a better way. They’ll ask themselves, “What challenges do our users face? What issues are prevalent in our industry?” Then they’ll figure out how to leverage generative AI to address those problems and close the gap.

So, the key is for product teams, engineering teams, and customer researchers to focus on using this new capability to solve real issues that affect their users. By doing that, they’ll make the most out of generative AI’s potential.

To build off of that, what do you think some of those problems are? What would be an example of something generative AI could help with?

Let’s step away from the legal side of things and think about technology on a broader scale. Here’s the thing: most tech users only scratch the surface of what a tool can actually do, probably using only about 10 or 20 percent of its capabilities. Imagine if we could tap into the remaining potential through generative AI. Instead of getting into coding or complex workflows, we could take a more user-friendly approach.

Picture this: you’re using an application, and you want it to create a specific automation workflow. Normally, that might intimidate a lot of people, but with generative AI, you could just command the application through dialogue. You’d say something like, “Hey, create this automation for me,” and boom, the AI would act like a tech-savvy assistant, setting up the workflow for you. I think that could be incredibly powerful.

Microsoft 365 Copilot videos and demos are a great example. They showcase how people can get more out of Excel by simply telling it what they want through generative AI. So, instead of fumbling around trying to create a pivot table, you can just ask Excel to do it for you — which is a game-changer.

The other exciting thing is that generative AI is changing the way we interact with tools altogether. Imagine tossing aside those complicated research queries and just asking your documents questions in plain language. Generative AI could mimic the way we think and work, integrating seamlessly into our technology.
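The plain-language interaction described above can be sketched as a thin mapping from a user's request to a structured action. This is a hypothetical illustration, not any product's actual design: the `Automation` schema and the keyword-based parser are stand-ins for what a real application would delegate to an LLM (for example, via structured output or function calling).

```python
# A sketch of "command the application through dialogue": translate a
# plain-language request into a structured automation. The schema and the
# keyword rules below are illustrative stand-ins for an LLM-backed parser.

from dataclasses import dataclass

@dataclass
class Automation:
    action: str   # what to do, e.g. "pivot_table" or "remind"
    target: str   # the object the action applies to

def parse_request(request: str) -> Automation:
    """Map a natural-language request to a structured command (toy rules)."""
    text = request.lower()
    if "pivot table" in text:
        return Automation(action="pivot_table", target="current_sheet")
    if "remind" in text:
        return Automation(action="remind", target="signature_deadline")
    return Automation(action="unknown", target="")

cmd = parse_request("Hey, create a pivot table summarizing sales by region")
```

In a real system, the structured command produced by the model would then be executed by the application's existing automation machinery; the user never touches the workflow builder directly.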

With this in mind, how do you see the legal market evolving in the next 12 months?

So, if we zoom out, I really believe that in a few years—maybe three or four—AI assistants will become commonplace tools for every lawyer. It’s going to be the standard. But in the next 12 to 24 months, we’ll start witnessing this transformation taking shape. We’ll begin experimenting with AI in the legal field to see what works and what doesn’t. Honestly, right now, we don’t have all the answers, and it’s a learning phase for all of us.

And sure, there might be some parts of the legal market that will mostly stay the same within the next year. They’ll just be starting to test the waters, not wholeheartedly adopting generative AI in their workflows. However, there will likely be specific areas where AI proves useful, and that’s where we’ll focus our efforts and invest more resources. Of course, a lot depends on how well companies tackle the issues related to the accuracy and trustworthiness of AI-generated answers. That trust factor will play a significant role in guiding the legal sector’s adoption of AI technology.

One example of those accuracy issues is that generative AI can hallucinate when producing content, which understandably might make lawyers very nervous. Do you see this skepticism being mitigated by more product evolution or simply by managing expectations?

Both factors are crucial here. Accuracy and correctness are the lifeblood of a lawyer. People seek lawyers’ advice because they want the right answers, and that’s never going to change. It’s essential that the technology keeps improving, reducing hallucinations and achieving higher levels of accuracy. Of course, it won’t be perfect, just like humans aren’t, but that’s the direction we need to go in.

Now, when it comes to managing expectations, we can’t expect generative AI to outperform human lawyers. It’s essential for users to understand its limitations and how to use it effectively in specific situations. It’s like working with junior colleagues; you don’t blindly accept everything they give you. You review it, check sources, and build trust with their work over time.

I’m optimistic about the progress being made in minimizing inaccuracies and hallucinations with generative AI. The science is advancing rapidly, especially with techniques like retrieval augmented generation and grounding AI in real content rather than relying solely on standard models. We’ve seen significant progress in just the last six months, and I’m excited about what’s to come.
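The grounding idea mentioned above can be sketched in miniature. This is a hedged illustration of the retrieval-augmented generation pattern, not any vendor's implementation: the corpus, the word-overlap scoring, and the prompt wording are all toy stand-ins (real systems use embedding-based retrieval and an actual model call).

```python
# A minimal sketch of retrieval-augmented generation (RAG): before asking
# the model anything, retrieve the most relevant passages from a trusted
# corpus and instruct the model to answer ONLY from those sources, which
# is one way to reduce hallucinated answers.

import re

def words(text: str) -> set[str]:
    """Lowercase word set, stripped of punctuation."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def score(query: str, passage: str) -> int:
    """Toy relevance score: count of shared words."""
    return len(words(query) & words(passage))

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k passages with the highest overlap score."""
    return sorted(corpus, key=lambda p: score(query, p), reverse=True)[:k]

def build_grounded_prompt(query: str, corpus: list[str]) -> str:
    """Assemble a prompt that grounds the model in retrieved sources."""
    sources = "\n".join(f"- {p}" for p in retrieve(query, corpus))
    return (
        "Answer using ONLY the sources below. "
        "If the sources do not contain the answer, say so.\n"
        f"Sources:\n{sources}\n"
        f"Question: {query}"
    )

corpus = [
    "The master services agreement renews automatically each January.",
    "Either party may terminate with 60 days written notice.",
    "Confidential information must be returned after termination.",
]
prompt = build_grounded_prompt("How much notice is required to terminate?", corpus)
```

The point of the pattern is that the model's answer is constrained to real content the organization already trusts, rather than whatever the model's parameters happen to suggest.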

It’s generative, so it’s only going to get better?

The interesting thing about generative AI, in my experience, is that it’s kind of the ultimate people-pleaser. It’s not giving you a correct answer; it’s trying to give you the answer it thinks you want. And so, as we give it better parameters and better context around what we want and what we are expecting, it can provide better answers.

In your experience as a former attorney and current product leader, what would you consider to be the secret sauce in building the best legal tech?

Well, I think there are probably a few ingredients in a secret sauce, but I guess number one is really understanding your users. Really putting yourself in their shoes and understanding that no one went to law school to chase signatures, organize closing binders, or footnote and format citations. Lawyers and legal professionals spend a lot of time on that kind of work. And I think if you’re conscious of helping lawyers practice at the top of their license, that’s critical.

The key is to take out the painful aspects of the legal workflow, so lawyers can focus on what they really went to law school for. It’s like clearing the path for them to do what they do best. But, it’s also important to be realistic and not expect an overnight revolution in how lawyers work. Taking small, incremental steps is the way to go. By understanding the problems they face and working closely with them, solving those smaller issues, and gradually building trust, you can pave the way for adopting new technology and improving the overall workflow. It’s a balanced combination of understanding lawyers’ challenges and adapting technology to fit seamlessly into their ways of working.

Often, the newest and consequently least qualified attorneys produce the first draft of documents, which can trigger rounds and rounds of review. Since you were an associate at one time, can you explain how the technology may have improved your first drafts?

I think this is actually one of those areas where technology can be hugely useful in terms of knowledge management. It’s really interesting how lawyers and organizations often rely on what’s in their minds rather than having everything properly documented. But with AI-based knowledge management solutions, you can easily find the best initial draft. There’s really no need for anyone to start from scratch these days unless it’s a completely new area of law. Even then, it’s smarter to look for something similar and use that as a starting point. The key is to have well-organized templates, precedents, and sample contracts that the company has used before. Juniors can benefit a lot from these resources, tapping into the organization’s experience instead of reinventing the wheel every time. It just makes the whole process more efficient and effective.

The challenge is that setting up knowledge management systems takes so much time. That’s where generative AI can make those tasks easier, so you don’t have to invest up-front time in making those resources more accessible.

It’s clear that AI technology is focused on specific tasks like classifying documents or extracting key provisions. What other types of tasks might be favorable to process improvement by AI?

So, we talked about automating workflows, and taking the grunt work out of legal is a big one. What’s new and exciting about generative AI is the technology’s ability to draft content, summarize content, and modify content. Those are net new capabilities that we’ve not experienced before in mass-available technology. So that’s the next space. Generative AI can really shorten that process so that you can focus on the next steps as opposed to summarizing what has happened.

So as a final thought then, would you say the key area where AI is going to help is by making processes more efficient, less time-consuming, and less menial instead of attempting to make legal decisions that lawyers would typically make?

Yes, that’s where I see it first taking hold because it’s less risky to implement. And as organizations get comfortable with it as an assistant, then they can start to scale up how it’s used. But I don’t see it replacing legal judgment. It shouldn’t. Lawyers should be making decisions about those things, and AI should make it easier to have the relevant information and get them to a decision point more quickly.

If you'd like to continue the discussion, please reach out to rawia.ashraf@thomsonreuters.com.
