Effectively Using AI

If you are reading this issue, then you're no doubt aware of the explosion of artificial intelligence in the professional world. You're likely seeing advertisements for AI products promising to make your life better and easier. Yet many lawyers are afraid of AI. Most of us have heard the horror stories of lawyers submitting documents drafted with ChatGPT that turned out to contain fake cases. We may have played around a little with AI products, but we hesitate to use them in our practice.

I’m right there with you, but I don’t want to be left behind. So I turned to some experts. Caitlin Moon is a Professor of the Practice at Vanderbilt Law School and the Founding Co-Director of the Vanderbilt AI Law Lab. Greg Siskind is the founding partner of Siskind Susser, a Memphis immigration firm, and has been at the forefront of developing AI programs for lawyers at Visalaw.ai, a company he co-founded.

Moon says that people who are afraid of AI need to replace that fear with curiosity. Many of the legal tools we currently use already have AI embedded in them, and technology companies are working to figure out how generative AI will make their tools better. Moon says, “we are in a moment now that we’ve never been in before and we can decide to be proactive and curious and chart our own course here or we can sit back and be reactive and let somebody else make these decisions for us.”

Moon offers three tips to help. First, learn how the AI products work. When lawyers get in trouble using AI, it is often because they don’t understand how the tools work. If you know how a tool works, you can understand both how it is useful and how it can be harmful. That knowledge will help you work efficiently and avoid the major problems that come from not knowing what you’re using.

For example, Siskind describes a feature of ChatGPT that makes it unsuitable for much legal work: information you load into ChatGPT is used to train its AI and is incorporated into its models. This creates confidentiality problems because your client’s confidential information is now inside ChatGPT. In addition, documents or prompts occasionally get flagged as inappropriate and are sent to human content reviewers. The last thing you want is an anonymous person looking at your confidential information, and the subject matter we deal with as lawyers is often exactly the kind of content that gets flagged.

Siskind has developed a program better suited to legal work than ChatGPT. Instead of pulling from everything on the internet, his program lets you build a library of sources, and the AI draws only from those sources when answering questions or generating documents. Your inputs are not shared publicly; they stay on the platform. Siskind has also created programs that let you load the foundational documents in your practice area and have the AI use them to perform tasks and answer questions.

Second, Moon suggests that law firms create a sandbox where their lawyers can practice with these tools without consequences. She warns firms against prohibiting their attorneys from using AI: people are going to use these tools no matter what, and a blanket prohibition guarantees rogue use. In that circumstance, the likelihood of a bad outcome is higher than if you had trained people. As with anything, you should be able to practice with nothing at stake before you perform with something on the line.

Third, follow your ethical obligations. There is a lot of fear about using AI ethically, but Moon says that your obligations when using AI are exactly the same as your ethical obligations when using any other technology. You have an obligation to check your work and protect confidentiality no matter what tool you are using. That does not change with the introduction of AI.

Siskind suggests that you treat work product created by AI the same way you treat work product created by anyone else. These programs may create a basic draft, but “from an ethics point of view you still have to be the lawyer. You still have to exercise diligence, you still have to check citations, you have to actually read the documents. The AI is good, but it’s not perfect. So, it will get the answer right most of the time, but it will occasionally have something in the answer that isn’t perfect.”

When you are looking at the landscape of AI in the legal profession, think about these three tips. Learn how the products work, practice with the products before using them for clients, and keep on top of your ethical obligations. Following those tips will allow you to be curious about AI and make your practice better.

Ellison Berryhill is an Appellate Attorney with the Nashville Public Defender's Office. His writing has been published in the Louisville Law Review, Virginia Journal of Criminal Law, and Michigan Journal of Law Reform. All work done for the Nashville Bar Association is in his personal capacity and does not represent the views of his employer.