Best Use or No Use? What Should Lawyers Do About AI Right Now?
By Jaime Herren
Jaime Herren is a partner at Hartog Baer Zabronsky, APC, in Orinda, California, in the San Francisco Bay Area. She is the co-vice chair of the Alternative Dispute Resolution Committee of the Litigation Section of ABA RPTE.
Attorneys often ask me whether AI will replace young lawyers in the traditional law firm structure. Let me give my opinion upfront to end that suspense: My response is simply, no. I do think it will change how students learn at all levels of education, including law school. I also think AI will fundamentally change how law students learn to conduct legal research and how traditional legal resources and practice guides deliver search results. It might change how junior attorneys are tested and trained. Like every new powerful and pervasive technology, AI will be a tool that younger generations learn with ease and take for granted and that older generations will adopt, struggle with, or refuse to learn.
My advice is for attorneys to embrace the new functionality of AI as they encounter it in their current practice. Know that Westlaw, Lexis, Google, and every other search engine are already incorporating and will continue to incorporate AI. You will, inevitably, use it. Why not start now?
Only the future will reveal the scope of the effects of AI. For now, what do lawyers do with it and in anticipation of it?
If the reader takes nothing else away from this article, it is that one should not turn away from AI. That is not to say that it is something one needs to conquer. If you have not yet ventured into the fields of coding and software, you should not feel the need to learn how AI operates. If you can use Google, but you don’t know how Google works, then you do not need to know how AI works. But there are things one needs to know to use AI better. Attorneys have a basic ethical obligation to understand the tools they rely on to provide legal services to their clients. The attorney needs the basic knowledge and skills to know whether AI is available for the task, whether the attorney should use AI for that task, and how to use the preferred AI tool to render a good result, while complying with the rules of professional conduct and the attorney’s duties to the client.
Why Do You Need to Obtain Basic Knowledge and Skill About AI?
You need to obtain basic knowledge and skill about AI because you are required to. Lawyers have a duty to provide competent representation. Rule 1.1 of the American Bar Association (ABA) Model Rules of Professional Conduct (the Rules) states, “[a] lawyer shall provide competent representation to a client.” Competent representation “requires the legal knowledge, skill, thoroughness and preparation reasonably necessary for the representation.” Comments to Rule 1.1 include that “a lawyer should keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology.”
History and Detailed Functionality That Is Beyond the Basics
Here’s what you do not need to know to provide competent legal services. You don’t need to know that Google was originally going to be named Backrub, or that “Google” was intended to be the word “googol” but was misspelled, or that a googol is the number written as a one followed by 100 zeros. You don’t need to know that OpenAI published research on generative models in 2016, models trained by collecting a vast amount of data in a specific domain, such as images, sentences, or sounds, and then teaching the model to generate similar data. In 2019, OpenAI published further research on GPT-2, which incorporated human feedback. In 2022, OpenAI published research on InstructGPT models, siblings of ChatGPT, that show improved instruction-following ability and reduced fabrication of facts.
What Do You Need to Know?
If I ask you to Google something—anything I came up with on the spot in a conversation with you—you could do it. But if I ask you how Google rendered your results, could you answer? Probably not. But you can locate the tool, use it for its intended purpose, and have results in a matter of seconds.
More than that, you can also tell if AI malfunctioned or if it did not satisfactorily perform the search. You can compare search engines, and maybe Google is not your preferred or default search engine because you find that you obtain superior results on another platform. You can easily find and use alternative search engines when you need a different result or a different type of result. But you do not know what the search engine actually does when you engage it by hitting that search button. That’s OK, and this is the level of basic knowledge and skill that I recommend that every attorney should have for AI.
If I asked you to use AI to generate a nonconfidential draft cover letter, you should be able to do it without further instruction. You could tell me if the result was decent or if you want to try again, maybe with a different chatbot or with a few more iterations. Perhaps you could get a better generic template cover letter. If you can perform that task with relative ease, I’d say that you know what you need to know about using AI tools. Now you need to consider if the nature of AI tools is suited to your task.
Essential Knowledge of the Nature of AI
AI learns by gathering information. The search process it uses to learn from the web is sometimes referred to as data scraping. AI learning includes retaining the information it is given and that it locates. That means it keeps a copy of the information you provide to it. Read that again because it has heavy implications for attorneys using AI.
Every attorney-client relationship is confidential and privileged. You must never tell AI anything that is otherwise protected by the attorney-client privilege. Just as you would never put confidential information into a search bar, you should never feed confidential information to an open chatbot.

AI is already a part of our lives, but you may not even know it. Traditional AI, also known as Narrow or Weak AI, learns from data (input) and produces a response in the form of decisions or predictions based on the input. Forms of traditional AI include search engine algorithms and voice assistants—think of Siri and Alexa. They offer customized recommendations on shopping and streaming platforms, for example.
Then there is Generative AI. According to Google’s AI Overview, “Generative AI is a type of artificial intelligence (AI) that creates new content, such as text, images, music, audio, and videos. It’s different from traditional AI, which is programmed with specific rules and algorithms to make smart decisions. Generative AI learns to create new content by identifying patterns in large datasets and creating new variations based on those patterns.”
The legal profession has already embraced the use of traditional AI, even if practitioners don’t yet know it. AI is embedded in legal research platforms like Westlaw and LexisNexis. It is already being used in discovery in litigation to organize large amounts of data, to analyze contracts, and even to predict how a judge might rule. Generative AI, on the other hand, is a new kid on the block. Its ability to generate new content is uncharted territory, and while it may prove useful, it will present new challenges to lawyers when performing their duties.
Essential Knowledge of Malicious Uses of AI
You should know about the negative uses of AI, including deepfakes and hallucinations. Deepfakes are videos or images in which AI generates synthetic content that convincingly imitates a real person’s likeness or voice. Think of the fake videos seeking to imitate actors or politicians making outrageous statements.
Hallucinations, on the other hand, are where AI generates new content that is not based on input data, or when the model relies too heavily on biases or patterns, resulting in erroneous or novel responses. The danger is that hallucinations can generate results that are seemingly plausible on the surface, yet they have no basis in reality. The best example for our purposes is the nonexistent caselaw cited by various lawyers in their briefs, which has resulted in sanctions.
The Names of Your Tools
The most famous AI chatbot is ChatGPT, but there are many available and the list is ever-changing. Here is a short list that names a few:
Microsoft Copilot (Bing Chat)
Gemini (Google Bard)
Socratic by Google (for kids)
Anthropic’s Claude
Perplexity.ai
Jasper (subscription $)
You.com
HuggingChat (open source)
CoCounsel (Westlaw)
Protégé (LexisNexis)
Why You Should Be Able to Generate a Nonconfidential Cover Letter and Why I Do Not Suggest Generating Any Kind of Legal Brief
Model Rule 1.6 expressly prohibits attorneys from revealing “information relating to the representation of a client.” Model Rules 1.9(c) and 1.18(b) require lawyers to protect information of their former and prospective clients and require attorneys to make “reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation of the client.” As stated above, if you wouldn’t put confidential information into a search engine, the same should be true for chatbots. Search engines and chatbots are outside of the confidential attorney-client communications relationship. You must not reveal confidential or privileged information to chatbots.
As stated earlier, AI retains the information provided to it. If you put in confidential information, the chatbot is going to remember it forever and could generate it in response to someone else’s inquiry without your knowledge. Often when you visit a chatbot website or sign up for an AI service, you agree to Terms of Service (TOS) and other privacy policies. These TOSs and policies may provide insight on how the data you input are being accessed, stored, used, and perhaps shared. The TOSs differ as to each chatbot and, if you intend to become a frequent user of a particular chatbot, you should familiarize yourself with the terms.
Disclosure of the Use of AI May Be Necessary
Attorneys owe their clients a duty to communicate with them about the character of the representation and the means to be used in the course of representation. Model Rule 1.4(a)(2) requires lawyers to “reasonably consult with the client about the means by which the client’s objectives are to be accomplished,” and Model Rule 1.4(b) imposes a duty to explain matters “to the extent reasonably necessary to permit a client to make an informed decision regarding the representation.” The question then becomes when your use of AI technology should be disclosed to clients. You may not need to disclose the use of prevalent AI tools such as legal research platforms. The use of AI must be disclosed if the client inquires about your use of the technology. Similarly, you must communicate with the client when the results of AI will influence significant aspects of the representation, such as evaluating potential outcomes or conducting jury selection, or when the use of AI would violate the terms of your engagement agreement or the client’s reasonable expectations about how you will carry out the representation. If your engagement agreement includes a technology clause, review and revise it to make it consistent with your intended use of AI tools. If your engagement agreement states that you do not or will not use AI tools, you should be very careful about the tools you use because many “safe” and “traditional” tools now incorporate AI features.
Attorney Fees and Expenses When Using AI
Model Rule 1.5(a) requires that a lawyer’s fees and expenses be reasonable in light of various nonexclusive factors. Further, Model Rule 1.5(b) requires that the rate and expenses be communicated to the client, preferably in writing and either before or within a reasonable amount of time after starting the representation.
Whereas AI may decrease the base hours spent on certain tasks, the necessity for review may rebalance that concern. The quality of AI output should be reviewed as carefully as the work of a summer associate—or of someone with questionable ethics. A stinging sanction order out of New York tells the tale of a chatbot that made up caselaw with citations that looked real and stated that the opinions were drafted by actual judges. Mata v. Avianca, Inc., 678 F. Supp. 3d 443 (S.D.N.Y. 2023). It is not the only such order. See also, e.g., Park v. Kim, 91 F.4th 610 (2d Cir. 2024).
The cost of the use of AI technology could be considered overhead or an out-of-pocket expense. If the service is set up in your word processing program or perhaps has a subscription fee regardless of how often it is used, this would be an overhead cost that cannot be charged to clients unless this charge was previously disclosed. If the use of AI is for a particular client and the charge is on a per-use basis, this cost can be passed on to the client as an out-of-pocket expense.
Conscious Supervision Is Necessary
Model Rules 5.1 and 5.3 address managerial and supervisory lawyers’ responsibilities when overseeing subordinate lawyers and nonlawyers. Lawyers in supervising positions must implement measures and policies to ensure that the lawyers and nonlawyers within the firm comply with their professional obligations when using AI. Similarly, supervisory lawyers should make reasonable efforts to ensure the firm has in effect measures giving reasonable assurance that the conduct of nonlawyers, such as third-party providers, is compatible with the professional obligations of the lawyer.
Managerial lawyers must implement clear policies regarding the use of AI to ensure compliance with the rules of professional conduct. In addition to clear guidelines, managing lawyers should implement training on the proper use of AI tools, including the basics on how this technology works, its benefits and drawbacks, the ethical considerations, and best practices for handling data in a confidential and private manner.
Although supervision over third-party nonlawyers may prove to be more difficult, managing attorneys are not without recourse. As already discussed, it will be vital to learn from the provider its Terms of Use and privacy policies to learn what happens with the data collected by the third-party provider. Other alternatives include configuring the AI program so that it is set to preserve confidentiality and security and ensuring the reliability of the program’s security measures.
Appropriate Use of AI Tools
AI is here to stay. If you choose to embrace it in your practice, you must do so with caution while adhering to the duties of professional conduct. The Standing Committee on Ethics and Professional Responsibility of the ABA issued Formal Opinion 512, which examines ethical considerations and provides guidance on best practices when using AI. These are some of the main takeaways.
Be cautious and verify the accuracy of your results. How much verification is required will depend on the AI tool used and the task performed. To provide competent representation, you must have a reasonable basis for relying on the AI tool’s results, which may require you to test the AI tool to ensure its accuracy and reliability. Similarly, you cannot delegate your responsibilities. While AI can be used as a springboard to generate ideas, it cannot replace your own judgment.
AI tools may disclose information about a client’s representation to persons inside the firm who are prohibited from accessing the information or inadvertently to persons outside the firm. Because of this risk, the client’s informed consent must be obtained before entering the client’s information into chatbots. This consent cannot be obtained by adding boilerplate language to your engagement agreement. Furthermore, the ABA recommends reading, or consulting with someone who has analyzed, the Terms of Use, privacy policy, and related contractual terms and policies of the AI tool you use.
If you are embracing the technology, you may want to consider including this information in your engagement agreement to communicate in advance to your clients your use of AI and to set the expectations of representation.
You should consider whether your use of AI tools should be disclosed to the client. As discussed above, disclosure may vary depending on the type of tool and the purpose for which it is being used. In some instances, the use of AI may not need to be disclosed, or it may be addressed in your engagement agreement if used for routine tasks. Other instances, however, will require you to discuss the use of AI with your client, as it may involve important decisions about the strategy of the case.
Another consideration is the fees and costs charged to the client when using AI. Although attorney fees continue to be analyzed based on the type of fee agreement and the reasonableness of the fee, you will want to disclose in the engagement agreement any charges that may be associated with the use of AI or determine whether they can be charged as an out-of-pocket expense if the situation warrants it. Finally, managing attorneys should make reasonable efforts to properly train their subordinates to ensure compliance with the rules of professional conduct while using AI and must make every effort to understand the terms of service of the AI providers they choose to use in their practices.