
Am I Hallucinating? Artificial Intelligence (AI) Generative Chatbots and ABA Model Rules

By Liz Lindsay-Ochoa, JD, LLM 1

This article provides an overview of the ethical considerations raised by the use of generative AI chatbots such as ChatGPT (OpenAI), Bard (Google), and Bing AI for legal work, including the possibility of chatbots fabricating information, including legal citations and authorities; unintentional breaches of confidentiality; and other issues arising under the ABA Model Rules of Professional Conduct.

ChatGPT (OpenAI) has been a hot topic since it was launched to the public. It and similar chatbots, such as Bard (Google) and Bing AI, give rise to ethical considerations. This technology is known as generative AI: the program creates new content by predicting which words come next based on a prompt from the user. What are the ethical considerations of using it?

The ABA Model Rules of Professional Conduct (Model Rules) give attorneys some guidance on our ethical obligations. Although generative AI touches on many portions of the Model Rules, I will focus specifically on Model Rule 1.1 (Competence), Rule 1.6 (Confidentiality of Information), and Rules 5.1 and 5.3 (Responsibilities of a Partner or Supervisory Lawyer and Responsibilities Regarding Nonlawyer Assistance).

Model Rule 1.1 Competence

A lawyer shall provide competent representation to a client. Competent representation requires the legal knowledge, skill, thoroughness and preparation reasonably necessary for the representation.

Comment 8 to Model Rule 1.1 states that “To maintain the requisite knowledge and skill, a lawyer should keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology, engage in continuing study and education and comply with all continuing legal education requirements to which the lawyer is subject.” An attorney’s duty to provide competent representation includes making informed decisions as to whether AI is appropriate for the legal services provided to a client and whether the program performs as marketed.

Artificial hallucination refers to the phenomenon in which a model produces seemingly realistic output that does not correspond to any real-world input. Generative chatbots have been found to produce hallucinations, particularly when trained on large amounts of unsupervised data. By now, you have surely heard about the lawyer who used ChatGPT to write his brief. The AI tool created fake case citations, that is, hallucinations. That lawyer has, of course, faced sanctions. The judge stated in his opinion that there was nothing “inherently improper” about using AI to assist in legal work, but lawyers must ensure that their filings are correct.

Undoubtedly, there are other aspects to consider. The use of fake information to change the answers that generative chatbots produce is something to be wary of. Incorporating methods for monitoring and detecting hallucinations can help address this issue. Another issue is the potential to perpetuate biases and discrimination. The tools are only as unbiased as the data and algorithms used to create them, and there is a risk that these tools can perpetuate existing biases in the legal system. Overall, attorneys have an ethical obligation to be competent in the use of technology and to ensure that their use of AI-powered tools does not compromise their clients’ interests.

Model Rule 1.6  Confidentiality of Information

Rule 1.6 includes the obligation to use reasonable efforts to prevent the unauthorized disclosure of, or unauthorized access to, information relating to the representation of a client. As you may have seen, OpenAI scrapes data from the internet to improve its ability to generate answers. But if you look through the Security Portal or the Terms and Policies of ChatGPT, you will find some potentially alarming items. If you use ChatGPT, you agree to allow OpenAI to share the data it collects with unspecified third parties. Additionally, its employees may review your conversations with ChatGPT. This is all meant to improve the chatbot’s ability to provide better responses, but it is not privacy-friendly. Accordingly, an attorney has a duty to understand the vendor’s security practices and decide whether its security policies are reasonable before sharing any client information.

Model Rules 5.1 and 5.3 Responsibilities of a Partner or Supervisory Lawyer and Responsibilities Regarding Nonlawyer Assistance

This is a novel technology. Is a generative chatbot a nonlawyer that you need to supervise? The Model Rules allow lawyers to delegate work to paraprofessionals, but they do not allow lawyers to delegate responsibility. Lawyers also must ensure that delegation of work to nonlawyers, including the use of vendors who may electronically handle attorney-client communications and confidential information, is done responsibly.2 It is also clear that the responses generated by ChatGPT and other chatbots can be imperfect; the technology may not always provide the most up-to-date or relevant information. Still, these tools, when supervised correctly, can help ensure that legal documents adhere to specific requirements and best practices, reducing the risk of errors or omissions and saving time.

I considered writing this article using ChatGPT, but I decided against it.3 Finally, although broader than this subject, I would be remiss not to mention that the House of Delegates adopted Resolution 604 at the 2023 ABA Midyear Meeting. The resolution addresses how attorneys, regulators, and other stakeholders should assess issues of accountability, transparency, and traceability in artificial intelligence. This topic is evolving at a rapid pace, and lawyers must think through the ethical implications before using this or any new technology.

Endnotes

1. Liz is a Private Wealth Strategist and a leader in the Global Wealth Solutions Group at Raymond James. She focuses on estate, transfer, and income tax planning, using her knowledge to problem solve for high-net-worth clients. She is on Council for the American Bar Association’s (ABA) RPTE section and is active in several committees.

2. See https://www.ctbar.org/docs/default-source/publications/ethics-opinions-informal-opinions/2013-opinions/informal-opinion-2013-07 on cloud computing. This opinion also provides a list of other states’ opinions reaching similar results.

3. However, this is an interesting article written by ChatGPT, showing the prompts the author used to obtain his results: https://clp.law.harvard.edu/knowledge-hub/magazine/issues/generative-ai-in-the-legal-profession/the-implications-of-chatgpt-for-legal-services-and-society/
