
6 minute read
Write on Cue
written by JUSTIN A. GOODMAN
AI chatbots leaped into pop culture this year, and since then, nearly every conversation I’ve had with a lawyer about the technology has focused on whether it’ll “replace us.”
(As a joke, I asked ChatGPT to help me “evict” a colleague. With limited input, it produced a reasonably coherent trial brief.)
An AI company actually tried to put this artificial advocacy into practice, contracting with a “pro per” traffic court defendant (one acting as their own attorney) to use its language model, which would listen to the hearing and generate real-time responses to provide a defense in court.
The traffic court stakes were low (nobody was going to jail), and a financial loss for the defendant would surely be worth less than the PR value for the software company. But the crack legal team called off the stunt when state bar prosecutors threatened litigation against the company for practicing law without a license. I’m not convinced this was the right call. The defendant had a constitutional right of self-representation, and he was only using the type of tools lawyers have been using for years to reduce the effort and time required to find the “right answer” to a legal problem.
I was one of the last generations of law students to learn “paper research.” Among other things, “the Law” consists of legislative enactments (codes) and judicial opinions (from the court of appeal or supreme court). Codes change over time, but their overall scheme is generally fixed in legislative volumes, organized by topic. (For instance, the “code of civil procedure” is where the “unlawful detainer statutes” are located.)
Judicial opinions, on the other hand, represent the adjudication of actual controversies (as opposed to well-organized, legislative engineering) and are published, as they are issued, in reporters. Legal research requires a lawyer to find interrelated concepts, knitted together across self-referential cases and codes (it isn’t simply narrated in chronological order), so private legal annotators (like Lexis or Westlaw) index the concepts. In the past, you’d need to research in a physical place, like a library, that was big enough to house centuries’ worth of volumes for each jurisdiction. You needed to know the name for your concept to use the index, and you needed a lot of time to physically wade through it all.
Fortunately, I was also one of the first generations to learn electronic databases. You still had to know the concept, but you could use Boolean search connectors to ask for all results with “language /2 model chat /2 bot /p unlicensed /s practice /s law” to research whether my assumption above was correct.
Of course, nobody talks like that. About a decade later, these annotators introduced natural language search features. If you didn’t know the concepts or how best to search about them, you could just start with a question. In the quest to find “the right answer,” there was something comforting about being able to search with parameters that would have to be found in a relevant authority and turning up a closed universe of that authority. Natural language searches were more meandering. But they were a good start, and once you found a thread to tug at, you could usually find your way to the core cases and codes that constituted the rule of law.
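For anyone who never suffered through them, the connectors in that Boolean string above encode proximity: “/2” asks for terms within two words of each other, “/s” for terms in the same sentence, and “/p” for terms in the same paragraph. A toy sketch in Python of the first two, my own illustration and nothing like either vendor’s actual engine, shows the idea:

```python
import re

def within_n(text, a, b, n):
    """True if words a and b appear within n words of each other ("/n")."""
    words = re.findall(r"\w+", text.lower())
    pos_a = [i for i, w in enumerate(words) if w == a]
    pos_b = [i for i, w in enumerate(words) if w == b]
    return any(abs(i - j) <= n for i in pos_a for j in pos_b)

def same_sentence(text, a, b):
    """True if words a and b appear in the same sentence ("/s").
    Substring matching is crude, but good enough for a sketch."""
    sentences = re.split(r"[.!?]", text.lower())
    return any(a in s and b in s for s in sentences)

headnote = ("The language model behind the chat bot was accused of "
            "the unlicensed practice of law.")
print(within_n(headnote, "language", "model", 2))    # True: "language /2 model"
print(same_sentence(headnote, "unlicensed", "law"))  # True: "unlicensed /s law"
```

The payoff of that rigidity was precision: every hit had to contain your terms in the arrangement you demanded.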
Recently, targeted advertising started trying to sell me on a software suite that would take this concept one step further. Once it helped you find relevant cases, it would also summarize them and generate a legal brief. While AI companies have been threatened away from entering the courtroom so far, the hype makes you wonder if the only thing protecting lawyers’ livelihood is a litigious trade association, preserving an outmoded model, like taxi medallion owners protesting Uber’s creative destruction.
Surprisingly, some lawyers almost seem to be inviting their obsolescence. This May, a seasoned attorney in New York got into some trouble opposing a motion with AI-generated content. ChatGPT not only generated the argument in the brief but also the case law itself—it literally just made up the cases! To his credit, the attorney asked it if the cases were “real,” and it told him they were. Presumably, AI could be programmed with safeguards to use only real law, instead of rendering a compelling brief at all costs (whether legally tenable or not).
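That kind of safeguard is not hard to picture. A minimal sketch, assuming a trusted database of real opinions to check against (every case name and citation below is hypothetical, invented for illustration), might vet a draft’s citations before anything is filed:

```python
# Stand-in for a real citator service; these entries are hypothetical.
VERIFIED_REPORTER = {
    "Doe v. Roe (1990) 1 Cal.4th 100",
    "Smith v. Jones (2005) 36 Cal.4th 250",
}

def vet_citations(draft_citations):
    """Split model-generated citations into verified and suspect lists."""
    verified = [c for c in draft_citations if c in VERIFIED_REPORTER]
    suspect = [c for c in draft_citations if c not in VERIFIED_REPORTER]
    return verified, suspect

good, bad = vet_citations([
    "Doe v. Roe (1990) 1 Cal.4th 100",
    "Acme v. Coyote (2023) 999 Cal.9th 1",  # a hallucinated case
])
print(bad)  # the made-up citation is flagged for human review
```

Even with that filter, though, a human still has to decide whether the surviving authority actually supports the argument.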
This presents the most important difference between lawyers and language models—intentionality. There are certainly plenty of legal tasks that could be automated to generate work product better than an average first-year associate (and at a fraction of the cost). But even a fresh attorney can exercise discretion.
Lawyers are empowered by experience, but we’re driven by ethics. We owe duties to our clients (of course) but also to the public, opposing counsel and the courts. One essential duty is that of “candor”: we cannot lie to the court about facts or law. We cannot conceal authority we know to be controlling, though adverse to our client’s position. The role of lawyers is to zealously advocate for our clients—not to “win at all costs.” And this ethic is seen throughout civil litigation.
Pleadings (like complaints, answers, and briefs) must be signed by the attorney to indicate that they’re warranted by existing law (or that they present a nonfrivolous argument for a change in law), that the contentions have evidentiary support, and that they are being brought for a proper purpose. Lawyers can be sanctioned for violating these rules, which exist to make lawyers accountable to maintain the integrity of the practice of law. It’s easy to see that the rule of law would implode under the weight of nonsense if an amoral AI were unleashed on civil litigation.
Intention matters. A crucial protection in our practice area is the “litigation privilege.” It applies to communications in judicial proceedings, as well as prelitigation communication that is contemplated in good faith and under serious consideration. A typical eviction notice would obviously “coerce a tenant to vacate” (a species of tenant harassment under the Rent Ordinance), but the litigation privilege is the reason you can’t generally be sued for serving one. But the privilege isn’t absolute, as a prevailing defendant can sue for “malicious prosecution,” and the losing plaintiff can defend, claiming they relied on advice of counsel. The lawyer isn’t just populating pleadings with legal jargon but is also ensuring it effects a good faith purpose within the current legal landscape.
Language models are essentially auto-fill tools—spectacular ones, to be sure—that use past iterations of language to predict what a human would be likely to say next. They are retroactive instead of forward-thinking, and this matters in navigating the murkier areas of law. For instance, while a notice to pay rent or quit is privileged, the rent increase underlying it is not. (It doesn’t directly relate to litigation.) For decades, a retaliatory increase—one imposed to punish a tenant for exercising their rights—could give rise to liability. On the other hand, a Costa-Hawkins Rental Housing Act rent increase was ostensibly safe ground: even in a rent-controlled jurisdiction, the landlord had the right to set the offer price for deregulated housing, and a tenant who couldn’t afford the rate would need to find alternate housing.
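To put the “auto-fill” point in concrete terms before moving on: at bottom, a language model counts how often words follow other words and offers the likeliest continuation. A toy bigram predictor, my own miniature bearing no resemblance to production-scale models, captures the mechanic:

```python
# A toy next-word predictor: count which word follows which, then
# suggest the most frequent follower. Real models do this with neural
# networks over billions of words; the mechanic is the same in spirit.
from collections import Counter, defaultdict

corpus = ("the tenant pays the rent . the landlord serves the notice . "
          "the tenant disputes the notice .").split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(word):
    """Return the most frequent word seen after `word` (ties go to the
    follower encountered first in the corpus)."""
    return following[word].most_common(1)[0][0]

print(predict("the"))     # "tenant"
print(predict("tenant"))  # "pays"
```

Nothing in that mechanic looks forward to the legal consequences of what it writes; it only looks back at what has been written before.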
SFAA challenged a San Francisco ordinance penalizing an otherwise lawful, market-based increase if it “coerced” the tenant to vacate. Unfortunately, the Court of Appeal upheld this as a lawful exercise of the City’s ability to regulate the substantive grounds for eviction. While the law does not require a tenant to agree to a rent increase (the landlord can just serve it unilaterally), attorneys began encouraging a colloquy with the tenant, inviting input on market comparables to create trial exhibits in defense of a claim that the increase bullied them out. This sensibility is important for currently lawful (but arguably questionable) acts as well: as with so many issues in this industry, the profit motive must be tempered with best practices because our actions provoke regulatory change.
This year’s chatbots are surprisingly sophisticated. They are capable of output that closely resembles lawyer-generated content, and in many cases, their “work product” might be passable. But the product isn’t a goal in and of itself. In an industry where regulators continuously try to chill the exercise of housing providers’ rights, the “why” and “how” matter more than the “what.” The fact that language models can produce pleadings doesn’t mean that a lawyer ought to rely on them to do so (as a competent lawyer would have to retrace its steps anyway). Our industry must continue to be deliberate, thoughtful, and human if we are to remain effective.



