Designing AI for Law[yers]
Where I share my experience as a guest in Professor Caitlin Moon's class at Vanderbilt Law
Welcome back to all my readers! ❤️ We’re going to change it up a little this week.
And to all of you newbies, yes, you’re in the right place! Come right this way. 👇
This substack, LawDroid Manifesto, is here to keep you in the loop about the intersection of AI and the law. Please share this article with your friends and colleagues and remember to tell me what you think in the comments below.
A couple of weeks ago I got the gift of going back to law school. Well, not my law school, but my friend’s law school in Nashville. And, I got to teach things based on my now (somewhat) lengthy experience. And, I got to visit a few friends too and see more of the city this time. And, I’d like to share my experience of it all with you and my class notes, well, that I took myself of myself, that is. And, in there, I’ll explore some ideas about designing AI and chatbots for lawyers and the law.
If this sounds like your cup of tea, read on…
A New Day: The Morning Rush
The hotel where I stayed was directly across the street from Vanderbilt University’s campus. I stopped by the closest Starbucks and got myself a large Pike with heavy cream and already the place was abuzz with students: in line, scrambling to class, working on laptops, and sharing morning gossip. “Tom, Pike.”
I grabbed my morning cup o’ joe and popped in my AirPods. Clairo’s hushed vocals layered over the controlled chaos. The vibe tickled familiar recesses of my memories. The energy gathered together and surged like a wave as I crossed the street with students to campus, then it crashed apart as the students broke into smaller groups aimed at the school of business, the law school and other locations unknown.
I found a concrete bench that wrapped around the corner of the law school. I sat there, drinking my coffee and taking in the trees, the sun filtering through their vaulted branches and leaves, and students shuffling to class. A family stopped to snap a picture of the law school, professors looped around the stairs to the faculty entrance, and I smiled with the sun on my face.
My Friend, the Legal Designer
My friend, Vanderbilt Law Professor, Director of Innovation Design and founding co-director of VAILL, the Vanderbilt AI + Law Lab, Caitlin (Cat) Moon, is an internationally recognized thought leader in human-centered design for law. Cat has shared her expertise at countless conferences; most recently, she keynoted at LawFest, New Zealand’s premier legal innovation & technology event, about “Designing Tomorrow: The future of law in the age of AI.” She has many more accomplishments and advisory roles than I can list here. But, I will add that she, I, and my other friend Patrick Palace, co-founded the American Legal Technology Awards together, where we get to recognize the best and brightest in legal innovation. It might embarrass her for me to say all of this, but she is, in a word: awesome.
I tell you this to say that I was thrilled when she invited me to be a guest in her class.
Designing AI for Law[yers]1
The class was comprised mostly of third year students, days away from graduation. Bright-eyed and bushy tailed, they were engaged and eager to learn. What follows are my expanded notes on what I shared with the class.
Paradigm Shift: Rule-Based to Learning Models
How one approaches designing the user experience (for lawyers or consumers) depends on what I’ll call the build paradigm.
Up until the advent of generative AI, chatbots were constructed exclusively using rule-based systems with if-then statements, conditional logic, and filters. Those systems are still very effective for many use cases and need not be replaced in the mad dash for all things GenAI. For example, document automation utilizing a guided interview, to capture a clearly-defined set of data, is rule-based and allows the builder to construct a user journey that is predictable and scalable. In building a rule-based interview, the builder can control the script. The builder can also control what follow up questions are asked, and the course of the conversation. In this way, the builder doesn’t have to worry that the chatbot will ask an unexpected question or that the user will go off on an unexpected tangent. A user can’t change the subject. A user also can’t tell their story in a natural and organic way. It’s a double-edged sword. On the one hand, a rule-based approach is organized and controlled. On the other hand, this organization and control can stifle a more human conversation. So, from the perspective of human-centered design, conditional logic-driven chatbot experiences give the author more control but ultimately a less than optimal experience for the user.
Designing chatbot conversations with the aid of generative AI requires a different approach. Rather than design the conversation step-by-step, adding different paths dependent upon user selections, the builder is free to design at a higher level of abstraction. One way to think of it is that designing the user experience on a rule-based system is like giving them the user a map and turn-by-turn directions whereas a generative approach is more like providing the user with a self-driving car, trained on the most efficient routes and equipped with GPS, that you just tell where you want it to go. With the learning paradigm-driven chatbot, the builder controls the chatbot instructions (for example, it’s role, goal, personality), the knowledge it has access to (a knowledge base created from scraped web pages or uploaded documents such as brochures, powerpoints, literature, etc.) and the tools it can use (for example, web search, accessing information from an external API, or something else). Once the chatbot is imbued with these skills, it can work to achieve its objective.
So the design challenge shifts perspectives, from micromanagement to macromanagement. And, where there’s freedom of action, reasonable constraints must be put in place to minimize mischief.
The Law of Unintended Consequences
Each system has its own shortcomings. With the freedom made possible by large language models, there are control issues.
A user may use the chatbot in an unintended way with unintended consequences. The chatbot may be designed to offer self-represented litigants information about their landlord-tenant matter, but a user asks the chatbot for its recommendation of the best pizzeria in New Haven.2 Lately, the news has been replete with stories of users attempting to jailbreak chatbots or use them in new and exciting ways.
Take, for example, a chatbot, released by GM to communicate with potential customers about their vehicles, was manipulated to agree to sell a Chevy Tahoe for $1.3 Chris White, the prankster who started it, explained his motivation: "I saw it was 'powered by ChatGPT,'" he told Business Insider. "So I wanted to see how general it was, and I asked the most non-Chevy-of-Watsonville question I could think of." And, despite the fact that GM’s terms of service ensured that that cockeyed deal would not be legally binding, the public fallout for GM was swift and ruthless.
Or, in Manhattan, a landlord asked the MyCity Chatbot whether he was required to accept section 8 vouchers and was erroneously informed that he did not have to.4 In New York City, it’s illegal for landlords to discriminate by source of income so the chatbot was essentially advising the landlord to break the law. The implications are interesting here, because the chatbot is government sanctioned, it could be argued that the landlord could defend himself against accusations of unlawful discrimination on the grounds that he relied on an opinion offered under color of authority.
Both bots utilized OpenAI’s ChatGPT technology. The former chatbot designed by a marketing company for GM dealerships. The latter created by an internal NYC team.
How to Tame the AI Beast?
Naturally, this chatbot chaos leads one to contemplation: how can we prevent this madness? And, I regret to say that it’s not really the “AI Beast” that is to blame here. It’s the prankster in us that’s creating the mischief. So the issue is less straightforward and more meta: How do we constrain AI’s response to limit the damage that we (human beings) are causing to ourselves?
Interestingly, we can use rules or training to accomplish this.
Rules take the form of “Don’t dos”: “Don’t do X,” “Don’t swear,” “Don’t give legal advice,” “Don’t offer terms on vehicles”… As you can see, the list of rules will ever-expand in response to the infinitely clever ways that humans can think of to give the unwitting chatbot a hard time. This approach may work effectively in concert with efforts to curtail the chatbot’s range of freedom: not allowing it to rely on its generalized training, for instance, and tying it to a source of truth using retrieval augmented generation and restrictive prompting.
Training may also be used as a preventative measure. A moderation API could be called prior to the generation of any output to prevent abuse. The moderation API would consist of a fine-tuned model trained on instances of abuse and edge cases that we want to disallow. In practice, the user would submit a request (like “use profanity”) and the request would be screened by the API and rejected with an explanation. This method may succeed where rule-based restrictions fail, but it would remain limited to its training.
A combination of both rule-based restrictions and training may be necessary to effectively constrain AI's responses and mitigate the potential for misuse.
Closing Thoughts
As legal designers and builders venture into the realm of Generative AI, it is crucial to recognize that traditional approaches to design may not suffice in addressing the unique challenges presented by this technology. While rule-based systems offer a high degree of control and predictability, they often fall short in delivering a truly human-centered experience. Conversely, generative AI models provide greater freedom and flexibility, enabling more natural and organic conversations, but at the cost of increased uncertainty and potential for misuse.
In light of these challenges, legal designers must adopt a more holistic and adaptive approach when working with Generative AI. Rather than focusing on micromanaging every aspect of the conversation, designers should prioritize establishing clear objectives, curating relevant knowledge bases, and implementing reasonable constraints to minimize unintended consequences. This shift in perspective, from micromanagement to macromanagement, requires a delicate balance between empowering the AI to assist users effectively and ensuring that it operates within the boundaries of its intended purpose.
Furthermore, as the examples of the GM and MyCity chatbots demonstrate, the potential for misuse and manipulation of AI systems is ever-present. Legal designers must proactively anticipate and address these risks by employing a combination of rule-based restrictions and advanced training techniques, such as moderation APIs and retrieval augmented generation. By doing so, designers can work to mitigate the damage caused by human mischief and ensure that AI-powered legal tools remain reliable and trustworthy.
Ultimately, the advent of Generative AI in the legal domain presents both opportunities and challenges. As legal designers navigate this uncharted territory, they must remain vigilant, adaptable, and committed to the principles of human-centered design. By embracing a more holistic approach, implementing reasonable constraints, and proactively addressing potential misuse, designers can harness the power of Generative AI to create innovative and effective legal solutions that truly serve the needs of their users.
If you liked this article, I invite you to join the LawDroid Community, a new, exclusive platform designed for pioneers at the intersection of law and AI technology. This is more than just a community; it's a vibrant ecosystem of legal professionals, technologists, and AI enthusiasts who are reshaping the future of legal services.
Joining the LawDroid Community means being part of a select group committed to driving the future of law with AI. Whether you're a seasoned legal tech expert or a legal professional keen on exploring AI's potential, you'll find invaluable connections, insights, and opportunities here.
👉 Interested? Follow this link to apply: https://forms.gle/pdfVbdyef8P2bX189
Professor Moon coined the name of our session.
Sally’s or Pepe’s? That is the question. For a deep dive, watch this.
The full story of how this happened is reported here: A car dealership added an AI chatbot to its site. Then all hell broke loose, Business Insider, December 18, 2023, https://www.businessinsider.com/car-dealership-chevrolet-chatbot-chatgpt-pranks-chevy-2023-12. Suffice it to say that it’s a cautionary tale of how not to design an AI chatbot experience.
NYC AI Chatbot Touted by Adams Tells Businesses to Break the Law, The City, March 29, 2024, https://www.thecity.nyc/2024/03/29/ai-chat-false-information-small-business/