Generative AI Ethics: A Practical Guide to ABA Formal Opinion 512
Where I explore the Ethics of AI and its implications for lawyers using Generative AI
Welcome back, my law-loving luminaries! You're the precedent-setting stars in my legal universe. ⚖️✨
And for you newcomers, order in the court – we've got a comfy spot at the bench just waiting for you to drop some knowledge bombs! 🧑‍⚖️💥
This substack, LawDroid Manifesto, is here to keep you in the loop about the intersection of AI and the law. Please share this article with your friends and colleagues and remember to tell me what you think in the comments below.
Alrighty, legal eagles, buckle up your briefcases because we're about to dive headfirst into the wild west of AI ethics in law! 🤠💼
Picture this: You're burning the midnight oil on a case when suddenly, your trusty AI sidekick offers to whip up a killer brief faster than you can say "objection!" Sounds like a dream, right? Well, hold your virtual horses, counselor! The ABA has just dropped its hot-off-the-press Formal Opinion 512, and it's serving up more drama than a courtroom thriller. We're talking ethical dilemmas, confidentiality conundrums, and billing brain-teasers that'll make your head spin faster than a judge's gavel. So, if you want to stay ahead of the curve (and avoid an ethical face-plant), strap in and read on. Trust me, by the end of this article, you'll be dropping AI ethics knowledge bombs that'll make even ChatGPT jealous!
If this sounds interesting to you (and you’d prefer to avoid an ethics complaint), please read on…
— Caveat —
The guidance I provide in this article of course relies on ABA Formal Opinion 512 and the ABA Model Rules of Professional Conduct. I understand that these rules and opinion are advisory only and that we lawyers are not bound by them. Certainly, you must comply with your local jurisdiction’s ethics rules and opinions, which may vary in focus, detail, or recommendations, or all three. Rely on what your local jurisdiction requires of you. It is my intent to be helpful in providing you with this guidance and sample disclosures and policies, but it is not to be relied on as legal advice. Please edit and revise them as appropriate for your jurisdiction.
Alrighty, did I prove through this disclaimer that, although I am a legal tech founder, I still take my ethical duties as a counselor at law seriously?
Well, all the more reason for us to continue; let’s begin.
Taking Stock
Before we get into breaking down Formal Opinion 512, I want to take stock of why it was necessary.
Since the popular introduction of Generative AI (GenAI) in November 2022 with OpenAI’s ChatGPT, we’ve learned a lot about its nature, limitations and the good things that it can do. We have further been exposed to additional tools beyond ChatGPT, like Claude, Gemini, Perplexity, Cohere and many others. GenAI, which can create text, images, and even legal documents based on simple prompts, has quickly moved from being a futuristic concept to a daily reality in law practices. But with innovation comes the need for regulation and ethical guidance—something the American Bar Association (ABA) recognized when it issued Formal Opinion 512.
This opinion wasn't just timely; it was necessary. Lawyers are increasingly incorporating GenAI into their practice, whether for drafting, research, or client communications. Yet, as powerful as these tools are, they come with significant ethical implications. The ABA needed to step in to provide a clear framework for the responsible use of GenAI, ensuring that lawyers can leverage these technologies without compromising their professional obligations.
What Are the Risks of GenAI?
Generative AI tools are undeniably powerful, but they are not without their risks. First and foremost, there's the risk of inaccuracy. As demonstrated by the Stanford study[1], GenAI tools can produce outputs that seem plausible but are factually incorrect or legally unsound. This phenomenon, known as "hallucination," can lead to misleading legal arguments or erroneous conclusions if not carefully checked.
Another major risk is confidentiality. GenAI tools often require inputting sensitive client information, raising concerns about data privacy and the potential for unauthorized access or disclosure. The technology’s self-learning[2] nature means that inputted data could be inadvertently exposed or used inappropriately, creating a minefield of ethical dilemmas.
Moreover, there's the risk of over-reliance. While GenAI tools can enhance efficiency, they can also lead to a dangerous abdication of the lawyer’s critical judgment.[3] Lawyers must remember that these tools are aids, not substitutes for their legal expertise and professional responsibility. There's a risk of providing incompetent representation if lawyers use GenAI tools without sufficient knowledge of their capabilities and limitations.
Finally, there is the risk (perhaps unavoidable) posed by the rapid pace of technological change. Keeping up with the latest developments and capabilities of GenAI and other technological advances can become a job in and of itself. It’s not as though what you learn about today will be what you need to know tomorrow. Yet, as lawyers, our duty of technological competence[4] requires us to stay up to date with current technology and its ethical implications.
What Are the Benefits of GenAI?
Despite these risks, GenAI tools offer a range of benefits that can significantly enhance legal practice. The most obvious is efficiency. Tasks that once took hours—like document review, legal research, and contract analysis—can now be completed in minutes, freeing up time for lawyers to focus on higher-level strategic work.
GenAI also offers the potential for cost savings. By automating routine tasks, these tools can reduce billable hours, making legal services more affordable for clients while allowing firms to take on more work without sacrificing quality.
Furthermore, GenAI can improve the quality of legal services. These tools can help identify patterns in large datasets, predict legal outcomes based on historical data, and ensure consistency in legal documents. When used correctly, GenAI can enhance both the depth and breadth of legal analysis.
GenAI can help lawyers scale their efforts by leveraging its ability to handle large volumes of documents. Certainly, this can help with contract analytics and due diligence for large corporate transactions. But GenAI can also help solo and small firms even the playing field with larger law firms, making sharp litigation practices, like burying opposing counsel in a deluge of discovery requests, less effective.
Finally, all of GenAI’s benefits combine to throw open the doors for underserved communities to get the legal help they need.
Overview of ABA Formal Opinion 512
On July 29, 2024, the ABA issued Formal Opinion 512. It provides a comprehensive roadmap for navigating the ethical use of GenAI in legal practice.
It underscores the importance of maintaining competence in the use of these tools, advising lawyers to stay informed about GenAI’s capabilities and limitations. Notably, it acknowledges that lawyers need not become GenAI experts. Rather, lawyers must have a reasonable understanding of the capabilities and limitations of the specific GenAI tools they use.
The opinion stresses the duty to protect client confidentiality when using GenAI and what it terms “self-learning” GenAI[5], highlighting the need for rigorous risk assessments and, in some cases, informed client consent. At minimum, lawyers should read and understand the Terms of Use, privacy policy, and related contractual terms of any GenAI tool they intend to use or consult with someone who has.
It also emphasizes the importance of verification—lawyers must not blindly trust GenAI outputs but should thoroughly review and validate the content generated by these tools. Nonetheless, the opinion recognizes that as GenAI tools continue to develop and become more widely available, “it is conceivable that lawyers will eventually have to use them to competently complete certain tasks for clients.”
Managerial attorneys must make reasonable efforts to supervise their associates and unlicensed support staff and ensure they comply with their professional obligations when using GenAI tools. Training could include the basics of GenAI technology, the capabilities and limitations of the tools, ethical issues in the use of GenAI, and best practices for secure data handling, privacy, and confidentiality.
Furthermore, the opinion discusses the implications for billing and fees, urging transparency when charging clients for GenAI-related services and ensuring that such fees are reasonable, especially in light of the time savings effected by GenAI.
Finally, the opinion reiterates a lawyer’s duty of candor toward the court and the duty not to pursue frivolous claims or contentions, referencing well-documented instances of lawyers citing nonexistent opinions and making misleading arguments.
What the Opinion Fails to Address
While ABA Formal Opinion 512 is comprehensive, it does miss a few key areas. Notably, it doesn’t fully explore the impact of GenAI on access to justice. These tools have the potential to make legal services more accessible to underserved communities, but the opinion doesn’t address how this can be ethically managed.
The opinion also glosses over the issue of bias in AI models. GenAI tools are only as good as the data they’re trained on, and biased data can lead to biased outputs—something that can have serious ethical implications in the legal field.
Additionally, there’s little discussion on the long-term implications of integrating GenAI into legal practice. How will these tools reshape the legal profession? What new ethical challenges might arise as GenAI technology continues to evolve? These are questions that remain unanswered.
While the opinion addresses lawyers’ duty to communicate with clients (insofar as disclosing GenAI use), it does not discuss how GenAI may improve communications for clients. For example, clients will be able to leverage law firm chatbots / automated legal information systems to get immediate access to the information they need. How might this impact the human elements of legal practice, such as empathy and emotional intelligence in client relations?
The opinion focuses on lawyers' use of GenAI, but it doesn't explore the ethical implications of judges or courts using these tools. What of courts' use of GenAI for decision-making? For example, Judge Kevin Newsom of the 11th U.S. Circuit Court of Appeals at Atlanta revealed his innovative approach to legal research, using the AI tool ChatGPT to understand the ordinary meaning of “landscaping.”[6]
Finally, GenAI is not a monolith. There are many private and open-source GenAI models, and their number is multiplying. The output we get today will not be what we get tomorrow, or even minutes later. The opinion doesn't fully address how lawyers should handle the fact that GenAI models are continuously updated, potentially changing their outputs over time, or the ethical implications of those changes.
Guidance
The ABA's Formal Opinion 512 is a roadmap for navigating the GenAI landscape without losing your way (or your license). Let's break down the key principles:
Stay Competent: Keep Sharpening Your AI Acumen
Continuously educate yourself on the latest GenAI tools
Attend workshops, webinars, and conferences to stay ahead of the curve
Consult with AI experts and tech-savvy colleagues to deepen your understanding and knowledge
Remember: competence isn't a destination; it's a lifelong journey!
Protect Confidentiality: Treat Client Data Like It's Your Own
Conduct thorough risk assessments before inputting any client info
Read those Terms of Service and Privacy Policies like your life depends on it (because your professional life kind of does)
Obtain client consent when necessary, and make sure they understand the risks and benefits of the tool
Implement strict access controls and data security measures
Verify Outputs: Trust, But Verify (Then Verify Again)
Never rely solely on GenAI-generated content without a human double-check
Develop a verification process to ensure accuracy, reliability, and relevance
Cross-reference GenAI outputs with authoritative sources and your own legal expertise
Remember: AI is a collaboration tool, not a substitute for lawyering!
Be Transparent: Shine a Light on Your GenAI Use
Clearly communicate with clients about how you're using GenAI tools
Explain the benefits, risks, and limitations of GenAI in plain English
Disclose any fees or costs associated with GenAI use, and make sure they're fair and reasonable in light of efficiency gains
Foster an open dialogue with clients about their questions, concerns, and preferences regarding GenAI
By following these principles, you'll be well on your way to harnessing the power of GenAI without compromising your ethics or your client's trust. But remember, this is just the starting point. As the technology evolves, so too must our ethical frameworks and best practices.
Implementation
Implementing the ABA's guidance in your law office is like building a house: you need a solid foundation, a clear blueprint, and a skilled team.
First things first, let's talk about training and education. You need to regularly schedule training sessions for all your staff, to ensure everyone understands both the capabilities and the risks of these shiny new toys. And don't just make it a one-and-done affair; keep those training sessions coming because the GenAI landscape is evolving faster than you can say “Got it.”
Next up, risk assessment protocols. Before you even think about plugging client info into a GenAI tool, you need to have a standardized risk assessment form. This isn't just a box to check; it's a practice that must be performed with the utmost care and attention. Think of it like the legal equivalent of a pre-flight safety check: you don't want to be mid-air when you realize you forgot to secure the cabin doors!
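To make that concrete, here's a minimal sketch of the kinds of fields a standardized risk assessment record might capture. It's written in Python purely for illustration (the tool name, vendor, and field names are my own hypotheticals, not anything prescribed by Opinion 512); the same questions work just as well on a paper form or in a spreadsheet:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class GenAIRiskAssessment:
    """One record per GenAI tool, completed before any client information is entered."""
    tool_name: str                   # e.g., a chatbot, research tool, or drafting assistant
    vendor: str
    reviewed_terms_of_use: bool      # has someone read the Terms of Use and privacy policy?
    trains_on_user_inputs: bool      # does the vendor use prompts to train its models?
    data_retention_period: str       # how long the vendor keeps prompts and outputs
    client_data_permitted: bool      # may client-identifying information be entered at all?
    client_consent_required: bool    # if so, is informed client consent needed first?
    access_controls: str             # who at the firm may use the tool, and for which tasks
    assessed_by: str
    assessment_date: date = field(default_factory=date.today)
    notes: str = ""

# Hypothetical example: a record for a tool the firm is evaluating
assessment = GenAIRiskAssessment(
    tool_name="ExampleGPT",
    vendor="Example AI, Inc.",
    reviewed_terms_of_use=True,
    trains_on_user_inputs=False,
    data_retention_period="30 days",
    client_data_permitted=False,
    client_consent_required=True,
    access_controls="Attorneys only; drafting and research tasks",
    assessed_by="Managing Partner",
    notes="Re-assess when the vendor updates its privacy policy.",
)
print(assessment)
```

The point isn't the code; it's that the same handful of questions gets asked, and answered in writing, every time a new tool enters the office.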
Now, let's talk about client communication. Your engagement letters are about to get a makeover. You'll need to update them to include language about your GenAI use, written in plain English. And while you're at it, establish clear procedures for obtaining informed consent. This isn't just a matter of legal CYA; it's about building trust and transparency with your clients. They deserve to know exactly how you're using GenAI in their case, and what the potential risks and benefits are.
Of course, even the most thorough risk assessment and client communication can't replace human review. That's where verification procedures come in. You'll need to create a checklist for reviewing GenAI-generated content, ensuring that every output is scrutinized. No stone should be left unturned, no comma left unchecked. Your reputation (and your license) depends on it!
Finally, it's time to codify all these practices into firm-wide policies. This is where the rubber meets the road, folks. Your policies should cover everything from confidentiality to billing practices, and everything in between. Think of them as your law firm's constitution. And just like the real Constitution, they should be living, breathing documents that evolve with the times. Don't be afraid to amend them as needed, because if there's one thing we know about GenAI, it's that change is constant.
Implementing these guidelines may seem like a daunting task, but trust me, it's worth it. By putting in the hard work now, you're not just protecting your firm; you're positioning yourself as a leader in the brave new world of legal AI. So roll up your sleeves, grab your toolkit (see more about that below), and let's get building!
Closing Thoughts
The GenAI revolution in law isn't just knocking at our door—it's already made itself comfortable in our living room, raided our fridge, and started flipping through our case files. The ABA's Opinion 512 is our much-needed rulebook for taming this wild new houseguest. It's not about putting GenAI back in the box; it's about learning to coexist, collaborate, and thrive alongside each other.
As a legal AI company founder, I'm thrilled by the transformative potential of GenAI. But I'm also acutely aware of the challenges and responsibilities that come with wielding such a powerful tool. We can't just toss our shiny new AI toys to lawyers and say, "Have at it!" Ethics, accountability, and good old-fashioned human judgment must be baked into every step of the process.
The key takeaway from Opinion 512? Use GenAI, but use it wisely. Stay informed, set boundaries, communicate openly, and never, ever outsource your professional duties to a language model (no matter how charming it may be). GenAI is a tool, not a replacement for legal expertise. It's here to enhance our work, not absolve us of our responsibility as licensed professionals.
But let's not forget the exhilarating upside! GenAI can revolutionize access to justice, streamline tedious tasks, and unlock new realms of insight and creativity. Imagine a world where underserved communities can get high-quality legal advice at the click of a button, where lawyers can focus on strategy and empathy rather than drowning in paperwork, and where AI-powered analysis uncovers patterns and precedents that no human eye could spot. That's the promise of GenAI, and it's ours to fulfill.
The GenAI genie is out of the bottle, and it's not going back in. But with the ABA's guidance lighting our path, I'm confident we can harness its magic to create a legal system that's more accessible, more efficient, and more equitable for all. So let's get to work, shall we? The future won't build itself!
Stay curious, stay ethical, and never stop innovating!
If you are dissatisfied with its guidance, share why in the comments below. Thanks!
Fellow legal innovators, I know you're as pumped about the potential of GenAI as I am. That's why I've crafted the ultimate resource to help you and your law firm harness the power of GenAI, ethically.
Introducing the Generative AI Ethics Toolkit for Law Firms! ✨
This comprehensive package is your secret weapon for staying ahead of the curve and keeping your practice on the right side of the ethical line. Here's what's inside:
Video Overview of Generative AI Ethics Issues: Get up to speed on the key ethical considerations, with guidance from yours truly.
Roadmap for Creating, Adopting, and Maintaining a Law Firm AI Policy: Step-by-step guidance for crafting a rock-solid AI policy.
Sample Law Firm Generative AI Policy: No need to start from scratch! Customize this template to fit your firm's unique needs and values.
Sample Generative AI Disclosures: Transparency is key! Use these ready-made disclosures to keep your clients informed and your communications crystal clear.
Generative AI Ethics Implementation Checklist: From training to risk assessment, this checklist ensures you don't miss a beat as you integrate GenAI into your practice.
Generative AI Vendor Due Diligence Checklist: Not all GenAI providers are created equal. Use this checklist to vet vendors and choose the best fit for your firm.
BONUS! Generative AI Output Validation Checklist: Trust, but verify! This essential checklist guides you through the process of validating GenAI outputs, ensuring accuracy and reliability every step of the way.
But here's the best part: every resource in this toolkit is fully downloadable and customizable. That means you can tailor each element to your firm's specific needs, branding, and voice. No generic, one-size-fits-all solutions here!
If you're ready to take your GenAI game to the next level, I want to hear from you!
Leave a comment below, and I'll make sure you're first in line when this game-changing toolkit drops.
Trust me, you won't want to miss out on this legal tech gold! 🌟
1. For an in-depth exploration of the topic of hallucinations, read my article: https://www.lawdroidmanifesto.com/p/hallucinations-what-are-they-why
2. Self-learning refers to the AI provider using user input to train its large language model. This process is not automatic or in real time. For example, ChatGPT users have the option to opt out of their input being used for training. Also, data that is captured is not instantaneously used to train the model, although some data is captured to personalize your experience of using ChatGPT. For more information on personalization: https://help.openai.com/en/articles/8096356-custom-instructions-for-chatgpt
3. “Studies highlight that although AI tools can aid decision-making and improve efficiency, they often lead to reduced critical and analytical thinking skills, especially when students become overly dependent on AI-generated content.” Zhai, C., Wibowo, S. & Li, L.D. The effects of over-reliance on AI dialogue systems on students' cognitive abilities: a systematic review. Smart Learn. Environ. 11, 28 (2024). https://slejournal.springeropen.com/articles/10.1186/s40561-024-00316-7
4. Comment 8 to ABA Model Rule of Professional Conduct 1.1 provides: “To maintain the requisite knowledge and skill, a lawyer should keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology, engage in continuing study and education and comply with all continuing legal education requirements to which the lawyer is subject.” Many states have adopted or are considering equivalent language.
5. See Note 2 above.
6. Debra Weiss, In concurrence confession, appeals judge says ChatGPT research 'less nutty' than feared, ABA Journal, June 6, 2024. https://www.abajournal.com/news/article/appeals-judge-makes-a-confession-he-consulted-chatgpt-and-found-the-results-less-nutty-than-i-feared