AI is Like Ozempic: How the Shame of Using AI in Law Practice Mirrors Anxieties About Drug-Aided Weight Loss
Where I explore why high-performing lawyers whisper about their AI use and what our technological shame reveals about authenticity
At a legal tech conference in San Francisco, I overheard a conversation between two lawyers: "Between you and me, I've cut my brief-writing time in half using Claude." The other nodded, then added with a conspiratorial whisper and a wink: "My senior partner just thinks I'm a quick editor."
It reminded me of another conversation, at a friend's birthday party in LA, where a software engineer confessed to using Ozempic, while insisting, "But I'm also doing Pilates!" These conversations carried the same undercurrent of shame, the same need to hedge, the same fear of being caught taking a “shortcut” to excellence.
Welcome to the age of performance enhancement anxiety, where our tools work perhaps too well for our comfort.
If this sounds interesting to you, please read on…
This substack, LawDroid Manifesto, is here to keep you in the loop about the intersection of AI and the law. Please share this article with your friends and colleagues and remember to tell me what you think in the comments below.
We're All Doing It, But Few Admit It
These experiences made me realize that, outside the AI bubble in which I, and many of you, my dear readers, live, a growing number of professionals feel conflicted, worrying not only that AI may take their jobs, but also that using it may risk their professional reputations.
Yet, Novo Nordisk's stock has more than tripled since 2021, and Clio's Legal Trends Report revealed that 79% of legal professionals are now using AI in some capacity for their legal work. But, in both cases, public admission remains complicated, especially for legal professionals. We're witnessing textbook cognitive dissonance: the uncomfortable tension between our actions and our stated values.
The traditional weight-loss camp champions the nobility of suffering: early morning runs, meal prep Sundays, the satisfaction of earned calluses. Similarly, the legal purists celebrate the monastic dedication of long nights in the office, the tactile pleasure of working a file, the intellectual satisfaction of crafting each argument from scratch. Both camps share a Protestant work ethic that Max Weber would immediately recognize, the belief that the struggle itself confers moral worth.
The Authenticity Trap
But here's where it gets interesting. The shame around both Ozempic and AI use reveals something deeper about our relationship with authenticity. French philosopher Jean Baudrillard warned us about the "hyperreal," simulations more real than reality itself. When a brief written with AI assistance is indistinguishable from (or superior to) one crafted through traditional means, which is more "authentic"?
The legal profession has always traded on a particular performance of expertise. The mahogany bookshelves, the Latin phrases, the theatrical confidence, these aren't just decorations but essential elements of the performance. AI threatens this performance not because it produces inferior work, but because it makes the performance too easy. It's like a magician whose tricks work too well; at some point, the audience starts to wonder if actual magic is involved.
The Efficiency Paradox
There's irony in lawyers, who bill by the hour, feeling guilty about tools that make them more efficient. It's as if we've internalized our own business model so thoroughly that suffering has become synonymous with value. We all look forward to a vacation, but are suspicious of technology that may afford us that luxury. This is what anthropologist David Graeber might have called a "bullshit job" tendency, the need to appear busy even when technology has eliminated the busy work.
Yet the early adopters already appear to be reaping rewards. A recent Thomson Reuters study found that lawyers using AI report higher job satisfaction and better work-life balance. They're not replacing legal reasoning with artificial intelligence; they're augmenting it. It's the difference between a chef who grows their own vegetables and one who sources the best ingredients to create extraordinary dishes. Both can produce excellence, but one has more time left over for creativity.
The Shame of Success
The most revealing aspect of both the Ozempic and AI phenomena is how success achieved through their use can become a source of shame. When a celebrity appears dramatically transformed, the immediate speculation isn't "good for them" but "what's their secret?" or “it must be Ozempic.” When a junior associate produces exceptionally polished work in record time, the whispered question is, "did they use ChatGPT?"
This shame serves nobody. As philosopher Martha Nussbaum argues, shame is fundamentally about hiding, rooted in the experience of being human and the fear of exposure, particularly of one's vulnerabilities and imperfections. But what if who we truly are includes being smart enough to use the best available tools to succeed? What if authenticity isn't about suffering but about achieving the best possible outcomes for our clients? What if, for once, we centered our clients and not ourselves?
Beyond the Binary
The real tragedy of the AI-Ozempic parallel is how it forces us into false binaries. You're either "natural" or "enhanced," "authentic" or "artificial," a "real lawyer" or an impostor who just used ChatGPT to take the “easy way out.” But these categories are as outdated as the belief that "real" writers must use typewriters, or that "serious" photographers can't shoot with digital cameras.
The future belongs to what we might call "augmented authenticity," professionals who combine human judgment with artificial intelligence, who see tools not as crutches but as extensions and amplifiers of their capabilities. Just as the best chefs use both molecular gastronomy and traditional techniques, the best lawyers will seamlessly blend AI assistance with legal expertise.
Closing Thoughts
Here's what I've come to realize: while we're having our little crisis of authenticity about AI, our clients are living in the real world. They want their legal work done well, done fast, and done at a price that doesn't require a second mortgage. They couldn't care less if we used AI to draft that motion any more than they care if their surgeon used a laser instead of a scalpel. They care that the job gets done.
The Ozempic comparison is instructive. Remember when using spell-check was considered lazy? Now sending an email without it would be, well, unprofessional. The shame cycle around new technologies is predictable: denial, secret adoption, grudging acceptance, then finally, “of course I use it, doesn't everyone?”
We're lawyers. We're supposed to be pragmatists, the ones who cut through the noise to get results. Yet here we are, letting misplaced nostalgia for bankers boxes full of discovery prevent us from using tools that demonstrably make us better at our jobs.
So maybe it's time we started treating AI like what it actually is: not a shortcut or a cheat code, but simply the next evolution of how legal work gets done. Because in a profession that bills itself on expertise and results, willfully ignoring our best tools isn't noble, it's malpractice.
If we focus on our clients’ needs, we won’t hesitate to use the best tools available. Just as Ozempic saves lives, legal AI tools can improve outcomes. This is how we advocate for change in our system.