
The Experiential Teacher: David Colarusso

Where I interview David Colarusso, co-director of Suffolk Law's LIT Lab, about preparing the next generation of lawyers to think critically about AI

Hey there Legal Rebels! 👋 I’m excited to share with you the 67th episode of the LawDroid Manifesto podcast, where I continue to interview key legal innovators to learn how they do what they do. I think you’re going to enjoy this one!

If you want to understand how experiential learning can transform the way lawyers think about AI, and why automation bias is one of the most urgent issues facing today’s legal profession, you need to listen to this episode. David is at the forefront of legal technology education and brings a uniquely hands-on, human-centered perspective to how we prepare lawyers for an AI-driven world.

LawDroid Manifesto is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.

Shaping the Next Generation of AI-Aware Lawyers Through Experiential Learning

Join me as I interview David Colarusso, Practitioner in Residence and co-director of the Legal Innovation and Technology Lab (LIT Lab) at Suffolk University Law School.

In this insightful podcast episode, David shares his remarkable journey from growing up near White Sands in New Mexico, to designing a custom interdisciplinary major at Cornell, to teaching physics with six million YouTube views, to becoming a public defender, data scientist, and now one of legal education’s most inventive minds. He dives deep into how he uses experiential learning, including a brilliantly designed classroom simulation, to teach law students about automation bias and the hidden dangers of over-relying on AI tools.

His stories and insights underscore the critical importance of helping lawyers understand not just how to use AI, but when to question it. This episode is a must-watch for anyone curious about legal education, access to justice, and the human skills that will define the lawyers of tomorrow.

The Skinny

David Colarusso, Practitioner in Residence and co-director of the LIT Lab at Suffolk University Law School, brings one of the most unconventional and intellectually rich backgrounds in legal tech education. A self-described perpetual learner who went from “nature’s laws to people’s laws,” David designed his own major at Cornell combining physics and film, earned a master’s from Harvard’s Graduate School of Education, built physics explainer videos that drew over six million YouTube views, and became a Fulbright exchange teacher in Scotland before ultimately going to law school and serving as a public defender in Massachusetts. It was during his time at the Public Defender’s Office, where a massive drug lab scandal forced his entire office to manually search paper files for a chemist’s signature, that David recognized how technology could transform the way the legal system handles data. That insight led him to become the office’s data scientist, then to an adjunct role at Suffolk, and eventually to his current position leading the LIT Lab. In this episode, David walks us through a vivid classroom experiment in which he tricked law students into experiencing automation bias firsthand, revealing how quickly humans learn to over-trust AI tools, and how dangerous that trust can become when those tools fail in ways users don’t anticipate.

Key Takeaways:

  • Automation bias, the tendency to over-rely on AI decision tools, is a real and measurable phenomenon that affects even attentive, well-intentioned users, as David’s classroom simulation powerfully demonstrated

  • David designed a doc review simulation where students worked with an AI tool that was 100% accurate in early rounds, causing them to stop double-checking, and then performed significantly worse when the tool’s accuracy dropped in later rounds

  • Experiential learning is David’s pedagogical superpower: rather than lecturing students about AI risks, he creates simulations that allow students to discover those risks themselves

  • David’s path from physics teacher to public defender to data scientist to law professor illustrates the value of interdisciplinary thinking in addressing legal technology challenges

  • The Massachusetts drug lab scandal, in which chemist Annie Dookhan falsified thousands of drug tests, became a formative moment that revealed how the legal system’s data practices were woefully inadequate

  • Suffolk’s LIT Lab focuses on public interest law and access to justice, building tools that serve real people facing real legal challenges

  • David draws a sharp distinction between “access to justice” and “access to an attorney”; technology can expand legal access in ways that don’t always require a lawyer

  • Students at the LIT Lab build tools that go out into the world and serve real people, meaning their education creates direct public benefit

  • The future of legal employment isn’t just about big law; the majority of attorneys in the U.S. work in smaller, solo, or public interest settings, and those contexts may benefit most from AI

  • Work-life balance for David means choosing an intentional life: watercolor painting, poetry, walks, and a deep commitment to the examined life, values he absorbed from living abroad in Scotland

Notable Quotes:

  1. “I sort of walked in. I was like, you don’t really know who I am... I’m going to ask you first off to just sit down and we’re going to spend 20 minutes doing doc review.” - David Colarusso (00:03:09-00:03:18)

  2. “Almost all of them fell victim to something. It’s called automation bias. And so this was a way that I was able to sort of teach them about that, but without having to just say it to them.” - David Colarusso (00:04:48-00:04:55)

  3. “They learned to trust the tool, and then the regime in which the tool performed the way they thought it did shifted. And they did not reevaluate their assessment of the tool. And so became over-reliant on it to a detriment to their accuracy.” - David Colarusso (00:08:23-00:08:37)

  4. “The problem with a lot of these tools is they sort of upset our traditional signals for what represents quality.” - David Colarusso (00:11:02-00:11:08)

  5. “I thought I was simplifying my life... And so that ended up with me becoming the data scientist at the Public Defenders here in Massachusetts.” - David Colarusso (00:32:11-00:32:25)

  6. “I don’t want to make the mistake of thinking that access to justice means access to an attorney. In fact, some of the tools we make recognize that that’s not always the case.” - David Colarusso (00:34:57-00:35:05)

  7. “I went from nature’s laws to people’s laws. And one of the things that’s nice about people’s laws is unlike nature’s laws... our laws are not set in stone. They’re not the weather. We can do something about them. They’re aspirational.” - David Colarusso (00:39:27-00:39:46)

  8. “It mostly is my students, right? I’ve been a teacher in many different guises, and being able to interact with people and help them to sort of lead that examined life and to question what it is they want to do and how it is they want to look at the world and give them a tool set they can use when they go out into the world to make the impact they want to make.” - David Colarusso (00:38:34-00:38:55)

Clips

I Built My Own Major


The “Jagged Edge” of AI


You Aren’t Immune to AI Bias

Growing Up With Missile Tests

David’s classroom simulation on automation bias is more than a clever teaching exercise: it is a microcosm of one of the most pressing challenges facing the legal profession today. When students learned to trust an AI tool that was perfectly reliable, they stopped verifying its outputs. When that tool then failed in subtle ways, they performed worse than they would have without any AI assistance at all. This pattern of trust, reliance, reduced vigilance, and degraded performance is not unique to law students. It is the default human response to any decision-support tool, and it will play out in courtrooms, law offices, and regulatory agencies unless lawyers are deliberately trained to guard against it.
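The trust-then-failure dynamic described above can be sketched as a toy model. To be clear, this is not David's actual experiment; the numbers (a reviewer who is 90% accurate alone, an AI that is perfect for five rounds and then drops to 70%, and a simple trust-update rule) are illustrative assumptions chosen to show how over-reliance degrades accuracy:

```python
def run_simulation(ai_accuracy_per_round, human_acc=0.9, trust0=0.5, lr=0.5):
    """Expected reviewer accuracy under a toy automation-bias model.

    Each round, the reviewer defers to the AI with probability `trust`
    and otherwise double-checks at their own accuracy. Trust then moves
    toward the AI's observed performance that round.
    """
    trust = trust0
    accuracies = []
    for p_ai in ai_accuracy_per_round:
        # Blend of deferring to the AI and checking independently.
        acc = trust * p_ai + (1 - trust) * human_acc
        accuracies.append(acc)
        # Trust drifts toward the AI's performance this round.
        trust = (1 - lr) * trust + lr * p_ai
    return accuracies, trust

# AI is perfect early, then its accuracy quietly drops.
rounds = [1.0] * 5 + [0.7] * 5
accs, final_trust = run_simulation(rounds)
```

In this sketch, early rounds keep expected accuracy at or above the reviewer's unaided 90%, but once trust has climbed, the first degraded round drops expected accuracy to roughly 70%, well below what the reviewer would achieve by always checking.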

What makes David’s approach so compelling is that he doesn’t just warn students about these risks: he lets them live them. The pedagogical philosophy behind the LIT Lab is grounded in the idea that experiential learning changes people in ways that lectures simply cannot. When students build tools that go out into the world and serve real clients, they develop not just technical competence but moral accountability. That combination, technical literacy plus human judgment, is exactly what the legal profession needs most right now.

Closing Thoughts

Conversations like this one remind me of why I started LawDroid in the first place. David Colarusso represents something rare in legal education: a true polymath who has lived across multiple worlds (physics, film, public defense, data science, and now law school teaching) and brought the lessons of each one into his work with students. His automation bias simulation is one of the most elegant pieces of legal pedagogy I’ve come across, because it doesn’t just teach a concept. It makes students feel it.

What strikes me most is David’s underlying conviction that the law is aspirational. Unlike the laws of physics, which simply describe a universe we didn’t choose, our legal system is something we actively construct and can change. That perspective, hopeful, humanistic, and grounded in real public service, is exactly the kind of energy we need shaping the next generation of legal technologists.

For our Legal Rebels community, the lesson here is not to fear AI, but to approach it with clear eyes. Understanding where your tools work well, where they struggle, and when to question their outputs is not just a technical skill; it’s a professional one. David is helping law students develop that skill before they step into practice. The rest of us need to keep developing it too.
