5 Comments

"However, it has also led to challenges, including ethical concerns over AI “hallucinations,” algorithmic bias, data privacy issues, and the potential erosion of traditional legal roles."

Would be awesome to hear your thoughts on each of these elements that are impeding adoption in the legal profession.

Some I have written about, others for a future occasion. Good reason to come back for future content! :)

Interesting that you talk about 'perceptions' that others are using or are going to use GenAI. Most lawyers I speak to are experimenting with AI tools integrated into practice management systems, but they are not raving about them, nor are they particularly complimentary about legal research tools. I wonder if this is peer pressure rather than social proof? I have to say that while I use GenAI regularly, it's less for my legal work and more for presentations, articles, thinking out loud, etc. My legal research is very niche, and so far I find the results from GenAI tools embarrassingly bad. I couldn't imagine ever being able to delegate anything to it.

Yes, exactly. By definition, social proof is a psychological phenomenon where people assume the actions of others in an attempt to reflect correct behavior for a given situation. In essence, it's the notion that, since others are doing it, I should be doing it, too. Social proof = peer pressure. Could you delegate answering phone calls to it?

Is your GenAI tool being fed custom data sets related to your research field? If not, then that's probably the biggest challenge shaping your perception. You may develop more trust in LLMs' responses and capabilities if you combine YOUR data with a waterfall enrichment/accuracy process that draws on multiple redundant LLMs, each serving as a fact-checker for the others. Maybe.
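
For what it's worth, here is a rough sketch of what that redundant fact-checking step could look like. Everything below is hypothetical: the ask_model_* functions stand in for whatever model calls you actually have access to, and the agreement threshold is just one simple way to decide when to trust an answer; a real setup would also pull in your own data sets before querying.

```python
# Minimal sketch of the "redundant LLMs as fact-checkers" idea described above.
# The ask_model_* functions are hypothetical placeholders, not any particular
# vendor's API; swap in real model calls for your own setup.

from collections import Counter


def ask_model_a(question: str) -> str:
    return "Answer from model A"  # placeholder response


def ask_model_b(question: str) -> str:
    return "Answer from model B"  # placeholder response


def ask_model_c(question: str) -> str:
    return "Answer from model A"  # placeholder response (happens to agree with A)


MODELS = [ask_model_a, ask_model_b, ask_model_c]


def cross_checked_answer(question: str, min_agreement: int = 2) -> str | None:
    """Query every model, then keep an answer only if enough models agree.

    No single model's output is trusted on its own; the others act as a
    rough fact-check. Returning None means: escalate to a human.
    """
    answers = [model(question) for model in MODELS]
    best, count = Counter(answers).most_common(1)[0]
    return best if count >= min_agreement else None


if __name__ == "__main__":
    result = cross_checked_answer("What is the limitation period for X?")
    print(result or "No consensus -- needs human review")
```

The point is only that disagreement between models becomes an explicit signal to escalate, rather than something the user has to notice on their own.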
