Decoding AI Hype: The Hidden Dangers of Disengagement and Uncritical Enthusiasm
Where I discuss how not all hype is the same and, in some cases, it is entirely justified
Thanks again to all my new readers! Happy to have you here. 😁
This substack, LawDroid Manifesto, is here to keep you in the loop about the intersection of AI and the law. Please share this article with your friends and colleagues and remember to tell me what you think in the comments below.
I want to explore a few things that have been on my mind lately about how we engage with AI in the legal industry. On one side of the spectrum, people have naturally heard too much over the past year about AI and feel the urge to disengage entirely. On the other end, there is a strain of AI optimism that borders on the extreme.
If this sounds interesting at all to you, read on…
Had Enough AI Hype?
HYPE (n)
1: DECEPTION, PUT-ON
2: PUBLICITY; especially: promotional publicity of an extravagant or contrived kind
At a recent legal technology conference, a remark was made about AI hype.
This is a recurring theme that arises not just at legal tech conferences but, it seems, in all discussions of AI (be they legaltech or otherwise, on LinkedIn, Twitter, or just water cooler talk). The context of the comment was that the hype about AI is persistent and overwhelming; therefore, it must be overblown and should be ignored or, worse, spur active resistance.
There are other forms of this dismissive behavior, for example, the comment that AI is not yet ready for “prime time.” Another, which Dennis Kennedy has remarked on, is the “Somebody Should Do Something” phenomenon, so popular that the slogan has been emblazoned on T-shirts. It implies a withdrawal from the discussion and a delegation of action to others, so that the issue no longer demands any intellectual or emotional effort from the person.
As an AI-progressive, I admit that I am caught in an echo chamber that breathlessly follows each new AI development with enthusiasm about the positive change it can bring to the world. In a sense, the progress of AI, and our reaction to it, acts as a Rorschach test of our own beliefs, biases, and identity. In the absence of evidence, an understanding of the technology, and reasoned argument based on both, we are left to our own biases. My druthers run to cautious optimism; others’ run to skepticism. That said, my understanding of the technology and what it is technically capable of squares, in my mind, with its promise.
Disruptive technologies, like generative AI, are opportunities for education and growth, yet people often resist them. Grant McCracken, commenting on popular reaction to Twitter in 2013, coined the phrase “disruption denial,” describing it as having five stages: 1) confusion, 2) repudiation, 3) shaming, 4) acceptance, and then 5) forgetting. Stage 3, shaming, is “when we are so persuaded that we’re right and the new innovation is wrong that we are prepared to make fun of the credulous among us.”1 Hype as an epithet connotes that AI is a put-on, one that makes suckers of those who don’t deny its efficacy.
Disruptive experiences (like the rise of AI) require adjustment; they necessitate a change to the guiding representation, self-concept, or course of action. For this reason, people employ various tactics to avoid change. Alex Gillespie, in a research paper on disruption and defensive tactics, argues that the social world, especially audiences, motivates defensive tactics (like denialism).2 However, this social influence “can have a perverse effect on learning; the individual forgoes the opportunity for long-term learning in favor of a short-term gain by promoting the impression that they have nothing to learn.”
But is AI hype?
Turning from subjective considerations to more objective ones: when we evaluate whether AI is in fact “hype” (in the sense that AI does not measure up to its publicity), we must consider that not all things labeled “hype” are equally deserving of dismissal. And past experience of hype linked to one technology is not indicative of future experience with a different technology. Technologies such as Bitcoin, blockchain, virtual reality, augmented reality, and even earlier forms of AI may justifiably have been considered overhyped, as they failed to fulfill or surpass their projected promises. While hype cycles vary, discerning their differences is crucial.
There must be some outside measure of the fulfillment of technological promise. I would argue that the technology’s palpable value (not in the promise of what it might accomplish some day, but in what it can deliver right now) serves as a useful metric.
For example, cryptocurrency, like Bitcoin, was touted as the "perfect currency" enabling digital, peer-to-peer transactions. However, the reality diverged significantly from this promise. Consider one example: cryptocurrency promised decentralization (eliminating the need for traditional financial intermediaries, such as banks), yet in practice, it has led to the opposite with the emergence of centralized crypto exchanges like Mt. Gox and FTX, whose failures have introduced systemic risks.
By contrast, generative AI delivers immediate, tangible value. Ask ChatGPT to create:
an image of a cat, a summary of a document, a letter to a client, blog post ideas, a poem, a translation, or anything else you can imagine…
and it produces what you asked for instantly.
Granted, the deliverable may not be perfect; it may require editing and refinement. But it is undeniably real and actionable. Moreover, the GPT technology underlying ChatGPT can be used to power software applications that layer additional functionality on top of it and ensure privacy, security, and accuracy. AI is demonstrably not hype.
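To make that layering concrete, here is a rough sketch of how an application might wrap the GPT API behind a task-specific helper. The function names, prompt wording, and model choice are my own hypothetical illustrations, not any vendor’s recommended pattern; only the endpoint and payload shape follow OpenAI’s published chat-completions API.

```python
import json
import os
import urllib.request

# Hypothetical sketch: a legal-drafting app layered on the OpenAI
# chat-completions endpoint. Names and prompts are illustrative only.
API_URL = "https://api.openai.com/v1/chat/completions"


def build_request(task: str, text: str, model: str = "gpt-4o-mini") -> dict:
    """Wrap a user task and source text in a chat-completion payload."""
    prompt = f"{task}\n\n---\n{text}"
    return {
        "model": model,
        "messages": [
            # The system message is where an app can enforce its own
            # guardrails (tone, confidentiality reminders, etc.).
            {"role": "system",
             "content": "You are a careful legal drafting assistant."},
            {"role": "user", "content": prompt},
        ],
    }


def run(task: str, text: str) -> str:
    """Send the request; requires an OPENAI_API_KEY environment variable."""
    payload = build_request(task, text)
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

An application built this way controls what goes into the prompt, what never leaves the firm’s systems, and how the output is reviewed before it reaches a client, which is exactly the kind of added functionality the paragraph above describes.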
AI Extremism
While AI holds the potential for significant positive impact, there is also a concerning trend of AI extremism. My friend Richard Tromans (also known as Artificial Lawyer) recently highlighted this issue on Twitter.
Utilitarianism posits that the needs of the many outweigh the needs of the few. In this new Andreessen formulation, the needs of AI outweigh the needs of the many. It suggests a worldview where the pursuit of AI development could, in extreme cases, eclipse fundamental human values and societal norms. What’s worse is that this dialectical, extremist thinking plays into the current cultural and political dialogue, which relies on black-or-white thinking and leaves little middle ground for thoughtful expression.
This ideological stance not only raises alarms about the potential for dehumanizing outcomes but also prompts a necessary debate on the governance, ethical deployment, and societal integration of AI technologies. As AI continues to evolve and embed itself deeper into the fabric of daily life, ensuring that its development is guided by a principled framework that prioritizes human welfare and ethical considerations becomes paramount. This approach will help mitigate the risks associated with AI extremism and ensure that the technology serves as a force for positive transformation, aligned with the broader interests of society.
Closing Thoughts
The spectrum of attitudes towards artificial intelligence, ranging from total disengagement to uncritical enthusiasm, presents significant risks. On one end, disengagement could lead to missed opportunities for leveraging AI to address complex challenges in the legal industry. On the other, an uncritical embrace of AI technologies risks overlooking potential ethical, privacy, and security implications. Therefore, the prevailing hype surrounding AI should not be dismissed as mere noise; rather, it should serve as a clarion call to thoughtful engagement and action.
This call to action necessitates a nuanced approach to AI adoption and policy-making. Stakeholders across sectors—policymakers, technologists, ethicists, and the general public—must engage in informed dialogue to navigate the complexities of AI integration. Such discussions should aim to demystify AI, making its workings and implications transparent and understandable to non-experts, thereby fostering a more informed and balanced perspective. It is through this collective scrutiny and collaboration that the potential of AI can be harnessed responsibly, ensuring that its deployment benefits society at large while minimizing harm. By shifting the narrative from one of hype to one of informed optimism, we can chart a course towards an AI-enhanced future that upholds human dignity, equity, and the common good.
Now, whether or not that ever happens, we’ll have to wait and see…
Where do you stand? Are you concerned about the direction of the conversation? Do you think extremism is a risk? Share your thoughts in the comments below.
McCracken, G. (2013, April 15). The Five Stages of Disruption Denial. Harvard Business Review.
Gillespie, A. (2020). Disruption, Self-Presentation, and Defensive Tactics at the Threshold of Learning. Review of General Psychology, 24(4), 382-396. https://doi.org/10.1177/1089268020914258