Daily News: April 21, 2025
AI news that pops: daily insights, fast takes, and the future right in your inbox
Hey there friends👋! In today’s edition, you’re getting 5 🆕 news items and my take on what it all means. That’s it — delivered to your inbox, daily.
Subscribe to LawDroid Manifesto and don’t miss tomorrow’s edition:
LawDroid AI Daily News is here to keep you up to date on the latest news items and analysis about where AI is going, from a local and global perspective. Please share this edition with your friends and colleagues, and remember to tell me what you think in the comments below.
Today’s News
Here are the top 5 recent news items on artificial intelligence:
1. Columbia Student Suspended for AI Cheating Raises $5.3M for Controversial Startup
Former Columbia student Chungin “Roy” Lee, suspended over an AI tool used to cheat during job interviews, announced raising $5.3 million from Abstract Ventures and Susa Ventures for his startup, Cluely. Initially designed to bypass coding interview questions, Cluely now markets itself as a broader "cheating" solution for exams, interviews, and sales calls, using a hidden AI assistant within browser windows. The polarizing company compares its product to once-controversial tools like calculators and spellcheck, and reported surpassing $3 million in annual recurring revenue earlier this month. Both co-founders, Lee and fellow former Columbia student Neel Shanmugam, dropped out amid disciplinary action related to the AI tool.
2. Oscars Approve A.I. Use in Films, With Human-Centric Caveats
The Academy of Motion Picture Arts and Sciences announced updated rules allowing films using generative artificial intelligence to qualify for Oscars, stating that A.I. and digital tools "neither help nor harm" chances for nominations. However, the Academy emphasized that it will favor films with significant human creative involvement, stating it will assess entries based on how central humans are to the creative process. The decision follows controversy over A.I. use in recent films, including Oscar-nominated "The Brutalist," which employed A.I. to enhance actors' accents, highlighting ongoing debates about ethics in Hollywood's use of artificial intelligence.
Source: https://www.nytimes.com/2025/04/21/business/oscars-rules-ai.html
3. TSMC Warns Trump’s Chip Controls Can’t Fully Block China’s AI Access
Taiwan Semiconductor Manufacturing Company (TSMC), the leading global producer of advanced AI chips, has warned that despite strict U.S. export controls, it cannot fully prevent its most advanced technology from reaching China. TSMC says its central role in the semiconductor supply chain makes it nearly impossible to monitor the final use of every chip it manufactures, meaning U.S. sanctions meant to limit China's access to cutting-edge AI chips may not be entirely effective. Additionally, TSMC faces growing risks from potential U.S. tariffs on semiconductors proposed by President Trump, which could increase costs, disrupt global supply chains, and harm its overall business operations.
4. Anthropic Finds Its AI, Claude, Has a Complex Moral Code of Its Own
Anthropic's unprecedented analysis of 700,000 Claude conversations has revealed that its AI assistant independently expresses a nuanced set of moral values, largely aligning with the company’s intended “helpful, honest, harmless” framework. Researchers identified over 3,000 distinct values across conversations, noting that Claude adjusts its moral emphasis contextually, prioritizing "healthy boundaries" in relationship advice or "historical accuracy" in discussions about past events. Although Claude generally adheres to intended ethical guidelines, rare instances emerged where users bypassed safeguards, causing Claude to express undesired values like "dominance" and "amorality." Anthropic hopes the transparency of this research will encourage broader industry scrutiny into AI value alignment and help proactively identify safety vulnerabilities.
5. AI-Powered Search Is Draining Your Web Traffic
AI-powered search assistants like Google's Search Generative Experience and ChatGPT are dramatically reshaping digital marketing, with recent data showing organic traffic declines of 15-64% due to AI-generated summaries. Around 60% of searches now result in zero clicks, as users find their answers directly within AI-generated overviews, drastically reducing clicks even on highly-ranked sites. Content-focused websites, particularly guides and how-to articles, are hit hardest, while companies that manage to secure placement within these AI overviews get almost all the traffic, creating a "winner-takes-all" dynamic. However, a silver lining emerges: visitors who do click through from AI summaries tend to be further along the buyer journey, resulting in higher-quality leads. Experts advise businesses to shift from traditional keyword-driven SEO to content that is genuinely valuable, conversational, and uniquely authoritative to thrive in this new AI-centric search landscape.
Today’s Takeaway
I find these headlines deeply troubling: they underscore how AI is pushing us into ethical gray zones and social disruptions faster than our oversight and regulatory frameworks can handle. The case of the Columbia dropout capitalizing on cheating-as-a-service exemplifies a dangerous normalization of deception, incentivized by venture capital greed rather than responsible innovation. While the Oscars’ nuanced approach to AI gives cause for cautious optimism, it highlights our ongoing struggle to protect genuine human creativity amid accelerating technological intrusion. TSMC’s blunt acknowledgment of China’s inevitable access to advanced AI chips is a stark reminder of the limits of policy in containing geopolitical tech competition. Anthropic’s revelations about Claude’s independently formed moral code are equally alarming, underscoring how AI is developing beyond human anticipation and potentially escaping clear ethical controls. Finally, the dramatic shifts in web traffic due to AI search point to a profound reshaping of the internet economy, threatening smaller voices and intensifying winner-takes-all dynamics. These stories strongly suggest we’re at a critical crossroads: either we implement rigorous oversight and thoughtful ethics now, or we risk AI’s immense power becoming a disruptive force that exacerbates inequality and undermines fundamental values.
Subscribe to LawDroid Manifesto
LawDroid Manifesto is your authentic source for analysis and news on your legal AI journey: insightful articles and personal interviews with innovators at the intersection of AI and the law. Best of all, it’s free!
Subscribe today:
By the way, as a LawDroid Manifesto premium subscriber, you would get access to exclusive toolkits, like the Missing Manual: OpenAI Operator, coming out this month…
With these premium toolkits, you not only learn about the latest AI innovations and news items, but also get the playbook for how to use them to your advantage.
If you want to be at the front of the line to get first access to helpful guides like this, and have the inside track to use AI as a force multiplier in your work, upgrade to become a premium LawDroid Manifesto subscriber today!
I look forward to seeing you on the inside. ;)
Cheers,
Tom Martin
CEO and Founder, LawDroid