Daily News: April 18, 2025
AI news that pops: daily insights, fast takes, and the future right in your inbox
Hey there friends👋! In today’s edition, you’re getting 5 🆕 news items and my take on what it all means. That’s it — delivered to your inbox, daily.
Subscribe to LawDroid Manifesto and don’t miss tomorrow’s edition:
LawDroid AI Daily News is here to keep you up to date on the latest news items and analysis about where AI is going, from a local and global perspective. Please share this edition with your friends and colleagues, and remember to tell me what you think in the comments below.
Today’s News
Here are the top 5 recent news items on artificial intelligence:
1. OpenAI’s New Reasoning AI Models Hallucinate More, Raising Concerns
OpenAI’s latest reasoning-focused AI models, o3 and o4-mini, hallucinate significantly more than their predecessors, despite advancements in other areas like coding and math. Internal tests show these models produce more false information, with o3 hallucinating in 33% of queries on OpenAI’s PersonQA benchmark, about double the rate of earlier models, while o4-mini performed even worse at 48%. Researchers also found o3 often fabricated its own actions. Experts suggest that reinforcement learning methods might amplify hallucinations, complicating AI’s practical use in accuracy-critical fields. OpenAI acknowledges the issue but hasn’t yet identified its root cause, making addressing hallucinations an urgent priority.
Source: https://techcrunch.com/2025/04/18/openais-new-reasoning-ai-models-hallucinate-more/
2. Actors Regret Selling AI Avatars as Likenesses Used in Scams and Propaganda
Actors who licensed their faces and voices for AI-generated avatars are expressing regret as their digital selves appear in scams, propaganda, and embarrassing videos. Some actors, enticed by quick earnings, unknowingly signed contracts granting companies unrestricted rights to their likenesses. Adam Coy found himself portrayed as a doomsayer, while Simon Lee's avatar promoted dubious health products. Even reputable companies like Synthesia, which recently reached a $2 billion deal with Shutterstock, admit moderation can fail, as shown when actor Connor Yeates' avatar appeared in political propaganda. Synthesia now offers actors equity options, stricter moderation, and opt-outs, but regrets remain about irreversible misuse of their digital identities.
3. Johnson & Johnson Refocuses AI Strategy, Cuts Redundant GenAI Projects
Johnson & Johnson is shifting its generative AI strategy from broad experimentation to targeted, high-value applications. After initially pursuing around 900 generative AI projects companywide, CIO Jim Swanson says the company found that only 10%-15% drove significant business value. J&J is now concentrating on specific uses in drug discovery, supply chain risk mitigation, and internal operations like chatbots that help employees navigate company policies. This strategic pivot involves decentralizing governance to corporate functions better equipped to assess the effectiveness and value of AI applications, eliminating redundant or ineffective initiatives, and scaling successful use cases.
Source: https://www.wsj.com/articles/johnson-johnson-pivots-its-ai-strategy-a9d0631f
4. Google DeepMind Says AI Models Must Move Beyond Human Knowledge
Google's DeepMind researchers argue that current AI approaches relying on static human-generated data are limiting AI’s potential. In their new paper, David Silver and Richard Sutton propose an innovative approach called "streams," allowing AI models to learn continuously through direct, ongoing experiences with the environment, similar to human learning. Rather than merely answering discrete human questions, stream-based agents would independently interact with their surroundings, receiving real-time feedback or "reward signals," enabling them to set and pursue long-term goals. The researchers suggest this approach could vastly surpass existing AI capabilities, leading to unprecedented intelligence—but also raising new risks related to autonomy and human oversight.
Source: https://www.zdnet.com/article/ai-has-grown-beyond-human-knowledge-says-googles-deepmind-unit/
5. Artists Push Back Against Trend of AI-Generated Dolls, Citing Threat to Creativity
Artists and illustrators are voicing frustration over the viral trend of people using AI to turn their photos into doll-like "starter pack" images, fearing it could undermine their livelihoods and creativity. Handmade action figure creator Nick Lavellee, whose commissions sell for hundreds of dollars, worries AI could saturate the market and damage perceptions of authentic craft. Other artists have joined the #StarterPackNoAI movement to protest the superficiality and potential intellectual property issues of AI-generated images. Although some acknowledge AI’s potential usefulness, they emphasize that genuine artistry lies in originality, human effort, and personal expression—qualities AI cannot replicate.
Source: https://www.bbc.com/news/articles/c3v9z45pe93o
Today’s Takeaway
These stories underscore my deepening concern that the relentless push for AI advancement is dangerously outpacing our ability to control its consequences. OpenAI’s increased AI hallucinations raise fundamental doubts about trustworthiness and accountability—essential issues as we integrate AI into critical areas like medicine, law, and education. Actors regretting their AI avatars highlight the severe ethical failures that occur when profit-driven ventures commoditize human identities without adequate safeguards. Johnson & Johnson’s scaling back indicates a crucial realization: indiscriminate adoption isn’t progress; targeted, ethical applications are. DeepMind’s provocative call for autonomous AI that learns beyond human knowledge is both fascinating and alarming, raising the very real risk of relinquishing human oversight. Artists’ protests against AI-generated dolls spotlight the cultural and economic harms that could follow when AI is allowed to dilute genuine creativity and craftsmanship. Overall, these developments are a clear warning that we must urgently establish rigorous ethical frameworks, accountability mechanisms, and thoughtful regulation—before unchecked innovation does lasting damage to our society and our humanity.
Subscribe to LawDroid Manifesto
LawDroid Manifesto is your authentic source for analysis and news on your legal AI journey, featuring insightful articles and personal interviews with innovators at the intersection of AI and the law. Best of all, it’s free!
Subscribe today:
By the way, as a LawDroid Manifesto premium subscriber, you would get access to exclusive toolkits, like the Missing Manual: OpenAI Operator, coming out this month…
With these premium toolkits, you not only learn about the latest AI innovations and news items, but you get the playbook for how to use them to your advantage.
If you want to be at the front of the line to get first access to helpful guides like this, and have the inside track to use AI as a force multiplier in your work, upgrade to become a premium LawDroid Manifesto subscriber today!
I look forward to seeing you on the inside. ;)
Cheers,
Tom Martin
CEO and Founder, LawDroid