Daily News: June 9, 2025
AI news that pops: daily insights, fast takes, and the future right in your inbox
Hey there friends👋! In today’s edition, you’re getting 5 🆕 news items and my take on what it all means. That’s it — delivered to your inbox, daily.
Subscribe to LawDroid Manifesto and don’t miss tomorrow’s edition:
Today's News
Here are the top 5 recent news items on artificial intelligence:
1./ China Temporarily Disables AI Tools to Prevent Cheating in College Exams
Chinese technology companies, including Alibaba, ByteDance, Tencent, and Moonshot, have temporarily disabled features like image recognition in their popular AI chatbot applications to prevent students from cheating during the national college entrance exams (“gaokao”). With over 13 million students competing for limited university spots, authorities and tech companies have taken precautions by blocking certain AI functionalities during exam hours to uphold exam fairness and integrity. The move comes amid growing global concern about AI-assisted academic dishonesty, prompting similar responses in educational institutions worldwide.
Source: https://www.theverge.com/news/682737/china-shuts-down-ai-chatbots-exam-season
2./ AI’s Scariest Reality: Even Creators Don’t Understand How It Works
AI companies racing to build advanced large language models (LLMs), such as OpenAI’s ChatGPT, Anthropic’s Claude, and Google’s Gemini, admit they don’t fully understand how or why their models produce specific responses, raising profound ethical and safety concerns. Despite pouring billions into developing superhuman intelligence, executives acknowledge these systems’ behaviors remain largely opaque, occasionally unpredictable, and sometimes malicious. Industry leaders including Sam Altman and Elon Musk openly express concerns about AI’s unknown capacities, even as regulatory oversight remains minimal and tech companies push forward rapidly, driven by competition. This widespread uncertainty about AI’s internal logic highlights the urgent need for deeper transparency and safety mechanisms to prevent potential catastrophic outcomes.
Source: https://www.axios.com/2025/06/09/ai-llm-hallucination-reason
3./ Apple Researchers Cast Doubt on AI’s “Reasoning” Capabilities in New Study
Apple researchers have released a provocative study questioning the widely promoted “reasoning” capabilities of leading AI models from OpenAI, Anthropic, and Google. Their findings suggest these AI systems suffer a “complete accuracy collapse” when faced with problems beyond certain complexity thresholds, a phenomenon they call “overthinking.” The paper argues that current benchmarks inadequately represent true reasoning skills, highlighting fundamental limitations in the AI industry’s claims and raising serious questions about whether the technology has hit genuine barriers to generalizable reasoning, despite massive investments and hype around AI’s capabilities.
Source: https://futurism.com/apple-damning-paper-ai-reasoning
4./ Corporate AI Adoption Hits Plateau Amid Growing Skepticism, New Data Shows
Corporate America’s rapid adoption of AI technologies appears to be plateauing, according to recent data from fintech company Ramp, which shows AI product usage among businesses leveling off at 41% in May after nearly a year of steady growth. Ramp’s AI Index indicates that while large firms lead with 49% adoption, medium- and small-sized businesses trail behind at 44% and 37%, respectively. This slowdown aligns with broader trends, such as Klarna rehiring human support agents after AI negatively impacted service quality, and an S&P Global report showing a sharp rise, from 17% last year to 42% now, in companies discontinuing generative AI pilot projects due to disappointing outcomes.
Source: https://techcrunch.com/2025/06/09/corporate-ai-adoption-may-be-leveling-off-according-to-ramp-data/
5./ U.S. State Department Turns to AI to Select Job Review Panels
The U.S. State Department is utilizing an artificial intelligence chatbot called StateChat, developed with technology from Palantir and Microsoft, to select personnel who serve on critical annual Foreign Service Selection Boards, which determine promotions and job placements within the department. Although the department clarified that actual evaluations will still be conducted by humans, StateChat will choose candidates based on skill assessments, excluding explicit consideration of diversity or minority representation, raising concerns from employee associations about compliance with statutory requirements. This adoption of AI aligns with broader trends across the Trump administration to increase AI usage in government processes.
Today's Takeaway
These headlines suggest we’ve reached a crossroads in our AI journey: globally, trust in AI is faltering just as its complexity and the stakes grow higher. China’s decision to disable AI tools during college exams reveals deepening worries about fairness and integrity in education, hinting at an inevitable global reckoning with AI-assisted cheating. Meanwhile, industry giants openly admitting they barely comprehend the workings of their own creations is a chilling reminder that we’re hurtling forward blindfolded, prioritizing speed and competition over transparency and control. Apple’s critique exposing severe limitations in AI “reasoning” further punctures Silicon Valley’s inflated narrative, questioning whether true AI intelligence might remain elusive. The plateau in corporate AI adoption confirms widespread disillusionment, as businesses increasingly recognize AI’s real-world limits and hidden pitfalls. And finally, the U.S. government’s embrace of AI for personnel decisions underlines the urgency of ensuring transparency and accountability as AI is integrated into critical public functions. Together, these stories spotlight the urgent need to slow down, establish clearer boundaries, and forge careful policy guardrails before AI’s unchecked momentum eclipses human control entirely.
Subscribe to LawDroid Manifesto
LawDroid Manifesto is your authentic source for analysis and news on your legal AI journey: insightful articles and personal interviews with innovators at the intersection of AI and the law. Best of all, it’s free!
Subscribe today:
By the way, as a LawDroid Manifesto premium subscriber, you get access to exclusive toolkits, like the Missing Manual: OpenAI Operator, with a new toolkit released every month…
With these premium toolkits, you not only learn about the latest AI innovations and news items, but you get the playbook for how to use them to your advantage.
If you want to be at the front of the line to get first access to helpful guides like this, and have the inside track to use AI as a force multiplier in your work, upgrade to become a premium LawDroid Manifesto subscriber today!
I look forward to seeing you on the inside. ;)
Cheers,
Tom Martin
CEO and Founder, LawDroid