Daily News: April 17, 2025
AI news that pops: daily insights, fast takes, and the future right in your inbox
Hey there friends👋! In today’s edition, you’re getting 5 🆕 news items and my take on what it all means. That’s it — delivered to your inbox, daily.
Subscribe to LawDroid Manifesto and don’t miss tomorrow’s edition:
LawDroid AI Daily News is here to keep you up to date on the latest news items and analysis about where AI is going, from a local and global perspective. Please share this edition with your friends and colleagues and remember to tell me what you think in the comments below.
Today’s News
Here are the top 5 recent news items on artificial intelligence:
1. Police Deploy AI Social Media Bots to Monitor Protesters and Criminal Suspects
U.S. police departments near the Mexico border are paying significant sums for AI technology from Massive Blue, designed to create lifelike, undercover social media personas to interact with and collect intelligence on suspected criminals, political activists, and "college protesters." The AI product, called Overwatch, generates personas like protesters, escorts, or even juveniles, who communicate with suspects over platforms like Discord, Telegram, and text messaging. Despite claims of effectiveness, authorities have yet to report any arrests linked directly to the system, raising concerns among privacy advocates who warn this technology might infringe upon civil liberties and First Amendment rights. The technology's secrecy and lack of transparency have drawn criticism, particularly as details emerge about police targeting loosely defined groups such as activists and student protesters.
Source: https://www.wired.com/story/massive-blue-overwatch-ai-personas-police-suspects/
2. Google and OpenAI Battle for Students with Free AI Tools
As finals season arrives, Google and OpenAI are competing to win over college students with generous, free access to powerful AI tools. OpenAI recently offered ChatGPT Plus, featuring GPT-4o and DALL·E 3, free to U.S. and Canadian college students through May, providing immediate academic support during exam periods. In response, Google introduced Google One AI Premium free for enrolled students until Spring 2026, featuring Gemini 2.5 Pro, Veo 2 video generation, and 2TB of cloud storage—designed for long-term academic use. These tools significantly reshape how students learn, collaborate, and create, but also challenge universities to rethink curricula and assessments to maintain academic integrity and digital equity. The competition marks a pivotal moment, positioning AI as essential to higher education’s future.
3. Peter Singer Launches AI Chatbot to Explore Ethical Dilemmas
Philosopher Peter Singer, known for his influential ethical thinking, has released an AI-powered chatbot designed to guide users through complex moral questions. Named "Peter Singer AI," the bot engages users using principles from Singer’s extensive philosophical work, employing a Socratic dialogue approach. Guardian journalist Stephanie Convery tested the chatbot and found that while it effectively prompted reflection and ethical consideration, it often gave cautious, generalized answers rather than definitive guidance. Convery observed that, despite prompting users to consider important ethical factors, the chatbot lacks genuine emotional engagement, empathy, and contextual understanding, highlighting the limitations of relying on AI for human moral discourse.
4. Russia is Manipulating AI Chatbots with Propaganda, Highlighting Major Vulnerabilities
Russia is systematically flooding the internet with false narratives specifically designed to manipulate AI chatbots, successfully spreading disinformation on topics such as the Ukraine conflict. These tactics, known as "LLM grooming," leverage automated propaganda networks to trick chatbots into repeating misleading claims, posing significant risks as AI becomes widely adopted for information retrieval. The vulnerability is exacerbated by rushed AI rollouts, weakened government oversight, and reduced content moderation, raising urgent concerns about the integrity of information provided by popular chatbot services.
Source: https://www.washingtonpost.com/technology/2025/04/17/llm-poisoning-grooming-chatbots-russia/
5. The Real Reason Students Are Using AI to Avoid Learning
Students aren't turning to AI because they're lazy; they're doing it because social media has already eroded their attention spans, argues Catherine Goetze. Platforms like TikTok and Instagram have conditioned young minds for instant gratification, making it difficult for students to engage deeply with challenging tasks. AI offers an easy escape from frustration, but it also risks undermining critical skills and self-confidence. Yet AI isn't inherently harmful; in fact, Goetze highlights its potential to rekindle genuine curiosity and deep learning when used creatively. Rather than restricting AI, educators must model curiosity, teach critical engagement, and address the broader attention crisis caused by the algorithms that have reshaped how young people learn.
Source: https://time.com/7276807/why-students-using-ai-avoid-learning/
Today’s Takeaway
These stories underscore how AI's rapid evolution is outpacing our ethical and societal safeguards, creating serious risks alongside its potential benefits. The use of AI-generated social media personas by law enforcement is especially disturbing, highlighting a dangerous new form of surveillance that threatens civil liberties and democratic freedoms. Google and OpenAI competing to win over students may seem beneficial, but it could inadvertently deepen dependence on AI tools, fundamentally altering education and compromising critical thinking. Peter Singer's ethical chatbot points out AI’s inherent limitations: useful for reflection, yet incapable of genuine human empathy or moral judgment. Russia's manipulation of chatbots emphasizes just how vulnerable our information ecosystems have become, underscoring the urgent need for robust protections against misinformation. Lastly, students turning to AI due to diminished attention spans reveals a broader crisis caused by algorithm-driven platforms, raising vital questions about whether AI can help or further hinder genuine learning. Collectively, these stories send a clear message: we urgently need thoughtful regulation, transparency, and ethical responsibility to ensure AI enhances humanity rather than undermining it.
Subscribe to LawDroid Manifesto
LawDroid Manifesto is your authentic source for news and analysis on your legal AI journey: insightful articles and personal interviews with innovators at the intersection of AI and the law. Best of all, it’s free!
Subscribe today:
By the way, as a LawDroid Manifesto premium subscriber, you would get access to exclusive toolkits, like the Missing Manual: OpenAI Operator, coming out this month…
With these premium toolkits, you not only learn about the latest AI innovations and news items, but you get the playbook for how to use them to your advantage.
If you want to be at the front of the line to get first access to helpful guides like this, and have the inside track to use AI as a force multiplier in your work, upgrade to become a premium LawDroid Manifesto subscriber today!
I look forward to seeing you on the inside. ;)
Cheers,
Tom Martin
CEO and Founder, LawDroid