The Silent Takeover: How AI Is Quietly Seizing Control of Our Lives
Where I explore why the battle for AI governance isn’t just about tech workers or billionaires; it’s about who controls the fundamental building blocks of human society
Welcome back, my fellow legal rebels! This one's a little different, maybe a little soapbox-y, if you know what I mean. I've been thinking about the issues I discuss in this article for a while now, and I suspect you may have been too. You see, I watch my life becoming increasingly mediated by AI systems I don't fully understand (and I teach this stuff!) and can't control. Every day, I make choices that feel like my own, but are actually shaped by algorithms optimizing for outcomes I never agreed to.
Here's what keeps me up at night: The question isn't whether you trust today's tech leaders; it's whether you trust the institutional structures we're creating to constrain future ones. Because the power we're concentrating today will eventually be wielded by people we haven't met, pursuing goals we haven't agreed to, using capabilities we can barely imagine. Got your attention? I hope so.
This Substack, LawDroid Manifesto, is here to keep you in the loop about the intersection of AI and the law. Please share this article with your friends and colleagues and remember to tell me what you think in the comments below.
You probably don't think of yourself as living under algorithmic rule. You wake up, check your phone, scroll through social media, maybe order an Uber, swipe right for a date, apply for jobs online, and go about your day feeling like you're making independent choices. But here's the uncomfortable truth: artificial intelligence systems are already controlling more of your life than any king, president, or corporate board ever has. This isn't science fiction. It's happening right now, and the window to influence how it develops is closing faster than most people realize.
As UCLA Law Professor Eugene Volokh[1] warns in his article, “Generative AI and Political Power,”[2] we're witnessing "the likely surge in concentrated Big Tech power" as AI companies acquire "tremendous power to influence political life." When people turn to AI for answers to political questions, and they increasingly will, the responses these systems provide will "subtly but substantially influence public attitudes and, therefore, elections."
If this sounds interesting to you, please read on…
The Invisible Revolution
When we think about artificial intelligence taking over, we imagine robot armies or super-intelligent computers plotting world domination. The reality is far more subtle and, in many ways, more complete. The danger is not that AI will take over; it's that the few who control the care and feeding of AI already have.
AI systems have quietly assumed control over what economists call the "means of production," the fundamental resources and systems that create wealth and organize society. Unlike previous technological revolutions that disrupted individual sectors, AI is becoming like Tolkien's One Ring, a single technology that can rule them all, controlling every aspect of economic and social life simultaneously.
Consider what happened to you just today. An algorithm decided which posts you saw on social media, potentially influencing your mood and opinions. If you used a dating app, an algorithm chose which potential partners you could even discover. If you applied for a job or a loan, an algorithm likely made the initial screening decision about your worthiness. If you ordered food delivery, an algorithm set the price you paid and determined which driver got the work.
These aren't minor conveniences. These systems are making decisions about your social relationships, economic opportunities, and access to information: the basic building blocks of modern human life.
“The danger is not that AI will take over; it's that the few who control the care and feeding of AI already have.”
The New Masters
Throughout history, social power has flowed from control over productive resources. Feudal lords controlled land. Industrial capitalists controlled factories and machinery. Financial elites controlled access to credit and investment.
Today, a handful of technology companies (OpenAI, Google, Apple, Microsoft, Amazon) are achieving something unprecedented: simultaneous control across multiple domains of human activity. The same algorithms that decide what news you see also influence who you date, where you work, what you buy, and how you perceive and understand the world. This represents a concentration of power that would make previous elites envious.
Economist Yanis Varoufakis calls this emerging system "technofeudalism," a new form of economic organization where digital platforms extract value not through traditional market mechanisms, but by controlling the digital infrastructure through which all economic and social activity increasingly flows. Like medieval lords who controlled the land that peasants needed to survive, tech companies now control the digital territories where modern life takes place.
But here's what makes this different from past power concentrations: AI control operates at the speed of light across billions of people simultaneously. When a traditional media baron wanted to influence public opinion, they had to craft headlines and hope people read their newspaper. When an AI system wants to influence behavior, it can personalize messaging to billions of individuals in real-time, optimizing each interaction for maximum psychological impact.
Your Data, Their Power
The foundation of this control is your data, and you're giving it away for free. Every search query, every click, every pause while scrolling, every like, every purchase, every location ping from your phone, every second you spend on a screen, becomes training data that makes these systems more powerful and more valuable.
Think about it: you're not just the consumer of AI-powered services, you're also the unpaid laborer training the systems that will eventually control your choices, in your personal life and your work. It's as if factory workers in the 1800s had to build the machines that would replace them, except they also had to pay for the privilege.
This creates what I call the "dependency trap."
Unlike previous technologies that people could choose to avoid, AI systems are becoming essential infrastructure. Try navigating a modern city without GPS, finding a job without online platforms, or maintaining social connections without algorithmic social media. Each year, hundreds of millions more people actively use AI, and adoption rates climb ever higher. The deeper these systems embed themselves into daily life, the harder it becomes to opt out or organize resistance.
“AI is becoming like Tolkien's One Ring, a single technology that can rule them all, controlling every aspect of economic and social life simultaneously.”
The Illusion of Choice
Perhaps the most insidious aspect of algorithmic control is how it maintains the illusion of personal freedom while systematically narrowing your options. When you scroll through a dating app, you feel like you're choosing among potential partners. In reality, an algorithm has already decided which people you'll never even see, based on criteria you don't know and can't influence.
When you search for information, you feel like you're exploring the vast landscape of human knowledge. In reality, you're seeing a carefully curated selection, designed to keep you engaged with the platform, not necessarily to inform you accurately.
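To make the invisible concrete, here is a toy sketch of how a typical recommendation pipeline is structured. Everything in it is invented for illustration (the scoring function, the filter rules, the data); real platforms use proprietary models at massive scale, but the basic architecture, an invisible eligibility filter followed by an engagement-maximizing ranker, is the common pattern.

```python
# A toy recommendation pipeline, purely illustrative. The scoring
# function, filter rules, and data below are invented for this sketch;
# real platforms use proprietary models, but the shape is similar:
# silently prune the candidate pool, then rank what's left for engagement.

def predicted_engagement(user, item):
    # Stand-in for a learned model estimating how likely this user
    # is to click, like, or linger on this item.
    return user["affinity"].get(item["topic"], 0.0)

def passes_invisible_filters(user, item):
    # Eligibility rules the user never sees: policy blocklists,
    # business arrangements, "similar users didn't engage," and so on.
    return item["topic"] not in user.get("suppressed_topics", [])

def build_feed(user, candidates, k=3):
    # Step 1: silent pruning. Items removed here aren't ranked low;
    # from the user's point of view, they never existed at all.
    eligible = [c for c in candidates if passes_invisible_filters(user, c)]
    # Step 2: rank the survivors by predicted engagement, which is not
    # the same thing as ranking by what would inform or serve the user.
    eligible.sort(key=lambda c: predicted_engagement(user, c), reverse=True)
    return eligible[:k]

user = {"affinity": {"politics": 0.9, "gardening": 0.2},
        "suppressed_topics": ["local_news"]}
candidates = [{"id": i, "topic": t} for i, t in
              enumerate(["politics", "local_news", "gardening", "politics"])]

print(build_feed(user, candidates))
# The local_news item never appears, and nothing in the output says why.
```

The details are invented, but the architecture is the point: the filtering step runs before you ever "choose," and nothing in the interface reveals that it happened.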
This represents what Volokh identifies as a fundamental shift from the "User Sovereignty Model," where technology tools like word processors and browsers were designed to be "faithful servants of the user," to what he calls a "Public Safety and Social Justice Model," where AI systems "are designed in part to refuse to output certain answers that their creators think are dangerous or immoral."
AI has flipped the technological script: machines are no longer neutral tools, but active participants in the narratives that filter and interpret our reality.
The implications are staggering. As Yuval Noah Harari argues in Nexus, AI represents a fundamental shift from tools that amplify human capabilities to entities that can create and manipulate information independently: "For the first time in history, we face the prospect of information tools that can generate new ideas and narratives without human input." Volokh mirrors this observation: "[A]rguments included in AI outputs will tend to become conventional wisdom" while "arguments AI programs decline to provide will largely vanish to most people."
We're looking at unprecedented influence over the boundaries of acceptable thought. But this is control without coercion. No one forces you to use these systems; once you do, though, they shape your reality in ways that are virtually impossible to detect or resist individually. Like a Chinese finger trap, the system binds you more tightly the harder you pull against it.
“AI has flipped the technological script: machines are no longer neutral tools, but active participants in the narratives that filter and interpret our reality.”
What's at Stake?
The trajectory we're on leads to a society where a small number of people control the algorithmic systems that govern everyone else's lives. We've seen this with social media, but AI amplifies it by orders of magnitude. This isn't necessarily because tech executives are evil; many have good intentions. It's because the current structure of AI development concentrates power by design.
In Varoufakis's analysis, this represents a fundamental departure from capitalism itself. Under traditional capitalism, profit came from producing goods and services for exchange between market participants. But under technofeudalism, profit comes from controlling the digital infrastructure and extracting rent from everyone who must use it to participate in modern life. You don't buy social connection, information, or economic opportunity; you access them through platforms that monetize your attention and data while controlling your experience.
If this continues unchecked, we're looking at a future where:
- Your economic opportunities are determined by algorithms optimizing for corporate profits, not human flourishing
- Your social relationships are mediated by systems designed to maximize engagement and data extraction
- Your access to information is filtered through the priorities of a few tech companies
- Your political choices are influenced by personalized manipulation campaigns that make traditional propaganda look primitive
This isn't just about technology; it's about democracy itself. How can we have meaningful self-governance when the systems that shape our understanding of the world are controlled by unelected corporate boards operating with minimal transparency or accountability, especially when their highest duty is a fiduciary duty to maximize shareholder value, not to respect the freedom and dignity of ordinary citizens?
Resistance Is Futile?
Traditional forms of resistance don't work well against algorithmic power. You can't strike against an algorithm. You can't march on a data center and expect to change how recommendation systems work. The speed and scale of AI deployment outpace the normal rhythms of democratic deliberation and social organizing.
More fundamentally, the traditional bulwark against concentrated economic power, organized labor, has been systematically weakened over decades. Union membership has collapsed from over 30% of the workforce in the 1950s to barely 10% today. Just when workers need collective action most urgently to resist AI displacement, they lack the institutional power to mount it. Further, AI is already eliminating jobs across sectors: customer service representatives replaced by chatbots, radiologists' readings automated by image recognition, journalists aped by algorithms, drivers displaced by autonomous vehicles, artists' craft approximated by simple prompts, software engineers laid off in favor of AI coding agents. But unlike previous waves of automation that created new categories of work, AI threatens to eliminate entire classes of physical and cognitive labor without clear replacement opportunities.
This leaves law and regulation as the primary remaining checks on concentrated AI power. Yet here too, the path forward appears likely to be systematically blocked. The House of Representatives has just passed a “big, beautiful bill” that aims to ban States from regulating AI for 10 years. It is now being considered in the Senate and appears to have the votes to pass. To be clear, the proposed ban on State regulation of AI is not because the federal government plans to take action of its own to control AI. On the contrary, the control wrested from the States is to be replaced with inaction, a hands-off approach that prioritizes innovation over democratic governance. This regulatory abdication destroys any viable institutional path to resist the concentration of power we're witnessing.
Meanwhile, the benefits of AI systems are real and immediate. Search engines help you find information. Navigation apps get you where you're going. Recommendation systems introduce you to music and movies you enjoy. ChatGPT answers questions, drafts content, increases productivity, and can even mimic a companion. And now, AI agents promise to deliver outcomes, not just assistance. All of this makes resistance feel less like solidarity and more like self-punishment.
But the good news is that these systems depend entirely on human participation. Every algorithm needs data, and all data comes from people living their lives. This creates potential leverage points, but only if people understand the stakes and act collectively.
Taking Back Control
The path forward isn't to smash the machines or retreat to a pre-digital world. It's to democratize control over the AI systems that are becoming essential infrastructure.
This starts with understanding how these systems work and making their operations visible. Most people have no idea how extensively algorithms shape their daily experience. Making this invisible control visible is the first step toward challenging it.
As Volokh suggests, we need to decide whether we want a return to "User Sovereignty," where AI systems serve users' interests rather than their creators' ideological goals, or whether we will accept a model where tech companies decide what information and perspectives you should be allowed to access.
At the local level, we can push for algorithmic transparency in government services, demand accountability from platforms that shape our communities, and support alternative systems designed for human flourishing rather than data extraction.
At the broader level, we need new institutions and governance structures that treat essential algorithms like public utilities: too important to be left entirely to private control, requiring democratic oversight and public accountability. This might include what Volokh calls "protecting competition" through antitrust action, mandating access to essential digital infrastructure, or creating "compulsory licensing schemes" that prevent monopolistic control over the technologies that power AI systems.
Closing Thoughts
The crucial point is timing. Once AI systems become deeply embedded in essential infrastructure, and we're already well down that path, changing their governance becomes exponentially harder. Every day that passes, these systems become more entrenched, more essential, and more difficult to challenge.
The danger isn't that machines will rule us. The danger is that we're voluntarily handing the controls to a small group of humans who've convinced us their interests align with ours. History suggests that's rarely a safe bet.
This isn't a problem for future generations to solve. The decisions being made right now about how AI develops will determine whether these technologies serve human freedom and flourishing or become instruments of unprecedented control.
The choice is still ours, but probably not for much longer. The question isn't whether artificial intelligence will reshape society; it's already doing just that. The question is whether that reshaping will be done to us or for us.
The silent takeover is already underway. Will you help decide how the story ends?
By the way, did you know that I now offer a daily AI news update? You get 5 🆕 news items and my take on what it all means, delivered to your inbox every weekday.
Subscribe to the LawDroid AI Daily News and don’t miss tomorrow’s edition:
LawDroid AI Daily News is here to keep you up to date on the latest news items and analysis about where AI is going, from a local and global perspective. Please share this edition with your friends and colleagues and remember to tell me what you think in the comments below.
If you're an existing subscriber, you can read the daily news here. I look forward to seeing you on the inside. ;)
Cheers,
Tom Martin
CEO and Founder, LawDroid
[1] Eugene Volokh is the Thomas M. Siebel Senior Fellow at the Hoover Institution at Stanford, and the Gary T. Schwartz Distinguished Professor of Law Emeritus and Distinguished Research Professor at UCLA School of Law.
Full disclosure: Professor Volokh taught me both Copyright Law and Constitutional Law at UCLA Law in the late 1990s, when the internet was still young and the idea that algorithms might copy all web content worldwide and control political discourse seemed like distant, dystopian science fiction.
[2] Eugene Volokh, Generative AI and Political Power, 30 UCLA J.L. & Tech. 272 (2025).