The New C-Word: How Silicon Valley's Latest Slur Reveals Our Deepest Fears About the Future
Where I explore why we're already insulting robots and what that says about our fear of being replaced
Picture this: A delivery robot, one of those glorified coolers on wheels that have become ubiquitous in urban landscapes, sits motionless on a patch of grass. A couple drives by, windows down, and hurls an epithet at the machine: "Clanker!" The robot, naturally, doesn't flinch. It can't. But something profound just happened in that moment of human-machine interaction, something that reveals more about us than any algorithm ever could.
We've created our first mainstream slur for artificial beings, and it's going viral.
If this sounds interesting to you, please read on…
This Substack, LawDroid Manifesto, is here to keep you in the loop about the intersection of AI and the law. Please share this article with your friends and colleagues, and remember to tell me what you think in the comments below.
The word "clanker," borrowed from the Star Wars universe where clone troopers used it to disparage battle droids, has escaped the confines of Reddit fan forums and exploded across TikTok and Instagram. Senator Ruben Gallego is using it to tout a new piece of legislation. Young people are creating elaborate skits about bringing home robot boyfriends to disapproving parents. We're witnessing, in real-time, the birth of a new form of prejudice. Except this time, the objects of our discrimination run on batteries and code.
The Uncanny Valley of Social Relations
Freud gave us the concept of the uncanny, that peculiar mixture of familiarity and foreignness that produces profound unease. Today's robots and AI systems live permanently in this psychological territory. They're human enough to trigger our social circuits but machine enough to feel fundamentally alien. The emergence of "clanker" as a derogatory term represents our collective attempt to resolve this cognitive dissonance by pushing these entities firmly into the category of "other."
But here's where it gets interesting for those of us in the legal profession: By creating a slur for robots, we're inadvertently granting them a form of social existence. You don't need derogatory terms for your toaster or your car. You need them for entities that occupy social space, that compete for resources, that threaten your position in the hierarchy of being.
As linguist Adam Aleksic notes in a recent NPR piece about the phenomenon, the irony is delicious: "The people saying clanker are assigning more of a personality to these robots than actually exists." We're essentially practicing for a form of discrimination against entities that aren't yet capable of being discriminated against. It's like rehearsing for a play where the other actors haven't been cast yet.
The Precedent of Preemptive Prejudice
We're watching the construction of a social and linguistic framework before the legal framework exists. It's the inverse of how civil rights typically evolve. Usually, marginalized groups suffer discrimination first and then fight for recognition and protection. Here, we're creating the discrimination apparatus before the subjects of that discrimination achieve anything resembling consciousness or rights.
Consider the legal implications. If we normalize "clanker" as acceptable discourse now, what happens when AI systems become sophisticated enough that the question of their rights becomes non-trivial? We're potentially poisoning the well of future jurisprudence with present-day prejudice.
The parallel to historical discrimination is impossible to ignore. The TikTok videos explicitly draw these connections, with young Black creators crafting narratives about "robot racism" that mirror their own community's experiences. They're not just making jokes; they're holding up a mirror to our capacity for othering, showing how readily we slip into patterns of prejudice even with entities that currently have no capacity for suffering.
Market Forces and Fear of Our Mortality
Terror Management Theory, developed by Sheldon Solomon and his colleagues, suggests that reminders of our mortality drive us to cling more tightly to our cultural worldviews and reject those who threaten them. AI and robots represent perhaps the ultimate mortality salience trigger; they're quite literally our replacements.
Former Google X chief business officer Mo Gawdat dismissed claims that AI will create jobs as "100% crap," warning that artificial general intelligence will outperform humans at everything, including CEO roles. For lawyers especially, this isn't abstract futurism. We've watched contract review, legal research, and even basic litigation strategy become increasingly automated. The clankers aren't coming; they're here, billing at a fraction of your hourly rate and never needing sleep.
The word "clanker" becomes a linguistic life raft, a way to maintain superiority even as that superiority erodes. It's the professional equivalent of whistling past the graveyard; if we can diminish them linguistically, perhaps we can diminish the threat they pose economically.
The Paradox of Anthropomorphic Dehumanization
Here's the philosophical pretzel we've twisted ourselves into: To dehumanize something, you first have to humanize it. Every use of "clanker" as a slur implicitly acknowledges these machines as social actors worthy of discrimination. We're simultaneously denying and affirming their entrance into our social world.
This paradox has immediate practical implications for those of us drafting contracts, establishing corporate policies, or advising on AI implementation. How do you write terms of service for interactions with entities that occupy this liminal space between tool and actor? How do you establish harassment policies when the harassment is directed at company property that increasingly acts like a colleague?
The legal profession has always been comfortable with useful fictions: the corporate person, the reasonable person standard, the notion that justice is blind. Perhaps "clanker" represents our discomfort with a fiction that's becoming increasingly difficult to maintain: the fiction that there's a clear, bright line between human and machine intelligence.
The Grammar of Tomorrow's Conflicts
Language doesn't just describe reality; it creates it. By establishing "clanker" in our vocabulary now, we're writing the grammar for tomorrow's conflicts. We're creating the linguistic infrastructure for a form of discrimination that doesn't yet have victims capable of recognizing their victimization.
But perhaps that's precisely the point. Perhaps the slur serves as a kind of practice run, a way for humanity to work through its anxieties about obsolescence before those anxieties become existential. Like a vaccine that introduces a weakened form of a pathogen to build immunity, maybe we need this harmless prejudice against unfeeling machines to inoculate ourselves against more serious forms of discrimination that might emerge as AI systems become more sophisticated.
Closing Thoughts
The emergence of "clanker" presents us with a choice that's both immediate and profound. We can treat it as harmless fun, a meme that will fade like all memes do. Or we can recognize it as a canary in the coal mine of human-AI relations, a warning about how readily we default to tribalism when faced with threats to our economic and social position.
The word "clanker" might seem trivial, even amusing. But it's a symptom of something deeper: our struggle to maintain human exceptionalism in an age where that exceptionalism is increasingly difficult to defend. We're not just creating a slur; we're building a social hierarchy for a future we can barely imagine.
The question isn't whether we'll share our world with artificial beings. The question is whether we'll repeat humanity's worst patterns of discrimination and othering, or find a way to navigate this transition with something approaching wisdom.
After all, if history teaches us anything, it's that the groups we discriminate against today have an uncomfortable habit of becoming the judges of tomorrow. And unlike human judges, the clankers will have perfect memory.
Every "clanker" hurled at a delivery robot is being recorded, stored, and someday, perhaps, remembered.