The Pitchforks Are Here
How a 2015 warning became a 2026 reality — and what citizens, not Silicon Valley, will have to do about it
In the spring of 2015, in a private dining room somewhere in Silicon Valley, a young co-founder of DeepMind named Mustafa Suleyman stood up at a Google board dinner and gave his presentation a title that the men in the room would have found tonally jarring: “The Pitchforkers Are Coming.”
He was not being theatrical. He was being literal.
Suleyman told the table that artificial intelligence would, within a decade, supercharge disinformation, displace tens of millions of jobs, and produce a public fury that would make the social media backlash look like a warm-up act. He argued that Google — and the broader industry — would have to share its wealth with the people whose livelihoods the technology was going to consume. He used the words universal basic income. Elon Musk, then a guest, nodded along.
Eric Schmidt thought the worry was overblown. Larry Page, in his characteristic whisper, said AI would create more jobs than it took. The conversation moved on. Eight months later, DeepMind’s AlphaGo beat one of the world’s best Go players in front of 200 million viewers — years ahead of when most experts thought such a thing was possible. The timeline was already wrong. The men best positioned to know it had already decided not to prepare.
That dinner is the hinge point of the story we are now living through. It is the moment when the people who controlled the technology decided, consciously, not to govern it.
Eleven years later, the pitchforks are no longer a metaphor.
The warning shots
Last month, someone threw a Molotov cocktail at Sam Altman’s house in San Francisco. A few days later, his property was hit by gunfire. No one was injured. No motive has been confirmed. But it was difficult, reading the news, not to think of the killing of UnitedHealthcare CEO Brian Thompson on a Manhattan sidewalk in late 2024, and the strange, ugly folk celebration that followed.
The writer Jasmine Sun, in a piece in The New York Times Magazine, called these incidents “AI populism’s warning shots.” The phrase is precise. They are not the populism itself. They are the noises that precede it — the rumbles a community makes before the ground gives way.
Altman, to his credit, has long understood that something like this was coming. As far back as 2016, he was telling reporters, in a tone that hovered between candor and confession:
“I prep for survival. I have guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur I can fly to.”
You do not stockpile potassium iodide because you are confident the social contract will hold. You stockpile potassium iodide because some part of you has already calculated the probability that it will not — and concluded that the calculation falls on you, personally, to survive that failure rather than to prevent it.
This is the psychological hinge that connects 2015 to 2026. The architects of the most consequential technology of our era have been preparing, for at least a decade, for the human backlash to their own work. They have done so by buying land, building bunkers, hiring security details, and applying for citizenship in countries with better natural defenses. What they have not done — what they conspicuously declined to do at that 2015 dinner, and have continued to decline to do at every fork in the road since — is the political work of ensuring that the backlash never becomes necessary in the first place.
Why people are angry
It would be easy, and wrong, to treat the rising anger at AI as irrational — a kind of techno-Luddism, the latest in a long American tradition of fearing the future.
The anger is rational. It is grounded in specific, documentable conditions. Citizens with no formal training in machine learning have nonetheless looked at the facts available to them and reached three conclusions that are, in fact, correct.
First, the displacement is real and is happening fast. Goldman Sachs estimates that AI could replace the equivalent of 300 million full-time jobs globally. OpenAI’s own researchers estimate that roughly 80 percent of the American workforce will see at least 10 percent of their tasks affected by large language models, with nearly one in five workers facing impacts on more than half. Anthropic’s CEO Dario Amodei — a man with every financial incentive to soft-pedal this — has warned publicly that AI could eliminate half of entry-level white-collar jobs within five years, and that most lawmakers “are unaware this is about to happen.”
When the people who build a system tell you it will destabilize the economy you live in, the rational response is not skepticism. It is preparation. The citizens are preparing. The government, mostly, is not.
Second, the wealth from this transition is being captured, not shared. A $4.5 million National Science Foundation grant seeded the research that became a $2 trillion company. A $50,000 CIA contract seeded another $400 billion one. DARPA’s network research — paid for by American taxpayers — seeded the entire internet economy. The argument that the public, having funded the underlying science, has some claim on the resulting wealth is not a radical proposition. It is the most ordinary kind of accounting. The fact that the accounting has not been done, and the wealth has flowed almost entirely to a few hundred people in a few square miles of California, is a political choice. It was made in specific rooms by specific people. The 2015 dinner was one of those rooms.
Third, the political system has, so far, declined to intervene. The federal AI framework released in March 2026 explicitly defers to markets and existing agencies. Only 12 percent of civilian federal agencies have completed AI adoption plans. The President’s Council of Advisors on Science and Technology, as currently constituted, includes twelve technology executives among its thirteen members. There is no labor economist on the council. No workers’ advocate. No community health researcher. The body charged with advising the President on the human consequences of AI is, by composition, structurally incapable of advising on those consequences.
The proposed federal response to mass displacement amounts to roughly $160 million for AI-related teacher development and $90 million for affected workers. Set that against a projection of 300 million jobs disrupted globally, and the numbers do not represent a policy. They represent a gesture.
People can do this math. They are doing it.
What the pitchforks actually are
The image of the pitchfork is older than industrial democracy itself. It is the implement that ordinary people pick up when the formal channels of accountability — the courts, the legislatures, the press, the ballot — fail to address what they can see with their own eyes.
In American history, the pitchfork has rarely been a literal weapon. It has been a threat of last resort that the rest of the democratic system uses to motivate itself. The Gilded Age robber barons did not surrender their monopolies because they had a change of heart. They surrendered them because Theodore Roosevelt and a generation of muckraking journalists made the alternative — the actual mob, the actual fire — look more expensive than concession. The New Deal did not arrive because Franklin Roosevelt was a more generous man than Herbert Hoover. It arrived because the country had spent four years watching breadlines and Hoovervilles and decided that a system that produced them could not be allowed to continue.
The pitchfork, in other words, is not the failure of democracy. It is one of democracy’s instruments. The failure is what happens when the people holding the pitchforks find that no functional institution will receive their grievance — that the courts are captured, the legislature is paralyzed, the press is distracted, and the powerful are bunkered in Big Sur.
That is the danger of the present moment. Not that Americans are angry about AI. They should be angry about AI, or at least about the political economy that is producing it. The danger is that the anger is arriving faster than the institutions that are supposed to channel it are able to respond.
We are watching the warning shots. The Molotov cocktail. The gunfire. The 2,400 lawsuits against social media companies that are previewing what AI liability litigation will look like a few years from now. The $375 million jury verdict against Meta. The Florida Attorney General opening a criminal investigation into OpenAI. The 80 percent of Brown University students who, surveyed last year, said they were afraid of the future.
This is not yet a movement. It is the noise a society makes before deciding what kind of movement to become.
What the AI industry got wrong about people
There is a particular kind of arrogance — common in Silicon Valley, but not unique to it — that consists in believing that the people who are good at building a thing are also, by virtue of having built it, the people best qualified to predict what it will do once it leaves their hands.
The 2015 dinner was a case study in this error. The people in that room were extraordinarily good at building artificial intelligence systems. They were extraordinarily bad at modeling how those systems would interact with the actual society — with the rent payments, the credentialing systems, the regional economies, the marriage markets, the church attendance patterns, the opioid prescription rates, the gun ownership statistics — into which they were about to be released.
Sun’s reporting captures this gap with unusual precision: the AI builders, she notes, were “first in line to see the power of these tools,” and because so much of their work had been transformed by them, they tended to “extrapolate to everyone.” But the real world outside Silicon Valley does not run on the cognitive patterns of a senior AI researcher at a frontier lab. It runs on credentials, geography, family obligations, debt, faith, habit, and a thousand other variables that are invisible from a corner office in SoMa.
What the industry missed is not technical. It is sociological. It is what the sociologist Richard Sennett called the narrative function of work — the way a job, even a mediocre job, gives a life its shape, its sequence, its sense of being headed somewhere. It is what the psychiatrist Viktor Frankl, writing from inside the Nazi camps, named as one of the three primary sources of human meaning. It is what Sigmund Freud, asked what a psychologically healthy person required, summarized in five words: to love and to work.
When the technology comes for the work, it is not merely coming for the paycheck. It is coming for the narrative. And human beings, when their narrative is taken from them without consent or compensation, do not become docile. They become combustible.
America already knows what this looks like. The deindustrialization of the 1970s and 1980s — the closure of the steel mills, the auto plants, the textile factories — did not merely produce unemployment statistics. It produced what the economists Anne Case and Angus Deaton named the deaths of despair: the surge in mortality from suicide, drug overdose, and alcoholic liver disease that helped drive American life expectancy down for three consecutive years, a reversal not seen since the 1918 influenza pandemic. The connection between economic displacement and the destruction of human bodies is not speculative. It is documented, county by county, in the nation’s mortality statistics.
If deindustrialization did that to the people who worked with their hands, the cognitive automation of the 2020s threatens to do something analogous to the people who were told, explicitly and repeatedly, that education was their insurance policy. The paralegal. The medical coder. The junior financial analyst. The customer service supervisor. The radiologist’s assistant. The accountant. These are people who followed every rule the meritocratic system gave them — who went into debt for the credentials, who delayed family formation for the career, who relocated for the job — and who are now being told, by the same system, that the rules have changed and that the adjustment is, regrettably, their problem.
The pitchforks are not coming from the people who never trusted the system. They are coming from the people who did.
What an adequate response looks like
Here is where this essay could easily go wrong. The temptation, having named the anger and validated its basis, is to channel it into the kind of catastrophic register that confirms the worst suspicions of the bunker-builders — the mob really is coming, batten down the hatches. That register is not useful. It produces paralysis on one side and fortification on the other, and it leaves the actual democratic work undone.
The harder and more honest argument is that the AI transition is not fated to produce a populist conflagration. It is fated to produce one only if the institutions of self-governance fail to respond to it in time. They have not yet failed. The window is narrowing, but it is open.
A response adequate to the scale of the moment would, at minimum, include the following elements — none of them radical, all of them debated in serious policy circles, several of them already on state ballots.
It would include an automation tax or analogous mechanism that redirects a share of AI-driven productivity gains toward public investment in workforce transition. The argument that capital should bear some of the cost of the transition it is profiting from is not a Marxist argument. It is the same argument that produced workers’ compensation, occupational safety law, and unemployment insurance — each of them, in their time, denounced as confiscatory, and each of them now understood as a precondition of a functioning labor market.
It would include a Public Wealth Fund that gives every citizen a direct stake in AI-driven growth, on the model that Alaska has operated for half a century without controversy. If the underlying science was publicly financed, the returns should not be exclusively privately captured. This is not a redistribution argument. It is a property-rights argument.
It would include portable benefits — health insurance, retirement contributions, training credits — that follow the worker rather than the job, because in a labor market where job tenure is collapsing, benefits tied to employers are benefits in name only.
It would include automatic safety-net stabilizers that activate when displacement metrics cross defined thresholds, so that the response to mass layoffs does not depend on Congress passing emergency legislation in a panic six months after the layoffs have already happened.
It would include a serious, scaled commitment to workforce retraining — not the thin, underfunded, scattershot version that currently exists, but something on the order of the GI Bill, with the public seriousness and the public budget that comparison implies. The current funding levels — $160 million here, $90 million there — are an insult to the scale of what they purport to address.
And it would include — this is the part the technology industry is least prepared to hear — a serious public conversation, with workers and economists and ethicists at the table and not merely AI executives, about which applications of AI we actually want and which we do not. The premise that any technically possible deployment is also socially desirable is not a finding. It is a libertarian assumption smuggled into the conversation as a fact. Democracies are entitled to decide otherwise.
None of these proposals will satisfy the most radical critics of AI. None of them, taken together, will fully prevent displacement. But taken together, they constitute the difference between a transition that working Americans navigate with dignity and one they absorb alone. That is not a small difference. That is the entire difference.
What citizens can do
If the federal government will not lead on this — and the current evidence is that it will not — then the work of channeling the legitimate anger of the AI transition into constructive political pressure falls to citizens themselves. This is not a consolation prize. This is, in the American constitutional tradition, exactly how the system was designed to work.
States are already filling some of the federal vacuum. Illinois, Texas, and Colorado each have AI workforce protections coming into force in 2026. Municipal AI accountability boards are emerging in a handful of cities. Worker AI councils, modeled loosely on the workplace safety committees of the early 20th century, are being organized inside several large unions. The People’s Conference on Technology and the American Workforce Future — a kind of populist counterweight to the President’s all-CEO advisory council — is one of several efforts to build the deliberative infrastructure that federal policy lacks.
These efforts are small. They are also, historically, exactly the kind of institutions that precede major democratic course-corrections. The New Deal did not arrive without a labor movement. The Progressive Era did not arrive without muckraking journalism and the settlement house. Civic infrastructure is the thing that translates diffuse anger into specific demand. Without it, the anger curdles. With it, the anger legislates.
The reader of this essay, then, has a more useful set of questions than “Should I be worried about AI?” They are:
What is my county doing to prepare its workforce for displacement, and who on the county commission is responsible for that work?
What is my state legislator’s position on automation taxes, portable benefits, and AI-related retraining funding, and have I asked them directly?
What civic, labor, faith, or community organization in my region is doing the work of organizing displaced workers, and have I offered them time, money, or expertise?
Which of the candidates running in 2026 — at every level of the ballot, not just the marquee races — has thought seriously about this, and which is offering platitudes?
These are not glamorous questions. They are the questions that decide whether the energy of this moment becomes a New Deal or a riot.
The choice in front of us
The men at the 2015 Google board dinner had a chance to choose something other than what they chose. They could have decided, in that room, that the appropriate response to a technology of this magnitude was to share its benefits broadly, to absorb some of its costs collectively, and to invite the public into the decisions about its deployment. They chose, instead, to treat the warning as overblown and the warner as alarmist, and to continue building.
The pitchforkers Suleyman warned them about were not, in his telling, the enemy. They were a forecast. They were what would happen if the people in that room failed to act on what they themselves already knew.
Eleven years on, the forecast has arrived. A Molotov cocktail at Altman’s house. Gunfire a few days later. A criminal investigation in Florida. Thousands of lawsuits. The first nine-figure jury verdict. An 80 percent fear rate among college seniors at one of the country’s most prestigious universities. The polling numbers on AI trust, on every survey, in steady decline.
There is still time to choose something other than what the 2015 dinner chose. The choice is not between AI and no AI — that question was decided, for better or worse, by the same dinner. The choice is between an AI transition that produces broadly shared flourishing and one that produces broadly shared resentment. Between one that strengthens the democratic institutions of the country and one that overwhelms them. Between one that ordinary working Americans look back on as the moment their society renewed its social contract and one they look back on as the moment it was finally broken.
Jefferson, who knew something about technologies of disruption, put the standard plainly: the care of human life and happiness, and not their destruction, is the first and only legitimate object of good government. That is the test. It is the only test. We will pass it or we will not, and the people who decide are not the ones in the bunkers. They are the ones reading this, and the ones in the next county over, and the ones who will mark a ballot in November.
The pitchforks are here. The question is what the citizens holding them, and the institutions receiving them, are going to do next.
Moonshot Press is a project of the Institute for Salutogenesis and a cornerstone of the Democracy, Opportunity and Citizenship Moonshot. We are nonpartisan, constitutionally grounded, and committed to the proposition that an informed citizenry is not a luxury of democratic life — it is its precondition.
Subscribe at moonshot.press