The AI Fear Machine: How EA-Funded "Humans First" Uses Manufactured Panic to Build a Regulatory Superstate
David Sacks and Jordan Schachtel are sounding the alarm on what may be one of the most sophisticated astroturf operations in recent political history: "Humans First," a new organization co-founded by Joe Allen, a former correspondent for Steve Bannon's War Room. Bankrolled by Effective Altruism (EA) donors and positioned as a bipartisan grassroots movement against AI, its real agenda, they argue, is laying the groundwork for unprecedented government control over technology and speech.
The Playbook
The strategy is elegant and cynical. EA-aligned organizations — flush with hundreds of millions in funding — recognized that their progressive donor base and Silicon Valley pedigree made them toxic to the populist right. So they needed a vehicle. Enter "Humans First," which recruits conservative figures like Bannon to provide credibility while advancing a regulatory framework written by the same people who brought us longtermism and AI doomerism.
Schachtel's Dossier piece connects the dots: the EA movement's core premise is that certain enlightened individuals possess superior moral reasoning and should wield outsized influence over civilizational decisions. When that philosophy meets government power, you don't get "safety" — you get an unaccountable priesthood deciding what technology the rest of us are allowed to use.
What They Actually Want
Look past the warm rhetoric about "keeping humans in charge" and read the fine print of their legislative proposals:
- A federal agency with authority to mandate permits before AI development can even begin
- Emergency powers to shut down the entire frontier AI industry for six months at a stroke
- Authority to seize and destroy hardware and software
- Criminal liability for trading certain microchips without government-approved paperwork
- Governors granted emergency shutdown authority over AI systems in their states
This isn't regulation. This is the architecture of a technology police state, wrapped in the language of human dignity.
The Liberty Problem
From a liberty perspective, every red flag is firing:
1. Fear as the instrument of control. A 2025 Pew poll found Americans are five times more concerned about AI than excited. Rather than educating the public and letting free people make informed choices, these groups are weaponizing that fear to justify sweeping new powers. This is the same playbook used after 9/11 to pass the Patriot Act — manufacture urgency, suppress debate, consolidate authority.
2. Bipartisan cover for uniparty policy. The genius of recruiting populist-right figures is that it neutralizes the most natural opponents of government overreach. When Bannon is standing next to progressive labor unions demanding AI regulation, who's left to ask whether we actually need an unelected bureaucracy with the power to seize private property and criminalize chip sales?
3. Corporate regulatory capture disguised as populism. Anthropic — one of the leading AI companies — just gave $20 million to groups pushing AI regulation. As Sacks noted, this is a "sophisticated regulatory capture strategy based on fear-mongering." The incumbents want regulation because compliance costs crush smaller competitors. The people this hurts most are independent developers, startups, and open-source communities.
4. The Anthropic hypocrisy. The same company claiming moral authority over AI safety raised $30 billion, including investments from the Qatar Investment Authority — an authoritarian regime's sovereign wealth fund — while refusing to serve the lawful U.S. military. Taking billions from authoritarian governments raises no ethical flags, but American citizens building AI tools need a federal permit?
The Real Divide
This isn't left vs. right. It's liberty vs. control. The question isn't whether AI poses challenges — it does. The question is whether we respond by empowering individuals with knowledge and choice, or by handing a blank check to bureaucrats who will inevitably serve the interests of the powerful and connected.
History teaches us that emergency powers, once granted, are never returned. Agencies, once created, never shrink. And "temporary" restrictions on technology always become permanent gates controlled by the politically favored.
The right response to transformative technology is more freedom, more transparency, and more competition — not a new federal agency with the power to destroy your hardware and throw you in prison for selling the wrong chip.
Don't let fear be the leash they use to walk you into a cage.
Sources: Schachtel/Dossier on Anthropic & EA · Sacks on X · Schachtel thread · TIME on AI grassroots movement · Reason on EA's authoritarian AI push · Daily Signal on conservative AI divisions