Palantir AI Surveillance Manifesto Branded 'Technofascism'
Palantir CEO Alex Karp's 320-page AI manifesto defending surveillance contracts is being called 'technofascism' by philosophers. Here's what it means.
Palantir Technologies — a company that runs artificial intelligence (AI) surveillance contracts for U.S. Immigration and Customs Enforcement (ICE), the Israeli military, and the UK government — just had its CEO publish a 320-page book explaining the moral case for exactly that work. Two philosophers publicly called it "technofascism." The backlash was immediate.
The book, The Technological Republic: Hard Power, Soft Belief, and the Future of the West, was written by Palantir co-founder and CEO Alex Karp. A 22-point summary circulated on social media — and prompted Engadget to describe it as text that "reads like the ramblings of a comic book villain." That line spread across tech media because the company promoting the book isn't a think tank. It's an active government surveillance contractor with billions in federal contracts.
A 320-Page AI Manifesto — Ideology, Not Just Business Strategy
Most tech CEOs publish books about productivity or disruption. Karp published a manifesto — a formal declaration of political and moral beliefs designed to persuade, not just inform. At 320 pages, it's longer than most academic philosophy texts, and its central argument can be stated directly: Western democracies are failing because they've prioritized moral tolerance over technological and military power.
The 22-point social media summary released alongside the book argues that:
- Pluralism (the idea that multiple moral frameworks can coexist peacefully) is a form of civilizational weakness
- AI-powered surveillance and military systems are the only reliable guarantors of freedom
- Tech companies that refuse military and intelligence contracts are "morally cowardly"
- Hard power — military force, surveillance capacity, technological dominance — is a prerequisite for soft values like civil liberties
- Democratic deliberation (slow, compromise-driven policy-making) is the problem; AI-powered efficiency is the solution
Karp is not writing this from the sidelines. Palantir currently holds active contracts with ICE (the U.S. agency handling immigration enforcement and deportation), the Israeli military (where Palantir's AI targeting software is used in active operations), and the UK government (where Palantir holds data infrastructure contracts covering health records, immigration processing, and law enforcement analytics). When the CEO of a publicly traded company running three simultaneous surveillance infrastructures across allied democracies publishes a book arguing that moral limits on power are civilization's enemy — that's not an opinion piece. It's a corporate strategic document.
The Philosophers Who Named It 'Technofascism'
Mark Coeckelbergh, a Belgian philosopher specializing in the ethics of technology, was among the first academics to label Karp's ideology directly: "technofascism." The term describes political systems in which technology companies and military-industrial interests merge under the banner of national security — overriding democratic accountability in favor of efficiency and control.
This isn't a casual insult. Fascism has specific historical meaning: it refers to systems that consolidate power in a single authority, reject pluralism, and use force to enforce ideological conformity. "Technofascism" extends this framework to describe a version driven by corporate infrastructure rather than state institutions — a distinction that matters in 2026.
Greek economist and former Finance Minister Yanis Varoufakis issued a separate, starker warning: "AI-powered killer robots are coming" — his assessment of the real-world consequence of the ideology Karp's book represents. Varoufakis has written extensively about the political economy (the study of how power, money, and governance interact) of technology. His concern is structural: he's not arguing that Karp is personally dangerous. He's arguing that the logic being normalized — AI companies partnering with militaries on autonomous (self-operating) targeting systems, outside democratic oversight — represents a category-level threat to civilian control of lethal force.
Why "Technofascism" Is a Precise Claim, Not Hyperbole
Traditional fascism required a centralized state apparatus. Technofascism differs critically: the consolidating power is corporate. A company like Palantir holds infrastructure that governments now depend on. Its software runs deportation logistics, military targeting, and health data processing — simultaneously, across multiple democracies — with no single elected body overseeing the whole picture.
The question this raises is concrete: Who has democratic oversight of decisions made by Palantir's algorithms? If an ICE deportation operation is informed by Palantir's data fusion platform (software that combines records from dozens of disconnected databases into a unified surveillance profile of an individual), which elected body can audit that decision? If an Israeli military targeting decision uses Palantir's AI, who votes on the system's rules of engagement?
Coeckelbergh's characterization is precise because Karp's manifesto explicitly argues that these accountability gaps are features, not bugs. Democratic deliberation is the problem. AI-powered efficiency is the answer. That argument, coming from the CEO of an active surveillance contractor, is what philosophers and economists are responding to.
What Palantir's AI Surveillance Builds — and for Whom
Understanding why the philosophers' alarm over a CEO's book is worth taking seriously requires understanding what Palantir actually builds. The company specializes in data integration platforms — software that pulls records from dozens of disconnected government and commercial databases and creates searchable, unified profiles on individuals, populations, or entities.
In practice, that means:
- For ICE: Palantir's FALCON system (Federated Automated Case Life-cycle Nationwide, an AI-powered immigration tracking platform) maintains surveillance profiles on undocumented immigrants — including location history, known associates, financial activity, and travel patterns. This data directly informs arrest and deportation decisions.
- For the Israeli military: Palantir's AI systems are integrated into military targeting workflows, used in active conflict zones including operations in Gaza in 2024 and 2025. The Israeli government has been among Palantir's fastest-growing military clients.
- For the UK government: Palantir holds a major NHS (National Health Service) data infrastructure contract — meaning the company whose CEO argues democratic oversight is a liability also manages the health records of approximately 68 million people.
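The data-fusion pattern described above — joining records from disconnected databases into a single searchable profile — can be illustrated with a toy sketch. Everything below is hypothetical (the function name, field names, and data are invented for illustration); it is a conceptual model of record linkage in general, not a depiction of Palantir's actual software.

```python
# Toy illustration of "data fusion": merging per-person records from
# separate sources into one unified profile, keyed on a shared identifier.
# All names and data here are hypothetical.

from collections import defaultdict

def fuse_records(*databases):
    """Merge records from several (source_name, records) pairs into
    unified per-person profiles, preserving each field's provenance."""
    profiles = defaultdict(dict)
    for source_name, records in databases:
        for record in records:
            person_id = record["id"]
            for field, value in record.items():
                if field != "id":
                    # Namespace each field by source, so the merged
                    # profile records where each fact came from.
                    profiles[person_id][f"{source_name}.{field}"] = value
    return dict(profiles)

dmv = ("dmv", [{"id": "p1", "address": "12 Oak St"}])
travel = ("travel", [{"id": "p1", "last_flight": "2025-03-02"}])

profiles = fuse_records(dmv, travel)
print(profiles["p1"])
# {'dmv.address': '12 Oak St', 'travel.last_flight': '2025-03-02'}
```

The point of the sketch is the accountability question raised earlier: once two databases share an identifier, combining them into a unified profile is trivial — the hard questions are legal and democratic, not technical.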
Palantir's stock (ticker: PLTR) has been among the strongest performers in AI infrastructure in recent years, rising alongside growing government investment in AI-powered surveillance and analytics. The manifesto isn't separate from the business strategy — it is the business strategy, articulated as moral philosophy.
Three AI Surveillance Implications Worth Watching Now
For readers trying to understand what this means practically — not just philosophically — three concrete implications are worth tracking:
- The tech-military partnership debate is going public. Karp's book explicitly names and criticizes tech companies that refuse military contracts. Google, Microsoft, and Amazon have all faced internal employee protests over AI military work. Karp frames that refusal as moral cowardice. Expect those companies to face more explicit demands — from both governments and the public — for a clear position.
- The regulatory gap Palantir operates in is real — and the manifesto declares intent to stay there. No existing democratic body has authority over a company simultaneously running immigration enforcement AI, military targeting AI, and national health data infrastructure — across three separate democracies. The EU AI Act (Europe's new AI regulation framework, passed in 2024) has carve-outs (exceptions) for national security. The U.S. has no equivalent law. Watch what regulatory proposals emerge in response to the book's publication.
- This is an investor signal as much as an ideological document. A 320-page manifesto from a defense-AI CEO arguing that AI-powered surveillance is civilization's only hope is also a pitch to government budget committees and institutional investors (large organizations like pension funds that buy company stock). The book frames Palantir's business model as a moral necessity. That framing has direct implications for how the company competes for future government contracts.
Whether you're a policy researcher, a developer weighing which companies to work with, or someone whose data flows through Palantir's systems — the publication of this manifesto makes explicit what was previously implicit: Palantir's CEO believes democratic accountability is an obstacle, and AI surveillance is the answer. You can now judge that argument on its own terms. All 320 pages of it.