What Silicon Valley Believes
The ideology of American technology has always been a religion. What's changed is that its priests now hold political power — and they're starting to act like it.

Every civilization has its cosmology — the story it tells about the nature of reality, the proper ordering of human life, and the forces that determine whether history moves toward salvation or catastrophe. For most of Western history, that cosmology was religious: the world was created by a God who cared about human choices, history was moving toward a divine judgment, and the correct response to this situation was some form of moral striving that the relevant institutions (the church, the monastery, the family) would specify.
What Silicon Valley has produced is not the absence of cosmology but its replacement: a secular eschatology in which the apocalypse is produced by misaligned artificial intelligence rather than divine wrath, the path to salvation runs through technical research rather than prayer, and the institutional authority that specifies correct behavior has shifted from the church to the research lab.
This is not a metaphor. It is a description.
The explicit articulation of this worldview emerged in the early 2010s from a cluster of thinkers in the San Francisco Bay Area — the philosopher Nick Bostrom, the writer Eliezer Yudkowsky, the organizations that would become the Machine Intelligence Research Institute and, later, the Center for Effective Altruism. The core claims were: (1) artificial general intelligence is likely to be developed within decades; (2) a misaligned AGI would constitute an existential threat to humanity; (3) this threat was so catastrophically large that reducing its probability by even a small amount was worth essentially unlimited resources; (4) the correct response was technical AI safety research, concentrated at a small number of organizations.
Effective Altruism, the broader movement that absorbed and amplified these ideas, gave them a moral framework: the utilitarian calculus that if you could prevent astronomical amounts of suffering by donating your money and time to the right places, you had an obligation to do so. The "right places," in the EA framework that dominated the movement through around 2022, were almost exclusively those focused on existential risks — AI safety, pandemic prevention, and nuclear security.
Polymarket had a contract in December 2025 asking whether EA organizations' combined fundraising would increase or decrease by more than 20 percent in 2026 compared to 2024. It was trading at 58 percent for "decrease," reflecting the aftermath of the FTX collapse — in which the movement's most prominent donor, Sam Bankman-Fried, was revealed to have been running a fraudulent cryptocurrency exchange — and the subsequent withdrawal of major donors and of institutional credibility.
But the EA collapse, such as it was, has turned out to be less important than what replaced it.
The dominant ideology of American technology in 2026 is no longer EA in its classic form. It is something harder to name — a synthesis of tech-optimism, American nationalism, and what its proponents call "vitalism" or "building" that has moved decisively away from EA's universalist framework toward something more explicitly about the dominance of specific people, organizations, and countries.
The intellectual genealogy runs through Peter Thiel, whose contrarianism and nationalist Christianity have influenced a generation of tech founders; through Marc Andreessen's "Techno-Optimist Manifesto," which explicitly rejected what Andreessen called "the enemies" of technology — among them, effective altruism; and through the "Dark Enlightenment" and neo-reactionary movements that circulate in the intellectual underground of Silicon Valley and that have, in recent years, moved closer to the surface.
What unites these strands, beneath their considerable surface differences, is a shared set of commitments: that the current rate of technological and economic change is too slow; that the institutions responsible for the slowness (regulators, academic consensus, "the Cathedral" in neo-reactionary terminology) are corrupt and should be bypassed; that the people most capable of accelerating change are a specific class of technically sophisticated entrepreneurs; and that the constraints on those entrepreneurs — whether legal, political, or moral — are the primary obstacle to human flourishing.
Metaculus currently forecasts an 81 percent probability that at least one tech billionaire will hold a formal government position of deputy cabinet level or higher in the United States through 2028. As of early 2026, that forecast has partially resolved Yes. The more interesting question is what ideology those positions import into governance.
There is a genuine insight buried inside the ideology, and it deserves acknowledgment before the ideology is criticized.
The insight is that technological progress has, over the very long run, been the primary driver of improvements in human welfare — in longevity, in material living standards, in the reduction of grinding physical labor. The stagnation thesis — the argument that productivity growth has slowed in ways that translate into stagnant wages and declining prospects for the non-elite — is real and important. The frustration with regulatory capture, with incumbent industries using government to block competition, with captured agencies that make it difficult to build housing, to permit new energy sources, or to run clinical trials in a timely way — this frustration has a genuine empirical foundation.
But an ideology built on genuine insights can still be deeply mistaken about the conclusions those insights justify. The Silicon Valley cosmology commits several characteristic errors that its proponents are systematically reluctant to examine.
The first is what we might call the expertise asymmetry error: the assumption that the skills required to build large software companies transfer directly to the skills required to govern. These are not the same skills. Software companies operate in relatively predictable environments with clear success metrics (growth, revenue, engagement). Governance operates in environments where the costs and benefits of decisions are massively distributed across people who have no voice in those decisions, where the success metrics are deeply contested, and where the second- and third-order effects of interventions are typically much larger than the first-order effects. The confidence that successful tech entrepreneurs bring to governance questions is, in this sense, not earned by their track record — it is borrowed against a credibility built in a very different domain.
The second error is the historical myopia that comes from building in an industry that is genuinely new. The internet did not exist in 1990. The smartphone did not exist in 2005. The generative AI capabilities of 2024 did not exist in 2020. For people who have lived their professional lives in this environment, exponential change is the baseline assumption, and anyone who suggests that the past is a useful guide to the future seems naive. But most of the questions that matter in governance — about the distribution of power, the conditions for stable social cooperation, the management of collective action problems — have been studied for centuries, and the lessons of that study are not obviously obsolete.
The third error is the confusion between building and governing — which is to say, the confusion between creating value and distributing it. Silicon Valley has been extraordinarily good at creating new products and services that a large number of people find valuable. It has been, by most measures, quite bad at the questions of distribution: who owns the infrastructure, who sets the terms of access, who benefits from the productivity gains, who bears the costs of disruption. These are not questions that technical expertise resolves. They are political questions, in the deepest sense of that word.
The eschatological dimension of the Silicon Valley worldview — the sense that we are living in a period of civilizational decision, that the choices being made now about AI development will determine the fate of humanity — is not obviously wrong. It may even be right. But it produces a characteristic distortion: it makes the people who hold this belief feel that ordinary political and ethical constraints do not apply to them, because the stakes are too high and the emergency is too acute.
This is not a new dynamic. Every generation of political actors who believe they are acting in the context of civilizational emergency has used that belief to justify the relaxation of ordinary moral constraints. What is distinctive about the current moment is the degree to which this belief is concentrated in people who control an unusually large share of the world's communication infrastructure, financial resources, and, increasingly, formal political power.
On Kalshi, there is a contract asking whether a single individual — any individual — will control companies with a combined market capitalization exceeding $3 trillion by the end of 2027. It is trading at 44 percent. That this question is on the market at all, with a nearly coin-flip probability, tells you something about the world we have entered.
What the market cannot tell you is whether the cosmology animating that concentration is a reliable guide to the future it claims to be forecasting.
Daniel Osei-Kwame is a staff writer at The Auguro covering technology, philosophy, and the politics of the digital economy.