The Auguro
Technology · Foresight Brief

The AI Jobs Question

Prediction markets are pricing a 40 percent chance of significant labor displacement by 2028. The economists who study this most carefully are more divided than either camp admits.

Daniel Osei-Kwame · February 28, 2026 · 14 min read
Illustration by The Auguro

The question of what artificial intelligence will do to employment is, at this point, simultaneously the most important economic question of the decade and one of the least reliably answered. The range of credible forecasts runs from "modest disruption manageable through standard labor market adjustment" to "unprecedented displacement requiring fundamental reimagination of how we distribute income." Both extremes have sophisticated advocates and empirical evidence in their support. Neither side is obviously wrong.

What has changed in the past eighteen months is the nature of the evidence available to adjudicate between these views. We now have two years of data on the deployment of large language models in white-collar work environments — data that the economists who had been making predictions from first principles suddenly have to contend with. And that data is, to put it plainly, unsettling in ways that complicate the optimistic narrative.

On Kalshi, the prediction market on whether U.S. unemployment among workers with bachelor's degrees or higher will exceed 6 percent before January 2028 was trading at 31 percent as of late February 2026. The base rate of bachelor's unemployment exceeding 6 percent in the post-WWII era is roughly 8 percent. The market is thus pricing the probability of this outcome at nearly four times its historical frequency — a significant signal, though not a certainty.
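The "nearly four times" claim can be checked directly from the two figures quoted above:

```python
# Market-implied probability vs. historical base rate (both figures from the text).
market_price = 0.31   # Kalshi contract price, late February 2026
base_rate = 0.08      # rough post-WWII frequency of bachelor's unemployment > 6%

ratio = market_price / base_rate
print(f"{ratio:.2f}x the historical frequency")  # 3.88x
```

A caveat worth keeping in mind: prediction-market prices embed risk premia and fees, so the ratio is an approximation of the market's true probability estimate, not a precise one.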


What the models are actually doing to work

The first wave of AI deployment, from roughly 2023 to 2025, concentrated heavily in what economists call "routine cognitive tasks" — the kind of work that involves applying established frameworks to well-defined problems. Legal document review, financial report summarization, customer service routing, basic coding assistance, and certain categories of data analysis all saw significant AI penetration in this period.

The consensus interpretation at the time was that this was consistent with the standard "automation substitutes for routine tasks while complementing non-routine ones" framework that economists have used since the work of David Autor and Daron Acemoglu in the early 2000s. AI would handle the boring parts of knowledge work; knowledge workers would move up the value chain to the judgment-intensive parts that AI couldn't handle.

This interpretation was never quite right, and the second wave of deployment has made its inadequacy apparent. The most recent generation of models — specifically those capable of what is now called "multi-step reasoning" and "agentic" task completion — are not substituting for the routine parts of knowledge work while leaving the complex parts intact. They are beginning to substitute for the complex parts.

A Goldman Sachs research note from January 2026 estimated that AI-capable systems could now fully automate — not assist, but fully replace — between 40 and 60 percent of the tasks performed by entry-level and mid-level professionals in legal services, financial analysis, software development, and management consulting. This estimate is disputed: some economists argue Goldman's methodology overstates task-level substitutability while understating the judgment and contextual understanding required for professional work. But even the skeptics now concede that the effects are larger and faster than most pre-2024 forecasts anticipated.


The productivity paradox

Here is the strange thing: if AI is genuinely automating a significant share of knowledge work, we should be seeing a measurable productivity surge in the sectors where deployment has been heaviest. Instead, we see something more complicated.

In software development — the sector with arguably the deepest and most sustained AI integration — productivity measures are genuinely up. GitHub's research suggests that developers using AI coding assistants complete well-defined tasks roughly 55 percent faster. Studies of AI adoption in customer service document effects of similar magnitude. These are large numbers.

But the aggregate productivity data for the economy remains frustratingly flat. The "productivity paradox" — named after Robert Solow's observation in 1987 that "you can see the computer age everywhere but in the productivity statistics" — appears to be reasserting itself with AI. The explanation that economists are converging on is not that the productivity gains are illusory, but that they are being absorbed by expansion of scope rather than reduction of cost. If an AI system allows a law firm to run three times as much document review on each matter with the same staff, while each matter still bills the same amount, then measured productivity (revenue per worker) appears unchanged even though the firm's real capacity has massively expanded.
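One way to see the accounting behind that scope-absorption story is to separate revenue per worker from real work done. All numbers below are hypothetical, chosen only to make the mechanism concrete:

```python
# Stylized illustration of productivity gains absorbed by expanded scope.
# A hypothetical firm: 100 matters per year, 20 staff, $50,000 billed per matter.
staff = 20
matters = 100
fee_per_matter = 50_000           # billing per matter, unchanged by AI
review_depth_before = 1.0         # document review per matter (arbitrary units)
review_depth_after = 3.0          # AI triples the review done per matter

revenue_per_worker = matters * fee_per_matter / staff
real_output_before = matters * review_depth_before
real_output_after = matters * review_depth_after

print(revenue_per_worker)                        # 250000.0, before and after
print(real_output_after / real_output_before)    # 3.0
```

Measured productivity (revenue per worker) is identical in both scenarios, while the real work performed has tripled — which is exactly how large firm-level gains can leave the aggregate statistics flat.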

The macroeconomic implication is significant: AI may be primarily a tool for incumbents to consolidate competitive advantage rather than a mechanism for broad-based wage growth. The productivity gains accrue to firm owners and shareholders, while workers in AI-adjacent roles face downward wage pressure from the availability of AI substitutes. This is a different distributional story than either the techno-optimist narrative ("AI raises all boats") or the techno-pessimist narrative ("AI eliminates jobs"). It is the story of technology as a mechanism of inequality — more capital share, stagnant labor share.

Metaculus maintains a forecast on whether the labor share of U.S. GDP will decline below 55 percent before 2030. The current level is approximately 58 percent, already historically low. The community's median forecast puts a 64 percent probability on a further decline. The forecasters are not predicting mass unemployment — they are predicting continued pressure on wages as a share of national income, which is in many ways harder for policy to address: it is diffuse, gradual, and invisible enough to resist political remedy.


What jobs survive and why

The research on which occupations are most and least susceptible to AI displacement has converged on a set of findings that are more nuanced than either the optimistic or pessimistic camps typically acknowledge.

The most durable jobs share a combination of characteristics: they require physical presence, they involve high-stakes judgment under uncertainty, they depend on relationships and trust that build over time, and they operate in domains where errors have large and immediate consequences. Plumbers are more durable than paralegals, not because plumbing is more cognitively demanding, but because the interface between human judgment and physical reality has proven far more resistant to automation than the interface between human judgment and text.

The intermediate category — the vast middle of knowledge work — is where the uncertainty is greatest. These are jobs that require genuine expertise, but expertise that can increasingly be approximated by large language models trained on the accumulated output of the relevant field. The radiologist who reads 10,000 chest X-rays develops a pattern-recognition ability that has taken decades to cultivate. An AI system trained on 50 million chest X-rays has a different kind of pattern-recognition ability that is arguably better on most dimensions. What the radiologist retains — accountability, contextual judgment, the ability to reason about the specific patient rather than the statistical distribution of patients — is real and important, but it is not as stable a competitive advantage as it appeared five years ago.

The IMF's October 2025 World Economic Outlook estimated that roughly 60 percent of jobs in advanced economies are in occupational categories "exposed" to AI in the sense that AI can perform at least some tasks within the occupation at human-comparable quality. Of these, they estimated roughly half are in a position where AI complements rather than substitutes — where the productivity gains make the human worker more valuable. The other half face net displacement risk.

That is, if the IMF's estimates are correct, roughly 30 percent of workers in advanced economies are in jobs where AI presents a net displacement risk over a 10-year horizon. This does not mean they will all lose their jobs. It means the equilibrium wage for their labor is under sustained downward pressure.
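The arithmetic behind that 30 percent figure is simple enough to make explicit. The two shares below are the ones quoted from the IMF estimate above:

```python
# Back-of-envelope reproduction of the IMF arithmetic cited in the text.
exposed_share = 0.60       # jobs where AI can perform at least some tasks
substitution_share = 0.50  # of those, the fraction facing net substitution

net_displacement_risk = exposed_share * substitution_share
print(f"{net_displacement_risk:.0%}")  # 30%
```

The figure is a product of two rough estimates, so its error bars compound: if either share is off by ten points, the headline number moves by five or more.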


The policy gap

What is striking about the policy debate on AI and labor is how small it is relative to the stakes. The legislative response in the United States has been limited to a handful of executive orders, some sectoral guidance from the NLRB on AI in collective bargaining, and the AI Safety Act's largely voluntary disclosure requirements.

The gap between the scale of the potential disruption and the ambition of the policy response reflects, in part, genuine uncertainty about what policies would help. Worker retraining programs have a poor track record. Universal basic income faces both fiscal and political constraints that make large-scale implementation unlikely in the U.S. political environment. Taxing AI-driven productivity gains to fund social insurance is conceptually straightforward but requires a political consensus that does not exist.

What the forecasters on Kalshi and Metaculus are pricing is not the probability of a policy response commensurate with the challenge. They are pricing the probability of the disruption occurring regardless. The contract on whether U.S. manufacturing and services employment falls more than 5 percent by 2029, without a corresponding decline in GDP, was trading at 38 percent in late February. That is, roughly, the probability that the AI labor story becomes the defining political issue of the next four years.

For what it's worth: the contract on whether AI-related labor displacement becomes the top voter issue in the 2028 presidential election was trading at 27 percent. Lower than you might expect, given everything. But nearly triple what it was eighteen months ago.


Daniel Osei-Kwame is a staff writer at The Auguro covering technology, artificial intelligence, and the future of work. He has reported from research labs at Stanford, MIT, and DeepMind.

Topics
artificial intelligence · labor · automation · economics · future of work

Further Reading

Technology

The Intelligence Illusion

AI systems have become extraordinarily good at producing outputs that look like thinking. This has led us to confuse the performance of intelligence with intelligence itself — a confusion with real consequences.

Marcus Webb · March 14, 2026
Technology

The Social Graph Is Dead

Facebook built its empire on a single idea: that mapping human relationships would be the most valuable thing in the history of commerce. The idea was right. The map was wrong.

Marcus Webb · March 3, 2026