The Attention Merchants, Redux
TikTok taught us what the attention economy really is. Now that its fate is uncertain, we have to decide what we actually want from the platforms that shape our minds.

Tim Wu's 2016 book The Attention Merchants told the story of how the media industry, from the first newspaper advertising to television to the early internet, had progressively monetized human attention — packaging it as a commodity and selling it to whoever would pay. The book was prescient in many ways, but it could not have fully anticipated what the decade after its publication would produce: platforms so sophisticated in their understanding of human psychology, and so precise in their ability to exploit it, that the commodity being sold was no longer just attention but something closer to the direction of consciousness itself.
TikTok is the clearest example. Not because it is uniquely malevolent — it is probably not — but because it is the platform that made explicit, through its outcomes, what the attention economy had always been moving toward. In 2020, the median TikTok session lasted roughly 10 minutes; by 2024, the average American user was spending 34 minutes on the app every day. The platform's recommendation algorithm, trained on behavioral signals too granular for any individual user to be aware of, had learned to hold human attention with an efficiency that no previous media technology had approached.
The question of what that efficiency costs us — individually, collectively, neurologically, politically — is one of the most important questions we could be asking. We are, instead, primarily debating whether the app should be owned by an American company or a Chinese one.
On Kalshi, the contract on whether TikTok remains available to U.S. users under its current ownership structure through the end of 2026 was trading at 38 percent as of January. The contract on whether it gets acquired by an American buyer before mid-2026 was at 42 percent. The specific outcome matters less than what the entire episode reveals about how we have chosen to frame the problem.
The bipartisan consensus that has coalesced around TikTok is that the platform is dangerous because it is Chinese. The concern, as articulated in congressional hearings and in the legislation that passed in 2024, is about data security — that ByteDance might share American user data with the Chinese government — and about content influence — that ByteDance might use the algorithm to manipulate American political opinion in ways favorable to Beijing's interests.
These are not implausible concerns. But they are narrow in a way that allows the larger issue to go unexamined. The attention-extraction model that TikTok has perfected is not a Chinese invention. It is an American one. The techniques of behavioral modification through variable-ratio reinforcement that TikTok exploits — the same unpredictable-reward schedules that made slot machines so effective — were first described by B.F. Skinner. Their application to digital media was pioneered by American companies: Facebook, YouTube, Twitter, Snapchat. ByteDance did not invent the attention-harvesting business model; it built a more efficient engine for executing it.
If TikTok is acquired by an American company and the algorithm is run by people with U.S. security clearances, the fundamental dynamic will not change. The app will still be optimized to hold attention for as long as possible. The behavioral modification will still be occurring. The question of what 34 minutes of daily TikTok-equivalent content is doing to the political epistemology of a generation will remain as unanswered as before.
What does the research actually say about what social media is doing to cognition, attention, and political belief?
The honest answer is: more than we'd like, and less than the most alarming claims suggest. The literature is genuinely contested in ways that public discourse rarely acknowledges.
Jonathan Haidt and Jean Twenge have argued, with considerable data, that the rise of smartphone-mediated social media correlates with significant increases in anxiety, depression, and loneliness among adolescents — particularly girls — in English-speaking countries. The timing of the deterioration in mental health indicators, beginning around 2012 in the United States and coinciding with the widespread adoption of Instagram, is suggestive. Haidt's most recent work argues that the mechanisms are disrupted sleep, displaced time with friends, and the psychological burden of performing identity in public for a demanding audience.
The critics of this research — most prominently Candice Odgers at UC Irvine — argue that the correlation, while real, does not establish causation, and that the effect sizes in most studies are too small to bear the weight of the claims being made. A 2024 meta-analysis of 226 studies on social media and adolescent mental health found effects in the expected direction but, on average, of a magnitude comparable to the effect of wearing glasses on school performance — statistically significant but unlikely to explain the scale of the mental health crisis being documented.
The truth is probably that social media is one significant contributor to a complex phenomenon that also includes economic anxiety, declining religious and civic participation, housing costs, and the collapse of face-to-face social infrastructure. Blaming the algorithm for everything is too simple. Dismissing its effects because they're hard to isolate is irresponsible.
The political effects are even harder to establish rigorously, partly because "political radicalization" is difficult to define and measure, and partly because the platforms that control the most relevant data have not been particularly cooperative with independent researchers.
What we do know: the platforms' own internal research (made public through the Facebook Papers and similar disclosures) found that their recommendation algorithms amplified divisive and emotionally provocative content because such content generated more engagement. The business logic was straightforward: a user who sees an outrageous post and spends three minutes arguing in the comments is more valuable, in advertising terms, than a user who reads something informative and moves on. The algorithm did not intend to radicalize anyone. It intended to maximize engagement, and radicalization was a byproduct.
Metaculus has a long-range question asking whether social media regulation in the United States will meaningfully reduce "affective polarization" — the degree to which Americans dislike members of the opposing party — by 2030. The median forecast is 14 percent probability of success. This is not a statement that regulation won't pass. It is a statement that even if regulation passes, it is unlikely to reach the core mechanism by which the platforms have been politically consequential: not by promoting specific ideologies, but by promoting emotional intensity as such.
The difference matters enormously for policy. Regulating TikTok as a national security threat addresses one specific application of the attention-harvesting model. Addressing the attention-harvesting model itself — which is to say, the fundamental business logic of surveillance capitalism — requires a different and much more demanding regulatory framework. It requires deciding, as a society, whether the right to monetize human attention is something that can be sold and traded without limit, or whether there are conditions under which it constitutes a public health risk substantial enough to warrant intervention.
That conversation has barely started. Meanwhile, the Kalshi contract on whether any major U.S. social media platform implements a binding daily time limit for users under 18 by the end of 2026 was trading at 31 percent. The market, as usual, is more pessimistic about the prospect of meaningful reform than the advocates who are pushing for it.
Selin Çelik is a staff writer at The Auguro covering digital culture, platform governance, and the politics of technology.