The Surveillance Economy Is Entering Its Second, More Consequential Phase
The first phase of surveillance capitalism — behavioral data collection for advertising targeting — is well documented. The second phase, in which behavioral data is used for credit, insurance, employment, and social control, is less visible and more consequential.

Shoshana Zuboff's The Age of Surveillance Capitalism (2019) provided the conceptual vocabulary for understanding the first phase of the surveillance economy: the systematic extraction of behavioral data from digital activity, the transformation of that data into predictive products, and the sale of those products to advertisers seeking to modify human behavior. The framework was timely and clarifying. But the book was written at the beginning of a transition, and the surveillance economy that is now operating is materially different from the one Zuboff analyzed.
The advertising-targeting application of behavioral surveillance was, in retrospect, the relatively benign phase. Advertising-targeted surveillance primarily modifies consumer choices — which products you buy, which services you use. The second phase of the surveillance economy is extending behavioral data collection to decisions with far greater consequences for individual lives: creditworthiness, insurance risk, employment suitability, housing access, and, in some jurisdictions, social compliance. The transition from advertising to consequential life decisions changes the power relationship in ways that demand different analytical frameworks and different regulatory responses.
The Signal
The insurance industry signal is the clearest indicator of the transition. Auto insurers in the United States now routinely use telematics data — continuous monitoring of driving behavior through smartphone apps or installed devices — in pricing models. Usage-based insurance programs that price risk based on actual driving behavior (speed, braking, time of day, location) rather than demographic proxies are now offered by every major auto insurer and have reached roughly 15-20% market penetration.
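The pricing logic can be sketched in a few lines. This is a toy model under invented assumptions — the weights, penalty structure, and multiplier range below are illustrative only; real insurer models are proprietary and far more complex.

```python
# Toy sketch of usage-based insurance pricing: behavioral inputs are
# reduced to a driving score, and the score sets a premium multiplier.
# All weights and thresholds are invented for illustration.

def telematics_score(hard_brakes_per_100mi, pct_night_driving, avg_mph_over_limit):
    """Return a 0-100 driving score (higher score = safer profile)."""
    score = 100.0
    score -= 4.0 * hard_brakes_per_100mi            # braking events penalized
    score -= 0.5 * pct_night_driving                # late-night miles penalized
    score -= 2.0 * max(0.0, avg_mph_over_limit)     # sustained speeding penalized
    return max(0.0, min(100.0, score))

def premium(base_rate, score):
    """Map the score to a premium multiplier between 0.7x and 1.4x."""
    return base_rate * (1.4 - 0.7 * (score / 100.0))

gentle = telematics_score(1.0, 5.0, 0.0)    # mostly daytime, smooth braking
harsh  = telematics_score(6.0, 30.0, 8.0)   # hard braking, night miles, speeding
print(premium(1200, gentle))  # discounted relative to the $1200 base rate
print(premium(1200, harsh))   # surcharged relative to the base rate
```

The point of the sketch is the structure, not the numbers: continuous behavioral monitoring replaces demographic proxies as the pricing input.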
The expansion from auto insurance is underway. Life insurance companies are exploring wellness app data as underwriting inputs; Progressive's home insurance uses behavioral data from smart home devices; health insurers are examining wearable device data in employer wellness programs. Each expansion represents an extension of continuous behavioral monitoring into a domain where the stakes — the cost of insurance, the availability of coverage — are higher than in advertising.
The employment application is less visible but equally significant. Resume screening AI (analyzing language, formatting, and self-presentation) is now used by a majority of large US employers. Video interview AI (analyzing speech patterns, facial expressions, and sentiment in recorded interviews) is used by a substantial and growing fraction. Background check services have expanded their data collection to include social media monitoring, location history analysis, and behavioral inferences drawn from commercial data.
The Historical Context
The history of consequential surveillance has a longer arc than the surveillance capitalism framework suggests. Credit reporting — the systematic collection and distribution of information about individual financial behavior to inform lending decisions — has existed in the United States since the 19th century. The Fair Credit Reporting Act (1970) established the regulatory framework governing credit reports precisely because the consequential nature of credit decisions (affecting housing, employment, and financial access) required stronger protections than the market provided.
The contemporary surveillance expansion is, in important respects, a generalization of the credit reporting model: behavioral data is collected, processed into risk assessments, and used to inform consequential decisions about individuals. What is new is the scope of data (far beyond financial transactions), the sophistication of inference (algorithmic models that identify patterns invisible to human review), and the speed of deployment (these systems were built at a pace that has substantially outrun the regulatory frameworks designed for credit reporting).
The Chinese social credit system is typically cited as the endpoint toward which surveillance economy expansion leads. The comparison is partly useful and partly misleading. The Chinese system is explicitly state-operated and explicitly oriented toward political compliance; the US and European surveillance expansion is primarily private-sector operated and primarily oriented toward commercial risk assessment. These are different phenomena with different power structures. But the consequential similarities — that behavioral data is being used to determine access to important life resources — are real enough to make the comparison analytically instructive if politically imprecise.
The Mechanism
The behavioral surveillance expansion is proceeding through three converging channels.
Data ecosystem integration: The silos that historically separated commercial, financial, health, and government data are being breached through data broker markets, API integrations, and corporate consolidation. Data brokers — companies whose business model is aggregating and selling behavioral data — now compile dossiers on most American adults that include location history, purchase history, social media activity, search history, and inferences drawn from all of the above. This data is available for purchase by insurers, employers, landlords, and anyone else willing to pay.
Algorithmic risk assessment: The translation of behavioral data into consequential decisions requires models that identify patterns in large datasets — a task for which machine learning is genuinely well-suited. The proliferation of algorithmic decision-making in insurance underwriting, credit scoring, employment screening, and tenant selection has been documented extensively. What is less understood is the degree to which these models are opaque not just to the individuals they affect but to the organizations using them: the model produces a score, the score drives a decision, and neither the organization nor the individual typically understands which specific behaviors drove the score.
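The opacity pattern described above can be made concrete. In the sketch below, a vendor model stands in for a proprietary scoring system: only a single score crosses the API boundary, and neither the deciding organization nor the applicant sees which features drove it. The vendor function, feature names, and cutoff are all invented — a hash is used purely as a stand-in for a model that is deterministic but inscrutable from the outside.

```python
# Sketch of score-mediated decision-making: many behavioral features go
# in, one unexplained score comes out, and the score drives the decision.

import hashlib

def vendor_risk_score(features: dict) -> int:
    """Stand-in for a proprietary model. Deterministic, but exposes no
    per-feature explanation; returns a score in the familiar 300-850 range."""
    blob = repr(sorted(features.items())).encode()
    digest = int.from_bytes(hashlib.sha256(blob).digest()[:4], "big")
    return 300 + digest % 551

def underwriting_decision(features: dict, cutoff: int = 600) -> str:
    """The organization consumes only the score, not the reasoning."""
    score = vendor_risk_score(features)   # only this number crosses the boundary
    return "approve" if score >= cutoff else "refer"

applicant = {"night_driving_pct": 22, "address_changes_5yr": 3,
             "grocery_spend_ratio": 0.31, "app_sessions_per_day": 14}
print(underwriting_decision(applicant))
```

Nothing in this pipeline surfaces which behavior moved the score — which is precisely the structural problem: the opacity is not a bug in any one component but a property of the score-as-interface design.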
Regulatory arbitrage: The regulatory frameworks governing consequential data use — FCRA for credit, HIPAA for health, FERPA for education — were designed for specific data categories and specific institutional contexts. The behavioral surveillance economy operates in the gaps between these frameworks: location data is not a medical record but can reveal health conditions; purchase history is not a financial record but can infer financial status; social media behavior is not an employment application but is used in employment decisions. The gap between behavioral reality and regulatory framework is the space in which the surveillance expansion has proceeded most rapidly.
Second-Order Effects
The insurance market implications are creating adverse-selection dynamics. Usage-based insurance programs attract drivers who believe their driving is safer than average — the people least likely to have accidents. This leaves the standard-pricing pools with higher-risk drivers, increasing costs for those who decline behavioral monitoring. The expansion of telematics pricing will eventually create a two-tier insurance market: low-cost insurance for those who accept continuous monitoring, high-cost insurance for those who do not. The economic pressure toward accepting monitoring will be substantial.
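The arithmetic of this pool-splitting dynamic is simple enough to simulate. The expected claim costs below are illustrative assumptions, not industry figures.

```python
# Toy adverse-selection simulation: low-risk drivers opt into monitored
# pricing, which raises the average cost of the remaining unmonitored pool.

def pool_average_cost(drivers):
    """Average expected annual claim cost across a pool."""
    return sum(drivers) / len(drivers)

# Expected annual claim cost per driver, lowest risk to highest (assumed).
drivers = [400, 500, 600, 800, 1000, 1400]

before = pool_average_cost(drivers)  # price everyone pays in one shared pool

# Drivers whose expected cost is below the shared price gain by accepting
# monitoring; the rest stay in the standard pool.
monitored   = [c for c in drivers if c < before]
unmonitored = [c for c in drivers if c >= before]

print(pool_average_cost(monitored))    # monitored pool prices below old average
print(pool_average_cost(unmonitored))  # remaining pool prices above old average
```

Iterating the same opt-out step on the shrinking standard pool is what drives the two-tier outcome: each round, the cheapest remaining drivers have a new incentive to accept monitoring.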
The credit access implications of alternative data in lending are mixed in ways that complicate the standard surveillance critique. For populations with thin credit files — recent immigrants, young adults, people who have historically avoided credit — alternative behavioral data (rental payment history, utility payment history, cash flow patterns) can expand credit access that the traditional FICO model denies. The surveillance expansion in credit has a genuine inclusion argument as well as the exclusion risk that gets more attention.
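The inclusion argument can be illustrated with a minimal sketch: a thin-file applicant who is unscorable under a traditional file-based model can still be scored from payment regularity. The scoring rule below is invented; real alternative-data models weigh many more inputs.

```python
# Hedged illustration of thin-file scoring from alternative data
# (e.g. rent and utility payment history). The 300-850 mapping is an
# invented analogy to familiar credit-score ranges.

def alt_data_score(on_time_payments, total_payments):
    """Score an applicant by on-time payment rate; None if no history at all."""
    if total_payments == 0:
        return None                        # no data: still unscorable
    rate = on_time_payments / total_payments
    return round(300 + 550 * rate)         # map payment rate onto 300-850

print(alt_data_score(0, 0))    # no traditional file, no alternative data
print(alt_data_score(23, 24))  # two years of rent + utilities, one late payment
```

The same mechanism carries the exclusion risk the paragraph notes: once payment behavior is scorable, a few late utility bills become a priced-in liability rather than invisible noise.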
The chilling effects on behavior are the most difficult to quantify and the most consequential for democratic society. When behavioral data is used for consequential decisions, the rational response to surveillance is behavioral modification — performing the behaviors that algorithmic systems reward and suppressing the behaviors they penalize. The aggregate effect of this rational individual adaptation is a population that moderates its behavior in response to surveillance infrastructure whose specific evaluation criteria are opaque. This is the dynamic that the Chinese social credit system makes explicit; the private-sector surveillance economy produces the same dynamic without the explicit state mandate.
What to Watch
Insurance behavioral data regulatory action: State insurance regulators are the primary oversight body for insurance underwriting. Watch for state insurance commissioner actions on telematics data use, algorithmic underwriting models, and the use of third-party behavioral data in insurance pricing. California, New York, and Washington have historically led insurance regulatory development.
CFPB alternative data guidance: The Consumer Financial Protection Bureau has jurisdiction over credit and lending practices. Watch for guidance on alternative data use in lending — particularly rules governing which behavioral data categories can be used in credit decisions and what disclosure is required when algorithmic models drive lending decisions.
EU AI Act enforcement on high-risk AI: The EU AI Act classifies AI systems used in employment, credit, and education as "high-risk" and requires conformity assessments, transparency, and human oversight. Watch for the first enforcement actions under these provisions — they will establish the practical interpretation of the high-risk category and create the compliance precedents that multinational companies will need to follow.
Data broker regulation: Federal and state legislative activity on data broker regulation is the most direct response to the behavioral data ecosystem that underlies the surveillance expansion. Watch for the California DELETE Act implementation and any federal data broker legislation that would require registration, access, and deletion rights for individuals in commercial behavioral data databases.