James C. Cooper

When a price fails fully to transmit relevant information (or transmits false information), consumer decisions are faulty, reducing societal welfare. This seems to describe the online content market, where apps and websites are free but collect information in ways that some consumers may find harmful and may not fully appreciate. In a full-information world, even though the nominal price is $0, if consumers are able to estimate potential privacy harms, firms still have incentives to engage only in data practices that are net beneficial (to society). But when consumers do not appreciate potential harm, firms do not internalize it and may engage in net harmful data practices. Regulatory solutions, accordingly, tend to focus on controlling the externality by limiting certain data practices—an approach that can be fraught with error given consumers’ heterogeneous preferences for privacy and online content. When asymmetric information makes it hard for consumers to appreciate the full harm associated with a product—and firms have superior information about likely harms and are in the best position to remedy them—strict liability has been shown to lead to efficient care decisions. Because of the contractual relationship, moreover, strict liability allows the costs of care and residual risk to be transmitted via price, leading to efficient activity levels. The efficiency of this solution, however, rests on an authority’s ability to estimate harm accurately. As such, strict liability could be useful when data collection can lead to monetary harms—e.g., payment card fraud—that are readily quantifiable and felt roughly equally across consumers. But in cases involving intangible harm from data practices—e.g., tracking of persistent identifiers—strict liability is problematic for two reasons: first, when ad-supported content is provided for free, there is no price to transmit harm and care-cost information to consumers; and second, because intangible harms are felt heterogeneously, consumers have private information, likely rendering an authority’s estimates of harm inaccurate. That said, errors in estimating harm are less costly than errors involved in setting standards. Ideally, one would like to harness the market’s sorting mechanism, allowing consumers to select the data practices they find net beneficial. The unraveling principle suggests that consumers do not need to understand the fine details of privacy policies: firms with better policies have incentives to reveal those favorable terms to consumers to win their business. This incentive to compete on privacy exists as long as (1) consumers value additional privacy sufficiently to justify the cost of revelation, and (2) firms can credibly commit to their privacy promises. Whether condition (1) holds is uncertain—the costs of revelation may be small, but the consumer response, given the tradeoffs between data-driven benefits and privacy harms, is unclear. Where regulators may have a role to play, however, is in facilitating condition (2) through vigorous enforcement—including monetary remedies—against deception involving privacy promises.
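
One way to see the internalization point is a minimal sketch with assumed notation (the symbols below are illustrative and do not appear in the original): let a data practice generate revenue $r$ for the firm and expected harm $h$ to the consumer, and let $\hat{h}$ be the harm the consumer perceives and therefore demands compensation for through the price. Then

\[
\text{adopt (social optimum)} \iff r > h, \qquad \text{adopt (firm)} \iff r > \hat{h}.
\]

With full information ($\hat{h} = h$) the two criteria coincide; when consumers underestimate harm ($\hat{h} < h$), every practice with $\hat{h} < r < h$ is privately profitable but net harmful. That wedge is the externality the regulatory approaches discussed above attempt to control.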
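
The efficient-care claim can likewise be illustrated with the standard unilateral-care model from the accident-law literature (a hedged sketch; the functional forms are assumptions, not drawn from the text): the firm chooses a level of care $x$ at cost $c(x)$, which reduces the probability $p(x)$ of a harm of size $h$, with $c' > 0$ and $p' < 0$. Social costs are

\[
c(x) + p(x)\,h,
\]

and strict liability charges the firm expected damages $p(x)\,h$, so the firm's private objective coincides with the social one and efficient care satisfies $c'(x^{*}) = -p'(x^{*})\,h$. If the authority instead awards damages based on an estimate $\hat{h} \neq h$, the firm solves the same problem with $\hat{h}$ in place of $h$, taking too little care when $\hat{h} < h$ and too much when $\hat{h} > h$; this is why the efficiency of the solution rests on accurate harm estimates.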
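
Finally, the unraveling principle (usually credited to Grossman and Milgrom) can be stated compactly; the threshold condition below is an illustrative formalization of condition (1), not a result from the text. Suppose privacy quality $q$ is verifiable, disclosure costs $k \ge 0$, and consumers' willingness to pay for quality is $v(q)$. Non-disclosing firms are pooled and valued at the average of the silent types, so with $k = 0$ every firm above the very worst type discloses and the market fully unravels. With $k > 0$, disclosure stops at the marginal type $q^{*}$ that is just indifferent between separating and pooling:

\[
v\!\left(q^{*}\right) - v\!\left(\mathbb{E}\left[\,q \mid q \le q^{*}\,\right]\right) = k.
\]

Condition (2), credible commitment backed by deception enforcement, is what makes $q$ verifiable in the first place.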