30.1: Market Failures in Open Source Security

The open source ecosystem produces remarkable value—by some estimates, the replacement cost of widely-used open source software exceeds $8.8 trillion[1]. Yet the security investment in this infrastructure remains strikingly inadequate. The Log4j project, whose library runs in millions of applications handling billions of dollars in transactions, was maintained largely by volunteers in their spare time when Log4Shell was discovered. OpenSSL, which secures the majority of internet traffic, was maintained by two part-time developers when Heartbleed exposed a critical vulnerability in 2014.

This persistent underinvestment isn't a mystery if examined through an economic lens. The open source security challenge exhibits multiple well-documented market failures—situations where free markets fail to allocate resources efficiently. Understanding these market failures explains why good intentions and market forces alone cannot solve the problem, and why various forms of intervention—regulatory, philanthropic, and organizational—are necessary to achieve socially optimal security outcomes.

Public Goods and Free-Rider Problems

Economists classify goods along two dimensions: rivalrous (one person's use diminishes another's) versus non-rivalrous (use doesn't diminish availability), and excludable (access can be restricted) versus non-excludable (everyone can access). Public goods are both non-rivalrous and non-excludable—like national defense or clean air.

Open source security improvements exhibit public good characteristics. When a maintainer fixes a vulnerability in a library, that fix benefits every user of the library equally. The security improvement is non-rivalrous (one company's protection doesn't diminish another's) and largely non-excludable (the fix is available to all users). From an economic perspective, security in open source software functions as a public good.

Public goods suffer from the free-rider problem: rational actors have incentives to consume without contributing, expecting others to bear the costs. If security improvements will happen regardless of your contribution, why pay for them? Better to free-ride on others' investments.

The free-rider problem manifests clearly in open source:

  • Consumers don't pay: Organizations using Log4j, OpenSSL, and other critical libraries typically contribute nothing to their security—neither money nor engineering time
  • Contributors expect others: Even organizations that recognize the need often delay, hoping competitors will invest first
  • No exclusion mechanism: Projects cannot restrict security updates to those who fund them without abandoning the open source model

The result is predictable: investment falls below socially optimal levels because each potential contributor expects others to bear the cost. Everyone benefits from the security improvements, but no one has a direct incentive to pay for them—the economics of public goods create systematic underinvestment.
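The incentive structure can be sketched as a simple payoff calculation. The numbers below are hypothetical, chosen only to show the shape of the problem: a fix that is clearly worth producing for the ecosystem is never worth producing for any single organization acting alone.

```python
# Sketch of the free-rider incentive in a public-goods setting.
# All figures are hypothetical, chosen only to illustrate the structure.

N = 100            # organizations depending on the library
BENEFIT = 50_000   # value each org gets from the fix (non-rivalrous)
COST = 200_000     # cost of producing the fix (audit + patch + release)

# Social calculus: the fix is clearly worth producing.
total_benefit = N * BENEFIT                  # 5,000,000
socially_optimal = total_benefit > COST      # True

# Private calculus for any single organization:
# if it funds the fix alone, it pays the full cost for one share of benefit;
payoff_if_i_fund = BENEFIT - COST            # -150,000
# if someone else funds it, it gets the same benefit for free.
payoff_if_other_funds = BENEFIT              #  50,000

# Rational choice: wait for someone else to pay -> nobody pays.
rational_to_fund_alone = payoff_if_i_fund > 0   # False

print(socially_optimal, rational_to_fund_alone)  # True False
```

The gap between `socially_optimal` and `rational_to_fund_alone` is the free-rider problem in miniature: the equilibrium is underinvestment even though every actor computes correctly.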

Consider the numbers: Synopsys research found that 96% of commercial codebases contain open source components[2], yet the vast majority of companies using open source contribute nothing to the projects they depend on. This isn't irrational behavior—it's economically rational behavior in a public goods context. And it's precisely this rationality that creates underinvestment.

Information Asymmetry

Information asymmetry exists when one party to a transaction has more or better information than another. In the classic formulation by economist George Akerlof, sellers of used cars know their vehicles' true condition while buyers cannot easily assess quality—leading to market dysfunction.

Open source dependency selection exhibits severe information asymmetry. When developers choose a package, they observe:

  • Download counts and popularity metrics
  • Documentation quality
  • Feature completeness
  • API design and usability
  • Recent commit activity

What they cannot easily observe:

  • Security of the codebase
  • History of security practices
  • Vulnerability response capability
  • Quality of security-relevant code paths
  • Maintainer security expertise

This asymmetry means selection happens on observable dimensions (features, popularity, ease of use) rather than security quality. A beautifully documented package with a polished website may contain critical vulnerabilities, while a less-polished alternative may be substantially more secure. Users cannot tell the difference without expertise and significant investigation time that most don't have.

The asymmetry extends to maintainers themselves. Many maintainers don't know whether their code is secure—they wrote it to work, not necessarily to resist adversarial attack. Security requires specialized knowledge that most developers lack. Projects may be sincerely maintained without any awareness of lurking vulnerabilities.

Tools like OpenSSF Scorecard attempt to reduce information asymmetry by measuring observable security practices. However, even these proxies have limitations—a project can score well on process metrics while containing undiscovered vulnerabilities, and vice versa.
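The limits of proxy signals can be illustrated with a small Monte Carlo sketch. The model below is an assumption-laden simplification, not a claim about Scorecard itself: hidden security quality is a random value, and the observable score is that quality plus noise. Even with a moderately informative signal, selecting the best-scoring package often fails to select the most secure one.

```python
# Minimal sketch: choosing packages on a noisy, observable score.
# True security quality is hidden; the observable score (a stand-in for
# process metrics) equals the hidden quality plus uniform noise.
import random

random.seed(0)

def pick_by_score(n_candidates=10, noise=0.5):
    """Return True if the package with the best observable score
    is also the one with the best hidden security quality."""
    quality = [random.random() for _ in range(n_candidates)]
    score = [q + random.uniform(-noise, noise) for q in quality]
    best_by_score = max(range(n_candidates), key=lambda i: score[i])
    best_by_quality = max(range(n_candidates), key=lambda i: quality[i])
    return best_by_score == best_by_quality

trials = 10_000
hit_rate = sum(pick_by_score() for _ in range(trials)) / trials
# With this much noise the best-scoring package is frequently not the
# most secure one; a noiseless signal (noise=0) would always pick it.
print(round(hit_rate, 2))
```

Reducing the `noise` parameter is the economic function of better measurement: the closer the observable signal tracks hidden quality, the more the market can reward actual security investment.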

Negative Externalities

Externalities occur when a transaction imposes costs or benefits on parties not involved in that transaction. Pollution is the classic negative externality: a factory imposes costs on nearby residents who receive no compensation. Externalities cause market failure because decision-makers don't bear the full costs of their choices.

Vulnerabilities in open source software create significant negative externalities. When a maintainer releases code with a vulnerability:

  • Downstream consumers face security risk they didn't choose
  • End users of applications built on vulnerable code are exposed
  • Third parties may be harmed by breaches exploiting the vulnerability
  • The internet ecosystem faces degraded trust and increased attack surface

None of these affected parties had any role in the security decisions that exposed them to risk. The maintainer—who made the choices that determined security quality—doesn't bear these costs.

The supply chain amplifies externality effects. A vulnerability in a widely-used library propagates to every application using that library, each of which exposes its own users. A single decision by a single maintainer can affect millions of end users—a massive externality footprint.

Consider the externality chain from Log4Shell:

  1. Log4j maintainers included JNDI lookup functionality (security risk decision)
  2. Thousands of applications incorporated Log4j
  3. Millions of users were exposed to remote code execution risk
  4. Organizations worldwide spent billions on emergency remediation
  5. Some organizations suffered breaches with cascading impacts

The maintainers who made the original design decisions bore essentially none of these costs. Without mechanisms to internalize externalities, there's no economic force pushing toward socially optimal security investment.
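The wedge between private and social cost can be made concrete with some back-of-envelope arithmetic. All figures below are hypothetical; the point is the structure, and the final step sketches what "internalizing" an externality means in this setting.

```python
# Sketch of the externality gap: the decision-maker's private cost of a
# risky design choice versus the cost borne by everyone downstream.
# All figures are hypothetical, for structure only.

downstream_apps = 5_000          # applications embedding the library
users_per_app = 2_000            # end users exposed per application
expected_loss_per_user = 1.0     # expected breach cost per exposed user

# Social expected cost of shipping the risky feature:
social_cost = downstream_apps * users_per_app * expected_loss_per_user

# Private cost to the maintainer who made the choice (approximately zero:
# no liability, no contract, often no compensation at all):
private_cost = 0

externality = social_cost - private_cost
print(externality)  # 10000000.0

# A liability rule or funding mechanism "internalizes" a share s of the
# social cost, giving the decision-maker a direct reason to invest:
s = 0.01
internalized_private_cost = s * social_cost   # roughly 100,000
```

Even a one-percent internalization rate changes the maintainer's calculus from "this risk costs me nothing" to "this risk is worth real engineering time to avoid"—which is exactly the force the unaided market lacks.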

Tragedy of the Commons

The tragedy of the commons describes resource depletion when multiple parties share access to a resource without coordinated management. Each party has incentive to extract maximum value, but the aggregate effect destroys the shared resource.

Critical open source infrastructure functions as a commons. Projects like OpenSSL, curl, or the Linux kernel provide shared infrastructure that everyone uses:

  • Organizations extract value through use without contributing proportionally
  • Each individual user's extraction seems harmless
  • Aggregate extraction exceeds sustainable maintenance capacity
  • The shared resource (project health, security) degrades

Before Heartbleed, OpenSSL was used by the majority of the world's web servers—representing trillions of dollars in commerce—yet the project was maintained by two part-time developers with minimal funding[3]. The commons was being depleted: security debt accumulated as maintainers couldn't devote adequate attention to security, and organizations extracting value contributed almost nothing to maintenance.

The commons tragedy is particularly acute for security work:

  • Invisible value: Security investment produces absence of incidents, not visible features
  • Unglamorous work: Security maintenance is less rewarding than new development
  • Difficult to claim credit: Organizations can't differentiate on contributions to shared infrastructure
  • No natural ownership: Nobody "owns" the commons to manage it

The post-Heartbleed creation of the Core Infrastructure Initiative (now part of OpenSSF) represented explicit recognition that the commons needed coordinated management. But such coordination faces its own challenges—who decides which projects receive investment? How much is enough? These are collective action problems without easy market solutions.

The Lemons Problem

Economist George Akerlof's "Market for Lemons"[4] describes how information asymmetry leads to adverse selection—a process where low-quality products drive out high-quality ones. In used car markets, since buyers can't distinguish quality, they offer average prices. Sellers of high-quality cars (worth above average) withdraw from the market. The remaining pool degrades in quality, buyers lower their offers, more quality sellers exit, and the market spirals toward lemons.

Adverse selection dynamics appear in open source package selection. Since users cannot easily assess security quality:

  • Selection happens on observable dimensions (features, popularity)
  • Security investment provides no competitive advantage (users can't see it)
  • Projects rationally underinvest in security (invisible quality)
  • Security-focused projects cannot differentiate themselves effectively
  • The average security quality in the ecosystem declines

The lemons dynamic means that even projects wanting to compete on security face a structural disadvantage. Investing in security takes resources away from feature development and marketing—the observable dimensions where competition happens. Without the ability to signal security quality, security investment represents a pure cost with no competitive return.
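Akerlof's unraveling spiral can be sketched directly: buyers offer the average quality of whatever remains on the market, sellers whose (hidden) quality exceeds that offer exit because they can't recoup their investment, and the loop repeats. The code below is an illustrative toy model of that iteration, with package "quality" as an abstract number.

```python
# Sketch of Akerlof's "lemons" unraveling: buyers pay the average quality
# of packages still on the market; projects whose hidden quality exceeds
# that price cannot recoup their security investment and stop competing.
def unravel(qualities):
    """Iterate the adverse-selection dynamic; return surviving qualities."""
    market = sorted(qualities)
    while market:
        offer = sum(market) / len(market)            # buyers pay average quality
        stayers = [q for q in market if q <= offer]  # above-average sellers exit
        if stayers == market:                        # stable: nobody else leaves
            break
        market = stayers
    return market

# Ten projects with quality 1..10: the market collapses toward the lemons.
print(unravel(list(range(1, 11))))  # [1]
```

Each pass drops the above-average half of the pool, so a market that started with quality levels 1 through 10 ends holding only the worst—the spiral the prose above describes.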

Certification and badging programs (like OpenSSF Best Practices Badge) attempt to create quality signals that combat adverse selection. However, these signals are imperfect and adoption remains limited. The market still largely selects on visible features rather than security quality.

Why Market Forces Alone Are Insufficient

Each market failure identified above would alone cause underinvestment. Together, they create a structural economic problem that market forces cannot resolve:

  1. Public goods mean individual investment benefits everyone, reducing incentives to contribute
  2. Free-riding means rational actors wait for others to invest
  3. Information asymmetry prevents quality signals that would reward investment
  4. Externalities mean decision-makers don't bear the costs of poor security
  5. Commons dynamics lead to resource depletion without coordinated management
  6. Adverse selection means security investment provides no competitive advantage

No combination of voluntary market actions corrects these dynamics. Even if every actor behaves rationally and with good intentions, the equilibrium involves underinvestment in open source security.

Why won't market forces self-correct?

Some argue that security failures will eventually cause enough harm that markets will adjust. This view misunderstands the dynamics:

  • Diffuse harm: Security failures impose costs across many parties, none of whom individually has sufficient incentive to fund prevention
  • Probabilistic damage: Expected costs from underinvestment don't materialize predictably—organizations gamble and often win
  • Time horizons: Corporate decision-makers often operate on short time horizons that discount future security risks
  • Attribution difficulty: When breaches occur, attributing them to specific underinvestment decisions is difficult

The result is structural underinvestment that persists even after spectacular failures. Following Heartbleed, industry attention spiked briefly but didn't fundamentally change investment patterns. The market failure is stable—without intervention, it will continue.

Implications for Intervention

Understanding market failure economics points toward intervention approaches:

For policymakers:

  • Market failures justify regulatory intervention
  • Liability reform could internalize externalities
  • Funding for security as public goods is economically rational
  • Information disclosure requirements could reduce asymmetry

For organizations:

  • Collective action (foundations, consortia) addresses free-rider problems
  • Shared investment is more efficient than individual security efforts
  • Competitive advantage comes from reducing collective risk, not individual protection
  • Contributing to commons maintenance is rational long-term strategy

For the ecosystem:

  • Security quality signals (Scorecard, badges) combat information asymmetry
  • Centralized funding (Alpha-Omega) manages commons resources
  • Standards and requirements reduce adverse selection
  • Transparency requirements support informed selection

The economic framework suggests that "more investment" is necessary but insufficient—structural interventions addressing specific market failures are required for sustainable improvement.

Recommendations

We recommend stakeholders apply economic understanding to their decisions:

  1. Recognize structural dynamics: Underinvestment isn't moral failure—it's rational response to broken incentives

  2. Support collective action: Foundations and consortia address free-rider problems that individual action cannot

  3. Invest in information: Tools and practices that make security visible enable markets to function better

  4. Advocate for liability reform: Internalizing externalities through liability creates economic incentives for security

  5. Fund public goods directly: Security improvements benefit everyone; funding them collectively is economically efficient

  6. Create quality signals: Certifications, badges, and metrics that reduce information asymmetry help markets reward quality

  7. Coordinate commons management: Critical infrastructure requires coordinated investment, not just market forces

Economic understanding doesn't solve market failures—but it clarifies why they persist and what interventions might actually work. Policy and strategy built on this foundation can address root causes rather than merely treating symptoms.


  1. Hoffmann, M., Nagle, F., & Zhou, Y. (2024). "The Value of Open Source Software." Harvard Business School Working Paper No. 24-038. 

  2. Synopsys. (2024). "Open Source Security and Risk Analysis Report." 

  3. Based on 2014 reports confirming OpenSSL was maintained by two part-time developers with annual donations under $2,000 and limited individual compensation. 

  4. Akerlof, G.A. (1970). "The Market for 'Lemons': Quality Uncertainty and the Market Mechanism." Quarterly Journal of Economics, 84(3), 488-500.