24.3 Managing Security-Sensitive Contributions

The XZ Utils compromise, discovered in March 2024,¹ revealed a sobering reality: patient, sophisticated attackers will spend years building trust to insert malicious code into critical projects. The attacker "Jia Tan" contributed helpful patches and gradually assumed maintainer responsibilities before inserting a backdoor that would have affected countless Linux systems. This wasn't a technical failure—code review didn't catch the backdoor because the attacker had earned trust that bypassed scrutiny.

Open source thrives on accepting contributions from anyone. This openness has built remarkable software through global collaboration. But it also creates opportunity for malicious actors who understand that social engineering the maintainer is often easier than defeating technical controls. Maintainers must balance welcoming contributions—essential to project health—with appropriate caution about who gets access to what.

This section provides practical guidance for evaluating contributions with security implications, vetting contributors, and building trust gradually without closing the door on new participation.

Reviewing Contributions for Security Implications

Not all code is equally sensitive. A typo fix in documentation carries different risk than changes to authentication logic. Understanding which parts of your codebase have security implications helps focus review effort where it matters most.

Security-sensitive code area identification:

| Code Area | Security Sensitivity | Why |
|-----------|----------------------|-----|
| Authentication/authorization | Critical | Controls access to everything |
| Cryptographic operations | Critical | Errors can silently break security |
| Input parsing/deserialization | High | Common vulnerability entry point |
| Network protocol handling | High | Exposed to untrusted input |
| File system operations | High | Path traversal, symlink attacks |
| Process/command execution | High | Command injection risks |
| Build system/CI configuration | High | Supply chain attack vector |
| Memory management (C/C++) | High | Buffer overflows, use-after-free |
| Dependency declarations | Medium-High | Supply chain attack vector |
| Error handling | Medium | Information disclosure, bypass |
| Logging | Medium | Information disclosure |
| Documentation/tests | Low | Limited direct impact |
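
The tiers above can drive automated triage of incoming changes. A minimal sketch (the path patterns are illustrative assumptions, not drawn from any particular project):

```python
from fnmatch import fnmatch

# Illustrative path patterns per sensitivity tier, ordered from most
# to least sensitive; adapt the globs to your own project layout.
SENSITIVITY_PATTERNS = [
    ("critical", ["src/auth/*", "src/crypto/*"]),
    ("high", ["src/parsers/*", ".github/workflows/*", "Makefile*"]),
    ("medium", ["src/logging/*"]),
]

def highest_sensitivity(changed_files):
    """Return the highest sensitivity tier touched by a change set."""
    for tier, patterns in SENSITIVITY_PATTERNS:
        if any(fnmatch(path, pat)
               for path in changed_files
               for pat in patterns):
            return tier
    return "low"
```

Feeding it the output of `git diff --name-only` in CI gives a cheap first pass before human review decides how much scrutiny a change deserves.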

Identifying sensitive areas in your project:

  1. Map your attack surface: What receives untrusted input? What makes security decisions?

  2. Document sensitive files/directories: Create a CODEOWNERS file that requires additional review

    # Security-sensitive areas require security review
    /auth/**                 @security-team
    /crypto/**               @security-team @crypto-expert
    /src/parsers/**          @security-team
    .github/workflows/**     @maintainer-team
    package.json             @maintainer-team
    

  3. Mark functions and modules: Code comments identifying security-critical code

    # SECURITY-CRITICAL: Validates user authentication
    # Changes require security review before merge
    def validate_token(token: str) -> User:
        ...
    

  4. Automate detection: Use tools like GitHub Actions Labeler to flag PRs touching sensitive areas

    # .github/workflows/label-sensitive.yml
    # actions/labeler reads its label-to-path mappings from .github/labeler.yml
    on:
      pull_request:
    permissions:
      pull-requests: write
    jobs:
      label:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/labeler@v4
            with:
              repo-token: "${{ secrets.GITHUB_TOKEN }}"

    # .github/labeler.yml -- the label applied to matching paths
    security-sensitive:
      - 'src/auth/**'
      - 'src/crypto/**'
    

Vetting New Contributors

New contributors are the lifeblood of open source—and a potential attack vector. The challenge is welcoming new participants while maintaining appropriate awareness of who is contributing what.

Contributor vetting approaches:

For any new contributor:

| Check | What to Look For | Red Flags |
|-------|------------------|-----------|
| GitHub profile | Account age, activity history, other contributions | Brand new account, no other activity |
| Contribution history | Contributions to other projects | Only contributions to your project |
| Communication style | Clear, helpful, patient | Pressure to merge quickly, dismissive of concerns |
| Technical competence | Appropriate for contribution complexity | Mismatch between stated experience and code quality |
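
Some of these checks can be partially automated. A hedged sketch (the input fields mirror what the GitHub users API reports about an account; the 30-day threshold is an arbitrary assumption):

```python
from datetime import datetime, timezone

def vetting_red_flags(created_at, contributions_elsewhere, pressures_to_merge):
    """Collect red flags from the vetting table; purely heuristic."""
    flags = []
    age_days = (datetime.now(timezone.utc) - created_at).days
    if age_days < 30:                 # brand new account
        flags.append("brand new account")
    if contributions_elsewhere == 0:  # only contributes to your project
        flags.append("no contributions to other projects")
    if pressures_to_merge:            # pushes to bypass careful review
        flags.append("pressure to merge quickly")
    return flags
```

A non-empty result means "look closer," not "reject": plenty of legitimate first-time contributors trip one of these.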

For contributors seeking elevated access:

| Check | Approach |
|-------|----------|
| Verify identity | Video call, known employer verification |
| Track record | Extended period of quality contributions (months, not weeks) |
| Community engagement | Participation beyond code (discussions, helping users) |
| References | Endorsements from known community members |

What you cannot easily verify:

Be realistic about limitations:

  • Email addresses can be fabricated
  • GitHub accounts can be purchased
  • Personas can be manufactured over years
  • Even video calls can be faked by a determined attacker

The goal isn't perfect vetting—it's raising the cost of attack while remaining welcoming to legitimate contributors.

Graduated trust model:

Trust should be earned incrementally, with access expanding as track record builds:

Level 1: First Contribution
├── All PRs require maintainer review
├── No merge access
├── Cannot modify CI/CD or dependencies
└── Standard code review

Level 2: Recognized Contributor (months of good contributions)
├── PRs still require review
├── Can be assigned as reviewer (non-binding)
├── Trusted for low-sensitivity changes
└── May receive triage permissions

Level 3: Trusted Contributor (extended track record)
├── Can merge low-sensitivity PRs after review
├── Review required for sensitive areas
├── Can approve others' low-sensitivity PRs
└── Invited to security discussions

Level 4: Maintainer (years of trust, verified identity)
├── Full merge access
├── Can modify CI/CD and dependencies
├── Participates in security response
└── Still requires review for sensitive changes from others

Key principle: No one should be able to unilaterally modify security-sensitive code, regardless of trust level. Even trusted maintainers should require a second set of eyes on sensitive changes.

Elevated Review for Sensitive Code

When contributions touch security-sensitive areas, standard code review isn't sufficient. Elevated review applies additional scrutiny proportional to risk.

Elevated review triggers:

| Trigger | Response |
|---------|----------|
| Changes to auth/authz code | Require security-aware reviewer |
| Cryptographic changes | Require reviewer with crypto expertise |
| Build system modifications | Review by project lead/maintainer |
| New dependencies | Dependency evaluation before merge |
| Large refactoring of sensitive code | Extended review period, multiple reviewers |
| Changes from new contributors to sensitive areas | Extra scrutiny, regardless of change size |
| Obfuscated or complex changes | Demand clear explanation before review |

Elevated review practices:

  1. Multiple reviewers: Sensitive changes need more than one approval

    # Branch protection settings (GitHub REST API field names, illustrative).
    # GitHub enforces per-path review via CODEOWNERS, so pair these
    # settings with a CODEOWNERS entry covering src/auth/**
    required_pull_request_reviews:
      required_approving_review_count: 2
      require_code_owner_reviews: true
    

  2. Extended review period: Don't rush sensitive changes

    Policy: Security-sensitive PRs remain open for minimum 72 hours
    to allow community review, regardless of approval status.
    

  3. Verify understanding: Reviewer should be able to explain what the change does

    "If I can't explain what this code does and why it's safe, I haven't reviewed it—I've just approved it."

  4. Test security implications: Ensure test coverage for security properties

       • Authentication changes: Test both positive and negative cases
       • Input parsing: Test malformed input, boundary conditions
       • Authorization: Test access control bypass attempts

  5. Document the review: Record what was checked and why it's acceptable

    Security review notes:
    - Verified input validation prevents injection
    - Checked that error messages don't leak sensitive info
    - Confirmed changes don't affect auth flow for existing users
    - Reviewed against the OWASP Top 10 (https://owasp.org/www-project-top-ten/) items relevant to this code
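
The extended review period in step 2 can be enforced by tooling rather than by memory. A minimal sketch, assuming the 72-hour window above and a two-approval minimum (the function name and thresholds are illustrative):

```python
from datetime import datetime, timedelta, timezone

MIN_REVIEW_WINDOW = timedelta(hours=72)  # policy threshold from above

def may_merge(opened_at, approvals, now=None):
    """A sensitive PR merges only after the minimum window elapses,
    regardless of how many approvals it already has."""
    now = now or datetime.now(timezone.utc)
    return approvals >= 2 and (now - opened_at) >= MIN_REVIEW_WINDOW
```

A check like this can run as a required CI status, turning the written policy into something a hurried maintainer cannot accidentally skip.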
    

Handling Suspicious Contributions

Sometimes contributions feel wrong—unexplained complexity, pressure to merge, patterns that don't make sense. Trust your instincts, but respond professionally.

Suspicious contribution indicators:

| Indicator | Example | Why It's Concerning |
|-----------|---------|---------------------|
| Unexplained complexity | Simple fix wrapped in complex refactoring | Obscures actual changes; may hide malicious code |
| Pressure to merge | "This is urgent," "Other projects already merged this" | Attempts to bypass careful review |
| Dismissive of questions | "You don't need to understand this part" | Hides intent; prevents proper review |
| Binary or obfuscated content | Compressed/encoded strings, minified code | Cannot be meaningfully reviewed |
| Build system changes | Test infrastructure modifications | Could execute arbitrary code during build |
| Mismatch with stated purpose | "Typo fix" that changes logic | Actual changes don't match description |
| Manufactured urgency | "Security fix—merge immediately" | Bypasses review by claiming time pressure |
| Scope creep | Small PR becomes large refactoring | May obscure malicious changes |
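
The "mismatch with stated purpose" and "scope creep" indicators lend themselves to a simple automated check. A sketch with assumption-level thresholds (tune per project):

```python
def suspicious_scope(title, files_changed, lines_changed):
    """Flag PRs whose title claims a trivial change but whose diff
    is large; thresholds are illustrative, not calibrated."""
    trivial_claims = ("typo", "whitespace", "comment", "docs")
    claims_trivial = any(word in title.lower() for word in trivial_claims)
    return claims_trivial and (files_changed > 3 or lines_changed > 50)
```

A flagged PR warrants a closer look and a clarifying question, not an accusation.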

XZ Utils specific patterns (what the attacker did):

  • Created helpful early contributions to build trust
  • Amplified through sock puppet accounts ("users" requesting features)
  • Applied social pressure to overworked maintainer
  • Made changes to test infrastructure (where malicious code was hidden)
  • Introduced complex, hard-to-review build system modifications
  • Exploited maintainer burnout and desire for help

Responding to suspicious contributions:

  1. Don't merge under pressure: "I appreciate the contribution, but I need time to review this thoroughly."

  2. Ask clarifying questions: "Can you explain why this change is necessary? I want to understand the approach."

  3. Request simplification: "This seems more complex than necessary for the stated goal. Can we simplify?"

  4. Demand explanation for complexity: "I don't understand what this code does. Can you add comments or break it into smaller changes?"

  5. Verify claims independently: If they say "Project X does it this way," check Project X yourself.

  6. Involve other reviewers: "I'd like another maintainer to review this before merging."

  7. Document your concerns: Record what concerned you and why, even if you ultimately merge.

If you suspect malicious intent:

  • Do not merge the contribution
  • Do not accuse publicly (you might be wrong)
  • Document the patterns you observed
  • Consult with other maintainers privately
  • If evidence is strong, consider reporting to platform (GitHub Trust & Safety)
  • Block user if behavior continues

The Balance Between Openness and Caution

Open source works because anyone can contribute. Excessive gatekeeping drives away legitimate contributors—including those who might become your next maintainer. The goal is appropriate caution, not paranoia.

Openness vs. caution balance:

| Too Open | Balanced | Too Cautious |
|----------|----------|--------------|
| Merge without review | Review all changes | Reject all external contributions |
| Trust immediately | Trust gradually | Never trust anyone |
| No sensitive area distinction | Elevated review for sensitive areas | Treat everything as sensitive |
| Accept unexplained changes | Ask questions, expect answers | Demand exhaustive justification |
| Anyone can modify anything | Access proportional to trust | Only founders can change code |

Principles for balance:

  1. Be welcoming by default: Assume good faith unless given reason otherwise

  2. Separate trust from access: You can trust someone's intentions while still requiring review of their code

  3. Scale scrutiny to risk: Documentation changes get quick review; auth changes get careful review

  4. Make expectations clear: Document what review looks like for different types of changes

  5. Don't punish good contributors: Make the process lightweight for well-known contributors in non-sensitive areas

  6. Remember: most contributors are legitimate: The XZ incident was exceptional; don't treat every contributor as a suspect

Lessons from the XZ Utils Incident

The XZ Utils compromise provides specific lessons for maintainers:

What happened:

  • A lone maintainer was overwhelmed and struggling with burnout
  • "Jia Tan" appeared as a helpful contributor and gradually earned trust
  • Sock puppet accounts created pressure for faster merges and more access
  • Over roughly two years, the attacker gained commit access and ultimately maintainership
  • Malicious code was hidden in test fixtures and the build system
  • The backdoor would have affected SSH on countless Linux systems
  • It was discovered by chance when a developer noticed performance anomalies

Applied lessons:

| Lesson | Action |
|--------|--------|
| Maintainer burnout creates vulnerability | Seek help before desperation; don't accept help uncritically |
| Test infrastructure is an attack vector | Review test changes as carefully as production code |
| Social pressure is a technique | Resist urgency; good contributions can wait for proper review |
| Long cons exist | Time spent doesn't guarantee trustworthiness |
| Identity is hard to verify | Focus on behavior patterns, not claimed credentials |
| Build systems hide complexity | Scrutinize autoconf/cmake/makefile changes especially |
| Single maintainer is a risk | Seek co-maintainers; ensure multiple people review sensitive changes |
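
The build-system lessons can be turned into a CI tripwire that demands extra review whenever build or test infrastructure changes. A sketch, with illustrative patterns:

```python
from fnmatch import fnmatch

# Files of the kind the XZ incident singles out: build scripts and
# test fixtures. Extend for your own build tooling.
BUILD_AND_TEST_PATTERNS = [
    "configure.ac", "CMakeLists.txt", "Makefile*", "m4/*",
    ".github/workflows/*", "tests/fixtures/*",
]

def needs_build_system_scrutiny(changed_files):
    """True if any changed file touches build or test infrastructure."""
    return any(fnmatch(f, p)
               for f in changed_files
               for p in BUILD_AND_TEST_PATTERNS)
```

Wired into CI as a required label or status, this ensures the category of change that hid the XZ backdoor never merges on a single approval.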

What maintainers should do differently:

  1. Require multi-person review for sensitive areas: No single person should be able to approve security-critical changes

  2. Be wary of test/build system complexity: If you can't understand it, don't merge it

  3. Watch for social engineering patterns: Multiple accounts pressuring you, manufactured urgency

  4. Take care of yourself: Burnout makes you vulnerable; better to step back than accept help desperately

  5. Keep code reviewable: Reject contributions that can't be clearly understood

  6. Verify identity for elevated access: Video calls, employer verification, community vouching

The XZ attack succeeded not by defeating technical controls but by exploiting the trust that makes open source collaboration possible. Our defenses need to account for both technical and social security.

Recommendations

We recommend the following practices for managing security-sensitive contributions:

  1. Identify sensitive code areas: Know which parts of your codebase have security implications. Document them and require elevated review.

  2. Implement graduated trust: New contributors start with limited access that expands over time with demonstrated trustworthiness.

  3. Require multi-person review for sensitive changes: No single person—regardless of trust level—should be able to unilaterally modify authentication, cryptography, or build infrastructure.

  4. Trust your instincts about suspicious contributions: If something feels wrong, slow down. Ask questions. Demand explanations. Don't merge under pressure.

  5. Watch for social engineering patterns: Manufactured urgency, sock puppet amplification, and pressure to bypass review are attack techniques, not just personality quirks.

  6. Take care of yourself: Maintainer burnout creates security vulnerability. Seek sustainable help before you're desperate enough to accept any help.

  7. Keep code reviewable: Reject changes you can't understand. If a contribution requires trusting the contributor rather than verifying the code, that's a red flag.

  8. Remember that most contributors are genuine: Apply appropriate caution without creating a hostile environment. The goal is welcoming but careful, not paranoid.

The XZ incident was a wake-up call, but the response shouldn't be closing open source to contributions. Instead, it's about building processes that make social engineering attacks harder while preserving the collaborative spirit that makes open source work.


  1. Andres Freund, "backdoor in upstream xz/liblzma leading to ssh server compromise," openwall oss-security mailing list, March 29, 2024, https://www.openwall.com/lists/oss-security/2024/03/29/4