11.3 Evaluating Project Health¶
Understanding your dependency tree (Section 11.2) tells you what packages you depend on. But not all dependencies carry equal risk—a well-maintained package with active security practices poses less supply chain risk than an abandoned project with a single anonymous maintainer. Evaluating project health provides the qualitative context that transforms a dependency inventory into actionable risk assessment.
This section provides a framework for systematically evaluating open source project health across multiple dimensions: maintainer activity, community dynamics, security practices, maturity indicators, and documentation quality.
Maintainer Activity¶
Maintainer activity is the most fundamental health indicator. Active maintenance means security patches are applied, bugs are fixed, and the project adapts to ecosystem changes.
Key Metrics:
| Metric | How to Measure | Healthy Signal |
|---|---|---|
| Commit frequency | Commits per month | Regular activity (varies by project) |
| Release cadence | Days between releases | Consistent, documented schedule |
| Issue response time | Median time to first response | < 7 days |
| PR merge time | Median time to merge | < 30 days for non-trivial PRs |
| Issue close rate | Closed issues / opened issues | > 0.7 (keeping pace with incoming issues) |
| Last activity | Days since last commit | < 90 days (with exceptions) |
Gathering Metrics:
The GitHub API provides most maintainer activity data:
# Get recent commits
curl -H "Authorization: token $GITHUB_TOKEN" \
"https://api.github.com/repos/expressjs/express/commits?per_page=100"
# Get issue statistics
curl -H "Authorization: token $GITHUB_TOKEN" \
"https://api.github.com/repos/expressjs/express/issues?state=all"
Tools:
- GitHub Insights: Built-in graphs for commit activity, contributors
- Augur: Open source CHAOSS metrics platform
- GrimoireLab: Comprehensive software development analytics
Interpreting Activity Patterns:
Activity patterns require contextual interpretation:
- Mature, stable projects: May have low commit frequency because they're "done"—distinguish from abandoned
- Seasonal patterns: Some projects correlate with academic calendars, corporate priorities
- Burst activity: Major releases cause activity spikes; look for sustained baseline activity
- Response vs. proactive: Is activity reactive (only responding to issues) or proactive (improvements)?
Red Flags:
- Last release > 2 years ago (unless explicitly stable/complete)
- Issue response time > 30 days consistently
- Large PR backlog with no maintainer engagement
- Activity from automated bots only, no human commits
- Sudden cessation of activity from previously active project
Community Health¶
Beyond individual maintainers, community health indicates project resilience and governance quality.
Contributor Diversity:
Contributor concentration measures how distributed development effort is:
- High concentration (top 1-2 contributors = 80%+ commits): High key-person risk
- Low concentration (top 5 contributors account for 50% or less of commits): Distributed, resilient
Bus Factor:
The bus factor (or truck factor) represents the minimum number of contributors whose departure would leave a project without sufficient expertise to maintain it.
Calculating Bus Factor:
A simplified approach:
- Count contributors with > 10% of recent commits
- Identify contributors with unique knowledge of critical areas
- The lower of these numbers approximates the bus factor
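This heuristic is easy to script. A minimal sketch against the GitHub contributors endpoint (note it reports all-time contributions rather than recent ones; restricting to a time window requires the commits API instead):

```python
# bus_factor.py -- contributor concentration and a simplified bus factor.
import os
import requests

def contribution_shares(owner, repo):
    """Per-contributor share of commits, sorted highest first by the API."""
    r = requests.get(
        f"https://api.github.com/repos/{owner}/{repo}/contributors",
        headers={"Authorization": f"token {os.environ['GITHUB_TOKEN']}"},
        params={"per_page": 100},
    )
    r.raise_for_status()
    counts = [(c["login"], c["contributions"]) for c in r.json()]
    total = sum(n for _, n in counts)
    return [(login, n / total) for login, n in counts]

shares = contribution_shares("expressjs", "express")
top2_share = sum(share for _, share in shares[:2])          # concentration signal
bus_factor = sum(1 for _, share in shares if share > 0.10)  # the >10% heuristic
print(f"top-2 share: {top2_share:.0%}, simplified bus factor: {bus_factor}")
```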
Tools provide automated calculation:
- truckfactor and Truck-Factor: Command-line tools implementing academic algorithms
- GitHub contributor graphs: Visual representation of contribution distribution
Bus Factor Interpretation:
| Bus Factor | Risk Level | Implication |
|---|---|---|
| 1 | Critical | Single point of failure |
| 2-3 | High | Limited redundancy |
| 4-7 | Medium | Reasonable resilience |
| 8+ | Low | Well-distributed knowledge |
Governance Transparency:
Healthy projects document how decisions are made:
- GOVERNANCE.md: Decision-making processes
- MAINTAINERS.md: Who has commit/release authority
- Code of conduct: Community behavior expectations
- Meeting notes: For projects with regular meetings
CHAOSS Metrics:
The CHAOSS (Community Health Analytics Open Source Software) project defines standard community health metrics:
- Contributors: New, casual, and regular contributor counts
- Code Review: Review coverage and responsiveness
- Issue Response: Time to first response, resolution time
- Organizational Diversity: Distribution across employers
Red Flags:
- Single maintainer for >1 year with no succession planning
- No governance documentation for large projects
- Maintainer burnout signals (hostile responses, extended absences)
- Corporate single-sponsor with no community
- Contentious code of conduct discussions or violations
Security Practices¶
Security-specific practices indicate how seriously a project takes security maintenance.
SECURITY.md and Disclosure Policy:
A SECURITY.md file is the standard location for security contact information and disclosure policy.
What Good SECURITY.md Contains:
- Security contact email or form
- Expected response timeline
- Supported versions receiving security updates
- Disclosure policy (coordinated, full, etc.)
- Security advisories location
Checking for SECURITY.md:
# Check if SECURITY.md exists
curl -I "https://api.github.com/repos/expressjs/express/contents/SECURITY.md"
# GitHub Security tab indicates security policy presence
Security Practice Indicators:
| Practice | Signal | How to Check |
|---|---|---|
| SECURITY.md | Has disclosure process | Repository root |
| GitHub Security Advisories | Uses advisory system | Security tab |
| Dependency updates | Keeps dependencies current | Dependabot/Renovate PRs |
| Code scanning | Automated security analysis | Actions, GitHub code scanning |
| Signed releases | Verifiable authenticity | Release signatures |
| 2FA for maintainers | Account security | Not directly visible |
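Several of the file-based indicators above can be probed automatically. A minimal sketch using the GitHub contents API (the paths checked are common conventions, and presence of a file says nothing about its quality):

```python
# practice_check.py -- probe for security-practice indicator files.
import os
import requests

PATHS = ["SECURITY.md", ".github/SECURITY.md",
         ".github/dependabot.yml", "renovate.json"]

def present(owner, repo, path):
    """True if the file exists on the default branch."""
    r = requests.get(
        f"https://api.github.com/repos/{owner}/{repo}/contents/{path}",
        headers={"Authorization": f"token {os.environ['GITHUB_TOKEN']}"},
    )
    return r.status_code == 200

for path in PATHS:
    status = "found" if present("expressjs", "express", path) else "missing"
    print(f"{path}: {status}")
```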
OpenSSF Best Practices Badge:
The OpenSSF Best Practices Badge (formerly CII Badge) certifies that projects meet security criteria:
- Passing: Basic security practices
- Silver: Enhanced practices
- Gold: Comprehensive security program
Check badge status at bestpractices.dev.
Vulnerability History:
Past vulnerability handling indicates future behavior:
- How quickly were past CVEs addressed?
- Were advisories communicated clearly?
- Did patches introduce regressions?
- Is there a pattern of similar vulnerabilities?
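One way to gather this history is the public OSV database, which aggregates advisories across ecosystems. A minimal sketch (answering "how fast were fixes shipped" still requires comparing advisory dates against release dates):

```python
# vuln_history.py -- list a package's known advisories from OSV.
import requests

resp = requests.post(
    "https://api.osv.dev/v1/query",
    json={"package": {"name": "express", "ecosystem": "npm"}},
)
resp.raise_for_status()
for vuln in resp.json().get("vulns", []):
    print(vuln["id"], vuln.get("published", "?"), "-", vuln.get("summary", ""))
```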
Red Flags:
- No SECURITY.md or security contact
- Known vulnerabilities unpatched for extended periods
- Dismissive response to security reports
- No use of dependency updates (Dependabot, Renovate)
- Release artifacts without signatures or checksums
Project Maturity¶
Maturity indicators help distinguish between experimental projects and production-ready dependencies.
Age and Stability:
| Indicator | How to Assess | Consideration |
|---|---|---|
| Project age | First commit date | Older isn't always better |
| Version number | Semantic versioning | v1.0+ suggests stability |
| Release history | Number and consistency | Regular, predictable releases |
| Breaking changes | Major version frequency | Frequent breaking changes = instability |
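Some of these indicators can be pulled from registry metadata. A sketch against the public PyPI JSON API (npm exposes similar data through its registry; the checks here are illustrative):

```python
# maturity_check.py -- version-stability signals from PyPI metadata.
import requests
from packaging.version import Version, InvalidVersion

data = requests.get("https://pypi.org/pypi/requests/json").json()
versions = []
for v in data["releases"]:
    try:
        versions.append(Version(v))
    except InvalidVersion:
        continue  # skip non-standard version strings
versions.sort()

latest = versions[-1]
majors = {v.major for v in versions}
print(f"latest: {latest}, distinct major versions: {len(majors)}")
if latest.major == 0:
    print("warning: still 0.x -- API may be unstable")
```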
Adoption Breadth:
Wide adoption provides some validation:
- Download counts: npm weekly downloads, PyPI downloads
- Dependent projects: GitHub "used by" count
- Stars and forks: Popularity indicators (noisy but useful)
- Production references: Known users, case studies
Gathering Adoption Data:
# npm downloads
curl "https://api.npmjs.org/downloads/point/last-week/express"
# PyPI downloads (via pypistats)
pip install pypistats
pypistats overall requests
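The same endpoint can be applied across a whole manifest. A sketch that reads a local package.json and flags low-download dependencies (the 1,000 downloads/week threshold is arbitrary; adjust it to your ecosystem):

```python
# adoption_check.py -- weekly npm downloads for each direct dependency.
import json
import requests

with open("package.json") as f:
    deps = json.load(f).get("dependencies", {})

for name in sorted(deps):
    r = requests.get(f"https://api.npmjs.org/downloads/point/last-week/{name}")
    downloads = r.json().get("downloads", 0) if r.ok else 0
    flag = "  <-- low adoption" if downloads < 1000 else ""
    print(f"{name}: {downloads:,}{flag}")
```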
Interpreting Adoption:
Adoption indicates trust but not safety:
- Popular packages are higher-value targets for attackers
- Wide adoption means security issues have broader impact
- Unpublished or abandoned popular packages (as with left-pad's 2016 unpublishing incident) can create cascading risks
- Niche packages may be well-maintained but less scrutinized
API Stability:
Stable APIs suggest mature design:
- Backward-compatible releases
- Deprecation periods before removal
- Migration guides for breaking changes
- Long-term support (LTS) versions
Red Flags:
- Frequent breaking changes without clear versioning
- Version 0.x for years without progression
- Declining downloads over extended period
- No stable release despite production usage
Documentation Quality¶
Documentation quality correlates with overall project health and security considerations.
Documentation as Health Signal:
Well-documented projects tend to be:
- More carefully designed
- More professionally maintained
- More security-conscious
- More sustainable
Documentation Assessment:
| Element | Quality Indicator |
|---|---|
| README | Clear purpose, installation, basic usage |
| API documentation | Complete, accurate, maintained |
| Examples | Working, current examples |
| Changelog | Detailed change history |
| Contributing guide | Clear contribution process |
| Architecture docs | Design decisions documented |
Onboarding Experience:
The new contributor experience indicates project health:
- Can you set up the development environment from documentation?
- Do contribution guidelines exist and make sense?
- Are tests passing and easy to run?
- Is the code review process documented?
Projects that are hard to contribute to often lack contributors—which affects long-term maintenance.
Security Documentation:
Security-relevant documentation includes:
- Security considerations in README
- Threat model documentation
- Secure usage guidelines
- Known limitations
Red Flags:
- README with outdated examples
- No documentation beyond README
- Documentation contradicts actual behavior
- Security-relevant features undocumented
Comprehensive Health Assessment¶
Combining these dimensions provides holistic health assessment:
Health Assessment Template:
## Project Health Assessment: [Package Name]
### Maintainer Activity
- Commit frequency: ___ commits/month
- Last release: ___ days ago
- Issue response time: ___ days (median)
- Activity trend: [Increasing/Stable/Declining]
### Community Health
- Active contributors: ___
- Bus factor: ___
- Corporate diversity: [Single/Multiple/Community]
- Governance: [Documented/Informal/Unclear]
### Security Practices
- SECURITY.md: [Yes/No]
- OpenSSF Badge: [None/Passing/Silver/Gold]
- Dependency updates: [Automated/Manual/None]
- Vulnerability history: [Good/Concerning]
### Maturity
- Age: ___ years
- Current version: ___
- Weekly downloads: ___
- API stability: [Stable/Evolving/Unstable]
### Documentation
- README quality: [High/Medium/Low]
- API docs: [Complete/Partial/Missing]
- Security docs: [Yes/No]
### Overall Health: [Healthy/Concerns/Unhealthy]
### Key Risks: [List top concerns]
### Recommendation: [Continue/Monitor/Replace/Accept Risk]
Decision Thresholds:
| Health Level | Criteria | Action |
|---|---|---|
| Healthy | Activity, security practices, community all positive | Continue using |
| Concerns | 1-2 areas showing warning signs | Monitor closely, plan alternatives |
| Unhealthy | Multiple red flags or critical failure | Replace or accept documented risk |
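These thresholds can be encoded as a simple triage function. A minimal sketch (the dimension flags are whatever your assessment produced, with True meaning a warning sign in that area; the cutoffs mirror the table above, with "critical failure" modeled as the unmaintained flag):

```python
# health_score.py -- map per-dimension warning flags to an action.
def overall_health(warnings):
    """warnings: dict of dimension name -> True if that area shows red flags."""
    flagged = sum(warnings.values())
    if warnings.get("unmaintained", False) or flagged >= 3:
        return "Unhealthy: replace or accept documented risk"
    if flagged >= 1:
        return "Concerns: monitor closely, plan alternatives"
    return "Healthy: continue using"

print(overall_health({
    "activity": False,
    "community": True,   # e.g., a bus factor of 1
    "security": False,
    "maturity": False,
    "documentation": False,
    "unmaintained": False,
}))
```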
Prioritizing Assessment:
You can't deeply assess every dependency. Prioritize:
- Direct dependencies: You chose them; evaluate them
- High fan-in packages: Concentrated risk requires scrutiny
- Security-sensitive packages: Crypto, auth, input handling
- Outdated packages: Already flagged for attention
Connection to Automated Tools¶
Manual assessment doesn't scale. The next section (11.4) covers automated tools, but health assessment informs how you interpret automated scores:
- OpenSSF Scorecard: Automates many security practice checks
- deps.dev: Aggregates project metadata
- libraries.io: Tracks releases, dependencies, activity
Automated tools provide data; health assessment provides interpretation.
Recommendations¶
For Developers:
- Assess before adopting. Spend 15 minutes on health assessment before adding new dependencies. Check README, recent activity, and SECURITY.md.
- Maintain awareness. For critical dependencies, subscribe to release notifications. Know when maintainership changes.
- Contribute to health. If you depend on a project, contribute—bug reports, documentation, code. Your contributions improve project health for everyone.
- Document your assessments. Keep records of why you chose dependencies and what risks you accepted.
For Security Practitioners:
- Build assessment into process. Include health evaluation in security reviews of new dependencies.
- Track health over time. Project health changes. Annual reassessment catches degradation before incidents.
- Prioritize by risk. Focus detailed assessment on dependencies with highest potential impact.
- Establish organizational thresholds. Define what health criteria are required for different use cases (development tools vs. production dependencies).
For Engineering Managers:
- Allocate time for evaluation. Health assessment takes time; budget it into the dependency adoption process.
- Maintain dependency decision records. Document why dependencies were chosen and what risks were accepted.
- Monitor critical dependencies. Track health metrics for your most important dependencies over time.
- Plan for unhealthy dependencies. Know which dependencies would be difficult to replace; monitor their health especially closely.
Understanding Project Support Commitments¶
Not all open source projects provide the same level of support or maintenance commitment. Understanding what level of support you can expect from a dependency is crucial for risk assessment—a project with no maintenance commitment poses different risks than one with funded, ongoing support.
The challenge is that open source maintainers rarely have formal SLAs or support contracts. Most projects exist somewhere on a spectrum from abandoned hobby projects to professionally maintained critical infrastructure. Without clear signals, developers make risky assumptions about what to expect when vulnerabilities are found or when breaking changes in dependencies require updates.
Project Support Archetypes:
While every project is unique, most fall into recognizable patterns based on their maintenance commitment:
| Archetype | Characteristics | Typical Examples | Risk Profile |
|---|---|---|---|
| No Support | Archived, deprecated, or "complete" simple utilities. No expectation of updates. | Archived repositories, homework projects, minimal utilities like is-even, intentionally complete libraries | High unless truly stable and complete. Vulnerabilities will not be patched. |
| Minimal Support | Single maintainer hobby project. Updates when maintainer has time and interest. May stop without notice. | Personal projects, early-stage experiments, niche tools with single author | High for production use. Suitable for development tools or non-critical paths. |
| Best Effort | Small team or community. Regular but uncommitted releases. Active when maintainers available. | Community-driven projects, medium-sized libraries, projects with 2-5 active contributors | Medium. Monitor closely. Suitable for most use cases with monitoring. |
| Promised Support | Funded maintenance, roadmaps, defined support periods. Commercial backing or foundation stewardship. | Foundation projects (Apache, CNCF), enterprise-backed OSS, projects with corporate sponsorship and paid maintainers | Lower risk. Verify commitment is real and ongoing. Preferred for critical infrastructure. |
Important caveat: These archetypes describe current status, not future guarantees. A well-funded project can lose backing; a hobby project can gain commercial support. Assess periodically.
How to Determine Support Level:
1. SECURITY-INSIGHTS.yml
The OpenSSF Security Insights specification provides a standardized, machine-readable way for projects to communicate their support model. Check for SECURITY-INSIGHTS.yml in the repository root or .github/ directory.
Example indicators in Security Insights:
# SECURITY-INSIGHTS.yml
header:
  schema-version: "1.0.0"
  project-url: "https://github.com/example/project"
project-lifecycle:
  status: active  # Options: active, inactive, deprecated, archived
  roadmap-url: "https://github.com/example/project/roadmap"
  bug-fixes-only: false  # If true, only critical fixes, no features
contribution-policy:
  accepts-pull-requests: true
  accepts-automated-pull-requests: true
The project-lifecycle section explicitly communicates maintenance commitment:
- status: active: Ongoing development and maintenance
- status: inactive: Minimal or no ongoing work
- status: deprecated: Actively discouraging new use
- status: archived: No further changes expected
- bug-fixes-only: true: Maintenance mode—security fixes but no new features
Checking for Security Insights:
# Check if project has SECURITY-INSIGHTS.yml
curl -I "https://raw.githubusercontent.com/username/repo/main/SECURITY-INSIGHTS.yml"
# Or check .github directory
curl -I "https://raw.githubusercontent.com/username/repo/main/.github/SECURITY-INSIGHTS.yml"
2. Explicit Documentation
Some projects document their support commitment in README or other files:
Clear commitment examples:
"This project is actively maintained by the XYZ team at Company. We commit to addressing critical security vulnerabilities within 14 days."
"Maintenance mode: This project is feature-complete. We will address critical bugs and security issues but are not adding new features."
"Seeking maintainer: I no longer have time to maintain this project. If you'd like to take over maintenance, please contact me."
Red flags:
- No mention of maintenance status anywhere
- Last release >2 years ago with no explanation
- Multiple "seeking maintainer" issues with no response
- README says "actively maintained" but no commits in months
3. Funding and Organizational Signals
Funding indicates commitment (though not always):
Check for funding sources:
- GitHub Sponsors: Check if project has sponsors badge/link
- Open Collective: Listed at opencollective.com
- Tidelift: Commercial support through tidelift.com
- Foundation membership: Part of Apache, CNCF, OpenSSF, etc.
- Corporate backing: Maintainers employed specifically to work on project
Finding funding information:
# Check sponsorship links (the REST repo object does not expose these; use GraphQL)
curl -H "Authorization: bearer $GITHUB_TOKEN" \
"https://api.github.com/graphql" \
-d '{"query": "{ repository(owner: \"username\", name: \"repo\") { fundingLinks { platform url } } }"}'
# Check for FUNDING.yml
curl "https://raw.githubusercontent.com/username/repo/main/.github/FUNDING.yml"
Foundation membership signals:
Projects under foundations often have governance and funding:
- Apache Software Foundation: apache.org/index.html#projects-list
- Cloud Native Computing Foundation: cncf.io/projects
- Linux Foundation: Various projects under LF umbrella
- Python Software Foundation, OpenJS Foundation (formerly the Node.js Foundation), etc.
Foundation membership doesn't guarantee active maintenance, but provides governance structure and often funding.
4. Indirect Activity Signals
When explicit commitment isn't documented, infer from observable behavior:
Strong support signals:
- Regular releases (weekly/monthly/quarterly depending on project type)
- Issues and PRs responded to within days
- Security advisories handled promptly with patches
- Active roadmap with completed milestones
- Multiple employed maintainers (check contributor profiles)
- Long-term support (LTS) versions maintained
Weak support signals:
- Sporadic releases with gaps of many months
- Issues accumulating without response
- PRs sitting for months
- No roadmap or abandoned roadmap
- Single volunteer maintainer
- No documented security process
5. Historical Patterns
Past behavior predicts future behavior:
Review vulnerability response history:
# Check GitHub Security Advisories
# Visit: https://github.com/username/repo/security/advisories
# Check CVE database
curl "https://services.nvd.nist.gov/rest/json/cves/2.0?keywordSearch=projectname"
Questions to answer:
- How quickly were past vulnerabilities patched?
- Were patches released for older supported versions?
- Were vulnerabilities disclosed responsibly?
- Did maintainers communicate clearly about issues?
Review release history:
- Consistent release cadence suggests ongoing commitment
- Sudden cessation of releases indicates potential abandonment
- Return to activity after silence suggests intermittent commitment
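Release-cadence analysis can be automated from the releases API. A sketch that flags a current gap well outside a project's historical rhythm (the 3x multiplier is an arbitrary sensitivity knob):

```python
# release_gaps.py -- detect cadence breaks that suggest abandonment.
import os
import statistics
from datetime import datetime, timezone

import requests

r = requests.get(
    "https://api.github.com/repos/expressjs/express/releases",
    headers={"Authorization": f"token {os.environ['GITHUB_TOKEN']}"},
    params={"per_page": 100},
)
r.raise_for_status()
dates = sorted(
    datetime.fromisoformat(rel["published_at"].replace("Z", "+00:00"))
    for rel in r.json()
    if rel.get("published_at")
)
if len(dates) >= 2:
    gaps = [(b - a).days for a, b in zip(dates, dates[1:])]
    median_gap = statistics.median(gaps)
    current_gap = (datetime.now(timezone.utc) - dates[-1]).days
    print(f"median gap: {median_gap}d, since last release: {current_gap}d")
    if current_gap > 3 * max(median_gap, 1):
        print("warning: current silence far exceeds historical cadence")
else:
    print("too few releases to assess cadence")
```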
Assessing Support vs. Your Needs:
Match dependency support level to your risk tolerance:
Critical Production Use (high stakes):
- Minimum requirement: Best effort with demonstrated activity
- Preferred: Promised support with funding/foundation backing
- Acceptable risk: Mature, stable projects with minimal support if well-tested and audited
- Unacceptable: No support, minimal support, or abandoned projects
Production Dependencies (moderate stakes):
- Minimum requirement: Minimal support with recent activity
- Preferred: Best effort or better
- Acceptable risk: No support if simple, auditable, and truly stable
- Monitor closely: Any project showing decline in activity
Development Tools (lower stakes):
- Minimum requirement: Any level if tool works as needed
- Consider: Can you maintain it yourself if abandoned?
- Risk: May need replacement if abandoned; budget for that possibility
Experimental/Proof-of-Concept:
- Acceptable: Any support level
- Requirement: Clear understanding this is experimental
Decision Framework:
Is this dependency critical to your application?
├─ Yes → Require best effort or promised support
│         └─ Is promised support verifiable and ongoing?
│            └─ No → Seek alternatives or accept documented risk
└─ No → Is minimal support acceptable for this use case?
          └─ Monitor for decline; have replacement plan
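The same framework can be expressed as a small function. A sketch (the support levels mirror the archetype table; the return strings are the framework's actions):

```python
# support_decision.py -- the decision framework above as a function.
def support_decision(critical, level, verifiable=False):
    """level: one of "none", "minimal", "best-effort", "promised"."""
    if critical:
        if level == "promised" and verifiable:
            return "Use; verify the commitment periodically"
        if level == "best-effort":
            return "Use with close monitoring"
        return "Seek alternatives or accept documented risk"
    if level in ("best-effort", "promised"):
        return "Use; routine monitoring"
    return "Use if acceptable; monitor for decline and keep a replacement plan"

print(support_decision(critical=True, level="minimal"))
print(support_decision(critical=False, level="minimal"))
```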
Tools That Parse Security Insights:
Several platforms can aggregate and display Security Insights data:
- CLOMonitor: CNCF project monitoring, tracks security insights
- LFX Insights: Linux Foundation metrics dashboard
- OSPS Baseline Scanner: OpenSSF security baseline checks
These tools provide aggregated views when projects adopt Security Insights.
Documenting Your Assessment:
Include support commitment in your dependency evaluation:
## Dependency Assessment: [package-name]
### Support Commitment
- Archetype: [No/Minimal/Best Effort/Promised]
- Evidence: [SECURITY-INSIGHTS.yml shows active status | Foundation membership | Single maintainer hobby project]
- Funding: [GitHub Sponsors with 5 sponsors | Corporate backing by XYZ | None visible]
- Last release: [Date]
- Vulnerability response: [Historical response within 14 days | No known vulnerabilities to assess | Slow response >60 days]
### Risk Assessment
- Suitable for: [Critical production use | Development tools only | Not recommended]
- Monitoring required: [Weekly | Monthly | Quarterly]
- Replacement plan: [None needed | Identified alternative: X | No viable alternative, accept risk]
Recommendations:
For developers selecting dependencies:
- Check for SECURITY-INSIGHTS.yml first. If present, trust it as authoritative unless activity contradicts it.
- Verify promised support. Don't assume—look for funding evidence, employed maintainers, foundation backing.
- Match support to criticality. Critical dependencies need demonstrated commitment; development tools can accept less.
- Document your assumptions. Record what support level you're relying on; review periodically.
For security practitioners:
- Include support assessment in reviews. When evaluating new dependencies, assess maintenance commitment as part of the security review.
- Flag commitment mismatches. Critical dependencies with minimal support require mitigation (monitoring, replacement plans, contributing to maintenance).
- Monitor for status changes. Projects move between archetypes. Maintainer departure or funding loss changes the risk profile.
- Encourage Security Insights adoption. Ask dependencies to publish SECURITY-INSIGHTS.yml; it reduces assessment burden.
For organizations:
- Define support requirements by criticality. Establish policies: "Tier 1 dependencies require promised support; Tier 2 require best effort," etc.
- Budget for maintenance contribution. If you depend critically on best-effort projects, consider funding maintenance or contributing engineering time.
- Plan for support degradation. When a dependency's support level declines, have a process to re-evaluate or replace it.
- Build internal support capability. For critical dependencies, ensure you could maintain them internally if external support vanishes.
Understanding project support commitments transforms dependency selection from guesswork into informed risk management. A package with minimal support isn't necessarily bad—if you're prepared for that reality. The danger lies in assuming ongoing maintenance that doesn't exist, then discovering the assumption was wrong when a critical vulnerability needs patching.
Project health assessment provides the qualitative context that dependency counting lacks. A project with 50 well-maintained dependencies may be safer than one with 10 unmaintained packages. By systematically evaluating maintainer activity, community health, security practices, maturity, documentation, and support commitments, organizations can make informed decisions about which dependencies to trust, which to monitor, and which to avoid.