13.3 Vendoring vs. Dynamic Dependency Resolution¶
When the left-pad package was abruptly removed from the npm registry in March 2016, thousands of build pipelines failed worldwide. Developers who had vendored their dependencies continued building; those relying on dynamic resolution watched helplessly. This incident, and countless smaller ones since, highlights a fundamental architectural choice: should you copy dependencies into your repository, or fetch them at build time?
This section compares vendoring (copying dependencies into your codebase) with dynamic dependency resolution (fetching dependencies from registries at build time), examining their security trade-offs and helping you choose the right approach for your context.
What Is Vendoring?¶
Vendoring means copying dependency source code or binaries directly into your project's repository, creating a self-contained codebase that doesn't require external fetches to build.
How Vendoring Works:
Instead of fetching packages from a registry at install time, your repository contains the dependencies themselves:
your-project/
├── src/
├── package.json
└── vendor/            # Committed to repository
    ├── lodash/
    ├── express/
    └── ...
The build process uses vendored dependencies rather than fetching from external registries.
Vendoring Across Ecosystems:
| Ecosystem | Vendoring Approach | Command/Pattern |
|---|---|---|
| Go | Native vendor/ directory | go mod vendor |
| Python | Manual or pip download | Copy to vendor/, use --find-links |
| npm | bundledDependencies or manual | Commit node_modules/ or subset |
| Ruby | bundle package | Creates vendor/cache/ |
| Rust | cargo vendor | cargo vendor creates vendor/ |
| PHP | Commit vendor/ directory | composer install then commit |
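To make the Python row concrete, a minimal vendoring workflow might look like the sketch below; the vendor/ directory name and requirements.txt file are assumptions, not fixed conventions:

```bash
# Download wheels/sdists for every requirement into vendor/
pip download -r requirements.txt -d vendor/

# Install later without contacting any registry, using only the vendored files
pip install --no-index --find-links vendor/ -r requirements.txt
```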
Go Vendoring Example:
# Create vendor directory with all dependencies
go mod vendor
# Build using vendored dependencies
go build -mod=vendor ./...
# Verify downloaded modules in the module cache match go.sum
go mod verify
Go's vendor directory contains complete source code for all dependencies, enabling builds without network access.
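Alongside the copied sources, Go writes a vendor/modules.txt manifest recording which module versions were vendored, and the toolchain cross-checks it against go.mod during -mod=vendor builds. An illustrative excerpt (module names and versions are examples only):

```text
# github.com/pkg/errors v0.9.1
## explicit
github.com/pkg/errors
# golang.org/x/sys v0.15.0
golang.org/x/sys/unix
```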
Advantages of Vendoring¶
Vendoring provides significant security and operational benefits.
Complete Control:
With vendored dependencies:
- You decide exactly what code enters your repository
- No external entity can change what you build
- Registry compromises don't affect you immediately
- Deleted packages don't break your builds
Build Availability:
- Build succeeds even when registries are down
- Air-gapped environments can build without network access
- Historical versions remain buildable indefinitely
- No dependency on external infrastructure
Auditability:
- All code is in your repository for review
- Security scans cover the actual code you ship (example below)
- Diff shows exactly what changed between versions
- Compliance audits can examine complete codebase
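Because the vendored sources are ordinary files in your working tree, the same filesystem scanners you run over your own code cover them too. A minimal sketch, assuming a scanner such as Trivy is installed:

```bash
# Scan the repository, including everything committed under vendor/
trivy fs .
```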
Supply Chain Isolation:
Vendoring creates a security boundary:
┌─────────────────────────────────────────────┐
│ Your Repository │
│ ┌─────────────┐ ┌─────────────────────┐ │
│ │ Your Code │ │ Vendored Deps │ │
│ └─────────────┘ │ (controlled copy) │ │
│ └─────────────────────┘ │
└─────────────────────────────────────────────┘
▲
│ One-time copy, under your review
│
┌──────────┴──────────┐
│ Package Registry │
│ (untrusted after │
│ initial fetch) │
└─────────────────────┘
Organizations that have experienced upstream dependency compromises often turn to vendoring critical dependencies. The additional work to update vendored packages is intentional friction: it forces review of every change before it enters the codebase.
Disadvantages of Vendoring¶
Vendoring introduces its own challenges.
Maintenance Burden:
- Manual process to update vendored dependencies
- Easy to forget updates, leading to stale versions
- Security patches don't flow automatically
- Each update requires explicit action
Update Workflow:
Updating vendored dependencies requires deliberate process:
# 1. Update the dependency specification
go get -u github.com/pkg/dependency@v1.2.3
# 2. Re-vendor
go mod vendor
# 3. Review changes
git diff vendor/
# 4. Commit
git add vendor/
git commit -m "Update dependency to v1.2.3"
This friction can lead to neglected updates.
Repository Bloat:
Vendored dependencies significantly increase repository size:
| Project Type | Without Vendoring | With Vendoring |
|---|---|---|
| Small Node app | ~1 MB | ~50-200 MB |
| Go microservice | ~100 KB | ~10-50 MB |
| Python application | ~500 KB | ~20-100 MB |
Large repositories slow clones, increase storage costs, and complicate diffs.
Hidden Modifications:
If developers modify vendored code without documentation, you lose the ability to cleanly update:
# "Why won't this update cleanly?"
# Because someone patched vendor/lodash/index.js 6 months ago
# without documenting or upstreaming the fix
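One way to catch both undocumented patches and stale vendor trees is to re-vendor in CI and fail if the result differs from what is committed. A minimal sketch for a Go project; the script name and CI wiring are assumptions:

```bash
#!/usr/bin/env sh
# check-vendor.sh: fail if vendor/ is out of sync with go.mod/go.sum
set -e

go mod vendor
if [ -n "$(git status --porcelain vendor/)" ]; then
    echo "vendor/ does not match 'go mod vendor' output; re-vendor or document the patch" >&2
    git status --short vendor/
    exit 1
fi
```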
Transitive Complexity:
When vendored dependencies have their own dependencies, you must vendor the entire tree—which may have version conflicts or duplications.
Dynamic Dependency Resolution¶
Dynamic dependency resolution fetches dependencies from registries at build time based on specifications in manifest files.
How Dynamic Resolution Works:
# Dependencies declared in manifest
cat package.json
{
"dependencies": {
"express": "^4.18.0"
}
}
# Fetched at build time
npm install # Contacts registry, resolves versions, downloads
Each build contacts external registries to resolve and download dependencies.
Advantages of Dynamic Resolution¶
Automatic Updates:
With lockfiles and appropriate version ranges:
- Security patches can flow automatically
- Minimal effort to stay current
- Reduced maintenance burden
Smaller Repositories:
- Only your code in version control
- Faster clones and operations
- Cleaner diffs focused on your changes
Ecosystem Tooling:
Package managers are designed for dynamic resolution:
- Sophisticated version resolution algorithms
- Conflict detection and resolution
- Ecosystem-wide update tools (Dependabot, Renovate; example below)
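For instance, GitHub's Dependabot is configured through a small YAML file committed to the repository; the sketch below assumes an npm project on a weekly update cadence:

```yaml
# .github/dependabot.yml
version: 2
updates:
  - package-ecosystem: "npm"
    directory: "/"          # where package.json lives
    schedule:
      interval: "weekly"
```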
Disadvantages of Dynamic Resolution¶
Registry Dependency:
Your builds depend on external infrastructure:
- Registry outages break builds
- Removed packages break builds (left-pad, 2016)
- Registry compromises can inject malicious code
Supply Chain Exposure:
Every build is an opportunity for attack:
┌─────────────────────────────────────────────┐
│ Your Build │
│ │ │
│ │ Every build fetches │
│ ▼ from external source │
└─────────────────────────────────────────────┘
│
┌──────────┴──────────┐
│ Package Registry │
│ (trusted every │
│ build) │
└─────────────────────┘
Reproducibility Challenges:
Without careful lockfile management:
- Different builds may get different versions
- Historical builds may not reproduce
- "Works on my machine" problems multiply
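In npm, for example, committing package-lock.json and installing with npm ci (instead of npm install) pins every transitive version and fails fast if the lockfile and manifest disagree:

```bash
# Resolve versions once and commit the result
npm install                # writes/updates package-lock.json
git add package.json package-lock.json

# In CI and production builds, install exactly what the lockfile records
npm ci
```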
Network Requirements:
- Build environments need registry access
- Air-gapped environments can't build
- Network latency affects build times
Hybrid Approaches¶
Most organizations adopt hybrid approaches that combine benefits of both strategies.
Artifact Repository Proxies:
Tools like JFrog Artifactory, Sonatype Nexus, or AWS CodeArtifact act as caching proxies:
┌─────────────────────────────────────────────────────────┐
│ Your Build │
│ │ │
│ ▼ │
│ ┌─────────────────┐ │
│ │ Artifact Proxy │ ← Scans, caches, │
│ │ (Artifactory) │ applies policy │
│ └────────┬────────┘ │
└───────────────────────┼─────────────────────────────────┘
│
▼
┌─────────────────┐
│ Public Registry │
│ (npm, PyPI) │
└─────────────────┘
Proxy Benefits:
- Caching: Dependencies cached locally; builds continue if upstream fails
- Scanning: Scan packages before caching
- Policy: Block packages that don't meet criteria
- Audit: Log all package requests
- Availability: Cached packages available even if deleted upstream
Configuration Example:
# npm configuration for Artifactory
npm config set registry https://artifactory.example.com/api/npm/npm-remote/
# pip configuration for Nexus
pip install --index-url https://nexus.example.com/repository/pypi-proxy/simple/ <package>
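To make the proxy the default for every build rather than a per-command flag, the same settings can live in configuration files (a project-level .npmrc for npm, a user-level pip.conf for pip); a sketch using the placeholder URLs from above:

```ini
# .npmrc (npm, committed alongside package.json)
registry=https://artifactory.example.com/api/npm/npm-remote/

# ~/.config/pip/pip.conf (pip; location varies by platform)
[global]
index-url = https://nexus.example.com/repository/pypi-proxy/simple/
```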
Selective Vendoring:
Vendor critical dependencies while using dynamic resolution for others:
your-project/
├── src/
├── vendor/
│ ├── crypto-library/ # Vendored - security critical
│ └── auth-framework/ # Vendored - security critical
├── package.json # Other deps fetched dynamically
└── package-lock.json
This balances control over sensitive code with convenience for less critical dependencies.
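In an npm project, one way to wire vendored packages into an otherwise dynamic manifest is a local path dependency; the package names and paths below are illustrative:

```json
{
  "dependencies": {
    "crypto-library": "file:vendor/crypto-library",
    "auth-framework": "file:vendor/auth-framework",
    "express": "^4.18.0"
  }
}
```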
Private Mirrors:
Maintain complete mirrors of public registries:
- All packages copied to internal infrastructure
- Builds use only internal mirrors
- Updates controlled through mirror sync process
This is operationally expensive but provides maximum isolation.
Comparison Table¶
| Factor | Vendoring | Dynamic Resolution | Hybrid (Proxy) |
|---|---|---|---|
| Registry dependency | None | High | Low (cached) |
| Build availability | Always | Depends on registry | Usually |
| Security isolation | High | Low | Medium |
| Update effort | High (manual) | Low (automatic) | Medium |
| Repository size | Large | Small | Small |
| Audit capability | Complete | Requires extra tooling | Good |
| Policy enforcement | At update time | Build time | Both |
| Air-gapped support | Yes | No | Partial |
| Initial setup | Medium | Low | High |
| Operational cost | Low | Low | Medium-High |
Decision Framework¶
Use this framework to choose the right approach:
Consider Vendoring When:
| Scenario | Why Vendoring Helps |
|---|---|
| High-security environments | Maximum control over code |
| Air-gapped deployments | No network access during build |
| Critical dependencies | Can't afford supply chain compromise |
| Regulated industries | Compliance requires code audit |
| Long-term support products | Must build years later |
| Small dependency footprint | Manageable overhead |
Consider Dynamic Resolution When:
| Scenario | Why Dynamic Helps |
|---|---|
| Rapid development | Minimal friction for updates |
| Large dependency trees | Vendoring overhead too high |
| Non-critical applications | Lower security requirements |
| Active upstream development | Want latest patches quickly |
| CI/CD with good connectivity | Network access reliable |
Consider Hybrid (Proxy) When:
| Scenario | Why Hybrid Helps |
|---|---|
| Enterprise environments | Balance control with convenience |
| Multiple teams/projects | Shared infrastructure for all |
| Compliance with flexibility | Policy enforcement without full vendoring |
| Moderate security requirements | Better than pure dynamic, less than vendoring |
| Variable connectivity | Cache handles intermittent issues |
Decision Tree:
Is this a security-critical dependency?
├── Yes → Consider vendoring
│ └── Is update overhead acceptable?
│ ├── Yes → Vendor
│ └── No → Hybrid with strict policy
│
└── No → Do you have reliable infrastructure?
├── Yes → Dynamic with lockfiles
└── No → Hybrid with caching proxy
Recommendations¶
For Security-Critical Applications:
- Vendor security-sensitive dependencies. Cryptographic libraries, authentication frameworks, and other security-critical code deserve the extra protection.
- Implement review processes. When updating vendored dependencies, review changes explicitly. This friction is intentional.
- Document modifications. If you must patch vendored code, document why and plan for upstream contribution.
For Most Applications:
- Deploy an artifact repository proxy. Artifactory, Nexus, or cloud equivalents provide security benefits without vendoring overhead.
- Enable caching aggressively. Cache everything your builds need. Test that builds succeed with upstream offline.
- Implement proxy policies. Block packages with critical vulnerabilities, suspicious characteristics, or policy violations.
For Development Teams:
- Use lockfiles religiously. Even with dynamic resolution, lockfiles provide reproducibility and reduce supply chain exposure.
- Consider selective vendoring. You don't have to vendor everything. Identify your most critical dependencies and vendor those.
- Test offline builds. Periodically verify that your builds succeed without public registry access; this tests your resilience (a sketch follows below).
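A simple way to run that check is to use the package manager's offline mode after priming its cache; the sketch below assumes an npm project whose cache already holds every locked package:

```bash
# Prime the local cache while online
npm ci

# Simulate a registry outage: reinstall from cache only
rm -rf node_modules
npm ci --offline        # fails if anything would require the network

# If this succeeds, the build does not depend on the public registry being up
npm run build
```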
For Organizations:
- Standardize approach. Choose a strategy and apply it consistently. Mixed approaches create confusion and gaps.
- Invest in infrastructure. Artifact proxies require setup and maintenance but pay dividends in security and reliability.
- Plan for disaster. What happens if npm/PyPI/Maven Central disappears tomorrow? Your strategy should have an answer.
The choice between vendoring and dynamic resolution isn't binary—it's a spectrum with hybrid approaches in between. The right choice depends on your security requirements, operational capacity, and risk tolerance. Most organizations benefit from starting with a caching proxy, then selectively vendoring their most critical dependencies. This balanced approach provides meaningful supply chain protection without overwhelming maintenance burden.