Lesson 6: Containerized Development and Testing with Docker
Introduction
Throughout this course, we've treated npm security as a code-level concern — lockfiles, version pinning, scanners. But there's a deeper problem: when you run npm install on your host machine, every package you install has full access to your operating system. A malicious post-install script can read your SSH keys, scan your browser profiles, exfiltrate AI/agent tokens, and access every other repo on your machine.
Containers solve this by creating an isolated boundary between your project and everything else. In this lesson, we'll build a complete containerized development and testing workflow using Docker — from development environments to production images — where npm security is baked into the infrastructure itself.
Why Containers Are a Security Boundary
When you run npm install directly on your laptop, the install process shares the OS namespace with everything else you do. A malicious package running a post-install script can:
- Read `~/.ssh/` and exfiltrate your private keys
- Scan `~/.aws/credentials` or `~/.config/gcloud/` for cloud credentials
- Access browser profiles and cookies
- Read environment variables containing API tokens
- Access other repos and source code on your filesystem
- Install persistent backdoors that survive project deletion
A Docker container isolates your project. The container has its own filesystem, its own network namespace, and its own process tree. A malicious package inside the container can't see your host's files, SSH keys, or other projects — unless you explicitly mount them.
This doesn't make containers bulletproof (a container escape vulnerability would bypass this), but it raises the bar dramatically compared to bare-metal development.
Part 1: Dev Containers — Isolating Your Development Environment
VS Code Dev Containers
The simplest way to containerize your development workflow is VS Code Dev Containers. You define a container configuration, and VS Code runs your entire development environment inside it.
Create .devcontainer/devcontainer.json in your project root:
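A minimal configuration might look like the sketch below; the name, image, and commands are illustrative (pin a specific tag or digest in practice):

```jsonc
{
  // Illustrative name and image; pin a tag or digest in real projects
  "name": "npm-sandbox",
  "image": "mcr.microsoft.com/devcontainers/javascript-node:20",
  // All dependency installation happens inside the container
  "postCreateCommand": "npm ci --ignore-scripts",
  // Run as the image's unprivileged user, not root
  "remoteUser": "node"
}
```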
To use it:
- Open the project folder in VS Code
- Press `Ctrl/Cmd + Shift + P` → "Dev Containers: Reopen in Container"
- All installs, builds, and test runs now happen inside the container
Your host machine never runs npm install. If a malicious package executes code during installation, it's confined to the container.
Hardening the Dev Container
The default dev container configuration provides basic isolation, but you can go further by dropping Linux capabilities and restricting privilege escalation:
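A sketch of the hardened configuration, using the same illustrative image as above; each flag is explained in the list that follows:

```jsonc
{
  "name": "npm-sandbox-hardened",
  "image": "mcr.microsoft.com/devcontainers/javascript-node:20",
  // Extra arguments passed to "docker run" when the container starts
  "runArgs": [
    "--security-opt=no-new-privileges:true",
    "--cap-drop=ALL",
    "--cap-add=CHOWN",
    "--cap-add=SETUID",
    "--cap-add=SETGID"
  ],
  "containerEnv": {
    // Mitigates prototype pollution at the runtime level
    "NODE_OPTIONS": "--disable-proto=delete"
  },
  "remoteUser": "node",
  "postCreateCommand": "npm ci --ignore-scripts"
}
```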
What each security option does:
- `--security-opt=no-new-privileges:true`: Prevents processes inside the container from gaining additional privileges (e.g., via `setuid` binaries). If a malicious script tries to escalate permissions, it's blocked.
- `--cap-drop=ALL`: Drops all Linux capabilities by default. The container can't do things like modify network settings, load kernel modules, or access raw devices.
- `--cap-add=CHOWN,SETUID,SETGID`: Adds back only the minimum capabilities needed for npm to manage file ownership during installation.
- `NODE_OPTIONS=--disable-proto=delete`: Disables `__proto__` access in Node.js, which mitigates an entire class of prototype pollution attacks at the runtime level.
What NOT to Mount into Dev Containers
Be careful about what host directories you share with the container:
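For example, mounts like the following defeat the isolation boundary entirely (paths are illustrative; they're shown commented out for a reason):

```jsonc
{
  "name": "npm-sandbox",
  "image": "mcr.microsoft.com/devcontainers/javascript-node:20",
  "mounts": [
    // DON'T: exposes your SSH private keys to every install script
    // "source=${localEnv:HOME}/.ssh,target=/home/node/.ssh,type=bind",
    // DON'T: exposes cloud credentials
    // "source=${localEnv:HOME}/.aws,target=/home/node/.aws,type=bind",
    // DON'T: exposes every other project on your machine
    // "source=${localEnv:HOME},target=/host-home,type=bind"
  ]
}
```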
If you need Git operations inside the container, use VS Code's built-in credential forwarding rather than mounting your .ssh directory.
Part 2: Production Dockerfiles — Multi-Stage Builds
For production deployments, a well-crafted Dockerfile is your security perimeter. The key principles are: minimize the attack surface, never run as root, and separate build-time dependencies from runtime.
The Insecure Way (Don't Do This)
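A typical version of this anti-pattern looks like the sketch below (the entry point is an assumption):

```dockerfile
# DON'T do this
FROM node:latest
WORKDIR /app
COPY . .
RUN npm install
EXPOSE 3000
CMD ["node", "server.js"]
```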
Problems with this approach:
- Runs as `root` — any compromised code has full container access
- Includes the full Node.js build toolchain in the final image
- Uses `npm install` instead of `npm ci` — may resolve different versions
- Copies everything, including `.git`, `.env`, test files, etc.
- No `.dockerignore` guidance — secrets could leak into the image
The Secure Way: Multi-Stage Build
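Here's a sketch of the three-stage layout. The image tag, the `dist/` output directory, the `npm run build` step, and the `appuser` account are assumptions; adapt them to your project:

```dockerfile
# syntax=docker/dockerfile:1

# --- Stage 1 (deps): production dependencies only, no lifecycle scripts ---
FROM node:20.11-alpine AS deps
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci --omit=dev --ignore-scripts

# --- Stage 2 (builder): full install plus compile; nothing here ships ---
FROM node:20.11-alpine AS builder
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci --ignore-scripts
COPY . .
RUN npm run build

# --- Stage 3 (production): minimal runtime image ---
FROM node:20.11-alpine AS production
WORKDIR /app
ENV NODE_ENV=production
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
COPY --from=deps --chown=appuser:appgroup /app/node_modules ./node_modules
COPY --from=builder --chown=appuser:appgroup /app/dist ./dist
COPY --chown=appuser:appgroup package.json ./
USER appuser
EXPOSE 3000
CMD ["node", "dist/server.js"]
```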
Why This Structure Works
Stage 1 (deps): Installs production dependencies with --ignore-scripts — no post-install code runs. If a compromised package relies on a post-install script to activate its payload, it never executes.
Stage 2 (builder): Builds the application. This stage includes devDependencies (TypeScript compiler, bundler, etc.) but none of these ship to production.
Stage 3 (production): The final image contains only the compiled output and production node_modules. No source code, no build tools, no devDependencies, no .git directory, no test files. The attack surface is minimal.
Essential .dockerignore
Always create a .dockerignore to prevent sensitive files from being copied into the build context:
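A typical starting point (adjust the entries to your project's layout):

```text
.git
.env
.env.*
node_modules
npm-debug.log
Dockerfile*
docker-compose*.yml
.devcontainer
coverage
test/
docs/
README.md
```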
Part 3: Network-Restricted Installs
One of the most effective container security patterns is installing dependencies with no network access. The idea: copy your lockfile and cached packages into the container, install offline, and then only enable networking when the application runs.
Docker Compose with Network Isolation
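A sketch of the pattern, assuming the install resolves entirely from the lockfile and a cache seeded into the build context (e.g., via `npm ci --offline --cache ./.npm-cache` in the Dockerfile); service names and ports are illustrative:

```yaml
# docker-compose.yml
services:
  app:
    build:
      context: .
      # Disable all network access during the image build; npm ci must
      # resolve everything from the lockfile and the pre-seeded cache
      network: none
    ports:
      - "3000:3000"
    networks:
      - backend

networks:
  backend:
    # internal: true means no outbound route to the internet at runtime
    internal: true
```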
Why Network Restriction Matters
Many supply chain attacks rely on the malicious code making outbound network requests — sending stolen credentials to an attacker's server, downloading second-stage payloads, or establishing command-and-control connections. If the container has no network access during npm install, these exfiltration attempts fail silently.
The September 2025 attack payload exfiltrated credentials to websocket-api2.publicvm.com. In a network-restricted container, this connection would be blocked.
Part 4: Running Tests in Containers
Testing in containers ensures consistent environments and prevents the "works on my machine" problem. More importantly for security, it means your test suite runs in an isolated environment where a compromised dependency can't affect your host.
docker-compose.test.yml
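A sketch of a test compose file; the service and network names are illustrative:

```yaml
# docker-compose.test.yml
services:
  test:
    build:
      context: .
      dockerfile: Dockerfile.test
    environment:
      - CI=true
    # No published ports, and only an internal network: a compromised
    # dependency in the test run has nowhere to send data
    networks:
      - test-net
    command: ["npm", "test"]

networks:
  test-net:
    internal: true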
Dockerfile.test
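A sketch of the corresponding test image, assuming your suite runs via `npm test` (devDependencies are needed here for the test runner):

```dockerfile
# Dockerfile.test
FROM node:20.11-alpine
WORKDIR /app
COPY package.json package-lock.json ./
# Full install, but still with lifecycle scripts disabled
RUN npm ci --ignore-scripts
COPY . .
# Run tests as the image's unprivileged user
USER node
CMD ["npm", "test"]
```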
Running Tests
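With those two files in place, a typical invocation looks like this:

```bash
# Build and run the test suite. --exit-code-from propagates the test
# container's exit status to your shell, so a failing suite fails CI.
docker compose -f docker-compose.test.yml up \
  --build --abort-on-container-exit --exit-code-from test

# Tear down containers and networks afterwards
docker compose -f docker-compose.test.yml down
```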
Part 5: Scanning Container Images
Even a well-built Docker image can contain vulnerabilities — in Node.js itself, in Alpine's system libraries, or in your npm dependencies. Scanning the final image adds another security layer.
Trivy: Container Image Scanning
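Basic usage looks like this (the image name is illustrative):

```bash
# Scan a locally built image for known CVEs
trivy image myapp:latest

# Fail (non-zero exit) only on severe, fixable findings
trivy image --severity CRITICAL,HIGH --ignore-unfixed --exit-code 1 myapp:latest
```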
Integrate Image Scanning into CI/CD
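A sketch of a GitHub Actions job step using `aquasecurity/trivy-action`; the image name and thresholds are illustrative, and you should pin the action to a release tag rather than `master`:

```yaml
- name: Build image
  run: docker build -t myapp:${{ github.sha }} .

- name: Scan image with Trivy
  uses: aquasecurity/trivy-action@master
  with:
    image-ref: myapp:${{ github.sha }}
    severity: CRITICAL,HIGH
    ignore-unfixed: true
    exit-code: "1" # fail the build on findings
```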
Docker Scout (Docker's Built-In Scanner)
If you use Docker Desktop, Docker Scout provides vulnerability scanning directly:
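```bash
# Quick summary of the base image and its vulnerabilities
docker scout quickview myapp:latest

# Full CVE listing for the image
docker scout cves myapp:latest
```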
Part 6: Secrets Management in Containers
One of the biggest mistakes is baking secrets into Docker images. Even if you delete a .env file in a later Dockerfile layer, it remains in the image history.
What NOT to Do
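A sketch of the anti-pattern (the setup script is hypothetical):

```dockerfile
# DON'T: the .env file is baked into an image layer. Even though a
# later command deletes it, the COPY layer still contains it and can
# be recovered with "docker history" or by extracting the layer.
FROM node:20.11-alpine
WORKDIR /app
COPY .env .env
RUN npm run setup && rm .env
```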
What to Do Instead
For build-time secrets (e.g., private npm registry tokens):
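Use BuildKit secret mounts, which expose the secret to a single `RUN` step without ever writing it to a layer (the secret id `npmrc` is an arbitrary label):

```dockerfile
# syntax=docker/dockerfile:1
FROM node:20.11-alpine
WORKDIR /app
COPY package.json package-lock.json ./
# The token is available only for this RUN step and leaves no trace
# in the image history
RUN --mount=type=secret,id=npmrc,target=/root/.npmrc \
    npm ci --ignore-scripts
```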
Build with:
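```bash
# Requires BuildKit (the default builder in recent Docker versions)
docker build --secret id=npmrc,src=$HOME/.npmrc -t myapp:latest .
```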
For runtime secrets (e.g., API keys, database passwords):
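Inject them when the container starts, never at build time (variable names and image are illustrative):

```bash
# "-e VAR" with no value passes the host environment's value through
# without putting the secret on the command line
docker run --rm -e DATABASE_URL -e API_KEY myapp:latest

# Or load them from a file excluded by both .gitignore and .dockerignore
docker run --rm --env-file ./secrets.env myapp:latest
```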
For production: Use your cloud provider's secrets manager (AWS Secrets Manager, GCP Secret Manager, HashiCorp Vault) and inject secrets at container startup, never during image build.
Part 7: Complete Project Template
Here's a complete project structure combining everything from this lesson:
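One possible layout (file names match the examples in this lesson; adapt as needed):

```text
my-app/
├── .devcontainer/
│   └── devcontainer.json      # hardened dev environment (Part 1)
├── .dockerignore              # keeps secrets and tests out of the build context
├── .npmrc                     # project-level npm hardening
├── Dockerfile                 # multi-stage production build (Part 2)
├── Dockerfile.test            # isolated test image (Part 4)
├── docker-compose.yml         # network-restricted runtime (Part 3)
├── docker-compose.test.yml    # containerized test runs (Part 4)
├── .github/
│   └── workflows/
│       └── ci.yml             # build, test, scan pipeline (Part 5)
├── package.json
├── package-lock.json
└── src/
```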
.npmrc (Project-Level)
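A minimal hardened `.npmrc`, using settings covered earlier in the course:

```ini
# Never run package lifecycle scripts (postinstall etc.) on install
ignore-scripts=true
# Pin exact versions when adding dependencies
save-exact=true
# Fail fast if Node/npm versions don't match package.json "engines"
engine-strict=true
```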
Complete CI/CD Pipeline
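A sketch of a GitHub Actions workflow tying the pieces together; job names, the image name, and action refs are illustrative, and registry login/push steps are omitted:

```yaml
# .github/workflows/ci.yml
name: ci
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run tests in an isolated container
        run: |
          docker compose -f docker-compose.test.yml up \
            --build --abort-on-container-exit --exit-code-from test

  build-and-scan:
    needs: test
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build production image
        run: docker build -t myapp:${{ github.sha }} .
      - name: Scan image with Trivy
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: myapp:${{ github.sha }}
          severity: CRITICAL,HIGH
          ignore-unfixed: true
          exit-code: "1"
      # Push to the registry only after tests pass and the scan is clean
```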
Container Security Checklist
Image Build:
- Use multi-stage builds — separate build from runtime
- Use `npm ci --ignore-scripts` during install
- Run as a non-root user (`USER appuser`)
- Use Alpine or distroless base images for minimal attack surface
- Create a `.dockerignore` that excludes `.env`, `.git`, tests, docs
- Never bake secrets into image layers — use BuildKit secrets or runtime injection
- Pin base image versions (`node:20.11-alpine`, not `node:latest`)
Runtime:
- Drop all Linux capabilities, add back only what's needed
- Set `--security-opt=no-new-privileges:true`
- Restrict network egress — default-deny outbound connections
- Use a read-only filesystem where possible (`--read-only`)
- Set resource limits (memory, CPU) to contain DoS attacks
CI/CD:
- Scan images with Trivy or Docker Scout before pushing to registry
- Fail builds on critical/high vulnerabilities
- Use immutable image tags (SHA digests, not `latest`)
- Sign images for provenance verification
Development:
- Use Dev Containers for daily development
- Never run `npm install` directly on your host for untrusted projects
- Keep dev container images updated
Key Takeaways
- Running `npm install` on your host machine gives every package full access to your operating system — SSH keys, cloud credentials, browser profiles, and all your other projects.
- Dev Containers sandbox your development environment so a malicious package can't escape to your host.
- Multi-stage Docker builds separate build-time dependencies from the production image, minimizing the attack surface.
- Network-restricted installs prevent malicious packages from exfiltrating data during the install phase.
- Container image scanning (Trivy, Docker Scout) catches vulnerabilities in system libraries and Node.js itself, not just npm packages.
- Never bake secrets into images — use BuildKit secrets for build-time credentials and runtime injection for application secrets.
- Containers aren't a replacement for the practices in Lessons 1–5 — they're an additional layer that limits the blast radius when those practices fail.
What's Next
In Lesson 7, we bring everything together with an incident response playbook. What do you do when you discover a compromised package in your project — including inside your containers? How do you triage, remediate across containerized environments, and prevent recurrence? We'll walk through a real-world response scenario step by step and summarize the complete course framework.