
Development Environment

Overview

It is common practice to use virtual environment tools such as venv, pipenv, or virtualenv when running Python projects locally. They isolate project packages from the system Python and from other projects, keeping dependency versions consistent and preventing conflicts. For many projects that level of isolation is sufficient.

Depsight goes further by integrating DevContainers, a modern approach to virtualizing an entire development environment inside a Linux container. Instead of documenting setup steps in a README and hoping every contributor follows them correctly, a DevContainer defines and provisions the full environment as code automatically. Because Depsight's CI pipeline also builds a production Docker image, the DevContainer uses Docker outside of Docker (DooD) so developers can build and test the container image locally without leaving the DevContainer.

Docker outside of Docker (DooD)

DooD mounts the host's Docker socket into the container rather than running a separate Docker daemon inside it, which avoids the complexity and privilege requirements of Docker-in-Docker.
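Conceptually, the feature does the equivalent of the following docker run invocation. This is a hand-rolled sketch with an illustrative image name; in practice the mount is configured automatically by the docker-outside-of-docker feature:

```shell
# Sketch only: mount the host's Docker socket so the docker CLI inside the
# container talks directly to the host daemon (no nested daemon needed).
docker run -it \
    -v /var/run/docker.sock:/var/run/docker.sock \
    my-devcontainer-image \
    bash

# Inside the container, `docker ps` now lists the HOST's containers,
# because every Docker command goes straight to the host daemon.
```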

Virtual Development Environment

Traditional Python Virtual Environments

Python virtual environments create isolated spaces where each project can maintain its own set of dependencies without interfering with the system Python installation or other projects. Tools like venv, pipenv, and virtualenv solve the common problem of dependency conflicts by giving each project a self-contained directory of installed packages. This isolation is lightweight, easy to set up, and sufficient for many Python projects.

Among these tools, venv is the most widely used because it has been part of the Python standard library since Python 3.3. To create and activate a virtual environment with venv:

# macOS / Linux
python3 -m venv .venv
source .venv/bin/activate

# Windows
python -m venv .venv
.venv\Scripts\activate

Beyond Traditional Virtualization Techniques

While traditional virtual environments isolate Python packages effectively, DevContainers go further because they control the full operating system layer, not just Python. That makes them useful when a project depends on system tools, specific runtimes, or a development setup that should match CI. The trade-off is some overhead: developers need Docker (or another container runtime) installed and should understand the basics of working with containers. The table below shows when that extra complexity is worth it:

Capability | venv / pipenv | DevContainer
Keep project packages separate from the system Python and other projects | Yes | Yes
Guarantee every developer uses the exact same Python interpreter version | No | Yes
Install OS-level libraries via apt (e.g. gcc for C extensions, libpq for Postgres) | No | Yes
Ship tools like uv, ruff, or Nuitka compiler dependencies inside the environment | No | Yes
Automatically install editor extensions and apply workspace settings for every developer | No | Yes
Run the exact same OS, Python, and toolchain locally as the CI pipeline | No | Yes

IDE Support

The DevContainer specification is an open standard supported by multiple editors. VS Code supports it through the Dev Containers extension, JetBrains IDEs connect through Gateway, and the devcontainer CLI enables headless usage in automation and CI pipelines.
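As a sketch, headless usage looks roughly like this. The up and exec commands come from the @devcontainers/cli npm package; the pytest invocation is an assumption about this project's test runner:

```shell
# Install the reference CLI (requires Node.js)
npm install -g @devcontainers/cli

# Build and start the container defined in .devcontainer/devcontainer.json
devcontainer up --workspace-folder .

# Run a command inside the running DevContainer
devcontainer exec --workspace-folder . -- python -m pytest tests/
```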

Per-IDE configuration lives under the customizations key in devcontainer.json, so extensions and settings for different editors coexist in the same file without conflict. The customizations key is only read by IDE integrations such as VS Code and JetBrains; the devcontainer CLI is a headless runner and ignores this section entirely. This is one of the biggest quality-of-life improvements DevContainers offer: every developer opens the project and immediately gets the right extensions, formatters, and IDE behaviour. There is no manual setup and no "works on my machine" differences.

{
    "customizations": {
        "jetbrains": {
            "plugins": [
                "com.intellij.python",     // Python language support and debugger
                "com.jetbrains.plugins.ini" // TOML / INI file support for pyproject.toml
            ]
        },
        "vscode": {
            "extensions": [
                "ms-python.python",        // Python language support, IntelliSense, and debugging
                "charliermarsh.ruff",      // Fast linter and formatter — enforces code style on save
                "eamodio.gitlens"          // Inline Git blame, history, and branch comparisons
            ]
        }
    }
}

Running a Python venv inside the DevContainer by default

uv sync --all-groups runs as the postCreateCommand and creates a .venv/ directory whose prompt is named after the project (prompt = depsight in .venv/pyvenv.cfg). The ms-python.python extension then auto-detects the .venv/ directory and activates it in every new integrated terminal, with no manual step needed.
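For reference, an excerpt of what .venv/pyvenv.cfg might contain after the sync; the paths and version shown here are illustrative:

```ini
# .venv/pyvenv.cfg (excerpt, illustrative values)
home = /usr/local/bin
implementation = CPython
version_info = 3.12.0
prompt = depsight
```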


DevContainer Components

Project Structure

A DevContainer is configured through a .devcontainer/ folder at the root of the repository. The minimum required file is devcontainer.json. A Dockerfile is optional but recommended when the project needs system-level customizations beyond what a pre-built base image provides.

For complex post-creation routines such as configuring git hooks, installing additional tools, or running conditional setup logic, extracting the "postCreateCommand" into a dedicated shell script is recommended over embedding a long one-liner in devcontainer.json.

.devcontainer/
├── devcontainer.json
├── Dockerfile
└── postCreateCommand.sh
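A sketch of what such a script might look like; the git hooks path and the pre-commit step are assumptions for illustration, not part of Depsight's actual setup:

```shell
#!/usr/bin/env bash
# Hypothetical postCreateCommand.sh: runs once after the container is created.
set -euo pipefail

# Install project dependencies into .venv/
uv sync --all-groups

# Point git at the repo's hooks directory (assumption: hooks live in .githooks/)
git config core.hooksPath .githooks

# Conditional setup: only install pre-commit hooks if the tool is present
if command -v pre-commit >/dev/null 2>&1; then
    pre-commit install
fi
```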

Other Lifecycle Hooks

postStartCommand runs on every container start. postAttachCommand runs each time the IDE attaches to the running container.
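For illustration, these hooks sit alongside postCreateCommand in devcontainer.json; the commands shown here are hypothetical:

```json
{
    "postCreateCommand": "uv sync --all-groups",
    "postStartCommand": "git fetch --all --prune",
    "postAttachCommand": "echo 'Attached to the Depsight DevContainer'"
}
```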


DevContainer Configuration

The devcontainer.json is the central configuration file. It instructs the IDE how to build the container image, which extensions to install, which ports to forward, and which environment variables and lifecycle commands to apply.

The configuration below uses the following keys:

The build section points to the Dockerfile and passes build arguments. ${localEnv:PYTHON_VERSION:3.12} reads PYTHON_VERSION from the host machine's environment, which is useful when a developer wants to override the version without editing the file; the value after the colon is the fallback default when the variable is not set.

The features section adds pre-packaged capabilities from the DevContainer Features registry. Here, docker-outside-of-docker installs the Docker CLI inside the container and mounts the host's Docker socket (/var/run/docker.sock), so developers can build and test the Depsight production image without leaving the DevContainer while reusing the host daemon.

containerEnv injects environment variables into the running container, making them available to every process.

forwardPorts maps container ports to the host so they can be accessed from a browser or tool on the developer's machine.

workspaceFolder sets the path inside the container where the project is mounted; when omitted, the Dev Containers extension defaults to /workspaces/<repo-name>. The postCreateCommand runs with this folder as the working directory immediately after the project has been mounted:

{
    "name": "Depsight DevContainer",
    "build": {
        "context": "..",
        "dockerfile": "Dockerfile",
        "args": {
            "PYTHON_VERSION": "${localEnv:PYTHON_VERSION:3.12}",
            "UV_VERSION": "${localEnv:UV_VERSION:0.11.1}"
        }
    },
    "features": {
        "ghcr.io/devcontainers/features/docker-outside-of-docker:1": {
            "moby": false
        }
    },
    "customizations": {
        "vscode": {
            "settings": {
                "python.defaultInterpreterPath": "${containerWorkspaceFolder}/.venv/bin/python"
            }
        }
    },
    "containerEnv": {
        "APP_NAME": "DEPSIGHT",
        "DEPSIGHT_ENV": "development"
    },
    "forwardPorts": [8000],
    "mounts": [
        "source=depsight-uv-cache,target=/home/vscode/.cache/uv,type=volume"
    ],
    "portsAttributes": {
        "8000": {
            "label": "MkDocs Dev Server",
            "onAutoForward": "notify"
        }
    },
    "postCreateCommand": "uv sync --all-groups",
    "workspaceFolder": "/workspaces/${localWorkspaceFolderBasename}"
}
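The ${localEnv:VAR:default} fallback behaves like shell parameter expansion with a default value, which makes the semantics easy to sanity-check locally:

```shell
# Analogy for ${localEnv:PYTHON_VERSION:3.12}: the text after the last colon
# is used only when the variable is unset on the host.
unset PYTHON_VERSION
echo "${PYTHON_VERSION:-3.12}"    # prints: 3.12

PYTHON_VERSION=3.13
echo "${PYTHON_VERSION:-3.12}"    # prints: 3.13
```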

Lifecycle Commands

The "postCreateCommand" runs once after the container is created and the project has been mounted into the "workspaceFolder". It is typically used to install project dependencies with a command such as uv sync --all-groups or npm install.

Installing dependencies inside the Dockerfile instead would not work, since the Dockerfile builds the image before the project is mounted. When the Dev Containers extension mounts the workspace into "workspaceFolder", it overlays that path in the container filesystem, hiding anything that was written there during the image build. Running the install in "postCreateCommand" ensures it happens after the mount.


Container Image

The Dockerfile defines the content of the container image — the pre-installed system tools, users, and their permissions — while devcontainer.json controls how the IDE integrates with that image and which lifecycle commands to run.

When devcontainer.json includes a build block, the IDE builds the image from the Dockerfile before starting the container. Without one, DevContainers use a pre-built image directly.

Depsight's Dockerfile is intentionally minimal: it extends the Microsoft DevContainer base image and only adds what the base image doesn't already include:

ARG PYTHON_VERSION="3.12"
FROM mcr.microsoft.com/devcontainers/python:${PYTHON_VERSION}

ARG UV_VERSION="0.11.1"
RUN curl -LsSf https://astral.sh/uv/${UV_VERSION}/install.sh \
    | UV_INSTALL_DIR=/usr/local/bin sh

ENV PYTHONUNBUFFERED=1
ENV APP_NAME=DEPSIGHT
ENV DEPSIGHT_ENV=development

EXPOSE 8000
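The same image can also be built by hand to verify the build arguments, roughly like this; run from the repository root, with an illustrative image tag:

```shell
# Build the DevContainer image manually, overriding the Python version.
# The final "." is the build context, i.e. the repo root
# ("context": ".." in devcontainer.json, relative to .devcontainer/).
docker build \
    -f .devcontainer/Dockerfile \
    --build-arg PYTHON_VERSION=3.12 \
    -t depsight-devcontainer:local \
    .
```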

Microsoft's DevContainer Base Images

Microsoft publishes purpose-built base images at mcr.microsoft.com/devcontainers for most common languages and stacks, such as Python, JavaScript, and Rust. Unlike regular container images, these DevContainer images are built for development:

Feature / Aspect | python:3.12 | mcr.microsoft.com/devcontainers/python:3.12
Default user | root | vscode with sudo access
Non-root workflow | Manual setup required | Ready out of the box
Preinstalled tools | Minimal | Extensive
Python tooling | pip only | pip, pipx, common dev tools
Shell | sh, bash | sh, bash, zsh
VS Code Server | Requires manually creating a non-root user with a USER directive | Works out of the box; vscode is the default user

CI/CD Integration

The same container image used for local development can be used directly in CI, eliminating environment drift: the problem where builds pass locally but fail in the pipeline due to a different OS, Python version, or missing system library.

GitHub Actions

GitHub Actions has "native" DevContainer support through the official devcontainers/ci action, maintained by the same project behind the DevContainer specification. It reads the project's devcontainer.json, builds the container, and runs commands inside it:

- name: Lint, test & build wheel
  uses: devcontainers/ci@v0.3
  with:
    configFile: .devcontainer/devcontainer.json
    runCmd: |
      set -e
      source .venv/bin/activate

      depsight --help                         # CLI health check
      ruff check src/ tests/                  # Linting
      mypy src/                               # Type checking
      python -m pytest tests/ -v --tb=short   # Tests
      uv build                                # Build wheel

GitLab CI

GitLab has no native DevContainer support. The @devcontainers/cli npm package can replicate the behaviour, but it requires Docker-in-Docker (DinD, the docker:dind service). That setup is brittle: runner privileges, TLS settings, and socket access all vary across GitLab installations and can produce hard-to-diagnose failures.

A more robust alternative is to split the work into two explicit pipeline stages. One job builds and pushes the DevContainer image, and a second job pulls and uses it directly:

stages:
  - build
  - test

build-devcontainer:
  stage: build
  image: docker:latest
  services:
    - docker:dind
  variables:
    DOCKER_HOST: tcp://docker:2375
    DOCKER_TLS_CERTDIR: ""
  script:
    - docker build -f .devcontainer/Dockerfile -t $CI_REGISTRY_IMAGE/devcontainer:$CI_COMMIT_SHORT_SHA .
    - docker push $CI_REGISTRY_IMAGE/devcontainer:$CI_COMMIT_SHORT_SHA

test:
  stage: test
  image: $CI_REGISTRY_IMAGE/devcontainer:$CI_COMMIT_SHORT_SHA
  script:
    - source .venv/bin/activate
    - depsight --help
    - ruff check src/ tests/
    - mypy src/
    - python -m pytest tests/ -v --tb=short
    - uv build