Copilot CLI: the isolation journey

Adham · 7 min read

When Copilot CLI came out, I liked the idea immediately. I spend a lot of time in the terminal, so an AI assistant that can help in both interactive and non-interactive shell sessions felt naturally interesting to me.

At the same time, I did not feel comfortable running that kind of agent directly on my daily machine without thinking about the boundaries first.

This is not really a Copilot-only issue. I would feel the same about any agent that can run commands, edit files, and install packages. Once I started using it more, I noticed something small but annoying. My local environment started to drift. New tools appeared. Temporary files stayed around. Some command history was useful, some was just noise. Nothing terrible happened, but it did not feel clean.

That pushed me to a simple question: if I like Copilot CLI, what is the right place to run it?

One small note before diving in: this post mainly targets Linux and macOS. If you are on Windows, WSL is a great way to run Copilot CLI in isolation. That said, nothing here strictly requires a Unix environment, so you can still adapt it for Windows if you choose to.

Why isolation matters

For me, isolation is not only about security headlines. It is also about keeping my machine predictable.

The more freedom you give an agent, especially in longer and less interactive sessions, the more you should care about the environment around it. If a toolchain changes, if a package gets installed globally, or if credentials are available in places I did not intend, the host machine becomes harder to trust.

I want a smaller blast radius. I want a workspace that I can start, use, and throw away if needed. That mindset made isolation feel less like a paranoid extra and more like basic hygiene.

The isolation options I looked at

Before I landed on dev containers, I looked at the main options available today.

  1. Built-in approvals and deny rules. Copilot CLI can limit tools and ask for confirmation. This helps, but for me it is a guardrail, not a hard boundary.
  2. A normal container. Run Copilot inside Docker or Podman, mount only the project, do the task, then remove the container. This is clean and fast.
  3. MicroVM sandboxes. Tools like Docker Sandboxes look promising because they move closer to VM-style isolation with a lighter workflow, but they still feel early for my everyday setup.
  4. Virtual machines. This is the strongest separation, but it is also heavier than what I want for normal daily work.
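As a concrete sketch of option 2, this is roughly what the plain-container flow looks like. Everything here is illustrative and my own (the helper name, the `node:22` image, the mount paths); the point is that only the project gets mounted, and `--rm` removes the container when the task is done.

```shell
# Option 2 sketch: run the agent in a throwaway container that can
# see only one project directory. Image and paths are illustrative.
run_in_throwaway() {
  local project="$1"
  local engine="${2:-docker}"    # docker or podman
  "$engine" run --rm -it \
    -v "$project:/workspace" \
    -w /workspace \
    -e GITHUB_TOKEN \
    node:22 \
    npx -y @github/copilot
}
```

With `--rm`, nothing survives the session, so the drift I complained about earlier stays inside the container instead of on the host.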

None of these options is wrong; it depends on what you care about most. For me, the answer was a middle ground: containers, but specifically dev containers. I wanted the isolation and disposability of containers, with a setup that is easier to repeat and easier to maintain.

Why dev containers won me over

Dev containers felt like the best balance.

First, I get isolation. Copilot runs in the container, not directly in my host shell.

Second, I get repeatability. The environment lives in the repository, so the same definition produces the same workspace every time, and anyone cloning the project gets it too.

Third, I get a setup that is easier to return to. If I come back to the same project later, I do not need to remember which tools I installed or what I changed on the host. The environment is described in the project, not floating around in my memory.

I was not trying to build the most secure system on earth. I was trying to build a setup I would actually keep using. Dev containers hit that sweet spot.

How I am doing it today

My current workflow is simple.

I keep a devcontainer configuration in the project. I use the devcontainer CLI to start the environment when needed. Then I run Copilot inside the running container.

On my machine, I prefer Podman, so I pass its binary path to the devcontainer CLI. To make this less repetitive, I wrote a small helper I call dev-copilot.

The script checks that devcontainer and podman are installed, validates the workspace path, confirms a devcontainer config exists, starts the container if needed, and then runs Copilot inside it.
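The preflight part of that helper is just a handful of checks before anything container-related runs. A minimal sketch (the function and variable names are my own, not the exact script, and I only check the two common config locations):

```shell
# Preflight checks from dev-copilot, sketched: fail early with a
# message if a required tool or the devcontainer config is missing.
require_tools() {
  local tool
  for tool in "$@"; do
    command -v "$tool" >/dev/null 2>&1 \
      || { echo "missing tool: $tool" >&2; return 1; }
  done
}

has_devcontainer_config() {
  local ws="$1"
  [[ -f "$ws/.devcontainer/devcontainer.json" || -f "$ws/.devcontainer.json" ]]
}
```

In the real script these run before `devcontainer up`, so failures happen before any container work starts.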

The core idea is small, and that is exactly why I like it:

# Look up an existing dev container for this workspace
container_id=$(get_container_id)

# Start the container only if it is not already running
if [[ -z "$container_id" ]]; then
        devcontainer up \
                --workspace-folder "$WORKSPACE_PATH" \
                --docker-path "$CONTAINER_BIN_PATH"
fi

# Run Copilot inside the running container
devcontainer exec \
        --workspace-folder "$WORKSPACE_PATH" \
        --docker-path "$CONTAINER_BIN_PATH" \
        copilot

My real script has more flags and small shortcuts, but the core idea stays the same. I want Copilot running in a container that has the right tools for the project and access only to the project files I care about.

When I want a more disposable workflow, I also have it stop the container when the session ends. That gives me a nice up, use, down flow, and I do not end up collecting random running containers in the background.
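That up, use, down flow can be sketched as a small wrapper: run the session, then always run the tear-down, whatever the session's exit code was. The names here are illustrative; in my script the stop command is a `podman stop` on the container id.

```shell
# Up/use/down sketch: the stop command always runs after the session,
# even when the session itself fails. Names are illustrative.
with_teardown() {
  local stop_cmd="$1"; shift
  local status=0
  "$@" || status=$?    # the "use" step, e.g. devcontainer exec ... copilot
  eval "$stop_cmd"     # the "down" step
  return "$status"
}
```

This way a crashed or interrupted session still leaves nothing running in the background.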

The devcontainer part

This is the part where it becomes more work than a normal devcontainer setup.

I usually start from one of the official base images in the devcontainers images repo, then build on top of that for the specific tools I need. In practice, I often start from something simple like the Ubuntu base image and then layer my project tooling on top.

I also pass the GitHub token into the container as an environment variable. Right now I handle that with the initializeCommand step, which prepares a local .env file, and with runArgs that load this file when the container starts.

And I can install the Copilot CLI itself in the container as part of the postCreateCommand step. That way, it is ready to use as soon as the container is up.

So the container definition is still fairly small, but it is no longer the bare minimum:

{
  "image": "mcr.microsoft.com/devcontainers/typescript-node",
  "remoteUser": "root",
  // GitHub Copilot CLI
  "initializeCommand": "echo GITHUB_TOKEN=$(gh auth token) > .devcontainer/.env",
  "runArgs": ["--env-file=${localWorkspaceFolder}/.devcontainer/.env"],
  "postCreateCommand": "npm install -g @github/copilot"
}

And in the Dockerfile, the starting point is usually just a base image like this:

FROM mcr.microsoft.com/devcontainers/base:ubuntu

From there, I can add project-specific tools or features as needed. That part is still evolving. Some projects need extra CLIs. Some need language runtimes. Some probably deserve a prebuilt image to reduce startup time.

What is still evolving

This setup is working for me, but I do not want to present it like a final answer. It definitely takes more setup than a normal devcontainer, and that extra setup is part of the tradeoff.

There are still open questions I am exploring:

  • whether I want a cleaner long-term way to pass credentials than the current .env handoff
  • persistent Copilot session configuration: I am experimenting with mounting my .copilot directory into the container, but I am not sure if that is the right approach yet
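For that second point, the devcontainer spec has a mounts property, so what I am experimenting with looks roughly like this fragment (the target path assumes the root remote user from the config earlier, and again, I am not sure this is the right approach):

```json
{
  "mounts": [
    "source=${localEnv:HOME}/.copilot,target=/root/.copilot,type=bind"
  ]
}
```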

So this post is not a grand conclusion. It is just a snapshot of where I am right now.

The real win

The biggest win for me is not the container itself. It is the habit that comes with it.

I am using Copilot CLI in a more intentional way. I can experiment, test ideas, and keep the mess away from my host machine. That makes me more comfortable using the tool, and it also makes my local setup easier to trust.

This will probably evolve again. Maybe I will end up using a stronger sandbox for some tasks. Maybe I will simplify parts of this flow later. But today, dev containers feel like the right balance between safety, repeatability, and convenience.

If you are curious about Copilot CLI but you are not fully comfortable giving it direct access to your daily environment, I think dev containers are a very good place to start.
