Managing .env files and keeping them in sync across environments is painful when done manually.
Doesn't it seem odd that Slack is still a common way for .env files to be synced between team members? Shouldn't we be concerned that syntax errors from .env file edits are so prevalent that dotenv linting tools are needed?
It's time to bring the DevOps principles of automation and ephemeral resources to managing environment variables and .env files. We've compiled our best tips and tricks to do just that, covering:
Although the examples are Doppler-centric, we want to provide fresh ideas for improving app config and secrets automation in your organization. We have lots to cover, so let's get started!
It's impossible to automate syncing environment variables across teams, hosting platforms, and development environments without a centralized source of truth at the organization level.
Modern platforms such as Heroku and Vercel provide built-in environment variable storage, but unless all of your applications are hosted on a single platform, they can only function as a source of truth for individual applications. And if you're using cloud-based virtual machines such as those from DigitalOcean or AWS EC2, you're on your own to figure out environment variable storage and access.
With the exception of Vercel's development-scoped environment variables, first-class local development support is missing from every modern platform and cloud provider, explaining why so many teams still rely on .env files, even if not used in production. We know how crucial it is for local environments to closely mirror production, but often we're willing to make tradeoffs when it comes to environment variables.
In the past, secrets managers such as HashiCorp Vault were seen as the only solution. But replacing the simplicity of environment variables with complex SDKs often resulted in siloed secrets and teams going rogue by managing environment variables their own way. Cloud secrets managers didn't improve the local development story either.
We need a new way of managing environment variables that reflects the needs of modern application development.
Beyond just secrets management, SecretOps is designed for multi-cloud deployments and combines the strengths of traditional solutions while addressing their weaknesses. As a starting point, a SecretOps platform must:
Using Doppler as an example, we're tackling these requirements by providing:
Doppler's operating model is that managing secrets should be centralized, but fetching and syncing secrets should be customized for every customer's needs. For example, many of our customers enjoy the Doppler dashboard's superior features and developer experience but sync secrets to Azure Key Vault so production secrets access remains as is.
Our vision for SecretOps is constantly evolving. Our goal is to share, inspire, and help move our industry forward with new ideas that take secrets automation to the next level.
Let's look at several methods for environment variable injection.
Having a platform or infrastructure tool to inject environment variables into your application is the best solution, as you can then stop using .env files altogether.
So while that removes the need for .env files in select platforms, additional tooling is needed for virtual machines, local development, and Kubernetes, just to name a few.
This method uses a CLI to run your application, injecting environment variables directly into the application process.
Here is how the Doppler CLI can be used to inject environment variables into a Node.js application:
doppler run -- npm start
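Conceptually, an application runner places variables directly into the child process's environment, never writing them to disk. A Doppler-free sketch of the same idea using plain POSIX shell (the `API_KEY` value is illustrative):

```shell
# Inject a variable into a child process only; the parent shell's
# environment is untouched and nothing is written to disk.
API_KEY=abc123 sh -c 'printf "%s\n" "$API_KEY"'
# → abc123
```

A runner like `doppler run` does the same thing, except the values are fetched from the centralized store at launch instead of being typed on the command line.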
You can also use a script:
doppler run -- ./launch-app.sh
Or create a long-running background process in a virtual machine:
nohup doppler run -- npm start >/dev/null 2>&1 &
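On virtual machines, a process manager is more robust than `nohup` since it restarts the app on failure and survives reboots. A sketch of a systemd unit that wraps the app with the Doppler CLI (the service name, paths, and start command are hypothetical, not from Doppler's docs):

```ini
# /etc/systemd/system/my-app.service (hypothetical unit)
[Unit]
Description=My app with Doppler-injected environment variables
After=network-online.target

[Service]
WorkingDirectory=/opt/my-app
ExecStart=/usr/bin/doppler run -- npm start
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Enable it with `systemctl enable --now my-app`; the environment variables are fetched fresh on every restart.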
Or use the Doppler CLI inside a Docker container:
# Install Doppler CLI
RUN curl -Ls --tlsv1.2 --proto "=https" --retry 3 https://cli.doppler.com/install.sh | sh

CMD ["doppler", "run", "--", "npm", "start"]
A CLI application runner with environment variable injection should have the following properties:
The doppler run command is just one way of accessing secrets, and you can find more examples in our CLI Secrets Access Guide.
This method injects environment variables into a Docker container at runtime, removing the temptation to embed an .env file in the Docker image (yes, it happens) and avoiding the need for the host to create and mount an .env file inside the container.
The Doppler CLI pipes secrets in .env file format to the Docker CLI where it reads the output as a file using process substitution:
docker run \
  --env-file <(doppler secrets download --no-file --format docker) \
  your-image-name
Here we get the benefits of .env file configuration but without an .env file ever touching the disk.
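If process substitution is new to you, the `<(command)` syntax (supported by bash and zsh) tells the shell to expose a command's output at a file descriptor path such as `/dev/fd/63`, so flags like `--env-file` can read it like a regular file even though nothing touches the disk. A Doppler-free sketch:

```shell
# The <(...) expression expands to a path like /dev/fd/63 whose
# contents are the inner command's stdout; no file is created on disk.
cat <(printf 'AUTH_TOKEN=abc123\n')
# → AUTH_TOKEN=abc123
```

Here `cat` stands in for `docker run --env-file`: both simply open the path the shell hands them.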
You can see other use cases in our docker-examples GitHub repository.
Docker Compose differs from Docker as it accesses environment variables from the host when docker compose up is run.
Docker Compose sensibly requires you to define which environment variables to pass through to each service as variables such as $PATH are host-specific:
services:
  app:
    # Host environment variables passed through to the container
    environment:
      - AUTH_TOKEN
      - DB_CONNECTION_URL
Because of how Docker Compose accesses environment variables, we can use the Doppler CLI as an application runner:
doppler run -- docker compose up
Other use cases for Docker Compose can be found in our docker-examples GitHub repository.
Kubernetes provides excellent support for injecting environment variables into containers using Key-Value pairs stored in a Kubernetes secret.
Doppler provides two options for syncing secrets to Kubernetes:
The first step is to create a generic Kubernetes secret (the first argument being the secret's name) where just like Docker, secrets in .env file format are piped to kubectl where it reads the output as a file:
kubectl create secret generic my-app-secret \
  --from-env-file <(doppler secrets download --no-file --format docker)
Our Kubernetes Operator is designed to scale and fully automate secrets syncing from Doppler to Kubernetes. Once installed and configured, it creates and updates Kubernetes secrets the moment they change in Doppler, with support for automatically reloading deployments when the secrets they consume have changed.
As it's a more advanced solution that requires Kubernetes cluster administration experience, we won't be covering it in this post, but you can learn more from the Kubernetes Operator documentation.
Injecting environment variables into a deployment from the Key-Value pairs in a Kubernetes secret is done using the envFrom property of a container spec:
containers:
  - name: awesome-app
    envFrom:
      - secretRef:
          name: my-app-secret # Kubernetes secret name
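For context, here is how that `envFrom` snippet sits inside a minimal Deployment manifest (the names and image are illustrative, not from Doppler's docs):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: awesome-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: awesome-app
  template:
    metadata:
      labels:
        app: awesome-app
    spec:
      containers:
        - name: awesome-app
          image: awesome-app:latest # illustrative image name
          envFrom:
            - secretRef:
                name: my-app-secret # Kubernetes secret name
```

Every key-value pair in `my-app-secret` becomes an environment variable in the container, with no per-variable wiring required.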
Environment variable injection is always preferred, but sometimes an .env file is the only workable solution.
Protective measures such as locking down file ownership and permissions and heavily restricting shell access should be implemented from the start. But the risk of .env files lingering on the file system indefinitely has always been a concern, which is why we've been hesitant to recommend .env file usage in the past.
Thanks to the Doppler CLI, we can now mount ephemeral .env files that are automatically cleaned up when the application exits. Imagine not having to worry about anyone in your company accidentally committing an .env file again!
One of the most popular use cases is for PHP developers building Laravel applications:
doppler run --mount .env -- php artisan serve
The file extension is used to automatically set the format (JSON format is also supported):
doppler run --mount secrets.json -- npm start
You can set the format if the file extension doesn't map to a known type:
doppler run \
  --mount app.config \
  --mount-format json \
  -- npm start
To increase security, you can also restrict the number of reads:
doppler run \
  --mount .env \
  --mount-max-reads 1 \
  --command="php artisan config:cache && php artisan serve"
If you're wondering what happens to the mounted file if the Doppler process is force killed: its file contents will appear to vanish instantly. This is because the mounted file isn't a regular file at all, but a Unix named pipe. If you've heard the phrase "everything is a file in Unix", you now have a better understanding of what that means.
Named pipes are designed for inter-process communication while still using the file system as the access point. In this case, it's a client-server model, where your application is effectively sending a read request to the Doppler CLI. In the event the Doppler CLI is force killed, the .env file (named pipe) will still exist, but because no process is attached to it, requests to read the file will simply hang.
Named pipes are also what make the --mount-max-reads option possible: once the read limit is exceeded, the CLI simply removes the named pipe from the file system.
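You can observe this client-server behavior with the standard `mkfifo` utility: a writer and a reader rendezvous through the pipe, and the "file" never holds data at rest. A plain-shell sketch, no Doppler involved (the path and contents are illustrative):

```shell
# Create a named pipe that looks like an .env file on disk
mkfifo /tmp/demo.env

# A background "server" stands by to answer one read request,
# playing the role of the Doppler CLI
printf 'API_KEY=abc123\n' > /tmp/demo.env &

# The "client" (your app) reads it as if it were a regular file
cat /tmp/demo.env
# → API_KEY=abc123

# Only the empty pipe remains on disk; clean it up
rm /tmp/demo.env
```

If you kill the background writer before reading, `cat` hangs exactly as described above: the pipe still exists, but no process is attached to serve the read.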
I hope you take away some new automation ideas to bring back to your team so you can spend less time updating .env files and more time shipping software.
Stay up to date with new platform releases and get to know the team of experts behind them.