Nov 30, 2025
9 min read

Zero Trust is shifting left: How developers can act now

It isn’t business as usual anymore. Systems can’t blindly grant access to any identity just because it shares a network or address space. The old perimeter model has given way to a “never trust, always verify” approach, which forms the foundation of Zero Trust security.

This shift extends beyond compliance or leadership, placing greater responsibility on you as a developer or DevOps engineer. The principles of the Zero Trust framework must be applied from the point where the code is written. You are responsible for building this trustless model into your systems from the ground up, applying the principle of least privilege, utilizing dynamic credentials, and enforcing identity-based access directly within pipelines and configurations.

In this model, every user and device, whether human or machine, must be authenticated and validated to prevent unauthorized access and breaches.
Is this a heavy responsibility? Yes. But it’s also a chance to build security directly into your systems. This article will show you how to take charge and apply this trust model in practical, developer-focused ways within the workflows and systems your team already uses.

TL;DR

  • Zero Trust security now defines how pipelines and machine identities operate.
  • Treat every connection as untrusted until verified.
  • Replace static keys with short-lived identity tokens (OIDC, workload identity).
  • Build least-privilege environments and audit secret lifecycles.
  • Move from static configs to automated, identity-driven controls.

How Zero Trust security is moving into development and DevOps

Modern workflows make it unsafe to decide access based on an “inside is trusted” assumption. Systems run across multiple clouds, microservices talk across clusters, and workloads appear and disappear dynamically. Jobs or agents can run within your workflow but outside your core network, perform a single task, and then shut down. In such a setup, you can see why the shift from traditional network boundaries is needed. Those imaginary network perimeters no longer offer meaningful protection.

The answer is to embed Zero Trust architecture into your code and DevOps workflows. Enforce multi-factor authentication and SSO for users, use workload identity for services, sign build artifacts, and scope access to the specific use case. This aligns with Microsoft’s guidance to verify explicitly, assume breach, and enforce least privilege, and with AWS’s view of an identity-centric security perimeter across the DevOps pipeline.

Take a CI job, for example. Unlike traditional security models that rely on long-lived API keys for registry access, the job can use OpenID Connect (OIDC) to request a short-lived identity token at runtime. The registry receives fresh credentials each time the pipeline runs.

Here is what an example workflow would look like:
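The sketch below assumes GitHub Actions authenticating to Amazon ECR via OIDC; the IAM role ARN, image name, and region are hypothetical placeholders:

```yaml
name: build-and-push

on: [push]

permissions:
  id-token: write   # allow the job to request an OIDC token
  contents: read

jobs:
  push-image:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      # Exchange the job's OIDC token for short-lived AWS credentials.
      # No long-lived access keys are stored in the repository.
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::123456789012:role/ci-ecr-push  # hypothetical role
          aws-region: us-east-1

      # Registry login uses the temporary credentials minted above.
      - id: ecr
        uses: aws-actions/amazon-ecr-login@v2

      - run: |
          docker build -t ${{ steps.ecr.outputs.registry }}/my-app:${{ github.sha }} .
          docker push ${{ steps.ecr.outputs.registry }}/my-app:${{ github.sha }}
```

Because the role trust policy can be scoped to a specific repository and branch, a stolen workflow file alone is not enough to impersonate the pipeline.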

This pattern captures what Zero Trust looks like in action, preventing dynamic identity and secret exchanges from becoming weak spots.

Why machine identities and secrets are becoming the new weak spot

Machine identities themselves are not the problem; in fact, accounting for them is a key concept of Zero Trust. The real issue is that the security practices around them have not evolved at the same pace, even though the model mandates strict identity verification for every workload.

In the CI example above, dynamic and temporary credentials were used instead of static keys. Yet many teams still create their own weak spots by relying on long-lived credentials, reusing environment variables across environments, or exposing secrets in build logs. These oversights violate the core Zero Trust principle of “never trust by default.”

Consider a scenario where your Kubernetes workloads use a single, long-lived access key to connect to a database. Over time, dozens of services may share that same key, including some you may not even be aware of. If one of those services is compromised, the attacker inherits that shared credential and gains access to the database as a trusted identity.

A more effective approach is to combine workload identity with a centralized secrets orchestrator, such as Doppler, HashiCorp Vault, or AWS Secrets Manager. The orchestrator acts as the control plane for configuration hygiene, environment scoping, and audit trails, validating each user and device attempting to connect, while the database credential itself is minted just in time by the cloud provider, used briefly by the workload, and never persisted.

How would this look in practice, you ask?

The following example uses Doppler to demonstrate the workflow, but the same approach applies to any secrets orchestrator.

1) Configure IAM authentication in your database (RDS PostgreSQL example)
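A minimal sketch, assuming IAM database authentication is enabled on the RDS instance; the user, database, and schema names are placeholders:

```sql
-- Create a database user that authenticates with temporary IAM tokens
-- instead of a password.
CREATE USER app_user WITH LOGIN;
GRANT rds_iam TO app_user;

-- Scope the user to only what the workload needs.
GRANT CONNECT ON DATABASE appdb TO app_user;
GRANT SELECT, INSERT ON ALL TABLES IN SCHEMA public TO app_user;
```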

This approach replaces static database passwords with temporary IAM-based tokens. Each token is scoped to a specific database user.

2) Store the database metadata in Doppler
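With the Doppler CLI, the connection metadata can be stored per project and environment; the project and config names here are hypothetical:

```sh
# Store connection metadata in the Doppler config for this environment.
doppler secrets set \
  --project payments-api --config prd \
  DB_HOST=mydb.abc123.us-east-1.rds.amazonaws.com \
  DB_PORT=5432 \
  DB_USER=app_user \
  DB_NAME=appdb
```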

The secrets are referenced securely and never stored in plaintext or reused across environments.

3) Create a least-privilege IAM policy for cluster IRSA (EKS in this case)
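A sketch of the policy attached to the IRSA role, allowing only `rds-db:connect` as one specific database user; the account ID and DB resource ID are placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowDbConnectAsAppUser",
      "Effect": "Allow",
      "Action": "rds-db:connect",
      "Resource": "arn:aws:rds-db:us-east-1:123456789012:dbuser:db-ABCDEFGHIJKL/app_user"
    }
  ]
}
```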

The role’s permissions are tightly scoped to enforce least privilege access.

4) Bind the IAM role to a Kubernetes ServiceAccount (IRSA)
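The binding is a standard IRSA annotation on the ServiceAccount; names and the role ARN are hypothetical:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: payments-api
  namespace: payments
  annotations:
    # Pods running under this ServiceAccount can assume the annotated IAM role.
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/payments-db-access
```

Pods then reference it via `serviceAccountName: payments-api` in their spec.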

This maps Kubernetes workloads directly to AWS IAM identities for fine-grained control.

5) Use the Doppler Operator to orchestrate config scoping
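One possible shape for this, assuming the Doppler Kubernetes Operator and its `DopplerSecret` custom resource (names are placeholders; consult the operator docs for the exact fields in your version):

```yaml
apiVersion: secrets.doppler.com/v1alpha1
kind: DopplerSecret
metadata:
  name: payments-api-prd
  namespace: doppler-operator-system
spec:
  tokenSecret:
    # Kubernetes Secret holding the environment-scoped Doppler service token.
    name: doppler-token-payments-prd
  managedSecret:
    # Secret the operator creates and keeps in sync for the workload.
    name: payments-api-secrets
    namespace: payments
```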

The configuration remains dynamic, enabling automatic updates and revocation via Doppler’s control plane.

6) Mint a short-lived DB token with Workload Identity

The setup above would allow your workload to mint a short-lived database token using its pod identity. Here is a Node.js example with AWS SDK v3 and pg:
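A minimal sketch, assuming the pod has the IRSA role from step 4 and the Doppler-managed environment variables from step 5 (this requires the `@aws-sdk/rds-signer` and `pg` packages and live AWS credentials to run):

```javascript
// Mint a short-lived RDS auth token using the pod's IAM identity (via IRSA),
// then connect with pg. No static database password exists anywhere.
const { Signer } = require("@aws-sdk/rds-signer");
const { Client } = require("pg");

async function connect() {
  const signer = new Signer({
    region: process.env.AWS_REGION,
    hostname: process.env.DB_HOST, // injected from the secrets orchestrator
    port: Number(process.env.DB_PORT),
    username: process.env.DB_USER,
  });

  // The token is short-lived and never persisted to disk.
  const token = await signer.getAuthToken();

  const client = new Client({
    host: process.env.DB_HOST,
    port: Number(process.env.DB_PORT),
    user: process.env.DB_USER,
    database: process.env.DB_NAME,
    password: token,
    ssl: { rejectUnauthorized: true }, // RDS IAM auth requires TLS
  });

  await client.connect();
  return client;
}
```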

This approach implements Zero Trust by limiting exposure, preventing reuse, and keeping machine identities accountable. However, a poorly configured CI/CD pipeline can reintroduce static secrets, inherited permissions, or implicit trust between jobs, undoing the benefits of the model.

How traditional pipelines break the Zero Trust model

In the example above, we took proper steps to follow Zero Trust at runtime. Yet the model can still break when the base Kubernetes infrastructure is created through a traditional CI/CD pipeline. A traditional pipeline often uses long-lived access keys to authenticate to the cloud and deploy resources. If an attacker gains access to a runner or to the pipeline config, that persistent key can be stolen and reused.

Even if you switch to a service account, problems could still arise if the same account is used to provision object storage, create virtual machines, and manage database access. This setup gives the pipeline too much privilege.

If least privilege is not defined in your infrastructure as code (IaC), permissions can become overly broad on individual resources. The fix is to split and scope permissions for each service attempting to access resources.

For example, limit S3 to artifact operations only:
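One way to express this, as an IAM policy scoped to a hypothetical artifacts bucket:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ArtifactObjectsOnly",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::ci-artifacts-bucket/*"
    },
    {
      "Sid": "ListArtifactBucket",
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::ci-artifacts-bucket"
    }
  ]
}
```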

And scope virtual machine changes to a single Auto Scaling Group:
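A sketch of a policy pinned to one named Auto Scaling Group (account ID and group name are placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "SingleAsgOnly",
      "Effect": "Allow",
      "Action": [
        "autoscaling:UpdateAutoScalingGroup",
        "autoscaling:SetDesiredCapacity"
      ],
      "Resource": "arn:aws:autoscaling:us-east-1:123456789012:autoScalingGroup:*:autoScalingGroupName/web-tier-asg"
    }
  ]
}
```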

Then have the CI job assume only what it needs:
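A hedged sketch, again assuming GitHub Actions with OIDC; each job assumes only the narrowly scoped role it needs (role ARN and group name are hypothetical):

```yaml
deploy:
  runs-on: ubuntu-latest
  permissions:
    id-token: write
    contents: read
  steps:
    # Assume only the deploy role scoped to the single ASG above.
    - uses: aws-actions/configure-aws-credentials@v4
      with:
        role-to-assume: arn:aws:iam::123456789012:role/ci-deploy-web-tier
        aws-region: us-east-1
    - run: |
        aws autoscaling set-desired-capacity \
          --auto-scaling-group-name web-tier-asg \
          --desired-capacity 3
```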

There are more pitfalls. CI/CD platforms can leak details in logs, including resource identifiers and occasional secret values. The same runner may execute both low-risk builds and critical deployments, so scoping is vital to help limit the blast radius of any compromise.

The takeaway is simple: apply Zero Trust everywhere. Employ identity-based segmentation in code, in the IaC that creates your infrastructure, in both CI and CD stages, and across every secret and configuration those systems use.

What the concept of Zero Trust looks like in secrets and configs

Just as Zero Trust secures your systems, the keys and configurations that control access must follow the same principles. Start by separating the configuration for each environment, and name the values accordingly. For example, use ENVIRONMENT=staging for staging environments, and LOG_LEVEL=DEBUG or CACHE_TIMEOUT=60s for behavior and debug settings.

Your secrets should follow a lifecycle guided by Zero Trust security principles; they should authenticate the requesting entity, verify its identity (for example, through OIDC), and then inject only the minimum required data for that workload. Each secret must remain scoped to a single service and environment, grant only necessary actions, be logged for visibility, and expire automatically after use. These are the crucial practical foundations for applying Zero Trust to secret management.
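As a quick sketch of runtime injection under that lifecycle, the Doppler CLI can inject secrets into a single process for a single service and environment, so nothing is baked into the image or written to disk (project and config names are hypothetical):

```sh
# Authenticate, fetch only this environment's secrets, and inject them
# into the process environment at startup.
doppler run --project payments-api --config stg -- node server.js
```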

Here’s how that lifecycle appears in Doppler’s environment view, where each configuration and token is scoped, monitored, and rotated as part of a continuous access control cycle:

Doppler dashboard showing environment-scoped configs and tokens.

These lifecycle steps act as foundational security measures and should be implemented and enforced as a development standard.

Implementation path: Building an identity protection strategy for developers

Zero Trust is no longer just a theory or a compliance framework. The concept has evolved into a workload security model that you, as a developer, must apply to match how modern systems actually run. To summarize how you can act on Zero Trust:

  • Stop baking secrets into builds: Fetch secrets only at runtime.
  • Use short-lived tokens: Use OIDC in CI, dynamic secrets, or IRSA at runtime.
  • Enforce least privilege per environment: Separate roles and scopes to match your security requirements across dev, staging, and production.
  • Harden config, not just secrets: Keep config per environment, and validate it via policy (e.g., Open Policy Agent/Conftest) before deploy.
  • Sign what you ship: Enable artifact signing (Sigstore/cosign) and require verified provenance (SLSA-level checks).
  • Lock down your pipelines: Use minimal job permissions, repo/branch-scoped role assumptions, and no shared workspaces for secrets.
  • Track and alert on secret usage: Treat access logs as an audit signal, and use these analytics for enhanced security to detect anomalies.
  • Automate revocation: Make “kill switches” routine and test them. For example, revoke a service token and restart workloads.
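As one sketch of that last point, a revocation drill can be a two-line runbook; this assumes the Doppler CLI's token-revocation command and a Kubernetes deployment (names are hypothetical, and the exact CLI syntax may differ by version):

```sh
# Revoke a compromised or rotated service token, then restart workloads
# so they re-authenticate and fetch fresh secrets.
doppler configs tokens revoke <token-slug> --project payments-api --config prd
kubectl rollout restart deployment/payments-api -n payments
```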

Anchor this workflow in a central secrets manager like Doppler to prevent sprawl, standardize environment scoping, and keep an auditable trail. Start with one pipeline, one environment, or one critical secret lifecycle, and expand from there.
