Apr 29, 2026
26 min read

Secrets at the edge: Secure patterns for Cloudflare Workers


Imagine you have built a high-performance racing car that can go from 0 to 60 in 0.01 seconds, but before it can move, it has to call a central office in London to request the key. That is effectively what happens when teams build on Cloudflare Workers but fail to think carefully about edge secrets management.

Cloudflare Workers are designed to run your code across more than 300 global data centers, allowing users in distant locations to load pages and applications in under 10 milliseconds. If your system is architected to make these Workers reach across a network to retrieve secrets from a central location, it introduces a fundamental mismatch. A single secrets lookup that takes 100 to 150 milliseconds can erase the performance benefits the edge is meant to deliver.

Today, more than three million Worker scripts rely on secrets, and Gartner predicts that 75 percent of enterprise-generated data will be created and processed at the edge, outside traditional data centers. More than ever, teams need clear guidance on how to secure secrets without sacrificing performance or reliability.

TL;DR

This article takes a deep dive into edge-specific secrets patterns. By the end, you will understand the trade-offs between different secrets storage mechanisms at the edge and see practical code examples that reflect production-grade implementations.

For hands-on implementation with Doppler, see: Cloudflare Workers + Doppler: A secure workflow.

Why edge secrets management is different from cloud

It would be a mistake to assume that cloud secrets management can be applied directly at the edge; the architectures differ. While cloud environments usually follow a hub-and-spoke model, at the edge there is no hub. A Worker’s entire job might be to validate a JWT and then issue a redirect, which takes 6ms or less. If a Worker has to “call home” and wait 150ms for a 6ms job, the benefits of edge execution quickly disappear.

Cloud vs edge secrets architecture, highlighting the security, performance, and distribution constraints unique to edge execution.

To avoid delays, secrets have to be readily available at execution time. Below are some of the key nuances that set edge secrets apart from the cloud.

  1. Distribution vs. centralization: Cloud systems rely on centralized vaults. This works because applications and vaults usually live in the same region, so a secrets lookup is typically a short hop inside a data center. At the edge, the model flips. Your application runs closer to users, not your infrastructure, and that means secrets have to be distributed as well. To avoid delays, secrets must already exist in every data center or edge location where a Worker’s code can execute.
  2. Network partition resilience: If a fiber-optic cable is cut or a router fails between an application and a central vault, the two will no longer be able to communicate. This often results in errors such as a 500 Internal Server Error. At the edge, there is no single hub. By the time a user in Paris makes a request, a copy of the required secrets has already been distributed and is sitting in memory or a nearby cache. During partial outages, Workers can continue to execute and decrypt secrets locally.
  3. Execution model: In the cloud, when an application starts, it makes an API call to the vault, retrieves secrets, and keeps them in memory for hours or days. This is because the server stays on permanently except during downtimes or necessary maintenance. At the edge, most Workers are ephemeral; they get spun up, complete a task, and are terminated. If your app has to make an API call to your vault for every Worker that gets spun up, it would essentially be carrying out a DDoS attack on your vault. There is no persistent state between Workers to hold secrets and hand them off to another.
  4. Cold start constraints: Cloud containers and virtual machines have cold start times of around 100-500ms, long enough to pull secrets from a central vault and mask the delay inside the cold start. Cloudflare Workers use V8 isolates that spin up in milliseconds, so there is no cold-start window to hide a slow secrets fetch behind; a 100ms vault call would dominate the Worker's total runtime.

Here is an example of what it looks like to call an external vault on every request:
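
The sketch below illustrates that anti-pattern; the vault endpoint and binding names are hypothetical, but the shape is typical of per-request vault lookups.

```javascript
// Anti-pattern sketch: a hypothetical central vault is queried on every request.
const worker = {
  async fetch(request, env) {
    // Round trip to a central vault (~100-150ms) on every single request
    const vaultRes = await fetch("https://vault.example.com/v1/secret/api-key", {
      headers: { Authorization: `Bearer ${env.VAULT_TOKEN}` },
    });
    const { value: apiKey } = await vaultRes.json();

    // Only now can the actual work begin
    return fetch("https://api.example.com/data", {
      headers: { Authorization: `Bearer ${apiKey}` },
    });
  },
};

export default worker;
```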

Every request waits on a network call, compounding latency as traffic increases. It would be better if the secret were preloaded through an environment variable at the edge location.
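
The same call, sketched with the secret preloaded into the isolate as a binding (the `API_KEY` name is an assumption for this example):

```javascript
// env.API_KEY is bound at deploy time and read from memory -- no vault hop.
const worker = {
  async fetch(request, env) {
    return fetch("https://api.example.com/data", {
      headers: { Authorization: `Bearer ${env.API_KEY}` },
    });
  },
};

export default worker;
```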

Here, there is no external call; the secret is injected before execution.

While using environment variables is often a good choice, there are situations where another edge storage option is a better fit.

Environment variables vs. KV vs. Secrets Store: When to use each

Choosing a secret storage option is about which one fits your team's latency, security, and lifecycle requirements, not which is best. Let’s look at the available options.

Environment variables

In Cloudflare, environment variables can be used to input configuration values and secrets such as API keys, database passwords, and host names directly into a Worker’s memory at boot. They are technically bindings because they connect a variable name in your code to a value managed by Cloudflare, and are set either using wrangler secret put or as variables in wrangler.toml.

Since environment variables are preloaded into the V8 isolate, they are instantly accessible when the code executes. Cloudflare also encrypts these secrets at rest and treats them as write-only, meaning their values cannot be retrieved after creation.

The main downside of environment variables is that they are scoped per Worker. If multiple Workers need the same secret, it must be configured separately for each one. Because these bindings are static, updating a secret requires redeploying the Worker. Environment variables also do not support versioning or rollback.

Environment variables work best for:

  • Tasks that must execute in under 10ms, such as bot detection or custom routing
  • Secrets that do not change frequently
  • Standalone Workers that do not share state with other parts of the infrastructure

Below is a minimal Cloudflare Worker that reads a secret from an environment variable and uses it in a request without any runtime lookup.
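
A minimal sketch along those lines; the `API_KEY` binding and header name are assumptions for illustration:

```javascript
// The secret is preloaded into the V8 isolate -- reading it costs nothing.
const worker = {
  async fetch(request, env) {
    // Compare the caller's key against the bound secret, no runtime lookup
    const presented = request.headers.get("X-Api-Key");
    if (presented !== env.API_KEY) {
      return new Response("Unauthorized", { status: 401 });
    }
    return new Response("Hello from the edge", { status: 200 });
  },
};

export default worker;
```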

To use the script above, you have to configure the file named wrangler.toml:
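
A minimal configuration, with placeholder names, might look like this:

```toml
name = "my-worker"
main = "src/index.js"
compatibility_date = "2024-01-01"
```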

Then set the secret:
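
Wrangler prompts for the value interactively, so it never lands in shell history:

```
npx wrangler secret put API_KEY
```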

Workers KV

Workers KV is a storage service used to distribute data globally across Cloudflare’s network. It allows you to store secrets and configuration values and access them from any edge location where your Workers run.


Unlike environment variables, which are scoped to a single Worker, a KV namespace can be bound to multiple Workers. This makes it useful when several Workers need access to the same secret or configuration. KV is also suitable for storing larger values, including JSON configurations, public key infrastructure (PKI) data, and allow-lists, with a maximum value size of up to 25 MB. Because KV is a data store, secrets can be updated or rotated without redeploying the Worker script.

However, since reads from KV require asynchronous I/O against the nearest data center, they incur 10–50ms of latency. Credential updates can take up to 60 seconds to persist across all edge locations, creating a race condition where some users might hit data centers still using old keys.

Any user with read access to the namespace can view stored KV values in plain text. It also introduces usage-based cost, currently around $0.50 per GB of storage and $0.50 per million reads, unlike environment variables, which do not incur usage-based charges.

Workers KV works best for:

  • Shared configuration or secrets used across multiple Workers or services
  • Secrets that need to be updated or rotated without redeploying Workers
  • Non-critical parts of a workload where small amounts of added latency are acceptable

To use Workers KV, you first create a namespace:
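
For example (older Wrangler releases use the `kv:namespace create` form instead):

```
npx wrangler kv namespace create SECRETS
```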

Wrangler prints a namespace id. Add it to wrangler.toml:
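
The binding entry might look like this, with the id Wrangler printed substituted in:

```toml
kv_namespaces = [
  { binding = "SECRETS", id = "<namespace-id-from-wrangler>" }
]
```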

Store the secret value:
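
For example (note that passing the value on the command line does expose it to shell history; piping from a file or secrets manager is safer):

```
npx wrangler kv key put API_KEY "example-value" --binding=SECRETS
```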

Then read it in your Worker using an async KV lookup:
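
A sketch of that read path, assuming the `SECRETS` binding from the configuration step:

```javascript
const worker = {
  async fetch(request, env) {
    // Asynchronous read against the nearest KV replica (10-50ms typical)
    const apiKey = await env.SECRETS.get("API_KEY");
    if (!apiKey) {
      return new Response("Secret not found", { status: 500 });
    }
    return fetch("https://api.example.com/data", {
      headers: { Authorization: `Bearer ${apiKey}` },
    });
  },
};

export default worker;
```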

Secrets Store

Cloudflare Secrets Store marks a notable shift in credential security, moving access from per-Worker configuration to an account-level model. It treats secret storage as a dedicated infrastructure layer rather than an add-on to each Worker's configuration.

Secrets are defined once at the account level, for example, PROD_POSTGRES_PASSWORD, and can then be bound to as many Workers as needed. Secrets stored this way are encrypted at rest and are write-only. Their values cannot be viewed after creation and can only be accessed and decrypted by the Worker runtime at execution.

Secrets Store also supports role-based access control (RBAC) and a unified audit log that records secret creation, binding, rotation, and deletion, along with the actions performed by different team roles. Secrets can be rotated without redeploying Worker scripts or triggering CI/CD pipelines.

That said, Secrets Store is still relatively new. Its beta version was released in April 2025 and currently has some limitations. For example, each account is limited to 100 secrets, with a maximum size of 1 KB per secret. More setup is needed here than for environment variables or KV, as it requires configuring the store and explicitly referencing both a store_id and a secret_name in your Worker configuration.

Secrets Store works best for:

  • Environments with strong compliance or audit requirements
  • Micro-worker architectures where multiple Workers need access to the same secrets
  • Larger teams that need a clear separation between secret administrators and application developers

To get started, create a Secret Store for a credential:
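
As of the beta CLI, the command looks roughly like this (the store name is a placeholder):

```
npx wrangler secrets-store store create my-secrets --remote
```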

Wrangler will return a store_id. Save it.

Add a secret to the store:
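
Using the store_id from the previous step:

```
npx wrangler secrets-store secret create <store-id> --name PROD_POSTGRES_PASSWORD --scopes workers --remote
```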

You’ll be prompted to enter the value securely.

Bind the secret to your Worker in the wrangler.toml file:
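
The binding block references both the store and the secret by name; the store_id is the one Wrangler returned earlier:

```toml
[[secrets_store_secrets]]
binding = "POSTGRES_PASSWORD"
store_id = "<store-id-from-wrangler>"
secret_name = "PROD_POSTGRES_PASSWORD"
```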

Use the secret in your Worker:
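
A sketch of reading the binding; the Secrets Store binding exposes an async `get()` that decrypts the value only at execution time:

```javascript
const worker = {
  async fetch(request, env) {
    // The runtime decrypts the account-level secret at execution time
    const password = await env.POSTGRES_PASSWORD.get();
    // ...open a database connection with it here...
    return new Response(password ? "db credential loaded" : "missing secret", {
      status: password ? 200 : 500,
    });
  },
};

export default worker;
```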

Then deploy:
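
```
npx wrangler deploy
```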

As you can see, the differences in use cases between these storage options become clearer once their features are compared by strength and weakness. The table below summarizes these features.

| Capability | Environment variables | Workers KV | Secrets Store |
| --- | --- | --- | --- |
| Cold start impact | None (0ms) | 10–50ms per read | None (0ms) |
| Shared across Workers | No (per Worker) | Yes (namespace-based) | Yes (account-level) |
| Update without redeploy | No | Yes | Yes |
| Encryption at rest | Yes | No (manual) | Yes |
| Audit logging | No | No | Yes |
| RBAC support | No | Limited (namespace access) | Yes |
| Consistency model | Immediate | Eventually consistent (up to ~60s) | Immediate |
| Max secret size | Small values | Up to 25 MB | 1 KB |
| Cost | Included | $0.50/GB + reads | Included (beta) |
| Best suited for | Hot paths, low churn | Shared configs, rotation | Compliance, governance |

While Cloudflare Workers benefit from these storage options, distributing secrets at the edge still presents a few operational challenges.

Operational challenges of secret distribution at the edge

Here are some challenges that can occur during secret distribution and how to handle them.

Challenge 1: Secret consistency across 300+ locations

When secrets are uploaded or rotated, they must propagate to more than 300 locations. The challenge is that some locations receive the updated credentials a few seconds after others, causing 401 Unauthorized errors in the lagging regions.

Solution: Set up a dual-credential phase in which the old credentials remain active for a few minutes while the new ones propagate globally. This prevents downtime during rotation.
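
A minimal sketch of the overlap window; the binding names are illustrative, and both values stay accepted until propagation completes:

```javascript
// Accept the new key immediately, and the old key until it is retired.
function isAuthorized(presentedKey, env) {
  return presentedKey === env.API_KEY_CURRENT ||
         presentedKey === env.API_KEY_PREVIOUS;
}
```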

Challenge 2: Network partition resilience

If your system depends on a centralized vault, an edge location can lose connectivity and break access to secrets. Your Worker must keep functioning to avoid request failures.

Solution: Create a fallback pattern that caches secrets at the edge so Workers can continue functioning during temporary connectivity outages.
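
A sketch of that fallback, assuming a hypothetical central endpoint and a bootstrap token binding; the cached copy lives in isolate memory:

```javascript
let cachedSecret = null; // survives across requests within the same isolate

async function getSecretWithFallback(env) {
  try {
    const res = await fetch("https://secrets.example.com/api-key", {
      headers: { Authorization: `Bearer ${env.BOOTSTRAP_TOKEN}` },
      signal: AbortSignal.timeout(500), // fail fast during partitions
    });
    if (res.ok) {
      cachedSecret = await res.text(); // refresh the local copy
      return cachedSecret;
    }
  } catch (_) {
    // Connectivity lost: fall through to the last known value
  }
  return cachedSecret;
}
```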

In this pattern, the Worker prefers a fresh secret but falls back to a locally cached value if connectivity to the source is temporarily unavailable. This approach may briefly allow the use of a rotated secret, so it should be used where a short-lived inconsistency is tolerable.

These challenges show that alternatives to edge storage options are sometimes necessary, and there are cases where it makes more sense to avoid stored secrets altogether and use short-lived tokens instead.

Ephemeral tokens and credential-free edge patterns

While storing secrets at the edge is often unavoidable, it is not always the most secure option. Imagine over three million Workers having access to compromised static API keys that have no automatic expiration. That's unlimited access to a company's sensitive systems. Even if the secret is eventually rotated, the window of exposure can still lead to costly damages. Let's look at some ephemeral credential patterns you could adopt instead.

Workload identity (OIDC)

Instead of giving a Worker a long-lived static credential to access downstream workloads such as databases or APIs, it can request an OIDC token to prove its identity during authentication.

The Worker can use the built-in fetch API to interact with the token endpoint of external identity providers such as AWS, GCP, or Okta. The provider then verifies the Worker’s credentials, which can be a client ID and secret, a certificate, or mutual TLS, and then issues a signed OIDC access token.

In this setup, the Worker stores only a bootstrap credential used to obtain short-lived tokens, while access to downstream services remains scoped and time-bound.
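
A sketch of that token exchange using the standard OAuth2/OIDC client_credentials grant; the identity provider endpoint and scope are hypothetical:

```javascript
async function getAccessToken(env) {
  const res = await fetch("https://idp.example.com/oauth2/token", {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      grant_type: "client_credentials",
      client_id: env.CLIENT_ID,        // bootstrap credential
      client_secret: env.CLIENT_SECRET,
      scope: "read:orders",
    }),
  });
  const { access_token } = await res.json();
  return access_token; // short-lived; cache until shortly before expiry
}
```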

The benefits

  • No downstream workload credentials are stored in the Worker.
  • Tokens are short-lived and expire quickly.
  • No manual rotation is required.

The limitations

  • OIDC only works with services that support it.
  • Cloudflare Workers identity support is still evolving and lacks native, automatic OIDC token issuance/injection.

Time-limited signed URLs

Time-limited signed URLs are most commonly used for large assets, such as images, logs, and backups. Using securely stored signing credentials, a Worker can generate a time-sensitive, cryptographically signed link that allows users to perform operations like GET and PUT. This removes the need to expose API credentials for short-term tasks such as streaming a video or uploading a document.

Here is an example of how signed URLs might be implemented in Workers:

This Worker generates short-lived, operation-scoped URLs that allow clients to upload or download specific objects directly from storage without exposing long-lived credentials.

The benefits

  • The signed URL can be tightly scoped to specific operations and resources.
  • Expiration is timestamp-based and automatic.
  • Credentials are never exposed to clients.

The limitations

  • If a signed URL is shared, anyone with the link can access the resource until it expires.
  • Rotating the signing credentials invalidates previously generated signed URLs.

Proxy pattern

While OIDC and signed URLs are great options for removing the need for stored credentials, some services, such as Stripe and OpenAI, still rely on bearer tokens. In those cases, a Worker can act as a proxy so credentials never reach the client side. When the client sends a request to the Worker, the Worker forwards it to the third-party API while injecting the server-side API key.
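
A sketch of such a proxy; the upstream host is OpenAI's public API, and header handling is simplified for illustration (streaming request bodies may need extra care in some runtimes):

```javascript
const worker = {
  async fetch(request, env) {
    const url = new URL(request.url);
    const headers = new Headers(request.headers);
    headers.set("Authorization", `Bearer ${env.OPENAI_API_KEY}`); // server-side key
    headers.delete("Cookie"); // never forward client cookies upstream

    return fetch(`https://api.openai.com${url.pathname}${url.search}`, {
      method: request.method,
      headers,
      body: ["GET", "HEAD"].includes(request.method) ? undefined : request.body,
    });
  },
};

export default worker;
```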

This Worker acts as a controlled proxy, forwarding client requests to a third-party API while injecting server-side credentials that are never exposed to the client.

The benefits

  • Secrets are never exposed to the client side.
  • Because all requests pass through the Worker, centralized logging and auditing become possible.
  • Secret rotation is simplified since credentials live in a single server-side location.

The limitations

  • Proxying requests introduces additional network overhead.
  • The Worker must correctly forward headers and handle streaming to avoid breaking upstream APIs.

The table below summarizes each pattern, its typical use cases, and the relative security trade-offs.

| Pattern | Best for | Complexity | Security level |
| --- | --- | --- | --- |
| Workload identity (OIDC) | Calling APIs that support short-lived tokens | Medium | High |
| Time-limited signed URLs | Upload/download of large files without exposing credentials | Low–Medium | High |
| Proxy pattern (credential isolation) | Services that require API keys (Stripe, OpenAI) | Medium | Medium–High |
| Stored secrets (env vars / Secrets Store / KV) | Workloads that can’t use identity or signing patterns | Low | Varies |

Understanding secrets and credential-free edge patterns provides a solid foundation for safely handling workloads in production.

Safe patterns for production deployments

Here are some safety patterns for deployments in production environments.

Environment-specific secrets for local development and production

Set up clear environment separation by using different credentials for development, staging, and production. Each environment should have its own configuration, managed through environment-specific sections in wrangler.toml files and runtime secret bindings.

Below is an example of a wrangler.toml file specifying multiple environments:
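
For example, with placeholder names:

```toml
name = "payment-worker"
main = "src/index.js"
compatibility_date = "2024-01-01"

[env.staging]
name = "payment-worker-staging"
vars = { ENVIRONMENT = "staging" }

[env.production]
name = "payment-worker-production"
vars = { ENVIRONMENT = "production" }
```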

For local development, secrets are kept out of version control using a .dev.vars file:
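
The file uses simple KEY=value lines and should be listed in .gitignore; the values here are illustrative:

```
API_KEY="local-dev-only-key"
DATABASE_URL="postgres://localhost:5432/app_dev"
```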

While for staging and production, secrets are injected at runtime using Wrangler:
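
For example:

```
npx wrangler secret put API_KEY --env staging
npx wrangler secret put API_KEY --env production
```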

Secret rotation without downtime

Secret rotation is a requirement for production systems, but it must be handled carefully to avoid downtime. Here are two ways to rotate secrets safely.

The first approach is the dual-credential phase mentioned earlier, in which a Worker is configured to accept both an old and a new credential. When the old secret gets rotated, the new value gradually propagates across all edge locations in about 15 minutes. During that time, the old secret remains valid and then gets safely expired afterwards.

The second approach is using Doppler's rotation workflow, which can manage two versions of the same secret automatically. The way it works is:

  • Configure Doppler to interact with and store secrets for a target service, such as a database.
  • Set a rotation interval.
  • Sync Doppler with a workload that uses the secret, in this case, a Worker.
  • Doppler maintains two secret versions for the target service, keeping one active and the other inactive.
  • Halfway through a rotation interval, Doppler switches the active and inactive secrets.
  • During scheduled or manual rotation, Doppler switches the inactive secret's value and continues in that cycle.

Least privilege secrets

Apply the principle of least privilege by granting secrets only the permissions they actually need. For example, use scoped API keys, IAM roles with limited permissions, and read-only database credentials for applications that only need to read data and not perform writes.

A bad practice is using admin-level credentials everywhere in a workflow:
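
For example (all values here are illustrative):

```
# Bad: one admin credential shared by every Worker and every task
DATABASE_URL="postgres://admin:supersecret@db.example.com/prod"
STRIPE_KEY="sk_live_full_access"
```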

A safer approach is to use scoped credentials:
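
For example, a read-only database role and a Stripe restricted key scoped to what the Worker actually does (values illustrative):

```
# Better: scoped, per-purpose credentials
REPORTS_DATABASE_URL="postgres://reports_readonly:<password>@db.example.com/prod"
STRIPE_KEY="rk_live_restricted_charges_read"
```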

Secrets audit logging

Ensure that the lifecycle of secrets within your workflow is auditable and logged. Vital information to capture includes access timestamps, Worker name, edge location, request ID, and the user or client identifier.

These patterns may seem cumbersome to follow, but many of them become easier to manage in production when they are centralized and managed through a secrets manager.

Integrating a centralized secrets manager with Cloudflare Workers

Even with edge storage options for sensitive data, teams still need a single source of truth to define secrets, audit usage, and manage access control and secrets lifecycle. Doppler can serve as that control plane without introducing delays into the edge runtime.

A typical integration flow looks like this

Install Doppler CLI and log in from your terminal window:
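
For example, on macOS via Homebrew (see Doppler's docs for other platforms):

```
brew install dopplerhq/cli/doppler
doppler login
```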

Create a project with environment mappings (dev, staging, prod):
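
Doppler creates dev, staging, and production configs by default; the project name is a placeholder:

```
doppler projects create edge-worker
doppler setup --project edge-worker --config dev
```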

Add secrets in Doppler:
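
For example (the value shown is a placeholder):

```
doppler secrets set API_KEY=sk_live_example --project edge-worker --config prd
```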

Inject Secrets to Cloudflare and Deploy Workers via CI/CD:
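
A minimal CI step might pipe the value from Doppler straight into Wrangler, then deploy:

```
# In CI: pull the value from Doppler and push it into Cloudflare, then deploy
doppler secrets get API_KEY --plain --project edge-worker --config prd \
  | npx wrangler secret put API_KEY
npx wrangler deploy
```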

This approach does not introduce latency because secrets from Doppler are synchronized into Cloudflare ahead of execution. Once deployed, Workers read secrets locally from their runtime environment, eliminating the need for per-request vault calls. An added advantage is the centralized audit logging Doppler provides, which strengthens strategic monitoring and supports faster response to security incidents.

Monitoring secret access and incident response

A secret is only as secure as your team’s ability to notice when its usage deviates from the norm. At the edge, monitoring relies on behavioral signals rather than on explicit secret-access logs. Some key signals to track include:

  • Unusual access patterns: Track the origins of secret-backed requests. Traffic from unexpected regions or Cloudflare colos outside your customer base can indicate misuse.
  • Anomalies in usage volume or timing: Sudden spikes in requests using a secret or activity during unusual hours may signal a compromised Worker or automated abuse, such as credential stuffing.
  • Failed API calls: Repeated authentication failures or clusters of 401 Unauthorized responses sometimes indicate that leaked credentials are actively being tested.

The code below is an example of how these metrics can be tracked in your workflow:
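
A sketch of deriving those signals per request; `request.cf` is populated by the Workers runtime, and the console sink stands in for a real log pipeline:

```javascript
function trackSecretUsage(request, response) {
  const signal = {
    timestamp: new Date().toISOString(),
    colo: request.cf?.colo ?? "unknown",       // which edge location served it
    country: request.cf?.country ?? "unknown", // unexpected origins -> investigate
    path: new URL(request.url).pathname,
    status: response.status,
    authFailure: response.status === 401 || response.status === 403,
  };
  console.log(JSON.stringify(signal)); // route to your monitoring backend
  return signal;
}
```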

Even with these signals in place, detection alone is not enough. Incident response practices must be ready in the event of a security compromise. These actions can be grouped into three phases.

Immediate actions

These actions are carried out immediately after a leak is discovered and focus on blocking unauthorized access:

  • Rotate and redeploy the affected secrets.
  • Revoke the old credentials and verify that requests using them now return 401 Unauthorized or 403 Forbidden errors.

Investigative actions

These actions are carried out after containment and focus on understanding how the secret was compromised:

  • Identify every Worker bound to the affected secret.
  • Review access logs around the exposure window.
  • Search the codebase for hardcoded credentials.
  • Review recent Git commit history and CI logs.

Preventive actions

These steps reduce the likelihood of similar incidents recurring:

  • Add secret scanning to your repositories.
  • Implement pre-commit hooks, such as gitleaks, on developer machines.
  • Review rotation policies and enable audit logging by default.

With tools such as Doppler, parts of the monitoring process can be automated. When Doppler holds your secrets and syncs them into Cloudflare, its activity history can be used to track how your secrets are being used and managed.

You can also enable Webhooks to notify your team of secret-related events, routing alerts to Slack, Discord, or other incident response systems.

Detecting and responding to incidents is important, but it is not the whole equation for securing edge workflows. For teams, especially those in regulated environments such as finance and health care, compliance and governance requirements must also be met at the edge.

Meeting compliance requirements at the edge

Compliance is about showing that your systems follow required rules and standards. These standards do not make a system unbreakable, but they help reduce risk and provide clear guardrails for handling sensitive information. For edge workloads, the same expectations apply. Here are some standards to adhere to.

Encryption at rest and in transit

By default, secrets managed through Cloudflare Secrets Store are encrypted at rest using AES-256 and in transit using TLS 1.3. As stated earlier, secret values stored this way are redacted from the logs and the dashboard. However, when using Workers KV, you are responsible for implementing application-level encryption, maintaining an audit trail, and defining rotation policies.

The example below demonstrates a simple approach to encrypting data before storing it in Workers KV.

Secret rotation policies

Various compliance standards require that secrets be rotated periodically or after certain events. For example, PCI-DSS requires that secrets be rotated every 90 days, while HIPAA requires rotation after an employee's departure. Relying on long-lived secrets increases both the exposure window and the blast radius of a leak. It is best to rotate credentials every 30-60 days, depending on how critical the systems they grant access to are.

Access control and audit logging

SOC 2 requires that your controls can answer four questions about secret access: who accessed the secret, when it was accessed, where the request originated, and why the access occurred.

Regardless of whether you rely on Cloudflare-native tooling or external stores for edge secrets, you must maintain a comprehensive audit trail. When implementing audit logs, it is critical that secret usage metadata is recorded, while secret values themselves remain hidden and are never logged or exposed.

The example below shows how an audit record can be generated when a secret-backed operation is performed in a Worker:
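
A sketch of such a record, answering the who/when/where/why questions; the header names are Cloudflare conventions (CF-Ray, the Access user header), and the record captures metadata only, never the secret's value:

```javascript
function buildAuditRecord({ request, secretName, outcome, reason }) {
  return {
    event: "secret_access",
    secret: secretName, // which secret, not what it contains
    who: request.headers.get("Cf-Access-Authenticated-User-Email") ?? "unauthenticated",
    when: new Date().toISOString(),
    where: request.cf?.colo ?? "unknown",
    requestId: request.headers.get("CF-Ray") ?? "unknown",
    why: reason,
    outcome, // e.g. "allowed" or "denied"
  };
}
```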

Given the compliance requirements and production safety patterns discussed so far, a simple checklist would provide a practical reference for securing secrets at the edge.

Production checklist for securing Cloudflare Workers secrets

This checklist summarizes the core practices covered in this guide and can be used as a final validation. For clarity, the checklists are grouped into checks to perform before and after deploying Cloudflare Workers to production.

Before deployment checklist 

This checklist can be further categorized into:

Secret storage checklist

  • Secrets are stored using Cloudflare Secrets Store or environment variables
  • When Workers KV is used, encryption is done at the application level
  • No secrets are committed to source control
  • Secrets are scoped per environment

Access control checklist

  • Secrets only carry the minimum required permissions
  • No admin-level credentials used in Workers
  • Access paths are documented for each secret

Rotation checklist

  • Rotation policy defined and documented
  • Dual-key rotation is supported where applicable
  • Rotation tested without downtime

Monitoring checklist

  • Audit logging enabled for secret-related operations
  • Logs are routed to a centralized monitoring backend
  • Alerts configured for auth failures and anomalous access patterns

Compliance checklist

  • Encryption at rest and in transit is verified
  • Rotation intervals aligned with PCI-DSS, SOC 2, or HIPAA

After deployment checklist

This checklist can be further categorized into:

Ongoing maintenance checklist

  • Deprecated Workers cleaned up
  • Access is reviewed periodically
  • The incident response playbook is kept up to date
  • CI/CD is reviewed regularly for leaked secrets

Secret hygiene checklist

  • Secrets are rotated on schedule
  • Unused secrets are removed
  • No secrets copied into documentation or tickets

These lists can serve as final checkpoints at various stages in your workflow. Below are also explicit Do’s and Don’ts to help highlight pitfalls to avoid.

DON’T

  • Fetch secrets from a centralized vault at runtime
  • Reuse the same secret across environments
  • Log secret values or headers
  • Delay rotation after a suspected leak

DO

  • Keep secrets local to the edge runtime
  • Rotate credentials proactively
  • Treat 401 spikes as potential security signals
  • Centralize auditing and alerting

With today’s edge security patterns established, it's worth looking ahead at what changes and improvements could occur in edge secrets management over the coming years.

The future of edge secrets management

The future of edge security patterns is not fixed, but based on how edge platforms have evolved so far, it is possible to anticipate where secrets management is heading next.

Trend 1: Workload identity everywhere

Today, most Cloudflare Workers rely on static API keys, manual rotation, and long-lived credentials. Over time, this model is likely to give way to approaches where secrets no longer need to be stored at all.

Instead, Workers are expected to rely on native identities, first-class OIDC integration, and short-lived, on-demand credentials.

In this future model, identity replaces static secrets as the primary trust mechanism.
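
A purely hypothetical sketch of what that could look like; `env.WORKER_IDENTITY` and its `getToken()` method do not exist in today's Workers runtime:

```javascript
// Hypothetical future API: the runtime mints a short-lived identity token
// scoped to a single downstream service -- no stored secret at all.
const worker = {
  async fetch(request, env) {
    const token = await env.WORKER_IDENTITY.getToken({ audience: "prod-postgres" });
    return fetch("https://db-gateway.example.com/query", {
      headers: { Authorization: `Bearer ${token}` },
    });
  },
};

export default worker;
```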

This hypothetical example illustrates a scenario where Cloudflare Workers expose a native workload identity to authenticate to services such as databases or internal APIs.

Trend 2: Hardware-based secret protection

Currently, secrets at the edge are protected using software-based encryption and memory isolation. In the future, edge platforms are likely to incorporate hardware-backed security primitives such as Hardware Security Modules (HSMs), Trusted Platform Modules (TPMs), and secure enclaves directly into edge infrastructure.

Since this form of protection provides stronger key isolation, it would reduce the risk of secret exposure in application memory and strengthen the compliance posture of teams.

Trend 3: Zero trust at the edge

Perimeter-based security models are gradually fading out. Even in cloud environments, there has been a shift toward zero-trust architectures. This trend is expected to extend to the edge as well. Access decisions would rely on continuous verification, context-aware controls, and device posture.

Trend 4: AI-powered threat detection

Behavioral machine learning approaches are replacing rule-based monitoring. AI models could learn access patterns, request routes, and timing and volume profiles to detect threats and anomalies, and even automate parts of incident response.

Technology, irrespective of where it runs, must evolve. For edge secrets today, we are migrating to environment variables and secret stores, implementing rotation and auditing, and using secret managers such as Doppler to reduce operational overhead.

Tomorrow, we will adopt even newer innovations as mentioned. Ultimately, it's important to stay aligned with these changes and update your security patterns to build resilient, production-ready edge systems.
