
Imagine you have built a high-performance racing car that can go from 0 to 60 in 0.01 seconds, but before it can move, it has to call a central office in London to request the key. That is effectively what happens when teams build on Cloudflare Workers but fail to think carefully about edge secrets management.
Cloudflare Workers are designed to run your code across more than 300 global data centers, allowing users in distant locations to load pages and applications in under 10 milliseconds. If your system is architected to make these Workers reach across a network to retrieve secrets from a central location, it introduces a fundamental mismatch. A single secrets lookup that takes 100 to 150 milliseconds can erase the performance benefits the edge is meant to deliver.
Today, more than three million Worker scripts rely on secrets, and Gartner estimates that over 75 percent of enterprise data processing now happens at the edge. More than ever, teams need clear guidance on how to secure secrets without sacrificing performance or reliability.
This article takes a deep dive into edge-specific secrets patterns. By the end, you will understand the trade-offs between different secrets storage mechanisms at the edge and see practical code examples that reflect production-grade implementations.
For hands-on implementation with Doppler, see: Cloudflare Workers + Doppler: A secure workflow.
It would be a mistake to assume that cloud secrets management can be applied unchanged to edge secrets; their architectures differ. While cloud environments usually follow a hub-and-spoke model, at the edge there is no hub. A Worker's entire job might be to validate a JWT and then issue a redirect, which takes 6ms or less. If a Worker has to "call home" and wait 150ms for a 6ms job, the benefits of using edge technology quickly disappear.

To avoid delays, secrets have to be readily available at execution time. Below are some of the key nuances that set edge secrets apart from the cloud.
Here is an example of what it looks like to call an external vault on every request:
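A minimal sketch of this anti-pattern; the vault endpoint, token header, and response shape are all illustrative:

```javascript
// Anti-pattern: every request blocks on a round trip to a central vault.
// VAULT_URL and the response shape are hypothetical.
const VAULT_URL = "https://vault.example.com/v1/secret/data/db-password";

async function handleRequest(request, env, fetchImpl = fetch) {
  // This network call adds 100-150ms before any real work begins.
  const vaultRes = await fetchImpl(VAULT_URL, {
    headers: { "X-Vault-Token": env.VAULT_TOKEN },
  });
  const { password } = await vaultRes.json();

  // ...use the password to do the actual (fast) work...
  return new Response(`connected with ${password ? "secret" : "nothing"}`);
}

// In a real Worker: export default { fetch: handleRequest };
```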
Every request waits on a network call, compounding latency as traffic increases. It would be better if the secret were preloaded through an environment variable at the edge location.
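By contrast, a sketch of the preloaded approach; `API_KEY` is an illustrative binding name:

```javascript
// The secret is already in the isolate's memory when the request
// arrives: no network call, no added latency.
async function handleRequest(request, env) {
  const apiKey = env.API_KEY; // synchronous access, ~0ms
  return new Response(apiKey ? "authorized" : "missing key", {
    status: apiKey ? 200 : 500,
  });
}

// In a real Worker: export default { fetch: handleRequest };
```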
Here, there is no external call, and secrets would be injected before execution.
While using environment variables is often a good choice, there are situations where other edge storage options are a better fit.
Choosing a secret storage option is about which one fits your team's latency, security, and lifecycle requirements, not which is best. Let’s look at the available options.
In Cloudflare, environment variables can be used to input configuration values and secrets such as API keys, database passwords, and host names directly into a Worker’s memory at boot. They are technically bindings because they connect a variable name in your code to a value managed by Cloudflare, and are set either using wrangler secret put or as variables in wrangler.toml.
Since environment variables are preloaded into the V8 isolate, they are instantly accessible when the code executes. Cloudflare also encrypts these secrets at rest and treats them as write-only, meaning their values cannot be retrieved after creation.
The main downside of environment variables is that they are scoped per Worker. If multiple Workers need the same secret, it must be configured separately for each one. Because these bindings are static, updating a secret requires redeploying the Worker. Environment variables also do not support versioning or rollback.
Environment variables work best for:
Below is a minimal Cloudflare Worker that reads a secret from an environment variable and uses it in a request without any runtime lookup.
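A minimal sketch, assuming an `API_KEY` secret and a hypothetical upstream endpoint:

```javascript
// Reads API_KEY from the environment binding and uses it as a bearer
// token for an upstream call. No runtime secret lookup is involved.
const UPSTREAM = "https://api.example.com/v1/data"; // hypothetical

async function handleRequest(request, env, fetchImpl = fetch) {
  const upstreamRes = await fetchImpl(UPSTREAM, {
    headers: { Authorization: `Bearer ${env.API_KEY}` },
  });
  return new Response(await upstreamRes.text(), { status: upstreamRes.status });
}

// In a real Worker: export default { fetch: handleRequest };
```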
To use the script above, you have to configure the file named wrangler.toml:
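A minimal configuration; the project name and date are illustrative:

```toml
# wrangler.toml -- names and dates are illustrative
name = "my-worker"
main = "src/index.js"
compatibility_date = "2024-01-01"

# Non-sensitive config can live here; secrets are set with `wrangler secret put`
[vars]
ENVIRONMENT = "production"
```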
Then set the secret:
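Using the Wrangler CLI:

```shell
# You will be prompted to paste the value; it is stored encrypted
npx wrangler secret put API_KEY
```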
Workers KV is a storage service used to distribute data globally across Cloudflare’s network. It allows you to store secrets and configuration values and access them from any edge location where your Workers run.
Unlike environment variables, which are scoped to a single Worker, a KV namespace can be bound to multiple Workers. This makes it useful when several Workers need access to the same secret or configuration. KV is also suitable for storing larger values, including JSON configurations, public key infrastructure (PKI) data, and allow-lists, with a maximum value size of up to 25 MB. Because KV is a data store, secrets can be updated or rotated without redeploying the Worker script.
However, since reads from KV require asynchronous I/O against the nearest data center, they incur 10–50ms of latency. Credential updates can take up to 60 seconds to persist across all edge locations, creating a race condition where some users might hit data centers still using old keys.
Any user with read access to the namespace can view stored KV values in plain text. It also introduces usage-based cost, currently around $0.50 per GB of storage and $0.50 per million reads, unlike environment variables, which do not incur usage-based charges.
Workers KV works best for:
To use Workers KV, you first create a namespace:
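For example (the exact subcommand syntax varies slightly between Wrangler versions):

```shell
npx wrangler kv namespace create SECRETS
```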
Wrangler prints a namespace id. Add it to wrangler.toml:
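The binding exposes the namespace to your code as `env.SECRETS`:

```toml
# wrangler.toml
[[kv_namespaces]]
binding = "SECRETS"
id = "<namespace-id-from-wrangler>"
```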
Store the secret value:
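For example:

```shell
# Writes the value under the key "api_key" in the bound namespace
npx wrangler kv key put --binding=SECRETS "api_key" "example-value"
```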
Then read it in your Worker using an async KV lookup:
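A sketch of the read path; `SECRETS` is the KV binding name used above:

```javascript
// Reads the secret from KV on each request. Note the awaited I/O:
// this is the 10-50ms lookup discussed above.
async function handleRequest(request, env) {
  const apiKey = await env.SECRETS.get("api_key");
  if (!apiKey) {
    return new Response("secret not found", { status: 500 });
  }
  return new Response("ok", { status: 200 });
}

// In a real Worker: export default { fetch: handleRequest };
```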
Cloudflare Secrets Store marks a notable shift in credential security and access, from per-Worker configuration to an account-level model. It treats secret storage as a dedicated infrastructure layer rather than an add-on to each Worker's configuration.
Secrets are defined once at the account level, for example, PROD_POSTGRES_PASSWORD, and can then be bound to as many Workers as needed. Secrets stored this way are encrypted at rest and are write-only. Their values cannot be viewed after creation and can only be accessed and decrypted by the Worker runtime at execution.
Secrets Store also supports role-based access control (RBAC) and a unified audit log that records secret creation, binding, rotation, and deletion, along with the actions performed by different team roles. Secrets can be rotated without redeploying Worker scripts or triggering CI/CD pipelines.
That said, Secrets Store is still relatively new. Its beta version was released in April 2025 and currently has some limitations. For example, each account is limited to 100 secrets, with a maximum size of 1 KB per secret. More setup is needed here than for environment variables or KV, as it requires configuring the store and explicitly referencing both a store_id and a secret_name in your Worker configuration.
Secrets Store works best for:
To get started, create a Secret Store for a credential:
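For example (Secrets Store is in beta, so command names and flags may change):

```shell
npx wrangler secrets-store store create my-app-secrets --remote
```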
Wrangler will return a store_id. Save it.
Add a secret to the store:
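Using the store_id you saved (shown here as a placeholder):

```shell
npx wrangler secrets-store secret create <store-id> \
  --name PROD_POSTGRES_PASSWORD --scopes workers --remote
```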
You’ll be prompted to enter the value securely.
Bind the secret to your Worker in the wrangler.toml file:
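The binding references both the store and the secret by name:

```toml
# wrangler.toml -- bind the account-level secret to this Worker
[[secrets_store_secrets]]
binding = "PROD_POSTGRES_PASSWORD"
store_id = "<store-id>"
secret_name = "PROD_POSTGRES_PASSWORD"
```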
Use the secret in your Worker:
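A sketch of the access pattern; Secrets Store bindings expose an async `get()`:

```javascript
// The value is decrypted by the Worker runtime only at execution time.
async function handleRequest(request, env) {
  const dbPassword = await env.PROD_POSTGRES_PASSWORD.get();
  return new Response(dbPassword ? "connected" : "no credential", {
    status: dbPassword ? 200 : 500,
  });
}

// In a real Worker: export default { fetch: handleRequest };
```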
Then deploy:
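```shell
npx wrangler deploy
```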
The differences in use cases between these storage options become clearer when their strengths and weaknesses are compared side by side. The table below summarizes them.
| Capability | Environment variables | Workers KV | Secrets Store |
|---|---|---|---|
| Cold start impact | None (0ms) | 10–50ms per read | None (0ms) |
| Shared across Workers | No (per Worker) | Yes (namespace-based) | Yes (account-level) |
| Update without redeploy | No | Yes | Yes |
| Encryption at rest | Yes | No (manual) | Yes |
| Audit logging | No | No | Yes |
| RBAC support | No | Limited (namespace access) | Yes |
| Consistency model | Immediate | Eventually consistent (up to ~60s) | Immediate |
| Max secret size | Small values | Up to 25 MB | 1 KB |
| Cost | Included | $0.50/GB + reads | Included (beta) |
| Best suited for | Hot paths, low churn | Shared configs, rotation | Compliance, governance |
While Cloudflare Workers benefit from these storage options, distributing secrets at the edge still presents a few operational challenges.
Here are some challenges that can occur during secret distribution and how to handle them.
When a secret is uploaded or rotated, Cloudflare has to distribute it to more than 300 locations. The challenge is that some locations receive the updated credential a few seconds after others, causing 401 Unauthorized errors in the lagging regions.
Solution: Set up a dual-credential phase in which the old credentials remain active for a few minutes while the new ones propagate globally. This prevents downtime during rotation.
If your system depends on a centralized vault, an edge location can lose connectivity and break access to secrets. Your Worker must continue to function to avoid request failures.
Solution: Create a fallback pattern that caches secrets at the edge so Workers can continue functioning during temporary connectivity outages.
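A sketch of the fallback; `fetchSecret` is a hypothetical stand-in for whatever fetches the fresh value (a vault call, a KV read, and so on):

```javascript
// In-memory cache scoped to the isolate's lifetime.
const secretCache = new Map();

async function getSecretWithFallback(name, fetchSecret) {
  try {
    const fresh = await fetchSecret(name);
    secretCache.set(name, fresh); // refresh the local copy
    return fresh;
  } catch (err) {
    const cached = secretCache.get(name);
    if (cached !== undefined) {
      // Connectivity lost: serve the last known value.
      return cached;
    }
    throw err; // no cached copy either; surface the failure
  }
}
```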
In this pattern, the Worker prefers a fresh secret but falls back to a locally cached value if connectivity to the source is temporarily unavailable. This approach may briefly allow the use of a rotated secret, so it should be used where a short-lived inconsistency is tolerable.
These challenges show why alternatives to stored secrets are worth considering. There are also cases where it makes more sense to avoid stored secrets entirely and use short-lived tokens instead.
While storing secrets at the edge is often unavoidable, it is not always the most secure option. Imagine over three million Workers having access to compromised static API keys with no automatic expiration. That's unlimited access to a company's sensitive systems. Even if the secret is eventually rotated, the window of exposure can still lead to costly damages. Let's look at some ephemeral credential patterns that you could adopt instead.
Instead of giving a Worker a long-lived static credential to access downstream workloads such as databases or APIs, it can request an OIDC token to prove its identity during authentication.
The Worker can use the built-in fetch API to interact with the token endpoint of external identity providers such as AWS, GCP, or Okta. The provider then verifies the Worker’s credentials, which can be a client ID and secret, a certificate, or mutual TLS, and then issues a signed OIDC access token.
In this setup, the Worker stores only a bootstrap credential used to obtain short-lived tokens, while access to downstream services remains scoped and time-bound.
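A sketch of the token exchange using a generic OAuth 2.0 client-credentials flow; the endpoint URL and field names are illustrative, and real providers differ:

```javascript
// Exchanges the Worker's bootstrap credential (client id/secret) for a
// short-lived access token. TOKEN_ENDPOINT is hypothetical.
const TOKEN_ENDPOINT = "https://idp.example.com/oauth2/token";

async function getAccessToken(env, fetchImpl = fetch) {
  const res = await fetchImpl(TOKEN_ENDPOINT, {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      grant_type: "client_credentials",
      client_id: env.OIDC_CLIENT_ID,
      client_secret: env.OIDC_CLIENT_SECRET,
    }),
  });
  if (!res.ok) throw new Error(`token request failed: ${res.status}`);
  const { access_token } = await res.json();
  return access_token; // typically valid for minutes, not months
}
```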
The benefits
The limitations
Time-limited signed URLs are most commonly used for large assets, such as images, logs, and backups. Using securely stored signing credentials, a Worker can generate a time-sensitive, cryptographically signed link that allows users to perform operations like GET and PUT. It replaces the need to temporarily expose your API for short-term tasks such as streaming a video or uploading a document.
Here is an example of how signed URLs might be implemented in Workers:
This Worker generates short-lived, operation-scoped URLs that allow clients to upload or download specific objects directly from storage without exposing long-lived credentials.
The benefits
The limitations
While OIDC and Signed URLs are great options for removing the need for stored credentials, some services, such as Stripe and OpenAI, still rely on bearer tokens. In those cases, a Worker can act as a proxy to avoid exchanging credentials between the client and server sides. When the client side sends a request to the Worker, the Worker forwards it to the third-party API while injecting the server-side API key.
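A sketch of the proxy; the upstream host and binding name are illustrative:

```javascript
// Forwards client requests to a third-party API, injecting the
// server-side key. The key never reaches the client.
const UPSTREAM_HOST = "https://api.example.com";

async function handleRequest(request, env, fetchImpl = fetch) {
  const url = new URL(request.url);
  // Only the path and query are forwarded to the upstream API.
  return fetchImpl(`${UPSTREAM_HOST}${url.pathname}${url.search}`, {
    method: request.method,
    headers: { Authorization: `Bearer ${env.THIRD_PARTY_API_KEY}` },
  });
}

// In a real Worker: export default { fetch: handleRequest };
```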
This Worker acts as a controlled proxy, forwarding client requests to a third-party API while injecting server-side credentials that are never exposed to the client.
The benefits
The limitations
The table below summarizes each pattern, its typical use cases, and the relative security trade-offs.
| Pattern | Best for | Complexity | Security level |
|---|---|---|---|
| Workload identity (OIDC) | Calling APIs that support short-lived tokens | Medium | High |
| Time-limited signed URLs | Upload/download of large files without exposing credentials | Low–Medium | High |
| Proxy pattern (credential isolation) | Services that require API keys (Stripe, OpenAI) | Medium | Medium–High |
| Stored secrets (env vars / Secrets Store / KV) | Workloads that can’t use identity or signing patterns | Low | Varies |
Understanding secrets and credential-free edge patterns provides a solid foundation for safely handling workloads in production.
Here are some safety patterns for deployments in production environments.
Set up clear environment separation by using different credentials for development, staging, and production. Each environment should have its own configuration, managed through environment-specific sections in wrangler.toml files and runtime secret bindings.
Below is an example of a wrangler.toml file specifying multiple environments:
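A sketch with illustrative names:

```toml
# wrangler.toml -- names are illustrative
name = "my-worker"
main = "src/index.js"
compatibility_date = "2024-01-01"

[env.staging]
name = "my-worker-staging"
[env.staging.vars]
ENVIRONMENT = "staging"

[env.production]
name = "my-worker-production"
[env.production.vars]
ENVIRONMENT = "production"
```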
For local development, secrets are kept out of version control using a .dev.vars file:
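For example (the variable names are illustrative):

```
# .dev.vars -- local development only; add this file to .gitignore
API_KEY="dev-placeholder-key"
DB_PASSWORD="local-only-password"
```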
For staging and production, secrets are injected at runtime using Wrangler:
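```shell
# Each environment gets its own secret value
npx wrangler secret put API_KEY --env staging
npx wrangler secret put API_KEY --env production
```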
Secret rotation is a requirement for production systems, but it must be handled carefully to avoid downtime. Here are two ways to rotate secrets safely.
The first approach is the dual-credential phase mentioned earlier, in which a Worker is configured to accept both an old and a new credential. When the secret is rotated, the new value gradually propagates across all edge locations over roughly 15 minutes. During that window, the old secret remains valid; once propagation completes, it can be safely retired.
The second approach is to use Doppler's rotation workflow, which automatically manages two versions of the same secret, keeping the previous version valid while the new one propagates.
Apply the principle of least privilege by granting secrets only the permissions they actually need. For example, use scoped API keys, IAM roles with limited permissions, and read-only database credentials for applications that only need to read data and not perform writes.
A bad practice is using admin-level credentials everywhere in a workflow:
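An illustrative sketch of the anti-pattern:

```
# Anti-pattern (illustrative): one admin credential reused by every Worker
DB_URL="postgres://admin:password@db.example.com/app"  # full read/write/DDL
API_KEY="key-with-all-permissions"
```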
A safer approach is to use scoped credentials:
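An illustrative sketch of the scoped alternative:

```
# Safer (illustrative): credentials scoped to what each Worker actually does
DB_URL="postgres://readonly_user:password@db.example.com/app"  # SELECT only
BILLING_API_KEY="key-scoped-to-charges-read"
```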
Ensure that the lifecycle of secrets within your workflow is auditable and logged. Vital information to capture includes access timestamps, Worker name, edge location, request ID, and the user or client identifier.
These patterns may seem cumbersome to follow, but many of them become easier to manage in production when they are centralized and managed through a secrets manager.
Even with edge storage options for sensitive data, teams still need a single source of truth to define secrets, audit usage, and manage access control and secrets lifecycle. Doppler can serve as that control plane without introducing delays into the edge runtime.
Install Doppler CLI and log in from your terminal window:
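For example, on macOS via Homebrew (see Doppler's docs for other platforms):

```shell
brew install dopplerhq/cli/doppler
doppler login
```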
Create a project with environment mappings (dev, staging, prod):
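Doppler projects come with dev, stg, and prd configs by default; the project name here is illustrative:

```shell
doppler projects create my-worker
# Link this directory to the project and a starting config
doppler setup --project my-worker --config dev
```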
Add secrets in Doppler:
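For example:

```shell
doppler secrets set API_KEY="example-value" --config dev
doppler secrets set DB_PASSWORD="example-value" --config prd
```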
Inject secrets into Cloudflare and deploy Workers via CI/CD:
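One common pattern is to pipe each value from Doppler into Wrangler during the pipeline run:

```shell
# Pipe each secret from Doppler into Cloudflare, then deploy
doppler secrets get API_KEY --plain --config prd | npx wrangler secret put API_KEY
npx wrangler deploy
```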
This approach does not introduce latency because secrets from Doppler are synchronized into Cloudflare ahead of execution. Once deployed, Workers read secrets locally from their runtime environment, eliminating the need for per-request vault calls. An added advantage is the centralized audit logging Doppler provides, which strengthens strategic monitoring and supports faster response to security incidents.
A secret is only as secure as your team’s ability to know when there is a shift in what constitutes normal usage. At the edge, monitoring relies on behavioral signals rather than on explicit secret-access logs: spikes in authentication failures (401/403 responses), unusual request volumes, and requests from unexpected geographic regions.
The code below is an example of how these metrics can be tracked in your workflow:
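A minimal in-memory sketch; in production these counters would be flushed to an analytics sink rather than kept in the isolate:

```javascript
// Tracks simple behavioral counters per isolate.
const metrics = {
  requests: 0,
  authFailures: 0,
  countries: new Map(), // country code -> request count
};

function recordRequest(status, country) {
  metrics.requests += 1;
  if (status === 401 || status === 403) metrics.authFailures += 1;
  metrics.countries.set(country, (metrics.countries.get(country) ?? 0) + 1);
}

// Flags a simple anomaly: auth failures above a threshold share.
function authFailureRateSuspicious(threshold = 0.2) {
  if (metrics.requests === 0) return false;
  return metrics.authFailures / metrics.requests > threshold;
}
```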
Even with access to these signals, reliability is not guaranteed. Incident response practices must be in place in the event of a security compromise. These actions can be grouped into three phases.
These actions are carried out immediately after a leak is discovered and focus on blocking unauthorized access: revoke or rotate the compromised credential, invalidate any active tokens or sessions it issued, and block suspicious traffic at the edge.
These actions are carried out after containment and focus on understanding how the secret was compromised: review audit logs and access history, identify the exposure path, and assess which systems the credential could reach.
These steps reduce the likelihood of similar incidents recurring: shorten credential lifetimes, tighten access scopes, automate rotation where possible, and update runbooks with the lessons learned.
With tools such as Doppler, parts of the monitoring process can be automated. When Doppler holds your secrets and syncs them into Cloudflare, its activity history can be used to track how your secrets are being used and managed.
You can also enable Webhooks to notify your team of secret-related events, routing alerts to Slack, Discord, or other incident response systems.
Detecting and responding to incidents is important, but it is not the whole equation for securing edge workflows. For teams, especially those in regulated environments such as finance and health care, compliance and governance requirements must also be met at the edge.
Compliance is about showing that your systems follow required rules and standards. These standards do not make a system unbreakable, but they help reduce risk and provide clear guardrails for handling sensitive information. For edge workloads, the same expectations apply. Here are some standards to adhere to.
By default, secrets managed through Cloudflare Secrets Store are encrypted at rest using AES-256 and in transit using TLS 1.3. As stated earlier, secret values stored this way are redacted from logs and the dashboard. However, when using Workers KV, you are responsible for implementing application-level encryption, maintaining an audit trail, and defining rotation policies.
The example below demonstrates a simple approach to encrypting data before storing it in Workers KV.
Various compliance standards require that secrets be rotated periodically or after certain events. For example, PCI-DSS requires that secrets be rotated every 90 days, while HIPAA requires rotation after an employee's departure.
Relying on long-lived secrets increases both the exposure window and the blast radius. It's best to rotate credentials every 30 to 60 days, depending on how critical the systems they grant access to are.
SOC 2 requires that your systems can answer four questions: who accessed the secret, when it was accessed, where the request originated, and why the access occurred.
Regardless of whether you rely on Cloudflare-native tooling or external stores for edge secrets, you must maintain a comprehensive audit trail. When implementing audit logs, it is critical that secret usage metadata is recorded, while secret values themselves remain hidden and are never logged or exposed.
The example below shows how an audit record can be generated when a secret-backed operation is performed in a Worker:
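A sketch of the record itself; the field names are illustrative, and `request.cf.colo` is where a Worker can read its edge location:

```javascript
// Builds an audit record for a secret-backed operation. Only metadata
// is captured; the secret value itself is never included or logged.
function buildAuditRecord({ secretName, workerName, colo, requestId, actor }) {
  return {
    event: "secret_used",
    secret: secretName,      // name only, never the value
    worker: workerName,
    edgeLocation: colo,      // e.g. request.cf.colo in a Worker
    requestId,
    actor,
    timestamp: new Date().toISOString(),
  };
}
```

In a Worker, the record would be emitted to a log sink after each operation that used the secret.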
Given the compliance requirements and production safety patterns discussed so far, a simple checklist would provide a practical reference for securing secrets at the edge.
This checklist summarizes the core practices covered in this guide and can be used as a final validation. For clarity, the checklists are grouped into checks to perform before and after deploying Cloudflare Workers to production.
Before deployment, the checks fall into the following categories:
Secret storage checklist
Access control checklist
Rotation checklist
Monitoring checklist
Compliance checklist
After deployment, the checks fall into the following categories:
Ongoing maintenance checklist
Secret hygiene checklist
These lists can serve as final checkpoints at various stages in your workflow. Below are also explicit Do’s and Don’ts to help highlight pitfalls to avoid.
With today’s edge security patterns established, it's worth looking ahead at what changes and improvements could occur in edge secrets management over the coming years.
The future of edge security patterns is not fixed, but based on how edge platforms have evolved so far, it is possible to anticipate where secrets management is heading next.
Today, most Cloudflare Workers rely on static API keys, manual rotation, and long-lived credentials. Over time, this model is likely to give way to approaches where secrets no longer need to be stored at all.
Instead, Workers are expected to rely on native identities, first-class OIDC integration, and short-lived, on-demand credentials.
In this future model, identity replaces static secrets as the primary trust mechanism.
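A purely speculative sketch of such a future API; the `WORKLOAD_IDENTITY` binding and its `getToken` method do not exist today:

```javascript
// Hypothetical future API: the platform vouches for the Worker's
// identity, so no stored secret is needed at all.
async function connectToDatabase(env) {
  // Imagined binding: a platform-issued, short-lived identity token.
  const token = await env.WORKLOAD_IDENTITY.getToken({
    audience: "postgres://db.internal.example.com",
  });
  return { authorization: `Bearer ${token}` }; // passed to the database driver
}
```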
This hypothetical example illustrates a scenario where Cloudflare Workers expose a native workload identity to authenticate to services such as databases or internal APIs.
Currently, secrets at the edge are protected using software-based encryption and memory isolation. In the future, edge platforms are likely to incorporate hardware-backed security primitives such as Hardware Security Modules (HSMs), Trusted Platform Modules (TPMs), and secure enclaves directly into edge infrastructure.
Since this form of protection provides stronger key isolation, it would reduce the risk of secret exposure in application memory and strengthen the compliance posture of teams.
Perimeter-based security models are gradually fading out. Even in cloud environments, there has been a shift toward zero-trust architectures. This trend is expected to extend to the edge as well. Access decisions would rely on continuous verification, context-aware controls, and device posture.
Behavioral machine learning approaches are replacing rule-based monitoring. AI models could learn access patterns, request routes, and time and volume to detect threats and anomalies, and even automate incident response.
Technology, irrespective of where it runs, must evolve. For edge secrets today, we are migrating to environment variables and secret stores, implementing rotation and auditing, and using secret managers such as Doppler to reduce operational overhead.
Tomorrow, we will adopt even newer innovations as mentioned. Ultimately, it's important to stay aligned with these changes and update your security patterns to build resilient, production-ready edge systems.



