
In March 2025, a popular GitHub Action, tj-actions/changed-files, was compromised. The malicious version silently logged CI/CD secrets into build logs. Any repository that ran the action risked leaking credentials to attackers.
What makes this incident particularly telling is that many of the affected teams already had rotation policies in place. Tokens might have been set to expire within hours or days under a strict rotation schedule, but that didn’t matter. Once those secrets hit the logs, attackers had everything they needed to exploit them in real time.
This is the blind spot in most rotation strategies. Rotation doesn’t protect you in the moments between compromise and detection, and that moment is enough for an attacker to spin up infrastructure or pivot deeper into your systems. That moment is what we call the window of exploitability. Closing it requires more than rotation.
Every secret goes through a lifecycle: it is created, distributed, actively used, and eventually rotated or revoked.
Rotation only controls the first and last steps. The dangerous part is the middle, where secrets are alive and usable. Every type of secret, from encryption keys and database passwords to API credentials, faces the same risks.
The tj-actions/changed-files breach is a perfect example. The secrets involved could have been short-lived or valid only during a single CI run. But that small window was enough. An attacker doesn’t need days to cause damage. With automated tooling, seconds are sufficient to pull data from an S3 bucket, spin up EC2 instances, or inject malicious code that persists long after the token itself has expired. The image below helps illustrate what that looks like.

By the time rotation or revocation kicks in, the attacker has already achieved their goal. Incident response teams are often left playing catch-up once stolen credentials have already been used. Unless there’s a way to detect misuse during that middle stage, the exploit window stays wide open.
The exploit window rarely opens through a sophisticated zero-day. More often, it’s the result of small, familiar mistakes that teams repeat every day. Below are some of the most common ways it happens.
Logs are one of the easiest places for secrets to slip. A developer might add an echo for debugging, or a third-party Action might write environment values into build output. Suddenly, what looked like a locked-down API key is sitting in plaintext for anyone with access to the logs. Consider the following GitHub Actions example:
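A minimal sketch of the pattern; the workflow, secret name, and debug step are hypothetical:

```yaml
# Hypothetical workflow: a debugging step leaks a secret into the build log
name: deploy
on: push
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Debug deploy config
        # GitHub Actions masks exact matches of registered secrets in log
        # output, but a trivial transformation like base64 defeats the
        # masking -- the encoded token lands in plaintext for anyone who
        # can read the build output
        run: echo "$API_TOKEN" | base64
        env:
          API_TOKEN: ${{ secrets.API_TOKEN }}
```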
From the developer’s perspective, this is harmless troubleshooting. But for an attacker, those logs are gold. All they need to do is read the build output, copy the token, and immediately query APIs or exfiltrate data. Even if the key only lives for a few minutes, that’s more than enough time to do real damage. A new secret might be issued afterwards, but rotation doesn’t even get the chance to help.
As AI assistants become a normal part of development, they introduce a new leakage path that feels almost invisible. Developers often paste connection strings or API keys into prompts, asking for help with a migration or configuration. The assistant processes the request, generates the code, and logs the conversation, and in doing so, the secret is now preserved outside its intended scope.
A typical slip might look like this:
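For example, a hypothetical prompt (the connection string here is fake):

```text
"Can you help me write a migration script for this database?
Here's my connection string:
postgres://admin:SuperSecret123@prod-db.internal:5432/customers"
```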
The assistant responds helpfully, but the damage is already done. The secret may persist in logs or feedback loops for weeks. It can be used to train the AI and resurface as autocomplete suggestions for other users or, worse, be indexed by a search engine if the chat is shared publicly. If an attacker gains access to those systems, they inherit a set of keys that were never meant to leave your machine. Short-lived or not, the exposure window is long enough to matter.
Another recurring pattern is secrets being hard-coded into artifacts. It often happens in Dockerfiles, where a developer sets an environment variable during the build process for convenience. The build works, the container runs, and the secret becomes a permanent part of the image.
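A sketch of how this happens; the base image and key name are placeholders:

```dockerfile
# Hypothetical Dockerfile: the key is baked into the image at build time
FROM node:20-alpine
WORKDIR /app
# Convenient for the build, but this value is written into the image
# config and persists in the layer history even if it is removed later
ENV STRIPE_API_KEY=sk_live_placeholder_not_a_real_key
COPY . .
RUN npm ci
CMD ["node", "server.js"]
```

Anyone who can pull the image can recover the value afterwards, for instance with `docker history --no-trunc` or `docker inspect`.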
When the image is pushed to a registry or cached on a runner, the secret travels with it. Even if you remove it from your code later, it still exists in earlier layers and can be retrieved by anyone with access to the image. What started as a shortcut has now turned into a distribution mechanism for your most sensitive credentials.
In each of these cases, the secret exposure could be brief, but that short moment is sufficient for an attacker to act and access sensitive data. The only real defense is closing the detection gap.
Security-focused teams treat secret management as more than just storage and rotation. They take a holistic approach that covers how secrets are created, used, and protected. Here’s how you can do the same.
There’s an important difference between rotated secrets and dynamic secrets. Rotation means replacing a secret on a schedule or after an incident. Dynamic secrets are generated on demand, with a short lifetime, and are tied to a specific use. Rotation shortens how long a leaked secret stays valid. Dynamic secrets go further; they don’t even exist until you ask for them, and they expire right after use. That makes them perfect for CI/CD pipelines, where a token might only be needed for a single build or deploy.
To minimize the window of attack, leverage secrets management tools to issue dynamic secrets in your CI/CD workflow, so credentials are short-lived, used once, and expire immediately after. With Doppler, for example, you can connect third-party services like AWS, GCP, or databases. Doppler issues dynamic credentials for those services and injects them into your job at runtime. The secret lives just long enough for your workflow to run, and then it disappears.
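As a sketch, a CI step along these lines (the step name and deploy script are assumptions) fetches secrets at runtime via the Doppler CLI rather than storing them in the workflow:

```yaml
# Hypothetical deploy step: `doppler run` injects secrets as environment
# variables only for the lifetime of the child process, so nothing is
# hard-coded in the workflow file or left behind after the job finishes
- name: Deploy
  run: doppler run -- ./deploy.sh
  env:
    DOPPLER_TOKEN: ${{ secrets.DOPPLER_TOKEN }}
```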
To keep your team security-focused, you need to see how secrets are used, not just where they’re stored. Emit an audit event every time any secret is retrieved or validated. Capture who or what used it, which workload ran, the source IP or network, region, outcome, and a precise timestamp. Furthermore, the alerts should be opinionated and high-signal. Flag access from outside your allowlisted networks or regions, use by an unexpected identity, unusual hours for a given pipeline, spikes in request rate, or any use after a revoke event.
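An illustrative shape for such an audit event (the field names are assumptions, not any particular platform’s schema):

```json
{
  "event": "secret.accessed",
  "secret": "prod/db-password",
  "actor": "ci-runner@deploy-pipeline",
  "workload": "github-actions:deploy.yml",
  "source_ip": "203.0.113.24",
  "region": "us-east-1",
  "outcome": "success",
  "timestamp": "2025-03-15T04:12:09Z"
}
```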
Popular platforms already expose usage logs you can wire into alerts, such as the Doppler Access log, CloudTrail for AWS Secrets Manager, and Cloud Audit Logs for Google Cloud Secret Manager. For example, the Doppler access log provides an intuitive user access history.

From there, you can directly see who accessed your secrets, from what device, the method they used, and the time frame. That way, you can easily investigate and quickly take action when a rule trips.
In addition to monitoring, guardrails should be put around sensitive systems. Monitoring tells you something is wrong; guardrails block it. For these systems, start with deny by default and allow access only from verified identities, allowlisted networks, and approved runtimes.
Keep services private and expose them only through VPN, VPC endpoints, or an API gateway with an IP allowlist. If a request comes from outside your range, refuse it. Databases should accept connections only from private subnets, not the public internet. Also, bind access to workload identity, not static keys.
For example, with GitHub Actions and AWS, you can use OIDC to lock an IAM role to a single repository and branch.
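A sketch of the IAM role’s trust policy (the account ID, org, and repo are placeholders); the `sub` condition is what pins the role to one repo and branch:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::123456789012:oidc-provider/token.actions.githubusercontent.com"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "token.actions.githubusercontent.com:aud": "sts.amazonaws.com",
          "token.actions.githubusercontent.com:sub": "repo:my-org/my-repo:ref:refs/heads/main"
        }
      }
    }
  ]
}
```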
This way, even if a token leaks, it can’t be reused by another repo or branch. The scope is tightly bound, so attackers can’t pivot into other environments.
Not every problem needs a technical fix; some of them come down to practice. Make it policy that secrets are never pasted into AI prompts or copilots. If you need to show an assistant how a config or connection string works, swap in dummy values that preserve the format but not the real keys. This builds habits that keep sensitive data from leaving your environment in the first place.
Do:
- Swap real values for placeholders that keep the same shape (e.g., postgres://USER:PASSWORD@HOST:5432/DB).
- Strip tokens, passwords, and account identifiers from config snippets before pasting them.
- Treat any secret that does slip into a prompt as compromised, and revoke it immediately.

Don't:
- Paste real connection strings, API keys, or tokens into prompts.
- Share chat transcripts that contain real configuration values.
- Assume deleting a conversation removes the secret from the provider's systems.
That way, you can still lean on AI for help without turning it into an accidental exfiltration channel.
Secrets rotation is important, but detection is what closes the exploit gap for many organizations managing cloud pipelines. The GitHub Action compromise showed that even teams with strict rotation policies were exposed, with secrets leaking in real time and attackers moving faster than the next scheduled rotation. That window of exploitability is the real risk. Secrets management should focus on usage and detection, making sure leaks are spotted and shut down before they turn into breaches.
This is even more true with the recent AI boom, where code is being generated and shared at unprecedented speed. Rotation alone cannot keep up with the sprawl. Detection and real-time monitoring are what keep those leaks from growing into breaches.
Want to see how teams are staying ahead in the AI era? Book a Doppler demo and watch how dynamic secrets and real-time monitoring shrink that exploit window to almost nothing.



