There is a number that should focus every security team's attention right now: 1 billion.
That's roughly how many commits developers pushed to GitHub in 2025, up 25% year over year, according to GitHub's Octoverse 2025 report. More than 230 new repositories were created every minute. Over 36 million new developers joined the platform in a single year, more than one per second.
Those are productivity metrics. They are also exposure metrics. Because buried inside that volume of code is a growing inventory of secrets: API keys, access tokens, database credentials, service account passwords, pushed into public repositories by developers who either didn't realize they were there, or assumed no one would look.
AI didn't create this problem. But it is making it significantly worse, faster than most organizations are ready to handle.
GitHub Copilot is now part of the default developer experience. Roughly 80% of new GitHub users try Copilot within their first week on the platform. More than 1.1 million public repositories depend on an LLM SDK, a figure that grew 178% year over year, with nearly 700,000 of those repositories created in just the past twelve months.
This matters beyond productivity. Many of the developers fueling this growth are building production applications using AI assistants and natural language prompts, sometimes without deep familiarity with the code being generated or the security implications of the design choices being made for them. The industry has started calling this "vibe coding," and it introduces a specific class of risk at scale.
CodeRabbit's December 2025 report found that AI-generated code contains 70% more errors than human-written code, and those errors trend more severe. Security vulnerabilities appear at nearly three times the human baseline. Misconfigurations appear 75% more frequently. Code churn is up 41%, meaning more code is being written, revised, and committed at a pace that compresses the review cycles that historically caught credential exposure before it hit a public branch.
The secret-leak signal is direct: AI-assisted commits show a 3.2% secret-leak rate compared to a 1.5% baseline across all public GitHub commits. Applied against a billion commits, the arithmetic is not comfortable.
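To make that arithmetic concrete, here is a back-of-the-envelope sketch using the rates above. The 30% AI-assisted share is a hypothetical assumption for illustration, not a figure from any cited report:

```python
# Back-of-the-envelope math on the leak rates cited above.
TOTAL_COMMITS = 1_000_000_000        # ~1B commits pushed to GitHub in 2025
BASELINE_LEAK_RATE = 0.015           # 1.5% baseline secret-leak rate
AI_ASSISTED_LEAK_RATE = 0.032       # 3.2% for AI-assisted commits

# Hypothetical: assume 30% of commits are AI-assisted.
ai_share = 0.30
ai_commits = TOTAL_COMMITS * ai_share

# Excess leaky commits attributable to the elevated AI-assisted rate.
excess_leaks = ai_commits * (AI_ASSISTED_LEAK_RATE - BASELINE_LEAK_RATE)

print(f"{excess_leaks:,.0f} extra leaky commits beyond baseline")
# roughly 5.1 million extra leaky commits from the AI-assisted share alone
```

Even under conservative assumptions about the AI-assisted share, the excess runs into the millions of commits per year.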
Georgia Tech's Systems Software and Security Lab tracked at least 35 CVEs disclosed in March 2026 alone that were the direct result of AI-generated code. That's a single month. These aren't hypothetical findings. They are real vulnerabilities in production software that made it into public repositories and were subsequently catalogued in the National Vulnerability Database.
GitHub's own CodeQL data reinforces the pattern. In 2025, Broken Access Control overtook Injection as the most common CodeQL alert, flagged in 151,000-plus repositories. A significant portion stem from AI-scaffolded endpoints that look syntactically correct but skip critical authentication checks, a class of error that's difficult to catch without deliberate review and easy for LLMs to produce when they're optimizing for working code rather than secure code.
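The "syntactically correct but missing the check" failure mode is easy to show in miniature. This is an illustrative sketch with hypothetical names (`User`, `ORDERS`, the handler functions), not code from any cited repository:

```python
from dataclasses import dataclass

@dataclass
class User:
    id: int
    role: str

# Toy data store standing in for a real database.
ORDERS = {1: {"owner_id": 7, "total": 99.0}}

def get_order_unsafe(order_id: int, user: User) -> dict:
    # Looks correct, returns data, passes a happy-path test --
    # but never verifies that the caller owns the record.
    return ORDERS[order_id]

def get_order_safe(order_id: int, user: User) -> dict:
    order = ORDERS[order_id]
    # The access-control check that scaffolded endpoints commonly omit:
    if user.id != order["owner_id"] and user.role != "admin":
        raise PermissionError("not your order")
    return order
```

Both versions "work" in the sense that compiles-and-returns-data working code rewards, which is exactly why this class of bug survives casual review.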
What makes secret leaks distinct from other vulnerability classes is the immediacy of the damage window. A misconfigured auth endpoint requires an attacker to identify it, understand it, and build an exploit. An exposed API key can be scraped, validated, and weaponized in minutes. Automated scanners continuously index GitHub for credential patterns, and the window between a credential being pushed and one of those scanners finding it is narrow. That's exactly why detection speed matters as much as detection coverage.
Security teams were never staffed or tooled to monitor a platform growing at 230 repositories per minute.
Manual code review doesn't scale to a billion commits. Periodic scans miss the window where an exposed credential is live. And alert volume without context is a structural challenge across the industry: without prioritization, even well-resourced teams end up managing a backlog rather than closing exposure windows.
The exposure is rarely obvious. It doesn't announce itself. It accumulates quietly in config files, test scaffolds, and commit histories that persist long after the "fix" commit removes the secret from the current branch. GitHub's own tooling can surface some of these patterns, but raw alert volume without context is not a security outcome.
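Why a "fix" commit isn't remediation can be shown in a few lines. The diff below is invented for illustration; the point is that the removed line, secret included, stays readable to anyone walking the history via `git log -p`, the GitHub commits view, or a fork:

```python
import re

# An invented "remove the hardcoded password" commit, as a unified diff.
fix_commit_diff = """\
--- a/config.py
+++ b/config.py
-DB_PASSWORD = "hunter2-prod-2025"
+DB_PASSWORD = os.environ["DB_PASSWORD"]
"""

# Removed lines start with a single "-"; the secret is right there in history.
removed = [line[1:] for line in fix_commit_diff.splitlines()
           if line.startswith("-") and not line.startswith("---")]
leaked = [line for line in removed if re.search(r"(?i)password\s*=", line)]
print(leaked)
```

Actual remediation means rotating the credential and, where necessary, rewriting or taking down the exposed history; deleting the line going forward does nothing about the copies already scraped and indexed.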
The organizations handling this well have generally moved toward continuous, automated detection, treating code repositories as part of their external attack surface rather than a separate concern for developers to self-police.
Axur's Code Secret Exposure continuously monitors public repositories, including GitHub and other platforms, for exposed secrets, API keys, tokens, and credentials associated with your organization. When a match is found, it's surfaced as a prioritized, actionable alert rather than raw signal for a team to manually triage. The goal is closing the exposure window before an attacker finds it, not reconstructing what happened afterward.
Code Secret Exposure is one of several capabilities within Axur's Data Leakage suite. The suite covers corporate and customer credential exposure, infostealer-harvested credentials, sensitive data leaks, and credit card exposure, in addition to secrets in public repositories. Each use case represents a different vector through which your organization's data can surface in places it shouldn't.
If your organization uses AI coding assistants, and most do at this point, there are a few concrete places to focus attention.
Start with visibility. Know whether any of your organization's API keys, tokens, or service credentials are currently exposed in a public repository, and make sure that monitoring covers not just current file states but commit histories, forks, and gists, where secrets often persist long after a developer believes they've been removed.
Then make sure findings are actionable. Exposure alerts are only useful if they're prioritized and tied to a clear remediation path. A long queue of undifferentiated findings doesn't move the needle; it just adds to the workload.
These are solvable problems. The challenge is solving them at the scale modern development actually runs at.
The code surface is growing faster than any team can manually supervise. Building visibility into it now, before exposure becomes incident, is the most practical step available.
Axur's Code Secret Exposure is part of the Data Leakage suite. Learn more or run a free threat scan.
Sources: GitHub Octoverse 2025 (github.blog); CodeRabbit December 2025 Report; Georgia Tech Systems Software and Security Lab, March 2026 CVE disclosures.