Wiz Blog
Rami McCarthy
October 15, 2025
Wiz Research uncovered 500+ leaked secrets in VSCode and Open VSX extensions, exposing 150K installs to risk. Learn what happened and how it was fixed.
Wiz Research identified a pattern of secret leakage by publishers of VSCode IDE extensions. This occurred across both the VSCode and Open VSX marketplaces, the latter of which is used by AI-powered VSCode forks like Cursor and Windsurf. Critically, in over a hundred cases this included leakage of access tokens granting the ability to update the extension itself. By default, VSCode auto-updates extensions as new versions become available, so a leaked VSCode Marketplace or Open VSX PAT lets an attacker push a malicious extension update directly to the entire install base. An attacker who discovered this issue could have distributed malware to a cumulative install base of 150,000.
Each leaked secret is the result of publisher error. However, after reporting this issue via Microsoft's Security Response Center (MSRC), Wiz has been collaborating with Microsoft on platform-level improvements that provide guardrails against future secret leakage in the VSCode Marketplace. Together, we've also launched a notification campaign to alert impacted publishers and help them address these exposures.
Discovering a massive secrets leak
In February, attackers began attempting to introduce malware to the VSCode Marketplace. Our initial goal was to identify additional malicious extensions, investigate them, and report them to the Marketplace for removal. While we did identify several interesting malicious extensions, we stumbled on something much more impactful: widespread secret leakage in extension packages.
VSCode extensions are distributed as .vsix files, which can be unzipped and inspected. However, we found that publishers often failed to consider that everything in the package is publicly available, or failed to fully strip hardcoded secrets from their extensions.
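Verifying this yourself takes only a few lines, since a .vsix is an ordinary ZIP archive. The following minimal sketch (plain Python; the filename is a placeholder for illustration) lists everything a published package exposes:

```python
import zipfile

# A .vsix extension package is a standard ZIP archive, so every bundled
# file is publicly readable once the extension is published.
# "my-extension.vsix" is a placeholder path for illustration.
with zipfile.ZipFile("my-extension.vsix") as vsix:
    for info in vsix.infolist():
        print(f"{info.file_size:>10}  {info.filename}")
```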
In total, we found over 550 validated secrets, distributed across more than 500 extensions from hundreds of distinct publishers. Across the 67 distinct types of secrets we found, there were a few notable categories:
AI provider secrets (OpenAI, Gemini, Anthropic, XAI, DeepSeek, HuggingFace, Perplexity)
High-risk platform secrets (AWS, GitHub, Stripe, Auth0, GCP)
Database secrets (MongoDB, Postgres, Supabase)
From themes to threats
The most interesting and globally impactful secrets are the access tokens that grant the ability to update the extension. For the VSCode Marketplace, these are Azure DevOps Personal Access Tokens. The Open VSX Marketplace uses open-vsx.org Access Tokens.
Over one hundred valid leaked VSCode Marketplace PATs were identified within VSCode Marketplace extensions. Together, they represent an install base of over 85,000 extension installs.
Over thirty leaked Open VSX Access Tokens were identified within either VSCode Marketplace or Open VSX extensions. Together, they represent an install base of over 100,000 extension installs.
Much of this massive vulnerable install base comes from themes. This is interesting because themes are generally viewed as safer than other extensions, given that they don't carry any code. However, they still increase your attack surface: there is no technical control preventing a theme from bundling malware.
Another interesting lens on these leaked tokens involves the public distribution of company-internal or vendor-specific extensions. If you browse the marketplace, you'll notice extensions with low install counts that are specifically designed to support a single company's engineers or customers. Internal extensions should not be distributed publicly, but often are for convenience. In one case, we found a VSCode Marketplace PAT that would allow us to push targeted malware to the workforce of a Chinese megacorp with a $30 billion market cap. Vendor-specific extensions are common, and allow for interesting targeting opportunities if compromised. For example, one at-risk extension belonged to a Russian construction technology company.
Now how did that get there?
Whenever we discover a new dataset of leaked secrets, we attempt to identify patterns that might indicate the root cause(s) and potential mitigations. In this case, the largest contributor to secret leakage was the bundling of hidden files, also known as dotfiles. Bundled .env files were especially prominent, although hardcoded credentials in extension source code were also common.
Over the course of the year, we saw an increase in secrets leaking via AI-related configuration files, including config.json, mcp.json, and .cursorrules. Other common sources included build configuration (e.g., package.json) and documentation (e.g., README.md).
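As a rough illustration of how a publisher might catch these files before packaging, the sketch below walks an extension's source tree and flags the file names described above. The directory path and the name list are illustrative assumptions, not a complete ruleset:

```python
from pathlib import Path

# Flag files that commonly carry secrets before they get bundled into a
# .vsix: dotfiles such as .env, plus AI-related configuration files.
# The name list is a small illustrative sample, not exhaustive.
RISKY_NAMES = {".env", "mcp.json", ".cursorrules", "config.json"}

for path in Path("./my-extension").rglob("*"):
    if path.is_file() and (path.name in RISKY_NAMES or path.name.startswith(".env")):
        print(f"review before publishing: {path}")
```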
Hardening and Remediation
Discovering this critical issue was one thing; getting it fixed was another. We've spent the past six months working with Microsoft to resolve the issue centrally, ensuring the gap was closed and the issue disclosed responsibly.
The response to this issue took multiple forms.
Notification: Wiz sent targeted notifications for the highest-risk disclosed secrets throughout this process. Microsoft further made several rounds of notifications to the impacted extension publishers reported by Wiz and asked them to take action. Every leaked Visual Studio Marketplace PAT was revoked. For other secrets, Microsoft communicated with publishers about their exposure and provided appropriate guidance.
Prevention:
Microsoft integrated secret scanning into the publishing pipeline and now blocks extensions containing verified secrets, notifying extension owners when secrets are detected. See their announcement, Upcoming Security Enhancement: Secret Detection for Extensions, and the follow-up, Secret Prevention for Extensions: Now in Blocking Mode.
Open VSX is adding an identifiable prefix (ovsxp_) to its tokens, and Microsoft's secret scanning of the VSCode Marketplace covers Open VSX tokens.
Mitigation: Having prevented further introduction of secrets, Microsoft scanned all existing extensions for embedded secrets and will work with extension owners to ensure they are remediated by publishing a new, sanitized version of each affected extension.
In June, Microsoft shared their progress and roadmap for VSCode Marketplace security in Security and Trust in Visual Studio Marketplace.
On the publisher side, VSCode extension publishers should scan for secrets prior to publishing.
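A minimal version of such a pre-publish scan might grep the source tree for a few well-known token shapes, as sketched below. The patterns are a small sample (the AWS AKIA and GitHub ghp_ prefixes are publicly documented formats); a real pipeline should rely on a dedicated secret scanning tool rather than this illustration:

```python
import re
from pathlib import Path

# Rough pre-publish secret scan: search files for a few well-known token
# shapes. Patterns are illustrative, not a complete ruleset.
PATTERNS = {
    "AWS access key ID": re.compile(r"AKIA[0-9A-Z]{16}"),
    "GitHub classic PAT": re.compile(r"ghp_[A-Za-z0-9]{36}"),
    "generic assignment": re.compile(
        r"(?i)(api[_-]?key|secret|token)\s*[:=]\s*['\"][^'\"]{16,}"
    ),
}

for path in Path("./my-extension").rglob("*"):
    if not path.is_file():
        continue
    text = path.read_text(errors="ignore")
    for label, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            print(f"{path}: possible {label}: {match.group()[:12]}...")
```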
Guidance for users and administrators
For VSCode users:
Limit the number of installed extensions. Each extension expands your attack surface, which should be weighed against the benefit it provides.
Review extension trust criteria. Consider installation prevalence, reviews, extension history, and publisher reputation, among other metadata, prior to adoption.
Consider auto-update tradeoffs. Auto-updating extensions ensures you consume security updates, but introduces the risk of a compromised extension pushing malware to your machine.
For corporate security teams:
Develop an IDE extension inventory so you can respond to reports of malicious extensions.
Consider a centralized allowlist for VSCode extensions.
Consider sourcing extensions from the VSCode Marketplace, which currently has more rigorous review and controls than the Open VSX Marketplace.
Guidance for Platforms on Hardening Secrets
Throughout this process, we observed wide diversity in secret formatting practices, and the downstream impact that can have on security. We want to take this opportunity to highlight the following practices that platforms can implement for the secrets they issue:
Expiration: defaulting to a reasonable secret lifetime shrinks the exploitation window for leaked secrets. In this research, for example, we observed a significant volume of VSCode PATs leaked in 2023 that had already expired automatically, while in several cases Open VSX PATs leaked in the same locations were still valid. This demonstrates the benefit of expiration.
Identifiable structure: GitHub and Microsoft have long advocated structuring secrets for easier identification and protection. Identifiable prefixes, checksums, or the full Common Annotated Security Key (CASK) standard all offer an advantage to defenders; a combined sketch of expiration and identifiable prefixes follows this list. Our results will over-represent well-structured secrets, but the remaining risk post-disclosure will predominantly come from secrets that lack easily detectable structure.
GitHub Advanced Secret Scanning: Platforms should strongly consider enrolling in the Secret Scanning Partner Program. As shown in our past research, GitHub can be home to a large volume of secrets. In this project, we saw that a number of secrets leaked in VSCode extensions were also leaked on GitHub. For secrets supported by Advanced Secret Scanning, that meant publishers had already been notified of the risk automatically.
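As promised above, here is a combined sketch of the first two practices from the issuing platform's side: mint tokens with an identifiable prefix and a default lifetime. The ovsxp_ prefix matches the one Open VSX is adopting, but the 90-day lifetime, token length, and helper names are illustrative assumptions, and a production scheme might add a checksum per CASK:

```python
import re
import secrets
from datetime import datetime, timedelta, timezone

# Mint tokens with an identifiable prefix and a default expiry.
# The prefix mirrors Open VSX's ovsxp_; the lifetime and length are
# illustrative assumptions, not the real Open VSX token format.
PREFIX = "ovsxp_"
DEFAULT_LIFETIME = timedelta(days=90)

def mint_token() -> tuple[str, datetime]:
    token = PREFIX + secrets.token_urlsafe(32)  # 43 url-safe characters
    expires_at = datetime.now(timezone.utc) + DEFAULT_LIFETIME
    return token, expires_at

# The fixed prefix makes leaked tokens trivially detectable by scanners:
TOKEN_RE = re.compile(r"ovsxp_[A-Za-z0-9_-]{43}")

token, expires_at = mint_token()
assert TOKEN_RE.fullmatch(token)
print(f"token expires {expires_at:%Y-%m-%d}")
```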
Takeaways & Timeline
We are relieved to have found, responsibly disclosed, and helped comprehensively resolve this risk.
The issue highlights the continued risks of extensions and plugins, and of supply chain security in general. It further validates the impression that any package repository carries a high risk of mass secret leakage. It also reflects our finding that AI secrets are a large part of the modern secret leakage landscape, and hints at the role vibe coding may play in that problem.
Finally, our work with Microsoft highlights the role that responsible platforms can play in protecting the ecosystem. We are grateful to Microsoft for the partnership and for working with us to protect customers. Without their willingness to lean in, it would have been impossible to scale disclosure and remediation.
For more documentation on VSCode Extension security, please visit:
Extension runtime security
Publishing Extensions
Walkthrough: Publish a Visual Studio extension
Timeline
March 30th, 2025: Wiz Research reports this issue to MSRC.
April 4th, 2025: Wiz reports initial batch of 250 leaked secrets.
April 25th, 2025: MSRC completes notification of impacted third-parties who had leaked reported secrets.
May 1st, 2025: MSRC marks the report Ineligible for Bounty, and closes the case as Complete.
May 2nd, 2025: Wiz notes potential negative impact of disclosure without additional controls in place, and requests information on platform level improvements.
May 13th, 2025: MSRC re-opens the case, and starts “working on a plan and a timeline for preventative measures”.
June 11th, 2025: Microsoft publishes Security and Trust in Visual Studio Marketplace.
July 10th, 2025: MSRC shares plans for remediation, and requests a late-September disclosure timeline.
Aug 12th, 2025: MSRC and Wiz Research meet, and expand on remediation plans. Wiz identifies and highlights VSCode Marketplace PAT detection gap in secrets scanning. VSCode Marketplace team announces Secret Detection for Extensions.
Aug 27th, 2025: MSRC sets September 25th as the disclosure date.
Sep 18th, 2025: MSRC requests a delay in disclosure due to a performance issue in an implemented hardening measure.
Sep 23rd, 2025: MSRC suggests an October 15th, 2025 disclosure date.
bbc.com
Joe Tidy, Cyber correspondent, BBC World Service
Prepare to switch to offline systems in the event of a cyber-attack, firms are being advised.
People should plan for potential cyber-attacks by going back to pen and paper, according to the latest advice.
The government has written to chief executives across the country strongly recommending that they should have physical copies of their plans at the ready as a precaution.
A recent spate of hacks has highlighted the chaos that can ensue when hackers take computer systems down.
The warning comes as the National Cyber Security Centre (NCSC) reported an increase in nationally significant attacks this year.
Criminal hacks on Marks and Spencer, The Co-op and Jaguar Land Rover have led to empty shelves and production lines being halted this year as the companies struggled without their computer systems.
Organisations need to "have a plan for how they would continue to operate without their IT, (and rebuild that IT at pace), were an attack to get through," said Richard Horne, chief executive of the NCSC.
Firms are being urged to look beyond cyber-security controls toward a strategy known as "resilience engineering", which focuses on building systems that can anticipate, absorb, recover from, and adapt to attacks.
Plans should be stored in paper form or offline, the agency suggests, and should include details of how teams will communicate without work email, along with other analogue workarounds.
These types of cyber-attack contingency plans are not new, but it's notable that the UK's cyber authority is putting the advice prominently in its annual review.
Although the total number of hacks that the NCSC dealt with in the first nine months of this year was, at 429, roughly the same as for a similar period last year, there was an increase in hacks with a bigger impact.
The number of "nationally significant" incidents represented nearly half, or 204, of all incidents. Last year only 89 were in that category.
A nationally significant incident covers cyber-attacks in the three highest categories of the six-level NCSC and UK law enforcement categorisation model:
Category 1: National cyber-emergency.
Category 2: Highly significant incident.
Category 3: Significant incident.
Category 4: Substantial incident.
Category 5: Moderate incident.
Category 6: Localised incident.
Amongst this year's incidents, 4% (18) were in the second-highest category, "highly significant".
This marks a 50% increase in such incidents, and the third consecutive year that figure has risen.
The NCSC would not give details on which attacks, either public or undisclosed, fall into which category.
But, as a benchmark, it is understood that the wave of attacks on UK retailers in the spring, which affected Marks and Spencer, The Co-op and Harrods, would be classed as a Significant incident.
One of the most serious attacks last year, on a blood testing provider, caused major problems for London hospitals. It resulted in significant clinical disruption and directly contributed to at least one patient death.
The NCSC would not say which category this incident would fall into.
The vast majority of attacks are financially motivated, with criminal gangs using ransomware or data extortion to blackmail victims into paying ransoms in Bitcoin.
Whilst most cyber-crime gangs are headquartered in Russia or former Soviet countries, there has been a resurgence in teenage hacking gangs thought to be based in English-speaking countries.
So far this year seven teenagers have been arrested in the UK as part of investigations into major cyber-attacks.
As well as the advice over heightened preparations and collaboration, the government is asking organisations to make better use of the free tools and services offered by the NCSC, for example free cyber-insurance for small businesses that have completed the popular Cyber-Essentials programme.
'Basic protection'
Paul Abbott, whose Northamptonshire transport firm KNP closed after hackers encrypted its operational systems and demanded money in 2023, says it's no longer a case of "if" such incidents will happen, but when.
"We were throwing £120,000 a year at [cyber-security] with insurance and systems and third-party managed systems," Mr Abbott told BBC Radio 5 Live on Tuesday.
He said he now focuses on security, education and contingency - key to which involves planning what is needed to keep a business running in the event of an attack or outage.
"The call for pen and paper might sound old-fashioned, but it's practical," said Graeme Stewart, head of public sector at cyber-security firm Check Point, noting digital systems can be rendered "useless" once targeted by hackers.
"You wouldn't walk onto a building site without a helmet - yet companies still go online without basic protection," he added.
"Cybersecurity needs to be treated with the same seriousness as health and safety: not optional, not an afterthought, but part of everyday working life."
The malicious app required to make the "Pixnapping" attack work requires no permissions.
Android devices are vulnerable to a new attack that can covertly steal two-factor authentication codes, location timelines, and other private data in less than 30 seconds.
The new attack, named Pixnapping by the team of academic researchers who devised it, requires a victim to first install a malicious app on an Android phone or tablet. The app, which requires no system permissions, can then effectively read data that any other installed app displays on the screen. Pixnapping has been demonstrated on Google Pixel phones and the Samsung Galaxy S25, and could likely be adapted to other models with additional work. Google released mitigations last month, but the researchers said a modified version of the attack works even when the update is installed.
Like taking a screenshot
Pixnapping attacks begin with the malicious app invoking Android programming interfaces that cause the authenticator or other targeted apps to send sensitive information to the device screen. The malicious app then runs graphical operations on individual pixels of interest to the attacker. Pixnapping then exploits a side channel that allows the malicious app to map the pixels at those coordinates to letters, numbers, or shapes.
“Anything that is visible when the target app is opened can be stolen by the malicious app using Pixnapping,” the researchers wrote on an informational website. “Chat messages, 2FA codes, email messages, etc. are all vulnerable since they are visible. If an app has secret information that is not visible (e.g., it has a secret key that is stored but never shown on the screen), that information cannot be stolen by Pixnapping.”
The new attack class is reminiscent of GPU.zip, a 2023 attack that allowed malicious websites to read the usernames, passwords, and other sensitive visual data displayed by other websites. It worked by exploiting side channels found in GPUs from all major suppliers. The vulnerabilities that GPU.zip exploited have never been fixed. Instead, the attack was blocked in browsers by limiting their ability to open iframes, an HTML element that allows one website (in the case of GPU.zip, a malicious one) to embed the contents of a site from a different domain.
Pixnapping targets the same side channel as GPU.zip, specifically the precise amount of time it takes for a given frame to be rendered on the screen.
“This allows a malicious app to steal sensitive information displayed by other apps or arbitrary websites, pixel by pixel,” Alan Linghao Wang, lead author of the research paper “Pixnapping: Bringing Pixel Stealing out of the Stone Age,” explained in an interview. “Conceptually, it is as if the malicious app was taking a screenshot of screen contents it should not have access to. Our end-to-end attacks simply measure the rendering time per frame of the graphical operations… to determine whether the pixel was white or non-white.”
Pixnapping in three steps
The attack occurs in three main steps. In the first, the malicious app invokes Android APIs that make calls to the app the attacker wants to snoop on. These calls can also be used to effectively scan an infected device for installed apps of interest. The calls can further cause the targeted app to display specific data it has access to, such as a message thread in a messaging app or a 2FA code for a specific site. This causes the information to be sent to the Android rendering pipeline, the system that composites each app's pixels for display on the screen. The Android-specific mechanisms invoked include activities, intents, and tasks.
In the second step, Pixnapping performs graphical operations on individual pixels that the targeted app sent to the rendering pipeline. These operations select the coordinates of the target pixels the attacker wants to steal and begin checking whether the color at those coordinates is white or non-white, or, more generally, whether the color is c or non-c (for an arbitrary color c).
“Suppose, for example, [the attacker] wants to steal a pixel that is part of the screen region where a 2FA character is known to be rendered by Google Authenticator,” Wang said. “This pixel is either white (if nothing was rendered there) or non-white (if part of a 2FA digit was rendered there). Then, conceptually, the attacker wants to cause some graphical operations whose rendering time is long if the target victim pixel is non-white and short if it is white. The malicious app does this by opening some malicious activities (i.e., windows) in front of the victim app that was opened in Step 1.”
The third step measures the amount of time required at each coordinate. By combining the times for each one, the attack can rebuild the images sent to the rendering pipeline one pixel at a time.
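At their core, the second and third steps form a classic timing side channel: sample a duration many times, average out the noise, and apply a threshold. The toy simulation below (plain Python, not Android code; the millisecond values and noise level are invented for illustration) shows how averaging 16 noisy per-frame timings, matching the 16 samples per pixel the paper mentions for its 2FA attack, can separate a white pixel from a non-white one:

```python
import random
import statistics

# Toy model: frame render time is slightly longer when the hidden pixel
# is non-white. All timing constants are invented for illustration; real
# values depend on the device and the GPU.zip side channel.
WHITE_MS, NONWHITE_MS, NOISE_MS = 16.0, 16.4, 0.5

def render_time(pixel_is_white: bool) -> float:
    base = WHITE_MS if pixel_is_white else NONWHITE_MS
    return random.gauss(base, NOISE_MS)

def leak_pixel(pixel_is_white: bool, samples: int = 16) -> bool:
    """Attacker's guess for one pixel: True means 'white'."""
    avg = statistics.mean(render_time(pixel_is_white) for _ in range(samples))
    return avg < (WHITE_MS + NONWHITE_MS) / 2

# Recover a hidden row of pixels one at a time, as in the third step.
secret_row = [random.random() < 0.5 for _ in range(20)]
guesses = [leak_pixel(p) for p in secret_row]
accuracy = sum(g == s for g, s in zip(guesses, secret_row)) / len(secret_row)
print(f"recovered {accuracy:.0%} of pixels correctly")
```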
As Ars reader hotball put it in the comments below:
Basically the attacker renders something transparent in front of the target app, then using a timing attack exploiting the GPU’s graphical data compression to try finding out the color of the pixels. It’s not something as simple as “give me the pixels of another app showing on the screen right now.” That’s why it takes time and can be too slow to fit within the 30 seconds window of the Google Authenticator app.
In an online interview, paper co-author Ricardo Paccagnella described the attack in more detail:
Step 1: The malicious app invokes a target app to cause some sensitive visual content to be rendered.
Step 2: The malicious app uses Android APIs to “draw over” that visual content and cause a side channel (in our case, GPU.zip) to leak as a function of the color of individual pixels rendered in Step 1 (e.g., activate only if the pixel color is c).
Step 3: The malicious app monitors the side effects of Step 2 to infer, e.g., if the color of those pixels was c or not, one pixel at a time.
Steps 2 and 3 can be implemented differently depending on the side channel that the attacker wants to exploit. In our instantiations on Google and Samsung phones, we exploited the GPU.zip side channel. When using GPU.zip, measuring the rendering time per frame was sufficient to determine if the color of each pixel is c or not. Future instantiations of the attack may use other side channels where controlling memory management and accessing fine-grained timers may be necessary (see Section 3.3 of the paper). Pixnapping would still work then: the attacker would just need to change how Steps 2 and 3 are implemented.
The amount of time required to perform the attack depends on several variables, including how many coordinates need to be measured. In some cases, there’s no hard deadline for obtaining the information the attacker wants to steal. In other cases—such as stealing a 2FA code—every second counts, since each one is valid for only 30 seconds. In the paper, the researchers explained:
To meet the strict 30-second deadline for the attack, we also reduce the number of samples per target pixel to 16 (compared to the 34 or 64 used in earlier attacks) and decrease the idle time between pixel leaks from 1.5 seconds to 70 milliseconds. To ensure that the attacker has the full 30 seconds to leak the 2FA code, our implementation waits for the beginning of a new 30-second global time interval, determined using the system clock.
… We use our end-to-end attack to leak 100 different 2FA codes from Google Authenticator on each of our Google Pixel phones. Our attack correctly recovers the full 6-digit 2FA code in 73%, 53%, 29%, and 53% of the trials on the Pixel 6, 7, 8, and 9, respectively. The average time to recover each 2FA code is 14.3, 25.8, 24.9, and 25.3 seconds for the Pixel 6, Pixel 7, Pixel 8, and Pixel 9, respectively. We are unable to leak 2FA codes within 30 seconds using our implementation on the Samsung Galaxy S25 device due to significant noise. We leave further investigation of how to tune our attack to work on this device to future work.
In an email, a Google representative wrote, “We issued a patch for CVE-2025-48561 in the September Android security bulletin, which partially mitigates this behavior. We are issuing an additional patch for this vulnerability in the December Android security bulletin. We have not seen any evidence of in-the-wild exploitation.”
Pixnapping is useful research in that it demonstrates the limitations of Google's security and privacy assurances that one installed app can't access data belonging to another. The challenges in implementing the attack to steal useful data in real-world scenarios, however, are likely to be significant. In an age when teenagers can steal secrets from Fortune 500 companies simply by asking nicely, more complicated and limited attacks are probably of less value.