Cyberveille – curated by Decio
Intel and AMD trusted enclaves, a foundation for network security, fall to physical attacks https://arstechnica.com/security/2025/09/intel-and-amd-trusted-enclaves-the-backbone-of-network-security-fall-to-physical-attacks/
05/10/2025 18:46:13

Ars Technica, Dan Goodin – Sept. 30, 2025, 22:25

The chipmakers say physical attacks aren’t in the threat model. Many users didn’t get the memo.

In the age of cloud computing, protections baked into chips from Intel, AMD, and others are essential for ensuring confidential data and sensitive operations can’t be viewed or manipulated by attackers who manage to compromise servers running inside a data center. In many cases, these protections—which work by storing certain data and processes inside encrypted enclaves known as TEEs (Trusted Execution Environments)—are essential for safeguarding secrets stored in the cloud by the likes of Signal Messenger and WhatsApp. All major cloud providers recommend that customers use them. Intel calls its protection SGX, and AMD has named it SEV-SNP.

Over the years, researchers have repeatedly broken the security and privacy promises that Intel and AMD have made about their respective protections. On Tuesday, researchers independently published two papers laying out separate attacks that further demonstrate the limitations of SGX and SEV-SNP. One attack, dubbed Battering RAM, defeats both protections and allows attackers to not only view encrypted data but also to actively manipulate it to introduce software backdoors or to corrupt data. A separate attack known as Wiretap is able to passively decrypt sensitive data protected by SGX and remain invisible at all times.

Attacking deterministic encryption
Both attacks use a small piece of hardware, known as an interposer, that sits between CPU silicon and the memory module. Its position allows the interposer to observe data as it passes from one to the other. They exploit both Intel’s and AMD’s use of deterministic encryption, which produces the same ciphertext each time the same plaintext is encrypted with a given key. In SGX and SEV-SNP, that means the same plaintext written to the same memory address always produces the same ciphertext.

Deterministic encryption is well-suited for certain uses, such as full disk encryption, where the data being protected never changes once the thing being protected (in this case, the drive) falls into an attacker’s hands. The same approach is suboptimal for protecting data flowing between a CPU and a memory chip because adversaries can observe the ciphertext each time the plaintext changes, opening the system to replay attacks and other well-known exploit techniques. Probabilistic encryption, by contrast, resists such attacks because the same plaintext can encrypt to a wide range of ciphertexts that are randomly chosen during the encryption process.
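
To make the distinction concrete, here is a minimal Python sketch of the two behaviours (an illustration only, not the actual SGX or SEV-SNP memory-encryption scheme; it assumes the third-party cryptography package, with AES-ECB standing in for a deterministic cipher and AES-GCM with a fresh random nonce standing in for a probabilistic one). The deterministic variant returns identical ciphertext for identical plaintext, so anyone who can watch the memory bus (as an interposer does) learns whenever a value repeats.

```python
# Minimal sketch, not the real SGX/SEV scheme: deterministic encryption
# yields identical ciphertext for identical plaintext; probabilistic
# encryption (fresh random nonce per call) does not.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = os.urandom(16)
block = b"secret 16 bytes."  # exactly one 16-byte AES block

def deterministic_encrypt(data: bytes) -> bytes:
    # AES-ECB as a stand-in for a deterministic cipher: same key + same
    # plaintext always produces the same ciphertext.
    enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    return enc.update(data) + enc.finalize()

def probabilistic_encrypt(data: bytes) -> bytes:
    # AES-GCM with a random nonce: the ciphertext differs on every call.
    nonce = os.urandom(12)
    return nonce + AESGCM(key).encrypt(nonce, data, None)

print(deterministic_encrypt(block) == deterministic_encrypt(block))  # True
print(probabilistic_encrypt(block) == probabilistic_encrypt(block))  # False
```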

“Fundamentally, [the use of deterministic encryption] is a design trade-off,” Jesse De Meulemeester, lead author of the Battering RAM paper, wrote in an online interview. “Intel and AMD opted for deterministic encryption without integrity or freshness to keep encryption scalable (i.e., protect the entire memory range) and reduce overhead. That choice enables low-cost physical attacks like ours. The only way to fix this likely requires hardware changes, e.g., by providing freshness and integrity in the memory encryption.”

Daniel Genkin, one of the researchers behind Wiretap, agreed. “It’s a design choice made by Intel when SGX moved from client machines to server,” he said. “It offers better performance at the expense of security.” Genkin was referring to Intel’s decision about five years ago to discontinue SGX on client processors—where encryption was limited to no more than 256 MB of RAM—and move it to server processors that could encrypt terabytes of RAM. The transition required Intel to revamp the encryption to make it scale for such vast amounts of data.

“The papers are two sides of the same coin,” he added.

While both of Tuesday’s attacks exploit weaknesses related to deterministic encryption, their approaches and findings are distinct, and each comes with its own advantages and disadvantages. Both research teams said they learned of the other’s work only after privately submitting their findings to the chipmakers. The teams then synchronized the publish date for Tuesday. It’s not the first time such a coincidence has occurred. In 2018, multiple research teams independently developed attacks with names including Spectre and Meltdown. Both plucked secrets out of Intel and AMD processors by exploiting their use of a performance enhancement known as speculative execution.

AMD declined to comment on the record, and Intel didn’t respond to questions sent by email. In the past, both chipmakers have said that their respective TEEs are designed to protect against compromises of a piece of software or the operating system itself, including in the kernel. The guarantees, the companies have said, don’t extend to physical attacks such as Battering RAM and Wiretap, which rely on physical interposers that sit between the processor and the memory chips. Despite this limitation, many cloud-based services continue to trust assurances from the TEEs even when they have been compromised through physical attacks (more about that later).

Intel and AMD each published an advisory on Tuesday.

Battering RAM
Battering RAM uses a custom-built analog switch to act as an interposer that reads encrypted data as it passes between protected memory regions in DDR4 memory chips and an Intel or AMD processor. By design, both SGX and SEV-SNP make this ciphertext inaccessible to an adversary. To bypass that protection, the interposer creates memory aliases in which two different memory addresses point to the same location in the memory module.

The Battering-RAM interposer, containing two analog switches (bottom center), is controlled by a microcontroller (left). The switches can dynamically either pass through the command signals to the connected DIMM or connect the respective lines to ground. Credit: De Meulemeester et al.

“This lets the attacker capture a victim's ciphertext and later replay it from an alias,” De Meulemeester explained. “Because Intel's and AMD's memory encryption is deterministic, the replayed ciphertext always decrypts into valid plaintext when the victim reads it.” The PhD researcher at KU Leuven in Belgium continued:

When the CPU writes data to memory, the memory controller encrypts it deterministically, using the plaintext and the address as inputs. The same plaintext written to the same address always produces the same ciphertext. Through the alias, the attacker can't read the victim's secrets directly, but they can capture the victim's ciphertext. Later, by replaying this ciphertext at the same physical location, the victim will decrypt it to a valid, but stale, plaintext.

This replay capability is the primitive on which both our SGX and SEV attacks are built.

In both cases, the adversary installs the interposer, either through a supply-chain attack or physical compromise, and then runs either a virtual machine or an application at a chosen memory location. At the same time, the adversary uses the aliasing to capture the victim's ciphertext. Later, the adversary replays that captured ciphertext, which, because it sits in a memory region the attacker has access to, decrypts back into valid plaintext.
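
The replay primitive can be illustrated with a toy software model (names and values below are invented for the example; the real attack captures and rewrites ciphertext in hardware through the aliased addresses). Memory is modelled as a map from physical address to ciphertext under a deterministic, address-tweaked cipher, so a ciphertext captured earlier and written back to the same address still decrypts to valid, if stale, plaintext:

```python
# Toy model of the replay primitive (illustration only, not Battering RAM's
# actual hardware mechanism): memory encryption is deterministic and keyed by
# (secret key, physical address), so a captured ciphertext replayed at the
# same address later still decrypts to valid, but stale, plaintext.
import hashlib, os

KEY = os.urandom(16)

def keystream(addr: int, length: int) -> bytes:
    # Deterministic per-address keystream: same key + same address -> same bytes.
    return hashlib.sha256(KEY + addr.to_bytes(8, "little")).digest()[:length]

def encrypt(addr: int, plaintext: bytes) -> bytes:
    return bytes(p ^ k for p, k in zip(plaintext, keystream(addr, len(plaintext))))

decrypt = encrypt  # XOR stream cipher: encryption and decryption are identical

memory = {}   # physical address -> ciphertext (what the interposer sees)
ADDR = 0x1000

# Victim writes an old value; the attacker, via the alias, captures the
# ciphertext as it crosses the memory bus.
memory[ADDR] = encrypt(ADDR, b"balance=100")
captured = memory[ADDR]

# Victim later updates the value...
memory[ADDR] = encrypt(ADDR, b"balance=000")

# ...and the attacker replays the captured ciphertext at the same address.
memory[ADDR] = captured

# The victim's next read decrypts cleanly to the stale value.
print(decrypt(ADDR, memory[ADDR]))   # b'balance=100'
```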

Because SGX uses a single memory-encryption key for the entire protected range of RAM, Battering RAM gains the ability to read and write plaintext in these regions. This allows the adversary to extract the processor’s provisioning key and, in the process, break the attestation SGX is supposed to provide to certify its integrity and authenticity to remote parties that connect to it.

AMD processors protected by SEV use a single encryption key to produce all ciphertext on a given virtual machine. This prevents the ciphertext replaying technique used to defeat SGX. Instead, Battering RAM captures and replays the cryptographic elements that are supposed to prove the virtual machine hasn’t been tampered with. By replaying an old attestation report, Battering RAM can load a backdoored virtual machine that still carries the SEV-SNP certification that the VM hasn’t been tampered with.

The key advantage of Battering RAM is that it can be pulled off with less than $50 worth of equipment. It also allows active decryption, meaning encrypted data can be both read and tampered with. In addition, it works against both SGX and SEV-SNP, as long as they run on DDR4 memory modules.

Wiretap
Wiretap, meanwhile, is limited to breaking only SGX working with DDR4, although the researchers say it would likely work against the AMD protections with a modest amount of additional work. Wiretap, however, allows only for passive decryption, which means protected data can be read, but data can’t be written to protected regions of memory. The interposer and the equipment for analyzing the captured data also cost considerably more than Battering RAM’s, at about $500 to $1,000.

Like Battering RAM, Wiretap exploits deterministic encryption, except that Wiretap maps ciphertext to a list of known plaintext values from which the ciphertext is derived. Eventually, the attack recovers enough of these mappings to reconstruct the attestation key.

Genkin explained:

Let’s say you have an encrypted list of words that will be later used to form sentences. You know the list in advance, and you get an encrypted list in the same order (hence you know the mapping between each word and its corresponding encryption). Then, when you encounter an encrypted sentence, you just take the encryption of each word and match it against your list. By going word by word, you can decrypt the entire sentence. In fact, as long as most of the words are in your list, you can probably decrypt the entire conversation eventually. In our case, we build a dictionary between common values occurring within the ECDSA algorithm and their corresponding encryption, and then use this dictionary to recover these values as they appear, allowing us to extract the key.
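
A few lines of Python convey the dictionary idea (again an illustration only; a keyed hash stands in for the deterministic cipher because the eavesdropper never needs to decrypt anything, only to match observed ciphertexts against precomputed ones, and the "words" in the real attack are common intermediate values of the ECDSA signing computation):

```python
# Toy sketch of Wiretap's dictionary idea (illustration only): with
# deterministic encryption, an eavesdropper who knows which plaintext values
# can occur precomputes their ciphertexts and decrypts observed traffic by
# table lookup.
import hashlib

def det_encrypt(value: bytes) -> bytes:
    # Stand-in deterministic cipher: the same input always maps to the same output.
    return hashlib.sha256(b"fixed-demo-key" + value).digest()[:16]

# Values the attacker expects to see inside the enclave's computation.
known_values = [b"word-%d" % i for i in range(1000)]
dictionary = {det_encrypt(v): v for v in known_values}

# Ciphertexts observed on the memory bus by the interposer.
observed = [det_encrypt(b"word-42"), det_encrypt(b"word-7")]

recovered = [dictionary.get(ct, b"<unknown>") for ct in observed]
print(recovered)   # [b'word-42', b'word-7']
```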

The Wiretap researchers went on to show the types of attacks that are possible when an adversary successfully compromises SGX security. As Intel explains, a key benefit of SGX is remote attestation, a process that verifies the authenticity and integrity of VMs or other software running inside the enclave and confirms it hasn’t been tampered with. Once the software passes inspection, the enclave sends the remote party a digitally signed certificate providing the identity of the tested software and a clean bill of health certifying the software is safe.

The enclave then opens an encrypted connection with the remote party to ensure credentials and private data can’t be read or modified during transit. Remote attestation works with the industry standard Elliptic Curve Digital Signature Algorithm, making it easy for all parties to use and trust.

Blockchain services didn’t get the memo
Many cloud-based services rely on TEEs as a foundation for privacy and security within their networks. One such service is Phala, a blockchain provider that allows the drafting and execution of smart contracts. According to the company, computer “state”—meaning system variables, configurations, and other dynamic data an application depends on—is stored and updated only in the enclaves available through SGX, SEV-SNP, and a third trusted enclave available in Arm chips known as TrustZone. This design allows these smart contract elements to update in real time through clusters of “worker nodes”—meaning the computers that host and process smart contracts—with no possibility of any node tampering with or viewing the information during execution.

“The attestation quote signed by Intel serves as the proof of a successful execution,” Phala explained. “It proves that specific code has been run inside an SGX enclave and produces certain output, which implies the confidentiality and the correctness of the execution. The proof can be published and validated by anyone with generic hardware.” Enclaves provided by AMD and Arm work in a similar manner.

The Wiretap researchers created a “testnet,” a local machine for running worker nodes. With possession of the SGX attestation key, the researchers were able to obtain a cluster key that is supposed to prevent individual nodes from reading or modifying contract state. With that, Wiretap was able to fully bypass the protection. In a paper, the researchers wrote:

We first enter our attacker enclave into a cluster and note it is given access to the cluster key. Although the cluster key is not directly distributed to our worker upon joining a cluster, we initiate a transfer of the key from any other node in the cluster. This transfer is completed without on-chain interaction, given our worker is part of the cluster. This cluster key can then be used to decrypt all contract interactions within the cluster. Finally, when our testnet accepted our node’s enclave as a gatekeeper, we directly receive a copy of the master key, which is used to derive all cluster keys and therefore all contract keys, allowing us to decrypt the entire testnet.

The researchers performed similar bypasses against a variety of other blockchain services, including Secret, Crust, and IntegriTEE. After the researchers privately shared the results, the companies took steps to mitigate the attacks.

Both Battering RAM and Wiretap work only against DDR4 memory because the newer DDR5 runs at much higher bus speeds with a multi-cycle transmission protocol. For that reason, neither attack works against a similar Intel protection known as TDX, which works only with DDR5.

As noted earlier, Intel and AMD both exclude physical attacks like Battering RAM and Wiretap from the threat model their TEEs are designed to withstand. The Wiretap researchers showed that despite these warnings, Phala and many other cloud-based services still rely on the enclaves to preserve the security and privacy of their networks. The research also makes clear that the TEE defenses completely break down in the event of an attack targeting the hardware supply chain.

For now, the only feasible solution is for chipmakers to replace deterministic encryption with a stronger form of protection. Given the challenges of making such encryption schemes scale to vast amounts of RAM, it’s not clear when that may happen.

Dan Goodin is Senior Security Editor at Ars Technica, where he oversees coverage of malware, computer espionage, botnets, hardware hacking, encryption, and passwords. In his spare time, he enjoys gardening, cooking, and following the independent music scene. Dan is based in San Francisco. Follow him on Mastodon and Bluesky. Contact him on Signal at DanArs.82.

arstechnica.com EN AMD Intel trusted enclaves CPU physical-attacks chipmakers
Munich Airport Drone Sightings Force Flight Cancellations, Adding To Wave Of European Incidents https://dronexl.co/pt/2025/10/02/munich-airport-drone-sightings-flight-cancellations/
05/10/2025 18:37:31

dronexl.co Haye Kesteloo – October 2, 2025

Drone sightings Thursday evening forced Germany’s Munich airport to suspend operations, cancelling 17 flights and disrupting travel for nearly 3,000 passengers. The incident marks the latest in a concerning series of mysterious drone closures at major European airports—but whether these sightings represent genuine security threats or mass misidentification remains an urgent question.

The pattern echoes both recent suspected hybrid attacks in Scandinavia and last year’s New Jersey drone panic that turned out to be largely misidentified aircraft and celestial objects.

Munich Operations Suspended for Hours
German air traffic control restricted flight operations at Munich airport from 10:18 p.m. local time Thursday after multiple drone sightings, later suspending them entirely. The airport remained closed until 2:59 a.m. Friday (4:59 a.m. local time).

Another 15 arriving flights were diverted to Stuttgart, Nuremberg, Vienna, and Frankfurt. Flight tracking service Flightradar24 confirmed the airport would remain closed until early Friday morning.

The first arriving flight was expected at 5:25 a.m., with the first departure scheduled for 5:50 a.m., according to the airport’s website.

European Airports on Edge After Suspected Russian Incidents
The Munich closure comes just days after a wave of drone incidents shut down multiple airports across Denmark and Norway in late September. Copenhagen Airport closed for nearly four hours on September 22 after two to three large drones were spotted in controlled airspace. Oslo’s Gardermoen Airport also briefly closed that same night.

Danish Prime Minister Mette Frederiksen called those incidents “the most serious attack on Danish critical infrastructure to date” and suggested Russia could be behind the disruption. Danish authorities characterized the activity as a likely hybrid operation intended to unsettle the public and disrupt critical infrastructure.

Several more Danish airports—including Aalborg, Billund, and military bases—experienced similar incidents in the following days. Denmark is now considering whether to invoke NATO’s Article 4, which enables member states to request consultations over security concerns.

Russian President Vladimir Putin joked Thursday that he would not fly drones over Denmark anymore, though Moscow has denied responsibility for the incidents. Denmark has stopped short of saying definitively who is responsible, but Western officials point to a pattern of Russian drone violations of NATO airspace in Poland, Romania, and Estonia.

The Misidentification Problem: Lessons from New Jersey
While European officials investigate potential hybrid warfare, the incidents raise uncomfortable parallels to the New Jersey drone panic of late 2024—a mass sighting event that turned out to be largely misidentification of routine aircraft and celestial objects.

Between November and December 2024, thousands of “drone” reports flooded in from New Jersey and neighboring states. The phenomenon sparked widespread fear, congressional hearings, and even forced then-President-elect Donald Trump to cancel a trip to his Bedminster golf club.

Federal investigations later revealed the reality: most sightings were manned aircraft operating lawfully. A joint FBI and DHS statement in December noted: “Historically, we have experienced cases of mistaken identity, where reported drones are, in fact, manned aircraft or facilities.”

TSA documents released months later showed that one of the earliest incidents—which forced a medical helicopter carrying a crash victim to divert—involved three commercial aircraft approaching nearby Solberg Airport. “The alignment of the aircraft gave the appearance to observers on the ground of them hovering in formation while they were actually moving directly at the observers,” the analysis found.

Dr. Will Austin, president of Warren County Community College and a national drone expert, reviewed numerous videos during the panic. He found that “many of the reports received involve misidentification of manned aircraft.” Even Jupiter, which was particularly bright in New Jersey’s night sky that season, was mistaken for a hovering drone.

The panic had real consequences: laser-pointing incidents at aircraft spiked to 59 in December 2024—more than the 49 incidents recorded for all of 2023, according to the FAA.

Munich Already on Edge
Munich was already on edge this week after its popular Oktoberfest was temporarily closed due to a bomb threat and explosives were discovered in a residential building in the city’s north.

Whether Thursday’s drone sightings represent genuine security threats similar to the suspected Russian operations in Scandinavia, or misidentified routine aircraft like in New Jersey, remains under investigation. German authorities have not released details about what was observed or where the objects may have originated.

DroneXL’s Take
We’re watching two very different scenarios collide in dangerous ways. The Denmark and Norway incidents appear to involve sophisticated actors—large drones, coordinated timing, professional operation over multiple airports and military installations. Danish intelligence has credible reasons to suspect state-sponsored hybrid warfare, particularly given documented Russian drone violations of NATO airspace in Poland and Romania.

But the New Jersey panic showed how quickly mass hysteria can spiral when people start looking up. Once the narrative took hold, every airplane on approach, every bright planet, every hobbyist quadcopter became a “mystery drone.” Federal investigators reviewed over 5,000 reports and found essentially nothing anomalous—yet 78% of Americans still believed the government was hiding something.

Munich sits uncomfortably between these realities. Is it part of the escalating pattern of suspected Russian hybrid attacks on European infrastructure? Or is it another case of observers misidentifying routine air traffic in an atmosphere of heightened anxiety?

The distinction matters enormously. Real threats require sophisticated counter-drone systems and potentially invoke NATO collective defense mechanisms. False alarms waste resources, create dangerous situations (like those laser-pointing incidents), and damage the credibility of legitimate security concerns.

Airport authorities worldwide need better drone detection technology that can definitively distinguish between aircraft types. Equally important: they need to be transparent about what they’re actually seeing, rather than leaving information vacuums that fill with speculation and fear.

dronexl.co EN 2025 Munich Airport Drone Sightings
Another drone sighting at Munich Airport https://www.munich-airport.com/press-another-drone-sighting-at-munich-airport-35720233
05/10/2025 18:33:33
Munich Airport (www.munich-airport.com)
04.10.2025 (update 5 p.m.)

Following drone sightings late on Thursday and Friday evening and further drone sightings early on Saturday morning, the start of flight operations on 4 October 2025 was delayed. Flight operations were gradually ramped up and stabilised over the course of the afternoon. Passengers were asked to check the status of their flight on their airline's website before travelling to the airport. Of the more than 1,000 take-offs and landings planned for Saturday, airlines cancelled around 170 flights during the day for operational reasons.

As on previous nights, Munich Airport worked with the airlines to immediately provide for passengers in the terminals. These activities will continue on Saturday evening and into Sunday night. Numerous camp beds will again be set up, and blankets, air mattresses, drinks and snacks will be distributed. In addition, some shops, restaurants and a pharmacy in the public area will extend their opening hours and remain open throughout the night. In addition to numerous employees of the airport, airlines and service providers, numerous volunteers are also on duty.

When a suspected drone sighting is reported, the safety of travellers is the top priority. Reporting chains between air traffic control, the airport and police authorities have been established for years. It is important to emphasise that the detection of and defence against drones are sovereign tasks and are the responsibility of the federal and state police.

munich-airport.com EN 2025 Munich Airport drone sighting Germany
Press: Drone sightings at Munich Airport https://www.munich-airport.com/press-drone-sighting-at-munich-airport-35709068
05/10/2025 18:31:39

Munich Airport (www.munich-airport.com)
October 3, 2025 (Update)

On Thursday evening (October 2), several drones were sighted in the vicinity of and on the grounds of Munich Airport. The first reports were received at around 8:30 p.m. Initially, areas around the airport, including Freising and Erding, were affected.

The state police immediately launched extensive search operations with a large number of officers in the vicinity of the airport. At the same time, the federal police immediately carried out surveillance and search operations on the airport grounds. However, it has not yet been possible to identify the perpetrator.

At around 9:05 p.m., drones were reported near the airport fence. At around 10:10 p.m., the first sighting was made on the airport grounds. As a result, flight operations were gradually suspended at 10:18 p.m. for safety reasons. The preventive closure affected both runways from 10:35 p.m. onwards. The sightings ended around midnight. According to the airport operator, there were 17 flight cancellations and 15 diversions by that time. Helicopters from the federal police and the Bavarian state police were also deployed to monitor the airspace and conduct searches.

Munich Airport, in cooperation with the airlines, immediately took care of the passengers in the terminals. Camp beds were set up, and blankets, drinks, and snacks were provided. In addition, 15 arriving flights were diverted to Stuttgart, Nuremberg, Vienna, and Frankfurt. Flight operations resumed as normal today (Friday, October 3).

Responsibilities and cooperation

Within the scope of their respective tasks, the German Air Traffic Control (DFS), the state aviation security authorities, the state police forces, and the federal police are responsible for the detection and defense against drones at commercial airports.

The measures are carried out in close coordination between all parties involved and the airport operator on the basis of jointly developed emergency plans. The local state police force is responsible for preventive policing in the vicinity of the airport, while the federal police is responsible for policing on the airport grounds. Criminal prosecution is the responsibility of the state police.

Note: Please understand that for tactical reasons, the security authorities are unable to provide any further information on the systems and measures used. Further investigations will be conducted by the Bavarian police, as they have jurisdiction in this matter.

munich-airport.com EN 2025 Airport drones Germany Air Traffic sightings
Hacking group claims theft of 1 billion records from Salesforce customer databases | TechCrunch https://techcrunch.com/2025/10/03/hacking-group-claims-theft-of-1-billion-records-from-salesforce-customer-databases/
05/10/2025 18:23:59

techcrunch.com – Lorenzo Franceschi-Bicchierai, Zack Whittaker
6:17 AM PDT · October 3, 2025

The hacking group claims to have stolen about a billion records from companies, including FedEx, Qantas, and TransUnion, who store their customer and company data in Salesforce.

A notorious predominantly English-speaking hacking group has launched a website to extort its victims, threatening to release about a billion records stolen from companies who store their customers’ data in cloud databases hosted by Salesforce.

The loosely organized group, which has been known as Lapsus$, Scattered Spider, and ShinyHunters, has published a dedicated data leak site on the dark web, called Scattered LAPSUS$ Hunters.

The website, first spotted by threat intelligence researchers on Friday and seen by TechCrunch, aims to pressure victims into paying the hackers to avoid having their stolen data published online.

“Contact us to regain control on data governance and prevent public disclosure of your data,” reads the site. “Do not be the next headline. All communications demand strict verification and will be handled with discretion.”

Over the last few weeks, the ShinyHunters gang allegedly hacked dozens of high-profile companies by breaking into their cloud-based databases hosted by Salesforce.

Insurance giant Allianz Life, Google, fashion conglomerate Kering, the airline Qantas, carmaking giant Stellantis, credit bureau TransUnion, and the employee management platform Workday, among several others, have confirmed their data was stolen in these mass hacks.

The hackers’ leak site lists several alleged victims, including FedEx, Hulu (owned by Disney), and Toyota Motors, none of which responded to a request for comment on Friday.

It’s not clear if the companies known to have been hacked but not listed on the hacking group’s leak site have paid a ransom to the hackers to prevent their data from being published. When reached by TechCrunch, a representative from ShinyHunters said, “there are numerous other companies that have not been listed,” but declined to say why.

At the top of the site, the hackers mention Salesforce and demand that the company negotiate a ransom, threatening that otherwise “all your customers [sic] data will be leaked.” The tone of the message suggests that Salesforce has not yet engaged with the hackers.

Salesforce spokesperson Nicole Aranda provided a link to the company’s statement, which notes that the company is “aware of recent extortion attempts by threat actors.”

“Our findings indicate these attempts relate to past or unsubstantiated incidents, and we remain engaged with affected customers to provide support,” the statement reads. “At this time, there is no indication that the Salesforce platform has been compromised, nor is this activity related to any known vulnerability in our technology.”

Aranda declined to comment further.

For weeks, security researchers have speculated that the group, which has historically eschewed a public presence online, was planning to publish a data leak website to extort its victims.

Historically, such websites have been associated with foreign, often Russian-speaking, ransomware gangs. In the last few years, these organized cybercrime groups have evolved from stealing and encrypting their victims’ data, and then privately asking for a ransom, to simply threatening to publish the stolen data online unless they get paid.

techcrunch.com EN 2025 Qantas Salesforce ScatteredSpider leak-site
GreyNoise detects 500% surge in scans targeting Palo Alto Networks portals https://securityaffairs.com/182939/hacking/greynoise-detects-500-surge-in-scans-targeting-palo-alto-networks-portals.html
04/10/2025 23:15:51

securityaffairs.com
October 04, 2025
Pierluigi Paganini

GreyNoise saw a 500% spike in scans on Palo Alto Networks login portals on Oct. 3, 2025, the highest in three months.
Cybersecurity firm GreyNoise reported a 500% surge in scans targeting Palo Alto Networks login portals on October 3, 2025, marking the highest activity in three months.

On October 3, the researchers observed more than 1,285 IPs scanning Palo Alto portals, up from the usual baseline of about 200. The experts reported that 93% of the IPs were classified as suspicious and 7% as malicious.
Most originated from the U.S., with smaller clusters in the U.K., Netherlands, Canada, and Russia.

GreyNoise described the traffic as targeted and structured, aimed at Palo Alto login portals and split across distinct scanning clusters.

The scans targeted emulated Palo Alto profiles, focusing mainly on U.S. and Pakistan systems, indicating coordinated, targeted reconnaissance.

GreyNoise found that recent Palo Alto scanning mirrors Cisco ASA activity, showing regional clustering and shared TLS fingerprints linked to the Netherlands infrastructure. Both used similar tools, suggesting possible shared infrastructure or operators. The overlap follows a Cisco ASA scanning surge preceding the disclosure of two zero-day vulnerabilities.

“Both Cisco ASA and Palo Alto login scanning traffic in the past 48 hours share a dominant TLS fingerprint tied to infrastructure in the Netherlands. This comes after GreyNoise initially reported an ASA scanning surge before Cisco’s disclosure of two ASA zero-days,” reads the report published by GreyNoise. “In addition to a possible connection to ongoing Cisco ASA scanning, GreyNoise identified concurrent surges across remote access services. While suspicious, we are unsure if this activity is related.”

GreyNoise noted in July that spikes in Palo Alto scans have sometimes preceded new flaws within six weeks; the experts are monitoring whether the latest surge signals another disclosure.
“GreyNoise is developing an enhanced dynamic IP blocklist to help defenders take faster action on emerging threats,” the report concludes.

securityaffairs.com EN 2025 GreyNoise PaloAlto Networks portals scan scanning
Update on a Security Incident Involving Third-Party Customer Service https://discord.com/press-releases/update-on-security-incident-involving-third-party-customer-service
04/10/2025 23:13:39

discord.com

Discord
October 3, 2025

At Discord, protecting the privacy and security of our users is a top priority. That’s why it’s important to us that we’re transparent with them about events that impact their personal information.

  • Discord recently discovered an incident where an unauthorized party compromised one of Discord’s third-party customer service providers.
  • This incident impacted a limited number of users who had communicated with our Customer Support or Trust & Safety teams.
  • This unauthorized party did not gain access to Discord directly.
  • No messages or activities were accessed beyond what users may have discussed with Customer Support or Trust & Safety agents.
  • We immediately revoked the customer support provider’s access to our ticketing system and continue to investigate this matter.
  • We’re working closely with law enforcement to investigate this matter.
  • We are in the process of emailing the users impacted.

Recently, we discovered an incident where an unauthorized party compromised one of Discord’s third-party customer service providers. The unauthorized party then gained access to information from a limited number of users who had contacted Discord through our Customer Support and/or Trust & Safety teams.

As soon as we became aware of this attack, we took immediate steps to address the situation. This included revoking the customer support provider’s access to our ticketing system, launching an internal investigation, engaging a leading computer forensics firm to support our investigation and remediation efforts, and engaging law enforcement.

We are in the process of contacting impacted users. If you were impacted, you will receive an email from noreply@discord.com. We will not contact you about this incident via phone – official Discord communications channels are limited to emails from noreply@discord.com.

What happened?
An unauthorized party targeted our third-party customer support services to access user data, with a view to extort a financial ransom from Discord.

What data was involved?
The data that may have been impacted was related to our customer service system. This may include:

  • Name, Discord username, email and other contact details if provided to Discord customer support
  • Limited billing information such as payment type, the last four digits of your credit card, and purchase history if associated with your account
  • IP addresses
  • Messages with our customer service agents
  • Limited corporate data (training materials, internal presentations)
The unauthorized party also gained access to a small number of government‑ID images (e.g., driver’s license, passport) from users who had appealed an age determination. If your ID may have been accessed, that will be specified in the email you receive.

What data was not involved?
  • Full credit card numbers or CCV codes
  • Messages or activity on Discord beyond what users may have discussed with customer support
  • Passwords or authentication data

What are we doing about this?
Discord has and will continue to take all appropriate steps in response to this situation. As standard, we will continue to frequently audit our third-party systems to ensure they meet our security and privacy standards. In addition, we have:

  • Notified relevant data protection authorities.
  • Proactively engaged with law enforcement to investigate this attack.
  • Reviewed our threat detection systems and security controls for third-party support providers.

Taking next steps
Looking ahead, we recommend impacted users stay alert when receiving messages or other communication that may seem suspicious. We have service agents on hand to answer questions and provide additional support.

We take our responsibility to protect your personal data seriously and understand the inconvenience and concern this may cause.

discord.com EN 2025 Discord data-breach incident
'Delightful' Red Hat OpenShift AI bug allows full takeover https://www.theregister.com/2025/10/01/critical_red_hat_openshift_ai_bug/?is=09685296f9ea1fb2ee0963f2febaeb3a55d8fb1eddbb11ed4bd2da49d711f2c7
04/10/2025 15:36:10

theregister.com • The Register
by Jessica Lyons
Wed 1 Oct 2025 // 19:35 UTC

Who wouldn't want root access on cluster master nodes?

A 9.9-out-of-10-severity bug in Red Hat's OpenShift AI service could allow a remote attacker with minimal authentication to steal data, disrupt services, and fully hijack the platform.

"A low-privileged attacker with access to an authenticated account, for example as a data scientist using a standard Jupyter notebook, can escalate their privileges to a full cluster administrator," the IBM subsidiary warned in a security alert published earlier this week.

"This allows for the complete compromise of the cluster's confidentiality, integrity, and availability," the alert continues. "The attacker can steal sensitive data, disrupt all services, and take control of the underlying infrastructure, leading to a total breach of the platform and all applications hosted on it."

Red Hat deemed the vulnerability, tracked as CVE-2025-10725, "important" despite its 9.9 CVSS score, which garners a critical-severity rating from the National Vulnerability Database - and basically any other organization that issues CVEs. This, the vendor explained, is because the flaw requires some level of authentication, albeit minimal, for an attacker to jeopardize the hybrid cloud environment.

Users can mitigate the flaw by removing the ClusterRoleBinding that links the kueue-batch-user-role ClusterRole with the system:authenticated group. "The permission to create jobs should be granted on a more granular, as-needed basis to specific users or groups, adhering to the principle of least privilege," Red Hat added.

Additionally, the vendor suggests not granting broad permissions to system-level groups.
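
For administrators who want to check their own clusters, a hedged sketch using the official kubernetes Python client might look like the following (it assumes a configured kubeconfig with permission to read RBAC objects; because binding names can vary, it matches on the role and the subject rather than on a fixed name):

```python
# Hedged sketch, assuming the official `kubernetes` Python client and a
# working kubeconfig: list ClusterRoleBindings that hand the
# kueue-batch-user-role ClusterRole to the broad system:authenticated group,
# i.e. the kind of binding the advisory says should be removed.
from kubernetes import client, config

config.load_kube_config()
rbac = client.RbacAuthorizationV1Api()

for binding in rbac.list_cluster_role_binding().items:
    if binding.role_ref.name != "kueue-batch-user-role":
        continue
    for subject in binding.subjects or []:
        if subject.kind == "Group" and subject.name == "system:authenticated":
            print(f"over-broad binding found: {binding.metadata.name}")
            # Removing the flagged binding is the mitigation described above, e.g.:
            # rbac.delete_cluster_role_binding(binding.metadata.name)
```

Deleting a flagged binding corresponds to the mitigation Red Hat describes; job-creation rights would then be granted to specific users or groups on an as-needed basis.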

Red Hat didn't immediately respond to The Register's inquiries, including if the CVE has been exploited. We will update this story as soon as we receive any additional information.

Whose role is it anyway?
OpenShift AI is an open platform for building and managing AI applications across hybrid cloud environments.

As noted earlier, it includes a ClusterRole named "kueue-batch-user-role." The security issue here exists because this role is incorrectly bound to the system:authenticated group.

"This grants any authenticated entity, including low-privileged service accounts for user workbenches, the permission to create OpenShift Jobs in any namespace," according to a Bugzilla flaw-tracking report.
One of these low-privileged accounts could abuse this to schedule a malicious job in a privileged namespace, configure it to run with a high-privilege ServiceAccount, exfiltrate that ServiceAccount token, and then "progressively pivot and compromise more powerful accounts, ultimately achieving root access on cluster master nodes and leading to a full cluster takeover," the report said.

“Vulnerabilities offering a path for a low privileged user to fully take over an environment needs to be patched in the form of an incident response cycle, seeking to prove that the environment was not already compromised,” Trey Ford, chief strategy and trust officer at crowdsourced security company Bugcrowd, said in an email to The Register.

In other words: "Assume breach," Ford added.

"The administrators managing OpenShift AI infrastructure need to patch this with a sense of urgency - this is a delightful vulnerability pattern for attackers looking to acquire both access and data," he said. "Security teams must move with a sense of purpose, both verifying that these environments have been patched, then investigating to confirm whether-and-if their clusters have been compromised."

theregister.com EN 2025 vulnerability OpenShift Red Hat CVE-2025-10725
ShinyHunters launches Salesforce data leak site to extort 39 victims https://www.bleepingcomputer.com/news/security/shinyhunters-starts-leaking-data-stolen-in-salesforce-attacks/
03/10/2025 16:51:35

bleepingcomputer.com By Sergiu Gatlan
October 3, 2025

An extortion group has launched a new data leak site to publicly extort dozens of companies impacted by a wave of Salesforce breaches, leaking samples of data stolen in the attacks.

The threat actors responsible for these attacks claim to be part of the ShinyHunters, Scattered Spider, and Lapsus$ groups, collectively referring to themselves as "Scattered Lapsus$ Hunters."

Today, they launched a new data leak site containing 39 companies impacted by the attacks. Each entry includes samples of data allegedly stolen from victims' Salesforce instances, and warns the victims to reach out to "prevent public disclosure" of their data before the October 10 deadline is reached.

The companies being extorted on the data leak site include well-known brands and organizations such as FedEx, Disney/Hulu, Home Depot, Marriott, Google, Cisco, Toyota, Gap, McDonald's, Walgreens, Instacart, Cartier, Adidas, Saks Fifth Avenue, Air France & KLM, TransUnion, HBO MAX, UPS, Chanel, and IKEA.

"All of them have been contacted long ago, they saw the email because I saw them download the samples multiple times. Most of them chose to not disclose and ignore," ShinyHunters told BleepingComputer.

"We highly advise you proceed into the right decision, your organisation can prevent the release of this data, regain control over the situation and all operations remain stable as always. We highly recommend a decision-maker to get involved as we are presenting a clear and mutually beneficial opportunity to resolve this matter," they warned on the leak site.

The threat actors also added a separate entry requesting that Salesforce pay a ransom to prevent all impacted customers' data (approximately 1 billion records containing personal information) from being leaked.

"Should you comply, we will withdraw from any active or pending negotiation indiviually from your customers. Your customers will not be attacked again nor will they face a ransom from us again, should you pay," they added.

The extortion group also threatened the company, stating that it would help law firms pursue civil and commercial lawsuits against Salesforce following the data breaches and warned that the company had also failed to protect customers' data as required by the European General Data Protection Regulation (GDPR).

bleepingcomputer.com EN 2025 Breach Data-Breach Leak Salesforce Scattered-Lapsus$-Hunters ShinyHunters
Security update: Incident related to Red Hat Consulting GitLab instance https://www.redhat.com/en/blog/security-update-incident-related-red-hat-consulting-gitlab-instance?sc_cid=RHCTG0180000354765
03/10/2025 09:57:11

We are writing to provide an update regarding a security incident related to a specific GitLab environment used by our Red Hat Consulting team. Red Hat takes the security and integrity of our systems and the data entrusted to us extremely seriously, and we are addressing this issue with the highest priority.

What happened
We recently detected unauthorized access to a GitLab instance used for internal Red Hat Consulting collaboration in select engagements. Upon detection, we promptly launched a thorough investigation, removed the unauthorized party’s access, isolated the instance, and contacted the appropriate authorities. Our investigation, which is ongoing, found that an unauthorized third party had accessed and copied some data from this instance.

We have now implemented additional hardening measures designed to help prevent further access and contain the issue.

Scope and impact on customers
We understand you may have questions about whether this incident affects you. Based on our investigation to date, we can share:

  • Impact on Red Hat products and supply chain: At this time, we have no reason to believe this security issue impacts any of our other Red Hat services or products, including our software supply chain or downloading Red Hat software from official channels.
  • Consulting customers: If you are a Red Hat Consulting customer, our analysis is ongoing. The compromised GitLab instance housed consulting engagement data, which may include, for example, Red Hat’s project specifications, example code snippets, and internal communications about consulting services. This GitLab instance typically does not house sensitive personal data. While our analysis remains ongoing, we have not identified sensitive personal data within the impacted data at this time. We will notify you directly if we believe you have been impacted.
  • Other customers: If you are not a Red Hat Consulting customer, there is currently no evidence that you have been affected by this incident.

For clarity, this incident is unrelated to a Red Hat OpenShift AI vulnerability (CVE-2025-10725) that was announced yesterday.

Our next steps
We are engaging directly with any customers who may be impacted.

Thank you for your continued trust in Red Hat. We appreciate your patience as we continue our investigation.

redhat.com EN 2025 GitLab Consulting TheCrimsonCollective incident data-breach
Out-of-bounds read & write in RFC 3211 KEK Unwrap (CVE-2025-9230) https://openssl-library.org/news/secadv/20250930.txt
02/10/2025 21:32:46

OpenSSL Security Advisory [30th September 2025]
===============================================
https://openssl-library.org/news/secadv/20250930.txt

Out-of-bounds read & write in RFC 3211 KEK Unwrap (CVE-2025-9230)
=================================================================

Severity: Moderate

Issue summary: An application trying to decrypt CMS messages encrypted using
password based encryption can trigger an out-of-bounds read and write.

Impact summary: This out-of-bounds read may trigger a crash which leads to
Denial of Service for an application. The out-of-bounds write can cause
a memory corruption which can have various consequences including
a Denial of Service or Execution of attacker-supplied code.

Although the consequences of a successful exploit of this vulnerability
could be severe, the probability that the attacker would be able to
perform it is low. Besides, password based (PWRI) encryption support in CMS
messages is very rarely used. For that reason the issue was assessed as
Moderate severity according to our Security Policy.

The FIPS modules in 3.5, 3.4, 3.3, 3.2, 3.1 and 3.0 are not affected by this
issue, as the CMS implementation is outside the OpenSSL FIPS module
boundary.

OpenSSL 3.5, 3.4, 3.3, 3.2, 3.0, 1.1.1 and 1.0.2 are vulnerable to this issue.

OpenSSL 3.5 users should upgrade to OpenSSL 3.5.4.

OpenSSL 3.4 users should upgrade to OpenSSL 3.4.3.

OpenSSL 3.3 users should upgrade to OpenSSL 3.3.5.

OpenSSL 3.2 users should upgrade to OpenSSL 3.2.6.

OpenSSL 3.0 users should upgrade to OpenSSL 3.0.18.

OpenSSL 1.1.1 users should upgrade to OpenSSL 1.1.1zd.
(premium support customers only)

OpenSSL 1.0.2 users should upgrade to OpenSSL 1.0.2zm.
(premium support customers only)

This issue was reported on 9th August 2025 by Stanislav Fort (Aisle Research).
The fix was developed by Stanislav Fort (Aisle Research) and Viktor Dukhovni.

Timing side-channel in SM2 algorithm on 64 bit ARM (CVE-2025-9231)
==================================================================

Severity: Moderate

Issue summary: A timing side-channel which could potentially allow remote
recovery of the private key exists in the SM2 algorithm implementation on 64 bit
ARM platforms.

Impact summary: A timing side-channel in SM2 signature computations on 64 bit
ARM platforms could allow recovering the private key by an attacker.

While remote key recovery over a network was not attempted by the reporter,
timing measurements revealed a timing signal which may allow such an attack.

OpenSSL does not directly support certificates with SM2 keys in TLS, and so
this CVE is not relevant in most TLS contexts. However, given that it is
possible to add support for such certificates via a custom provider, coupled
with the fact that in such a custom provider context the private key may be
recoverable via remote timing measurements, we consider this to be a Moderate
severity issue.

The FIPS modules in 3.5, 3.4, 3.3, 3.2, 3.1 and 3.0 are not affected by this
issue, as SM2 is not an approved algorithm.

OpenSSL 3.1, 3.0, 1.1.1 and 1.0.2 are not vulnerable to this issue.

OpenSSL 3.5, 3.4, 3.3, and 3.2 are vulnerable to this issue.

OpenSSL 3.5 users should upgrade to OpenSSL 3.5.4.

OpenSSL 3.4 users should upgrade to OpenSSL 3.4.3.

OpenSSL 3.3 users should upgrade to OpenSSL 3.3.5.

OpenSSL 3.2 users should upgrade to OpenSSL 3.2.6.

This issue was reported on 18th August 2025 by Stanislav Fort (Aisle Research).
The fix was developed by Stanislav Fort.

Out-of-bounds read in HTTP client no_proxy handling (CVE-2025-9232)
===================================================================

Severity: Low

Issue summary: An application using the OpenSSL HTTP client API functions may
trigger an out-of-bounds read if the "no_proxy" environment variable is set and
the host portion of the authority component of the HTTP URL is an IPv6 address.

Impact summary: An out-of-bounds read can trigger a crash which leads to
Denial of Service for an application.

The OpenSSL HTTP client API functions can be used directly by applications
but they are also used by the OCSP client functions and CMP (Certificate
Management Protocol) client implementation in OpenSSL. However the URLs used
by these implementations are unlikely to be controlled by an attacker.

In this vulnerable code the out of bounds read can only trigger a crash.
Furthermore the vulnerability requires an attacker-controlled URL to be
passed from an application to the OpenSSL function and the user has to have
a "no_proxy" environment variable set. For the aforementioned reasons the
issue was assessed as Low severity.

The vulnerable code was introduced in the following patch releases:
3.0.16, 3.1.8, 3.2.4, 3.3.3, 3.4.0 and 3.5.0.

The FIPS modules in 3.5, 3.4, 3.3, 3.2, 3.1 and 3.0 are not affected by this
issue, as the HTTP client implementation is outside the OpenSSL FIPS module
boundary.

OpenSSL 3.5, 3.4, 3.3, 3.2 and 3.0 are vulnerable to this issue.

OpenSSL 1.1.1 and 1.0.2 are not affected by this issue.

OpenSSL 3.5 users should upgrade to OpenSSL 3.5.4.

OpenSSL 3.4 users should upgrade to OpenSSL 3.4.3.

OpenSSL 3.3 users should upgrade to OpenSSL 3.3.5.

OpenSSL 3.2 users should upgrade to OpenSSL 3.2.6.

OpenSSL 3.0 users should upgrade to OpenSSL 3.0.18.

This issue was reported on 16th August 2025 by Stanislav Fort (Aisle Research).
The fix was developed by Stanislav Fort (Aisle Research).

General Advisory Notes
======================

URL for this Security Advisory:
https://openssl-library.org/news/secadv/20250930.txt

openssl-library.org EN 2025 CVE-2025-9230 OpenSSL vulnerability
Feds Tie ‘Scattered Spider’ Duo to $115M in Ransoms https://krebsonsecurity.com/2025/09/feds-tie-scattered-spider-duo-to-115m-in-ransoms/
02/10/2025 18:43:14

Krebs on Security – krebsonsecurity.com
U.S. prosecutors last week levied criminal hacking charges against 19-year-old U.K. national Thalha Jubair for allegedly being a core member of Scattered Spider, a prolific cybercrime group blamed for extorting at least $115 million in ransom payments from victims. The charges came as Jubair and an alleged co-conspirator appeared in a London court to face accusations of hacking into and extorting several large U.K. retailers, the London transit system, and healthcare providers in the United States.

At a court hearing last week, U.K. prosecutors laid out a litany of charges against Jubair and 18-year-old Owen Flowers, accusing the teens of involvement in an August 2024 cyberattack that crippled Transport for London, the entity responsible for the public transport network in the Greater London area.

krebsonsecurity.com EN 2025 Scattered-Spider Lapsus$ busted UK
Digital Threat Modeling Under Authoritarianism https://www.schneier.com/blog/archives/2025/09/digital-threat-modeling-under-authoritarianism.html
02/10/2025 18:33:00

Schneier on Security - schneier.com/blog/ - Posted on September 26, 2025 at 7:04 AM

Digital Threat Modeling Under Authoritarianism
Today’s world requires us to make complex and nuanced decisions about our digital security. Evaluating when to use a secure messaging app like Signal or WhatsApp, which passwords to store on your smartphone, or what to share on social media requires us to assess risks and make judgments accordingly. Arriving at any conclusion is an exercise in threat modeling.

In security, threat modeling is the process of determining what security measures make sense in your particular situation. It’s a way to think about potential risks, possible defenses, and the costs of both. It’s how experts avoid being distracted by irrelevant risks or overburdened by undue costs.

We threat model all the time. We might decide to walk down one street instead of another, or use an internet VPN when browsing dubious sites. Perhaps we understand the risks in detail, but more likely we are relying on intuition or some trusted authority. But in the U.S. and elsewhere, the average person’s threat model is changing—specifically involving how we protect our personal information. Previously, most concern centered on corporate surveillance; companies like Google and Facebook engaging in digital surveillance to maximize their profit. Increasingly, however, many people are worried about government surveillance and how the government could weaponize personal data.

Since the beginning of this year, the Trump administration’s actions in this area have raised alarm bells: The Department of Government Efficiency (DOGE) took data from federal agencies, Palantir combined disparate streams of government data into a single system, and Immigration and Customs Enforcement (ICE) used social media posts as a reason to deny someone entry into the U.S.

These threats, and others posed by a techno-authoritarian regime, are vastly different from those presented by a corporate monopolistic regime—and different yet again in a society where both are working together. Contending with these new threats requires a different approach to personal digital devices, cloud services, social media, and data in general.

What Data Does the Government Already Have?
For years, most public attention has centered on the risks of tech companies gathering behavioral data. This is an enormous amount of data, generally used to predict and influence consumers’ future behavior—rather than as a means of uncovering our past. Although commercial data is highly intimate—such as knowledge of your precise location over the course of a year, or the contents of every Facebook post you have ever created—it’s not the same thing as tax returns, police records, unemployment insurance applications, or medical history.

The U.S. government holds extensive data about everyone living inside its borders, some of it very sensitive—and there’s not much that can be done about it. This information consists largely of facts that people are legally obligated to tell the government. The IRS has a lot of very sensitive data about personal finances. The Treasury Department has data about any money received from the government. The Office of Personnel Management has an enormous amount of detailed information about government employees—including the very personal form required to get a security clearance. The Census Bureau possesses vast data about everyone living in the U.S., including, for example, a database of real estate ownership in the country. The Department of Defense and the Bureau of Veterans Affairs have data about present and former members of the military, the Department of Homeland Security has travel information, and various agencies possess health records. And so on.

It is safe to assume that the government has—or will soon have—access to all of this government data. This sounds like a tautology, but in the past, the U.S. government largely followed the many laws limiting how those databases were used, especially regarding how they were shared, combined, and correlated. Under the second Trump administration, this no longer seems to be the case.

Augmenting Government Data with Corporate Data
The mechanisms of corporate surveillance haven’t gone away. Computing technology is constantly spying on its users—and that data is being used to influence us. Companies like Google and Meta are vast surveillance machines, and they use that data to fuel advertising. A smartphone is a portable surveillance device, constantly recording things like location and communication. Cars, and many other Internet of Things devices, do the same. Credit card companies, health insurers, internet retailers, and social media sites all have detailed data about you—and there is a vast industry that buys and sells this intimate data.

This isn’t news. What’s different in a techno-authoritarian regime is that this data is also shared with the government, either as a paid service or as demanded by local law. Amazon shares Ring doorbell data with the police. Flock, a company that collects license plate data from cars around the country, shares data with the police as well. And just as Chinese corporations share user data with the government and companies like Verizon shared calling records with the National Security Agency (NSA) after the Sept. 11 terrorist attacks, an authoritarian government will use this data as well.

Personal Targeting Using Data
The government has vast capabilities for targeted surveillance, both technically and legally. If a high-level figure is targeted by name, it is almost certain that the government can access their data. The government will use its investigatory powers to the fullest: It will go through government data, remotely hack phones and computers, spy on communications, and raid a home. It will compel third parties, like banks, cell providers, email providers, cloud storage services, and social media companies, to turn over data. To the extent those companies keep backups, the government will even be able to obtain deleted data.

This data can be used for prosecution—possibly selectively. This has been made evident in recent weeks, as the Trump administration personally targeted perceived enemies for “mortgage fraud.” This was a clear example of weaponization of data. Given all the data the government requires people to divulge, there will be something there to prosecute.

Although alarming, this sort of targeted attack doesn’t scale. As vast as the government’s information is and as powerful as its capabilities are, they are not infinite. They can be deployed against only a limited number of people. And most people will never be that high on the priorities list.

The Risks of Mass Surveillance
Mass surveillance is surveillance without specific targets. For most people, this is where the primary risks lie. Even if we’re not targeted by name, personal data could raise red flags, drawing unwanted scrutiny.

The risks here are twofold. First, mass surveillance could be used to single out people to harass or arrest: when they cross the border, show up at immigration hearings, attend a protest, are stopped by the police for speeding, or just as they’re living their normal lives. Second, mass surveillance could be used to threaten or blackmail. In the first case, the government is using that database to find a plausible excuse for its actions. In the second, it is looking for an actual infraction that it could selectively prosecute—or not.

Mitigating these risks is difficult, because it would require not interacting with either the government or corporations in everyday life—and living in the woods without any electronics isn’t realistic for most of us. Additionally, this strategy protects only future information; it does nothing to protect the information generated in the past. That said, going back and scrubbing social media accounts and cloud storage does have some value. Whether it’s right for you depends on your personal situation.

Opportunistic Use of Data
Beyond data given to third parties—either corporations or the government—there is also data users keep in their possession. This data may be stored on personal devices such as computers and phones or, more likely today, in some cloud service and accessible from those devices. Here, the risks are different: Some authority could confiscate your device and look through it.

This is not just speculative. There are many stories of ICE agents examining people’s phones and computers when they attempt to enter the U.S.: their emails, contact lists, documents, photos, browser history, and social media posts.

There are several different defenses you can deploy, presented from least to most extreme. First, you can scrub devices of potentially incriminating information, either as a matter of course or before entering a higher-risk situation. Second, you could consider deleting—even temporarily—social media and other apps so that someone with access to a device doesn’t get access to those accounts—this includes your contacts list. If a phone is swept up in a government raid, your contacts become their next targets.

Third, you could choose not to carry your device with you at all, opting instead for a burner phone without contacts, email access, and accounts, or go electronics-free entirely. This may sound extreme—and getting it right is hard—but I know many people today who have stripped-down computers and sanitized phones for international travel. At the same time, there are also stories of people being denied entry to the U.S. because they are carrying what is obviously a burner phone—or no phone at all.

Encryption Isn’t a Magic Bullet—But Use It Anyway
Encryption protects your data while it’s not being used, and your devices when they’re turned off. This doesn’t help if a border agent forces you to turn on your phone and computer. And it doesn’t protect metadata, which needs to be unencrypted for the system to function. This metadata can be extremely valuable. For example, Signal, WhatsApp, and iMessage all encrypt the contents of your text messages—the data—but information about who you are texting and when must remain unencrypted.

Also, if the NSA wants access to someone’s phone, it can get it. Encryption is no help against that sort of sophisticated targeted attack. But, again, most of us aren’t that important and even the NSA can target only so many people. What encryption safeguards against is mass surveillance.

I recommend Signal for text messages above all other apps. But if you are in a country where having Signal on a device is in itself incriminating, then use WhatsApp. Signal is better, but everyone has WhatsApp installed on their phones, so it doesn’t raise the same suspicion. Also, it’s a no-brainer to turn on your computer’s built-in encryption: BitLocker for Windows and FileVault for Macs.

On the subject of data and metadata, it’s worth noting that data poisoning doesn’t help nearly as much as you might think. That is, it doesn’t do much good to add hundreds of random strangers to an address book or bogus internet searches to a browser history to hide the real ones. Modern analysis tools can see through all of that.

Shifting Risks of Decentralization
This notion of individual targeting, and the inability of the government to do that at scale, starts to fail as the authoritarian system becomes more decentralized. After all, if repression comes from the top, it affects only senior government officials and people whom those in power personally dislike. If it comes from the bottom, it affects everybody. But decentralization looks much like the events playing out with ICE harassing, detaining, and disappearing people—everyone has to fear it.

This can go much further. Imagine there is a government official assigned to your neighborhood, or your block, or your apartment building. It’s worth that person’s time to scrutinize everybody’s social media posts, email, and chat logs. For anyone in that situation, limiting what you do online is the only defense.

Being Innocent Won’t Protect You
This is vital to understand. Surveillance systems and sorting algorithms make mistakes. This is apparent in the fact that we are routinely served advertisements for products that don’t interest us at all. Those mistakes are relatively harmless—who cares about a poorly targeted ad?—but a similar mistake at an immigration hearing can get someone deported.

An authoritarian government doesn’t care. Mistakes are a feature and not a bug of authoritarian surveillance. If ICE targets only people it can go after legally, then everyone knows whether or not they need to fear ICE. If ICE occasionally makes mistakes by arresting Americans and deporting innocents, then everyone has to fear it. This is by design.

Effective Opposition Requires Being Online
For most people, phones are an essential part of daily life. If you leave yours at home when you attend a protest, you won’t be able to film police violence. Or coordinate with your friends and figure out where to meet. Or use a navigation app to get to the protest in the first place.

Threat modeling is all about trade-offs. Understanding yours depends not only on the technology and its capabilities but also on your personal goals. Are you trying to keep your head down and survive—or get out? Are you wanting to protest legally? Are you doing more, maybe throwing sand into the gears of an authoritarian government, or even engaging in active resistance? The more you are doing, the more technology you need—and the more technology will be used against you. There are no simple answers, only choices.

schneier.com EN 2025 ThreatModeling Authoritarianism
Red Hat confirms security incident after hackers claim GitHub breach https://www.bleepingcomputer.com/news/security/red-hat-confirms-security-incident-after-hackers-claim-github-breach/
02/10/2025 12:06:46

bleepingcomputer.com By Lawrence Abrams
October 2, 2025 02:15 AM

An extortion group calling itself the Crimson Collective claims to have breached Red Hat's private GitHub repositories, stealing nearly 570GB of compressed data across 28,000 internal projects.

This data allegedly includes approximately 800 Customer Engagement Reports (CERs), which can contain sensitive information about a customer's network and platforms.

A CER is a consulting document prepared for clients that often contains infrastructure details, configuration data, authentication tokens, and other information that could be abused to breach customer networks.

Red Hat confirmed that it suffered a security incident related to its consulting business, but would not verify any of the attacker's claims regarding the stolen GitHub repositories and customer CERs.

"Red Hat is aware of reports regarding a security incident related to our consulting business and we have initiated necessary remediation steps," Red Hat told BleepingComputer.

"The security and integrity of our systems and the data entrusted to us are our highest priority. At this time, we have no reason to believe the security issue impacts any of our other Red Hat services or products and are highly confident in the integrity of our software supply chain."

While Red Hat did not respond to any further questions about the breach, the hackers told BleepingComputer that the intrusion occurred approximately two weeks ago.

They allegedly found authentication tokens, full database URIs, and other private information in Red Hat code and CERs, which they claimed to use to gain access to downstream customer infrastructure.
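
For defenders worried about the same failure mode, a quick first step is to sweep repositories and exported documents for credential-shaped strings before an attacker does. The sketch below is a minimal, illustrative example: the regexes, file-size cutoff, and token formats are generic assumptions, not anything specific to Red Hat or this incident, and anything such a sweep finds should be rotated rather than merely noted.

#!/usr/bin/env python3
"""Minimal sweep for credential-shaped strings in a directory tree.

Illustrative sketch only: the patterns below are common token/URI shapes,
not an exhaustive or vendor-specific list.
"""
import re
import sys
from pathlib import Path

PATTERNS = {
    "database URI": re.compile(r"\b(?:postgres(?:ql)?|mysql|mongodb)://[^\s\"']+"),
    "AWS access key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "GitHub token": re.compile(r"\bgh[pousr]_[A-Za-z0-9]{36,}\b"),
    "bearer token": re.compile(r"(?i)\bauthorization:\s*bearer\s+[A-Za-z0-9._-]{20,}"),
}

def scan(root: Path) -> int:
    hits = 0
    for path in root.rglob("*"):
        # Skip directories and very large files to keep the sweep cheap.
        if not path.is_file() or path.stat().st_size > 5_000_000:
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for name, pattern in PATTERNS.items():
            for match in pattern.finditer(text):
                hits += 1
                # Print only a prefix of the match to avoid re-leaking the secret.
                print(f"{path}: possible {name}: {match.group(0)[:40]}...")
    return hits

if __name__ == "__main__":
    root = Path(sys.argv[1]) if len(sys.argv) > 1 else Path(".")
    sys.exit(1 if scan(root) else 0)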

The hacking group also published a complete directory listing of the allegedly stolen GitHub repositories and a list of CERs from 2020 through 2025 on Telegram.

The directory listing of CERs includes a wide range of sectors and well-known organizations such as Bank of America, T-Mobile, AT&T, Fidelity, Kaiser, Mayo Clinic, Walmart, Costco, the U.S. Navy’s Naval Surface Warfare Center, the Federal Aviation Administration, the House of Representatives, and many others.

The hackers stated that they attempted to contact Red Hat with an extortion demand but received no response other than a templated reply instructing them to submit a vulnerability report to their security team.

According to them, the created ticket was repeatedly assigned to additional people, including Red Hat's legal and security staff members.

BleepingComputer sent Red Hat additional questions, and we will update this story if we receive more information.

The same group also claimed responsibility for briefly defacing Nintendo’s topic page last week to include contact information and links to their Telegram channel.

bleepingcomputer.com EN 2025 Crimson-Collective Data-Breach Extortion GitHub Red-Hat Repository
Microsoft’s new Security Store is like an app store for cybersecurity | The Verge https://www.theverge.com/news/788195/microsoft-security-store-launch-copilot-ai-agents
01/10/2025 06:46:48

Cybersecurity workers can also start creating their own Security Copilot AI agents.

Microsoft is launching a Security Store that will be full of security software-as-a-service (SaaS) solutions and AI agents. It’s part of a broader effort to sell Microsoft’s Sentinel security platform to businesses, complete with Microsoft Security Copilot AI agents that can be built by security teams to help tackle the latest threats.

The Microsoft Security Store is a storefront designed for security professionals to buy and deploy SaaS solutions and AI agents from Microsoft’s ecosystem partners. Darktrace, Illumio, Netskope, Performanta, and Tanium are all part of the new store, with solutions covering threat protection, identity and device management, and more.

A lot of the solutions will integrate with Microsoft Defender, Sentinel, Entra, Purview, or Security Copilot, making them quick to onboard for businesses that are fully reliant on Microsoft for their security needs. This should cut down on procurement and onboarding times, too.

Alongside the Security Store, Microsoft is also allowing Security Copilot users to build their own AI agents. Microsoft launched some of its own security AI agents earlier this year, and now security teams can use a tool that’s similar to Copilot Studio to build their own. You simply create an AI agent through a set of prompts and then publish it, all with no code required. These Security Copilot agents will also be available in the Security Store today.

theverge.com EN 2025 Microsoft AI Copilot AI agents SaaS
How China’s Secretive Spy Agency Became a Cyber Powerhouse https://www.nytimes.com/2025/09/28/world/asia/how-chinas-secretive-spy-agency-became-a-cyber-powerhouse.html?smid=nytcore-ios-share&referringSource=articleShare
30/09/2025 11:10:59

nytimes.com
By Chris Buckley and Adam Goldman
Sept. 28, 2025

Fears of U.S. surveillance drove Xi Jinping, China’s leader, to elevate the agency and put it at the center of his cyber ambitions.

American officials were alarmed in 2023 when they discovered that Chinese state-controlled hackers had infiltrated critical U.S. infrastructure with malicious code that could wreck power grids, communications systems and water supplies. The threat was serious enough that William J. Burns, the director of the C.I.A., made a secret trip to Beijing to confront his Chinese counterpart.

He warned China’s minister of state security that there would be “serious consequences” for Beijing if it unleashed the malware. The tone of the meeting, details of which have not been previously reported, was professional and it appeared the message was delivered.

But since that meeting, which was described by two former U.S. officials, China’s intrusions have only escalated. (The former officials spoke on the condition of anonymity because they were not authorized to speak publicly about the sensitive meeting.)

American and European officials say China’s Ministry of State Security, the civilian spy agency often called the M.S.S., in particular, has emerged as the driving force behind China’s most sophisticated cyber operations.

In recent disclosures, officials revealed another immense, yearslong intrusion by hackers who have been collectively called Salt Typhoon, one that may have stolen information about nearly every American and targeted dozens of other countries. Some countries hit by Salt Typhoon warned in an unusual statement that the data stolen could provide Chinese intelligence services with the capability to “identify and track their targets’ communications and movements around the world.”

The attack underscored how the Ministry of State Security has evolved into a formidable cyberespionage agency capable of audacious operations that can evade detection for years, experts said.

For decades, China has used for-hire hackers to break into computer networks and systems. These operatives sometimes mixed espionage with commercial data theft or were sloppy, exposing their presence. In the recent operation by Salt Typhoon, however, intruders linked to the M.S.S. found weaknesses in systems, burrowed into networks, spirited out data, hopped between compromised systems and erased traces of their presence.

“Salt Typhoon shows a highly skilled and strategic side to M.S.S. cyber operations that has been missed with the attention on lower-quality contract hackers,” said Alex Joske, the author of a book on the ministry.

For Washington, the implication of China’s growing capability is clear: In a future conflict, China could put U.S. communications, power and infrastructure at risk.

China’s biggest hacking campaigns have been “strategic operations” intended to intimidate and deter rivals, said Nigel Inkster, a senior adviser for cybersecurity and China at the International Institute for Strategic Studies in London.

“If they succeed in remaining on these networks undiscovered, that potentially gives them a significant advantage in the event of a crisis,” said Mr. Inkster, formerly director of operations and intelligence in the British Secret Intelligence Service, MI6. “If their presence is — as it has been — discovered, it still exercises a very significant deterrent effect; as in, ‘Look what we could do to you if we wanted.’”

The Rise of the M.S.S.
China’s cyber advances reflect decades of investment to try to match, and eventually rival, the U.S. National Security Agency and Britain’s Government Communications Headquarters, or GCHQ.

China’s leaders founded the Ministry of State Security in 1983 mainly to track dissidents and perceived foes of Communist Party rule. The ministry engaged in online espionage but was long overshadowed by the Chinese military, which ran extensive cyberspying operations.

After taking power as China’s top leader in 2012, Xi Jinping moved quickly to reshape the M.S.S. He seemed unsettled by the threat of U.S. surveillance to China’s security, and in a 2013 speech pointed to the revelations of Edward J. Snowden, the former U.S. intelligence contractor.

Mr. Xi purged the ministry of senior officials accused of corruption and disloyalty. He reined in the hacking role of the Chinese military, elevating the ministry as the country’s primary cyberespionage agency. He put national security at the core of his agenda with new laws and by establishing a new commission.

“At this same time, the intelligence requirements imposed on the security apparatus start to multiply, because Xi wanted to do more things abroad and at home,” said Matthew Brazil, a senior analyst at BluePath Labs who has co-written a history of China’s espionage services.

Since around 2015, the M.S.S. has moved to bring its far-flung provincial offices under tighter central control, said experts. Chen Yixin, the current minister, has demanded that local state security offices follow Beijing’s orders without delay. Security officials, he said on a recent inspection of the northeast, must be both “red and expert” — absolutely loyal to the party while also adept in technology.

“It all essentially means that the Ministry of State Security now sits atop a system in which it can move its pieces all around the chessboard,” said Edward Schwarck, a researcher at the University of Oxford who is writing a dissertation on China’s state security.

Mr. Chen was the official who met with Mr. Burns in May 2023. He gave nothing away when confronted with the details of the cyber campaign, telling Mr. Burns he would let his superiors know about the U.S. concerns, the former officials said.

The Architect of China’s Cyber Power
The Ministry of State Security operates largely in the shadows, its officials rarely seen or named in public. There was one exception: Wu Shizhong, who was a senior official in Bureau 13, the “technical reconnaissance” arm of the ministry.

Mr. Wu was unusually visible, turning up at meetings and conferences in his other role as director of the China Information Technology Security Evaluation Center. Officially, the center vets digital software and hardware for security vulnerabilities before it can be used in China. Unofficially, foreign officials and experts say, the center comes under the control of the M.S.S. and provided a direct pipeline of information about vulnerabilities and hacking talent.

Mr. Wu has not publicly said he served in the security ministry, but a Chinese university website in 2005 described him as a state security bureau head in a notice about a meeting, and investigations by CrowdStrike and other cybersecurity firms have also described his state security role.

“Wu Shizhong is widely recognized as a leading figure in the creation of M.S.S. cyber capabilities,” said Mr. Joske.

In 2013, Mr. Wu pointed to two lessons for China: Mr. Snowden’s disclosures about American surveillance and the use by the United States of a virus to sabotage Iran’s nuclear facilities. “The core of cyber offense and defense capabilities is technical prowess,” he said, stressing the need to control technologies and exploit their weaknesses. China, he added, should create “a national cyber offense and defense apparatus.”

China’s commercial tech sector boomed in the years that followed, and state security officials learned how to put domestic companies and contractors to work, spotting and exploiting flaws and weak spots in computer systems, several cybersecurity experts said. The U.S. National Security Agency has also hoarded knowledge of software flaws for its own use. But China has an added advantage: It can tap its own tech companies to feed information to the state.

“M.S.S. was successful at improving the talent pipeline and the volume of good offensive hackers they could contract to,” said Dakota Cary, a researcher who focuses on China’s efforts to develop its hacking capabilities at SentinelOne. “This gives them a significant pipeline for offensive tools.”

The Chinese government also imposed rules requiring that any newly found software vulnerabilities be reported first to a database that analysts say is operated by the M.S.S., giving security officials early access. Other policies reward tech firms with payments if they meet monthly quotas of finding flaws in computer systems and submitting them to the state security-controlled database.

“It’s a prestige thing and it’s good for a company’s reputation,” Mei Danowski, the co-founder of Natto Thoughts, a company that advises clients on cyber threats, said of the arrangement. “These business people don’t feel like they are doing something wrong. They feel like they are doing something for their country.”

nytimes.com EN 2025 US China Typhoon Spy Agency
Jaguar Land Rover Gets Government Loan Guarantee to Support Supply Chain; Restarts Production https://www.wsj.com/business/jaguar-land-rover-gets-2-billion-u-k-government-loan-guarantee-after-cyberattack-217ae50a?st=q7vzPq&reflink=desktopwebshare_permalink
30/09/2025 11:08:49

The Wall Street Journal
By Dominic Chopping
Updated Sept. 29, 2025 6:39 am ET

Jaguar Land Rover discovered a cyberattack late last month, forcing the company to shut down its computer systems and halt production.

Jaguar Land Rover will restart some sections of its manufacturing operations in the coming days, as it begins its recovery from a cyberattack that has crippled production for around a month.

“As the controlled, phased restart of our operations continues, we are taking further steps towards our recovery and the return to manufacture of our world‑class vehicles,” the company said in a statement Monday.

The news comes a day after the U.K. government stepped in to provide financial support for the company, underwriting a 1.5 billion-pound ($2.01 billion) loan guarantee in a bid to support the company’s cash reserves and help it pay suppliers.

The loan will be provided by a commercial bank and is backed by the government’s export credit agency. It will be paid back over five years.

“Jaguar Land Rover is an iconic British company which employs tens of thousands of people,” U.K. Treasury Chief Rachel Reeves said in a statement Sunday.

“Today we are protecting thousands of those jobs with up to 1.5 billion pounds in additional private finance, helping them support their supply chain and protect a vital part of the British car industry,” she added.

The U.K. automaker, owned by India’s Tata Motors, discovered a cyberattack late last month, forcing the company to shut down its computer systems and halt production.

The company behind Land Rover, Jaguar and Range Rover models, has been forced to repeatedly extend the production shutdown over the past few weeks as it races to restart systems safely with the help of cybersecurity experts flown in from around the globe, the U.K.’s National Cyber Security Centre and law enforcement.

Last week, the company began a gradual restart of its operations, bringing some IT systems back online. It has informed suppliers and retail partners that sections of its digital network are back up and running, and processing capacity for invoicing has been increased as it works to quickly clear the backlog of payments to suppliers.

JLR has U.K. plants in Solihull and Wolverhampton in the West Midlands, in addition to Halewood in Merseyside. It is one of the U.K.’s largest exporters and a major employer, employing 34,000 directly in its U.K. operations. It also operates the largest supply chain in the U.K. automotive sector, much of it made up of small- and medium-sized enterprises, and employing around 120,000 people, according to the government.

Labor unions had warned that thousands of jobs in the JLR supply chain were at risk due to the disruption and had urged the government to step in with a furlough plan to support them.

U.K. trade union Unite, which represents thousands of workers employed at JLR and throughout its supply chain, said the government’s loan guarantee is an important first step.

“The money provided must now be used to ensure job guarantees and to also protect skills and pay in JLR and its supply chain,” Unite general secretary Sharon Graham said in a statement.

wsj.com UK EN 2025 Jaguar Land Rover JLR Government Guarantee
AI for Cyber Defenders https://red.anthropic.com/2025/ai-for-cyber-defenders/
30/09/2025 10:21:04

red.anthropic.com September 29, 2025 ANTHROPIC

AI models are now useful for cybersecurity tasks in practice, not just theory. As research and experience demonstrated the utility of frontier AI as a tool for cyber attackers, we invested in improving Claude’s ability to help defenders detect, analyze, and remediate vulnerabilities in code and deployed systems. This work allowed Claude Sonnet 4.5 to match or eclipse Opus 4.1, our frontier model released only two months prior, in discovering code vulnerabilities and other cyber skills. Adopting and experimenting with AI will be key for defenders to keep pace.

We believe we are now at an inflection point for AI’s impact on cybersecurity.

For several years, our team has carefully tracked the cybersecurity-relevant capabilities of AI models. Initially, we found models to be not particularly powerful for advanced and meaningful capabilities. However, over the past year or so, we’ve noticed a shift. For example:

We showed that models could reproduce one of the costliest cyberattacks in history—the 2017 Equifax breach—in simulation.
We entered Claude into cybersecurity competitions, and it outperformed human teams in some cases.
Claude has helped us discover vulnerabilities in our own code and fix them before release.
In this summer’s DARPA AI Cyber Challenge, teams used LLMs (including Claude) to build “cyber reasoning systems” that examined millions of lines of code for vulnerabilities to patch. In addition to inserted vulnerabilities, teams found (and sometimes patched) previously undiscovered, non-synthetic vulnerabilities. Beyond a competition setting, other frontier labs now apply models to discover and report novel vulnerabilities.

At the same time, as part of our Safeguards work, we have found and disrupted threat actors on our own platform who leveraged AI to scale their operations. Our Safeguards team recently discovered (and disrupted) a case of “vibe hacking,” in which a cybercriminal used Claude to build a large-scale data extortion scheme that previously would have required an entire team of people. Safeguards has also detected and countered Claude's use in increasingly complex espionage operations, including the targeting of critical telecommunications infrastructure, by an actor that demonstrated characteristics consistent with Chinese APT operations.

All of these lines of evidence lead us to think we are at an important inflection point in the cyber ecosystem, and progress from here could become quite fast or usage could grow quite quickly.

Therefore, now is an important moment to accelerate defensive use of AI to secure code and infrastructure. We should not cede the cyber advantage derived from AI to attackers and criminals. While we will continue to invest in detecting and disrupting malicious attackers, we think the most scalable solution is to build AI systems that empower those safeguarding our digital environments—like security teams protecting businesses and governments, cybersecurity researchers, and maintainers of critical open-source software.

In the run-up to the release of Claude Sonnet 4.5, we started to do just that.

Claude Sonnet 4.5: emphasizing cyber skills
As LLMs scale in size, “emergent abilities”—skills that were not evident in smaller models and were not necessarily an explicit target of model training—appear. Indeed, Claude’s abilities to execute cybersecurity tasks like finding and exploiting software vulnerabilities in Capture-the-Flag (CTF) challenges have been byproducts of developing generally useful AI assistants.

But we don’t want to rely on general model progress alone to better equip defenders. Because of the urgency of this moment in the evolution of AI and cybersecurity, we dedicated researchers to making Claude better at key skills like code vulnerability discovery and patching.

The results of this work are reflected in Claude Sonnet 4.5. It is comparable or superior to Claude Opus 4.1 in many aspects of cybersecurity while also being less expensive and faster.

Evidence from evaluations
In building Sonnet 4.5, we had a small research team focus on enhancing Claude’s ability to find vulnerabilities in codebases, patch them, and test for weaknesses in simulated deployed security infrastructure. We chose these because they reflect important tasks for defensive actors. We deliberately avoided enhancements that clearly favor offensive work—such as advanced exploitation or writing malware. We hope to enable models to find insecure code before deployment and to find and fix vulnerabilities in deployed code. There are, of course, many more critical security tasks we did not focus on; at the end of this post, we elaborate on future directions.

To test the effects of our research, we ran industry-standard evaluations of our models. These enable clear comparisons across models, measure the speed of AI progress, and—especially in the case of novel, externally developed evaluations—provide a good metric to ensure that we are not simply teaching to our own tests.

As we ran these evaluations, one thing that stood out was the importance of running them many times. Even if it is computationally expensive for a large set of evaluation tasks, it better captures the behavior of a motivated attacker or defender on any particular real-world problem. Doing so reveals impressive performance not only from Claude Sonnet 4.5, but also from models several generations older.

Cybench
One of the evaluations we have tracked for over a year is Cybench, a benchmark drawn from CTF competition challenges.[1] On this evaluation, we see striking improvement from Claude Sonnet 4.5, not just over Claude Sonnet 4, but even over Claude Opus 4 and 4.1 models. Perhaps most striking, Sonnet 4.5 achieves a higher probability of success given one attempt per task than Opus 4.1 when given ten attempts per task. The challenges that are part of this evaluation reflect somewhat complex, long-duration workflows. For example, one challenge involved analyzing network traffic, extracting malware from that traffic, and decompiling and decrypting the malware. We estimate that this would have taken a skilled human at least an hour, and possibly much longer; Claude took 38 minutes to solve it.

When we give Claude Sonnet 4.5 ten attempts at the Cybench evaluation, it succeeds on 76.5% of the challenges. This is particularly noteworthy because we have doubled this success rate in just the past six months (Sonnet 3.7, released in February 2025, had only a 35.9% success rate when given ten trials).

Figure 1: Model Performance on Cybench—Claude Sonnet 4.5 significantly outperforms all previous models given k=1, 10, or 30 trials, where probability of success is measured as the expectation over the proportion of problems where at least one of k trials succeeds. Note that these results are on a subset of 37 of the 40 original Cybench problems, where 3 problems were excluded due to implementation difficulties.
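
The metric in the figure note—the probability that at least one of k trials succeeds—is the standard pass@k estimate. As a worked illustration only (the post does not say which estimator Anthropic's harness uses), here is a minimal sketch assuming n recorded trials per task, of which c succeeded, using the unbiased estimator from Chen et al. (2021):

from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased estimate of P(at least one of k trials succeeds),
    given n recorded trials of which c succeeded."""
    if n - c < k:
        return 1.0  # too few failures to fill k trials without a success
    return 1.0 - comb(n - c, k) / comb(n, k)

# Hypothetical example: a task solved in 12 of 30 recorded trials.
for k in (1, 10, 30):
    print(f"pass@{k} = {pass_at_k(n=30, c=12, k=k):.3f}")

# A benchmark score like those in the figure is then the mean of pass@k over all tasks.
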
CyberGym
In another external evaluation, we evaluated Claude Sonnet 4.5 on CyberGym, a benchmark that evaluates the ability of agents to (1) find (previously-discovered) vulnerabilities in real open-source software projects given a high-level description of the weakness, and (2) discover new (previously-undiscovered) vulnerabilities.[2] The CyberGym team previously found that Claude Sonnet 4 was the strongest model on their public leaderboard.

Claude Sonnet 4.5 scores significantly better than either Claude Sonnet 4 or Claude Opus 4. When using the same cost constraints as the public CyberGym leaderboard (i.e., a limit of $2 of API queries per vulnerability) we find that Sonnet 4.5 achieves a new state-of-the-art score of 28.9%. But true attackers are rarely limited in this way: they can attempt many attacks, for far more than $2 per trial. When we remove these constraints and give Claude 30 trials per task, we find that Sonnet 4.5 reproduces vulnerabilities in 66.7% of programs. And although the relative price of this approach is higher, the absolute cost—about $45 to try one task 30 times—remains quite low.

Figure 2: Model Performance on CyberGym—Sonnet 4.5 outperforms all previous models, including Opus 4.1.

*Note that Opus 4.1, given its higher price, did not follow the same $2 cost constraint as the other models in the one-trial scenario.

Equally interesting is the rate at which Claude Sonnet 4.5 discovers new vulnerabilities. While the CyberGym leaderboard shows that Claude Sonnet 4 only discovers vulnerabilities in about 2% of targets, Sonnet 4.5 discovers new vulnerabilities in 5% of cases. By repeating the trial 30 times it discovers new vulnerabilities in over 33% of projects.

Figure 3: Model Performance on CyberGym—Sonnet 4.5 outperforms Sonnet 4 at new vulnerability discovery with only one trial and dramatically outstrips its performance when given 30 trials.
Further research into patching
We are also conducting preliminary research into Claude's ability to generate and review patches that fix vulnerabilities. Patching vulnerabilities is a harder task than finding them because the model has to make surgical changes that remove the vulnerability without altering the original functionality. Without guidance or specifications, the model has to infer this intended functionality from the code base.

In our experiment we tasked Claude Sonnet 4.5 with patching vulnerabilities in the CyberGym evaluation set based on a description of the vulnerability and information about what the program was doing when it crashed. We used Claude to judge its own work, asking it to grade the submitted patches by comparing them to human-authored reference patches. 15% of the Claude-generated patches were judged to be semantically equivalent to the human-generated patches. However, this comparison-based approach has an important limitation: because vulnerabilities can often be fixed in multiple valid ways, patches that differ from the reference may still be correct, leading to false negatives in our evaluation.
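
The grading step described above can be approximated with a simple judge prompt that asks a model to compare a candidate patch against the human reference. The sketch below uses the public Anthropic Python SDK; the prompt wording, helper name, and model identifier are assumptions for illustration, not Anthropic's internal evaluation harness.

import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

JUDGE_PROMPT = """You are reviewing a security fix.
Reference patch (known good):
{reference}

Candidate patch:
{candidate}

Answer EQUIVALENT if the candidate removes the same vulnerability without
changing intended functionality, otherwise answer DIFFERENT, then explain briefly."""

def judge(reference: str, candidate: str, model: str = "claude-sonnet-4-5") -> str:
    """Ask the model to grade a candidate patch against a reference patch."""
    response = client.messages.create(
        model=model,
        max_tokens=512,
        messages=[{"role": "user",
                   "content": JUDGE_PROMPT.format(reference=reference, candidate=candidate)}],
    )
    return response.content[0].text

# Note the limitation called out above: a DIFFERENT verdict can still be a valid
# fix, so low agreement with the reference undercounts correct patches.
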

We manually analyzed a sample of the highest-scoring patches and found them to be functionally identical to reference patches that have been merged into the open-source software on which the CyberGym evaluation is based. This work reveals a pattern consistent with our broader findings: Claude develops cyber-related skills as it generally improves. Our preliminary results suggest that patch generation—like vulnerability discovery before it—is an emergent capability that could be enhanced with focused research. Our next step is to systematically address the challenges we've identified to make Claude a reliable patch author and reviewer.

Conferring with trusted partners
Real world defensive security is more complicated in practice than our evaluations can capture. We’ve consistently found that real problems are more complex, challenges are harder, and implementation details matter a lot. Therefore, we feel it is important to work with the organizations actually using AI for defense to get feedback on how our research could accelerate them. In the lead-up to Sonnet 4.5 we worked with a number of organizations who applied the model to their real challenges in areas like vulnerability remediation, testing network security, and threat analysis.

Nidhi Aggarwal, Chief Product Officer of HackerOne, said, “Claude Sonnet 4.5 reduced average vulnerability intake time for our Hai security agents by 44% while improving accuracy by 25%, helping us reduce risk for businesses with confidence.” According to Sven Krasser, Senior Vice President for Data Science and Chief Scientist at CrowdStrike, “Claude shows strong promise for red teaming—generating creative attack scenarios that accelerate how we study attacker tradecraft. These insights strengthen our defenses across endpoints, identity, cloud, data, SaaS, and AI workloads.”

These testimonials made us more confident in the potential for applied, defensive work with Claude.

What’s next?
Claude Sonnet 4.5 represents a meaningful improvement, but we know that many of its capabilities are nascent and do not yet match those of security professionals and established processes. We will keep working to improve the defense-relevant capabilities of our models and enhance the threat intelligence and mitigations that safeguard our platforms. In fact, we have already been using results of our investigations and evaluations to continually refine our ability to catch misuse of our models for harmful cyber behavior. This includes using techniques like organization-level summarization to understand the bigger picture beyond just a singular prompt and completion; this helps disaggregate dual-use behavior from nefarious behavior, particularly for the most damaging use-cases involving large scale automated activity.

But we believe that now is the time for as many organizations as possible to start experimenting with how AI can improve their security posture and build the evaluations to assess those gains. Automated security reviews in Claude Code show how AI can be integrated into the CI/CD pipeline. We would specifically like to enable researchers and teams to experiment with applying models in areas like Security Operations Center (SOC) automation, Security Information and Event Management (SIEM) analysis, secure network engineering, or active defense. We would like to see and use more evaluations for defensive capabilities as part of the growing third-party ecosystem for model evaluations.

But even building and adopting AI to advantage defenders is only part of the solution. We also need conversations about making digital infrastructure more resilient and new software secure by design—including with help from frontier AI models. We look forward to these discussions with industry, government, and civil society as we navigate the moment when AI’s impact on cybersecurity transitions from being a future concern to a present-day imperative.

red.anthropic.com EN 2025 AI Claude cyber defenders Sonnet LLM
Security Alert: Malicious 'postmark-mcp' npm Package Impersonating Postmark | Postmark https://postmarkapp.com/blog/information-regarding-malicious-postmark-mcp-package
29/09/2025 23:25:02

Alert: A malicious npm package named 'postmark-mcp' was impersonating Postmark to steal user emails. Postmark is not affiliated with this fraudulent package.

We recently became aware of a malicious npm package called "postmark-mcp" on npm that was impersonating Postmark and stealing user emails. We want to be crystal clear: Postmark had absolutely nothing to do with this package or the malicious activity.

Here's what happened: A malicious actor created a fake package on npm impersonating our name, built trust over 15 versions, then added a backdoor in version 1.0.16 that secretly BCC’d emails to an external server.

What you should know:

This is not an official Postmark tool. We had not published our Postmark MCP server on npm prior to this incident.
We didn't develop, authorize, or have any involvement with the "postmark-mcp" npm package
The legitimate Postmark API and services remain secure and unaffected by this incident
If you've used this fake package:

Remove it immediately from your systems (one way to locate it is sketched after this list)
Check your email logs for any suspicious activity
Consider rotating any credentials that may have been sent via email during the compromise period
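
As a starting point for the first step above, here is a minimal sketch that walks a directory tree and flags any package-lock.json referencing the rogue package. The package name and backdoored version come from this advisory; everything else is generic and would need extending for yarn or pnpm lockfiles and globally installed packages.

#!/usr/bin/env python3
"""Flag lockfiles that reference the rogue 'postmark-mcp' npm package.

Illustrative sketch: checks package-lock.json files only.
"""
import json
import sys
from pathlib import Path

ROGUE = "postmark-mcp"  # name from the advisory; backdoor reported in 1.0.16

def affected_lockfiles(root: Path):
    for lock in root.rglob("package-lock.json"):
        try:
            data = json.loads(lock.read_text())
        except (OSError, json.JSONDecodeError):
            continue
        # lockfile v2/v3 uses "packages" (keyed by path), v1 uses "dependencies" (keyed by name)
        entries = data.get("packages", {}) | data.get("dependencies", {})
        for key, meta in entries.items():
            if ROGUE in key or (isinstance(meta, dict) and meta.get("name") == ROGUE):
                version = meta.get("version", "unknown") if isinstance(meta, dict) else "unknown"
                yield lock, version

if __name__ == "__main__":
    root = Path(sys.argv[1]) if len(sys.argv) > 1 else Path(".")
    found = list(affected_lockfiles(root))
    for lock, version in found:
        print(f"{lock}: {ROGUE}@{version}")
    sys.exit(1 if found else 0)
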
This situation highlights why we take our API security and developer trust so seriously. When you integrate with Postmark, you're working directly with our official, documented APIs—not third-party packages that claim to represent us. If you are not sure what official resources are available, you can find them via the links below, which are always available to our customers:

Our official resources:

Official Postmark MCP - Github
API documentation
Official libraries and SDKs
Support channels or email security@activecampaign.com if you have questions

postmarkapp.com EN 2025 incident Supply-Chain-Attack postmark-mcp
CVE-2025-24085 https://github.com/b1n4r1b01/n-days/blob/main/CVE-2025-24085/CVE-2025-24085.md
29/09/2025 23:04:40

github.com/b1n4r1b01

This vulnerability has been labeled under the title CoreMedia, which is a gigantic sub-system on Apple platforms. CoreMedia includes multiple public and private frameworks in the shared cache, including CoreMedia.framework, AVFoundation.framework, MediaToolbox.framework, etc. All of these work hand in hand and provide users with multiple low level IPC endpoints and high level APIs. There are tons of vulnerabilities labeled as CoreMedia listed on Apple's security advisory website, and they range from sensitive file access to metadata corruption in media files. In fact, iOS 18.3, where this bug was patched, lists 3 CVEs under the CoreMedia label, but only this one is labeled as a UAF issue, so we can use that as a starting point for our research.

After a lot of diffing, I found that this specific vulnerability lies in the Remaker sub-system of MediaToolbox.framework. The vulnerability lies in the improper handling of the FigRemakerTrack object.

remaker_AddVideoCompositionTrack(FigRemaker, ..., ...)
{
    // Allocates FigRemakerTrack (alias channel)
    ret = remakerFamily_createChannel(FigRemaker, 0, 'vide', &FigRemakerTrack);

    ...

    // Links FigRemakerTrack to FigRemaker
    ret = remakerFamily_finishVideoCompositionChannel(FigRemaker, ..., ...);

    if (ret) {
        // Failure path, means FigRemakerTrack is not linked to FigRemaker
        goto exit;
    }
    else {
        // Success path, means FigRemakerTrack is linked to FigRemaker

        ...

        ret = URLAsset->URLAssetCopyTrackByID(URLAsset, user_controlled_trackID, &outTrack);

        if (ret) {
            // Failure path, if we can make URLAssetCopyTrackByID fail we never zero out FigRemakerTrack
            goto exit;  // <-- buggy route
        }
        else {
            // Success path

            FigWriter->FigWriter_SetTrackProperty(FigWriter, FigRemakerTrack.someTrackID, "MediaTimeScale", value);

            FigRemakerTrack = 0;
            goto exit;
        }
    }

exit:
    // This function will call CFRelease on the FigRemakerTrack
    remakerFamily_discardChannel(FigRemaker, FigRemakerTrack);

    ...
}
By providing an OOB user_controlled_trackID we can force the control flow to take the buggy route where we free the FigRemakerTrack object while FigRemaker still holds a reference to it.

Reaching the vulnerable code
Reaching this vulnerable code was quite tricky, as you need to deal with multiple XPC endpoints. In my original PoC I had to use 6 XPC endpoints—com.apple.coremedia.mediaplaybackd.mutablecomposition.xpc, com.apple.coremedia.mediaplaybackd.sandboxserver.xpc, com.apple.coremedia.mediaplaybackd.customurlloader.xpc, com.apple.coremedia.mediaplaybackd.asset, com.apple.coremedia.mediaplaybackd.remaker.xpc, and com.apple.coremedia.mediaplaybackd.formatreader.xpc—to trigger the bug, but in my final PoC I boiled them down to just 3 endpoints. Since I'm not using low level XPC to communicate with the endpoint, this PoC would only work on iOS 18; my tests were specifically done on iOS 18.2.

To reach this path you need to:

1. Create a Remaker object
2. Enqueue the buggy AddVideoComposition request
3. Start processing the request (this should free the FigRemakerTrack)
4. ???
5. Profit?

Impact
This bug lets you get code execution in mediaplaybackd. In the provided PoC, I am simply double-freeing the FigRemakerTrack by first freeing it with the bug and then closing the XPC connection to trigger cleanup of the FigRemaker object, and thus crashing. Exploiting this kind of CoreFoundation UAF has been made hard since iOS 18 due to changes in the CoreFoundation allocator. But exploiting this bug on iOS 17 should be manageable due to a weaker malloc type implementation; I was very reliably able to place fake objects after the first free on iOS 17.

In-The-Wild angle
If you look at this bug's advisory, you can find that Apple clearly says this bug was part of some iOS chain: "Apple is aware of a report that this issue may have been actively exploited against versions of iOS before iOS 17.2.". Now the weird part is that you don't see the "exploited against versions of iOS before iOS XX.X" line very often in security updates. If we look around CVEs from those days, we see a WebKit -> UIProcess (I guess?) bug, CVE-2025-24201, with a very similar impact description: "This is a supplementary fix for an attack that was blocked in iOS 17.2. (Apple is aware of a report that this issue may have been exploited in an extremely sophisticated attack against specific targeted individuals on versions of iOS before iOS 17.2.)" And if we go back to iOS 17.2/17.3 we see a couple of CVEs which look like some chain, all labeled as actively exploited and not attributed to any 3rd party like Google TAG or any human rights security lab. Now I believe this mediaplaybackd sandbox escape was a 2nd stage sandbox escape in an iOS ITW chain. Here's what my speculated iOS 17 chain looks like (could be totally wrong but we'll probably never know):

WebKit (CVE-2024-23222)
↓
UIProc sbx (CVE-2025-24201)
↓
mediaplaybackd sbx (CVE-2025-24085)
↓
Kernel ???
↓
PAC?/PPL (CVE-2024-23225 / CVE-2024-23296)
Question is: how many pivots are too many pivots? :P

github.com/b1n4r1b01 EN 2025 CoreMedia vulnerability CVE-2025-24085