Cyberveille, curated by Decio
Scattered LAPSUS$ Hunters Ransomware Group Claims New Victims on New Website https://dailydarkweb.net/scattered-lapsus-hunters-ransomware-group-claims-new-victims-on-new-website/
05/10/2025 22:16:25
  • Daily Dark Web - dailydarkweb.net
    October 3, 2025

The newly formed cybercrime alliance, “Scattered LAPSUS$ Hunters,” has launched a new website detailing its claims of a massive data breach affecting Salesforce and its extensive customer base. This development is the latest move by the group, a notorious collaboration between members of the established threat actor crews ShinyHunters, Scattered Spider, and LAPSUS$. On their new site, the group is extorting Salesforce directly, threatening to leak nearly one billion records with a ransom deadline of October 10, 2025.

This situation stems from a widespread and coordinated campaign that targeted Salesforce customers throughout mid-2025. According to security researchers, the attacks did not exploit a vulnerability in Salesforce’s core platform. Instead, the threat actors, particularly those from the Scattered Spider group, employed sophisticated social engineering tactics.

The primary method involved voice phishing (vishing), where attackers impersonated corporate IT or help desk staff in phone calls to employees of target companies. These employees were then manipulated into authorizing malicious third-party applications within their company’s Salesforce environment. This action granted the attackers persistent access tokens (OAuth), allowing them to bypass multi-factor authentication and exfiltrate vast amounts of data. The alliance has now consolidated the data from these numerous breaches for this large-scale extortion attempt against Salesforce itself.
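
To make the mechanism concrete, the sketch below shows, in hypothetical form, why such a consent-phished OAuth token is enough on its own: once a connected app holds a bearer token, every REST call is authorized by the token itself, with no password or MFA challenge in the loop. The instance URL, token value, and query are illustrative placeholders, not details from the campaign.

```python
# Illustrative sketch only: how a bearer OAuth token, once granted to a
# (malicious) connected app, lets a client pull records from the Salesforce
# REST API without ever touching the user's password or MFA prompt.
# The instance URL, token value, and SOQL query are hypothetical placeholders.
import requests

INSTANCE = "https://example.my.salesforce.com"   # hypothetical tenant
ACCESS_TOKEN = "00D...captured-oauth-token"      # token issued to the connected app

def fetch_contacts(session: requests.Session) -> list[dict]:
    """Page through Contact records using the standard query endpoint."""
    records, url = [], f"{INSTANCE}/services/data/v59.0/query"
    params = {"q": "SELECT Id, Name, Email FROM Contact"}
    while url:
        resp = session.get(url, params=params, timeout=30)
        resp.raise_for_status()
        body = resp.json()
        records.extend(body.get("records", []))
        # Salesforce returns nextRecordsUrl until the result set is exhausted.
        next_path = body.get("nextRecordsUrl")
        url = f"{INSTANCE}{next_path}" if next_path else None
        params = None  # the nextRecordsUrl already encodes the query
    return records

if __name__ == "__main__":
    s = requests.Session()
    s.headers["Authorization"] = f"Bearer {ACCESS_TOKEN}"
    print(f"Pulled {len(fetch_contacts(s))} contact records")
```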

The website lists dozens of high-profile Salesforce customers allegedly compromised in the campaign. The list of alleged victims posted by the group includes:

Toyota Motor Corporation (🇯🇵): A multinational automotive manufacturer.
FedEx (🇺🇸): A global courier delivery services company.
Disney/Hulu (🇺🇸): A multinational mass media and entertainment conglomerate.
Republic Services (🇺🇸): An American waste disposal company.
UPS (🇺🇸): A multinational shipping, receiving, and supply chain management company.
Aeroméxico (🇲🇽): The flag carrier airline of Mexico.
Home Depot (🇺🇸): The largest home improvement retailer in the United States.
Marriott (🇺🇸): A multinational company that operates, franchises, and licenses lodging.
Vietnam Airlines (🇻🇳): The flag carrier of Vietnam.
Walgreens (🇺🇸): An American company that operates the second-largest pharmacy store chain in the United States.
Stellantis (🇳🇱): A multinational automotive manufacturing corporation.
McDonald’s (🇺🇸): A multinational fast food chain.
KFC (🇺🇸): A fast food restaurant chain that specializes in fried chicken.
ASICS (🇯🇵): A Japanese multinational corporation which produces sportswear.
GAP, INC. (🇺🇸): A worldwide clothing and accessories retailer.
HMH (hmhco.com) (🇺🇸): A publisher of textbooks, instructional technology materials, and assessments.
Fujifilm (🇯🇵): A multinational photography and imaging company.
Instructure.com – Canvas (🇺🇸): An educational technology company.
Albertsons (Jewel Osco, etc) (🇺🇸): An American grocery company.
Engie Resources (Plymouth) (🇺🇸): A retail electricity provider.
Kering (🇫🇷): A global luxury group that manages brands like Gucci, Balenciaga, and Brioni.
HBO Max (🇺🇸): A subscription video on-demand service.
Instacart (🇺🇸): A grocery delivery and pick-up service.
Petco (🇺🇸): An American pet retailer.
Puma (🇩🇪): A German multinational corporation that designs and manufactures athletic footwear and apparel.
Cartier (🇫🇷): A French luxury goods conglomerate.
Adidas (🇩🇪): A multinational corporation that designs and manufactures shoes, clothing, and accessories.
TripleA (aaa.com) (🇺🇸): A federation of motor clubs throughout North America.
Qantas Airways (🇦🇺): The flag carrier of Australia.
CarMax (🇺🇸): A used vehicle retailer.
Saks Fifth (🇺🇸): An American luxury department store chain.
1-800Accountant (🇺🇸): A nationwide accounting firm.
Air France & KLM (🇫🇷/🇳🇱): A major European airline partnership.
Google Adsense (🇺🇸): A program run by Google through which website publishers serve advertisements.
Cisco (🇺🇸): A multinational digital communications technology conglomerate.
Pandora.net (🇩🇰): A Danish jewelry manufacturer and retailer.
TransUnion (🇺🇸): An American consumer credit reporting agency.
Chanel (🇫🇷): A French luxury fashion house.
IKEA (🇸🇪): A Swedish-founded multinational group that designs and sells ready-to-assemble furniture.
According to the actor, the breach involves nearly 1 billion records from Salesforce and its clients. The allegedly compromised data includes:

Sensitive Personally Identifiable Information (PII)
Strategic business records that could impact market position
Data from over 100 other demand instances hosted on Salesforce infrastructure

dailydarkweb.net EN 2025 data-breach extortion Salesforce Scattered Lapsus$ Hunters scattered-spider ShinyHunters supply-chain-attack vishing
Submarine cable security is all at sea https://www.theregister.com/2025/09/29/submarine_cable_security_report_uk
05/10/2025 22:12:55

• The Register
Mon 29 Sep 2025 // 08:01 UTC
by Danny Bradbury

Feature: Guess how much of our direct transatlantic data capacity runs through two cables in Bude?

The first transatlantic cable, laid in 1858, delivered a little over 700 messages before promptly dying a few weeks later. 167 years on, the undersea cables connecting the UK to the outside world process £220 billion in daily financial transactions. Now, the UK Parliament's Joint Committee on National Security Strategy (JCNSS) has told the government that it has to do a better job of protecting them.

The Committee's report, released on September 19, calls the government "too timid" in its approach to protecting the cables that snake from the UK to various destinations around the world. It warns that "security vulnerabilities abound" in the UK's undersea cable infrastructure, when even a simple anchor-drag can cause major damage.

There are 64 cables connecting the UK to the outside world, according to the report, carrying most of the country's internet traffic. Satellites can't shoulder the data volumes involved, are too expensive, and only account for around 5 percent of traffic globally.

These cables are invaluable to the UK economy, but they're also difficult to protect. They are heavily shielded in the shallow waters close to where they come ashore. That's because accidental damage from fishing operations and other vessels is common. On average, around 200 cables suffer faults each year. But further out, the shielding is less robust. Instead, the companies that lay the cables rely on the depth of the sea to do its job (you'll be pleased to hear that sharks don't generally munch on them).

The report praises the strength of the UK's cable infrastructure, and admits that in some areas at least we have enough redundancy to handle disruptions. For example, it notes that 75 percent of UK transatlantic traffic routes through two cables that come ashore in Bude, Cornwall. That seems like quite the vulnerability, but it acknowledges that we have plenty of infrastructure to route around if anything happened to them. There is "no imminent threat to the UK's national connectivity," it soothes.

But it simultaneously cautions against adopting what it describes as "business-as-usual" views in the industry. The government "focuses too much on having 'lots of cables' and pays insufficient attention to the system's actual ability to absorb unexpected shocks," it frets. It warns that "the impacts on connectivity would be much more serious," if onward connections to Europe suffered as part of a coordinated attack.

"While our national connectivity does not face immediate danger, we must prepare for the possibility that our cables can be threatened in the event of a security crisis," it says.

Reds on the sea bed
Who is most likely to mount such an attack, if anyone? Russia seems front and center, according to experts. It has reportedly been studying the topic for years. Keir Giles, director at The Centre for International Cyber Conflict and senior consulting fellow of the Russia and Eurasia Programme at Chatham House, argues that Russia has a long history of information warfare that stepped up after it annexed Crimea in 2014.

"The thinking part of the Russian military suddenly decided 'actually, this information isolation is the way to go, because it appears to win wars for us without having to fight them'," Giles says, adding that this approach is often combined with choke holds on land-based information sources. Cutting off the population in the target area from any source of information other than what the Russian troops feed them achieves results at low cost.

In a 2021 paper he co-wrote for the NATO Cooperative Cyber Defence Centre of Excellence, he pointed to the Glavnoye upravleniye glubokovodnykh issledovaniy (Main Directorate for Deep-Water Research, or GUGI), a secretive Russian agency responsible for analyzing undersea cables for intelligence or disruption. According to the JCNSS report, this organization operates the Losharik, a titanium-hulled submarine capable of targeting cables at extreme depth.

Shenanigans under the sea
You don't need a fancy submarine to snag a cable, as long as you're prepared to do it in plain sight closer to the coast. The JCNSS report points to several incidents around the UK and the Baltics. November last year saw two incidents. In the first, the Chinese-flagged cargo vessel Yi Peng 3 dragged its anchor for 300km and cut two cables between Sweden and Lithuania. That same month, the UK and Irish navies shadowed Yantar, a Russian research ship loitering around UK cable infrastructure in the Irish Sea.

The following month saw Cook Islands-flagged ship Eagle S damage one power cable and three data cables linking Finland and Estonia. This May, unaffiliated vessel Jaguar approached an undersea cable off Estonia and was escorted out of the country's waters.

The real problem with brute-force physical damage from vessels is that it's difficult to prove that it's intentional. On one hand, it's perfect for an aggressor's plausible deniability, and could also be a way to test the boundaries of what NATO is willing to tolerate. On the other, it could really be nothing.

"Attribution of sabotage to critical undersea infrastructure is difficult to prove, a situation significantly complicated by the prevalence of under-regulated and illegal shipping activities, sometimes referred to as the shadow fleet," a spokesperson for NATO told us.

"I'd push back on an assertion of a coordinated campaign," says Alan Mauldin, research director at TeleGeography, an analyst firm that examines undersea cable infrastructure. He questions assumptions that the Baltic cable damage was anything other than a SNAFU.

The Washington Post also reported comment from officials on both sides of the Atlantic that the Baltic anchor-dragging was probably accidental. Giles scoffs at that. "Somebody had been working very hard to persuade countries across Europe that this sudden spate of cables being broken in the Baltic Sea, one after another, was all an accident, and they were trying to say that it's possible for ships to drag their anchors without noticing," he says.

One would hope that international governance frameworks could help. The UN Convention on the Law of the Sea [PDF] has a provision against messing with undersea cables, but many states haven't enacted the agreement. In any case, plausible deniability makes things more difficult.

"The main challenge in making meaningful governance reforms to secure submarine cables is figuring out what these could be. Making fishing or anchoring accidents illegal would be disproportionate," says Anniki Mikelsaar, doctoral researcher at Oxford University's Oxford Internet Institute. "As there might be some regulatory friction, regional frameworks could be a meaningful avenue to increase submarine cable security."

The difficulty in pinning down intent hasn't stopped NATO from stepping in. In January it launched Baltic Sentry, an initiative to protect undersea infrastructure in the region. That effort includes frigates, patrol aircraft, and naval drones to keep an eye on what happens both above and below the waves.

Preparing for the worst
Regardless of whether vessels are doing this deliberately or by accident, we have to be prepared for it, especially as cable installation shows no sign of slowing. Increasing bandwidth needs will boost global cable kilometers by 48 percent between now and 2040, says TeleGeography, which adds that annual repairs will increase 36 percent over the same period.

"Many cable maintenance ships are reaching the end of their design life cycle, so more investment into upgrading the fleets is needed. This is important to make repairs faster," says Mikelsaar.

There are 62 vessels capable of cable maintenance today, and TeleGeography predicts that'll be enough for the next 15 years. However, it takes time to build these vessels and train the operators, meaning that we'll need to start delivering new vessels soon.

The problem for the UK is that it doesn't own any of that repair capacity, says the JCNSS. It can take a long time to travel to a cable and repair it, and ships can only work on one at a time. The Committee advises that the UK acquire sovereign repair capacity, prescribing a repair ship by 2030.

"This could be leased to industry on favorable terms during peacetime and made available for Government use in a crisis," it says, adding that the Navy should establish a set of reservists that will be trained and ready to operate the vessel.

Sir Chris Bryant MP, the Minister for Data Protection and Telecoms, told the Committee that it was being apocalyptic and "over-egging the pudding" by examining the possibility of a co-ordinated attack. "We disagree," the Committee said in the report, arguing that the security situation in the next decade is uncertain.

"Focusing on fishing accidents and low-level sabotage is no longer good enough," the report adds. "The UK faces a strategic vulnerability in the event of hostilities. Publicly signaling tougher defensive preparations is vital, and may reduce the likelihood of adversaries mounting a sabotage effort in the first place."

To that end, it has made a battery of recommendations. These include building the risk of a coordinated campaign against undersea infrastructure into its risk scenarios, and protecting the stations - often in remote coastal locations - where the cables come onto land.

The report also recommends that the Department for Science, Innovation and Technology (DSIT) ensures all lead departments have detailed sector-by-sector technical impact studies addressing widespread cable outages.

"Government works around the clock to ensure our subsea cable infrastructure is resilient and can withstand hostile and non-hostile threats," DSIT told El Reg, adding that when breaks happen, the UK has some of the fastest cable repair times in the world, and there's usually no noticeable disruption.

"Working with NATO and Joint Expeditionary Force allies, we're also ensuring hostile actors cannot operate undetected near UK or NATO waters," it added. "We're deploying new technologies, coordinating patrols, and leading initiatives like Nordic Warden alongside NATO's Baltic Sentry mission to track and counter undersea threats."

Nevertheless, some seem worried. Vili Lehdonvirta, head of the Digital Economic Security Lab (DIESL) and professor of Technology Policy at Aalto University, has noticed increased interest from governments and private sector organizations alike in how much their daily operations depend on overseas connectivity. He says that this likely plays into increased calls for digital sovereignty.

"The rapid increase in data localization laws around the world is partly explained by this desire for increased resilience," he says. "But situating data and workloads physically close as opposed to where it is economically efficient to run them (eg. because of cheaper electricity) comes with an economic cost."

So the good news is that we know exactly how vulnerable our undersea cables are. The bad news is that so does everyone else with a dodgy cargo ship and a good poker face. Sleep tight.

theregister.com EN 2025 UK Submarine cable sea data
Cybersecurity Training Programs Don’t Prevent Employees from Falling for Phishing Scams https://today.ucsd.edu/story/cybersecurity-training-programs-dont-prevent-employees-from-falling-for-phishing-scams
05/10/2025 22:03:04

today.ucsd.edu UC San Diego
September 17, 2025
Story by:
Ioana Patringenaru - ipatrin@ucsd.edu

Study involving 19,500 UC San Diego Health employees evaluated the effectiveness of two different types of cybersecurity training

Cybersecurity training programs as implemented today by most large companies do little to reduce the risk that employees will fall for phishing scams–the practice of sending malicious emails posing as legitimate to get victims to share personal information, such as their social security numbers.

That’s the conclusion of a study evaluating the effectiveness of two different types of cybersecurity training during an eight-month, randomized controlled experiment. The experiment involved 10 different phishing email campaigns developed by the research team and sent to more than 19,500 employees at UC San Diego Health.

The team presented their research at the Black Hat conference, held Aug. 2 to 7 in Las Vegas. The team originally shared their work at the 46th IEEE Symposium on Security and Privacy in May in San Francisco.

Researchers found that there was no significant relationship between whether users had recently completed an annual, mandated cybersecurity training and the likelihood of falling for phishing emails. The team also examined the efficacy of embedded phishing training – the practice of sharing anti-phishing information after a user engages with a phishing email sent by their organization as a test. For this type of training, researchers found that the difference in failure rates between employees who had completed the training and those who did not was extremely low.

“Taken together, our results suggest that anti-phishing training programs, in their current and commonly deployed forms, are unlikely to offer significant practical value in reducing phishing risks,” the researchers write.

Why is it important to combat phishing?

Whether phishing training is effective is an important question. In spite of 20 years of research and development into malicious email filtering techniques, a 2023 IBM study identifies phishing as the single largest source of successful cybersecurity breaches–16% overall, researchers write.

This threat is particularly challenging in the healthcare sector, where targeted data breaches have reached record highs. In 2023 alone, the U.S. Department of Health and Human Services (HHS) reported over 725 large data breach events, covering over 133 million health records, and 460 associated ransomware incidents.

As a result, it has become standard in many sectors to mandate both formal security training annually and to engage in unscheduled phishing exercises, in which employees are sent simulated phishing emails and then provided “embedded” training if they mistakenly click on the email’s links.

Researchers were trying to understand which of these types of training are most effective. It turns out, as currently administered, that none of them are.

Why are cybersecurity trainings not effective?
One reason the trainings are not effective is that the majority of people do not engage with the embedded training materials, said Grant Ho, study co-author and a faculty member at the University of Chicago, who did some of this work as a postdoctoral researcher at UC San Diego. Overall, 75% of users engaged with the embedded training materials for a minute or less. One-third immediately closed the embedded training page without engaging with the material at all.

“This does lend some suggestion that these trainings, in their current form, are not effective,” said Ariana Mirian, another paper co-author, who did the work as a Ph.D. student in the research group of UC San Diego computer science professors Stefan Savage and Geoff Voelker.

A study of 19,500 employees over eight months
To date, this is the largest study of the effectiveness of anti-phishing training, covering 19,500 employees at UC San Diego Health. In addition, it’s one of only two studies that used a randomized controlled trial method to determine whether employees would receive training, and what kind of phishing emails–or lures–they would receive.

After sending 10 different types of phishing emails over the course of eight months, the researchers found that embedded phishing training only reduced the likelihood of clicking on a phishing link by 2%. This is particularly striking given the expense in time and effort that these trainings require, the researchers note.

Researchers also found that more employees fell for the phishing emails as time went on. In the first month of the study, only 10% of employees clicked on a phishing link. By the eighth month, more than half had clicked on at least one phishing link.

In addition, researchers found that some phishing emails were considerably more effective than others. For example, only 1.82% of recipients clicked on a phishing link to update their Outlook password. But 30.8% clicked on a link that purported to be an update to UC San Diego Health’s vacation policy.

Given the results of the study, researchers recommend that organizations refocus their efforts to combat phishing on technical countermeasures. Specifically, two measures would have better return on investment: two-factor authentication for hardware and applications, as well as password managers that only work on correct domains, the researchers write.
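
As a rough illustration of the second countermeasure, here is a minimal sketch (hypothetical domains, not any real product's code) of the exact-origin check that makes a password manager resistant to phishing: credentials are bound to the origin they were saved for, so a lookalike domain never receives an autofill.

```python
# Conceptual sketch of a domain-bound password manager: credentials are keyed
# to the exact origin they were saved for, so a phishing lookalike gets nothing.
# The vault entries and URLs below are made-up examples.
from urllib.parse import urlsplit

VAULT = {
    # origin -> (username, password); example entry only
    "https://health.example.edu": ("jdoe", "correct-horse-battery-staple"),
}

def autofill(page_url: str):
    parts = urlsplit(page_url)
    origin = f"{parts.scheme}://{parts.netloc}"
    # Exact origin match: a lookalike such as health.example.edu.evil.net,
    # or an http:// clone, will not match and the user gets no credentials.
    return VAULT.get(origin)

print(autofill("https://health.example.edu/login"))           # credentials
print(autofill("https://health.example.edu.evil.net/login"))  # None
```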

This work was supported in part by funding from the University of California Office of the President “Be Smart About Safety” program–an effort focused on identifying best practices for reducing the frequency and severity of systemwide insurance losses. It was also supported in part by U.S. National Science Foundation grant CNS-2152644, the UCSD CSE Postdoctoral Fellows program, the Irwin Mark and Joan Klein Jacobs Chair in Information and Computer Science, the CSE Professorship in Internet Privacy and/or Internet Data Security, a generous gift from Google, and operational support from the UCSD Center for Networked Systems.

today.ucsd.edu EN 2025 Cybersecurity Training Programs phishing Study US
NIRS fire destroys government's cloud storage system, no backups available https://koreajoongangdaily.joins.com/news/2025-10-01/national/socialAffairs/NIRS-fire-destroys-governments-cloud-storage-system-no-backups-available/2412936
05/10/2025 21:55:24

Korea JoongAng Daily
Wednesday
October 1, 2025
By JEONG JAE-HONG [yoon.soyeon@joongang.co.kr]

A fire at the National Information Resources Service (NIRS)'s Daejeon headquarters destroyed the government’s G-Drive cloud storage system, erasing work files saved individually by some 750,000 civil servants, the Ministry of the Interior and Safety said Wednesday.

The fire broke out in the server room on the fifth floor of the center, damaging 96 information systems designated as critical to central government operations, including the G-Drive platform. The G-Drive has been in use since 2018, requiring government officials to store all work documents in the cloud instead of on personal computers. It provided around 30 gigabytes of storage per person.

However, due to the system’s large-capacity, low-performance storage structure, no external backups were maintained — meaning all data has been permanently lost.

The scale of damage varies by agency. The Ministry of Personnel Management, which had mandated that all documents be stored exclusively on G-Drive, was hit hardest. The Office for Government Policy Coordination, which used the platform less extensively, suffered comparatively less damage.

The Personnel Ministry stated that all departments are expected to experience work disruptions. It is currently working to recover alternative data using any files saved locally on personal computers within the past month, along with emails, official documents and printed records.

The Interior Ministry noted that official documents created through formal reporting or approval processes were also stored in the government’s OnNara system and may be recoverable once that system is restored.

“Final reports and official records submitted to the government are also stored in OnNara, so this is not a total loss,” said a director of public services at the Interior Ministry.

The Interior Ministry explained that while most systems at the Daejeon data center are backed up daily to separate equipment within the same center and to a physically remote backup facility, the G-Drive’s structure did not allow for external backups. This vulnerability ultimately left it unprotected.

Criticism continues to build regarding the government's data management protocols.

koreajoongangdaily.joins.com EN 2025 government data-center fire South-Korea
Intel and AMD trusted enclaves, a foundation for network security, fall to physical attacks https://arstechnica.com/security/2025/09/intel-and-amd-trusted-enclaves-the-backbone-of-network-security-fall-to-physical-attacks/
05/10/2025 18:46:13

Ars Technica, Dan Goodin – September 30, 2025, 22:25

The chipmakers say physical attacks aren’t in the threat model. Many users didn’t get the memo.

In the age of cloud computing, protections baked into chips from Intel, AMD, and others are essential for ensuring confidential data and sensitive operations can’t be viewed or manipulated by attackers who manage to compromise servers running inside a data center. In many cases, these protections—which work by storing certain data and processes inside encrypted enclaves known as TEEs (Trusted Execution Environments)—are essential for safeguarding secrets stored in the cloud by the likes of Signal Messenger and WhatsApp. All major cloud providers recommend that customers use them. Intel calls its protection SGX, and AMD has named it SEV-SNP.

Over the years, researchers have repeatedly broken the security and privacy promises that Intel and AMD have made about their respective protections. On Tuesday, researchers independently published two papers laying out separate attacks that further demonstrate the limitations of SGX and SEV-SNP. One attack, dubbed Battering RAM, defeats both protections and allows attackers to not only view encrypted data but also to actively manipulate it to introduce software backdoors or to corrupt data. A separate attack known as Wiretap is able to passively decrypt sensitive data protected by SGX and remain invisible at all times.

Attacking deterministic encryption
Both attacks use a small piece of hardware, known as an interposer, that sits between CPU silicon and the memory module. Its position allows the interposer to observe data as it passes from one to the other. They exploit both Intel’s and AMD’s use of deterministic encryption, which produces the same ciphertext each time the same plaintext is encrypted with a given key. In SGX and SEV-SNP, that means the same plaintext written to the same memory address always produces the same ciphertext.

Deterministic encryption is well-suited for certain uses, such as full disk encryption, where the data being protected never changes once the thing being protected (in this case, the drive) falls into an attacker’s hands. The same encryption is suboptimal for protecting data flowing between a CPU and a memory chip because adversaries can observe the ciphertext each time the plaintext changes, opening the system to replay attacks and other well-known exploit techniques. Probabilistic encryption, by contrast, resists such attacks because the same plaintext can encrypt to a wide range of ciphertexts that are randomly chosen during the encryption process.
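
The property both papers lean on can be shown in a few lines. The toy below (Python, using the cryptography package) contrasts a deterministic scheme, where identical plaintext yields identical ciphertext, with a probabilistic one (AES-GCM with a fresh random nonce). It is an analogy for the trade-off being described, not the chips' actual memory-encryption engines.

```python
# Toy demonstration of the property the attacks exploit: a deterministic scheme
# maps identical plaintext to identical ciphertext, while a probabilistic
# scheme (AES-GCM with a fresh random nonce) does not.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = os.urandom(16)
block = b"SECRET_16_BYTES_"          # one 16-byte "memory word"

def deterministic(pt: bytes) -> bytes:
    # AES-ECB as a stand-in for a deterministic memory-encryption scheme
    enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    return enc.update(pt) + enc.finalize()

aesgcm = AESGCM(key)
def probabilistic(pt: bytes) -> bytes:
    nonce = os.urandom(12)           # fresh randomness every call
    return nonce + aesgcm.encrypt(nonce, pt, None)

# Encrypt the same input twice:
print(deterministic(block) == deterministic(block))   # True  -> observable, replayable
print(probabilistic(block) == probabilistic(block))   # False -> each write looks different
```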

“Fundamentally, [the use of deterministic encryption] is a design trade-off,” Jesse De Meulemeester, lead author of the Battering RAM paper, wrote in an online interview. “Intel and AMD opted for deterministic encryption without integrity or freshness to keep encryption scalable (i.e., protect the entire memory range) and reduce overhead. That choice enables low-cost physical attacks like ours. The only way to fix this likely requires hardware changes, e.g., by providing freshness and integrity in the memory encryption.”

Daniel Genkin, one of the researchers behind Wiretap, agreed. “It’s a design choice made by Intel when SGX moved from client machines to server,” he said. “It offers better performance at the expense of security.” Genkin was referring to Intel’s move about five years ago to discontinue SGX for client processors—where encryption was limited to no more than 256 MB of RAM—to server processors that could encrypt terabytes of RAM. The transition required Intel to revamp the encryption to make it scale for such vast amounts of data.

“The papers are two sides of the same coin,” he added.

While both of Tuesday’s attacks exploit weaknesses related to deterministic encryption, their approaches and findings are distinct, and each comes with its own advantages and disadvantages. Both research teams said they learned of the other’s work only after privately submitting their findings to the chipmakers. The teams then synchronized the publish date for Tuesday. It’s not the first time such a coincidence has occurred. In 2018, multiple research teams independently developed attacks with names including Spectre and Meltdown. Both plucked secrets out of Intel and AMD processors by exploiting their use of a performance enhancement known as speculative execution.

AMD declined to comment on the record, and Intel didn’t respond to questions sent by email. In the past, both chipmakers have said that their respective TEEs are designed to protect against compromises of a piece of software or the operating system itself, including in the kernel. The guarantees, the companies have said, don’t extend to physical attacks such as Battering RAM and Wiretap, which rely on physical interposers that sit between the processor and the memory chips. Despite this limitation, many cloud-based services continue to trust assurances from the TEEs even when they have been compromised through physical attacks (more about that later).

Intel on Tuesday published this advisory. AMD posted one here.

Battering RAM
Battering RAM uses a custom-built analog switch to act as an interposer that reads encrypted data as it passes between protected memory regions in DDR4 memory chips and an Intel or AMD processor. By design, both SGX and SEV-SNP make this ciphertext inaccessible to an adversary. To bypass that protection, the interposer creates memory aliases in which two different memory addresses point to the same location in the memory module.

The Battering-RAM interposer, containing two analog switches (bottom center), is controlled by a microcontroller (left). The switches can dynamically either pass through the command signals to the connected DIMM or connect the respective lines to ground. Credit: De Meulemeester et al.

“This lets the attacker capture a victim's ciphertext and later replay it from an alias,” De Meulemeester explained. “Because Intel's and AMD's memory encryption is deterministic, the replayed ciphertext always decrypts into valid plaintext when the victim reads it.” The PhD researcher at KU Leuven in Belgium continued:

When the CPU writes data to memory, the memory controller encrypts it deterministically, using the plaintext and the address as inputs. The same plaintext written to the same address always produces the same ciphertext. Through the alias, the attacker can't read the victim's secrets directly, but they can capture the victim's ciphertext. Later, by replaying this ciphertext at the same physical location, the victim will decrypt it to a valid, but stale, plaintext.

This replay capability is the primitive on which both our SGX and SEV attacks are built.

In both cases, the adversary installs the interposer, either through a supply-chain attack or physical compromise, and then runs either a virtual machine or application at a chosen memory location. At the same time, the adversary also uses the aliasing to capture the ciphertext. Later, the adversary replays the captured ciphertext, which, because it's running in the region the attacker has access to, is then replayed as plaintext.
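
A simplified, self-contained model of that replay primitive is sketched below. Memory encryption is stood in for by a deterministic function of (key, address, plaintext); an attacker with an aliased view of the same physical location records the ciphertext and later writes it back, and the victim's next read decrypts to valid but stale data. This is purely illustrative of the logic described above; the real attack operates on the DDR4 bus against AES-based engines.

```python
# Simplified model of the replay primitive. Encryption here is a toy keystream
# derived from (key, address); real SGX/SEV-SNP memory encryption is AES-based.
import hashlib

KEY = b"platform-memory-encryption-key"

def keystream(addr: int, n: int) -> bytes:
    return hashlib.sha256(KEY + addr.to_bytes(8, "big")).digest()[:n]

def encrypt(addr: int, plaintext: bytes) -> bytes:   # deterministic in (addr, plaintext)
    return bytes(p ^ k for p, k in zip(plaintext, keystream(addr, len(plaintext))))

decrypt = encrypt  # XOR with the keystream is its own inverse

RAM = {}                               # physical address -> ciphertext
ADDR = 0x1000

# 1) Victim writes a value; attacker snapshots the ciphertext through the alias.
RAM[ADDR] = encrypt(ADDR, b"balance=100")
snapshot = RAM[ADDR]

# 2) Victim updates the value.
RAM[ADDR] = encrypt(ADDR, b"balance=000")

# 3) Attacker replays the old ciphertext; the victim's next read decrypts to
#    valid but stale plaintext, with no integrity or freshness check to catch it.
RAM[ADDR] = snapshot
print(decrypt(ADDR, RAM[ADDR]))        # b'balance=100'
```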

Because SGX uses a single memory-encryption key for the entire protected range of RAM, Battering RAM can gain the ability to write or read plaintext into these regions. This allows the adversary to extract the processor’s provisioning key and, in the process, break the attestation SGX is supposed to provide to certify its integrity and authenticity to remote parties that connect to it.

AMD processors protected by SEV use a single encryption key to produce all ciphertext on a given virtual machine. This prevents the ciphertext replaying technique used to defeat SGX. Instead, Battering RAM captures and replays the cryptographic elements that are supposed to prove the virtual machine hasn’t been tampered with. By replaying an old attestation report, Battering RAM can load a backdoored virtual machine that still carries the SEV-SNP certification that the VM hasn’t been tampered with.

The key benefit of Battering RAM is that it requires equipment that costs less than $50 to pull off. It also allows active decryption, meaning encrypted data can be both read and tampered with. In addition, it works against both SGX and SEV-SNP, as long as they work with DDR4 memory modules.

Wiretap
Wiretap, meanwhile, is limited to breaking only SGX working with DDR4, although the researchers say it would likely work against the AMD protections with a modest amount of additional work. Wiretap, however, allows only for passive decryption, which means protected data can be read, but data can’t be written to protected regions of memory. The cost of the interposer and the equipment for analyzing the captured data also costs considerably more than Battering RAM, at about $500 to $1,000.

Like Battering RAM, Wiretap exploits deterministic encryption, except the latter attack maps ciphertext to a list of known plaintext words that the ciphertext is derived from. Eventually, the attack can recover enough ciphertext to reconstruct the attestation key.

Genkin explained:

Let’s say you have an encrypted list of words that will be later used to form sentences. You know the list in advance, and you get an encrypted list in the same order (hence you know the mapping between each word and its corresponding encryption). Then, when you encounter an encrypted sentence, you just take the encryption of each word and match it against your list. By going word by word, you can decrypt the entire sentence. In fact, as long as most of the words are in your list, you can probably decrypt the entire conversation eventually. In our case, we build a dictionary between common values occurring within the ECDSA algorithm and their corresponding encryption, and then use this dictionary to recover these values as they appear, allowing us to extract the key.
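
A toy version of that dictionary technique, run against a stand-in deterministic cipher rather than SGX's real memory encryption, looks like this:

```python
# Toy dictionary attack against deterministic encryption: with known plaintexts
# encrypted once, the attacker builds a ciphertext -> plaintext table and
# "decrypts" later traffic by lookup, without ever learning the key.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(16)

def enc_word(word: bytes) -> bytes:
    padded = word.ljust(16, b"\x00")       # one word per 16-byte block
    e = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    return e.update(padded) + e.finalize()

known_words = [b"alice", b"pays", b"bob", b"100", b"eur"]
dictionary = {enc_word(w): w for w in known_words}   # built from known plaintexts

# Later, the attacker observes an encrypted "sentence" on the memory bus...
observed = [enc_word(w) for w in [b"bob", b"pays", b"alice", b"100", b"eur"]]

# ...and recovers it word by word by matching against the dictionary.
recovered = b" ".join(dictionary[ct] for ct in observed)
print(recovered)    # b'bob pays alice 100 eur'
```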

The Wiretap researchers went on to show the types of attacks that are possible when an adversary successfully compromises SGX security. As Intel explains, a key benefit of SGX is remote attestation, a process that verifies that VMs or other software running inside the enclave are authentic and haven’t been tampered with. Once the software passes inspection, the enclave sends the remote party a digitally signed certificate providing the identity of the tested software and a clean bill of health certifying the software is safe.

The enclave then opens an encrypted connection with the remote party to ensure credentials and private data can’t be read or modified during transit. Remote attestation works with the industry standard Elliptic Curve Digital Signature Algorithm, making it easy for all parties to use and trust.

Blockchain services didn’t get the memo
Many cloud-based services rely on TEEs as a foundation for privacy and security within their networks. One such service is Phala, a blockchain provider that allows the drafting and execution of smart contracts. According to the company, computer “state”—meaning system variables, configurations, and other dynamic data an application depends on—is stored and updated only in the enclaves available through SGX, SEV-SNP, and a third trusted enclave available in Arm chips known as TrustZone. This design allows these smart contract elements to update in real time through clusters of “worker nodes”—meaning the computers that host and process smart contracts—with no possibility of any node tampering with or viewing the information during execution.

“The attestation quote signed by Intel serves as the proof of a successful execution,” Phala explained. “It proves that specific code has been run inside an SGX enclave and produces certain output, which implies the confidentiality and the correctness of the execution. The proof can be published and validated by anyone with generic hardware.” Enclaves provided by AMD and Arm work in a similar manner.

The Wiretap researchers created a “testnet,” a local machine for running worker nodes. With possession of the SGX attestation key, the researchers were able to obtain a cluster key that prevents individual nodes from reading or modifying contract state. With that, Wiretap was able to fully bypass the protection. In a paper, the researchers wrote:

We first enter our attacker enclave into a cluster and note it is given access to the cluster key. Although the cluster key is not directly distributed to our worker upon joining a cluster, we initiate a transfer of the key from any other node in the cluster. This transfer is completed without on-chain interaction, given our worker is part of the cluster. This cluster key can then be used to decrypt all contract interactions within the cluster. Finally, when our testnet accepted our node’s enclave as a gatekeeper, we directly receive a copy of the master key, which is used to derive all cluster keys and therefore all contract keys, allowing us to decrypt the entire testnet.

The researchers performed similar bypasses against a variety of other blockchain services, including Secret, Crust, and IntegriTEE. After the researchers privately shared the results with these companies, they took steps to mitigate the attacks.

Both Battering RAM and Wiretap work only against DDR4 forms of memory chips because the newer DDR5 runs at much higher bus speeds with a multi-cycle transmission protocol. For that reason, neither attack works against a similar Intel protection known as TDX because it works only with DDR5.

As noted earlier, Intel and AMD both exclude physical attacks like Battering RAM and Wiretap from the threat model their TEEs are designed to withstand. The Wiretap researchers showed that despite these warnings, Phala and many other cloud-based services still rely on the enclaves to preserve the security and privacy of their networks. The research also makes clear that the TEE defenses completely break down in the event of an attack targeting the hardware supply chain.

For now, the only feasible solution is for chipmakers to replace deterministic encryption with a stronger form of protection. Given the challenges of making such encryption schemes scale to vast amounts of RAM, it’s not clear when that may happen.


arstechnica.com EN AMD Intel trusted enclaves CPU physical-attacks chipmakers
Munich Airport Drone Sightings Force Flight Cancellations, Adding To Wave Of European Incidents https://dronexl.co/pt/2025/10/02/munich-airport-drone-sightings-flight-cancellations/
05/10/2025 18:37:31

dronexl.co - Haye Kesteloo, October 2, 2025

Drone sightings Thursday evening forced Germany’s Munich airport to suspend operations, cancelling 17 flights and disrupting travel for nearly 3,000 passengers. The incident marks the latest in a concerning series of mysterious drone closures at major European airports—but whether these sightings represent genuine security threats or mass misidentification remains an urgent question.

The pattern echoes both recent suspected hybrid attacks in Scandinavia and last year’s New Jersey drone panic that turned out to be largely misidentified aircraft and celestial objects.

Munich Operations Suspended for Hours
German air traffic control restricted flight operations at Munich airport from 10:18 p.m. local time Thursday after multiple drone sightings, later suspending them entirely. The airport remained closed until 2:59 a.m. Friday (4:59 a.m. local time).

Another 15 arriving flights were diverted to Stuttgart, Nuremberg, Vienna, and Frankfurt. Flight tracking service Flightradar24 confirmed the airport would remain closed until early Friday morning.

The first arriving flight was expected at 5:25 a.m., with the first departure scheduled for 5:50 a.m., according to the airport’s website.

European Airports on Edge After Suspected Russian Incidents
The Munich closure comes just days after a wave of drone incidents shut down multiple airports across Denmark and Norway in late September. Copenhagen Airport closed for nearly four hours on September 22 after two to three large drones were spotted in controlled airspace. Oslo’s Gardermoen Airport also briefly closed that same night.

Danish Prime Minister Mette Frederiksen called those incidents “the most serious attack on Danish critical infrastructure to date” and suggested Russia could be behind the disruption. Danish authorities characterized the activity as a likely hybrid operation intended to unsettle the public and disrupt critical infrastructure.

Several more Danish airports—including Aalborg, Billund, and military bases—experienced similar incidents in the following days. Denmark is now considering whether to invoke NATO’s Article 4, which enables member states to request consultations over security concerns.

Russian President Vladimir Putin joked Thursday that he would not fly drones over Denmark anymore, though Moscow has denied responsibility for the incidents. Denmark has stopped short of saying definitively who is responsible, but Western officials point to a pattern of Russian drone violations of NATO airspace in Poland, Romania, and Estonia.

The Misidentification Problem: Lessons from New Jersey
While European officials investigate potential hybrid warfare, the incidents raise uncomfortable parallels to the New Jersey drone panic of late 2024—a mass sighting event that turned out to be largely misidentification of routine aircraft and celestial objects.

Between November and December 2024, thousands of “drone” reports flooded in from New Jersey and neighboring states. The phenomenon sparked widespread fear, congressional hearings, and even forced then-President-elect Donald Trump to cancel a trip to his Bedminster golf club.

Federal investigations later revealed the reality: most sightings were manned aircraft operating lawfully. A joint FBI and DHS statement in December noted: “Historically, we have experienced cases of mistaken identity, where reported drones are, in fact, manned aircraft or facilities.”

TSA documents released months later showed that one of the earliest incidents—which forced a medical helicopter carrying a crash victim to divert—involved three commercial aircraft approaching nearby Solberg Airport. “The alignment of the aircraft gave the appearance to observers on the ground of them hovering in formation while they were actually moving directly at the observers,” the analysis found.

Dr. Will Austin, president of Warren County Community College and a national drone expert, reviewed numerous videos during the panic. He found that “many of the reports received involve misidentification of manned aircraft.” Even Jupiter, which was particularly bright in New Jersey’s night sky that season, was mistaken for a hovering drone.

The panic had real consequences: laser-pointing incidents at aircraft spiked to 59 in December 2024—more than the 49 incidents recorded for all of 2023, according to the FAA.

Munich Already on Edge
Munich was already placed on edge this week when its popular Oktoberfest was temporarily closed due to a bomb threat, and explosives were discovered in a residential building in the city’s north.

Whether Thursday’s drone sightings represent genuine security threats similar to the suspected Russian operations in Scandinavia, or misidentified routine aircraft like in New Jersey, remains under investigation. German authorities have not released details about what was observed or where the objects may have originated.

DroneXL’s Take
We’re watching two very different scenarios collide in dangerous ways. The Denmark and Norway incidents appear to involve sophisticated actors—large drones, coordinated timing, professional operation over multiple airports and military installations. Danish intelligence has credible reasons to suspect state-sponsored hybrid warfare, particularly given documented Russian drone violations of NATO airspace in Poland and Romania.

But the New Jersey panic showed how quickly mass hysteria can spiral when people start looking up. Once the narrative took hold, every airplane on approach, every bright planet, every hobbyist quadcopter became a “mystery drone.” Federal investigators reviewed over 5,000 reports and found essentially nothing anomalous—yet 78% of Americans still believed the government was hiding something.

Munich sits uncomfortably between these realities. Is it part of the escalating pattern of suspected Russian hybrid attacks on European infrastructure? Or is it another case of observers misidentifying routine air traffic in an atmosphere of heightened anxiety?

The distinction matters enormously. Real threats require sophisticated counter-drone systems and potentially invoke NATO collective defense mechanisms. False alarms waste resources, create dangerous situations (like those laser-pointing incidents), and damage the credibility of legitimate security concerns.

Airport authorities worldwide need better drone detection technology that can definitively distinguish between aircraft types. Equally important: they need to be transparent about what they’re actually seeing, rather than leaving information vacuums that fill with speculation and fear.

dronexl.co EN 2025 Munich Airport Drone Sightings
Another drone sighting at Munich Airport https://www.munich-airport.com/press-another-drone-sighting-at-munich-airport-35720233
05/10/2025 18:33:33
  • Munich Airport (www.munich-airport.com)
    04.10.2025 (update 5 p.m.)

Following drone sightings late on Thursday and Friday evening and further drone sightings early on Saturday morning, the start of flight operations on 4 October 2025 was delayed. Flight operations were gradually ramped up and stabilised over the course of the afternoon. Passengers were asked to check the status of their flight on their airline's website before travelling to the airport. Of the more than 1,000 take-offs and landings planned for Saturday, airlines cancelled around 170 flights during the day for operational reasons.

As on previous nights, Munich Airport worked with the airlines to immediately provide for passengers in the terminals. These activities will continue on Saturday evening and into Sunday night. Numerous camp beds will again be set up, and blankets, air mattresses, drinks and snacks will be distributed. Some shops, restaurants and a pharmacy in the public area will extend their opening hours and remain open throughout the night. Alongside numerous employees of the airport, airlines and service providers, many volunteers are also on duty.

When a suspected drone sighting occurs, the safety of travellers is the top priority. Reporting chains between air traffic control, the airport and police authorities have been established for years. It is important to emphasise that the detection of and defence against drones are sovereign tasks and are the responsibility of the federal and state police.

munich-airport.com EN 2025 Munich Airport drone sighting Germany
Press: Drone sightings at Munich Airport https://www.munich-airport.com/press-drone-sighting-at-munich-airport-35709068
05/10/2025 18:31:39

Munich Airport (www.munich-airport.com)
October 3, 2025 (Update)

On Thursday evening (October 2), several drones were sighted in the vicinity of and on the grounds of Munich Airport. The first reports were received at around 8:30 p.m. Initially, areas around the airport, including Freising and Erding, were affected.

The state police immediately launched extensive search operations with a large number of officers in the vicinity of the airport. At the same time, the federal police immediately carried out surveillance and search operations on the airport grounds. However, it has not yet been possible to identify the perpetrator.

At around 9:05 p.m., drones were reported near the airport fence. At around 10:10 p.m., the first sighting was made on the airport grounds. As a result, flight operations were gradually suspended at 10:18 p.m. for safety reasons. The preventive closure affected both runways from 10:35 p.m. onwards. The sightings ended around midnight. According to the airport operator, there were 17 flight cancellations and 15 diversions by that time. Helicopters from the federal police and the Bavarian state police were also deployed to monitor the airspace and conduct searches.

Munich Airport, in cooperation with the airlines, immediately took care of the passengers in the terminals. Camp beds were set up, and blankets, drinks, and snacks were provided. In addition, 15 arriving flights were diverted to Stuttgart, Nuremberg, Vienna, and Frankfurt. Flight operations resumed as normal today (Friday, October 3).

Responsibilities and cooperation

Within the scope of their respective tasks, the German Air Traffic Control (DFS), the state aviation security authorities, the state police forces, and the federal police are responsible for the detection and defense against drones at commercial airports.

The measures are carried out in close coordination between all parties involved and the airport operator on the basis of jointly developed emergency plans. The local state police force is responsible for preventive policing in the vicinity of the airport, while the federal police is responsible for policing on the airport grounds. Criminal prosecution is the responsibility of the state police.

Note: Please understand that for tactical reasons, the security authorities are unable to provide any further information on the systems and measures used. Further investigations will be conducted by the Bavarian police, as they have jurisdiction in this matter.

munich-airport.com EN 2025 Airport drones Germany Air Traffic sightings
Hacking group claims theft of 1 billion records from Salesforce customer databases | TechCrunch https://techcrunch.com/2025/10/03/hacking-group-claims-theft-of-1-billion-records-from-salesforce-customer-databases/
05/10/2025 18:23:59

techcrunch.com - Lorenzo Franceschi-Bicchierai
Zack Whittaker
6:17 AM PDT · October 3, 2025

The hacking group claims to have stolen about a billion records from companies, including FedEx, Qantas, and TransUnion, who store their customer and company data in Salesforce.

A notorious predominantly English-speaking hacking group has launched a website to extort its victims, threatening to release about a billion records stolen from companies who store their customers’ data in cloud databases hosted by Salesforce.

The loosely organized group, which has been known as Lapsus$, Scattered Spider, and ShinyHunters, has published a dedicated data leak site on the dark web, called Scattered LAPSUS$ Hunters.

The website, first spotted by threat intelligence researchers on Friday and seen by TechCrunch, aims to pressure victims into paying the hackers to avoid having their stolen data published online.

“Contact us to regain control on data governance and prevent public disclosure of your data,” reads the site. “Do not be the next headline. All communications demand strict verification and will be handled with discretion.”

Over the last few weeks, the ShinyHunters gang allegedly hacked dozens of high-profile companies by breaking into their cloud-based databases hosted by Salesforce.

Insurance giant Allianz Life, Google, fashion conglomerate Kering, the airline Qantas, carmaking giant Stellantis, credit bureau TransUnion, and the employee management platform Workday, among several others, have confirmed their data was stolen in these mass hacks.

The hackers’ leak site lists several alleged victims, including FedEx, Hulu (owned by Disney), and Toyota Motors, none of which responded to a request for comment on Friday.

It’s not clear if the companies known to have been hacked but not listed on the hacking group’s leak site have paid a ransom to the hackers to prevent their data from being published. When reached by TechCrunch, a representative from ShinyHunters said, “there are numerous other companies that have not been listed,” but declined to say why.

At the top of the site, the hackers mention Salesforce and demand that the company negotiate a ransom, threatening that otherwise “all your customers [sic] data will be leaked.” The tone of the message suggests that Salesforce has not yet engaged with the hackers.

Salesforce spokesperson Nicole Aranda provided a link to the company’s statement, which notes that the company is “aware of recent extortion attempts by threat actors.”

“Our findings indicate these attempts relate to past or unsubstantiated incidents, and we remain engaged with affected customers to provide support,” the statement reads. “At this time, there is no indication that the Salesforce platform has been compromised, nor is this activity related to any known vulnerability in our technology.”

Aranda declined to comment further.

For weeks, security researchers have speculated that the group, which has historically eschewed a public presence online, was planning to publish a data leak website to extort its victims.

Historically, such websites have been associated with foreign, often Russian-speaking, ransomware gangs. In the last few years, these organized cybercrime groups have evolved from stealing and encrypting their victims’ data and then privately asking for a ransom, to simply threatening to publish the stolen data online unless they get paid.

techcrunch.com EN 2025 Qantas Salesforce ScatteredSpider leak-site
GreyNoise detects 500% surge in scans targeting Palo Alto Networks portals https://securityaffairs.com/182939/hacking/greynoise-detects-500-surge-in-scans-targeting-palo-alto-networks-portals.html
04/10/2025 23:15:51
QRCode
archive.org
thumbnail

securityaffairs.com
October 04, 2025
Pierluigi Paganini

GreyNoise saw a 500% spike in scans on Palo Alto Networks login portals on Oct. 3, 2025, the highest in three months.
Cybersecurity firm GreyNoise reported a 500% surge in scans targeting Palo Alto Networks login portals on October 3, 2025, marking the highest activity in three months.

On October 3, the researchers observed more than 1,285 IPs scanning Palo Alto portals, up from a usual baseline of around 200. The experts reported that 93% of the IPs were classified as suspicious and 7% as malicious.
Most originated from the U.S., with smaller clusters in the U.K., Netherlands, Canada, and Russia.

GreyNoise described the traffic as targeted and structured, aimed at Palo Alto login portals and split across distinct scanning clusters.

The scans targeted emulated Palo Alto profiles, focusing mainly on U.S. and Pakistan systems, indicating coordinated, targeted reconnaissance.

GreyNoise found that the recent Palo Alto scanning mirrors Cisco ASA activity, showing regional clustering and shared TLS fingerprints linked to infrastructure in the Netherlands. Both used similar tools, suggesting possibly shared infrastructure or operators. The overlap follows a Cisco ASA scanning surge that preceded the disclosure of two zero-day vulnerabilities.

“Both Cisco ASA and Palo Alto login scanning traffic in the past 48 hours share a dominant TLS fingerprint tied to infrastructure in the Netherlands. This comes after GreyNoise initially reported an ASA scanning surge before Cisco’s disclosure of two ASA zero-days.” reads the report published by GreyNoise. “In addition to a possible connection to ongoing Cisco ASA scanning, GreyNoise identified concurrent surges across remote access services. While suspicious, we are unsure if this activity is related.”

GreyNoise noted in July that spikes in Palo Alto scans have sometimes preceded the disclosure of new flaws within six weeks; the experts are monitoring whether the latest surge signals another disclosure.

“GreyNoise is developing an enhanced dynamic IP blocklist to help defenders take faster action on emerging threats.” concludes the report.

securityaffairs.com EN 2025 GreyNoise PaloAlto Networks portals scan scanning
Update on a Security Incident Involving Third-Party Customer Service https://discord.com/press-releases/update-on-security-incident-involving-third-party-customer-service
04/10/2025 23:13:39
QRCode
archive.org
thumbnail

discord.com

Discord
October 3, 2025

Discord recently discovered an incident where an unauthorized party compromised one of Discord’s third-party customer service providers.
This incident impacted a limited number of users who had communicated with our Customer Support or Trust & Safety teams.
This unauthorized party did not gain access to Discord directly.
No messages or activities were accessed beyond what users may have discussed with Customer Support or Trust & Safety agents.
We immediately revoked the customer support provider’s access to our ticketing system and continue to investigate this matter.
We’re working closely with law enforcement to investigate this matter.
We are in the process of emailing the users impacted.

At Discord, protecting the privacy and security of our users is a top priority. That’s why it’s important to us that we’re transparent with them about events that impact their personal information.

Recently, we discovered an incident where an unauthorized party compromised one of Discord’s third-party customer service providers. The unauthorized party then gained access to information from a limited number of users who had contacted Discord through our Customer Support and/or Trust & Safety teams.

As soon as we became aware of this attack, we took immediate steps to address the situation. This included revoking the customer support provider’s access to our ticketing system, launching an internal investigation, engaging a leading computer forensics firm to support our investigation and remediation efforts, and engaging law enforcement.

We are in the process of contacting impacted users. If you were impacted, you will receive an email from noreply@discord.com. We will not contact you about this incident via phone – official Discord communications channels are limited to emails from noreply@discord.com.

What happened?
An unauthorized party targeted our third-party customer support services to access user data, with a view to extort a financial ransom from Discord.

What data was involved?
The data that may have been impacted was related to our customer service system. This may include:

Name, Discord username, email and other contact details if provided to Discord customer support
Limited billing information such as payment type, the last four digits of your credit card, and purchase history if associated with your account
IP addresses
Messages with our customer service agents
Limited corporate data (training materials, internal presentations)
The unauthorized party also gained access to a small number of government‑ID images (e.g., driver’s license, passport) from users who had appealed an age determination. If your ID may have been accessed, that will be specified in the email you receive.

What data was not involved?
Full credit card numbers or CCV codes
Messages or activity on Discord beyond what users may have discussed with customer support
Passwords or authentication data
What are we doing about this?
Discord has and will continue to take all appropriate steps in response to this situation. As standard, we will continue to frequently audit our third-party systems to ensure they meet our security and privacy standards. In addition, we have:

Notified relevant data protection authorities.
Proactively engaged with law enforcement to investigate this attack.
Reviewed our threat detection systems and security controls for third-party support providers.
Taking next steps
Looking ahead, we recommend impacted users stay alert when receiving messages or other communication that may seem suspicious. We have service agents on hand to answer questions and provide additional support.

We take our responsibility to protect your personal data seriously and understand the inconvenience and concern this may cause.

discord.com EN 2025 Discord data-breach incident
'Delightful' Red Hat OpenShift AI bug allows full takeover https://www.theregister.com/2025/10/01/critical_red_hat_openshift_ai_bug/?is=09685296f9ea1fb2ee0963f2febaeb3a55d8fb1eddbb11ed4bd2da49d711f2c7
04/10/2025 15:36:10
QRCode
archive.org
thumbnail

theregister.com • The Register
by Jessica Lyons
Wed 1 Oct 2025 // 19:35 UTC

Who wouldn't want root access on cluster master nodes?

A 9.9-out-of-10 severity bug in Red Hat's OpenShift AI service could allow a remote attacker with minimal authentication to steal data, disrupt services, and fully hijack the platform.

"A low-privileged attacker with access to an authenticated account, for example as a data scientist using a standard Jupyter notebook, can escalate their privileges to a full cluster administrator," the IBM subsidiary warned in a security alert published earlier this week.

"This allows for the complete compromise of the cluster's confidentiality, integrity, and availability," the alert continues. "The attacker can steal sensitive data, disrupt all services, and take control of the underlying infrastructure, leading to a total breach of the platform and all applications hosted on it."

Red Hat deemed the vulnerability, tracked as CVE-2025-10725, "important" despite its 9.9 CVSS score, which garners a critical-severity rating from the National Vulnerability Database - and basically any other organization that issues CVEs. This, the vendor explained, is because the flaw requires some level of authentication, albeit minimal, for an attacker to jeopardize the hybrid cloud environment.

Users can mitigate the flaw by removing the ClusterRoleBinding that links the kueue-batch-user-role ClusterRole with the system:authenticated group. "The permission to create jobs should be granted on a more granular, as-needed basis to specific users or groups, adhering to the principle of least privilege," Red Hat added.

Additionally, the vendor suggests not granting broad permissions to system-level groups.
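
In practical terms, the mitigation is an RBAC cleanup: find and delete the ClusterRoleBinding that ties the kueue-batch-user-role ClusterRole to the system:authenticated group, then re-grant job creation narrowly. As a minimal sketch (not an official Red Hat tool), the following uses the Kubernetes Python client and assumes cluster-admin credentials in the local kubeconfig; the binding is located by the role and group it links rather than by a hard-coded name, since the advisory does not give one.

    from kubernetes import client, config

    def remove_kueue_batch_user_binding() -> None:
        """Delete ClusterRoleBindings granting kueue-batch-user-role to system:authenticated."""
        config.load_kube_config()  # assumes cluster-admin credentials in the local kubeconfig
        rbac = client.RbacAuthorizationV1Api()

        for binding in rbac.list_cluster_role_binding().items:
            if binding.role_ref.name != "kueue-batch-user-role":
                continue
            subjects = binding.subjects or []
            if any(s.kind == "Group" and s.name == "system:authenticated" for s in subjects):
                print(f"Deleting overly broad binding: {binding.metadata.name}")
                rbac.delete_cluster_role_binding(name=binding.metadata.name)

    if __name__ == "__main__":
        remove_kueue_batch_user_binding()

After removing the broad binding, job-creation rights can be restored to specific users or groups through narrowly scoped bindings, in line with the least-privilege guidance above.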

Red Hat didn't immediately respond to The Register's inquiries, including if the CVE has been exploited. We will update this story as soon as we receive any additional information.

Whose role is it anyway?
OpenShift AI is an open platform for building and managing AI applications across hybrid cloud environments.

As noted earlier, it includes a ClusterRole named "kueue-batch-user-role." The security issue here exists because this role is incorrectly bound to the system:authenticated group.

"This grants any authenticated entity, including low-privileged service accounts for user workbenches, the permission to create OpenShift Jobs in any namespace," according to a Bugzilla flaw-tracking report.
One of these low-privileged accounts could abuse this to schedule a malicious job in a privileged namespace, configure it to run with a high-privilege ServiceAccount, exfiltrate that ServiceAccount token, and then "progressively pivot and compromise more powerful accounts, ultimately achieving root access on cluster master nodes and leading to a full cluster takeover," the report said.

"Vulnerabilities offering a path for a low privileged user to fully take over an environment needs to be patched in the form of an incident response cycle, seeking to prove that the environment was not already compromised," Trey Ford, chief strategy and trust officer at crowdsourced security company Bugcrow said in an email to The Register.

In other words: "Assume breach," Ford added.

"The administrators managing OpenShift AI infrastructure need to patch this with a sense of urgency - this is a delightful vulnerability pattern for attackers looking to acquire both access and data," he said. "Security teams must move with a sense of purpose, both verifying that these environments have been patched, then investigating to confirm whether-and-if their clusters have been compromised."

theregister.com EN 2025 vulnerability OpenShift Red Hat CVE-2025-10725
ShinyHunters launches Salesforce data leak site to extort 39 victims https://www.bleepingcomputer.com/news/security/shinyhunters-starts-leaking-data-stolen-in-salesforce-attacks/
03/10/2025 16:51:35
QRCode
archive.org
thumbnail

bleepingcomputer.com By Sergiu Gatlan
October 3, 2025

An extortion group has launched a new data leak site to publicly extort dozens of companies impacted by a wave of Salesforce breaches, leaking samples of data stolen in the attacks.

The threat actors responsible for these attacks claim to be part of the ShinyHunters, Scattered Spider, and Lapsus$ groups, collectively referring to themselves as "Scattered Lapsus$ Hunters."

Today, they launched a new data leak site containing 39 companies impacted by the attacks. Each entry includes samples of data allegedly stolen from victims' Salesforce instances, and warns the victims to reach out to "prevent public disclosure" of their data before the October 10 deadline is reached.

The companies being extorted on the data leak site include well-known brands and organizations such as FedEx, Disney/Hulu, Home Depot, Marriott, Google, Cisco, Toyota, Gap, McDonald's, Walgreens, Instacart, Cartier, Adidas, Saks Fifth Avenue, Air France & KLM, Transunion, HBO MAX, UPS, Chanel, and IKEA.

"All of them have been contacted long ago, they saw the email because I saw them download the samples multiple times. Most of them chose to not disclose and ignore," ShinyHunters told BleepingComputer.

"We highly advise you proceed into the right decision, your organisation can prevent the release of this data, regain control over the situation and all operations remain stable as always. We highly recommend a decision-maker to get involved as we are presenting a clear and mutually beneficial opportunity to resolve this matter," they warned on the leak site.

The threat actors also added a separate entry requesting that Salesforce pay a ransom to prevent all impacted customers' data (approximately 1 billion records containing personal information) from being leaked.

"Should you comply, we will withdraw from any active or pending negotiation indiviually from your customers. Your customers will not be attacked again nor will they face a ransom from us again, should you pay," they added.

The extortion group also threatened the company, stating that it would help law firms pursue civil and commercial lawsuits against Salesforce following the data breaches and warned that the company had also failed to protect customers' data as required by the European General Data Protection Regulation (GDPR).

bleepingcomputer.com EN 2025 Breach Data-Breach Leak Salesforce Scattered-Lapsus$-Hunters ShinyHunters
Security update: Incident related to Red Hat Consulting GitLab instance https://www.redhat.com/en/blog/security-update-incident-related-red-hat-consulting-gitlab-instance?sc_cid=RHCTG0180000354765
03/10/2025 09:57:11
QRCode
archive.org
thumbnail

We are writing to provide an update regarding a security incident related to a specific GitLab environment used by our Red Hat Consulting team. Red Hat takes the security and integrity of our systems and the data entrusted to us extremely seriously, and we are addressing this issue with the highest priority.

What happened
We recently detected unauthorized access to a GitLab instance used for internal Red Hat Consulting collaboration in select engagements. Upon detection, we promptly launched a thorough investigation, removed the unauthorized party’s access, isolated the instance, and contacted the appropriate authorities. Our investigation, which is ongoing, found that an unauthorized third party had accessed and copied some data from this instance.

We have now implemented additional hardening measures designed to help prevent further access and contain the issue.

Scope and impact on customers
We understand you may have questions about whether this incident affects you. Based on our investigation to date, we can share:

Impact on Red Hat products and supply chain: At this time, we have no reason to believe this security issue impacts any of our other Red Hat services or products, including our software supply chain or downloading Red Hat software from official channels.
Consulting customers: If you are a Red Hat Consulting customer, our analysis is ongoing. The compromised GitLab instance housed consulting engagement data, which may include, for example, Red Hat’s project specifications, example code snippets, and internal communications about consulting services. This GitLab instance typically does not house sensitive personal data. While our analysis remains ongoing, we have not identified sensitive personal data within the impacted data at this time. We will notify you directly if we believe you have been impacted.
Other customers: If you are not a Red Hat Consulting customer, there is currently no evidence that you have been affected by this incident.
For clarity, this incident is unrelated to a Red Hat OpenShift AI vulnerability (CVE-2025-10725) that was announced yesterday.

Our next steps
We are engaging directly with any customers who may be impacted.

Thank you for your continued trust in Red Hat. We appreciate your patience as we continue our investigation.

redhat.com EN 2025 GitLab Consulting TheCrimsonCollective incident data-breach
Out-of-bounds read & write in RFC 3211 KEK Unwrap (CVE-2025-9230) https://openssl-library.org/news/secadv/20250930.txt
02/10/2025 21:32:46
QRCode
archive.org

OpenSSL Security Advisory [30th September 2025]
===============================================
https://openssl-library.org/news/secadv/20250930.txt

Out-of-bounds read & write in RFC 3211 KEK Unwrap (CVE-2025-9230)
=================================================================

Severity: Moderate

Issue summary: An application trying to decrypt CMS messages encrypted using
password based encryption can trigger an out-of-bounds read and write.

Impact summary: This out-of-bounds read may trigger a crash which leads to
Denial of Service for an application. The out-of-bounds write can cause
a memory corruption which can have various consequences including
a Denial of Service or Execution of attacker-supplied code.

Although the consequences of a successful exploit of this vulnerability
could be severe, the probability that the attacker would be able to
perform it is low. Besides, password based (PWRI) encryption support in CMS
messages is very rarely used. For that reason the issue was assessed as
Moderate severity according to our Security Policy.

The FIPS modules in 3.5, 3.4, 3.3, 3.2, 3.1 and 3.0 are not affected by this
issue, as the CMS implementation is outside the OpenSSL FIPS module
boundary.

OpenSSL 3.5, 3.4, 3.3, 3.2, 3.0, 1.1.1 and 1.0.2 are vulnerable to this issue.

OpenSSL 3.5 users should upgrade to OpenSSL 3.5.4.

OpenSSL 3.4 users should upgrade to OpenSSL 3.4.3.

OpenSSL 3.3 users should upgrade to OpenSSL 3.3.5.

OpenSSL 3.2 users should upgrade to OpenSSL 3.2.6.

OpenSSL 3.0 users should upgrade to OpenSSL 3.0.18.

OpenSSL 1.1.1 users should upgrade to OpenSSL 1.1.1zd.
(premium support customers only)

OpenSSL 1.0.2 users should upgrade to OpenSSL 1.0.2zm.
(premium support customers only)

This issue was reported on 9th August 2025 by Stanislav Fort (Aisle Research).
The fix was developed by Stanislav Fort (Aisle Research) and Viktor Dukhovni.

Timing side-channel in SM2 algorithm on 64 bit ARM (CVE-2025-9231)
=================================================================

Severity: Moderate

Issue summary: A timing side-channel which could potentially allow remote
recovery of the private key exists in the SM2 algorithm implementation on 64 bit
ARM platforms.

Impact summary: A timing side-channel in SM2 signature computations on 64 bit
ARM platforms could allow recovering the private key by an attacker.

While remote key recovery over a network was not attempted by the reporter,
timing measurements revealed a timing signal which may allow such an attack.

OpenSSL does not directly support certificates with SM2 keys in TLS, and so
this CVE is not relevant in most TLS contexts. However, given that it is
possible to add support for such certificates via a custom provider, coupled
with the fact that in such a custom provider context the private key may be
recoverable via remote timing measurements, we consider this to be a Moderate
severity issue.

The FIPS modules in 3.5, 3.4, 3.3, 3.2, 3.1 and 3.0 are not affected by this
issue, as SM2 is not an approved algorithm.

OpenSSL 3.1, 3.0, 1.1.1 and 1.0.2 are not vulnerable to this issue.

OpenSSL 3.5, 3.4, 3.3, and 3.2 are vulnerable to this issue.

OpenSSL 3.5 users should upgrade to OpenSSL 3.5.4.

OpenSSL 3.4 users should upgrade to OpenSSL 3.4.3.

OpenSSL 3.3 users should upgrade to OpenSSL 3.3.5.

OpenSSL 3.2 users should upgrade to OpenSSL 3.2.6.

This issue was reported on 18th August 2025 by Stanislav Fort (Aisle Research)
The fix was developed by Stanislav Fort.

Out-of-bounds read in HTTP client no_proxy handling (CVE-2025-9232)
===================================================================

Severity: Low

Issue summary: An application using the OpenSSL HTTP client API functions may
trigger an out-of-bounds read if the "no_proxy" environment variable is set and
the host portion of the authority component of the HTTP URL is an IPv6 address.

Impact summary: An out-of-bounds read can trigger a crash which leads to
Denial of Service for an application.

The OpenSSL HTTP client API functions can be used directly by applications
but they are also used by the OCSP client functions and CMP (Certificate
Management Protocol) client implementation in OpenSSL. However the URLs used
by these implementations are unlikely to be controlled by an attacker.

In this vulnerable code the out of bounds read can only trigger a crash.
Furthermore the vulnerability requires an attacker-controlled URL to be
passed from an application to the OpenSSL function and the user has to have
a "no_proxy" environment variable set. For the aforementioned reasons the
issue was assessed as Low severity.

The vulnerable code was introduced in the following patch releases:
3.0.16, 3.1.8, 3.2.4, 3.3.3, 3.4.0 and 3.5.0.

The FIPS modules in 3.5, 3.4, 3.3, 3.2, 3.1 and 3.0 are not affected by this
issue, as the HTTP client implementation is outside the OpenSSL FIPS module
boundary.

OpenSSL 3.5, 3.4, 3.3, 3.2 and 3.0 are vulnerable to this issue.

OpenSSL 1.1.1 and 1.0.2 are not affected by this issue.

OpenSSL 3.5 users should upgrade to OpenSSL 3.5.4.

OpenSSL 3.4 users should upgrade to OpenSSL 3.4.3.

OpenSSL 3.3 users should upgrade to OpenSSL 3.3.5.

OpenSSL 3.2 users should upgrade to OpenSSL 3.2.6.

OpenSSL 3.0 users should upgrade to OpenSSL 3.0.18.

This issue was reported on 16th August 2025 by Stanislav Fort (Aisle Research).
The fix was developed by Stanislav Fort (Aisle Research).
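
The common remediation across all three issues is upgrading to the first patched release for the branch in use. As a rough illustrative check (not part of the advisory), a Python application that relies on the interpreter's linked OpenSSL could compare the version reported by the ssl module against the patched minimums listed above; applications that bundle or statically link their own OpenSSL must be checked separately.

    import re
    import ssl

    # First fixed release per affected 3.x branch, taken from this advisory.
    PATCHED = {
        (3, 5): (3, 5, 4),
        (3, 4): (3, 4, 3),
        (3, 3): (3, 3, 5),
        (3, 2): (3, 2, 6),
        (3, 0): (3, 0, 18),
    }

    def linked_openssl_version():
        """Parse e.g. 'OpenSSL 3.0.13 30 Jan 2024' from the ssl module's version banner."""
        m = re.search(r"OpenSSL (\d+)\.(\d+)\.(\d+)", ssl.OPENSSL_VERSION)
        return tuple(int(x) for x in m.groups()) if m else None

    def needs_upgrade():
        ver = linked_openssl_version()
        if ver is None or ver[:2] not in PATCHED:
            return True  # branch not in the table above (e.g. 1.1.1 or 1.0.2): check the advisory manually
        return ver < PATCHED[ver[:2]]

    if __name__ == "__main__":
        status = "upgrade required (or verify manually)" if needs_upgrade() else "patched"
        print(ssl.OPENSSL_VERSION, "->", status)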

General Advisory Notes
======================

URL for this Security Advisory:
https://openssl-library.org/news/secadv/20250930.txt

openssl-library.org EN 2025 CVE-2025-9230 OpenSSL vulnerability
Feds Tie ‘Scattered Spider’ Duo to $115M in Ransoms https://krebsonsecurity.com/2025/09/feds-tie-scattered-spider-duo-to-115m-in-ransoms/
02/10/2025 18:43:14
QRCode
archive.org

krebsonsecurity.com – Krebs on Security
U.S. prosecutors last week levied criminal hacking charges against 19-year-old U.K. national Thalha Jubair for allegedly being a core member of Scattered Spider, a prolific cybercrime group blamed for extorting at least $115 million in ransom payments from victims. The charges came as Jubair and an alleged co-conspirator appeared in a London court to face accusations of hacking into and extorting several large U.K. retailers, the London transit system, and healthcare providers in the United States.

At a court hearing last week, U.K. prosecutors laid out a litany of charges against Jubair and 18-year-old Owen Flowers, accusing the teens of involvement in an August 2024 cyberattack that crippled Transport for London, the entity responsible for the public transport network in the Greater London area.

krebsonsecurity.com EN 2025 Scattered-Spider Lapsus$ busted UK
Digital Threat Modeling Under Authoritarianism https://www.schneier.com/blog/archives/2025/09/digital-threat-modeling-under-authoritarianism.html
02/10/2025 18:33:00
QRCode
archive.org

Schneier on Security - schneier.com/blog/ - Posted on September 26, 2025 at 7:04 AM

Digital Threat Modeling Under Authoritarianism
Today’s world requires us to make complex and nuanced decisions about our digital security. Evaluating when to use a secure messaging app like Signal or WhatsApp, which passwords to store on your smartphone, or what to share on social media requires us to assess risks and make judgments accordingly. Arriving at any conclusion is an exercise in threat modeling.

In security, threat modeling is the process of determining what security measures make sense in your particular situation. It’s a way to think about potential risks, possible defenses, and the costs of both. It’s how experts avoid being distracted by irrelevant risks or overburdened by undue costs.

We threat model all the time. We might decide to walk down one street instead of another, or use an internet VPN when browsing dubious sites. Perhaps we understand the risks in detail, but more likely we are relying on intuition or some trusted authority. But in the U.S. and elsewhere, the average person’s threat model is changing—specifically involving how we protect our personal information. Previously, most concern centered on corporate surveillance; companies like Google and Facebook engaging in digital surveillance to maximize their profit. Increasingly, however, many people are worried about government surveillance and how the government could weaponize personal data.

Since the beginning of this year, the Trump administration’s actions in this area have raised alarm bells: The Department of Government Efficiency (DOGE) took data from federal agencies, Palantir combined disparate streams of government data into a single system, and Immigration and Customs Enforcement (ICE) used social media posts as a reason to deny someone entry into the U.S.

These threats, and others posed by a techno-authoritarian regime, are vastly different from those presented by a corporate monopolistic regime—and different yet again in a society where both are working together. Contending with these new threats requires a different approach to personal digital devices, cloud services, social media, and data in general.

What Data Does the Government Already Have?
For years, most public attention has centered on the risks of tech companies gathering behavioral data. This is an enormous amount of data, generally used to predict and influence consumers’ future behavior—rather than as a means of uncovering our past. Although commercial data is highly intimate—such as knowledge of your precise location over the course of a year, or the contents of every Facebook post you have ever created—it’s not the same thing as tax returns, police records, unemployment insurance applications, or medical history.

The U.S. government holds extensive data about everyone living inside its borders, some of it very sensitive—and there’s not much that can be done about it. This information consists largely of facts that people are legally obligated to tell the government. The IRS has a lot of very sensitive data about personal finances. The Treasury Department has data about any money received from the government. The Office of Personnel Management has an enormous amount of detailed information about government employees—including the very personal form required to get a security clearance. The Census Bureau possesses vast data about everyone living in the U.S., including, for example, a database of real estate ownership in the country. The Department of Defense and the Bureau of Veterans Affairs have data about present and former members of the military, the Department of Homeland Security has travel information, and various agencies possess health records. And so on.

It is safe to assume that the government has—or will soon have—access to all of this government data. This sounds like a tautology, but in the past, the U.S. government largely followed the many laws limiting how those databases were used, especially regarding how they were shared, combined, and correlated. Under the second Trump administration, this no longer seems to be the case.

Augmenting Government Data with Corporate Data
The mechanisms of corporate surveillance haven’t gone away. Computing technology is constantly spying on its users—and that data is being used to influence us. Companies like Google and Meta are vast surveillance machines, and they use that data to fuel advertising. A smartphone is a portable surveillance device, constantly recording things like location and communication. Cars, and many other Internet of Things devices, do the same. Credit card companies, health insurers, internet retailers, and social media sites all have detailed data about you—and there is a vast industry that buys and sells this intimate data.

This isn’t news. What’s different in a techno-authoritarian regime is that this data is also shared with the government, either as a paid service or as demanded by local law. Amazon shares Ring doorbell data with the police. Flock, a company that collects license plate data from cars around the country, shares data with the police as well. And just as Chinese corporations share user data with the government and companies like Verizon shared calling records with the National Security Agency (NSA) after the Sept. 11 terrorist attacks, an authoritarian government will use this data as well.

Personal Targeting Using Data
The government has vast capabilities for targeted surveillance, both technically and legally. If a high-level figure is targeted by name, it is almost certain that the government can access their data. The government will use its investigatory powers to the fullest: It will go through government data, remotely hack phones and computers, spy on communications, and raid a home. It will compel third parties, like banks, cell providers, email providers, cloud storage services, and social media companies, to turn over data. To the extent those companies keep backups, the government will even be able to obtain deleted data.

This data can be used for prosecution—possibly selectively. This has been made evident in recent weeks, as the Trump administration personally targeted perceived enemies for “mortgage fraud.” This was a clear example of weaponization of data. Given all the data the government requires people to divulge, there will be something there to prosecute.

Although alarming, this sort of targeted attack doesn’t scale. As vast as the government’s information is and as powerful as its capabilities are, they are not infinite. They can be deployed against only a limited number of people. And most people will never be that high on the priorities list.

The Risks of Mass Surveillance
Mass surveillance is surveillance without specific targets. For most people, this is where the primary risks lie. Even if we’re not targeted by name, personal data could raise red flags, drawing unwanted scrutiny.

The risks here are twofold. First, mass surveillance could be used to single out people to harass or arrest: when they cross the border, show up at immigration hearings, attend a protest, are stopped by the police for speeding, or just as they’re living their normal lives. Second, mass surveillance could be used to threaten or blackmail. In the first case, the government is using that database to find a plausible excuse for its actions. In the second, it is looking for an actual infraction that it could selectively prosecute—or not.

Mitigating these risks is difficult, because it would require not interacting with either the government or corporations in everyday life—and living in the woods without any electronics isn’t realistic for most of us. Additionally, this strategy protects only future information; it does nothing to protect the information generated in the past. That said, going back and scrubbing social media accounts and cloud storage does have some value. Whether it’s right for you depends on your personal situation.

Opportunistic Use of Data
Beyond data given to third parties—either corporations or the government—there is also data users keep in their possession. This data may be stored on personal devices such as computers and phones or, more likely today, in some cloud service and accessible from those devices. Here, the risks are different: Some authority could confiscate your device and look through it.

This is not just speculative. There are many stories of ICE agents examining people’s phones and computers when they attempt to enter the U.S.: their emails, contact lists, documents, photos, browser history, and social media posts.

There are several different defenses you can deploy, presented from least to most extreme. First, you can scrub devices of potentially incriminating information, either as a matter of course or before entering a higher-risk situation. Second, you could consider deleting—even temporarily—social media and other apps so that someone with access to a device doesn’t get access to those accounts—this includes your contacts list. If a phone is swept up in a government raid, your contacts become their next targets.

Third, you could choose not to carry your device with you at all, opting instead for a burner phone without contacts, email access, and accounts, or go electronics-free entirely. This may sound extreme—and getting it right is hard—but I know many people today who have stripped-down computers and sanitized phones for international travel. At the same time, there are also stories of people being denied entry to the U.S. because they are carrying what is obviously a burner phone—or no phone at all.

Encryption Isn’t a Magic Bullet—But Use It Anyway
Encryption protects your data while it’s not being used, and your devices when they’re turned off. This doesn’t help if a border agent forces you to turn on your phone and computer. And it doesn’t protect metadata, which needs to be unencrypted for the system to function. This metadata can be extremely valuable. For example, Signal, WhatsApp, and iMessage all encrypt the contents of your text messages—the data—but information about who you are texting and when must remain unencrypted.

Also, if the NSA wants access to someone’s phone, it can get it. Encryption is no help against that sort of sophisticated targeted attack. But, again, most of us aren’t that important and even the NSA can target only so many people. What encryption safeguards against is mass surveillance.

I recommend Signal for text messages above all other apps. But if you are in a country where having Signal on a device is in itself incriminating, then use WhatsApp. Signal is better, but everyone has WhatsApp installed on their phones, so it doesn’t raise the same suspicion. Also, it’s a no-brainer to turn on your computer’s built-in encryption: BitLocker for Windows and FileVault for Macs.

On the subject of data and metadata, it’s worth noting that data poisoning doesn’t help nearly as much as you might think. That is, it doesn’t do much good to add hundreds of random strangers to an address book or bogus internet searches to a browser history to hide the real ones. Modern analysis tools can see through all of that.

Shifting Risks of Decentralization
This notion of individual targeting, and the inability of the government to do that at scale, starts to fail as the authoritarian system becomes more decentralized. After all, if repression comes from the top, it affects only senior government officials and people who people in power personally dislike. If it comes from the bottom, it affects everybody. But decentralization looks much like the events playing out with ICE harassing, detaining, and disappearing people—everyone has to fear it.

This can go much further. Imagine there is a government official assigned to your neighborhood, or your block, or your apartment building. It’s worth that person’s time to scrutinize everybody’s social media posts, email, and chat logs. For anyone in that situation, limiting what you do online is the only defense.

Being Innocent Won’t Protect You
This is vital to understand. Surveillance systems and sorting algorithms make mistakes. This is apparent in the fact that we are routinely served advertisements for products that don’t interest us at all. Those mistakes are relatively harmless—who cares about a poorly targeted ad?—but a similar mistake at an immigration hearing can get someone deported.

An authoritarian government doesn’t care. Mistakes are a feature and not a bug of authoritarian surveillance. If ICE targets only people it can go after legally, then everyone knows whether or not they need to fear ICE. If ICE occasionally makes mistakes by arresting Americans and deporting innocents, then everyone has to fear it. This is by design.

Effective Opposition Requires Being Online
For most people, phones are an essential part of daily life. If you leave yours at home when you attend a protest, you won’t be able to film police violence. Or coordinate with your friends and figure out where to meet. Or use a navigation app to get to the protest in the first place.

Threat modeling is all about trade-offs. Understanding yours depends not only on the technology and its capabilities but also on your personal goals. Are you trying to keep your head down and survive—or get out? Are you wanting to protest legally? Are you doing more, maybe throwing sand into the gears of an authoritarian government, or even engaging in active resistance? The more you are doing, the more technology you need—and the more technology will be used against you. There are no simple answers, only choices.

schneier.com EN 2025 ThreatModeling Authoritarianism
Red Hat confirms security incident after hackers claim GitHub breach https://www.bleepingcomputer.com/news/security/red-hat-confirms-security-incident-after-hackers-claim-github-breach/
02/10/2025 12:06:46
QRCode
archive.org
thumbnail

bleepingcomputer.com By Lawrence Abrams
October 2, 2025 02:15 AM

An extortion group calling itself the Crimson Collective claims to have breached Red Hat's private GitHub repositories, stealing nearly 570GB of compressed data across 28,000 internal projects.

This data allegedly includes approximately 800 Customer Engagement Reports (CERs), which can contain sensitive information about a customer's network and platforms.

A CER is a consulting document prepared for clients that often contains infrastructure details, configuration data, authentication tokens, and other information that could be abused to breach customer networks.

Red Hat confirmed that it suffered a security incident related to its consulting business, but would not verify any of the attacker's claims regarding the stolen GitHub repositories and customer CERs.

"Red Hat is aware of reports regarding a security incident related to our consulting business and we have initiated necessary remediation steps," Red Hat told BleepingComputer.

"The security and integrity of our systems and the data entrusted to us are our highest priority. At this time, we have no reason to believe the security issue impacts any of our other Red Hat services or products and are highly confident in the integrity of our software supply chain."

While Red Hat did not respond to any further questions about the breach, the hackers told BleepingComputer that the intrusion occurred approximately two weeks ago.

They allegedly found authentication tokens, full database URIs, and other private information in Red Hat code and CERs, which they claimed to use to gain access to downstream customer infrastructure.

The hacking group also published a complete directory listing of the allegedly stolen GitHub repositories and a list of CERs from 2020 through 2025 on Telegram.

The directory listing of CERs includes a wide range of sectors and well-known organizations such as Bank of America, T-Mobile, AT&T, Fidelity, Kaiser, Mayo Clinic, Walmart, Costco, the U.S. Navy’s Naval Surface Warfare Center, the Federal Aviation Administration, the House of Representatives, and many others.

The hackers stated that they attempted to contact Red Hat with an extortion demand but received no response other than a templated reply instructing them to submit a vulnerability report to their security team.

According to them, the created ticket was repeatedly assigned to additional people, including Red Hat's legal and security staff members.

BleepingComputer sent Red Hat additional questions, and we will update this story if we receive more information.

The same group also claimed responsibility for briefly defacing Nintendo’s topic page last week to include contact information and links to their Telegram channel.

bleepingcomputer.com EN 2025 Crimson-Collective Data-Breach Extortion GitHub Red-Hat Repository
Microsoft’s new Security Store is like an app store for cybersecurity | The Verge https://www.theverge.com/news/788195/microsoft-security-store-launch-copilot-ai-agents
01/10/2025 06:46:48
QRCode
archive.org
thumbnail

Cybersecurity workers can also start creating their own Security Copilot AI agents.

Microsoft is launching a Security Store that will be full of security software-as-a-service (SaaS) solutions and AI agents. It’s part of a broader effort to sell Microsoft’s Sentinel security platform to businesses, complete with Microsoft Security Copilot AI agents that can be built by security teams to help tackle the latest threats.

The Microsoft Security Store is a storefront designed for security professionals to buy and deploy SaaS solutions and AI agents from Microsoft’s ecosystem partners. Darktrace, Illumio, Netskope, Performanta, and Tanium are all part of the new store, with solutions covering threat protection, identity and device management, and more.

A lot of the solutions will integrate with Microsoft Defender, Sentinel, Entra, Purview, or Security Copilot, making them quick to onboard for businesses that are fully reliant on Microsoft for their security needs. This should cut down on procurement and onboarding times, too.

Alongside the Security Store, Microsoft is also allowing Security Copilot users to build their own AI agents. Microsoft launched some of its own security AI agents earlier this year, and now security teams can use a tool that’s similar to Copilot Studio to build their own. You simply create an AI agent through a set of prompts and then publish it, all with no code required. These Security Copilot agents will also be available in the Security Store today.

theverge.com EN 2025 Microsoft AI Copilot AI agents SaaS
How China’s Secretive Spy Agency Became a Cyber Powerhouse https://www.nytimes.com/2025/09/28/world/asia/how-chinas-secretive-spy-agency-became-a-cyber-powerhouse.html?smid=nytcore-ios-share&referringSource=articleShare
30/09/2025 11:10:59
QRCode
archive.org

nytimes.com
By Chris Buckley and Adam Goldman
Sept. 28, 2025

Fears of U.S. surveillance drove Xi Jinping, China’s leader, to elevate the agency and put it at the center of his cyber ambitions.

American officials were alarmed in 2023 when they discovered that Chinese state-controlled hackers had infiltrated critical U.S. infrastructure with malicious code that could wreck power grids, communications systems and water supplies. The threat was serious enough that William J. Burns, the director of the C.I.A., made a secret trip to Beijing to confront his Chinese counterpart.

He warned China’s minister of state security that there would be “serious consequences” for Beijing if it unleashed the malware. The tone of the meeting, details of which have not been previously reported, was professional and it appeared the message was delivered.

But since that meeting, which was described by two former U.S. officials, China’s intrusions have only escalated. (The former officials spoke on the condition of anonymity because they were not authorized to speak publicly about the sensitive meeting.)

American and European officials say China’s Ministry of State Security, the civilian spy agency often called the M.S.S., in particular, has emerged as the driving force behind China’s most sophisticated cyber operations.

In recent disclosures, officials revealed another immense, yearslong intrusion by hackers who have been collectively called Salt Typhoon, one that may have stolen information about nearly every American and targeted dozens of other countries. Some countries hit by Salt Typhoon warned in an unusual statement that the data stolen could provide Chinese intelligence services with the capability to “identify and track their targets’ communications and movements around the world.”

The attack underscored how the Ministry of State Security has evolved into a formidable cyberespionage agency capable of audacious operations that can evade detection for years, experts said.

For decades, China has used for-hire hackers to break into computer networks and systems. These operatives sometimes mixed espionage with commercial data theft or were sloppy, exposing their presence. In the recent operation by Salt Typhoon, however, intruders linked to the M.S.S. found weaknesses in systems, burrowed into networks, spirited out data, hopped between compromised systems and erased traces of their presence.

“Salt Typhoon shows a highly skilled and strategic side to M.S.S. cyber operations that has been missed with the attention on lower-quality contract hackers,” said Alex Joske, the author of a book on the ministry.

For Washington, the implication of China’s growing capability is clear: In a future conflict, China could put U.S. communications, power and infrastructure at risk.

China’s biggest hacking campaigns have been “strategic operations” intended to intimidate and deter rivals, said Nigel Inkster, a senior adviser for cybersecurity and China at the International Institute for Strategic Studies in London.

“If they succeed in remaining on these networks undiscovered, that potentially gives them a significant advantage in the event of a crisis,” said Mr. Inkster, formerly director of operations and intelligence in the British Secret Intelligence Service, MI6. “If their presence is — as it has been — discovered, it still exercises a very significant deterrent effect; as in, ‘Look what we could do to you if we wanted.’”

The Rise of the M.S.S.
China’s cyber advances reflect decades of investment to try to match, and eventually rival, the U.S. National Security Agency and Britain’s Government Communications Headquarters, or GCHQ.

China’s leaders founded the Ministry of State Security in 1983 mainly to track dissidents and perceived foes of Communist Party rule. The ministry engaged in online espionage but was long overshadowed by the Chinese military, which ran extensive cyberspying operations.

After taking power as China’s top leader in 2012, Xi Jinping moved quickly to reshape the M.S.S. He seemed unsettled by the threat of U.S. surveillance to China’s security, and in a 2013 speech pointed to the revelations of Edward J. Snowden, the former U.S. intelligence contractor.

Mr. Xi purged the ministry of senior officials accused of corruption and disloyalty. He reined in the hacking role of the Chinese military, elevating the ministry as the country’s primary cyberespionage agency. He put national security at the core of his agenda with new laws and by establishing a new commission.

“At this same time, the intelligence requirements imposed on the security apparatus start to multiply, because Xi wanted to do more things abroad and at home,” said Matthew Brazil, a senior analyst at BluePath Labs who has co-written a history of China’s espionage services.

Since around 2015, the M.S.S. has moved to bring its far-flung provincial offices under tighter central control, said experts. Chen Yixin, the current minister, has demanded that local state security offices follow Beijing’s orders without delay. Security officials, he said on a recent inspection of the northeast, must be both “red and expert” — absolutely loyal to the party while also adept in technology.

“It all essentially means that the Ministry of State Security now sits atop a system in which it can move its pieces all around the chessboard,” said Edward Schwarck, a researcher at the University of Oxford who is writing a dissertation on China’s state security.

Mr. Chen was the official who met with Mr. Burns in May 2023. He gave nothing away when confronted with the details of the cyber campaign, telling Mr. Burns he would let his superiors know about the U.S. concerns, the former officials said.

The Architect of China’s Cyber Power
The Ministry of State Security operates largely in the shadows, its officials rarely seen or named in public. There was one exception: Wu Shizhong, who was a senior official in Bureau 13, the “technical reconnaissance” arm of the ministry.

Mr. Wu was unusually visible, turning up at meetings and conferences in his other role as director of the China Information Technology Security Evaluation Center. Officially, the center vets digital software and hardware for security vulnerabilities before it can be used in China. Unofficially, foreign officials and experts say, the center comes under the control of the M.S.S. and provided a direct pipeline of information about vulnerabilities and hacking talent.

Mr. Wu has not publicly said he served in the security ministry, but a Chinese university website in 2005 described him as a state security bureau head in a notice about a meeting, and investigations by CrowdStrike and other cybersecurity firms have also described his state security role.

“Wu Shizhong is widely recognized as a leading figure in the creation of M.S.S. cyber capabilities,” said Mr. Joske.

In 2013, Mr. Wu pointed to two lessons for China: Mr. Snowden’s disclosures about American surveillance and the use by the United States of a virus to sabotage Iran’s nuclear facilities. “The core of cyber offense and defense capabilities is technical prowess,” he said, stressing the need to control technologies and exploit their weaknesses. China, he added, should create “a national cyber offense and defense apparatus.”

China’s commercial tech sector boomed in the years that followed, and state security officials learned how to put domestic companies and contractors to work, spotting and exploiting flaws and weak spots in computer systems, several cybersecurity experts said. The U.S. National Security Agency has also hoarded knowledge of software flaws for its own use. But China has an added advantage: It can tap its own tech companies to feed information to the state.

“M.S.S. was successful at improving the talent pipeline and the volume of good offensive hackers they could contract to,” said Dakota Cary, a researcher who focuses on China’s efforts to develop its hacking capabilities at SentinelOne. “This gives them a significant pipeline for offensive tools.”

The Chinese government also imposed rules requiring that any newly found software vulnerabilities be reported first to a database that analysts say is operated by the M.S.S., giving security officials early access. Other policies reward tech firms with payments if they meet monthly quotas of finding flaws in computer systems and submitting them to the state security-controlled database.

“It’s a prestige thing and it’s good for a company’s reputation,” Mei Danowski, the co-founder of Natto Thoughts, a company that advises clients on cyber threats, said of the arrangement. “These business people don’t feel like they are doing something wrong. They feel like they are doing something for their country.”

nytimes.com EN 2025 US China Typhoon Spy Agency