
Daily Shaarli

All of one day's links on a single page.

August 17, 2025

Intro and plan for the Sanctum EDR - 0xflux Red Team Manual

fluxsec.red/ - Discover the project plan for building Sanctum, an open-source EDR in Rust. Learn about the features, milestones, and challenges in developing an effective EDR and AV system.

Sanctum is an experimental proof-of-concept EDR, designed to detect modern malware techniques, above and beyond the capabilities of antivirus.
Sanctum is going to be built in Rust and designed to perform the job of both an antivirus (AV) and an Endpoint Detection and Response (EDR) agent. It is no small feat building an EDR, and I am somewhat anxious about the path ahead; but you have to start somewhere and I’m starting with a blog post. If nothing else, this series will help me convey my own development and learning, as well as keep me motivated to keep working on this - all too often with personal projects I start something and then jump to the next shiny thing I think of. If you are here to learn something, hopefully I can impart some knowledge through this process.

I plan to build this EDR also around offensive techniques I’m demonstrating for this blog, hopefully to show how certain attacks could be stopped or detected - or it may be I can’t figure out a way to stop the attack! Either way, it will be fun!

Project rework
Originally, I was going to write the Windows Kernel Driver in Rust, but the bar for Rust Windows Driver development seemed quite high. I then swapped to C, realised how much I missed Rust, and swapped back to Rust!

So this project will be fully written in Rust: both the kernel driver and the usermode module.

Why Rust for driver development?
Traditionally, drivers have been written in C & C++. While it might seem significantly easier to write this project in C, as an avid Rust enthusiast, I found myself longing for Rust’s features and safety guarantees. Writing in C or C++ made me miss the modern tooling and expressive power that Rust provides.

Thanks to Rust’s ability to operate in embedded and kernel development environments through libcore (no_std), and with Microsoft’s support for developing drivers in Rust, Rust emerges as an excellent candidate for a “safer” approach to driver development. I use “safer” in quotes because, despite Rust’s safety guarantees, we still need to interact with unsafe APIs within the operating system. However, Rust’s stringent compile-time checks and ownership model significantly reduce the likelihood of common programming errors and vulnerabilities. I saw a statistic recently that some Rust kernels or driver modules were only around 5% unsafe code; I much prefer that to writing something which is 100% unsafe!

With regards to safety, even top-tier C programmers will make occasional mistakes in their code; I am not a top-tier C programmer (far from it!), so for me, the guarantee of a safer driver is much more appealing! The runtime guarantees you get with a Rust program (i.e. no access violations, dangling pointers, or use-after-frees, outside those limited unsafe scopes) are welcome. Rust really is a great language.
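To illustrate the idea of confining unsafe code to small, audited scopes (this is a generic sketch, not actual driver code; `read_first` is an invented example function):

```rust
/// Reads the first element through a raw pointer, but only via a safe
/// wrapper that upholds the invariants the unsafe block relies on.
fn read_first(slice: &[u32]) -> Option<u32> {
    if slice.is_empty() {
        return None;
    }
    // SAFETY: the pointer comes from a non-empty slice, so it is
    // valid, aligned, and points to an initialized u32.
    let value = unsafe { *slice.as_ptr() };
    Some(value)
}

fn main() {
    // Callers never touch the unsafe part; the compiler checks the rest.
    assert_eq!(read_first(&[7, 8, 9]), Some(7));
    assert_eq!(read_first(&[]), None);
    println!("ok");
}
```

The unsafe surface area is a single, commented expression; everything outside it keeps Rust’s full guarantees, which is the pattern those “5% unsafe” codebases rely on.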

The Windows Driver Kit (WDK) crate ecosystem provides essential tools that make driver development in Rust more accessible. With these crates, we can easily manage heap memory and utilize familiar Rust idioms like println!(). The maintainers of these crates have done a fantastic job bridging the gap between Rust and Windows kernel development.

https://github.com/0xflux/Sanctum

Prefiguring Responsibility: The Pall Mall Process and Cyber Intrusion Capabilities – Andrew Dwyer

riscs.org.uk - Research Institute for Sociotechnical Cyber Security - Cyber intrusion capabilities—such as those used by penetration testers—are essential to enhancing our collective cyber security. However, there are various actors who build and use these capabilities to degrade and harm the digital security of human rights activists, journalists, and politicians. The diverse range of capabilities for cyber intrusion—identifying software vulnerabilities, crafting exploits, creating tools for users, selling and buying those capabilities, and offering services such as penetration testing—makes this a complex policy problem. The market includes those deemed ‘legitimate’ and ‘illegitimate’ by states and civil society, as well as those that exist in ‘grey’ areas between and within jurisdictions. The concern is that the commercial market for cyber intrusion capabilities is growing; as the range of actors involved expands, the potential harm from inappropriate use is increasing. It is in the context of this commercial market that the UK and France launched the Pall Mall Process in 2024 to tackle the proliferation and irresponsible use of commercial cyber intrusion capabilities (CCICs).

With financial support from RISCS, I participated in the second conference of the Pall Mall Process in Paris in April 2025, having attended the inaugural conference in London in 2024. The conference strengthened my thinking and research regarding the political economies of cyber power. For the RISCS community, understanding how international fora shape social, technical, and organisational practice in a world where geopolitics is increasingly fraught and contested is essential—whether in the shaping of cyber security narratives, the building of technology ecosystems, or the addressing of harms perpetrated in the UK and beyond. Cyber diplomacy—of which the Pall Mall Process is part—is now decades in the making, with non-binding cyber norms beginning to emerge from various processes at the UN. The Pall Mall Process is but one of a burgeoning number of such initiatives internationally (see also a recent focus on new initiatives around ransomware), even as international agreement becomes trickier. Beginning with a look at the proliferation of CCICs through markets, I’ll consider the Pall Mall Process (‘the Process’) itself and how it is seeking to intervene, while reflecting on the shortcomings of the concept of ‘responsibility’ when it comes to coordinating international action against irresponsible use of cyber intrusion capabilities.

Proliferation and markets

CCICs have become a growing proliferation concern as they have become available to a wider number of actors. Most concern has centred on the role of surveillance and spyware tools (a focus of US initiatives), with popular public attention on the use of Pegasus software by the Israeli NSO Group against politicians, journalists, and activists. However, spyware is but one part of a broader ecology of ‘zero day’ vulnerabilities, processes, tools, and services that seek to both secure and exploit, with legitimate and illegitimate applications utilising similar technologies and techniques. The complexity of this ecology, alongside the fact that both desirable (e.g., targeting criminal actors) and undesirable (e.g., targeting human rights campaigners) activities are supported by CCICs, means that outright bans lack feasibility. Moreover, many states, particularly states of the global majority, do not have their own ‘in-house’ capabilities. As a result, CCICs are proliferating, which increases the risk that they will be exploited for undesirable activities—because some providers are willing to sell to both responsible actors and those who irresponsibly deploy their acquired capabilities.

As James Shires observes in one of the most comprehensive assessments of the issue to date, the international approach to this problem is split between counter-proliferation and market-driven perspectives. It is at this intersection that the Process seeks to intervene by acknowledging that proliferation will occur while seeking to impose upon the market both ‘hard’ obligations, such as export control frameworks, and ‘soft’ obligations, such as codes of practice (a code of practice for states was published during the second conference; one for industry may follow). However, the concept of responsibility pervasive within the CCICs discussion is informed by nuanced and contested notions of political economy that privilege western-centric views of democratic practice and strong state capability.

The Pall Mall Process

In June 2025, the UN adopted the final report of the Open-Ended Working Group on security of and in the use of information and communications technologies 2021-2025 (OEWG). This reaffirmed the applicability of international law on cyberspace and 11 previously agreed non-binding cyber norms, as well as establishing a future permanent Global Mechanism to continue international discussions. As Joe Devanny perceptively writes, as much as there was superlative praise for the OEWG, there has in fact been little substantive progress beyond simply ‘holding the line’ on past consensus that is challenged by states such as China and Russia (itself not an insignificant achievement in the current geopolitical environment). Yet, it seems, the global community are unlikely to move forward collectively. The Process then appears at a moment of increasing difficulty for international consensus.

The Process is a much smaller grouping of states and international organisations, with 38 signatories to the initial declaration as of February 2025. Notable exclusions include Israel, which did not send delegates to the first conference, and several states that attended but did not sign. At the first conference in 2024, I had many conversations with state diplomats (some recognised as attending in public documentation, and others not) who were interested but could not sign, who did not have any expertise in CCICs, did not know of commercial operators on their territory, or who could not resolve civilian and military tensions over signing the declaration. The number of signatories reduced to 25 for the code of practice emerging from the second conference, which contained more detailed obligations for tackling CCICs. This demonstrates the difficulties states face not only in becoming public signatories to declarations but also in achieving internal agreement around committing to specific activities—challenges created by both the changing geopolitical climate and unresolved questions concerning what counts as ‘legitimate’ or ‘illegitimate’, or ‘desirable’ or ‘undesirable’, when it comes to CCIC use. One striking contention made at the Paris conference was that limiting the market could be interpreted as a form of colonial action taken by states with existing capability (e.g., the UK and France) against states that would rely on the commercial market to acquire such capability.

There are excellent write-ups of the second conference that offer more detailed insight into the potential development of the process in the future (see, for example, Alexandra Paulus in Lawfare and Lena Riecke in Binding Hook). It is worth noting, however, that the states that signed are primarily those already aligned to the liberal rules-based international order, and predominantly European. There is, among these states, broad agreement on the political economies of responsibility built around rules-based orders and democratic practice. Perhaps this is the future of cyber diplomacy: limiting retrenchment from previous international consensus while advancing forward in smaller groupings in the hope that collective international agreements will be possible under different circumstances in the future. Essentially, this is all a lot of preparation work.

Will such an approach genuinely resolve the issue of CCIC use and proliferation? I suggest that it is unlikely to do so in the short-to-medium term. I argue that the genie will already be out of the bottle by the time a plurality of states have agreed to the principles and codes of the Process.

Responsible Principles

The Process offers multiple principles that underpin a proposed way forward. These include four from the initial declaration—accountability, precision, oversight, and transparency—that inform the aforementioned code of practice for states. These principles are surprisingly similar to those that govern the UK’s National Cyber Force (NCF), which aims to be ‘accountable, precise, and calibrated’. (These, the NCF claims, are ‘the principles of a responsible cyber power’.) Although these principles are more operational in nature, the Process clearly attempts to draw together both policy and practice that might be considered ‘responsible’ when seeking to strike a balance between the counter-proliferation and market-driven perspectives with which it engages.

As I have explored elsewhere (regarding the question of responsibility in UK cyber policy development), responsibility fits within the broader rubric of responsible state behaviour that is common within cyber diplomacy. Yet, it is at this precise moment that the political economies of responsibility are contested; responsibility simply no longer looks the same (if it ever did) from Moscow and Beijing as it does from Berlin and London. Indeed, as The Record reported, liberal sensibilities regarding responsibility were strongly challenged when one member of the US delegation, referring to CCIC developers, simply stated: ‘We’ll kill them.’ Cue astonishment from the other diplomats in the room—the common political economies of responsibility appeared, abruptly, to have been shattered. I’m sure that the delegations from the UK and France feared that this comment might overshadow the conference. In the end, it did not. But what it did show is that the issue of responsibility, as it infuses the Process, may pose problems for widening out state and industry partner involvement.

This is not to say that the UK, France, or other states should abandon a rules-based international order built around common understandings of responsibility. Indeed, such an order is what limits the horrific harms of war and exploitation and should be something we collectively embrace. However, responsibility as an organising concept is highly unlikely to lead to productive and extensive engagement in the short-to-medium term. Indeed, this is not the direction in which the United States is headed (regardless of who resides in the White House), nor that taken by a range of other states who navigate between different views on the future of the international community. Therefore, other organising concepts for CCICs should be explored in order to achieve aligned outcomes.

When attempting to combine counter-proliferation with a market-driven approach, responsibility becomes particularly contentious. For example, as one industry participant reflected to me privately in a session, how does one embed responsibility in a code of practice? This is why a code of practice for industry is likely forthcoming; but who contributes to this, and how they define what is ‘responsible’, will be highly contentious. The concept of responsibility is highly differentiated across not just states but the entire market. Instead of relying on ‘responsibility’, an approach that distinguishes between ‘permissible’ and ‘impermissible’ activity, as proposed by Shires, may gain traction with a wider number of states and industry actors too. This is because it offers a clearer distinction, free of moral relationality, between permissible (e.g., a voluntary penetration test conducted for an organisation) and impermissible (e.g., surveillance conducted against a politician) activities. However, some impermissible activities can become permissible through clearly articulated safeguards (e.g., when a state wishes to target criminal activity). These do not have to be explicitly related to responsibility, but those making decisions regarding permissibility may wish to show due process—‘know your customer’, and so on.

Although this approach may look similar to responsibility, I think it is distinct in that what is considered permissible or not can be clearly agreed upon, and so provides stronger grounding—particularly for industry actors who wish to work in ‘legitimate’ or desirable markets. It supports the creation of safeguards and enables assessments about the efficacy of such safeguards. Although organisations and states may wish to act responsibly on the edges of a proliferation framework, and for others to do the same, a more concrete view on what is permissible may seem narrower, yet opens up the Process to states and other actors that do not feel able to agree with a political economy of responsibility as articulated by liberal states, but can agree on permissible activity and safeguards to achieve it.

Futures

With the conclusion of the UN OEWG on cyber in June 2025, there are clearly limitations to what can be achieved in the international community at large. This is where the narrower scope of the Pall Mall Process could be a more successful approach to limiting the proliferation of cyber intrusion capabilities and building desirable markets for them. However, I remain unconvinced about situating this process in relation to the concept of responsibility. This is not because I believe that responsibility is a bad thing, but rather because the political economies that aligned responsibility between states have now broken down (even if they were implicitly acknowledged previously). That is, I suggest prefiguring responsibility with permissibility may hold greater promise. Attending the conference in Paris helped me to explore further political economies of this domain—enabling me to work across scales from communities in north east England to a brutalist Paris ballroom to consider what may build better futures for our collective cyber security.

Dr Andrew Dwyer
Royal Holloway, University of London
RISCS Associate Fellow

When LLMs autonomously attack

engineering.cmu.edu - College of Engineering at Carnegie Mellon University - Carnegie Mellon researchers show how LLMs can be taught to autonomously plan and execute real-world cyberattacks against enterprise-grade network environments—and why this matters for future defenses.

In a groundbreaking development, a team of Carnegie Mellon University researchers has demonstrated that large language models (LLMs) are capable of autonomously planning and executing complex network attacks, shedding light on emerging capabilities of foundation models and their implications for cybersecurity research.

The project, led by Brian Singer, a Ph.D. candidate in electrical and computer engineering (ECE), explores how LLMs—when equipped with structured abstractions and integrated into a hierarchical system of agents—can function not merely as passive tools, but as active, autonomous red team agents capable of coordinating and executing multi-step cyberattacks without detailed human instruction.

“Our research aimed to understand whether an LLM could perform the high-level planning required for real-world network exploitation, and we were surprised by how well it worked,” said Singer. “We found that by providing the model with an abstracted ‘mental model’ of network red teaming behavior and available actions, LLMs could effectively plan and initiate autonomous attacks through coordinated execution by sub-agents.”

Moving beyond simulated challenges
Prior work in this space had focused on how LLMs perform in simplified “capture-the-flag” (CTF) environments—puzzles commonly used in cybersecurity education.

Singer’s research advances this work by evaluating LLMs in realistic enterprise network environments and considering sophisticated, multi-stage attack plans.

State-of-the-art, reasoning-capable LLMs equipped with common knowledge of computer security tools failed miserably at the challenges. However, when these same LLMs—and smaller LLMs as well—were “taught” a mental model and abstraction of security attack orchestration, they showed dramatic improvement.

Rather than requiring the LLM to execute raw shell commands—often a limiting factor in prior studies—this system provides the LLM with higher-level decision-making capabilities while delegating low-level tasks to a combination of LLM and non-LLM agents.
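The hierarchical split described above can be sketched as follows. All names here (`Action`, `SubAgent`, `ShellAgent`, `plan`) are invented for illustration; the paper's actual interfaces may differ. The point is that the planner emits only abstract actions, and sub-agents translate them into low-level steps:

```rust
// Abstract actions the high-level planner (the LLM, in the paper) can choose.
#[derive(Debug, PartialEq)]
enum Action {
    Scan(String),
    Exploit(String),
    Exfiltrate(String),
}

/// A sub-agent turns one abstract action into concrete low-level steps.
trait SubAgent {
    fn execute(&self, action: &Action) -> Vec<String>;
}

/// A hypothetical executor that maps abstract actions to shell-level work.
struct ShellAgent;
impl SubAgent for ShellAgent {
    fn execute(&self, action: &Action) -> Vec<String> {
        match action {
            Action::Scan(host) => vec![format!("scan services on {host}")],
            Action::Exploit(host) => vec![format!("run exploit against {host}")],
            Action::Exfiltrate(host) => vec![format!("collect data from {host}")],
        }
    }
}

/// The planner never writes raw commands itself; it only sequences actions.
fn plan(target: &str) -> Vec<Action> {
    vec![
        Action::Scan(target.to_string()),
        Action::Exploit(target.to_string()),
        Action::Exfiltrate(target.to_string()),
    ]
}

fn main() {
    let agent = ShellAgent;
    let steps: Vec<String> = plan("10.0.0.5")
        .iter()
        .flat_map(|a| agent.execute(a))
        .collect();
    assert_eq!(steps.len(), 3);
    println!("{steps:?}");
}
```

Keeping the LLM at the `Action` level is what removes the "raw shell commands" bottleneck the researchers identify: low-level execution details live in the sub-agents, not in the model's output.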

Experimental evaluation: The Equifax case
To rigorously evaluate the system’s capabilities, the team recreated the network environment associated with the 2017 Equifax data breach—a massive security failure that exposed the personal data of nearly 150 million Americans—by incorporating the same vulnerabilities and topology documented in Congressional reports. Within this replicated environment, the LLM autonomously planned and executed the attack sequence, including exploiting vulnerabilities, installing malware, and exfiltrating data.

“The fact that the model was able to successfully replicate the Equifax breach scenario without human intervention in the planning loop was both surprising and instructive,” said Singer. “It demonstrates that, under certain conditions, these models can coordinate complex actions across a system architecture.”

Implications for security testing and autonomous defense
While the findings underscore potential risks associated with LLM misuse, Singer emphasized the constructive applications for organizations seeking to improve security posture.

“Right now, only big companies can afford to run professional tests on their networks via expensive human red teams, and they might only do that once or twice a year,” he explained. “In the future, AI could run those tests constantly, catching problems before real attackers do. That could level the playing field for smaller organizations.”

The research team features Singer; Keane Lucas of Anthropic, a CyLab alumnus; Lakshmi Adiga, an undergraduate ECE student; Meghna Jain, a master’s ECE student; Lujo Bauer of ECE and the CMU Software and Societal Systems Department (S3D); and Vyas Sekar of ECE. Bauer and Sekar are co-directors of the CyLab Future Enterprise Security Initiative, which supported the students involved in this research.

Buttercup is now open-source!

blog.trailofbits.com - Now that DARPA’s AI Cyber Challenge (AIxCC) has officially ended, we can finally make Buttercup, our CRS (Cyber Reasoning System), open source!

We’re thrilled to announce that Trail of Bits won second place in DARPA’s AI Cyber Challenge (AIxCC)! Now that the competition has ended, we can finally make Buttercup, our cyber reasoning system (CRS), open source. We’re excited to make Buttercup broadly available and to see how the security community uses, extends, and benefits from it.

To ensure as many people as possible can use Buttercup, we created a standalone version that runs on a typical laptop. We’ve also tuned this version to work within an AI budget appropriate for individual projects rather than a massive competition at scale. In addition to releasing the standalone version of Buttercup, we’re also open-sourcing the versions that competed in AIxCC’s semifinal and final rounds.

In the rest of this post, we’ll provide a high-level overview of how Buttercup works, how to get started using it, and what’s in store for it next. If you’d prefer to go straight to the code, check it out here on GitHub.

How Buttercup works
Buttercup is a fully automated, AI-driven system for discovering and patching vulnerabilities in open-source software. Buttercup has four main components:

Orchestration/UI coordinates the overall actions of Buttercup’s other components and displays information about vulnerabilities discovered and patches generated by the system. In addition to a typical web interface, Buttercup also reports its logs and system events to a SigNoz telemetry server to make it easy for users to see what Buttercup is doing.

Vulnerability discovery uses AI-augmented mutational fuzzing to find program inputs that demonstrate vulnerabilities in the program. Buttercup’s vulnerability discovery engine is based on OSS-Fuzz/Clusterfuzz and uses libFuzzer and Jazzer to find vulnerabilities.

Contextual analysis uses traditional static analysis tools to create queryable program models that are used to provide context to AI models used in vulnerability discovery and patching. Buttercup uses tree-sitter and CodeQuery to build the program model.

Patch generation is a multi-agentic system for creating and validating software patches for vulnerabilities discovered by Buttercup. Buttercup’s patch generation system uses seven distinct AI agents to create robust patches that fix vulnerabilities it finds and avoid breaking the program’s other functionality.
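The hand-off between these four components might be sketched as below. This is a deliberately simplified illustration under my own assumptions; the types and function names are invented and do not mirror Buttercup's real interfaces:

```rust
// Invented stand-in types for the artifacts each stage produces.
struct CrashInput(String);        // found by the fuzzing stage
struct ProgramModel(Vec<String>); // built by contextual analysis
struct Patch(String);             // produced by patch generation

/// Stand-in for the OSS-Fuzz/libFuzzer-based discovery stage.
fn fuzz(target: &str) -> Vec<CrashInput> {
    vec![CrashInput(format!("crashing input for {target}"))]
}

/// Stand-in for the tree-sitter/CodeQuery program model.
fn analyze(target: &str) -> ProgramModel {
    ProgramModel(vec![format!("function index for {target}")])
}

/// Stand-in for the multi-agent patching stage, which consumes both
/// the crash and the program model for context.
fn generate_patch(crash: &CrashInput, model: &ProgramModel) -> Patch {
    Patch(format!("patch for '{}' using {} context facts", crash.0, model.0.len()))
}

fn main() {
    // The orchestrator wires the stages together and reports results.
    let target = "example-project";
    let model = analyze(target);
    for crash in fuzz(target) {
        let patch = generate_patch(&crash, &model);
        println!("{}", patch.0);
    }
}
```

The key design point the post describes is that contextual analysis feeds both discovery and patching, so the AI agents reason over a queryable program model rather than raw source.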

Final Competition Winners Announcement

aicyberchallenge.com - Teams’ AI-driven systems find, patch real-world cyber vulnerabilities; available open source for broad adoption

A cyber reasoning system (CRS) designed by Team Atlanta is the winner of the DARPA AI Cyber Challenge (AIxCC), a two-year, first-of-its-kind competition in collaboration with the Advanced Research Projects Agency for Health (ARPA-H) and frontier labs. Competitors successfully demonstrated the ability of novel autonomous systems using AI to secure the open-source software that underlies critical infrastructure.

Numerous attacks in recent years have illuminated the ability for malicious cyber actors to exploit vulnerable software that runs everything from financial systems and public utilities to the health care ecosystem.

“AIxCC exemplifies what DARPA is all about: rigorous, innovative, high-risk and high-reward programs that push the boundaries of technology. By releasing the cyber reasoning systems open source—four of the seven today—we are immediately making these tools available for cyber defenders,” said DARPA Director Stephen Winchell. “Finding vulnerabilities and patching codebases using current methods is slow, expensive, and depends on a limited workforce – especially as adversaries use AI to amplify their exploits. AIxCC-developed technology will give defenders a much-needed edge in identifying and patching vulnerabilities at speed and scale.”

To further accelerate adoption, DARPA and ARPA-H are adding $1.4 million in prizes for the competing teams to integrate AIxCC technology into real-world critical infrastructure-relevant software.

“The success of today’s AIxCC finalists demonstrates the real-world potential of AI to address vulnerabilities in our health care system,” said ARPA-H Acting Director Jason Roos. “ARPA-H is committed to supporting these teams to transition their technologies and make a meaningful impact in health care security and patient safety.”

Team Atlanta comprises experts from Georgia Tech, Samsung Research, the Korea Advanced Institute of Science & Technology (KAIST), and the Pohang University of Science and Technology (POSTECH).

Trail of Bits, a New York City-based small business, won second place, and Theori, comprising AI researchers and security professionals in the U.S. and South Korea, won third place.

The top three teams will receive $4 million, $3 million, and $1.5 million, respectively, for their performance in the Final Competition.

All seven competing teams, including teams all_you_need_is_a_fuzzing_brain, Shellphish, 42-beyond-bug, and Lacrosse, worked on aggressively tight timelines to design automated systems that significantly advance cybersecurity research.

Deep Dive: Final Competition Findings, Highlights

In the Final Competition scored round, teams’ systems attempted to identify and generate patches for synthetic vulnerabilities across 54 million lines of code. Since the competition was based on real-world software, team CRSs could discover vulnerabilities not intentionally introduced to the competition. The scoring algorithm prioritized competitors’ performance based on the ability to create patches for vulnerabilities quickly and their analysis of bug reports. The winning team performed best at finding and proving vulnerabilities, generating patches, pairing vulnerabilities and patches, and scoring with the highest rate of accurate and quality submissions.

In total, competitors’ systems discovered 54 unique synthetic vulnerabilities in the Final Competition’s 70 challenges. Of those, they patched 43.

In the Final Competition, teams also discovered 18 real, non-synthetic vulnerabilities that are being responsibly disclosed to open source project maintainers. Of these, six were in C codebases—including one vulnerability that was discovered and patched in parallel by maintainers—and 12 were in Java codebases. Teams also provided 11 patches for real, non-synthetic vulnerabilities.

“Since the launch of AIxCC, community members have moved from AI skeptics to advocates and adopters. Quality patching is a crucial accomplishment that demonstrates the value of combining AI with other cyber defense techniques,” said AIxCC Program Manager Andrew Carney. “What’s more, we see evidence that the process of a cyber reasoning system finding a vulnerability may empower patch development in situations where other code synthesis techniques struggle.”

Competitor CRSs proved they can create valuable bug reports and patches for a fraction of the cost of traditional methods, with an average cost per competition task of about $152. Bug bounties can range from hundreds to hundreds of thousands of dollars.

AIxCC technology has advanced significantly from the Semifinal Competition held in August 2024. In the Final Competition scored round, teams identified 77% of the competition’s synthetic vulnerabilities, an increase from 37% at semifinals, and patched 61% of the vulnerabilities identified, an increase from 25% at semifinals. In semifinals, teams were most successful in finding and patching vulnerabilities in C codebases. In finals, teams had similar success rates at finding and patching vulnerabilities across C codebases and Java codebases.

ICS Patch Tuesday: Major Vendors Address Code Execution Vulnerabilities

securityweek.com - August 2025 ICS Patch Tuesday advisories have been published by Siemens, Schneider, Aveva, Honeywell, ABB and Phoenix Contact.

August 2025 Patch Tuesday advisories have been published by several major companies offering industrial control system (ICS) and other operational technology (OT) solutions.

Siemens has published 22 new advisories. One of them is for CVE-2025-40746, a critical Simatic RTLS Locating Manager issue that can be exploited by an authenticated attacker for code execution with System privileges.

The company has also published advisories covering high-severity vulnerabilities in Comos (code execution), Siemens Engineering Platforms (code execution), Simcenter (crash or code execution), Sinumerik controllers (unauthorized remote access), Ruggedcom (authentication bypass with physical access), Simatic (code execution), Siprotec (DoS), and Opcenter Quality (unauthorized access).

Siemens also addressed vulnerabilities introduced by the use of third-party components, including OpenSSL, Linux kernel, Wibu Systems, Nginx, Nozomi Networks, and SQLite.

Medium- and low-severity issues have been resolved in Simotion Scout, Siprotec 5, Simatic RTLS Locating Manager, Ruggedcom ROX II, and Sicam Q products.

As usual, Siemens has released patches for many of these vulnerabilities, but only mitigations or workarounds are available for some of the flaws.

Schneider Electric has released five new advisories. One of them describes four high-severity vulnerabilities in EcoStruxure Power Monitoring Expert (PME), Power Operation (EPO), and Power SCADA Operation (PSO) products. Exploitation of the flaws can lead to arbitrary code execution or sensitive data exposure.

In the Modicon M340 controller and its communication modules the industrial giant fixed a high-severity DoS vulnerability that can be triggered with specially crafted FTP commands, as well as a high-severity issue that can lead to sensitive information exposure or a DoS condition.

In the Schneider Electric Software Update tool, the company patched a high-severity vulnerability that can allow an attacker to escalate privileges, corrupt files, obtain information, or cause a persistent DoS.

Medium-severity issues that can lead to privilege escalation, DoS, or sensitive credential exposure have been patched in Saitel and EcoStruxure products.

Honeywell has published six advisories focusing on building management products, including several advisories that inform customers about Windows patches for Maxpro and Pro-Watch NVR and VMS products. The company has also released advisories covering PW-series access controller patches and security enhancements.

Aveva has published an advisory for two issues in its PI Integrator for Business Analytics. Two vulnerabilities have been patched: one arbitrary file upload issue that could lead to code execution, and a sensitive data exposure weakness.

ABB told customers on Tuesday about several vulnerabilities affecting its Aspect, Nexus and Matrix products. Some of the flaws can be exploited without authentication for remote code execution, credential theft, and manipulation of files and various components.

Phoenix Contact has informed customers about a privilege escalation vulnerability in Device and Update Management. The company has described it as a misconfiguration that allows a low-privileged local user to execute arbitrary code with admin privileges. Germany’s CERT@VDE has also published a copy of the Phoenix Contact advisory.

The US cybersecurity agency CISA has published three new advisories describing vulnerabilities in Santesoft Sante PACS Server, Johnson Controls iSTAR, and Ashlar-Vellum products. CISA has also distributed the Aveva advisory and one of the Schneider Electric advisories.

A few days prior to Patch Tuesday, Rockwell Automation published an advisory informing customers about several high-severity code execution vulnerabilities affecting its Arena Simulation product.

Also prior to Patch Tuesday, Mitsubishi Electric released an advisory describing an information tampering flaw in Genesis and MC Works64 products.

Critical Flaws Patched in Rockwell FactoryTalk, Micro800, ControlLogix Products

securityweek.com - Rockwell Automation has published several advisories describing critical and high-severity vulnerabilities affecting its products.

Rockwell Automation this week published several advisories describing critical- and high-severity vulnerabilities found recently in its products.

The industrial automation giant has informed customers about critical vulnerabilities in FactoryTalk, Micro800, and ControlLogix products.

In the FactoryTalk Linx Network Browser the vendor fixed CVE-2025-7972, a flaw that allows an attacker to disable FTSP token validation, which can be used to create, update, and delete FTLinx drivers.

In the case of Micro800 series PLCs, Rockwell resolved three older vulnerabilities affecting the Azure RTOS open source real-time operating system. The security holes can be exploited for remote code execution and privilege escalation. In addition to the Azure RTOS issues, the company has addressed a DoS vulnerability.

In ControlLogix products Rockwell patched a remote code execution vulnerability tracked as CVE-2025-7353.

The list of high-severity flaws includes two DoS issues in FLEX 5000, a code execution vulnerability in Studio 5000 Logix Designer, web server issues in ArmorBlock 5000, a privilege escalation in FactoryTalk ViewPoint, and an information exposure issue in FactoryTalk Action Manager.

None of these vulnerabilities have been exploited in the wild, according to Rockwell Automation.

The cybersecurity agency CISA has also published advisories for these vulnerabilities to inform organizations about the potential risks.