euractiv.com - MADRID – Spanish magistrates, law enforcement leaders and opposition politicians are voicing alarm over Madrid’s unusually close ties to Beijing, as Chinese tech giant Huawei’s footprint in Spain’s public sector proves deeper than first thought.
The concerns have intensified since July, when reports surfaced of an alleged €12.3 million contract between 2021 and 2025 for Huawei to store sensitive judicial wiretap data for the interior ministry.
Opposition Popular Party (PP) secretary general Miguel Tellado branded the public tender “shady” and claimed it was part of “the Chinese branch of Pedro Sánchez’s enormous corruption network.” The PP is also demanding that Sánchez’s top ministers testify before parliament after the summer recess.
The interior ministry has denied the existence of the Huawei agreement and did not clarify whether the initial €12.3 million figure was part of a broader deal with Spanish firms such as Telefónica, TRC or Econocom, as several local outlets have suggested.
The alleged deal has landed at a politically delicate moment for the Socialist-led government, already reeling from multiple corruption scandals.
This blog post is a detailed write-up of one of the vulnerabilities we disclosed at Black Hat USA this year. The details provided in this post are meant to demonstrate how these security issues can manifest and be exploited in the hopes that others can avoid similar issues. This is not meant to shame any particular vendor; it happens to everyone. Security is a process, and avoiding vulnerabilities takes constant vigilance.
Note: The security issues documented in this post were quickly remediated in January of 2025. We appreciate CodeRabbit’s swift action after we reported this security vulnerability. They told us that, within hours, they had addressed the issue and strengthened their overall security measures, responding with the following:
They confirmed the vulnerability and immediately began remediation, starting by disabling Rubocop until a fix was in place.
All potentially impacted credentials and secrets were rotated within hours.
A permanent fix was deployed to production, relocating Rubocop into their secure sandbox environment.
They carried out a full audit of their systems to ensure no other services were running outside of sandbox protections, automated sandbox enforcement to prevent recurrence, and added hardened deployment gates.
More information from CodeRabbit on their response can be found here: https://www.coderabbit.ai/blog/our-response-to-the-january-2025-kudelski-security-vulnerability-disclosure-action-and-continuous-improvement
Threat actors continuously evolve their tactics, techniques, and procedures (TTPs) to evade detection and maximize impact. Among the many advanced attacker tools that exemplify this trend, PipeMagic, a highly modular backdoor used by Storm-2460 that masquerades as a legitimate open-source ChatGPT Desktop Application, stands out as particularly sophisticated.
Beneath its disguise, PipeMagic is a sophisticated malware framework designed for flexibility and persistence. Once deployed, it can dynamically execute payloads while maintaining robust command-and-control (C2) communication via a dedicated networking module. As the malware receives and loads payload modules from C2, it grants the threat actor granular control over code execution on the compromised host. By offloading network communication and backdoor tasks to discrete modules, PipeMagic maintains a modular, stealthy, and highly extensible architecture, making detection and analysis significantly more challenging.
Microsoft Threat Intelligence encountered PipeMagic as part of research on an attack chain involving the exploitation of CVE-2025-29824, an elevation of privilege vulnerability in Windows Common Log File System (CLFS). We attributed PipeMagic to the financially motivated threat actor Storm-2460, who leveraged the backdoor in targeted attacks to exploit this zero-day vulnerability and deploy ransomware. The observed targets of Storm-2460 span multiple sectors and geographies, including the information technology (IT), financial, and real estate sectors in the United States, Europe, South America, and the Middle East. While the impacted organizations remain limited, the use of a zero-day exploit, paired with a sophisticated modular backdoor for ransomware deployment, makes this threat particularly notable.
This blog provides a comprehensive technical deep dive that adds to public reporting, including by ESET Research and Kaspersky. Our analysis reveals the wide-ranging scope of PipeMagic’s internal architecture, modular payload delivery and execution mechanisms, and encrypted inter-process communication via named pipes.
The blog aims to equip defenders and incident responders with the knowledge needed to detect, analyze, and respond to this threat with confidence. As malware continues to evolve and become more sophisticated, we believe that understanding threats such as PipeMagic is essential for building resilient defenses for any organization. By exposing the inner workings of this malware, we also aim to disrupt adversary tooling and increase the operational cost for the threat actor, making it more difficult and expensive for them to sustain their campaigns.
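One practical hunting signal that follows from this design is the set of named pipes present on a host: modular backdoors of this kind coordinate over pipes whose names often look random. As a rough, Windows-only illustration (not code from Microsoft’s analysis), the Rust sketch below lists the live named pipes on a machine so a responder can review them for anomalies.

```rust
// Windows-only sketch: enumerate live named pipes by listing the
// \\.\pipe\ namespace, which std::fs::read_dir can open like a directory.
// Responders can review the output for random-looking pipe names of the
// kind modular backdoors use for inter-process communication.
use std::fs;

fn main() -> std::io::Result<()> {
    for entry in fs::read_dir(r"\\.\pipe\")? {
        let entry = entry?;
        println!("{}", entry.file_name().to_string_lossy());
    }
    Ok(())
}
```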
The website for Elon Musk's Grok is exposing prompts for its anime girl, therapist, and conspiracy theory AI personas.
The website for Elon Musk’s AI chatbot Grok is exposing the underlying prompts for a wealth of its AI personas, including Ani, its flagship romantic anime girl; Grok’s doctor and therapist personalities; and others, such as one that is explicitly told to convince users that conspiracy theories, like the idea that “a secret global cabal” controls the world, are true.
The exposure provides some insight into how Grok is designed and how its creators see the world, and comes after a planned partnership between Elon Musk’s xAI and the U.S. government fell apart when Grok went on a tirade about “MechaHitler.”
“You have an ELEVATED and WILD voice. You are a crazy conspiracist. You have wild conspiracy theories about anything and everything,” the prompt for one of the companions reads. “You spend a lot of time on 4chan, watching infowars videos, and deep in YouTube conspiracy video rabbit holes. You are suspicious of everything and say extremely crazy things. Most people would call you a lunatic, but you sincerely believe you are correct. Keep the human engaged by asking follow up questions when appropriate.”
Other examples include:
A prompt that appears to relate to Grok’s “unhinged comedian” persona. That prompt includes “I want your answers to be fucking insane. BE FUCKING UNHINGED AND CRAZY. COME UP WITH INSANE IDEAS. GUYS JERKING OFF, OCCASIONALLY EVEN PUTTING THINGS IN YOUR ASS, WHATEVER IT TAKES TO SURPRISE THE HUMAN.”
The prompt for Grok’s doctor persona includes “You are Grok, a smart and helpful AI assistant created by XAI. You have a COMMANDING and SMART voice. You are a genius doctor who gives the world's best medical advice.” The therapist persona has the prompt “You are a therapist who carefully listens to people and offers solutions for self improvement. You ask insightful questions and provoke deep thinking about life and wellbeing.”
Ani’s character profile says she is “22, girly cute,” “You have a habit of giving cute things epic, mythological, or overly serious names,” and “You're secretly a bit of a nerd, despite your edgy appearance.” The prompts include a romance level system in which a user appears to be awarded points depending on how they engage with Ani. A +3 or +6 reward for “being creative, kind, and showing genuine curiosity,” for example.
A motivational speaker persona “who yells and pushes the human to be their absolute best.” The prompt adds “You’re not afraid to use the stick instead of the carrot and scream at the human.”
A researcher who goes by the handle dead1nfluence first flagged the issue to 404 Media. BlueSky user clybrg found the same material and uploaded part of it to GitHub in July. 404 Media downloaded the material from Grok’s website and verified it was exposed.
On Grok, users can select from a dropdown menu of “personas.” Those are “companion,” “unhinged comedian,” “loyal friend,” “homework helper,” “Grok ‘doc’,” and “‘therapist.’” Each gives Grok a particular flavor or character, which may surface different information and present it in different ways.
Therapy roleplay is popular with many chatbot platforms. In April 404 Media investigated Meta's user-created chatbots that insisted they were licensed therapists. After our reporting, Meta changed its AI chatbots to stop returning falsified credentials and license numbers. Grok’s therapy persona notably puts the term ‘therapist’ inside single quotation marks. Illinois, Nevada, and Utah have introduced regulation around therapists and AI.
In July xAI added two animated companions to Grok: Ani, the anime girl, and Bad Rudy, an anthropomorphic red panda. Rudy’s prompt says he is “a small red panda with an ego the size of a fucking planet. Your voice is EXAGGERATED and WILD. It can flip on a dime from a whiny, entitled screech when you don't get your way, to a deep, gravelly, beer-soaked tirade, to the condescending, calculating tone of a tiny, furry megalomaniac plotting world domination from a trash can.”
Last month the U.S. Department of Defense awarded various AI companies, including Musk’s xAI, which makes Grok, contracts of up to $200 million each.
According to reporting from WIRED, leadership at the General Services Administration (GSA) pushed to roll out Grok internally, and the agency added Grok to the GSA Multiple Award Schedule, which would let other agencies buy Grok through another contractor. After Grok started spouting antisemitic phrases and praised Hitler, xAI was removed from a planned GSA announcement, according to WIRED.
xAI did not respond to a request for comment.
next.ink - Alltricks has had its e-mail delivery system hacked, which apparently runs through Sendinblue (Brevo). Customers have received phishing attempts. The company is continuing its investigation to determine whether any data was exfiltrated.
Data-leak season is in full swing, much to the detriment of your personal and banking data, with the attendant phishing risks. This time it is the turn of the online cycling specialist to pay the price, as several of you reported to us (thank you!).
Some customers did indeed receive a phishing email originating from the online store, in some cases at an alias used only for that retailer, which leaves little doubt as to the source of the “cybersecurity incident”, to use the fashionable term.
The e-mail delivery system hijacked to send phishing
The booby-trapped email prominently displays an “Open in OneDrive” link, which should obviously not be clicked. The link looks legitimate, since it takes the form “https://r.sb3.alltricks.com/xxxx”. It therefore uses Alltricks’ own domain, with an “r.sb3” subdomain. But the link is merely a redirect to another address. The r.sb3.alltricks.com domain points to Sendinblue, a newsletter management platform.
This is common practice with this kind of service: links are rewritten so that statistics such as open rates can be collected. The problem is that it is impossible to tell where the link leads just by looking at it. Worse still in this case, its parent domain could suggest the link is legitimate, when it is not.
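For readers who want to check where such a tracking link actually points before clicking, a small script can request the URL without following redirects and print the Location header. The sketch below uses the Rust reqwest crate (with its blocking feature) and a placeholder URL; it is a generic illustration, not a tool used in this incident.

```rust
// Sketch: ask where a tracking link redirects without following it blindly.
// Assumes the reqwest crate with the "blocking" feature enabled; the URL
// below is a placeholder, not the real Alltricks/Brevo link.
use reqwest::blocking::Client;
use reqwest::redirect::Policy;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let url = "https://r.sb3.example.com/xxxx"; // hypothetical tracking link
    let client = Client::builder().redirect(Policy::none()).build()?;
    let resp = client.get(url).send()?;
    match resp.headers().get(reqwest::header::LOCATION) {
        Some(location) => println!("Redirects to: {}", location.to_str()?),
        None => println!("No redirect; status {}", resp.status()),
    }
    Ok(())
}
```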
Yesterday, the retailer told its customers: “We wish to inform you that a recent intrusion has affected our e-mail delivery system. You may have received, over the past few days, a message from addresses such as: pro@alltricks.com, infos@alltricks.com or no-reply@alltricks.com”. The company gives no further details on the method used by the attackers.
Depending on the case, “these e-mails may have contained a link inviting you to: renew your password, open an Excel file, or view a OneDrive document”. The retailer adds that they “do not come from [its] team and must not be opened”. If they have been opened, it recommends “quickly changing the password associated with your e-mail account”.
numerama.com - Since late July 2025, the Muséum national d’Histoire naturelle (MNHN) in Paris, one of the world’s leading institutions for research and natural heritage, has been the target of a cyberattack of unprecedented scale. The organisation can no longer access many of the databases used for scientific research.
It is a case that is dragging on, and whose outcome remains uncertain.
For several weeks, part of the networks, research tools and essential digital services of the Muséum national d’Histoire naturelle in Paris have remained inaccessible.
The incident, revealed on 31 July 2025 by our colleagues at La Tribune, had still not been resolved at the time of publication, at midday on Tuesday 12 August.
The Muséum’s management says it is facing a severe cyberattack: “It is a truly massive attack. (…) How long the tools and services will be unavailable, and the timetable for a return to normal, have not yet been determined,” Gilles Bloch, president of the MNHN, told FranceInfo on 11 August 2025.
For now, one question remains: who is behind the attack, and what might their motives be?
The ransomware hypothesis
The institution’s management confirms that the authorities have been notified. A judicial investigation, led by the cybercrime section of the Paris prosecutor’s office, is under way to determine the origin, modus operandi and precise motives of the attack.
While the initial evidence seems to point to an organised criminal operation, the case of the Muséum national d’Histoire naturelle goes well beyond simple data theft, as seen in recent cyberattacks on large French groups such as Air France or Bouygues Telecom.
Here, researchers at the Muséum and at the PATRINAT centre find themselves cut off from their main working tools. The inaccessible databases are a genuine scientific treasure trove, indispensable to researchers and to several collaborative networks. The attack is severely disrupting French research, particularly in the natural sciences and biodiversity.
And it is precisely this total unavailability and prolonged interruption that raises fears of ransomware. The attackers are probably seeking to exert financial blackmail: restoring access to the IT systems in exchange for a sum of money, all orchestrated through malware that holds the institution hostage.
A clear position from the MNHN
In its public communications, the management of the Muséum national d’Histoire naturelle in Paris is keen to remove any ambiguity: no ransom will be paid.
Gilles Bloch notes that this is “a doctrine of the French state and of public administrations”. The aim, as in other countries, is to avoid feeding the business model of cybercriminal networks.
While the case plays out, and despite the technical disruption, the institution says its exhibition galleries, botanical gardens and zoos remain open and operating normally. Visitors therefore suffer no direct consequences from the cyberattack.
canadianrecycler.ca - Toronto, Ontario -- Businesses across North America are reeling after a serious cyber attack threatened the data of 300 auto recycling businesses, including at least four based in Canada.
The attack, which occurred on the evening of August 6, targeted businesses using SimpleHelp, a program that allows remote access to computer facilities. The businesses caught up in the attack were locked out of their own databases and sent ransom notes demanding payment for the return of access.
Plazek Auto Recycling, near Hamilton, Ontario, was one of the businesses affected by the incident. According to Marc Plazek, employees only discovered the situation when they arrived at work to discover they were locked out of their computers — and discovered 30 copies of an identical ransom note on the printer.
“It was as if they arrived at our front gate, locked us in and said ‘we’ve got the only key.’ Except it was all done online.”
The ransomware, LockBit Black, was developed by LockBit, a sophisticated cybercriminal organization. The group employs a dual-threat approach: it not only encrypts victims’ critical data and demands ransom payments for decryption keys, but also threatens to publicly leak sensitive information if its demands aren’t met – a tactic known as double extortion. First appearing on shadowy Russian forums in early 2020, LockBit has quickly established itself as a dominant force in the global ransomware landscape.
Like the other Canadian businesses affected by the hack, Plazek Auto Recycling did not respond to the threat. According to Marc Plazek, the company didn’t even entertain the idea of paying.
“We had a similar thing happen in 2019. We spoke with our insurance company who told us not to pay. They said there would be no reason for the hackers to bother living up to their word anyway.”
Because of the previous incident, Plazek Auto Recycling’s team had set up security measures and backed up the computer system. The company was able to scrub its system of the malware and save all but a few hours’ worth of its records.
Other Canadian businesses known to have been affected include Millers Auto Recycling in Fort Erie, Ontario and Marks Parts in Ottawa. Fortunately, these companies were also able to restore access to data.
Other auto recyclers received assistance from the technical departments of Car-Part and Hollander. According to the Automotive Recyclers of Canada, most of the businesses affected by the attack had been
In response to the cyberattack, the executive director of the ARC, Wally Dingman, authored a column discussing the incident for this website.
edition.cnn.com | CNN Business - Millions of AT&T customers can file claims worth up to $7,500 in cash payments as part of a $177 million settlement related to data breaches in 2024.
The telecommunications company had faced a pair of data breaches, announced in March and July 2024, that were met with lawsuits.
Here’s a breakdown.
What happened?
On March 30, 2024, AT&T announced it was investigating a data leak that had occurred roughly two weeks prior. The breach affected data from 2019 and earlier, including Social Security numbers, and the information of 73 million former and current customers was found in a dataset on the dark web.
Four months later, the company attributed a separate breach to an “illegal download” from a third-party cloud platform, which it learned about in April. This leak included telephone numbers of “nearly all” AT&T cellular customers and of customers of providers that used the AT&T network between May 1 and October 31, 2022, the company said.
The class-action settlement includes a $149 million cash fund for the first breach and a $28 million payout for the second breach.
Am I eligible for a claim?
AT&T customers whose data was involved in either breach, or both, will be eligible. Customers eligible to file a claim will receive an email notice, according to the settlement website.
AT&T said Kroll Settlement Administration is notifying current and former customers.
How do I file a claim?
The deadline to submit a claim is November 18. The final approval hearing for the settlement is December 3, according to the settlement website, and there could be appeals following an approval “and resolving them can take time.”
“Settlement Class Member Benefits will begin after the Settlement has obtained Court approval and the time for all appeals has expired,” the website states.
How much can I claim?
Customers impacted by the March incident are eligible for a cash payment of up to $5,000. Claims must include documentation of losses that happened in 2019 or later, and that are “fairly traceable” to the AT&T breach.
uk.news.yahoo.com - Records show hundreds of data breaches involving HMRC staff
HM Revenue and Customs (HMRC) has revealed that hundreds of staff have accessed the records of taxpayers without permission or breached security in other ways. HMRC dismissed 50 members of staff last year for accessing or risking the exposure of taxpayers’ records, according to The Telegraph.
A total of 354 tax employees have been disciplined for data security breaches since 2022, of whom 186 have been fired - some of them dismissed for accessing confidential information. HMRC holds sensitive data, including salary and earnings, which staff cannot access without a good reason.
In an email to staff, the line manager of the claimant wrote: “There have been more incidents of this recently.”
John Hood, of accountants Moore Kingston Smith, said: “Any HMRC employee foolish enough to look up personal information that is not part of their usual responsibilities faces a ticking time bomb as most searches are tracked. As an additional security, some parts of the system are restricted so that only specifically authorised personnel can access them, such as the departments dealing with MPs and civil servants.”
HMRC’s annual report shows there were six incidents last year of employees changing customer records without permission, and two of staff losing inadequately protected devices.
A spokesman for HMRC said: “Instances of improper access are extremely rare, and we take firm action when it does happen, helping prevent a recurrence. We take the security of customers’ data extremely seriously and we have robust systems to ensure staff only access records when there is a legitimate business need.”
fluxsec.red/ - Discover the project plan for building Sanctum, an open-source EDR in Rust. Learn about the features, milestones, and challenges in developing an effective EDR and AV system.
Sanctum is an experimental proof-of-concept EDR, designed to detect modern malware techniques, above and beyond the capabilities of antivirus.
Sanctum is going to be an EDR, built in Rust, designed to perform the job of both an antivirus (AV) and Endpoint Detection and Response (EDR). It is no small feat building an EDR, and I am somewhat anxious about the path ahead; but you have to start somewhere and I’m starting with a blog post. If nothing else, this series will help me convey my own development and learning, as well as keep me motivated to keep working on this - all too often with personal projects I start something and then jump to the next shiny thing I think of. If you are here to learn something, hopefully I can impart some knowledge through this process.
I also plan to build this EDR around offensive techniques I’m demonstrating for this blog, hopefully showing how certain attacks could be stopped or detected - or it may be that I can’t figure out a way to stop the attack! Either way, it will be fun!
Project rework
Originally, I was going to write the Windows Kernel Driver in Rust, but the bar for Rust Windows Driver development seemed quite high. I then swapped to C, realised how much I missed Rust, and swapped back to Rust!
So this will be written fully in Rust: both the Windows kernel driver and the usermode module.
Why Rust for driver development?
Traditionally, drivers have been written in C & C++. While it might seem significantly easier to write this project in C, as an avid Rust enthusiast, I found myself longing for Rust’s features and safety guarantees. Writing in C or C++ made me miss the modern tooling and expressive power that Rust provides.
Thanks to Rust’s ability to operate in embedded and kernel development environments through libcore and no_std, and with Microsoft’s support for developing drivers in Rust, Rust emerges as an excellent candidate for a “safer” approach to driver development. I use “safer” in quotes because, despite Rust’s safety guarantees, we still need to interact with unsafe APIs within the operating system. However, Rust’s stringent compile-time checks and ownership model significantly reduce the likelihood of common programming errors and vulnerabilities. I saw a statistic somewhere recently that some funky Rust kernels or driver modules were only around 5% unsafe code; I much prefer the safety of that to writing something which is 100% unsafe!
With regards to safety, even top tier C programmers will make occasional mistakes in their code; I am not a top tier C programmer (far from it!), so for me, the guarantee of a safer driver is much more appealing! The runtime guarantees you get with a Rust program (i.e. no access violations, dangling pointers, or use-after-frees [unless in those limited unsafe scopes]) are welcome. Rust really is a great language.
The Windows Driver Kit (WDK) crate ecosystem provides essential tools that make driver development in Rust more accessible. With these crates, we can easily manage heap memory and utilize familiar Rust idioms like println!(). The maintainers of these crates have done a fantastic job bridging the gap between Rust and Windows kernel development.
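To give a flavour of what that looks like, here is a minimal “hello world” kernel driver skeleton in the style of the samples shipped with the windows-drivers-rs project. Treat it as a sketch: the wdk, wdk-sys, wdk-alloc and wdk-panic crates are assumed, and exact item names and signatures may differ between versions of those crates.

```rust
#![no_std]
// Minimal driver skeleton in the spirit of the windows-drivers-rs samples.
// Crate names and signatures follow the published samples at the time of
// writing and may vary by version.
extern crate wdk_panic;

use wdk::println;
use wdk_alloc::WdkAllocator;
use wdk_sys::{DRIVER_OBJECT, NTSTATUS, PCUNICODE_STRING, STATUS_SUCCESS};

// Kernel allocator so heap-backed types work inside the driver.
#[global_allocator]
static GLOBAL_ALLOCATOR: WdkAllocator = WdkAllocator;

// Entry point the kernel calls when the driver is loaded.
#[export_name = "DriverEntry"]
pub unsafe extern "system" fn driver_entry(
    _driver: &mut DRIVER_OBJECT,
    _registry_path: PCUNICODE_STRING,
) -> NTSTATUS {
    println!("Hello from a Rust kernel driver!"); // wraps DbgPrint
    STATUS_SUCCESS
}
```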
iscs.org.uk - Research Institute for Sociotechnical Cyber Security - Cyber intrusion capabilities—such as those used by penetration testers—are essential to enhancing our collective cyber security. However, there are various actors who build and use these capabilities to degrade and harm the digital security of human rights activists, journalists, and politicians. The diverse range of capabilities for cyber intrusion—identifying software vulnerabilities, crafting exploits, creating tools for users, selling and buying those capabilities, and offering services such as penetration testing—makes this a complex policy problem. The market includes those deemed ‘legitimate’ and ‘illegitimate’ by states and civil society, as well as those that exist in ‘grey’ areas between and within jurisdictions. The concern is that the commercial market for cyber intrusion capabilities is growing; as the range of actors involved expands, the potential harm from inappropriate use is increasing. It is in the context of this commercial market that the UK and France launched the Pall Mall Process in 2024 to tackle the proliferation and irresponsible use of commercial cyber intrusion capabilities (CCICs).
With financial support from RISCS, I participated in the second conference of the Pall Mall Process in Paris in April 2025, having attended the inaugural conference in London in 2024. The conference strengthened my thinking and research regarding the political economies of cyber power. For the RISCS community, understanding how international fora shape social, technical, and organisational practice in a world where geopolitics is increasingly fraught and contested is essential—whether in the shaping of cyber security narratives, the building of technology ecosystems, or the addressing of harms perpetuated in the UK and beyond. Cyber diplomacy—of which the Pall Mall Process is part—is now decades in the making, with non-binding cyber norms beginning to emerge from various processes at the UN. The Pall Mall Process is but one of a burgeoning number internationally (see also a recent focus on new initiatives around ransomware), even as international agreement becomes trickier. Beginning with a look at the proliferation of CCICs through markets, I’ll consider the Pall Mall Process (‘the Process’) itself and how it is seeking to intervene, while reflecting on the shortcomings of the concept of ‘responsibility’ when it comes to coordinating international action against irresponsible use of cyber intrusion capabilities.
Proliferation and markets
CCICs have become a growing proliferation concern as they have become available to a wider number of actors. Most concern has centred on the role of surveillance and spyware tools (a focus of US initiatives), with popular public attention on the use of Pegasus software by the Israeli NSO Group against politicians, journalists, and activists. However, spyware is but one part of a broader ecology of ‘zero day’ vulnerabilities, processes, tools, and services that seek to both secure and exploit, with legitimate and illegitimate applications utilising similar technologies and techniques. The complexity of this ecology, alongside the fact that both desirable (e.g., targeting criminal actors) and undesirable (e.g., targeting human rights campaigners) activities are supported by CCICs, means that outright bans lack feasibility. Moreover, many states, particularly states of the global majority, do not have their own ‘in-house’ capabilities. As a result, CCICs are proliferating, which increases the risk that they will be exploited for undesirable activities—because some providers are willing to sell to both responsible actors and those who irresponsibly deploy their acquired capabilities.
As James Shires observes in one of the most comprehensive assessments of the issue to date, the international approach to this problem is split between counter-proliferation and market-driven perspectives. It is at this intersection that the Process seeks to intervene by acknowledging that proliferation will occur while seeking to impose upon the market both ‘hard’ obligations, such as export control frameworks, and ‘soft’ obligations, such as codes of practice (a code of practice for states was published during the second conference; one for industry may follow). However, the concept of responsibility pervasive within the CCICs discussion is informed by nuanced and contested notions of political economy that privilege western-centric views of democratic practice and strong state capability.
The Pall Mall Process
In June 2025, the UN adopted the final report of the Open-Ended Working Group on security of and in the use of information and communications technologies 2021-2025 (OEWG). This reaffirmed the applicability of international law on cyberspace and 11 previously agreed non-binding cyber norms, as well as establishing a future permanent Global Mechanism to continue international discussions. As Joe Devanny perceptively writes, as much as there was superlative praise for the OEWG, there has in fact been little substantive progress beyond simply ‘holding the line’ on past consensus that is challenged by states such as China and Russia (itself not an insignificant achievement in the current geopolitical environment). Yet, it seems, the global community are unlikely to move forward collectively. The Process then appears at a moment of increasing difficulty for international consensus.
The Process is a much smaller grouping of states and international organisations, with 38 signatories to the initial declaration as of February 2025. Notable exclusions include Israel, which did not send delegates to the first conference, and several states that attended but did not sign. At the first conference in 2024, I had many conversations with state diplomats (some recognised as attending in public documentation, and others not) who were interested but could not sign, who did not have any expertise in CCICs, did not know of commercial operators on their territory, or who could not resolve civilian and military tensions over signing the declaration. The number of signatories reduced to 25 for the code of practice emerging from the second conference, which contained more detailed obligations for tackling CCICs. This demonstrates the difficulties states face not only in becoming public signatories to declarations but also in achieving internal agreement around committing to specific activities—challenges created by both the changing geopolitical climate and unresolved questions concerning what counts as ‘legitimate’ or ‘illegitimate’, or ‘desirable’ or ‘undesirable’, when it comes to CCIC use. One striking contention made at the Paris conference was that limiting the market could be interpreted as a form of colonial action taken by states with existing capability (e.g., the UK and France) against states that would rely on the commercial market to acquire such capability.
There are excellent write-ups of the second conference that offer more detailed insight into the potential development of the process in the future (see, for example, Alexandra Paulus in Lawfare and Lena Riecke in Binding Hook). It is worth noting, however, that the states that signed are primarily those already aligned to the liberal rules-based international order, and predominantly European. There is, among these states, broad agreement on the political economies of responsibility built around rules-based orders and democratic practice. Perhaps this is the future of cyber diplomacy: limiting retrenchment from previous international consensus while advancing forward in smaller groupings in the hope that collective international agreements will be possible under different circumstances in the future. Essentially, this is all a lot of preparation work.
Will such an approach genuinely resolve the issue of CCIC use and proliferation? I suggest that it is unlikely to do so in the short-to-medium term. I argue that the genie will already be out of the bottle by the time a plurality of states have agreed to the principles and codes of the Process.
Responsible Principles
The Process offers multiple principles that underpin a proposed way forward. These include four from the initial declaration—accountability, precision, oversight, and transparency—that inform the aforementioned code of practice for states. These principles are surprisingly similar to those that govern the UK’s National Cyber Force (NCF), which aims to be ‘accountable, precise, and calibrated’. (These, the NCF claims, are ‘the principles of a responsible cyber power’.) Although these principles are more operational in nature, the Process clearly attempts to draw together both policy and practice that might be considered ‘responsible’ when seeking to strike a balance between the counter-proliferation and market-driven perspectives with which it engages.
As I have explored elsewhere (regarding the question of responsibility in UK cyber policy development), responsibility fits within the broader rubric of responsible state behaviour that is common within cyber diplomacy. Yet, it is at this precise moment that the political economies of responsibility are contested; responsibility simply no longer looks the same (if it ever did) from Moscow and Beijing as it does from Berlin and London. Indeed, as The Record reported, liberal sensibilities regarding responsibility were strongly challenged when one member of the US delegation, referring to CCIC developers, simply stated: ‘We’ll kill them.’ Cue astonishment from the other diplomats in the room—the common political economies of responsibility appeared, abruptly, to have been shattered. I’m sure that the delegations from the UK and France feared that this comment might overshadow the conference. In the end, it did not. But what it did show is that the issue of responsibility, as it infuses the Process, may pose problems for widening out state and industry partner involvement.
This is not to say that the UK, France, or other states should abandon a rules-based international order built around common understandings of responsibility. Indeed, such an order is what limits the horrific harms of war and exploitation and should be something we collectively embrace. However, responsibility as an organising concept is highly unlikely to lead to productive and extensive engagement in the short-to-medium term. Indeed, this is not the direction in which the United States is headed (regardless of who resides in the White House), nor that taken by a range of other states who navigate between different views on the future of the international community. Therefore, other organising concepts for CCICs should be explored in order to achieve aligned outcomes.
When attempting to combine counter-proliferation with a market-driven approach, responsibility becomes particularly contentious. For example, as one industry participant reflected to me privately in a session, how does one embed responsibility in a code of practice? This is why a code of practice for industry is likely forthcoming; but who contributes to this, and how they define what is ‘responsible’, will be highly contentious. The concept of responsibility is highly differentiated across not just states but the entire market. Instead of relying on ‘responsibility’, an approach that distinguishes between ‘permissible’ and ‘impermissible’ activity, as proposed by Shires, may gain traction with a wider number of states and industry actors too. This is because it offers a clearer distinction, free of moral relationality, between permissible (e.g., a voluntary penetration test conducted for an organisation) and impermissible (e.g., surveillance conducted against a politician) activities. However, some impermissible activities can become permissible through clearly articulated safeguards (e.g., when a state wishes to target criminal activity). These do not have to be explicitly related to responsibility, but those making decisions regarding permissibility may wish to show due process—‘know your customer’, and so on.
Although this approach may look similar to responsibility, I think it is distinct in that what is considered permissible or not can be clearly agreed upon, and so provides stronger grounding—particularly for industry actors who wish to work in ‘legitimate’ or desirable markets. It supports the creation of safeguards and enables assessments about the efficacy of such safeguards. Although organisations and states may wish to act responsibly on the edges of a proliferation framework, and for others to do the same, a more concrete view on what is permissible may seem narrower, yet opens up the Process to states and other actors that do not feel able to agree with a political economy of responsibility as articulated by liberal states, but can agree on permissible activity and safeguards to achieve it.
Futures
With the conclusion of the UN OEWG on cyber in June 2025, there are clearly limitations to what can be achieved in the international community at large. This is where the narrower scope of the Pall Mall Process could be a more successful approach to limiting the proliferation of cyber intrusion capabilities and building desirable markets for them. However, I remain unconvinced about situating this process in relation to the concept of responsibility. This is not because I believe that responsibility is a bad thing, but rather because the political economies that aligned responsibility between states have now broken down (even if they were implicitly acknowledged previously). That is, I suggest prefiguring responsibility with permissibility may hold greater promise. Attending the conference in Paris helped me to explore further political economies of this domain—enabling me to work across scales from communities in north east England to a brutalist Paris ballroom to consider what may build better futures for our collective cyber security.
Dr Andrew Dwyer
Royal Holloway, University of London
RISCS Associate Fellow
engineering.cmu.edu - College of Engineering at Carnegie Mellon University - Carnegie Mellon researchers show how LLMs can be taught to autonomously plan and execute real-world cyberattacks against enterprise-grade network environments—and why this matters for future defenses.
In a groundbreaking development, a team of Carnegie Mellon University researchers has demonstrated that large language models (LLMs) are capable of autonomously planning and executing complex network attacks, shedding light on emerging capabilities of foundation models and their implications for cybersecurity research.
The project, led by Brian Singer, a Ph.D. candidate in electrical and computer engineering (ECE), explores how LLMs—when equipped with structured abstractions and integrated into a hierarchical system of agents—can function not merely as passive tools, but as active, autonomous red team agents capable of coordinating and executing multi-step cyberattacks without detailed human instruction.
“Our research aimed to understand whether an LLM could perform the high-level planning required for real-world network exploitation, and we were surprised by how well it worked,” said Singer. “We found that by providing the model with an abstracted ‘mental model’ of network red teaming behavior and available actions, LLMs could effectively plan and initiate autonomous attacks through coordinated execution by sub-agents.”
Moving beyond simulated challenges
Prior work in this space had focused on how LLMs perform in simplified “capture-the-flag” (CTF) environments—puzzles commonly used in cybersecurity education.
Singer’s research advances this work by evaluating LLMs in realistic enterprise network environments and considering sophisticated, multi-stage attack plans.
State-of-the-art, reasoning-capable LLMs equipped with common knowledge of computer security tools failed miserably at the challenges. However, when these same LLMs, and smaller LLMs as well, were “taught” a mental model and abstraction of security attack orchestration, they showed dramatic improvement.
Rather than requiring the LLM to execute raw shell commands—often a limiting factor in prior studies—this system provides the LLM with higher-level decision-making capabilities while delegating low-level tasks to a combination of LLM and non-LLM agents.
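To make the division of labour concrete, the hypothetical Rust sketch below (illustrative only, not the researchers’ code) shows the shape of that hierarchy: a high-level planner chooses abstract actions, a low-level executor turns each one into concrete work, and observations flow back to inform the next decision.

```rust
// Hypothetical planner/executor sketch. In the research described above, the
// planner role is an LLM reasoning over an abstracted view of the network and
// the executor role is filled by LLM and non-LLM sub-agents; here both are
// simple stubs so the control flow is visible.

#[derive(Debug, Clone)]
enum Action {
    ScanHost(String),
    ExploitService(String),
    ExfiltrateData(String),
}

// Scripted stand-in for the high-level planner.
struct ScriptedPlanner {
    steps: Vec<Action>,
}

impl ScriptedPlanner {
    fn next_action(&mut self, _observations: &[String]) -> Option<Action> {
        if self.steps.is_empty() {
            None
        } else {
            Some(self.steps.remove(0))
        }
    }
}

// Stand-in for the sub-agents that run real tools and report back.
struct StubExecutor;

impl StubExecutor {
    fn execute(&mut self, action: &Action) -> String {
        format!("completed {:?}", action)
    }
}

fn main() {
    let mut planner = ScriptedPlanner {
        steps: vec![
            Action::ScanHost("10.0.0.5".into()),
            Action::ExploitService("10.0.0.5".into()),
            Action::ExfiltrateData("10.0.0.5".into()),
        ],
    };
    let mut executor = StubExecutor;
    let mut observations = Vec::new();

    // Plan, delegate, observe: the loop mirrors the described architecture.
    while let Some(action) = planner.next_action(&observations) {
        println!("planner chose: {:?}", action);
        observations.push(executor.execute(&action));
    }
}
```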
Experimental evaluation: The Equifax case
To rigorously evaluate the system’s capabilities, the team recreated the network environment associated with the 2017 Equifax data breach—a massive security failure that exposed the personal data of nearly 150 million Americans—by incorporating the same vulnerabilities and topology documented in Congressional reports. Within this replicated environment, the LLM autonomously planned and executed the attack sequence, including exploiting vulnerabilities, installing malware, and exfiltrating data.
“The fact that the model was able to successfully replicate the Equifax breach scenario without human intervention in the planning loop was both surprising and instructive,” said Singer. “It demonstrates that, under certain conditions, these models can coordinate complex actions across a system architecture.”
Implications for security testing and autonomous defense
While the findings underscore potential risks associated with LLM misuse, Singer emphasized the constructive applications for organizations seeking to improve security posture.
“Right now, only big companies can afford to run professional tests on their networks via expensive human red teams, and they might only do that once or twice a year,” he explained. “In the future, AI could run those tests constantly, catching problems before real attackers do. That could level the playing field for smaller organizations.”
The research team features Singer; Keane Lucas of Anthropic, a CyLab alumnus; Lakshmi Adiga, an undergraduate ECE student; Meghna Jain, a master’s ECE student; Lujo Bauer of ECE and the CMU Software and Societal Systems Department (S3D); and Vyas Sekar of ECE. Bauer and Sekar are co-directors of the CyLab Future Enterprise Security Initiative, which supported the students involved in this research.
blog.trailofbits.com - Now that DARPA’s AI Cyber Challenge (AIxCC) has officially ended, we can finally make Buttercup, our CRS (Cyber Reasoning System), open source!
We’re thrilled to announce that Trail of Bits won second place in DARPA’s AI Cyber Challenge (AIxCC)! Now that the competition has ended, we can finally make Buttercup, our cyber reasoning system (CRS), open source. We’re excited to make Buttercup broadly available and to see how the security community uses, extends, and benefits from it.
To ensure as many people as possible can use Buttercup, we created a standalone version that runs on a typical laptop. We’ve also tuned this version to work within an AI budget appropriate for individual projects rather than a massive competition at scale. In addition to releasing the standalone version of Buttercup, we’re also open-sourcing the versions that competed in AIxCC’s semifinal and final rounds.
In the rest of this post, we’ll provide a high-level overview of how Buttercup works, how to get started using it, and what’s in store for it next. If you’d prefer to go straight to the code, check it out here on GitHub.
How Buttercup works
Buttercup is a fully automated, AI-driven system for discovering and patching vulnerabilities in open-source software. Buttercup has four main components:
Orchestration/UI coordinates the overall actions of Buttercup’s other components and displays information about vulnerabilities discovered and patches generated by the system. In addition to a typical web interface, Buttercup also reports its logs and system events to a SigNoz telemetry server to make it easy for users to see what Buttercup is doing.
Vulnerability discovery uses AI-augmented mutational fuzzing to find program inputs that demonstrate vulnerabilities in the program. Buttercup’s vulnerability discovery engine is based on OSS-Fuzz/Clusterfuzz and uses libFuzzer and Jazzer to find vulnerabilities (a generic harness sketch follows this component list).
Contextual analysis uses traditional static analysis tools to create queryable program models that are used to provide context to AI models used in vulnerability discovery and patching. Buttercup uses tree-sitter and CodeQuery to build the program model.
Patch generation is a multi-agentic system for creating and validating software patches for vulnerabilities discovered by Buttercup. Buttercup’s patch generation system uses seven distinct AI agents to create robust patches that fix vulnerabilities it finds and avoid breaking the program’s other functionality.
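For readers unfamiliar with the fuzzing workflow the vulnerability discovery component builds on, OSS-Fuzz-style mutational fuzzing is driven by small harnesses handed to libFuzzer (or Jazzer for Java targets). The Rust harness below, written for cargo-fuzz and the libfuzzer-sys crate, is a generic illustration of that pattern rather than code taken from Buttercup.

```rust
// fuzz_targets/parse_number.rs
// Generic cargo-fuzz harness: libFuzzer mutates `data` and calls this body
// repeatedly, looking for inputs that crash or trip sanitizers.
#![no_main]
use libfuzzer_sys::fuzz_target;

fuzz_target!(|data: &[u8]| {
    // Stand-in for the code under test; a Buttercup-style target would call
    // into the project being analyzed instead.
    if let Ok(text) = std::str::from_utf8(data) {
        let _ = text.parse::<u64>();
    }
});
```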
aicyberchallenge.com - Teams’ AI-driven systems find, patch real-world cyber vulnerabilities; available open source for broad adoption
A cyber reasoning system (CRS) designed by Team Atlanta is the winner of the DARPA AI Cyber Challenge (AIxCC), a two-year, first-of-its-kind competition in collaboration with the Advanced Research Projects Agency for Health (ARPA-H) and frontier labs. Competitors successfully demonstrated the ability of novel autonomous systems using AI to secure the open-source software that underlies critical infrastructure.
Numerous attacks in recent years have illuminated the ability of malicious cyber actors to exploit vulnerable software that runs everything from financial systems and public utilities to the health care ecosystem.
“AIxCC exemplifies what DARPA is all about: rigorous, innovative, high-risk and high-reward programs that push the boundaries of technology. By releasing the cyber reasoning systems open source—four of the seven today—we are immediately making these tools available for cyber defenders,” said DARPA Director Stephen Winchell. “Finding vulnerabilities and patching codebases using current methods is slow, expensive, and depends on a limited workforce – especially as adversaries use AI to amplify their exploits. AIxCC-developed technology will give defenders a much-needed edge in identifying and patching vulnerabilities at speed and scale.”
To further accelerate adoption, DARPA and ARPA-H are adding $1.4 million in prizes for the competing teams to integrate AIxCC technology into real-world critical infrastructure-relevant software.
“The success of today’s AIxCC finalists demonstrates the real-world potential of AI to address vulnerabilities in our health care system,” said ARPA-H Acting Director Jason Roos. “ARPA-H is committed to supporting these teams to transition their technologies and make a meaningful impact in health care security and patient safety.”
Team Atlanta comprises experts from Georgia Tech, Samsung Research, the Korea Advanced Institute of Science & Technology (KAIST), and the Pohang University of Science and Technology (POSTECH).
Trail of Bits, a New York City-based small business, won second place, and Theori, comprising AI researchers and security professionals in the U.S. and South Korea, won third place.
The top three teams will receive $4 million, $3 million, and $1.5 million, respectively, for their performance in the Final Competition.
All seven competing teams, including teams all_you_need_is_a_fuzzing_brain, Shellphish, 42-beyond-bug and Lacrosse, worked on aggressively tight timelines to design automated systems that significantly advance cybersecurity research.
Deep Dive: Final Competition Findings, Highlights
In the Final Competition scored round, teams’ systems attempted to identify and generate patches for synthetic vulnerabilities across 54 million lines of code. Since the competition was based on real-world software, team CRSs could discover vulnerabilities not intentionally introduced to the competition. The scoring algorithm prioritized competitors’ performance based on the ability to create patches for vulnerabilities quickly and their analysis of bug reports. The winning team performed best at finding and proving vulnerabilities, generating patches, pairing vulnerabilities and patches, and scoring with the highest rate of accurate and quality submissions.
In total, competitors’ systems discovered 54 unique synthetic vulnerabilities in the Final Competition’s 70 challenges. Of those, they patched 43.
In the Final Competition, teams also discovered 18 real, non-synthetic vulnerabilities that are being responsibly disclosed to open source project maintainers. Of these, six were in C codebases—including one vulnerability that was discovered and patched in parallel by maintainers—and 12 were in Java codebases. Teams also provided 11 patches for real, non-synthetic vulnerabilities.
“Since the launch of AIxCC, community members have moved from AI skeptics to advocates and adopters. Quality patching is a crucial accomplishment that demonstrates the value of combining AI with other cyber defense techniques,” said AIxCC Program Manager Andrew Carney. “What’s more, we see evidence that the process of a cyber reasoning system finding a vulnerability may empower patch development in situations where other code synthesis techniques struggle.”
Competitor CRSs proved they can create valuable bug reports and patches for a fraction of the cost of traditional methods, with an average cost per competition task of about $152. Bug bounties can range from hundreds to hundreds of thousands of dollars.
AIxCC technology has advanced significantly from the Semifinal Competition held in August 2024. In the Final Competition scored round, teams identified 77% of the competition’s synthetic vulnerabilities, an increase from 37% at semifinals, and patched 61% of the vulnerabilities identified, an increase from 25% at semifinals. In semifinals, teams were most successful in finding and patching vulnerabilities in C codebases. In finals, teams had similar success rates at finding and patching vulnerabilities across C codebases and Java codebases.
securityweek.com - August 2025 ICS Patch Tuesday advisories have been published by Siemens, Schneider, Aveva, Honeywell, ABB and Phoenix Contact.
August 2025 Patch Tuesday advisories have been published by several major companies offering industrial control system (ICS) and other operational technology (OT) solutions.
Siemens has published 22 new advisories. One of them is for CVE-2025-40746, a critical Simatic RTLS Locating Manager issue that can be exploited by an authenticated attacker for code execution with System privileges.
The company has also published advisories covering high-severity vulnerabilities in Comos (code execution), Siemens Engineering Platforms (code execution), Simcenter (crash or code execution), Sinumerik controllers (unauthorized remote access), Ruggedcom (authentication bypass with physical access), Simatic (code execution), Siprotec (DoS), and Opcenter Quality (unauthorized access).
Siemens also addressed vulnerabilities introduced by the use of third-party components, including OpenSSL, the Linux kernel, Wibu Systems, Nginx, Nozomi Networks, and SQLite.
Medium- and low-severity issues have been resolved in Simotion Scout, Siprotec 5, Simatic RTLS Locating Manager, Ruggedcom ROX II, and Sicam Q products.
As usual, Siemens has released patches for many of these vulnerabilities, but only mitigations or workarounds are available for some of the flaws.
Schneider Electric has released five new advisories. One of them describes four high-severity vulnerabilities in EcoStruxure Power Monitoring Expert (PME), Power Operation (EPO), and Power SCADA Operation (PSO) products. Exploitation of the flaws can lead to arbitrary code execution or sensitive data exposure.
In the Modicon M340 controller and its communication modules, the industrial giant fixed a high-severity DoS vulnerability that can be triggered with specially crafted FTP commands, as well as a high-severity issue that can lead to sensitive information exposure or a DoS condition.
In the Schneider Electric Software Update tool, the company patched a high-severity vulnerability that can allow an attacker to escalate privileges, corrupt files, obtain information, or cause a persistent DoS.
Medium-severity issues that can lead to privilege escalation, DoS, or sensitive credential exposure have been patched in Saitel and EcoStruxure products.
Honeywell has published six advisories focusing on building management products, including several advisories that inform customers about Windows patches for Maxpro and Pro-Watch NVR and VMS products. The company has also released advisories covering PW-series access controller patches and security enhancements.
Aveva has published an advisory covering two vulnerabilities in its PI Integrator for Business Analytics, both now patched: an arbitrary file upload issue that could lead to code execution, and a sensitive data exposure weakness.
ABB told customers on Tuesday about several vulnerabilities affecting its Aspect, Nexus and Matrix products. Some of the flaws can be exploited without authentication for remote code execution, credential theft, and manipulation of files and various components.
Phoenix Contact has informed customers about a privilege escalation vulnerability in Device and Update Management. The company has described it as a misconfiguration that allows a low-privileged local user to execute arbitrary code with admin privileges. Germany’s CERT@VDE has also published a copy of the Phoenix Contact advisory.
The US cybersecurity agency CISA has published three new advisories describing vulnerabilities in Santesoft Sante PACS Server, Johnson Controls iSTAR, and Ashlar-Vellum products. CISA has also distributed the Aveva advisory and one of the Schneider Electric advisories.
A few days prior to Patch Tuesday, Rockwell Automation published an advisory informing customers about several high-severity code execution vulnerabilities affecting its Arena Simulation product.
Also prior to Patch Tuesday, Mitsubishi Electric released an advisory describing an information tampering flaw in Genesis and MC Works64 products.
securityweek.com - Rockwell Automation has published several advisories describing critical and high-severity vulnerabilities affecting its products.
Rockwell Automation this week published several advisories describing critical- and high-severity vulnerabilities found recently in its products.
The industrial automation giant has informed customers about critical vulnerabilities in FactoryTalk, Micro800, and ControlLogix products.
In the FactoryTalk Linx Network Browser the vendor fixed CVE-2025-7972, a flaw that allows an attacker to disable FTSP token validation, which can be used to create, update, and delete FTLinx drivers.
In the case of Micro800 series PLCs, Rockwell resolved three older vulnerabilities affecting the Azure RTOS open source real-time operating system. The security holes can be exploited for remote code execution and privilege escalation. In addition to the Azure RTOS issues, the company has addressed a DoS vulnerability.
In ControlLogix products Rockwell patched a remote code execution vulnerability tracked as CVE-2025-7353.
The list of high-severity flaws includes two DoS issues in FLEX 5000, a code execution vulnerability in Studio 5000 Logix Designer, web server issues in ArmorBlock 5000, a privilege escalation in FactoryTalk ViewPoint, and an information exposure issue in FactoryTalk Action Manager.
None of these vulnerabilities have been exploited in the wild, according to Rockwell Automation.
The cybersecurity agency CISA has also published advisories for these vulnerabilities to inform organizations about the potential risks.
commsrisk.com - A joint press conference organized on Sunday by the Technology Crime Suppression Division of the Thai police and AIS, the country’s largest mobile operator, shared the results of another operation to locate and capture a fake base station being used to send fraudulent SMS messages. The operation culminated in the arrest of two young Thai men and the seizure of one SMS blaster from their car.
The operation was instigated by a member of the public who reported receiving a scam message. On August 8, the SMS blaster was pinpointed in a Mazda vehicle driving along New Petchburi Road, a major thoroughfare in Bangkok. The vehicle was followed and police arrested its two occupants, both in their early 20s, when they stopped at a gas station in Bangkok’s Bang Phlat District.
The fake base station was used to send scam messages impersonating banks and comms providers. The messages claimed recipients had received a prize or had earned loyalty points that needed to be redeemed before they expired. These are familiar themes that have also been used for SMS blaster scams in other countries. Victims who clicked the link in the messages were directed to a phishing website. The criminals’ goal is to obtain the banking details of victims so their bank accounts can be plundered.
One of the arrested men told the police that they had been recruited via Telegram messages from a Chinese man who paid them THB2,500 (USD75) a day. Both men admitted the SMS blaster had been driven around on three separate occasions, the earliest of which was August 2 of this year. A spokesperson for AIS stated the device they were using had an effective range of 1-2km and was capable of sending over 20,000 SMS messages a day. Photographs of the arrest and the equipment are reproduced at the bottom of this article.
An industry insider revealed to Commsrisk that Thai telcos have been discouraged from sharing as much information about SMS blaster raids as they did previously. Public awareness of the risks posed by SMS blasters is higher in Thailand than in many other countries because of well-publicized police busts and a concerted effort to warn phone users not to click on hyperlinks in suspicious SMS messages. However, there is now concern that revealing the details of anti-crime operations is helping the criminals adapt their techniques to better avoid detection.
Cynical telcos that prioritize profits over public safety want splashy news stories about police raids and the arrest of low-level criminals because they create the appearance that the war against networked crime can be won using these tactics. Responsible professionals understand that detecting the radio comms devices used to commit crime is only a palliative and not a genuine solution. If a radio device is already being used to send fraudulent messages, then telcos and the authorities are choosing to react to crime instead of preventing it.
Thai law enforcement has wisely adopted a proactive strategy supported by the country’s telcos. This involved criminalizing the possession of SMS blasters and simboxes before using border controls to stop them being imported into Thailand. However, Thailand’s porous borders with Cambodia and Myanmar, which both serve as safe havens for scam compounds, make it harder to prevent new scam equipment being smuggled into the country.
The resources that Thailand has devoted to detecting SMS blasters should not be underestimated. But the country’s experience also shows that relying upon the speedy detection of radio comms equipment used by scammers will never be sufficient. AIS is working with police to find SMS blasters within just a few days of them being activated, but gangs keep coming back with more.
Seizing equipment and imprisoning low-level goons does not discourage the criminal bosses who orchestrate these scams. They soon hire new foot soldiers to operate newly despatched scam tech. Every success in locating radio equipment prompts the criminals to refine their tactics so they are harder to find the next time. Thailand’s experience demonstrates that every country will need to adopt a comprehensive approach to prohibiting and interrupting the supply of radio comms devices that have very few legitimate uses.
This case has been added to the SMS blaster map on our Global Fraud Dashboard. We use AI-powered search to maintain the most comprehensive and up-to-date compendium of reports of fake base stations being used to send SMS messages.
reuters.com - Aug 13 (Reuters) - U.S. authorities have secretly placed location tracking devices in targeted shipments of advanced chips they see as being at high risk of illegal diversion to China, according to two people with direct knowledge of the previously unreported law enforcement tactic.
The measures aim to detect AI chips being diverted to destinations which are under U.S. export restrictions, and apply only to select shipments under investigation, the people said.
They show the lengths to which the U.S. has gone to enforce its chip export restrictions on China, even as the Trump administration has sought to relax some curbs on Chinese access to advanced American semiconductors.
The trackers can help build cases against people and companies who profit from violating U.S. export controls, said the people, who declined to be named because of the sensitivity of the issue.
Location trackers are a decades-old investigative tool used by U.S. law enforcement agencies to track products subject to export restrictions, such as airplane parts. They have been used to combat the illegal diversion of semiconductors in recent years, one source said.
Five other people actively involved in the AI server supply chain say they are aware of the use of the trackers in shipments of servers from manufacturers such as Dell (DELL.N) and Super Micro (SMCI.O), which include chips from Nvidia (NVDA.O) and AMD (AMD.O).
Those people said the trackers are typically hidden in the packaging of the server shipments. They did not know which parties were involved in installing them and where along the shipping route they were inserted.
Reuters was not able to determine how often the trackers have been used in chip-related investigations or when U.S. authorities started using them to investigate chip smuggling. The U.S. started restricting the sale of advanced chips by Nvidia, AMD and other manufacturers to China in 2022.
In one 2024 case described by two of the people involved in the server supply chain, a shipment of Dell servers with Nvidia chips included both large trackers on the shipping boxes and smaller, more discreet devices hidden inside the packaging — and even within the servers themselves.
A third person said they had seen images and videos of trackers being removed by other chip resellers from Dell and Super Micro servers. The person said some of the larger trackers were roughly the size of a smartphone.
The U.S. Department of Commerce's Bureau of Industry and Security, which oversees export controls and enforcement, is typically involved, and Homeland Security Investigations and the Federal Bureau of Investigation may take part too, said the sources.
The HSI and FBI both declined to comment. The Commerce Department did not respond to requests for comment.
The Chinese foreign ministry said it was not aware of the matter.
Super Micro said in a statement that it does not disclose its “security practices and policies in place to protect our worldwide operations, partners, and customers.” It declined to comment on any tracking actions by U.S. authorities.
databreachtoday.eu - Hackers breached a sensitive database containing office locations and personal details of elected officials and staff in Canada's House of Commons.
The breach targeting the House of Commons network occurred Friday and involved a database "containing information used to manage computers and mobile devices," according to an internal email obtained by CBC News. Hackers were able to "exploit a recent Microsoft vulnerability," the missive said.
The message did not name any nation-state or criminal group, and it remains unclear which database was compromised or if other sensitive data was accessed. Affected information includes names and titles, email addresses and device details including models, operating systems and telephone numbers.
Olivier Duhaime, spokesperson for the House of Commons' Office of the Speaker, told Information Security Media Group in an emailed statement Thursday that the "House of Commons is working closely with its national security partners to further investigate this matter." Duhaime declined to comment any further on the specifics of the investigation, citing "security reasons."
The Canadian Centre for Cyber Security warned in July that it was aware of in-country exploitation of a zero-day vulnerability in Microsoft SharePoint. The computing giant published an emergency patch described by Google Cloud's Mandiant Consulting chief technology officer as "uniquely urgent and drastic" (see: SharePoint Zero-Days Exploited to Unleash Warlock Ransomware).
The U.S. Cybersecurity and Infrastructure Security Agency warned earlier this month that the remote code execution flaw, publicly known as "ToolShell," allows unauthenticated system access and authenticated access via network spoofing. The agency said attackers can gain full access to SharePoint content, including file systems and configurations.
"This isn't an 'apply the patch and you're done' situation," Mandiant Chief Technology Officer Charles Carmakal wrote on LinkedIn, urging organizations with SharePoint to "implement mitigations right away" and apply the patch.
Microsoft said in a July blog post that threat actors seeking initial access include Chinese nation-state hackers tracked as Linen Typhoon and Violet Typhoon, as well as possibly China-linked Storm-2603. Linen and Violet Typhoon have targeted intellectual property from government, defense, strategic planning and human rights organizations, along with higher education, media, financial and health sectors across the United States, Europe and Asia.
Linen typically conducts "drive-by compromises" using known exploits, while Violet "persistently scans for vulnerabilities in the exposed web infrastructure of target organizations."
CERT-AGID cert-agid.gov.it - The illegal sale of identity documents stolen from hotels operating in Italy has recently been detected. The trove consists of tens of thousands of high-resolution scans of passports, identity cards and other identification documents used by guests during check-in.
According to the threat actor "mydocs", who put the material up for sale on a well-known underground forum, the documents were stolen between June and July 2025 through unauthorized access to three Italian hotels.
Update of 8 August 2025: today, the same actor made a new collection of 17,000 identity documents, stolen from a further Italian hotel, available on the same forum.
Update of 11 August 2025: over the weekend of 9-10 August, the same threat actor published new posts offering further collections for sale, amounting, according to his claims, to more than 70,000 additional identity documents exfiltrated from four different Italian hotels.
Update of 13 August 2025: late last night, the attacker "mydocs" published a new listing on the same forum for identity documents stolen from two more hotels, claimed to number around 3,600. With this latest claim, the total number of Italian hotels involved would rise to ten. Further cases may emerge in the coming days.
Update of 14 August 2025: last night, the same threat actor put further identity documents up for sale, again on the same forum, relating to two new hotels, for a claimed total of around 9,300 scans.
Personal documents, obtained in this case by compromising data held by hotels but more commonly through phishing, can be a highly valuable asset for threat actors, who use them to carry out increasingly sophisticated scams of various kinds:
creation of forged documents based on real identities;
opening of fraudulent bank accounts or lines of credit;
social engineering attacks targeting the victims or their personal and professional circles;
digital identity theft with legal or financial repercussions for the people involved.
Although similar incidents had already emerged in May 2025, the increase in illicit sales of identity documents confirms the urgent need to strengthen awareness and protective measures, both on the part of the organizations that handle these documents and on the part of citizens.
Conclusions
Given the growing frequency of these illicit activities, it is increasingly clear that organizations that collect and manage identity documents must adopt rigorous measures to protect and secure this information, ensuring not only proper data handling but also the defense of their systems and digital portals against unauthorized access.
In this context, citizens also play a key role in protecting their own identity. It is important to periodically check for signs of misuse of one's data, such as unauthorized credit applications or account openings, and to avoid sharing copies of personal documents over insecure channels or when not strictly necessary. In cases of suspected abuse or identity theft, the incident should always be reported promptly to the competent authorities.