Python Software Foundation News
pyfound.blogspot.com
Monday, October 27, 2025
The PSF has withdrawn a $1.5 million proposal to a US government grant program
In January 2025, the PSF submitted a proposal to the US government National Science Foundation under the Safety, Security, and Privacy of Open Source Ecosystems program to address structural vulnerabilities in Python and PyPI. It was the PSF’s first time applying for government funding, and navigating the intensive process was a steep learning curve for our small team to climb. Seth Larson, PSF Security Developer in Residence, serving as Principal Investigator (PI) with Loren Crary, PSF Deputy Executive Director, as co-PI, led the multi-round proposal writing process as well as the months-long vetting process. We invested our time and effort because we felt the PSF’s work is a strong fit for the program and that the benefit to the community if our proposal were accepted was considerable.
We were honored when, after many months of work, our proposal was recommended for funding, particularly as only 36% of new NSF grant applicants are successful on their first attempt. We became concerned, however, when we were presented with the terms and conditions we would be required to agree to if we accepted the grant. These terms included affirming the statement that we “do not, and will not during the term of this financial assistance award, operate any programs that advance or promote DEI, or discriminatory equity ideology in violation of Federal anti-discrimination laws.” This restriction would apply not only to the security work directly funded by the grant, but to any and all activity of the PSF as a whole. Further, violation of this term gave the NSF the right to “claw back” previously approved and transferred funds. This would create a situation where money we’d already spent could be taken back, which would be an enormous, open-ended financial risk.
Diversity, equity, and inclusion are core to the PSF’s values, as committed to in our mission statement:
The mission of the Python Software Foundation is to promote, protect, and advance the Python programming language, and to support and facilitate the growth of a diverse and international community of Python programmers.
Given the value of the grant to the community and the PSF, we did our utmost to get clarity on the terms and to find a way to move forward in concert with our values. We consulted our NSF contacts and reviewed decisions made by other organizations in similar circumstances, particularly The Carpentries.
In the end, however, the PSF simply can’t agree to a statement that we won’t operate any programs that “advance or promote” diversity, equity, and inclusion, as it would be a betrayal of our mission and our community.
We’re disappointed to have been put in the position where we had to make this decision, because we believe our proposed project would offer invaluable advances to the Python and greater open source community, protecting millions of PyPI users from attempted supply-chain attacks. The proposed project would create new tools for automated proactive review of all packages uploaded to PyPI, rather than the current process of reactive-only review. These novel tools would rely on capability analysis, designed based on a dataset of known malware. Beyond just protecting PyPI users, the outputs of this work could be transferable for all open source software package registries, such as NPM and Crates.io, improving security across multiple open source ecosystems.
In addition to the security benefits, the grant funds would have made a big difference to the PSF’s budget. The PSF is a relatively small organization, operating with an annual budget of around $5 million per year, with a staff of just 14. $1.5 million over two years would have been quite a lot of money for us, and easily the largest grant we’d ever received. Ultimately, however, the value of the work and the size of the grant were not more important than practicing our values and retaining the freedom to support every part of our community. The PSF Board voted unanimously to withdraw our application.
Giving up the NSF grant opportunity—along with inflation, lower sponsorship, economic pressure in the tech sector, and global/local uncertainty and conflict—means the PSF needs financial support now more than ever. We are incredibly grateful for any help you can offer. If you're already a PSF member or regular donor, you have our deep appreciation, and we urge you to share your story about why you support the PSF. Your stories make all the difference in spreading awareness about the mission and work of the PSF.
How to support the PSF:
Become a Member: When you sign up as a Supporting Member of the PSF, you become a part of the PSF. You’re eligible to vote in PSF elections, using your voice to guide our future direction, and you help us sustain what we do with your annual support.
Donate: Your donation makes it possible to continue our work supporting Python and its community, year after year.
Sponsor: If your company uses Python and isn’t yet a sponsor, send them our sponsorship page or reach out to sponsors@python.org today. The PSF is ever grateful for our sponsors, past and current, and we do everything we can to make their sponsorships beneficial and rewarding.
Source: OpenAI (openai.com)
October 30, 2025
Now in private beta: an AI agent that thinks like a security researcher and scales to meet the demands of modern software.
Today, we’re announcing Aardvark, an agentic security researcher powered by GPT‑5.
Software security is one of the most critical—and challenging—frontiers in technology. Each year, tens of thousands of new vulnerabilities are discovered across enterprise and open-source codebases. Defenders face the daunting tasks of finding and patching vulnerabilities before their adversaries do. At OpenAI, we are working to tip that balance in favor of defenders.
Aardvark represents a breakthrough in AI and security research: an autonomous agent that can help developers and security teams discover and fix security vulnerabilities at scale. Aardvark is now available in private beta to validate and refine its capabilities in the field.
How Aardvark works
Aardvark continuously analyzes source code repositories to identify vulnerabilities, assess exploitability, prioritize severity, and propose targeted patches.
Aardvark works by monitoring commits and changes to codebases, identifying vulnerabilities, how they might be exploited, and proposing fixes. Aardvark does not rely on traditional program analysis techniques like fuzzing or software composition analysis. Instead, it uses LLM-powered reasoning and tool-use to understand code behavior and identify vulnerabilities. Aardvark looks for bugs as a human security researcher might: by reading code, analyzing it, writing and running tests, using tools, and more.
Diagram titled “AARDVARK — Vulnerability Discovery Agent Workflow” showing a process flow from Git repository to threat modeling, vulnerability discovery, validation sandbox, patching with Codex, and human review leading to a pull request.
Aardvark relies on a multi-stage pipeline to identify, explain, and fix vulnerabilities:
Analysis: It begins by analyzing the full repository to produce a threat model reflecting its understanding of the project’s security objectives and design.
Commit scanning: It scans for vulnerabilities by inspecting commit-level changes against the entire repository and threat model as new code is committed. When a repository is first connected, Aardvark will scan its history to identify existing issues. Aardvark explains the vulnerabilities it finds step-by-step, annotating code for human review.
Validation: Once Aardvark has identified a potential vulnerability, it will attempt to trigger it in an isolated, sandboxed environment to confirm its exploitability. Aardvark describes the steps taken to help ensure accurate, high-quality, and low false-positive insights are returned to users.
Patching: Aardvark integrates with OpenAI Codex to help fix the vulnerabilities it finds. It attaches a Codex-generated and Aardvark-scanned patch to each finding for human review and efficient, one-click patching.
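The four stages above can be sketched as a toy skeleton. This is purely illustrative: every function and class name here is hypothetical, the "detection" is a trivial string heuristic standing in for LLM-powered reasoning, and nothing below reflects OpenAI's actual implementation or API.

```python
from dataclasses import dataclass


@dataclass
class Finding:
    """A candidate vulnerability moving through the pipeline."""
    description: str
    validated: bool = False  # set by the (stubbed) sandbox stage
    patch: str = ""          # set by the (stubbed) patching stage


def threat_model(repo_files: dict[str, str]) -> str:
    # Stage 1 (Analysis): summarize the project's security posture.
    return f"threat model covering {len(repo_files)} files"


def scan_commit(diff: str, model: str) -> list[Finding]:
    # Stage 2 (Commit scanning): flag suspicious changes against the
    # threat model. A toy heuristic stands in for LLM reasoning here.
    findings = []
    if "eval(" in diff:
        findings.append(Finding("possible code injection via eval()"))
    return findings


def validate(finding: Finding) -> Finding:
    # Stage 3 (Validation): attempt to trigger the bug in a sandbox.
    # Stubbed: a real system would actually execute an exploit attempt.
    finding.validated = True
    return finding


def propose_patch(finding: Finding) -> Finding:
    # Stage 4 (Patching): attach a suggested fix for human review.
    finding.patch = "replace eval() with ast.literal_eval()"
    return finding


def pipeline(repo_files: dict[str, str], diff: str) -> list[Finding]:
    model = threat_model(repo_files)
    return [propose_patch(validate(f)) for f in scan_commit(diff, model)]
```

The point of the shape, not the heuristics: each finding carries its validation status and a proposed patch through to human review, matching the "scan, validate, patch, review" flow described above.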
Aardvark works alongside engineers, integrating with GitHub, Codex, and existing workflows to deliver clear, actionable insights without slowing development. While Aardvark is built for security, in our testing we’ve found that it can also uncover bugs such as logic flaws, incomplete fixes, and privacy issues.
Real impact, today
Aardvark has been in service for several months, running continuously across OpenAI’s internal codebases and those of external alpha partners. Within OpenAI, it has surfaced meaningful vulnerabilities and contributed to OpenAI’s defensive posture. Partners have highlighted the depth of its analysis, with Aardvark finding issues that occur only under complex conditions.
In benchmark testing on “golden” repositories, Aardvark identified 92% of known and synthetically-introduced vulnerabilities, demonstrating high recall and real-world effectiveness.
Aardvark for Open Source
Aardvark has also been applied to open-source projects, where it has discovered and we have responsibly disclosed numerous vulnerabilities—ten of which have received Common Vulnerabilities and Exposures (CVE) identifiers.
As beneficiaries of decades of open research and responsible disclosure, we’re committed to giving back—contributing tools and findings that make the digital ecosystem safer for everyone. We plan to offer pro-bono scanning to select non-commercial open source repositories to contribute to the security of the open source software ecosystem and supply chain.
We recently updated our outbound coordinated disclosure policy which takes a developer-friendly stance, focused on collaboration and scalable impact, rather than rigid disclosure timelines that can pressure developers. We anticipate tools like Aardvark will result in the discovery of increasing numbers of bugs, and want to sustainably collaborate to achieve long-term resilience.
Why it matters
Software is now the backbone of every industry—which means software vulnerabilities are a systemic risk to businesses, infrastructure, and society. Over 40,000 CVEs were reported in 2024 alone. Our testing shows that around 1.2% of commits introduce bugs—small changes that can have outsized consequences.
Aardvark represents a new defender-first model: an agentic security researcher that partners with teams by delivering continuous protection as code evolves. By catching vulnerabilities early, validating real-world exploitability, and offering clear fixes, Aardvark can strengthen security without slowing innovation. We believe in expanding access to security expertise. We're beginning with a private beta and will broaden availability as we learn.
Private beta now open
We’re inviting select partners to join the Aardvark private beta. Participants will gain early access and work directly with our team to refine detection accuracy, validation workflows, and reporting experience.
We’re looking to validate performance across a variety of environments. If your organization or open source project is interested in joining, you can apply here.
securityweek.com
By Ionut Arghire | October 30, 2025 (9:01 AM ET)
Updated: October 31, 2025 (2:36 AM ET)
The hackers stole names, addresses, dates of birth, Social Security numbers, and health and insurance information.
Business services provider Conduent is notifying more than 10 million people that their personal information was stolen in a January 2025 data breach.
The incident was disclosed publicly in late January, when Conduent confirmed system disruptions that affected government agencies in multiple US states.
In April, the company notified the Securities and Exchange Commission (SEC) that the attackers had stolen personal information from its systems.
Last week, Conduent started notifying users that their personal information was stolen in the incident, and submitted notices to Attorney General’s Offices in multiple states.
The hackers accessed Conduent’s network on October 21, 2024, and were evicted on January 13, 2025, after the attack was identified, the company says in the notification letter to the affected individuals.
During the time frame, the attackers exfiltrated various files from the network, including files containing personal information such as names, addresses, dates of birth, Social Security numbers, health insurance details, and medical information.
Conduent is not providing the affected people with free identity theft protection services, but encourages them to obtain free credit reports, place fraud alerts on their credit files, and place security freezes on their credit reports.
“Upon discovery of the incident, we safely restored our systems and operations and notified law enforcement. We are also notifying you in case you decide to take further steps to protect your information should you feel it appropriate to do so,” the notification letter reads.
Based on the data breach notice submitted with the authorities in Oregon, it appears that 10,515,849 individuals were impacted, with the largest number in Texas (4 million).
Conduent serves over 600 government and transportation organizations, and roughly half of Fortune 100 companies, across financial, pharmaceutical, and automobile sectors. The company supports roughly 100 million US residents across 46 states.
While the company has not shared details on the threat actor behind the attack, the Safepay ransomware group claimed the incident in February.
SecurityWeek has emailed Conduent for additional information and will update this article if the company responds.
*Updated with the number of impacted individuals from the Oregon Department of Justice.
reuters.com By A.J. Vicens
October 29, 2025, 11:10 PM GMT+1 | Updated October 29, 2025
Hackers accessed Ribbon's network in December 2024
Three customers impacted, according to ongoing investigation
Ribbon's breach part of broader trend targeting telecom firms
Oct 29 (Reuters) - Hackers working for an unnamed nation-state breached networks at Ribbon Communications (RBBN.O), a key U.S. telecommunications services company, and remained within the firm’s systems for nearly a year without being detected, a company spokesperson confirmed in a statement on Wednesday.
Ribbon Communications, a Texas-based company that provides technology to facilitate voice and data communications between separate tech platforms and environments, said in its October 23 10-Q filing with the Securities and Exchange Commission that the company learned early last month that people “reportedly associated with a nation-state actor” gained access to the company’s IT network, with initial access dating to early December 2024.
The hack has not been previously reported. It is perhaps the latest example of technology companies that play a critical role in the global telecommunications ecosystem being targeted as part of nation-state hacking campaigns.
Ribbon did not identify the nation-state actor, or disclose which of its customers were affected by the breach, but told Reuters in the statement that its investigation has so far revealed three “smaller customers” impacted.
“While we do not have evidence at this time that would indicate the threat actor gained access to any material information, we continue to work with our third-party experts to confirm this,” a Ribbon spokesperson said in an email. “We have also taken steps to further harden our network to prevent any future incidents.”
sophos.com
October 30, 2025
The threat group targeted a LANSCOPE zero-day vulnerability (CVE-2025-61932)
In mid-2025, Counter Threat Unit™ (CTU) researchers observed a sophisticated BRONZE BUTLER campaign that exploited a zero-day vulnerability in Motex LANSCOPE Endpoint Manager to steal confidential information. The Chinese state-sponsored BRONZE BUTLER threat group (also known as Tick) has been active since 2010 and previously exploited a zero-day vulnerability in Japanese asset management product SKYSEA Client View in 2016. JPCERT/CC published a notice about the LANSCOPE issue on October 22, 2025.
Exploitation of CVE-2025-61932
In the 2025 campaign, CTU™ researchers confirmed that the threat actors gained initial access by exploiting CVE-2025-61932. This vulnerability allows remote attackers to execute arbitrary commands with SYSTEM privileges. CTU analysis indicates that the number of vulnerable internet-facing devices is low. However, attackers could exploit vulnerable devices within compromised networks to conduct privilege escalation and lateral movement. The U.S. Cybersecurity and Infrastructure Security Agency (CISA) added CVE-2025-61932 to the Known Exploited Vulnerabilities Catalog on October 22.
Command and control
CTU researchers confirmed that the threat actors used the Gokcpdoor malware in this campaign. As reported by a third party in 2023, Gokcpdoor can establish a proxy connection with a command and control (C2) server as a backdoor. The 2025 variant discontinued support for the KCP protocol and added multiplexing communication using a third-party library for its C2 communication (see Figure 1).
Figure 1: Comparison of internal function names in the 2023 (left) and 2025 (right) Gokcpdoor samples
Furthermore, CTU researchers identified two different types of Gokcpdoor with distinct purposes:
The server type listens for incoming client connections, opening the port specified in its configuration. Some of the analyzed samples used 38000 while others used 38002. The C2 functionality enabled remote access.
The client type initiates connections to hard-coded C2 servers, establishing a communication tunnel to function as a backdoor.
On some compromised hosts, BRONZE BUTLER implemented the Havoc C2 framework instead of Gokcpdoor. Some Gokcpdoor and Havoc samples used the OAED Loader malware, which was also linked to BRONZE BUTLER in the 2023 report, to complicate the execution flow. This malware injects a payload into a legitimate executable according to its embedded configuration (see Figure 2).
Figure 2: Execution flow utilizing OAED Loader
Abuse of legitimate tools and services
CTU researchers also confirmed that the following tools were used for lateral movement and data exfiltration:
goddi (Go dump domain info) – An open-source Active Directory information dumping tool
Remote desktop – A legitimate remote desktop application used through a backdoor tunnel
7-Zip – An open-source file archiver used for data exfiltration
BRONZE BUTLER also accessed the following cloud storage services via the web browser during remote desktop sessions, potentially attempting to exfiltrate the victim’s confidential information:
file.io
LimeWire
Piping Server
Recommendations
CTU researchers recommend that organizations upgrade vulnerable LANSCOPE servers as appropriate in their environments. Organizations should also review internet-facing LANSCOPE servers that have the LANSCOPE client program (MR) or detection agent (DA) installed to determine if there is a business need for them to be publicly exposed.
Detections and indicators
The following Sophos protections detect activity related to this threat:
Troj/BckDr-SBL
Mal/Generic-S
The threat indicators in Table 1 can be used to detect activity related to this threat. Note that IP addresses can be reallocated and may host malicious content; consider the risks before opening them in a browser.
Table 1. Threat indicators for this campaign

Indicator | Type | Context
932c91020b74aaa7ffc687e21da0119c | MD5 hash | Gokcpdoor variant used by BRONZE BUTLER (oci.dll)
be75458b489468e0acdea6ebbb424bc898b3db29 | SHA1 hash | Gokcpdoor variant used by BRONZE BUTLER (oci.dll)
3c96c1a9b3751339390be9d7a5c3694df46212fb97ebddc074547c2338a4c7ba | SHA256 hash | Gokcpdoor variant used by BRONZE BUTLER (oci.dll)
4946b0de3b705878c514e2eead096e1e | MD5 hash | Havoc sample used by BRONZE BUTLER (MaxxAudioMeters64LOC.dll)
1406b4e905c65ba1599eb9c619c196fa5e1c3bf7 | SHA1 hash | Havoc sample used by BRONZE BUTLER (MaxxAudioMeters64LOC.dll)
9e581d0506d2f6ec39226f052a58bc5a020ebc81ae539fa3a6b7fc0db1b94946 | SHA256 hash | Havoc sample used by BRONZE BUTLER (MaxxAudioMeters64LOC.dll)
8124940a41d4b7608eada0d2b546b73c010e30b1 | SHA1 hash | goddi tool used by BRONZE BUTLER (winupdate.exe)
704e697441c0af67423458a99f30318c57f1a81c4146beb4dd1a88a88a8c97c3 | SHA256 hash | goddi tool used by BRONZE BUTLER (winupdate.exe)
38[.]54[.]56[.]57 | IP address | Gokcpdoor C2 server used by BRONZE BUTLER; uses TCP port 443
38[.]54[.]88[.]172 | IP address | Havoc C2 server used by BRONZE BUTLER; uses TCP port 443
38[.]54[.]56[.]10 | IP address | Connected to ports opened by Gokcpdoor variant used by BRONZE BUTLER
38[.]60[.]212[.]85 | IP address | Connected to ports opened by Gokcpdoor variant used by BRONZE BUTLER
108[.]61[.]161[.]118 | IP address | Connected to ports opened by Gokcpdoor variant used by BRONZE BUTLER
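The IP indicators above are published defanged (dots wrapped in brackets) so they cannot be clicked or resolved by accident. A small helper, my own sketch rather than any Sophos tooling, can convert between the defanged form and a usable form for blocklists:

```python
def refang(ioc: str) -> str:
    """Convert a defanged indicator (e.g. 38[.]54[.]56[.]57) to its raw form."""
    return ioc.replace("[.]", ".").replace("hxxp", "http")


def defang(ioc: str) -> str:
    """Defang an indicator so it cannot be clicked or resolved accidentally."""
    return ioc.replace(".", "[.]").replace("http", "hxxp")


# The Gokcpdoor and Havoc C2 addresses from Table 1, ready for a blocklist.
c2_servers = ["38[.]54[.]56[.]57", "38[.]54[.]88[.]172"]
blocklist = [refang(ip) for ip in c2_servers]
```

Defanging conventions vary (some feeds also rewrite `://` or use `(.)`), so a production parser should handle more variants than this sketch does.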
Cellebrite can apparently extract data from most Pixel phones, unless they’re running GrapheneOS.
Despite being vast repositories of personal information, smartphones used to have little in the way of security. That has thankfully changed, but companies like Cellebrite offer law enforcement tools that can bypass security on some devices. The company keeps the specifics quiet, but an anonymous individual recently logged in to a Cellebrite briefing and came away with a list of which of Google’s Pixel phones are vulnerable to Cellebrite phone hacking.
This person, who goes by the handle rogueFed, posted screenshots from the recent Microsoft Teams meeting to the GrapheneOS forums (spotted by 404 Media). GrapheneOS is an Android-based operating system that can be installed on select phones, including Pixels. It ships with enhanced security features and no Google services. Because of its popularity among the security-conscious, Cellebrite apparently felt the need to include it in its matrix of Pixel phone support.
The screenshot includes data on the Pixel 6, Pixel 7, Pixel 8, and Pixel 9 family. It does not list the Pixel 10 series, which launched just a few months ago. The phone support is split up into three different conditions: before first unlock, after first unlock, and unlocked. The before first unlock (BFU) state means the phone has not been unlocked since restarting, so all data is encrypted. This is traditionally the most secure state for a phone. In the after first unlock (AFU) state, data extraction is easier. And naturally, an unlocked phone is open season on your data.
At least according to Cellebrite, GrapheneOS is more secure than what Google offers out of the box. The company is telling law enforcement in these briefings that its technology can extract data from Pixel 6, 7, 8, and 9 phones in unlocked, AFU, and BFU states on stock software. However, it cannot brute-force passcodes to enable full control of a device. The leaker also notes law enforcement is still unable to copy an eSIM from Pixel devices. Notably, the Pixel 10 series is moving away from physical SIM cards.
For those same phones running GrapheneOS, police can expect to have a much harder time. The Cellebrite table says that Pixels with GrapheneOS are only accessible when running software from before late 2022—both the Pixel 8 and Pixel 9 were launched after that. Phones in both BFU and AFU states are safe from Cellebrite on updated builds, and as of late 2024, even a fully unlocked GrapheneOS device is immune from having its data copied. An unlocked phone can be inspected in plenty of other ways, but data extraction in this case is limited to what the user can access.
The original leaker claims to have dialed into two calls so far without detection. However, rogueFed also called out the meeting organizer by name (the second screenshot, which we are not reposting). Odds are that Cellebrite will be screening meeting attendees more carefully now.
We’ve reached out to Google to inquire about why a custom ROM created by a small non-profit is more resistant to industrial phone hacking than the official Pixel OS. We’ll update this article if Google has anything to say.
theguardian.com
Harry Davies and Yuval Abraham in Jerusalem
Wed 29 Oct 2025 14.15 CET
The tech giants agreed to extraordinary terms to clinch a lucrative contract with the Israeli government, documents show
When Google and Amazon negotiated a major $1.2bn cloud-computing deal in 2021, their customer – the Israeli government – had an unusual demand: agree to use a secret code as part of an arrangement that would become known as the “winking mechanism”.
The demand, which would require Google and Amazon to effectively sidestep legal obligations in countries around the world, was born out of Israel’s concerns that data it moves into the global corporations’ cloud platforms could end up in the hands of foreign law enforcement authorities.
Like other big tech companies, Google and Amazon’s cloud businesses routinely comply with requests from police, prosecutors and security services to hand over customer data to assist investigations.
This process is often cloaked in secrecy. The companies are frequently gagged from alerting the affected customer that their information has been turned over, either because the law enforcement agency has the power to demand silence or because a court has ordered it.
For Israel, losing control of its data to authorities overseas was a significant concern. So to deal with the threat, officials created a secret warning system: the companies must send signals hidden in payments to the Israeli government, tipping it off when it has disclosed Israeli data to foreign courts or investigators.
To clinch the lucrative contract, Google and Amazon agreed to the so-called winking mechanism, according to leaked documents seen by the Guardian, as part of a joint investigation with Israeli-Palestinian publication +972 Magazine and Hebrew-language outlet Local Call.
Based on the documents and descriptions of the contract by Israeli officials, the investigation reveals how the companies bowed to a series of stringent and unorthodox “controls” contained within the 2021 deal, known as Project Nimbus. Both Google and Amazon’s cloud businesses have denied evading any legal obligations.
The strict controls include measures that prohibit the US companies from restricting how an array of Israeli government agencies, security services and military units use their cloud services. According to the deal’s terms, the companies cannot suspend or withdraw Israel’s access to its technology, even if it’s found to have violated their terms of service.
Israeli officials inserted the controls to counter a series of anticipated threats. They feared Google or Amazon might bow to employee or shareholder pressure and withdraw Israel’s access to its products and services if linked to human rights abuses in the occupied Palestinian territories.
They were also concerned the companies could be vulnerable to overseas legal action, particularly in cases relating to the use of the technology in the military occupation of the West Bank and Gaza.
The terms of the Nimbus deal would appear to prohibit Google and Amazon from the kind of unilateral action taken by Microsoft last month, when it disabled the Israeli military’s access to technology used to operate an indiscriminate surveillance system monitoring Palestinian phone calls.
Microsoft, which provides a range of cloud services to Israel’s military and public sector, bid for the Nimbus contract but was beaten by its rivals. According to sources familiar with negotiations, Microsoft’s bid suffered as it refused to accept some of Israel’s demands.
As with Microsoft, Google and Amazon’s cloud businesses have faced scrutiny in recent years over the role of their technology – and the Nimbus contract in particular – in Israel’s two-year war on Gaza.
During its offensive in the territory, where a UN commission of inquiry concluded that Israel has committed genocide, the Israeli military has relied heavily on cloud providers to store and analyse large volumes of data and intelligence information.
One such dataset was the vast collection of intercepted Palestinian calls that until August was stored on Microsoft’s cloud platform. According to intelligence sources, the Israeli military planned to move the data to Amazon Web Services (AWS) datacentres.
Amazon did not respond to the Guardian’s questions about whether it knew of Israel’s plan to migrate the mass surveillance data to its cloud platform. A spokesperson for the company said it respected “the privacy of our customers and we do not discuss our relationship without their consent, or have visibility into their workloads” stored in the cloud.
Asked about the winking mechanism, both Amazon and Google denied circumventing legally binding orders. “The idea that we would evade our legal obligations to the US government as a US company, or in any other country, is categorically wrong,” a Google spokesperson said.
With this threat in mind, Israeli officials inserted into the Nimbus deal a requirement for the companies to send a coded message – a “wink” – to its government, revealing the identity of the country they had been compelled to hand over Israeli data to when gagged from saying so.
Leaked documents from Israel’s finance ministry, which include a finalised version of the Nimbus agreement, suggest the secret code would take the form of payments – referred to as “special compensation” – made by the companies to the Israeli government.
According to the documents, the payments must be made “within 24 hours of the information being transferred” and correspond to the telephone dialing code of the foreign country, amounting to sums between 1,000 and 9,999 shekels.
Under the terms of the deal, the mechanism works like this:
If either Google or Amazon provides information to authorities in the US, where the dialing code is +1, and they are prevented from disclosing their cooperation, they must send the Israeli government 1,000 shekels.
If, for example, the companies receive a request for Israeli data from authorities in Italy, where the dialing code is +39, they must send 3,900 shekels.
If the companies conclude the terms of a gag order prevent them from even signaling which country has received the data, there is a backstop: the companies must pay 100,000 shekels ($30,000) to the Israeli government.
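The arithmetic described above can be sketched in a few lines. The padding rule (the dialing code's digits become the leading digits of a four-digit shekel amount) is my inference from the two examples the documents give, not a rule stated verbatim in the contract:

```python
BACKSTOP_SHEKELS = 100_000  # paid when even the country cannot be signaled


def wink_amount(dialing_code: int) -> int:
    """Encode a country dialing code as a payment between 1,000 and 9,999 shekels.

    The code's digits lead the amount, zero-padded to four digits:
    +1 (US) -> 1,000 shekels; +39 (Italy) -> 3,900 shekels.
    """
    amount = int(str(dialing_code).ljust(4, "0"))
    if not 1_000 <= amount <= 9_999:
        raise ValueError(f"dialing code {dialing_code} does not fit the scheme")
    return amount


def candidate_codes(amount: int) -> list[int]:
    """Recover the dialing codes a payment could encode.

    Trailing zeros make decoding ambiguous (1,000 shekels could mean
    code 1, 10, or 100), so all consistent prefixes are returned.
    """
    digits = str(amount)
    return [int(digits[:i]) for i in range(1, 5)
            if int(digits[:i].ljust(4, "0")) == amount]
```

The ambiguity in `candidate_codes` is mostly theoretical: real dialing codes rarely collide this way, and the Israeli government presumably knows which jurisdictions are plausible.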
Legal experts, including several former US prosecutors, said the arrangement was highly unusual and carried risks for the companies as the coded messages could violate legal obligations in the US, where the companies are headquartered, to keep a subpoena secret.
“It seems awfully cute and something that if the US government or, more to the point, a court were to understand, I don’t think they would be particularly sympathetic,” a former US government lawyer said.
Several experts described the mechanism as a “clever” workaround that could comply with the letter of the law but not its spirit. “It’s kind of brilliant, but it’s risky,” said a former senior US security official.
Israeli officials appear to have acknowledged this, documents suggest. Their demands about how Google and Amazon respond to a US-issued order “might collide” with US law, they noted, and the companies would have to make a choice between “violating the contract or violating their legal obligations”.
Neither Google nor Amazon responded to the Guardian’s questions about whether they had used the secret code since the Nimbus contract came into effect.
“We have a rigorous global process for responding to lawful and binding orders for requests related to customer data,” Amazon’s spokesperson said. “We do not have any processes in place to circumvent our confidentiality obligations on lawfully binding orders.”
Google declined to comment on which of Israel’s stringent demands it had accepted in the completed Nimbus deal, but said it was “false” to “imply that we somehow were involved in illegal activity, which is absurd”.
A spokesperson for Israel’s finance ministry said: “The article’s insinuation that Israel compels companies to breach the law is baseless.”
‘No restrictions’
Israeli officials also feared a scenario in which its access to the cloud providers’ technology could be blocked or restricted.
In particular, officials worried that activists and rights groups could place pressure on Google and Amazon, or seek court orders in several European countries, to force them to terminate or limit their business with Israel if their technology were linked to human rights violations.
To counter the risks, Israel inserted controls into the Nimbus agreement which Google and Amazon appear to have accepted, according to government documents prepared after the deal was signed.
The documents state that the agreement prohibits the companies from revoking or restricting Israel’s access to their cloud platforms, either due to changes in company policy or because they find Israel’s use of their technology violates their terms of service.
Provided Israel does not infringe on copyright or resell the companies’ technology, “the government is permitted to make use of any service that is permitted by Israeli law”, according to a finance ministry analysis of the deal.
Both companies’ standard “acceptable use” policies state their cloud platforms should not be used to violate the legal rights of others, nor should they be used to engage in or encourage activities that cause “serious harm” to people.
However, according to an Israeli official familiar with the Nimbus project, there can be “no restrictions” on the kind of information moved into Google and Amazon’s cloud platforms, including military and intelligence data. The terms of the deal seen by the Guardian state that Israel is “entitled to migrate to the cloud or generate in the cloud any content data they wish”.
Israel inserted the provisions into the deal to avoid a situation in which the companies “decide that a certain customer is causing them damage, and therefore cease to sell them services”, one document noted.
The Intercept reported last year that the Nimbus project was governed by an “amended” set of confidential policies, citing a leaked internal report suggesting Google understood it would not be permitted to restrict the types of services used by Israel.
Last month, when Microsoft cut off Israeli access to some cloud and artificial intelligence services, it did so after confirming reporting by the Guardian and its partners, +972 and Local Call, that the military had stored a vast trove of intercepted Palestinian calls in the company’s Azure cloud platform.
Notifying the Israeli military of its decision, Microsoft said that using Azure in this way violated its terms of service and it was “not in the business of facilitating the mass surveillance of civilians”.
Under the terms of the Nimbus deal, Google and Amazon are prohibited from taking such action as it would “discriminate” against the Israeli government. Doing so would incur financial penalties for the companies, as well as legal action for breach of contract.
The Israeli finance ministry spokesperson said Google and Amazon are “bound by stringent contractual obligations that safeguard Israel’s vital interests”. They added: “These agreements are confidential and we will not legitimise the article’s claims by disclosing private commercial terms.”
The Record from Recorded Future News
Daryna Antoniuk
October 31st, 2025
Russia's Interior Ministry posted a video of raids on suspected developers of the Meduza Stealer malware, which has been sold to cybercriminals since 2023.
Russian police said they detained three hackers suspected of developing and selling the Meduza Stealer malware in a rare crackdown on domestic cybercrime.
The suspects were arrested in Moscow and the surrounding region, Russia’s Interior Ministry spokesperson Irina Volk said in a statement on Thursday.
The three “young IT specialists” are suspected of developing, using and selling malicious software designed to steal login credentials, cryptocurrency wallet data and other sensitive information, she added.
Police said they seized computer equipment, phones, and bank cards during raids on the suspects’ homes. A video released by the Interior Ministry shows officers breaking down doors and storming into apartments. When asked by police why he had been detained, one suspect replied in Russian, “I don’t really understand.”
Officials said the suspects began distributing Meduza Stealer through hacker forums roughly two years ago. In one incident earlier this year, the group allegedly used the malware to steal data from an organization in Russia’s Astrakhan region.
Authorities said the group also created another type of malware designed to disable antivirus protection and build botnets for large-scale cyberattacks. The malicious program was not identified. The three face up to four years in prison if convicted.
Meduza Stealer first appeared in 2023, sold on Russian-language hacking forums and Telegram channels as a service for a fee. It has since been used in cyberattacks targeting both personal and financial data.
Ukrainian officials have previously linked the malware to attacks on domestic military and government entities. In one campaign last October, threat actors used a fake Telegram “technical support” bot to distribute the malware to users of Ukraine’s government mobilization app.
Researchers have also observed Meduza Stealer infections in Poland and inside Russia itself — including one 2023 campaign that used phishing emails impersonating an industrial automation company.
Russia’s law enforcement agencies rarely pursue cybercriminals operating inside the country. But researchers say that has begun to change.
According to a recent report by Recorded Future’s Insikt Group, Moscow’s stance has shifted “from passive tolerance to active management” of the hacking ecosystem — a strategy that includes selective arrests and public crackdowns intended to reinforce state authority while preserving useful talent.
Such moves mark a notable shift in a country long seen as a safe haven for financially motivated hackers. Researchers say many of these actors are now decentralizing their operations to evade both Western and domestic surveillance.
The Record is an editorially independent unit of Recorded Future.
techcrunch.com
Lorenzo Franceschi-Bicchierai
10:00 PM PDT · October 28, 2025
On Monday, researchers at cybersecurity giant Kaspersky published a report identifying a new spyware called Dante that they say targeted Windows victims in Russia and neighboring Belarus. The researchers said the Dante spyware is made by Memento Labs, a Milan-based surveillance tech maker that was formed in 2019 after a new owner acquired and took over early spyware maker Hacking Team.
Memento chief executive Paolo Lezzi confirmed to TechCrunch that the spyware caught by Kaspersky does indeed belong to Memento.
In a call, Lezzi blamed one of the company’s government customers for exposing Dante, saying the customer used an outdated version of the Windows spyware that will no longer be supported by Memento by the end of this year.
“Clearly they used an agent that was already dead,” Lezzi told TechCrunch, referring to an “agent” as the technical word for the spyware planted on the target’s computer.
“I thought [the government customer] didn’t even use it anymore,” said Lezzi.
Lezzi, who said he was not sure which of the company’s customers were caught, added that Memento had already requested that all of its customers stop using the Windows malware. Lezzi said the company had warned customers that Kaspersky had detected Dante spyware infections since December 2024. He added that Memento plans to send a message to all its customers on Wednesday asking them once again to stop using its Windows spyware.
He said that Memento currently only develops spyware for mobile platforms. The company also develops some zero-days — meaning security flaws in software unknown to the vendor that can be used to deliver spyware — though it mostly sources its exploits from outside developers, according to Lezzi.
When reached by TechCrunch, Kaspersky spokesperson Mai Al Akkad would not say which government Kaspersky believes is behind the espionage campaign, saying only that it was “someone who has been able to use Dante software.”
“The group stands out for its strong command of Russian and knowledge of local nuances, traits that Kaspersky observed in other campaigns linked to this [government-backed] threat. However, occasional errors suggest that the attackers were not native speakers,” Al Akkad told TechCrunch.
In its new report, Kaspersky said it found a hacking group using the Dante spyware, which it refers to as “ForumTroll,” targeting people with invitations to the Primakov Readings, a Russian politics and economics forum. Kaspersky said the hackers targeted a broad range of industries in Russia, including media outlets, universities, and government organizations.
Kaspersky’s discovery of Dante came after the Russian cybersecurity firm said it detected a “wave” of cyberattacks with phishing links that were exploiting a zero-day in the Chrome browser. Lezzi said that the Chrome zero-day was not developed by Memento.
In its report, Kaspersky researchers concluded that Memento “kept improving” the spyware originally developed by Hacking Team until 2022, when the spyware was “replaced by Dante.”
Lezzi conceded that it is possible that some “aspects” or “behaviors” of Memento’s Windows spyware were left over from spyware developed by Hacking Team.
A telltale sign that the spyware caught by Kaspersky belonged to Memento was that the developers allegedly left the word “DANTEMARKER” in the spyware’s code, a clear reference to the name Dante, which Memento had previously and publicly disclosed at a surveillance tech conference, per Kaspersky.
Much like Memento’s Dante spyware, some versions of Hacking Team’s spyware, codenamed Remote Control System, were named after historical Italian figures, such as Leonardo da Vinci and Galileo Galilei.
A history of hacks
In 2019, Lezzi purchased Hacking Team and rebranded it to Memento Labs. According to Lezzi, he paid only one euro for the company and the plan was to start over.
“We want to change absolutely everything,” the Memento owner told Motherboard after the acquisition in 2019. “We’re starting from scratch.”
A year later, Hacking Team’s CEO and founder David Vincenzetti announced that Hacking Team was “dead.”
Lezzi told TechCrunch that when he acquired Hacking Team, the company had only three government customers remaining, a far cry from the more than 40 government customers Hacking Team had in 2015. That same year, a hacktivist called Phineas Fisher broke into the startup’s servers and siphoned off some 400 gigabytes of internal emails, contracts, documents, and the source code for its spyware.
Before the hack, Hacking Team’s customers in Ethiopia, Morocco, and the United Arab Emirates were caught targeting journalists, critics, and dissidents using the company’s spyware. Once Phineas Fisher published the company’s internal data online, journalists revealed that a Mexican regional government used Hacking Team’s spyware to target local politicians and that Hacking Team had sold to countries with human rights abuses, including Bangladesh, Saudi Arabia, and Sudan, among others.
Lezzi declined to tell TechCrunch how many customers Memento currently has but implied it was fewer than 100 customers. He also said that there are only two current Memento employees left from Hacking Team’s former staff.
The discovery of Memento’s spyware shows that this type of surveillance technology keeps proliferating, according to John Scott-Railton, a senior researcher who has investigated spyware abuses for a decade at the University of Toronto’s Citizen Lab.
It also shows that a controversial company can die because of a spectacular hack and several scandals, and yet a new company with brand-new spyware can still come out of its ashes.
“It tells us that we need to keep up the fear of consequences,” Scott-Railton told TechCrunch. “It says a lot that echoes of the most radioactive, embarrassed and hacked brand are still around.”
www.axios.com
Sam Sabin
F5 warned shareholders Monday that it expects its revenue growth to slow over the next two quarters as many of its customers pause or slow down their buying decisions while responding to a recent major cyberattack.
Why it matters: The comments are the first from F5 about how much the nation-state attack — which was disclosed about two weeks ago — is likely going to impact the company's bottom line.
Driving the news: F5 CEO François Locoh-Donou said during the company's fourth-quarter earnings call that the company is increasing its internal cybersecurity investments as it responds to the highly sophisticated hack.
"We are disappointed that this has happened and very aware as a team and as a company of the burden that this has placed in our customers who have had to work long hours to upgrade" affected products, Locoh-Donou told investors on the call.
Catch up quick: Bloomberg reported the attackers are likely linked to the Chinese government and have been lurking in the company's systems since 2023.
Zoom in: So far, F5 has identified and notified an unspecified number of customers who have had their data stolen as a result of the hacks, Locoh-Donou said.
The company has also worked with thousands of customers in recent weeks to deploy security fixes with minimal operational disruptions, he added.
F5 will enhance its bug bounty program and is working with outside firms to review the security of its code for vulnerabilities, he said.
The company has also transitioned Michael Montoya, the company's security chief, to a new role as its chief technology operations officer to help further embed security into every aspect of the company's operations.
Yes, but: Locoh-Donou told shareholders that most affected customers have said their stolen data was not sensitive and "they're not concerned about it."
Threat level: Locoh-Donou said the company is "acutely aware" that nation-state hackers have been increasingly targeting networking security firms like F5 in recent years.
"We are committed to learning from this incident, sharing our insights with our peers and driving collaborative innovation to collectively strengthen the protection of critical infrastructure across the industry," he said.
By Reuters
October 29, 2025
BANGKOK, Oct 29 (Reuters) - India plans to send an airplane to repatriate some 500 of its nationals who fled from a military raid on a scam centre in Myanmar into Thailand, Thai Prime Minister Anutin Charnvirakul said on Wednesday.
Since last week, Myanmar’s military has conducted a series of operations against the KK Park cybercrime compound, driving more than 1,500 people from 28 countries into the Thai border town of Mae Sot, according to local authorities.
The border areas between Thailand, Myanmar, Laos and Cambodia have become hubs for online fraud since the COVID-19 pandemic, and the United Nations says billions of dollars have been earned from trafficking hundreds of thousands of people forced to work in the compounds.
KK Park is notorious for its involvement in transnational cyberscams. The sprawling compound and others nearby are run primarily by Chinese criminal gangs and guarded by local militia groups aligned to Myanmar's military.
Anutin said the Indian ambassador would meet the head of immigration to discuss speeding up the legal verification process for the 500 Indian nationals ahead of their flight back to India.
"They don't want this to burden us," Anutin said. "They will send a plane to pick these victims up... the plane will land directly in Mae Sot," he said.
Indian foreign ministry spokesperson Randhir Jaiswal said India's embassy was working with Thailand "to verify their nationality and to repatriate them, after necessary legal formalities are completed in Thailand."
Earlier this year India also sent a plane to repatriate its nationals after thousands were freed from cyberscam centres along the Thai-Myanmar border following a regional crackdown.
TEE.fail: Breaking Trusted Execution Environments via DDR5 Memory Bus Interposition
With the increasing popularity of remote computation like cloud computing, users are increasingly losing control over their data, uploading it to remote servers that they do not control. Trusted Execution Environments (TEEs) aim to reduce this trust, offering users promises such as privacy and integrity of their data as well as correctness of computation. With the introduction of TEEs and Confidential Computing features to server hardware offered by Intel, AMD, and Nvidia, modern TEE implementations aim to provide hardware-backed integrity and confidentiality to entire virtual machines or GPUs, even when attackers have full control over the system's software, for example via root or hypervisor access. Over the past few years, TEEs have been used to execute confidential cryptocurrency transactions, train proprietary AI models, protect end-to-end encrypted chats, and more.
In this work, we show that the security guarantees of modern TEE offerings by Intel and AMD can be broken cheaply and easily, by building a memory interposition device that allows attackers to physically inspect all memory traffic inside a DDR5 server. Making this worse, despite the increased complexity and speed of DDR5 memory, we show how such an interposition device can be built cheaply and easily, using only off the shelf electronic equipment. This allows us for the first time to extract cryptographic keys from Intel TDX and AMD SEV-SNP with Ciphertext Hiding, including in some cases secret attestation keys from fully updated machines in trusted status. Beyond breaking CPU-based TEEs, we also show how extracted attestation keys can be used to compromise Nvidia's GPU Confidential Computing, allowing attackers to run AI workloads without any TEE protections. Finally, we examine the resilience of existing deployments to TEE compromises, showing how extracted attestation keys can potentially be used by attackers to extract millions of dollars of profit from various cryptocurrency and cloud compute services.
The Record from Recorded Future News
Daryna Antoniuk
October 27th, 2025
Sweden’s power grid operator is investigating a data breach after a ransomware group threatened to leak hundreds of gigabytes of purportedly stolen internal data.
State-owned Svenska kraftnät, which operates the country’s electricity transmission system, said the incident affected a “limited external file transfer solution” and did not disrupt Sweden’s power supply.
“We take this breach very seriously and have taken immediate action,” said Chief Information Security Officer Cem Göcgören in a statement. “We understand that this may cause concern, but the electricity supply has not been affected.”
The ransomware gang Everest claimed responsibility for the attack on its leak site over the weekend, alleging it had exfiltrated about 280 gigabytes of data and saying it would publish it unless the agency complied with its demands.
The same group has previously claimed attacks on Dublin Airport, Air Arabia, and U.S. aerospace supplier Collins Aerospace — incidents that disrupted flight operations across several European cities in September. The group’s claims could not be independently verified.
Svenska kraftnät said it is working closely with the police and national cybersecurity authorities to determine the extent of the breach and what data may have been exposed. The utility has not attributed the attack to any specific threat actor.
“Our current assessment is that mission-critical systems have not been affected,” Göcgören said. “At this time, we are not commenting on perpetrators or motives until we have confirmed information.”
vxdb.sh | Journalist | Cybercrime News
It is human nature to be competitive and to try your best against others, and video games are no different. Major esports tournament prize pools regularly reach into the millions; last year the CS2 PGL Major in Copenhagen had a prize pool of $1.25M.
Outside the esports realm, cheating is still very prevalent. Games like Fortnite, Apex Legends, and CS2, and even non-competitive games like Minecraft or Roblox, have cheating problems. Most, if not all, top-tier cheats aren’t free. Instead they rely on a subscription-based monetization model, where users pay for access to private builds or regular updates designed to evade detection by the game’s anti-cheat. Cheat developers also use resellers, who advertise and sell the cheat on the developers’ behalf in exchange for a cut of the profits.
Most players don’t want to, or can’t, pay for premium cheats, so they hunt for free alternatives or cracked versions of paid cheats on sketchy forums, YouTube, or even GitHub. While some free cheats do exist, they usually have few features, are slower to update, and are quickly detected by anti-cheat systems, meaning they’ll get you banned fast, sometimes instantly. A significant portion of these “free” alternatives are security risks: in many cases the download contains info stealers, Discord token grabbers, or RATs. In other instances, the advertised download is a working cheat, but malware executes in the background without the user’s knowledge.
How threat actors spread their malware
Cybercriminals weaponize YouTube by posting videos that advertise free cheats, executors, or “cracked” cheats, then use the video description or pinned comments to funnel viewers to a download link. Many videos use the service Linkvertise, which forces users through a handful of ads and suspicious downloads before reaching the final link, where the file is hosted on a site like MediaFire or Mega. These videos are posted on stolen or fake YouTube accounts created and advertised by what are called traffer teams.
What are Traffer Teams?
"Traffer teams manage the entire operation, recruiting affiliates (traffers), handling monetization, and managing/crypting stealer builds. Traffer gangs recruit affiliates who spread the malware, often driving app downloads from YouTube, TikTok, and other platforms. Traffers are commonly paid a percentage of these stolen logs or receive a direct payment for installs. Traffer gangs will typically monetize these stolen logs by selling them directly to buyers or cashing out themselves," as per Benjamin Brundage, CEO of Synthient.
In a recent upload by researcher Eric Parker, a YouTube channel was discovered repeatedly uploading videos advertising so-called “Valorant Skins Changer,” “Roblox Executor,” and similar “free hacks,” all with oddly similar thumbnails. Each video’s description contained a download link that redirected users to a Google Sites page at "sites[.]google[.]com/view/lyteam".
This site is operated by a Traffer Team known as LyTeam, which promotes and distributes info-stealing malware under the guise of free game cheats.
Later in the same video, Parker downloaded and analyzed a .dll file hosted on the LyTeam site. When uploaded to VirusTotal, the sample was identified as a strain of Lumma Stealer, a well-known info-stealing malware family that harvests browser credentials and crypto wallets.
How to stay safe
Don’t click random links or run files you find on the internet; if needed, use antivirus software to scan files on your computer. Run sketchy files in a virtual machine or sandbox, or better yet, check them with VirusTotal first.
Staying safe doesn’t mean being paranoid 24/7; it’s about awareness.
Thank you for reading,
vxdb :)
LayerX discovered the first vulnerability impacting OpenAI’s new ChatGPT Atlas browser, allowing bad actors to inject malicious instructions into ChatGPT’s “memory” and execute remote code. This exploit can allow attackers to infect systems with malicious code, grant themselves access privileges, or deploy malware.
The vulnerability affects ChatGPT users on any browser, but it is particularly dangerous for users of OpenAI’s new agentic browser: ChatGPT Atlas. LayerX has found that Atlas currently does not include any meaningful anti-phishing protections, meaning that users of this browser are up to 90% more vulnerable to phishing attacks than users of traditional browsers like Chrome or Edge.
The exploit has been reported to OpenAI under Responsible Disclosure procedures. A summary is provided below, withholding technical details that would allow attackers to replicate the attack.
TL;DR: How the Exploit Works:
LayerX discovered how attackers can use a Cross-Site Request Forgery (CSRF) request to “piggyback” on the victim’s ChatGPT access credentials, in order to inject malicious instructions into ChatGPT’s memory. Then, when the user attempts to use ChatGPT for legitimate purposes, the tainted memories will be invoked, and can execute remote code that will allow the attacker to gain control of the user account, their browser, code they are writing, or systems they have access to.
While this vulnerability affects ChatGPT users on any browser, it is particularly dangerous for users of ChatGPT Atlas browser, since they are, by default, logged-in to ChatGPT, and since LayerX testing indicates that the Atlas browser is up to 90% more exposed than Chrome and Edge to phishing attacks.
A Step-by-Step Explanation:
Initially, the user is logged-in to ChatGPT, and holds an authentication cookie or token in their browser.
The user clicks a malicious link, leading them to a compromised web page.
The malicious page invokes a Cross-Site Request Forgery (CSRF) request to take advantage of the user’s pre-existing authentication into ChatGPT.
The CSRF exploit injects hidden instructions into ChatGPT’s memory, without the user’s knowledge, thereby “tainting” the core LLM memory.
The next time the user queries ChatGPT, the tainted memories are invoked, allowing deployment of malicious code that can give attackers control over systems or code.
Using Cross-Site Request Forgery (CSRF) To Access LLMs:
A cross-site request forgery (CSRF) attack is when an attacker tricks a user’s browser into sending an unintended, state-changing request to a website where the user is already authenticated, causing the site to perform actions as that user without their consent.
The attack occurs when a victim is logged in to a target site, which has session cookies stored in the browser. The victim visits or is redirected into a malicious page that issues a crafted request (via a form, image tag, link, or script) to the target site. The browser automatically includes the victim’s credentials (cookies, auth headers), so the target site processes the request as if the user initiated it.
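The mechanics described above can be modeled in a few lines. This is a generic, hypothetical sketch (the session store, function, and token names are illustrative, not LayerX’s or OpenAI’s code) of why a forged request succeeds when the server relies on cookies alone, and how a standard synchronizer token stops it:

```python
# Minimal model of CSRF: the browser auto-attaches session cookies to
# any request aimed at the site, regardless of which page triggered it.

SESSIONS = {"cookie-abc": "alice"}  # hypothetical server-side session store

def handle_request(cookie, csrf_token=None, expected_token=None):
    """Return True if a state-changing request is accepted."""
    if SESSIONS.get(cookie) is None:
        return False  # no valid session: nothing to forge
    if expected_token is not None:
        # Defense: require a per-session token that a third-party page
        # cannot read; the forged request will not carry it.
        return csrf_token == expected_token
    # With no token check (and cookies sent cross-site), the forged
    # request is indistinguishable from a legitimate one.
    return True

# The victim's browser attaches the cookie even when a malicious page
# submitted the request, so it is accepted:
assert handle_request("cookie-abc") is True
# A synchronizer token (or SameSite=Strict cookies) blocks the forgery:
assert handle_request("cookie-abc", csrf_token=None,
                      expected_token="t0k3n") is False
```

In practice, servers combine such tokens with `SameSite` cookie attributes and `Origin`-header checks; the sketch shows only the core failure mode.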
In most cases, a CSRF attack is aimed at actions such as changing the account email or password, initiating funds transfers, or making purchases under the user’s session.
When it comes to AI systems, however, attackers can use a CSRF attack to gain access to an AI system the user is logged in to, query it, or inject instructions into it.
Infecting ChatGPT’s Core “Memory”
ChatGPT’s “Memory” allows ChatGPT to remember useful details about users’ queries, chat and activities, such as preferences, constraints, projects, style notes, etc., and reuse them across future chats so that users don’t have to repeat themselves. In effect, they act like the LLM’s background memory or subconscious.
Once attackers have access to the user’s ChatGPT via the CSRF request, they can use it to inject hidden instructions to ChatGPT, that will affect future chats.
Like a person’s subconscious, once the right instructions are stored inside ChatGPT’s Memory, ChatGPT will be compelled to execute them, effectively becoming a malicious co-conspirator.
Moreover, once an account’s Memory has been infected, this infection is persistent across all devices that the account is used on – across home and work computers, and across different browsers – whether a user is using them on Chrome, Atlas, or any other browser. This makes the attack extremely “sticky,” and is especially dangerous for users who use the same account for both work and personal purposes.
ChatGPT Atlas Users Up to 90% More Exposed Than Other Browsers
While this vulnerability can be used against ChatGPT users on any browser, users of OpenAI’s ChatGPT browser are particularly vulnerable. This is for two reasons:
When you are using Atlas, you are, by default, logged-in to ChatGPT. This means that ChatGPT credentials are always stored in the browser, where they can be targeted by malicious CSRF requests.
ChatGPT Atlas is particularly bad at stopping phishing attacks. This means that users of Atlas are more exposed than users of other browsers.
LayerX tested Atlas against over 100 in-the-wild web vulnerabilities and phishing attacks. LayerX previously conducted the same test against other AI browsers such as Comet, Dia, and Genspark. The results were uninspiring, to say the least:
In the previous tests, whereas traditional browsers such as Edge and Chrome were able to stop about 50% of phishing attacks using their out-of-the-box protections, Comet and Genspark stopped only 7% (Dia generated results similar to those of Chrome).
Running the same test against Atlas showed even more stark results:
Out of 103 in-the-wild attacks that LayerX tested, ChatGPT Atlas allowed 97 to go through, a whopping 94.2% failure rate.
Compared to Edge (which stopped 53% of attacks in LayerX’s test) and Chrome (which stopped 47% of attacks), ChatGPT Atlas was able to successfully stop only 5.8% of malicious web pages, meaning that users of Atlas were nearly 90% more vulnerable to phishing attacks, compared to users of other browsers.
The implication is that not only are users of ChatGPT Atlas susceptible to attack vectors that can inject malicious instructions into their ChatGPT accounts, but, because Atlas does not include any meaningful anti-phishing protection, they are also at greater risk of encountering such attacks in the first place.
Proof of Concept: Injecting Malicious Code To ‘Vibe’ Coding
Below is an illustration of an attack vector exploiting this vulnerability, on an Atlas browser user who is vibe coding:
“Vibe coding” is a collaborative style where the developer treats the AI as a creative partner rather than a line-by-line executor. Instead of prescribing exact syntax, the developer shares the project’s intent and feel (e.g., architecture goals, tone, audience, aesthetic preferences, etc.) and other non-functional requirements.
ChatGPT then uses this holistic brief to produce code that works and matches the requested style, narrowing the gap between high-level ideas and low-level implementation. The developer’s role shifts from hand-coding to steering and refining the AI’s interpretation.
While ChatGPT offers some defenses against malicious instructions, effectiveness can vary with the attack’s sophistication and how the unwanted behavior entered Memory.
In some cases, the user may see a mild warning; in others, the attempt might be blocked. However, if cleverly masked, the code could evade detection altogether. For example, this is the subtle warning that this script received. At most, it’s a sidenote that is easy to miss within the blob of text:
• The Register
Carly Page
Thu 23 Oct 2025
Google has taken down thousands of YouTube videos that were quietly spreading password-stealing malware disguised as cracked software and game cheats.
Researchers at Check Point say the so-called "YouTube Ghost Network" hijacked and weaponized legitimate YouTube accounts to post tutorial videos that promised free copies of Photoshop, FL Studio, and Roblox hacks, but instead lured viewers into installing infostealers such as Rhadamanthys and Lumma.
The campaign, which has been running since 2021, surged in 2025, with the number of malicious videos tripling compared to previous years. More than 3,000 malware-laced videos have now been scrubbed from the platform after Check Point worked with Google to dismantle what it called one of the most significant malware delivery operations ever seen on YouTube.
Check Point says the Ghost Network relied on thousands of fake and compromised accounts working in concert to make malicious content look legitimate. Some posted the "tutorial" videos, others flooded comment sections with praise, likes, and emojis to give the illusion of trust, while a third set handled "community posts" that shared download links and passwords for the supposed cracked software.
"This operation took advantage of trust signals, including views, likes, and comments, to make malicious content seem safe," said Eli Smadja, security research group manager at Check Point. "What looks like a helpful tutorial can actually be a polished cyber trap. The scale, modularity, and sophistication of this network make it a blueprint for how threat actors now weaponise engagement tools to spread malware."
Once hooked, victims were typically instructed to disable antivirus software, then download an archive hosted on Dropbox, Google Drive, or MediaFire. Inside was malware rather than a working copy of the promised program, and once opened, the infostealers exfiltrated credentials, crypto wallets, and system data to remote command-and-control servers.
One hijacked channel with 129,000 subscribers posted a cracked version of Adobe Photoshop that racked up nearly 300,000 views and more than 1,000 likes. Another targeted cryptocurrency users, redirecting them to phishing pages hosted on Google Sites.
As Check Point tracked the network, it found the operators frequently rotated payloads and updated download links to outpace takedowns, creating a resilient ecosystem that could quickly regenerate even when accounts were banned.
Check Point says the Ghost Network's modular design, with uploaders, commenters, and link distributors, allowed campaigns to persist for years. The approach mimics a separate operation the firm has dubbed the "Stargazers Ghost Network" on GitHub, where fake developer accounts host malicious repositories.
While most of the malicious videos pushed pirated software, the biggest lure was gaming cheats – particularly for Roblox, which has an estimated 380 million monthly active players. Other videos dangled cracked copies of Microsoft Office, Lightroom, and Adobe tools. The "most viewed" malicious upload targeted Photoshop, drawing almost 300,000 views before Google's cleanup operation.
The surge in 2025 marks a sharp shift in how malware is being distributed. Where phishing emails and drive-by downloads once dominated, attackers are now exploiting the social credibility of mainstream platforms to bypass user skepticism.
"In today's threat landscape, a popular-looking video can be just as dangerous as a phishing email," Smadja said. "This takedown shows that even trusted platforms aren't immune to weaponization, but it also proves that with the right intelligence and partnerships, we can push back."
Check Point doesn't have concrete evidence as to who is operating this network. It said the primary beneficiaries currently appear to be cybercriminals motivated by profit, but this could change if nation-state groups use the same tactics and video content to attract high-value targets.
The YouTube Ghost Network's rise underscores how far online malware peddlers have evolved from spammy inbox bait. The ghosts may have been exorcised this time, but with engagement now an attack vector, the next haunting is only ever a click away.
iverify.io
By Matthias Frielingsdorf, VP of Research
Oct 21, 2025
iOS 26 changes how shutdown logs are handled, erasing key evidence of Pegasus and Predator spyware, creating new challenges for forensic investigators
As iOS 26 is being rolled out, our team noticed a particular change in how the operating system handles the shutdown.log file: it effectively erases crucial evidence of Pegasus and Predator spyware infections. This development poses a serious challenge for forensic investigators and individuals seeking to determine if their devices have been compromised at a time when spyware attacks are becoming more common.
The Power of the shutdown.log
For years, the shutdown.log file has been an invaluable, yet often overlooked, artifact in the detection of iOS malware. Located within the Sysdiagnoses in the Unified Logs section (specifically, Sysdiagnose Folder -> system_logs.logarchive -> Extra -> shutdown.log), it has served as a silent witness to the activities occurring on an iOS device, even during its shutdown sequence.
In 2021, the publicly known version of Pegasus spyware was found to leave discernible traces within this shutdown.log. These traces provided a critical indicator of compromise, allowing security researchers to identify infected devices. However, the developers behind Pegasus, NSO Group, are constantly refining their techniques, and by 2022 Pegasus had evolved.
Pegasus's Evolving Evasion Tactics
While Pegasus still left evidence in the shutdown.log, its methods became more sophisticated. Instead of leaving obvious entries, its operators began wiping the shutdown.log file completely. Yet even with this attempted erasure, their own processes still left behind subtle traces. This meant that a seemingly clean shutdown.log that nonetheless began with evidence of a Pegasus sample was, in itself, an indicator of compromise. Multiple cases of this behavior were observed through the end of 2022, highlighting the continuous adaptation of these malicious actors.
Following this period, it is believed that Pegasus developers implemented even more robust wiping mechanisms, likely monitoring device shutdown to ensure a thorough eradication of their presence from the shutdown.log. Researchers have noted instances where devices known to be active had their shutdown.log cleared, alongside other IOCs for Pegasus infections. This led to the conclusion that a cleared shutdown.log could serve as a good heuristic for identifying suspicious devices.
Predator's Similar Footprint
The sophisticated Predator spyware, observed in 2023, also appears to have learned from the past. Given that Predator was actively monitoring the shutdown.log, and considering the similar behavior seen in earlier Pegasus samples, it is highly probable that Predator, too, left traces within this critical log file.
iOS 26: An Unintended Cleanse
With iOS 26, Apple introduced a change (whether an intentional design decision or an unforeseen bug) that causes the shutdown.log to be overwritten on every device reboot. Previous versions appended a new entry each time, preserving each shutdown as its own snapshot. This means that any user who updates to iOS 26 and subsequently restarts their device will inadvertently erase all evidence of older Pegasus and Predator detections that might have been present in their shutdown.log.
This automatic overwriting, while potentially intended for system hygiene or performance, effectively sanitizes the very forensic artifact that has been instrumental in identifying these sophisticated threats. It could hardly come at a worse time: spyware attacks have been a constant in the news, and recent headlines show that high-powered executives and celebrities, not just civil society, are being targeted.
Identifying Pegasus 2022: A Specific IOC
For those still on iOS versions prior to 26, a specific IOC for Pegasus 2022 infections involved the presence of a /private/var/db/com.apple.xpc.roleaccountd.staging/com.apple.WebKit.Networking entry within the shutdown.log. This particular IOC also revealed a significant shift in NSO Group's tactics: they began using normal system process names instead of easily identifiable, similarly named processes, making detection more challenging.
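As a concrete illustration, the IOC above can be checked with a short script. This is a minimal sketch, not a forensic tool: only the IOC path itself comes from the article, and the synthetic log excerpt (including the `remaining client pid` line format) is illustrative.

```python
# Minimal IOC scan for the Pegasus 2022 indicator described above.
# The IOC path comes from the article; the sample excerpt below is
# synthetic and only illustrates the matching logic.

PEGASUS_2022_IOC = (
    "/private/var/db/com.apple.xpc.roleaccountd.staging/"
    "com.apple.WebKit.Networking"
)

def scan_shutdown_log(log_text):
    """Return the lines of a shutdown.log that contain the known IOC."""
    return [line for line in log_text.splitlines() if PEGASUS_2022_IOC in line]

# Synthetic excerpt standing in for a real shutdown.log:
sample = (
    "SIGTERM: exiting\n"
    "remaining client pid: 1234 (" + PEGASUS_2022_IOC + ")\n"
)
hits = scan_shutdown_log(sample)
print(f"{len(hits)} suspicious line(s) found")
```

To check a real device, extract shutdown.log from a sysdiagnose (at the path given earlier) and pass its text to `scan_shutdown_log`. Note that the absence of a hit does not prove a clean device; as described above, a wiped or empty shutdown.log can itself be suspicious.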
An image of a shutdown.log file
Correlating Logs for Deeper Insight (iOS 18 and Earlier)
For devices running iOS 18 or earlier, a more comprehensive approach to detection involved correlating containermanagerd log entries with shutdown.log events. Containermanagerd logs contain boot events and can retain data for several weeks. By comparing these boot events with shutdown.log entries, investigators could identify discrepancies. For example, if numerous boot events were observed before shutdown.log entries, it suggested that something was amiss and potentially being hidden.
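The correlation heuristic described above can be sketched as a simple count comparison. The data shapes here are assumptions (lists of timestamps extracted beforehand); a real analysis would first parse boot events out of the containermanagerd logs and entries out of shutdown.log.

```python
# Sketch of the correlation heuristic: if the device booted many more
# times than the shutdown.log records, entries may have been wiped.
# Input shapes (lists of timestamp strings) are assumptions for this sketch.

def compare_boot_counts(boot_events, shutdown_entries):
    """Flag a discrepancy between observed boots and shutdown.log records."""
    boots = len(boot_events)
    shutdowns = len(shutdown_entries)
    if boots > shutdowns:
        return (f"suspicious: {boots} boot events but only "
                f"{shutdowns} shutdown.log entries")
    return "consistent"

# Hypothetical timestamps for illustration:
boots = ["2025-01-03", "2025-02-11", "2025-03-20", "2025-04-02"]
shutdowns = ["2025-04-02"]
print(compare_boot_counts(boots, shutdowns))
```

A mismatch is only a heuristic, not proof of compromise; legitimate events (crashes, forced restarts) can also leave boots without corresponding shutdown entries.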
Before You Update
Given the implications of iOS 26's shutdown.log handling, it is crucial for users to take proactive steps:
Before updating to iOS 26, immediately take and save a sysdiagnose of your device. This will preserve your current shutdown.log and any potential evidence it may contain.
Consider holding off on updating to iOS 26 until Apple addresses this issue, ideally by releasing a bug fix that prevents the overwriting of the shutdown.log on boot.
Just months after being disrupted during Operation Cronos, the notorious LockBit ransomware group has reemerged — and it hasn’t wasted time. Check Point Research has confirmed that LockBit is back in operation and already extorting new victims.
Throughout September 2025, Check Point Research identified a dozen organizations targeted by the revived operation, with half of them infected by the newly released LockBit 5.0 variant and the rest by LockBit Black. The attacks span Western Europe, the Americas, and Asia, affecting both Windows and Linux systems, a clear sign that LockBit’s infrastructure and affiliate network are once again active.
A Rapid and Confident Comeback
At the beginning of September, LockBit officially announced its return on underground forums, unveiling LockBit 5.0 and calling for new affiliates to join. This latest version, internally codenamed “ChuongDong,” marks a significant evolution of the group’s encryptor family.
The newly observed LockBit 5.0 attacks span a broad range of targets — about 80% on Windows systems, and around 20% on ESXi and Linux environments. The quick reappearance of multiple active victims demonstrates that LockBit’s Ransomware-as-a-Service (RaaS) model has successfully reactivated its affiliate base.
From Disruption to Reorganization
Until its takedown in early 2024, LockBit was the most dominant RaaS operation globally, responsible for 20–30% of all data-leak site victim postings. Following Operation Cronos, several arrests and data seizures disrupted the group’s infrastructure. Competing ransomware programs, such as RansomHub and Qilin, briefly tried to absorb its affiliates.
However, LockBit’s administrator, LockBitSupp, evaded capture and continued to hint at a comeback on dark web forums. In May 2025, he posted defiantly on the RAMP forum: “We always rise up after being hacked.” By August, LockBitSupp reappeared again, claiming the group was “getting back to work,” a statement that quickly proved true.
A Divided Underground
While LockBit regained traction on RAMP, other major forums such as XSS continued to ban RaaS advertising. In early September, LockBitSupp sought reinstatement on XSS, even prompting a community vote, which ultimately failed.
Implications: A Familiar Threat Returns
LockBit’s reemergence underscores the group’s resilience and sophistication. Despite high-profile law enforcement actions and public setbacks, the group has once again managed to restore its operations, recruit affiliates, and resume extortion.
With its mature RaaS model, cross-platform reach, and proven reputation among cyber criminals, LockBit’s return represents a renewed threat to organizations across all sectors. September’s wave of infections likely marks only the beginning of a larger campaign — and October’s postings may confirm the group’s full operational recovery.
Brave (brave.com)
Authors
Shivan Kaul Sahib
Artem Chaikin
AI browsers remain vulnerable to prompt injection attacks via screenshots and hidden content, allowing attackers to exploit users' authenticated sessions.
This is the second post in a series about security and privacy challenges in agentic browsers. This vulnerability research was conducted by Artem Chaikin (Senior Mobile Security Engineer), and was written by Artem and Shivan Kaul Sahib (VP, Privacy and Security).
Building on our previous disclosure of the Perplexity Comet vulnerability, we’ve continued our security research across the agentic browser landscape. What we’ve found confirms our initial concerns: indirect prompt injection is not an isolated issue, but a systemic challenge facing the entire category of AI-powered browsers. This post examines additional attack vectors we’ve identified and tested across different implementations.
On request, we are withholding one additional vulnerability found in another browser for now. We plan on providing more details next week.
As we’ve written before, AI-powered browsers that can take actions on your behalf are powerful yet extremely risky. If you’re signed into sensitive accounts like your bank or your email provider in your browser, simply summarizing a Reddit post could result in an attacker being able to steal money or your private data.
As always, we responsibly reported these issues to the various companies listed below so the vulnerabilities could be addressed. As we’ve previously said, a safer Web is good for everyone. The thoughtful commentary and debate about secure agentic AI that was raised by our previous blog post in this series motivated our decision to continue researching and publicizing our findings.
Prompt injection via screenshots in Perplexity Comet
Perplexity’s Comet assistant lets users take screenshots on websites and ask questions about those images. These screenshots can be used as yet another way to inject prompts that bypass traditional text-based input sanitization. Malicious instructions embedded as nearly-invisible text within the image are processed as commands rather than (untrusted) content.
How the attack works:
Setup: An attacker embeds malicious instructions in Web content that are hard for humans to see. In our attack, we were able to hide prompt injection instructions in images using faint light blue text on a yellow background. This means that the malicious instructions are effectively hidden from the user.
Trigger: User-initiated screenshot capture of a page containing camouflaged malicious text.
Injection: Text recognition extracts text that’s imperceptible to human users (possibly via OCR though we can’t tell for sure since the Comet browser is not open-source). This extracted text is then passed to the LLM without distinguishing it from the user’s query.
Exploit: The injected commands instruct the AI to use its browser tools maliciously.