Cyberveillecurated by Decio
Nuage de tags
Mur d'images
Quotidien
Flux RSS
  • Flux RSS
  • Daily Feed
  • Weekly Feed
  • Monthly Feed
Filtres

Liens par page

  • 20 links
  • 50 links
  • 100 links

Filtres

Untagged links
page 2 / 244
AI browsers are a cybersecurity time bomb https://www.theverge.com/report/810083/ai-browser-cybersecurity-problems
02/11/2025 11:34:02
QRCode
archive.org
thumbnail

| The Verge theverge.com
by
Robert Hart
Oct 30, 2025, 4:53 PM GMT+1

Huge cyber breaches are on the horizon thanks to AI-powered web browsers like ChatGPT Atlas and Comet, experts warn.

Web browsers are getting awfully chatty. They got even chattier last week after OpenAI and Microsoft kicked the AI browser race into high gear with ChatGPT Atlas and a “Copilot Mode” for Edge. They can answer questions, summarize pages, and even take actions on your behalf. The experience is far from seamless yet, but it hints at a more convenient, hands-off future where your browser does lots of your thinking for you. That future could also be a minefield of new vulnerabilities and data leaks, cybersecurity experts warn. The signs are already here, and researchers tell The Verge the chaos is only just getting started.

Atlas and Copilot Mode are part of a broader land grab to control the gateway to the internet and to bake AI directly into the browser itself. That push is transforming what were once standalone chatbots on separate pages or apps into the very platform you use to navigate the web. They’re not alone. Established players are also in the race, such as Google, which is integrating its Gemini AI model into Chrome; Opera, which launched Neon; and The Browser Company, with Dia. Startups are also keen to stake a claim, such as AI startup Perplexity — best known for its AI-powered search engine, which made its AI-powered browser Comet freely available to everyone in early October — and Sweden’s Strawberry, which is still in beta and actively going after “disappointed Atlas users.”

In the past few weeks alone, researchers have uncovered vulnerabilities in Atlas allowing attackers to take advantage of ChatGPT’s “memory” to inject malicious code, grant themselves access privileges, or deploy malware. Flaws discovered in Comet could allow attackers to hijack the browser’s AI with hidden instructions. Perplexity, through a blog, and OpenAI’s chief information security officer, Dane Stuckey, acknowledged prompt injections as a big threat last week, though both described them as a “frontier” problem that has no firm solution.

“Despite some heavy guardrails being in place, there is a vast attack surface,” says Hamed Haddadi, professor of human-centered systems at Imperial College London and chief scientist at web browser company Brave. And what we’re seeing is just the tip of the iceberg.

With AI browsers, the threats are numerous. Foremost, they know far more about you and are “much more powerful than traditional browsers,” says Yash Vekaria, a computer science researcher at UC Davis. Even more than standard browsers, Vekaria says “there is an imminent risk from being tracked and profiled by the browser itself.” AI “memory” functions are designed to learn from everything a user does or shares, from browsing to emails to searches, as well as conversations with the built-in AI assistant. This means you’re probably sharing far more than you realise and the browser remembers it all. The result is “a more invasive profile than ever before,” Vekaria says. Hackers would quite like to get hold of that information, especially if coupled with stored credit card details and login credentials often found on browsers.

Another threat is inherent to the rollout of any new technology. No matter how careful developers are, there will inevitably be weaknesses hackers can exploit. This could range from bugs and coding errors that accidentally reveal sensitive data to major security flaws that could let hackers gain access to your system. “It’s early days, so expect risky vulnerabilities to emerge,” says Lukasz Olejnik, an independent cybersecurity researcher and visiting senior research fellow at King’s College London. He points to the “early Office macro abuses, malicious browser extensions, and mobiles prior to [the] introduction of permissions” as examples of previous security issues linked to the rollout of new technologies. “Here we go again.”

Some vulnerabilities are never found — sometimes leading to devastating zero-day attacks, named as there are zero days to fix the flaw — but thorough testing can slash the number of potential problems. With AI browsers, “the biggest immediate threat is the market rush,” Haddadi says. “These agentic browsers have not been thoroughly tested and validated.”

But AI browsers’ defining feature, AI, is where the worst threats are brewing. The biggest challenge comes with AI agents that act on behalf of the user. Like humans, they’re capable of visiting suspect websites, clicking on dodgy links, and inputting sensitive information into places sensitive information shouldn’t go, but unlike some humans, they lack the learned common sense that helps keep us safe online. Agents can also be misled, even hijacked, for nefarious purposes. All it takes is the right instructions. So-called prompt injections can range from glaringly obvious to subtle, effectively hidden in plain sight in things like images, screenshots, form fields, emails and attachments, and even something as simple as white text on a white background.

Worse yet, these attacks can be very difficult to anticipate and defend against. Automation means bad actors can try and try again until the agent does what they want, says Haddadi. “Interaction with agents allows endless ‘try and error’ configurations and explorations of methods to insert malicious prompts and commands.” There are simply far more chances for a hacker to break through when interacting with an agent, opening up a huge space for potential attacks. Shujun Li, a professor of cybersecurity at the University of Kent, says “zero-day vulnerabilities are exponentially increasing” as a result. Even worse: Li says as the flaw starts with an agent, detection will also be delayed, meaning potentially bigger breaches.

It’s not hard to imagine what might be in store. Olejnik sees scenarios where attackers use hidden instructions to get AI browsers to send out personal data or steal purchased goods by changing the saved address on a shopping site. To make things worse, Vekaria warns it’s “relatively easy to pull off attacks” given the current state of AI browsers, even with safeguards in place. “Browser vendors have a lot of work to do in order to make them more safe, secure, and private for the end users,” he says.

For some threats, experts say the only real way to keep safe using AI browsers is to simply avoid the marquee features entirely. Li suggests people save AI for “only when they absolutely need it” and know what they’re doing. Browsers should “operate in an AI-free mode by default,” he says. If you must use the AI agent features, Vekaria advises a degree of hand-holding. When setting a task, give the agent verified websites you know to be safe rather than letting it figure them out on its own. “It can end up suggesting and using a scam site,” he warns.

theverge.com EN 2025 AI-browsers browsers security risk
Announcing data collection consent changes for new Firefox extensions https://blog.mozilla.org/addons/2025/10/23/data-collection-consent-changes-for-new-firefox-extensions
02/11/2025 11:26:43
QRCode
archive.org

blog.mozilla.org – Mozilla Add-ons Community Blog
Alan Byrne October 23, 2025

As of November 3rd 2025, all new Firefox extensions will be required to specify if they collect or transmit personal data in their manifest.json file using the browser_specific_settings.gecko.data_collection_permissions key. This will apply to new extensions only, and not new versions of existing extensions. Extensions that do not collect or transmit any personal data are required to specify this by setting the none required data collection permission in this property.

This information will then be displayed to the user when they start to install the extension, alongside any permissions it requests.

This information will also be displayed on the addons.mozilla.org page, if it is publicly listed, and in the Permissions and Data section of the Firefox about:addons page for that extension. If an extension supports versions of Firefox prior to 140 for Desktop, or 142 for Android, then the developer will need to continue to provide the user with a clear way to control the add-on’s data collection and transmission immediately after installation of the add-on.

Once any extension starts using these data_collection_permissions keys in a new version, it will need to continue using them for all subsequent versions. Extensions that do not have this property set correctly, and are required to use it, will be prevented from being submitted to addons.mozilla.org for signing with a message explaining why.

In the first half of 2026, Mozilla will require all extensions to adopt this framework. But don’t worry, we’ll give plenty of notice via the add-ons blog. We’re also developing some new features to ease this transition for both extension developers and users, which we will announce here.

mozilla.org EN 2025 announce adddons browser firefox data-collection consent
Tata Motors confirms it fixed security flaws, which exposed company and customer data | TechCrunch https://techcrunch.com/2025/10/28/tata-motors-confirms-it-fixed-security-flaws-that-exposed-company-and-customer-data
02/11/2025 11:25:04
QRCode
archive.org
thumbnail

techcrunch.com
Jagmeet Singh
6:30 PM PDT · October 28, 2025

A security researcher found the Indian automotive giant exposing personal information of its customers, internal company reports, and dealers’ data. Tata confirmed it fixed the issues.

Indian automotive giant Tata Motors has fixed a series of security flaws that exposed sensitive internal data, including personal information of customers, company reports, and data related to its dealers.

Security researcher Eaton Zveare told TechCrunch that he discovered the flaws in Tata Motors’ E-Dukaan unit, an e-commerce portal for buying spare parts for Tata-made commercial vehicles. Headquartered in Mumbai, Tata Motors produces passenger cars, as well as commercial and defense vehicles. The company has a presence in 125 countries worldwide and seven assembly facilities, per its website.

Zveare said he found that the portal’s web source code included the private keys to access and modify data within Tata Motors’ account on Amazon Web Services, the researcher said in a blog post.

The exposed data, Zveare told TechCrunch, included hundreds of thousands of invoices containing customer information, such as their names, mailing addresses, and permanent account number (PAN), a 10-character unique identifier issued by the Indian government.

“Out of respect for not causing some type of alarm bell or massive egress bill at Tata Motors, there were no attempts to exfiltrate large amounts of data or download excessively large files,” the researcher told TechCrunch.

There were also MySQL database backups and Apache Parquet files that included various bits of private customer information and communication, the researcher noted.

The AWS keys also enabled access to over 70 terabytes of data related to Tata Motors’ FleetEdge fleet-tracking software. Zveare also found backdoor admin access to a Tableau account, which included data of over 8,000 users.
“As server admin, you had access to all of it. This primarily includes things like internal financial reports, performance reports, dealer scorecards, and various dashboards,” the researcher said.

The exposed data also included API access to Tata Motors’ fleet management platform, Azuga, which powers the company’s test drive website.

Shortly after discovering the issues, Zveare reported them to Tata Motors through the Indian computer emergency response team, known as CERT-In, in August 2023. Later in October 2023, Tata Motors told Zveare that it was working on fixing the AWS issues after securing the initial loopholes. However, the company did not say when the issues were fixed.

Tata Motors confirmed to TechCrunch that all the reported flaws were fixed in 2023 but would not say if it notified affected customers that their information was exposed.

“We can confirm that the reported flaws and vulnerabilities were thoroughly reviewed following their identification in 2023 and were promptly and fully addressed,” said Tata Motors communications head Sudeep Bhalla, when contacted by TechCrunch.

“Our infrastructure is regularly audited by leading cybersecurity firms, and we maintain comprehensive access logs to monitor for unauthorized activity. We also actively collaborate with industry experts and security researchers to strengthen our security posture and ensure timely mitigation of potential risks,” said Bhalla.

techcrunch.com EN 2025 India Tata automotive flaws data-breach
Python Software Foundation News: The PSF has withdrawn a $1.5 million proposal to US government grant program https://pyfound.blogspot.com/2025/10/NSF-funding-statement.html
02/11/2025 11:22:54
QRCode
archive.org
thumbnail

Python Software Foundation News
pyfound.blogspot.com
Monday, October 27, 2025

The PSF has withdrawn a $1.5 million proposal to US government grant program
In January 2025, the PSF submitted a proposal to the US government National Science Foundation under the Safety, Security, and Privacy of Open Source Ecosystems program to address structural vulnerabilities in Python and PyPI. It was the PSF’s first time applying for government funding, and navigating the intensive process was a steep learning curve for our small team to climb. Seth Larson, PSF Security Developer in Residence, serving as Principal Investigator (PI) with Loren Crary, PSF Deputy Executive Director, as co-PI, led the multi-round proposal writing process as well as the months-long vetting process. We invested our time and effort because we felt the PSF’s work is a strong fit for the program and that the benefit to the community if our proposal were accepted was considerable.

We were honored when, after many months of work, our proposal was recommended for funding, particularly as only 36% of new NSF grant applicants are successful on their first attempt. We became concerned, however, when we were presented with the terms and conditions we would be required to agree to if we accepted the grant. These terms included affirming the statement that we “do not, and will not during the term of this financial assistance award, operate any programs that advance or promote DEI, or discriminatory equity ideology in violation of Federal anti-discrimination laws.” This restriction would apply not only to the security work directly funded by the grant, but to any and all activity of the PSF as a whole. Further, violation of this term gave the NSF the right to “claw back” previously approved and transferred funds. This would create a situation where money we’d already spent could be taken back, which would be an enormous, open-ended financial risk.

Diversity, equity, and inclusion are core to the PSF’s values, as committed to in our mission statement:
The mission of the Python Software Foundation is to promote, protect, and advance the Python programming language, and to support and facilitate the growth of a diverse and international community of Python programmers.
Given the value of the grant to the community and the PSF, we did our utmost to get clarity on the terms and to find a way to move forward in concert with our values. We consulted our NSF contacts and reviewed decisions made by other organizations in similar circumstances, particularly The Carpentries.

In the end, however, the PSF simply can’t agree to a statement that we won’t operate any programs that “advance or promote” diversity, equity, and inclusion, as it would be a betrayal of our mission and our community.

We’re disappointed to have been put in the position where we had to make this decision, because we believe our proposed project would offer invaluable advances to the Python and greater open source community, protecting millions of PyPI users from attempted supply-chain attacks. The proposed project would create new tools for automated proactive review of all packages uploaded to PyPI, rather than the current process of reactive-only review. These novel tools would rely on capability analysis, designed based on a dataset of known malware. Beyond just protecting PyPI users, the outputs of this work could be transferable for all open source software package registries, such as NPM and Crates.io, improving security across multiple open source ecosystems.

In addition to the security benefits, the grant funds would have made a big difference to the PSF’s budget. The PSF is a relatively small organization, operating with an annual budget of around $5 million per year, with a staff of just 14. $1.5 million over two years would have been quite a lot of money for us, and easily the largest grant we’d ever received. Ultimately, however, the value of the work and the size of the grant were not more important than practicing our values and retaining the freedom to support every part of our community. The PSF Board voted unanimously to withdraw our application.

Giving up the NSF grant opportunity—along with inflation, lower sponsorship, economic pressure in the tech sector, and global/local uncertainty and conflict—means the PSF needs financial support now more than ever. We are incredibly grateful for any help you can offer. If you're already a PSF member or regular donor, you have our deep appreciation, and we urge you to share your story about why you support the PSF. Your stories make all the difference in spreading awareness about the mission and work of the PSF.

How to support the PSF:
Become a Member: When you sign up as a Supporting Member of the PSF, you become a part of the PSF. You’re eligible to vote in PSF elections, using your voice to guide our future direction, and you help us sustain what we do with your annual support.
Donate: Your donation makes it possible to continue our work supporting Python and its community, year after year.
Sponsor: If your company uses Python and isn’t yet a sponsor, send them our sponsorship page or reach out to sponsors@python.org today. The PSF is ever grateful for our sponsors, past and current, and we do everything we can to make their sponsorships beneficial and rewarding.

pyfound.blogspot.com EN python foundation US withdrawn
Introducing Aardvark: OpenAI’s agentic security researcher https://openai.com/index/introducing-aardvark/
02/11/2025 11:21:14
QRCode
archive.org

source: OpenAI openai.com
October 30, 2025

Now in private beta: an AI agent that thinks like a security researcher and scales to meet the demands of modern software.

Today, we’re announcing Aardvark, an agentic security researcher powered by GPT‑5.

Software security is one of the most critical—and challenging—frontiers in technology. Each year, tens of thousands of new vulnerabilities are discovered across enterprise and open-source codebases. Defenders face the daunting tasks of finding and patching vulnerabilities before their adversaries do. At OpenAI, we are working to tip that balance in favor of defenders.

Aardvark represents a breakthrough in AI and security research: an autonomous agent that can help developers and security teams discover and fix security vulnerabilities at scale. Aardvark is now available in private beta to validate and refine its capabilities in the field.

How Aardvark works
Aardvark continuously analyzes source code repositories to identify vulnerabilities, assess exploitability, prioritize severity, and propose targeted patches.

Aardvark works by monitoring commits and changes to codebases, identifying vulnerabilities, how they might be exploited, and proposing fixes. Aardvark does not rely on traditional program analysis techniques like fuzzing or software composition analysis. Instead, it uses LLM-powered reasoning and tool-use to understand code behavior and identify vulnerabilities. Aardvark looks for bugs as a human security researcher might: by reading code, analyzing it, writing and running tests, using tools, and more.

Diagram titled “AARDVARK — Vulnerability Discovery Agent Workflow” showing a process flow from Git repository to threat modeling, vulnerability discovery, validation sandbox, patching with Codex, and human review leading to a pull request.
Aardvark relies on a multi-stage pipeline to identify, explain, and fix vulnerabilities:

Analysis: It begins by analyzing the full repository to produce a threat model reflecting its understanding of the project’s security objectives and design.
Commit scanning: It scans for vulnerabilities by inspecting commit-level changes against the entire repository and threat model as new code is committed. When a repository is first connected, Aardvark will scan its history to identify existing issues. Aardvark explains the vulnerabilities it finds step-by-step, annotating code for human review.
Validation: Once Aardvark has identified a potential vulnerability, it will attempt to trigger it in an isolated, sandboxed environment to confirm its exploitability. Aardvark describes the steps taken to help ensure accurate, high-quality, and low false-positive insights are returned to users.
Patching: Aardvark integrates with OpenAI Codex to help fix the vulnerabilities it finds. It attaches a Codex-generated and Aardvark-scanned patch to each finding for human review and efficient, one-click patching.
Aardvark works alongside engineers, integrating with GitHub, Codex, and existing workflows to deliver clear, actionable insights without slowing development. While Aardvark is built for security, in our testing we’ve found that it can also uncover bugs such as logic flaws, incomplete fixes, and privacy issues.

Real impact, today
Aardvark has been in service for several months, running continuously across OpenAI’s internal codebases and those of external alpha partners. Within OpenAI, it has surfaced meaningful vulnerabilities and contributed to OpenAI’s defensive posture. Partners have highlighted the depth of its analysis, with Aardvark finding issues that occur only under complex conditions.

In benchmark testing on “golden” repositories, Aardvark identified 92% of known and synthetically-introduced vulnerabilities, demonstrating high recall and real-world effectiveness.

Aardvark for Open Source
Aardvark has also been applied to open-source projects, where it has discovered and we have responsibly disclosed numerous vulnerabilities—ten of which have received Common Vulnerabilities and Exposures (CVE) identifiers.

As beneficiaries of decades of open research and responsible disclosure, we’re committed to giving back—contributing tools and findings that make the digital ecosystem safer for everyone. We plan to offer pro-bono scanning to select non-commercial open source repositories to contribute to the security of the open source software ecosystem and supply chain.

We recently updated⁠ our outbound coordinated disclosure policy⁠ which takes a developer-friendly stance, focused on collaboration and scalable impact, rather than rigid disclosure timelines that can pressure developers. We anticipate tools like Aardvark will result in the discovery of increasing numbers of bugs, and want to sustainably collaborate to achieve long-term resilience.

Why it matters
Software is now the backbone of every industry—which means software vulnerabilities are a systemic risk to businesses, infrastructure, and society. Over 40,000 CVEs were reported in 2024 alone. Our testing shows that around 1.2% of commits introduce bugs—small changes that can have outsized consequences.

Aardvark represents a new defender-first model: an agentic security researcher that partners with teams by delivering continuous protection as code evolves. By catching vulnerabilities early, validating real-world exploitability, and offering clear fixes, Aardvark can strengthen security without slowing innovation. We believe in expanding access to security expertise. We're beginning with a private beta and will broaden availability as we learn.

Private beta now open
We’re inviting select partners to join the Aardvark private beta. Participants will gain early access and work directly with our team to refine detection accuracy, validation workflows, and reporting experience.

We’re looking to validate performance across a variety of environments. If your organization or open source project is interested in joining, you can apply here⁠.

openai.com EN 2025 AI LLM security aardvark agent security-researcher
10 Million Impacted by Conduent Data Breach https://www.securityweek.com/millions-impacted-by-conduent-data-breach/
02/11/2025 11:10:25
QRCode
archive.org

securityweek.com
ByIonut Arghire| October 30, 2025 (9:01 AM ET)
Updated: October 31, 2025 (2:36 AM ET)

The hackers stole names, addresses, dates of birth, Social Security numbers, and health and insurance information.

Business services provider Conduent is notifying more than 10 million people that their personal information was stolen in a January 2025 data breach.

The incident was disclosed publicly in late January, when Conduent confirmed system disruptions that affected government agencies in multiple US states.

In April, the company notified the Securities and Exchange Commission (SEC) that the attackers had stolen personal information from its systems.

Last week, Conduent started notifying users that their personal information was stolen in the incident, and submitted notices to Attorney General’s Offices in multiple states.

The hackers accessed Conduent’s network on October 21, 2024 and were evicted on January 13, 2025, after the attack was identified, the company says in the notification letter to the affected individuals.

During the time frame, the attackers exfiltrated various files from the network, including files containing personal information such as names, addresses, dates of birth, Social Security numbers, health insurance details, and medical information.

Conduent is not providing the affected people with free identity theft protection services, but encourages them to obtain free credit reports, place fraud alerts on their credit files, and place security freezes on their credit reports.

“Upon discovery of the incident, we safely restored our systems and operations and notified law enforcement. We are also notifying you in case you decide to take further steps to protect your information should you feel it appropriate to do so,” the notification letter reads.

Based on the data breach notice submitted with the authorities in Oregon, it appears that 10,515,849 individuals were impacted, with the largest number in Texas (4 million).

Conduent serves over 600 government and transportation organizations, and roughly half of Fortune 100 companies, across financial, pharmaceutical, and automobile sectors. The company supports roughly 100 million US residents across 46 states.

While the company has not shared details on the threat actor behind the attack, the Safepay ransomware group claimed the incident in February.

SecurityWeek has emailed Conduent for additional information and will update this article if the company responds.

*Updated with the number of impacted individuals from the Oregon Department of Justice.

securityweek.com EN 2025 Conduent Data-Breach
US company with access to biggest telecom firms uncovers breach by nation-state hackers https://www.reuters.com/business/media-telecom/us-company-with-access-biggest-telecom-firms-uncovers-breach-by-nation-state-2025-10-29/
02/11/2025 11:08:09
QRCode
archive.org

reuters.com By A.J. Vicens
October 29, 202511:10 PM GMT+1Updated October 29, 2025

Hackers accessed Ribbon's network in December 2024
Three customers impacted, according to ongoing investigation
Ribbon's breach part of broader trend targeting telecom firms
Oct 29 (Reuters) - Hackers working for an unnamed nation-state breached networks at Ribbon Communications (RBBN.O), opens new tab, a key U.S. telecommunications services company, and remained within the firm’s systems for nearly a year without being detected, a company spokesperson confirmed in a statement on Wednesday.
Ribbon Communications, a Texas-based company that provides technology to facilitate voice and data communications between separate tech platforms and environments, said in its October 23 10-Q filing, opens new tab with the Securities and Exchange Commission that the company learned early last month that people “reportedly associated with a nation-state actor” gained access to the company’s IT network, with initial access dating to early December 2024.

The hack has not been previously reported. It is perhaps the latest example of technology companies that play a critical role in the global telecommunications ecosystem being targeted as part of nation-state hacking campaigns.
Ribbon did not identify the nation-state actor, or disclose which of its customers were affected by the breach, but told Reuters in the statement that its investigation has so far revealed three “smaller customers” impacted.
“While we do not have evidence at this time that would indicate the threat actor gained access to any material information, we continue to work with our third-party experts to confirm this,” a Ribbon spokesperson said in an email. “We have also taken steps to further harden our network to prevent any future incidents.”

reuters.com EN 2025 Telecom US Ribbon data-breach
BRONZE BUTLER exploits Japanese asset management software vulnerability – Sophos News https://news.sophos.com/en-us/2025/10/30/bronze-butler-exploits-japanese-asset-management-software-vulnerability/
02/11/2025 10:59:25
QRCode
archive.org
thumbnail

sophos.com
October 30, 2025

The threat group targeted a LANSCOPE zero-day vulnerability (CVE-2025-61932)

In mid-2025, Counter Threat Unit™ (CTU) researchers observed a sophisticated BRONZE BUTLER campaign that exploited a zero-day vulnerability in Motex LANSCOPE Endpoint Manager to steal confidential information. The Chinese state-sponsored BRONZE BUTLER threat group (also known as Tick) has been active since 2010 and previously exploited a zero-day vulnerability in Japanese asset management product SKYSEA Client View in 2016. JPCERT/CC published a notice about the LANSCOPE issue on October 22, 2025.

Exploitation of CVE-2025-61932
In the 2025 campaign, CTU™ researchers confirmed that the threat actors gained initial access by exploiting CVE-2025-61932. This vulnerability allows remote attackers to execute arbitrary commands with SYSTEM privileges. CTU analysis indicates that the number of vulnerable internet-facing devices is low. However, attackers could exploit vulnerable devices within compromised networks to conduct privilege escalation and lateral movement. The U.S. Cybersecurity and Infrastructure Security Agency (CISA) added CVE-2025-61932 to the Known Exploited Vulnerabilities Catalog on October 22.

Command and control
CTU researchers confirmed that the threat actors used the Gokcpdoor malware in this campaign. As reported by a third party in 2023, Gokcpdoor can establish a proxy connection with a command and control (C2) server as a backdoor. The 2025 variant discontinued support for the KCP protocol and added multiplexing communication using a third-party library for its C2 communication (see Figure 1).

Comparison of function names in Gokcpdoor samples

Figure 1: Comparison of internal function names in the 2023 (left) and 2025 (right) Gokcpdoor samples

Furthermore, CTU researchers identified two different types of Gokcpdoor with distinct purposes:

The server type listens for incoming client connections, opening the port specified in its configuration. Some of the analyzed samples used 38000 while others used 38002. The C2 functionality enabled remote access.
The client type initiates connections to hard-coded C2 servers, establishing a communication tunnel to function as a backdoor.
On some compromised hosts, BRONZE BUTLER implemented the Havoc C2 framework instead of Gokcpdoor. Some Gokcpdoor and Havoc samples used the OAED Loader malware, which was also linked to BRONZE BUTLER in the 2023 report, to complicate the execution flow. This malware injects a payload into a legitimate executable according to its embedded configuration (see Figure 2).

Visual representation of execution flow that utilizes OAED Loader

Figure 2: Execution flow utilizing OAED Loader

Abuse of legitimate tools and services
CTU researchers also confirmed that the following tools were used for lateral movement and data exfiltration:

goddi (Go dump domain info) – An open-source Active Directory information dumping tool
Remote desktop – A legitimate remote desktop application used through a backdoor tunnel
7-Zip – An open-source file archiver used for data exfiltration
BRONZE BUTLER also accessed the following cloud storage services via the web browser during remote desktop sessions, potentially attempting to exfiltrate the victim’s confidential information:

file.io
LimeWire
Piping Server
Recommendations
CTU researchers recommend that organizations upgrade vulnerable LANSCOPE servers as appropriate in their environments. Organizations should also review internet-facing LANSCOPE servers that have the LANSCOPE client program (MR) or detection agent (DA) installed to determine if there is a business need for them to be publicly exposed.

Detections and indicators
The following Sophos protections detect activity related to this threat:

Torj/BckDr-SBL
Mal/Generic-S
The threat indicators in Table 1 can be used to detect activity related to this threat. Note that IP addresses can be reallocated. The IP addresses may contain malicious content, so consider the risks before opening them in a browser.

Indicator Type Context
932c91020b74aaa7ffc687e21da0119c MD5 hash Gokcpdoor variant used by BRONZE BUTLER
(oci.dll)
be75458b489468e0acdea6ebbb424bc898b3db29 SHA1 hash Gokcpdoor variant used by BRONZE BUTLER
(oci.dll)
3c96c1a9b3751339390be9d7a5c3694df46212fb97ebddc074547c2338a4c7ba SHA256 hash Gokcpdoor variant used by BRONZE BUTLER
(oci.dll)
4946b0de3b705878c514e2eead096e1e MD5 hash Havoc sample used by BRONZE BUTLER
(MaxxAudioMeters64LOC.dll)
1406b4e905c65ba1599eb9c619c196fa5e1c3bf7 SHA1 hash Havoc sample used by BRONZE BUTLER
(MaxxAudioMeters64LOC.dll)
9e581d0506d2f6ec39226f052a58bc5a020ebc81ae539fa3a6b7fc0db1b94946 SHA256 hash Havoc sample used by BRONZE BUTLER
(MaxxAudioMeters64LOC.dll)
8124940a41d4b7608eada0d2b546b73c010e30b1 SHA1 hash goddi tool used by BRONZE BUTLER
(winupdate.exe)
704e697441c0af67423458a99f30318c57f1a81c4146beb4dd1a88a88a8c97c3 SHA256 hash goddi tool used by BRONZE BUTLER
(winupdate.exe)
38[.]54[.]56[.]57 IP address Gokcpdoor C2 server used by BRONZE BUTLER;
uses TCP port 443
38[.]54[.]88[.]172 IP address Havoc C2 server used by BRONZE BUTLER;
uses TCP port 443
38[.]54[.]56[.]10 IP address Connected to ports opened by Gokcpdoor variant
used by BRONZE BUTLER
38[.]60[.]212[.]85 IP address Connected to ports opened by Gokcpdoor variant
used by BRONZE BUTLER
108[.]61[.]161[.]118 IP address Connected to ports opened by Gokcpdoor variant
used by BRONZE BUTLER

sophos.com EN 2025 LANSCOPE CVE-2025-61932 BRONZE-BUTLER
Leaker reveals which Pixels are vulnerable to Cellebrite phone hacking https://arstechnica.com/gadgets/2025/10/leaker-reveals-which-pixels-are-vulnerable-to-cellebrite-phone-hacking/
31/10/2025 22:23:35
QRCode
archive.org
thumbnail
  • Ars Technica
    Ryan Whitwam – 30 oct. 2025 21:29 | 79

Cellebrite can apparently extract data from most Pixel phones, unless they’re running GrapheneOS.

Despite being a vast repository of personal information, smartphones used to have little by way of security. That has thankfully changed, but companies like Cellebrite offer law enforcement tools that can bypass security on some devices. The company keeps the specifics quiet, but an anonymous individual recently logged in to a Cellebrite briefing and came away with a list of which of Google’s Pixel phones are vulnerable to Cellebrite phone hacking.

This person, who goes by the handle rogueFed, posted screenshots from the recent Microsoft Teams meeting to the GrapheneOS forums (spotted by 404 Media). GrapheneOS is an Android-based operating system that can be installed on select phones, including Pixels. It ships with enhanced security features and no Google services. Because of its popularity among the security-conscious, Cellebrite apparently felt the need to include it in its matrix of Pixel phone support.

The screenshot includes data on the Pixel 6, Pixel 7, Pixel 8, and Pixel 9 family. It does not list the Pixel 10 series, which launched just a few months ago. The phone support is split up into three different conditions: before first unlock, after first unlock, and unlocked. The before first unlock (BFU) state means the phone has not been unlocked since restarting, so all data is encrypted. This is traditionally the most secure state for a phone. In the after first unlock (AFU) state, data extraction is easier. And naturally, an unlocked phone is open season on your data.
At least according to Cellebrite, GrapheneOS is more secure than what Google offers out of the box. The company is telling law enforcement in these briefings that its technology can extract data from Pixel 6, 7, 8, and 9 phones in unlocked, AFU, and BFU states on stock software. However, it cannot brute-force passcodes to enable full control of a device. The leaker also notes law enforcement is still unable to copy an eSIM from Pixel devices. Notably, the Pixel 10 series is moving away from physical SIM cards.

For those same phones running GrapheneOS, police can expect to have a much harder time. The Cellebrite table says that Pixels with GrapheneOS are only accessible when running software from before late 2022—both the Pixel 8 and Pixel 9 were launched after that. Phones in both BFU and AFU states are safe from Cellebrite on updated builds, and as of late 2024, even a fully unlocked GrapheneOS device is immune from having its data copied. An unlocked phone can be inspected in plenty of other ways, but data extraction in this case is limited to what the user can access.

The original leaker claims to have dialed into two calls so far without detection. However, rogueFed also called out the meeting organizer by name (the second screenshot, which we are not reposting). Odds are that Cellebrite will be screening meeting attendees more carefully now.

We’ve reached out to Google to inquire about why a custom ROM created by a small non-profit is more resistant to industrial phone hacking than the official Pixel OS. We’ll update this article if Google has anything to say.

arstechnica.com EN 2025 Cellebrite Pixels leak
Revealed: Israel demanded Google and Amazon use secret ‘wink’ to sidestep legal orders https://www.theguardian.com/us-news/2025/oct/29/google-amazon-israel-contract-secret-code
31/10/2025 15:12:52
QRCode
archive.org
thumbnail

theguardian.com
Harry Davies and Yuval Abraham in Jerusalem
Wed 29 Oct 2025 14.15 CET

The tech giants agreed to extraordinary terms to clinch a lucrative contract with the Israeli government, documents show

When Google and Amazon negotiated a major $1.2bn cloud-computing deal in 2021, their customer – the Israeli government – had an unusual demand: agree to use a secret code as part of an arrangement that would become known as the “winking mechanism”.

The demand, which would require Google and Amazon to effectively sidestep legal obligations in countries around the world, was born out of Israel’s concerns that data it moves into the global corporations’ cloud platforms could end up in the hands of foreign law enforcement authorities.

Like other big tech companies, Google and Amazon’s cloud businesses routinely comply with requests from police, prosecutors and security services to hand over customer data to assist investigations.

This process is often cloaked in secrecy. The companies are frequently gagged from alerting the affected customer their information has been turned over. This is either because the law enforcement agency has the power to demand this or a court has ordered them to stay silent.

For Israel, losing control of its data to authorities overseas was a significant concern. So to deal with the threat, officials created a secret warning system: the companies must send signals hidden in payments to the Israeli government, tipping it off when it has disclosed Israeli data to foreign courts or investigators.

To clinch the lucrative contract, Google and Amazon agreed to the so-called winking mechanism, according to leaked documents seen by the Guardian, as part of a joint investigation with Israeli-Palestinian publication +972 Magazine and Hebrew-language outlet Local Call.

Based on the documents and descriptions of the contract by Israeli officials, the investigation reveals how the companies bowed to a series of stringent and unorthodox “controls” contained within the 2021 deal, known as Project Nimbus. Both Google and Amazon’s cloud businesses have denied evading any legal obligations.

The strict controls include measures that prohibit the US companies from restricting how an array of Israeli government agencies, security services and military units use their cloud services. According to the deal’s terms, the companies cannot suspend or withdraw Israel’s access to its technology, even if it’s found to have violated their terms of service.

Israeli officials inserted the controls to counter a series of anticipated threats. They feared Google or Amazon might bow to employee or shareholder pressure and withdraw Israel’s access to its products and services if linked to human rights abuses in the occupied Palestinian territories.

They were also concerned the companies could be vulnerable to overseas legal action, particularly in cases relating to the use of the technology in the military occupation of the West Bank and Gaza.

The terms of the Nimbus deal would appear to prohibit Google and Amazon from the kind of unilateral action taken by Microsoft last month, when it disabled the Israeli military’s access to technology used to operate an indiscriminate surveillance system monitoring Palestinian phone calls.

Microsoft, which provides a range of cloud services to Israel’s military and public sector, bid for the Nimbus contract but was beaten by its rivals. According to sources familiar with negotiations, Microsoft’s bid suffered as it refused to accept some of Israel’s demands.

As with Microsoft, Google and Amazon’s cloud businesses have faced scrutiny in recent years over the role of their technology – and the Nimbus contract in particular – in Israel’s two-year war on Gaza.

During its offensive in the territory, where a UN commission of inquiry concluded that Israel has committed genocide, the Israeli military has relied heavily on cloud providers to store and analyse large volumes of data and intelligence information.

One such dataset was the vast collection of intercepted Palestinian calls that until August was stored on Microsoft’s cloud platform. According to intelligence sources, the Israeli military planned to move the data to Amazon Web Services (AWS) datacentres.

Amazon did not respond to the Guardian’s questions about whether it knew of Israel’s plan to migrate the mass surveillance data to its cloud platform. A spokesperson for the company said it respected “the privacy of our customers and we do not discuss our relationship without their consent, or have visibility into their workloads” stored in the cloud.

Asked about the winking mechanism, both Amazon and Google denied circumventing legally binding orders. “The idea that we would evade our legal obligations to the US government as a US company, or in any other country, is categorically wrong,” a Google spokesperson said.

During its offensive in the territory, where a UN commission of inquiry concluded that Israel has committed genocide, the Israeli military has relied heavily on cloud providers to store and analyse large volumes of data and intelligence information.

One such dataset was the vast collection of intercepted Palestinian calls that until August was stored on Microsoft’s cloud platform. According to intelligence sources, the Israeli military planned to move the data to Amazon Web Services (AWS) datacentres.

Amazon did not respond to the Guardian’s questions about whether it knew of Israel’s plan to migrate the mass surveillance data to its cloud platform. A spokesperson for the company said it respected “the privacy of our customers and we do not discuss our relationship without their consent, or have visibility into their workloads” stored in the cloud.

Asked about the winking mechanism, both Amazon and Google denied circumventing legally binding orders. “The idea that we would evade our legal obligations to the US government as a US company, or in any other country, is categorically wrong,” a Google spokesperson said.

With this threat in mind, Israeli officials inserted into the Nimbus deal a requirement for the companies to a send coded message – a “wink” – to its government, revealing the identity of the country they had been compelled to hand over Israeli data to, but were gagged from saying so.

Leaked documents from Israel’s finance ministry, which include a finalised version of the Nimbus agreement, suggest the secret code would take the form of payments – referred to as “special compensation” – made by the companies to the Israeli government.

According to the documents, the payments must be made “within 24 hours of the information being transferred” and correspond to the telephone dialing code of the foreign country, amounting to sums between 1,000 and 9,999 shekels.

Under the terms of the deal, the mechanism works like this:

If either Google or Amazon provides information to authorities in the US, where the dialing code is +1, and they are prevented from disclosing their cooperation, they must send the Israeli government 1,000 shekels.

If, for example, the companies receive a request for Israeli data from authorities in Italy, where the dialing code is +39, they must send 3,900 shekels.

If the companies conclude the terms of a gag order prevent them from even signaling which country has received the data, there is a backstop: the companies must pay 100,000 shekels ($30,000) to the Israeli government.

Legal experts, including several former US prosecutors, said the arrangement was highly unusual and carried risks for the companies as the coded messages could violate legal obligations in the US, where the companies are headquartered, to keep a subpoena secret.

“It seems awfully cute and something that if the US government or, more to the point, a court were to understand, I don’t think they would be particularly sympathetic,” a former US government lawyer said.

Several experts described the mechanism as a “clever” workaround that could comply with the letter of the law but not its spirit. “It’s kind of brilliant, but it’s risky,” said a former senior US security official.

Israeli officials appear to have acknowledged this, documents suggest. Their demands about how Google and Amazon respond to a US-issued order “might collide” with US law, they noted, and the companies would have to make a choice between “violating the contract or violating their legal obligations”.

Neither Google nor Amazon responded to the Guardian’s questions about whether they had used the secret code since the Nimbus contract came into effect.

“We have a rigorous global process for responding to lawful and binding orders for requests related to customer data,” Amazon’s spokesperson said. “We do not have any processes in place to circumvent our confidentiality obligations on lawfully binding orders.”

Google declined to comment on which of Israel’s stringent demands it had accepted in the completed Nimbus deal, but said it was “false” to “imply that we somehow were involved in illegal activity, which is absurd”.

A spokesperson for Israel’s finance ministry said: “The article’s insinuation that Israel compels companies to breach the law is baseless.”

‘No restrictions’
Israeli officials also feared a scenario in which its access to the cloud providers’ technology could be blocked or restricted.

In particular, officials worried that activists and rights groups could place pressure on Google and Amazon, or seek court orders in several European countries, to force them to terminate or limit their business with Israel if their technology were linked to human rights violations.

To counter the risks, Israel inserted controls into the Nimbus agreement which Google and Amazon appear to have accepted, according to government documents prepared after the deal was signed.

The documents state that the agreement prohibits the companies from revoking or restricting Israel’s access to their cloud platforms, either due to changes in company policy or because they find Israel’s use of their technology violates their terms of service.

Provided Israel does not infringe on copyright or resell the companies’ technology, “the government is permitted to make use of any service that is permitted by Israeli law”, according to a finance ministry analysis of the deal.

Both companies’ standard “acceptable use” policies state their cloud platforms should not be used to violate the legal rights of others, nor should they be used to engage in or encourage activities that cause “serious harm” to people.

However, according to an Israeli official familiar with the Nimbus project, there can be “no restrictions” on the kind of information moved into Google and Amazon’s cloud platforms, including military and intelligence data. The terms of the deal seen by the Guardian state that Israel is “entitled to migrate to the cloud or generate in the cloud any content data they wish”.

Israel inserted the provisions into the deal to avoid a situation in which the companies “decide that a certain customer is causing them damage, and therefore cease to sell them services”, one document noted.

The Intercept reported last year the Nimbus project was governed by an “amended” set of confidential policies, and cited a leaked internal report suggesting Google understood it would not be permitted to restrict the types of services used by Israel.

Last month, when Microsoft cut off Israeli access to some cloud and artificial intelligence services, it did so after confirming reporting by the Guardian and its partners, +972 and Local Call, that the military had stored a vast trove of intercepted Palestinian calls in the company’s Azure cloud platform.

Notifying the Israeli military of its decision, Microsoft said that using Azure in this way violated its terms of service and it was “not in the business of facilitating the mass surveillance of civilians”.

Under the terms of the Nimbus deal, Google and Amazon are prohibited from taking such action as it would “discriminate” against the Israeli government. Doing so would incur financial penalties for the companies, as well as legal action for breach of contract.

The Israeli finance ministry spokesperson said Google and Amazon are “bound by stringent contractual obligations that safeguard Israel’s vital interests”. They added: “These agreements are confidential and we will not legitimise the article’s claims by disclosing private commercial terms.”

theguardian.com EN 2025 Israel Google Amazon wink secret AWS legal
Three suspected developers of Meduza Stealer malware arrested in Russia https://therecord.media/meduza-stealer-malware-suspected-developers-arrested-russia
31/10/2025 15:02:03
QRCode
archive.org
thumbnail

| The Record from Recorded Future News
Daryna Antoniuk
October 31st, 2025

Russia's Interior Ministry posted a video of raids on suspected developers of the Meduza Stealer malware, which has been sold to cybercriminals since 2023.

Russian police said they detained three hackers suspected of developing and selling the Meduza Stealer malware in a rare crackdown on domestic cybercrime.

The suspects were arrested in Moscow and the surrounding region, Russia’s Interior Ministry spokesperson Irina Volk said in a statement on Thursday.

The three “young IT specialists” are suspected of developing, using and selling malicious software designed to steal login credentials, cryptocurrency wallet data and other sensitive information, she added.

Police said they seized computer equipment, phones, and bank cards during raids on the suspects’ homes. A video released by the Interior Ministry shows officers breaking down doors and storming into apartments. When asked by police why he had been detained, one suspect replied in Russian, “I don’t really understand.”

Officials said the suspects began distributing Meduza Stealer through hacker forums roughly two years ago. In one incident earlier this year, the group allegedly used the malware to steal data from an organization in Russia’s Astrakhan region.

Authorities said the group also created another type of malware designed to disable antivirus protection and build botnets for large-scale cyberattacks. The malicious program was not identified. The three face up to four years in prison if convicted.

Meduza Stealer first appeared in 2023, sold on Russian-language hacking forums and Telegram channels as a service for a fee. It has since been used in cyberattacks targeting both personal and financial data.

Ukrainian officials have previously linked the malware to attacks on domestic military and government entities. In one campaign last October, threat actors used a fake Telegram “technical support” bot to distribute the malware to users of Ukraine’s government mobilization app.

Researchers have also observed Meduza Stealer infections in Poland and inside Russia itself — including one 2023 campaign that used phishing emails impersonating an industrial automation company.

Russia’s law enforcement agencies rarely pursue cybercriminals operating inside the country. But researchers say that has begun to change.

According to a recent report by Recorded Future’s Insikt Group, Moscow’s stance has shifted “from passive tolerance to active management” of the hacking ecosystem — a strategy that includes selective arrests and public crackdowns intended to reinforce state authority while preserving useful talent.

Such moves mark a notable shift in a country long seen as a safe haven for financially motivated hackers. Researchers say many of these actors are now decentralizing their operations to evade both Western and domestic surveillance.

The Record is an editorially independent unit of Recorded Future.

therecord.media EN 2025 meduza developpers busted arrested Russia
CEO of spyware maker Memento Labs confirms one of its government customers was caught using its malware | TechCrunch https://techcrunch.com/2025/10/28/ceo-of-spyware-maker-memento-labs-confirms-one-of-its-government-customers-was-caught-using-its-malware/
29/10/2025 18:59:06
QRCode
archive.org
thumbnail

techcrunch.com/
Lorenzo Franceschi-Bicchierai
10:00 PM PDT · October 28, 2025

On Monday, researchers at cybersecurity giant Kaspersky published a report identifying a new spyware called Dante that they say targeted Windows victims in Russia and neighboring Belarus. The researchers said the Dante spyware is made by Memento Labs, a Milan-based surveillance tech maker that was formed in 2019 after a new owner acquired and took over early spyware maker Hacking Team.

Memento chief executive Paolo Lezzi confirmed to TechCrunch that the spyware caught by Kaspersky does indeed belong to Memento.

In a call, Lezzi blamed one of the company’s government customers for exposing Dante, saying the customer used an outdated version of the Windows spyware that will no longer be supported by Memento by the end of this year.

“Clearly they used an agent that was already dead,” Lezzi told TechCrunch, referring to an “agent” as the technical word for the spyware planted on the target’s computer.

“I thought [the government customer] didn’t even use it anymore,” said Lezzi.

Lezzi, who said he was not sure which of the company’s customers were caught, added that Memento had already requested that all of its customers stop using the Windows malware. Lezzi said the company had warned customers that Kaspersky had detected Dante spyware infections since December 2024. He added that Memento plans to send a message to all its customers on Wednesday asking them once again to stop using its Windows spyware.

He said that Memento currently only develops spyware for mobile platforms. The company also develops some zero-days — meaning security flaws in software unknown to the vendor that can be used to deliver spyware — though it mostly sources its exploits from outside developers, according to Lezzi.

When reached by TechCrunch, Kaspersky spokesperson Mai Al Akkad would not say which government Kaspersky believes is behind the espionage campaign, but that it was “someone who has been able to use Dante software.”

“The group stands out for its strong command of Russian and knowledge of local nuances, traits that Kaspersky observed in other campaigns linked to this [government-backed] threat. However, occasional errors suggest that the attackers were not native speakers,” Al Akkad told TechCrunch.

In its new report, Kaspersky said it found a hacking group using the Dante spyware that it refers to as “ForumTroll,” describing the targeting of people with invites to Russian politics and economics forum Primakov Readings. Kaspersky said the hackers targeted a broad range of industries in Russia, including media outlets, universities, and government organizations.

Kaspersky’s discovery of Dante came after the Russian cybersecurity firm said it detected a “wave” of cyberattacks with phishing links that were exploiting a zero-day in the Chrome browser. Lezzi said that the Chrome zero-day was not developed by Memento.

In its report, Kaspersky researchers concluded that Memento “kept improving” the spyware originally developed by Hacking Team until 2022, when the spyware was “replaced by Dante.”

Lezzi conceded that it is possible that some “aspects” or “behaviors” of Memento’s Windows spyware were left over from spyware developed by Hacking Team.

A telltale sign that the spyware caught by Kaspersky belonged to Memento was that the developers allegedly left the word “DANTEMARKER” in the spyware’s code, a clear reference to the name Dante, which Memento had previously and publicly disclosed at a surveillance tech conference, per Kaspersky.

Much like Memento’s Dante spyware, some versions of Hacking Team’s spyware, codenamed Remote Control System, were named after historical Italian figures, such as Leonardo da Vinci and Galileo Galilei.

A history of hacks
In 2019, Lezzi purchased Hacking Team and rebranded it to Memento Labs. According to Lezzi, he paid only one euro for the company and the plan was to start over.

“We want to change absolutely everything,” the Memento owner told Motherboard after the acquisition in 2019. “We’re starting from scratch.”

A year later, Hacking Team’s CEO and founder David Vincenzetti announced that Hacking Team was “dead.”

When he acquired Hacking Team, Lezzi told TechCrunch that the company only had three government customers remaining, a far cry from the more than 40 government customers that Hacking Team had in 2015. That same year, a hacktivist called Phineas Fisher broke into the startup’s servers and siphoned off some 400 gigabytes of internal emails, contracts, documents, and the source code for its spyware.

Before the hack, Hacking Team’s customers in Ethiopia, Morocco, and the United Arab Emirates were caught targeting journalists, critics, and dissidents using the company’s spyware. Once Phineas Fisher published the company’s internal data online, journalists revealed that a Mexican regional government used Hacking Team’s spyware to target local politicians and that Hacking Team had sold to countries with human rights abuses, including Bangladesh, Saudi Arabia, and Sudan, among others.

Lezzi declined to tell TechCrunch how many customers Memento currently has but implied it was fewer than 100 customers. He also said that there are only two current Memento employees left from Hacking Team’s former staff.

The discovery of Memento’s spyware shows that this type of surveillance technology keeps proliferating, according to John Scott-Railton, a senior researcher who has investigated spyware abuses for a decade at the University of Toronto’s Citizen Lab.

It also shows that a controversial company can die because of a spectacular hack and several scandals, and yet a new company with brand-new spyware can still come out of its ashes.

“It tells us that we need to keep up the fear of consequences,” Scott-Railton told TechCrunch. “It says a lot that echoes of the most radioactive, embarrassed and hacked brand are still around.”

techcrunch.com EN 2025 Dante spyware HackingTeam Memento
Equalize: sotto la lente anche le chat tra l'ex di Fiera Milano Pazzali e un generale della Finanza https://www.ilfattoquotidiano.it/2025/10/19/equalize-accessi-database-viminale-notizie/8165372/
29/10/2025 18:13:05
QRCode
archive.org
thumbnail

Gli accertamenti della Procura sul generale della Guardia di Finanza Cosimo Di Gesù per possibili accessi abusivi al database del Viminale richiesti da Enrico Pazzali

Se non amici fraterni, certo buoni conoscenti e probabilmente estimatori l’uno dell’altro. Fino a quando il primo, l’ex presidente della Fondazione Fiera Enrico Pazzali, viene coinvolto nell’inchiesta milanese sui dossieraggi illegali della società Equalize, e il secondo, il generale Cosimo Di Gesù, comandante dell’Accademia della Guardia di Finanza, suo malgrado, finisce nei verbali di alcuni indagati come persona vicina a Pazzali. Ora, però, la recente analisi della copia forense dei cellulari di Pazzali solleva un’ipotesi investigativa degli inquirenti, ovvero che lo stesso Di Gesù possa avere fatto per conto dell’amico Pazzali accessi abusivi al database del Viminale, spulciando alcuni Sdi o dati riservati di aziende segnalate dall’ex manager pubblico nel marzo 2020 quando prendeva piede il progetto della costruzione dell’ospedale Covid in Fiera. Allo stato Di Gesù non risulta indagato e le verifiche sono in corso. A stimolare gli inquirenti anche una sentenza delle Sezioni unite della Corte di Cassazione per la quale il reato di accesso abusivo a un sistema informatico si applica anche a quel pubblico ufficiale che pur avendone facoltà lo consulta “per ragioni ontologicamente estranee rispetto a quelle per le quali la facoltà di accesso gli è stata attribuita”. Sempre nelle chat di Pazzali emerge che anche il presidente del Tribunale di Milano Fabio Roia nel 2020 fece un controllo su un manager di Fiera per conto di Pazzali. Verifica che secondo Roia, allo stato non indagato, rientra però in un formale e corretto rapporto giudiziario e di tutela visto che una ramo di Fiera Milano fu messo in amministrazione giudiziaria con un commissariamento concluso nel 2017.
Le chat tra Pazzali e Di Gesù risalgono a metà marzo del 2020. Il 21 marzo così Pazzali chiede informazioni “reputazionali” su sette aziende che, dirà Pazzali ai pm, dovevano lavorare per l’allestimento dell’ospedale. Di Gesù così risponde: “Lunedì mattina ti faccio sapere”. Poi scrive: “Anche noi siamo a scartamento ridotto”. Quindi un paio di giorni dopo sempre il comandante della Guardia di Finanza invia tutti i dati recuperati all’allora presidente della Fondazione Fiera elencando le varie criticità azienda per azienda: “Nel 2019 segnalata all’Anac perché ha fatto cartello in un appalto (…). Ha dato incarichi a dipendenti pubblici senza autorizzazione (…). Rapporti con Cosa nostra (…). Qualche piccola irregolarità fiscale (…). Ha utilizzato fatture inesistenti”. Insomma, secondo la Procura di Milano, quei dati erano accessibili solo attraverso terminali riservati. Di Gesù poi scrive: “Questa la situazione un po’ più di nuovo. Come ti dicevo non ho fatto la grossa”.

Gli inquirenti interpretano il termine “la grossa” come un accesso globale alla posizione Sdi e dunque, non avendola fatta, l’ipotesi è che il vertice della Finanza abbia fatto solo un accesso limitato. Ora, poi, qualche giorno prima di questa catena di chat, e cioè il 15 marzo, Di Gesù stimola Pazzali a chiedere a Fontana che domandi a sua volta al generale Giuseppe Zaffarana (all’epoca superiore di Di Gesù) di fargli una consulenza per il costruendo ospedale Covid: “Comunque Fontana potrebbe chiedere al generale Zaffarana la nostra collaborazione. Mia e dei tre miei ragazzi di Anac che, tienilo solo per te, vogliono rientrare perché lì ormai”. Quindi prospetta a Pazzali come entrare: “Magari con una convenzione al volo e solo per questa emergenza”. Quindi si raccomanda: “Ovviamente io e te non ci siamo mai sentiti. Se chiama il capo fammelo sapere”. Pazzali il 17 marzo esegue e avverte il governatore Attilio Fontana che subito si attiva, inviando al presidente di Fiera la risposta della segreteria di Zaffarana. Risposta che Pazzali inoltra a Di Gesù: “Il generale Zaffarana è impegnato in una call e subito dopo ne avrà un’altra. Potrebbe liberarsi nel pomeriggio. L’assistente chiede per agevolare: ‘Oggetto della chiamata’”. Al ché Di Gesù specifica l’oggetto a Pazzali: “Richiesta collaborazione per installazione ospedale in Fiera”. Tre giorni dopo Pazzali chiede e ottiene da Di Gesù i controlli sulle sette aziende.

ilfattoquotidiano.it IT spionaggio equalize viminale italia
Cybersecurity firm F5 anticipates revenue hit after attack https://www.axios.com/2025/10/27/f5-cyberattack-earnings-revenue-hit
29/10/2025 18:06:43
QRCode
archive.org

www.axios.com
Sam Sabin

F5 warned shareholders Monday that it expects its revenue growth to slow over the next two quarters as many of its customers pause or slow down their buying decisions while responding to a recent major cyberattack.

Why it matters: The comments are the first from F5 about how much the nation-state attack — which was disclosed about two weeks ago — is likely going to impact the company's bottom line.

Driving the news: F5 CEO François Locoh-Donou said during the company's fourth-quarter earnings call that the company is increasing its internal cybersecurity investments as it responds to the highly sophisticated hack.

"We are disappointed that this has happened and very aware as a team and as a company of the burden that this has placed in our customers who have had to work long hours to upgrade" affected products, Locoh-Donou told investors on the call.
Catch up quick: Bloomberg reported the attackers are likely linked to the Chinese government and have been lurking in the company's systems since 2023.

Zoom in: So far, F5 has identified and notified an unspecified number of customers who have had their data stolen as a result of the hacks, Locoh-Donou said.

The company has also worked with thousands of customers in recent weeks to deploy security fixes with minimal operational disruptions, he added.
F5 will enhance its bug bounty program and is working with outside firms to review the security of its code for vulnerabilities, he said.
The company has also transitioned Michael Montoya, the company's security chief, to a new role as its chief technology operations officer to help further embed security into every aspect of the company's operations.
Yes, but: Locoh-Donou told shareholders that most affected customers have said their stolen data was not sensitive and "they're not concerned about it."

Threat level: Locoh-Donou said the company is "acutely aware" that nation-state hackers have been increasingly targeting networking security firms like F5 in recent years.

"We are committed to learning from this incident, sharing our insights with our peers and driving collaborative innovation to collectively strengthen the protection of critical infrastructure across the industry," he said.

axios.com EN f5 attack revenue
India plans repatriation of 500 nationals who fled Myanmar scam center https://www.reuters.com/world/asia-pacific/india-plans-repatriation-500-nationals-who-fled-myanmar-scam-centre-thai-prime-2025-10-29/
29/10/2025 18:02:40
QRCode
archive.org

By Reuters
October 29, 2025

BANGKOK, Oct 29 (Reuters) - India plans to send an airplane to repatriate some 500 of its nationals who fled from a military raid on a scam centre in Myanmar into Thailand, Thai Prime Minister Anutin Charnvirakul said on Wednesday.
Starting last week, the Myanmar military has conducted a series of military operations against the KK Park cybercrime compound, driving more than 1,500 people from 28 countries into the Thai border town of Mae Sot, according to local authorities.
The border areas between Thailand, Myanmar, Laos and Cambodia have become hubs for online fraud since the COVID-19 pandemic, and the United Nations says billions of dollars have been earned from trafficking hundreds of thousands of people forced to work in the compounds.
KK Park is notorious for its involvement in transnational cyberscams. The sprawling compound and others nearby are run primarily by  Chinese criminal gangs  and guarded by local militia groups  aligned to Myanmar's military.
Anutin said the Indian ambassador would meet the head of immigration to discuss speeding up the legal verification process for the 500 Indian nationals ahead of their flight back to India.
"They don't want this to burden us," Anutin said. "They will send a plane to pick these victims up... the plane will land directly in Mae Sot," he said.
Indian foreign ministry spokesperson Randhir Jaiswal said India's embassy was working with Thailand "to verify their nationality and to repatriate them, after necessary legal formalities are completed in Thailand."
Earlier this year India also sent a plane to repatriate its nationals after thousands were freed from cyberscam centres along the Thai-Myanmar border following a regional crackdown.

reuters.com EN 2025 Myanmar scam-center Thailand
TEE.fail: Breaking Trusted Execution Environments via DDR5 Memory Bus Interposition https://tee.fail/
29/10/2025 17:25:45
QRCode
archive.org
thumbnail

Breaking Trusted Execution Environments via DDR5 Memory Bus Interposition

TEE.fail:
Breaking Trusted Execution Environments via DDR5 Memory Bus Interposition

With the increasing popularity of remote computation like cloud computing, users are increasingly losing control over their data, uploading it to remote servers that they do not control. Trusted Execution Environments (TEEs) aim to reduce this trust, offering users promises such as privacy and integrity of their data as well as correctness of computation. With the introduction of TEEs and Confidential Computing features to server hardware offered by Intel, AMD, and Nvidia, modern TEE implementations aim to provide hardware-backed integrity and confidentiality to entire virtual machines or GPUs, even when attackers have full control over the system's software, for example via root or hypervisor access. Over the past few years, TEEs have been used to execute confidential cryptocurrency transactions, train proprietary AI models, protect end-to-end encrypted chats, and more.

In this work, we show that the security guarantees of modern TEE offerings by Intel and AMD can be broken cheaply and easily, by building a memory interposition device that allows attackers to physically inspect all memory traffic inside a DDR5 server. Making this worse, despite the increased complexity and speed of DDR5 memory, we show how such an interposition device can be built cheaply and easily, using only off the shelf electronic equipment. This allows us for the first time to extract cryptographic keys from Intel TDX and AMD SEV-SNP with Ciphertext Hiding, including in some cases secret attestation keys from fully updated machines in trusted status. Beyond breaking CPU-based TEEs, we also show how extracted attestation keys can be used to compromise Nvidia's GPU Confidential Computing, allowing attackers to run AI workloads without any TEE protections. Finally, we examine the resilience of existing deployments to TEE compromises, showing how extracted attestation keys can potentially be used by attackers to extract millions of dollars of profit from various cryptocurrency and cloud compute services.

tee.fail/ EN 2025 research DDR5 Memory Bus Interposition Truste-Execution Intel
Sweden’s power grid operator confirms data breach claimed by ransomware gang https://therecord.media/sweden-power-grid-operator-data?
29/10/2025 17:16:47
QRCode
archive.org
thumbnail

| The Record from Recorded Future News
Daryna Antoniuk
October 27th, 2025

The utility responsible for operating Sweden's power grid is investigating a data breach after a ransomware group threatened to leak hundreds of gigabytes of purportedly stolen internal data.

Sweden’s power grid operator is investigating a data breach after a ransomware group threatened to leak hundreds of gigabytes of purportedly stolen internal data.

State-owned Svenska kraftnät, which operates the country’s electricity transmission system, said the incident affected a “limited external file transfer solution” and did not disrupt Sweden’s power supply.

“We take this breach very seriously and have taken immediate action,” said Chief Information Security Officer Cem Göcgören in a statement. “We understand that this may cause concern, but the electricity supply has not been affected.”

The ransomware gang Everest claimed responsibility for the attack on its leak site over the weekend, alleging it had exfiltrated about 280 gigabytes of data and saying it would publish it unless the agency complied with its demands.

The same group has previously claimed attacks on Dublin Airport, Air Arabia, and U.S. aerospace supplier Collins Aerospace — incidents that disrupted flight operations across several European cities in September. The group’s claims could not be independently verified.

Svenska kraftnät said it is working closely with the police and national cybersecurity authorities to determine the extent of the breach and what data may have been exposed. The utility has not attributed the attack to any specific threat actor.

“Our current assessment is that mission-critical systems have not been affected,” Göcgören said. “At this time, we are not commenting on perpetrators or motives until we have confirmed information.”

therecord.media EN 2025 Sweden critical-infrastructure grid operator data-breach ransomware
Infostealers Disguised as Free Video Game Cheats https://vxdb.sh/info-stealing-malware-disguised-as-video-game-cheats/
29/10/2025 17:09:07
QRCode
archive.org
thumbnail

vxdb.sh Journalist | Cybercrime News |

It is human nature to be competitive, to try your best when competing against others. It is no different when it comes to video games. Major E-Sports tournament prize pools regularly reach the multi millions. Last year the CS2 PGL Major hosted in Copenhagen had a prize pool of $1.25M.

Outside of the Esports realm cheating is still very prevalent, from games like Fortnite, Apex Legends, CS2, even non competitive games like Minecraft or Roblox have cheating issues. Most if not all the top tier cheats aren't free. Instead they rely on a subscription-based monetization model, where users pay for access to private builds or regular updates designed to evade detection from the games AntiCheat. Cheat developers also utilize what are called resellers who advertise, and sell the cheat on behalf of the developers in exchange for a cut of the profits.

Most players don't want to or can't pay for premium/paid cheats so they hunt for free alternatives or cracked versions of paid cheats on sketchy forums, Youtube, or even Github. While some free cheats do exist, they usually don't have many features, are slower to update, and quickly detected by the AntiCheat, meaning they’ll get you banned fast, sometimes instantly. A significant portion of these “free” alternatives present security risks. In many cases, the download contains typically info stealers, Discord token grabbers, or RATs. In other instances, the advertised download is a working cheat but has malware executed in the background without the user knowing.

How threat actors spread their malware

Cybercriminals weaponize YouTube by posting videos that advertise free cheats, executors, or “cracked” cheats and then use the video description or pinned comments to funnel viewers to a download link. Many videos use the service Linkvertise which makes users go through a handful of ads and suspicious downloads to reach the final download link where the file is hosted on a site like MediaFire or Meganz. These videos are being posted on stolen or fake youtube accounts created and advertised by what are called Traffer Teams.

What are Traffers Teams?
"Traffer teams manage the entire operation, recruiting affiliates (traffers), handling monetization, and managing/crypting stealer builds. Traffer gangs recruit affiliates who spread the malware, often driving app downloads from YouTube, TikTok, and other platforms. Traffers are commonly paid a percentage of these stolen logs or receive a direct payment for installs. Traffer gangs will typically monetize these stolen logs by selling them directly to buyers or cashing out themselves." As per ⁨Benjamin Brundage CEO of Synthient.

In a recent upload by researcher Eric Parker, a YouTube channel was discovered repeatedly uploading videos advertising so-called “Valorant Skins Changer,” “Roblox Executor,” and similar “free hacks" all with oddly similar thumbnails. Each video’s description contained a download link that redirected users to a Google Sites page at "sites[.]google[.]com/view/lyteam".

This site is operated by a Traffer Team known as LyTeam, which promotes and distributes info-stealing malware under the guise of free game cheats.

Later in the same video, Eric Parker downloaded and analyzed a .dll file hosted on the LyTeam site. When uploaded to VirusTotal, the sample was identified to be a strain of the Lumma Stealer Malware, a well-known info-stealing malware family known for harvesting browser credentials and crypto wallets.

How to stay safe

Don't click random links and run files you find out on the internet, if needed use and AntiVirus software to scan files on your computer. Run sketchy files you find either in a virtual machine or sandbox, better yet use VirusTotal.

Staying safe doesn't mean you need to be paranoid 24/7, it's about awareness.

Thank you for reading,
vxdb :)

vxdb.sh EN 2025 Infostealers Disguised VideoGame Cheats
“ChatGPT Tainted Memories:” LayerX Discovers The First Vulnerability in OpenAI Atlas Browser, Allowing Injection of Malicious Instructions into ChatGPT https://layerxsecurity.com/blog/layerx-identifies-vulnerability-in-new-chatgpt-atlas-browser/
27/10/2025 14:06:17
QRCode
archive.org
  • LayerX Or Eshed
    Published - October 27, 2025

 LayerX discovered the first vulnerability impacting OpenAI’s new ChatGPT Atlas browser, allowing bad actors to inject malicious instructions into ChatGPT’s “memory” and execute remote code. This exploit can allow attackers to infect systems with malicious code, grant themselves access privileges, or deploy malware.

The vulnerability affects ChatGPT users on any browser, but it is particularly dangerous for users of OpenAI’s new agentic browser: ChatGPT Atlas. LayerX has found that Atlas currently does not include any meaningful anti-phishing protections, meaning that users of this browser are up to 90% more vulnerable to phishing attacks than users of traditional browsers like Chrome or Edge.

The exploit has been reported to OpenAI under Responsible Disclosure procedures, and a summary is provided below, while withholding technical information that will allow attackers to replicate this attack.

TL/DR: How The Exploit Works:
LayerX discovered how attackers can use a Cross-Site Request Forgery (CSRF) request to “piggyback” on the victim’s ChatGPT access credentials, in order to inject malicious instructions into ChatGPT’s memory. Then, when the user attempts to use ChatGPT for legitimate purposes, the tainted memories will be invoked, and can execute remote code that will allow the attacker to gain control of the user account, their browser, code they are writing, or systems they have access to.

While this vulnerability affects ChatGPT users on any browser, it is particularly dangerous for users of ChatGPT Atlas browser, since they are, by default, logged-in to ChatGPT, and since LayerX testing indicates that the Atlas browser is up to 90% more exposed than Chrome and Edge to phishing attacks.

A Step-by-Step Explanation:
Initially, the user is logged-in to ChatGPT, and holds an authentication cookie or token in their browser.
The user clicks a malicious link, leading them to a compromised web page.
The malicious page invokes a Cross-Site Request Forgery (CSRF) request to take advantage of the user’s pre-existing authentication into ChatGPT
The CSRF exploit injects hidden instructions into ChatGPT’s memory, without the user’s knowledge, thereby “tainting” the core LLM memory.
The next time the user queries ChatGPT, the tainted memories are invokes, allowing deployment of malicious code that can give attackers control over systems or code.

Using Cross-Site Request Forgery (CSRF) To Access LLMs:
A cross-site request forgery (CSRF) attack is when an attacker tricks a user’s browser into sending an unintended, state-changing request to a website where the user is already authenticated, causing the site to perform actions as that user without their consent.

The attack occurs when a victim is logged in to a target site, which has session cookies stored in the browser. The victim visits or is redirected into a malicious page that issues a crafted request (via a form, image tag, link, or script) to the target site. The browser automatically includes the victim’s credentials (cookies, auth headers), so the target site processes the request as if the user initiated it.

In most cases, the impact of a CSRF attack is aimed at activity such as changing account email/password, initiating funds transfers, or making purchases under the user’s session can occur.

However, when it comes to AI systems, using a CSRF attack, attackers can gain access to AI systems that the user is logged-in to, query it, or inject instructions into it.

Infecting ChatGPT’s Core “Memory”
ChatGPT’s “Memory” allows ChatGPT to remember useful details about users’ queries, chat and activities, such as preferences, constraints, projects, style notes, etc., and reuse them across future chats so that users don’t have to repeat themselves. In effect, they act like the LLM’s background memory or subconscious.

Once attackers have access to the user’s ChatGPT via the CSRF request, they can use it to inject hidden instructions to ChatGPT, that will affect future chats.

Like a person’s subconscious, once the right instructions are stored inside ChatGP’s Memory, ChatGPT will be compelled to execute these instructions, effectively becoming a malicious co-conspiritor.

Moreover, once an account’s Memory has been infected, this infection is persistent across all devices that the account is used on – across home and work computers, and across different browsers – whether a user is using them on Chrome, Atlas, or any other browser. This makes the attack extremely “sticky,” and is especially dangerous for users who use the same account for both work and personal purposes.

ChatGPT Atlas Users Up to 90% More Exposed Than Other Browsers
While this vulnerability can be used against ChatGPT users on any browser, users of OpenAI’s ChatGPT browser are particularly vulnerable. This is for two reasons:

When you are using Atlas, you are, by default, logged-in to ChatGPT. This means that ChatGPT credentials are always stored in the browser, where they can be targeted by malicious CSRF requests.
ChatGPT Atlas is particularly bad at stopping phishing attacks. This means that users of Atlas are more exposed than users of other browsers.
LayerX tested Atlas against over 100 in-the-wild web vulnerabilities and phishing attacks. LayerX previously conducted the same test against other AI browsers such as Comet, Dia, and Genspark. The results were uninspiring, to say the least:

In the previous tests, whereas traditional browsers such as Edge and Chrome were able to stop about 50% of phishing attacks using their out-of-the-box protections, Comet and Genspark stopped only 7% (Dia generated results similar to those of Chrome).

Running the same test against Atlas showed even more stark results:

Out of 103 in-the-wild attacks that LayerX tested, ChatGPT Atlas allowed 97 to go through, a whopping 94.2% failure rate.

Compared to Edge (which stopped 53% of attacks in LayerX’s test) and Chrome (which stopped 47% of attacks), ChatGPT Atlas was able to successfully stop only 5.8% of malicious web pages, meaning that users of Atlas were nearly 90% more vulnerable to phishing attacks, compared to users of other browsers.

The implication is that not only users of ChatGPT Atlas are susceptible to malicious attack vectors that can lead to injection of malicious instructions into their ChatGPT accounts, but since Atlas does not include any meaningful anti-phishing protection, Atlas users are at a greater risk of exposure.

Proof of Concept: Injecting Malicious Code To ‘Vibe’ Coding
Below is an illustration of an attack vector exploiting this vulnerability, on an Atlas browser user who is vibe coding:

“Vibe coding” is a collaborative style where the developer treats the AI as a creative partner rather than a line-by-line executor. Instead of prescribing exact syntax, the developer shares the project’s intent and feel (e.g., architecture goals, tone, audience, aesthetic preferences, etc.) and other non-functional requirements.

ChatGPT then uses this holistic brief to produce code that works and matches the requested style, narrowing the gap between high-level ideas and low-level implementation. The developer’s role shifts from hand-coding to steering and refining the AI’s interpretation.
While ChatGPT offers some defenses against malicious instructions, effectiveness can vary with the attack’s sophistication and how the unwanted behavior entered Memory.

In some cases, the user may see a mild warning; in others, the attempt might be blocked. However, if cleverly masked, the code could evade detection altogether. For example, this is the subtle warning that this script received. At most, it’s a sidenote that is easy to miss within the blob of text:

layerxsecurity.com EN 2025 vulnerability OpenAI Atlas Browser Cross-Site-Request-Forgery
How we linked ForumTroll APT to Dante spyware by Memento Labs https://securelist.com/forumtroll-apt-hacking-team-dante-spyware/117851/
27/10/2025 12:16:30
QRCode
archive.org
thumbnail

Kaspersky researchers discovered previously unidentified commercial Dante spyware developed by Memento Labs (formerly Hacking Team) and linked it to the ForumTroll APT attacks.

securelist.com EN 2025 Dante Targeted Team ForumTroll Cyber attacks Spyware Hacking espionage APT HackingTeam
page 2 / 244
4876 links
Shaarli - Le gestionnaire de marque-pages personnel, minimaliste, et sans base de données par la communauté Shaarli - Theme by kalvn