| OAIC oaic.gov.au
Published: 09 October 2025
The Federal Court ordered that Australian Clinical Labs (ACL) pay $5.8 million in civil penalties in relation to a data breach by its Medlab Pathology business in February 2022.
The Federal Court yesterday ordered that Australian Clinical Labs (ACL) pay $5.8 million in civil penalties in relation to a data breach by its Medlab Pathology business in February 2022. The breach resulted in the unauthorised access and exfiltration of the personal information of over 223,000 individuals.
These are the first civil penalties ordered under the Privacy Act 1988 (Cth).
Australian Information Commissioner Elizabeth Tydd welcomed the Court's orders, stating that they “provide an important reminder to all APP entities that they must remain vigilant in securing and responsibly managing the personal information they hold.
“These orders also represent a notable deterrent and signal to organisations to ensure they undertake reasonable and expeditious investigations of potential data breaches and report them to the Office of the Australian Information Commissioner appropriately.
“Entities holding sensitive data need to be responsive to the heightened requirements for securing this information as future action will be subject to higher penalty provisions now available under the Privacy Act".
The Federal Court has made orders imposing the following penalties:
a penalty of $4.2 million for ACL's failure to take reasonable steps to protect the personal information held by ACL on Medlab Pathology’s IT systems under Australian Privacy Principle 11.1, which amounted to more than to 223,000 contraventions of s 13G(a) of the Privacy Act;
a penalty of $800,000 for ACL’s failure to carry out a reasonable and expeditious assessment of whether an eligible data breach had occurred following the cyberattack on the Medlab Pathology IT systems in February 2022, in contravention of s 26WH(2) of the Privacy Act; and
a penalty of $800,000 for ACL’s failures to prepare and give to the Australian Information Commissioner, as soon as practicable, a statement concerning the eligible data breach, in contravention of s 26WK(2) of the Privacy Act.
Justice Halley said in his judgment that the contraventions were “extensive and significant.” His Honour also found that:
‘ACL’s most senior management were involved in the decision making around the integration of Medlab’s IT Systems into ACL’s core environment and ACL’s response to the Medlab Cyberattack, including whether it amounted to an eligible data breach.’
‘ACL’s contraventions … resulted from its failure to act with sufficient care and diligence in managing the risk of a cyberattack on the Medlab IT Systems’
‘ACL’s contravening conduct … had at least the potential to cause significant harm to individuals whose information had been exfiltrated, including financial harm, distress or psychological harms, and material inconvenience.’
‘the contraventions had the potential to have a broader impact on public trust in entities holding private and sensitive information of individuals.’
His Honour identified several factors that reduced the penalty that was imposed. These included that that ‘ACL ... cooperated with the investigation undertaken by the office of the Commissioner', and that it had commenced ‘a program of works to uplift the company’s cybersecurity capabilities’ which ‘satisfied [his Honour] that these actions demonstrate that ACL has sought, and continues to seek, to take meaningful steps to develop a satisfactory culture of compliance.’ His Honour also took into account the apologies made by ACL and the fact that it had admitted liability.
ACL admitted the contraventions, consented to orders being made and the parties made joint submissions on liability and penalty.
The penalties were imposed under the penalty regime which was in force at the time of the contraventions, with a maximum penalty of $2.22 million per contravention. The new penalty regime that came into force on 13 December 2022 allows the Court to impose much higher penalties for serious interferences with privacy. Under the new regime, maximum penalties per contravention can be as much as $50 million, three times the benefit derived from the conduct or up to the 30% of a business’s annual turnover per contravention.
Privacy Commissioner Carly Kind said, “This outcome represents an important turning point in the enforcement of privacy law in Australia. For the first time, a regulated entity has been subject to civil penalties under the Privacy Act, in line with the expectations of the public and the powers given to the OAIC by parliament. This should serve as a vivid reminder to entities, particularly providers operating within Australia’s healthcare system, that there will be consequences of serious failures to protect the privacy of those individuals whose healthcare and information they hold.”
| The Verge theverge.com
by
Robert Hart
Oct 30, 2025, 4:53 PM GMT+1
Huge cyber breaches are on the horizon thanks to AI-powered web browsers like ChatGPT Atlas and Comet, experts warn.
Web browsers are getting awfully chatty. They got even chattier last week after OpenAI and Microsoft kicked the AI browser race into high gear with ChatGPT Atlas and a “Copilot Mode” for Edge. They can answer questions, summarize pages, and even take actions on your behalf. The experience is far from seamless yet, but it hints at a more convenient, hands-off future where your browser does lots of your thinking for you. That future could also be a minefield of new vulnerabilities and data leaks, cybersecurity experts warn. The signs are already here, and researchers tell The Verge the chaos is only just getting started.
Atlas and Copilot Mode are part of a broader land grab to control the gateway to the internet and to bake AI directly into the browser itself. That push is transforming what were once standalone chatbots on separate pages or apps into the very platform you use to navigate the web. They’re not alone. Established players are also in the race, such as Google, which is integrating its Gemini AI model into Chrome; Opera, which launched Neon; and The Browser Company, with Dia. Startups are also keen to stake a claim, such as AI startup Perplexity — best known for its AI-powered search engine, which made its AI-powered browser Comet freely available to everyone in early October — and Sweden’s Strawberry, which is still in beta and actively going after “disappointed Atlas users.”
In the past few weeks alone, researchers have uncovered vulnerabilities in Atlas allowing attackers to take advantage of ChatGPT’s “memory” to inject malicious code, grant themselves access privileges, or deploy malware. Flaws discovered in Comet could allow attackers to hijack the browser’s AI with hidden instructions. Perplexity, through a blog, and OpenAI’s chief information security officer, Dane Stuckey, acknowledged prompt injections as a big threat last week, though both described them as a “frontier” problem that has no firm solution.
“Despite some heavy guardrails being in place, there is a vast attack surface,” says Hamed Haddadi, professor of human-centered systems at Imperial College London and chief scientist at web browser company Brave. And what we’re seeing is just the tip of the iceberg.
With AI browsers, the threats are numerous. Foremost, they know far more about you and are “much more powerful than traditional browsers,” says Yash Vekaria, a computer science researcher at UC Davis. Even more than standard browsers, Vekaria says “there is an imminent risk from being tracked and profiled by the browser itself.” AI “memory” functions are designed to learn from everything a user does or shares, from browsing to emails to searches, as well as conversations with the built-in AI assistant. This means you’re probably sharing far more than you realise and the browser remembers it all. The result is “a more invasive profile than ever before,” Vekaria says. Hackers would quite like to get hold of that information, especially if coupled with stored credit card details and login credentials often found on browsers.
Another threat is inherent to the rollout of any new technology. No matter how careful developers are, there will inevitably be weaknesses hackers can exploit. This could range from bugs and coding errors that accidentally reveal sensitive data to major security flaws that could let hackers gain access to your system. “It’s early days, so expect risky vulnerabilities to emerge,” says Lukasz Olejnik, an independent cybersecurity researcher and visiting senior research fellow at King’s College London. He points to the “early Office macro abuses, malicious browser extensions, and mobiles prior to [the] introduction of permissions” as examples of previous security issues linked to the rollout of new technologies. “Here we go again.”
Some vulnerabilities are never found — sometimes leading to devastating zero-day attacks, named as there are zero days to fix the flaw — but thorough testing can slash the number of potential problems. With AI browsers, “the biggest immediate threat is the market rush,” Haddadi says. “These agentic browsers have not been thoroughly tested and validated.”
But AI browsers’ defining feature, AI, is where the worst threats are brewing. The biggest challenge comes with AI agents that act on behalf of the user. Like humans, they’re capable of visiting suspect websites, clicking on dodgy links, and inputting sensitive information into places sensitive information shouldn’t go, but unlike some humans, they lack the learned common sense that helps keep us safe online. Agents can also be misled, even hijacked, for nefarious purposes. All it takes is the right instructions. So-called prompt injections can range from glaringly obvious to subtle, effectively hidden in plain sight in things like images, screenshots, form fields, emails and attachments, and even something as simple as white text on a white background.
Worse yet, these attacks can be very difficult to anticipate and defend against. Automation means bad actors can try and try again until the agent does what they want, says Haddadi. “Interaction with agents allows endless ‘try and error’ configurations and explorations of methods to insert malicious prompts and commands.” There are simply far more chances for a hacker to break through when interacting with an agent, opening up a huge space for potential attacks. Shujun Li, a professor of cybersecurity at the University of Kent, says “zero-day vulnerabilities are exponentially increasing” as a result. Even worse: Li says as the flaw starts with an agent, detection will also be delayed, meaning potentially bigger breaches.
It’s not hard to imagine what might be in store. Olejnik sees scenarios where attackers use hidden instructions to get AI browsers to send out personal data or steal purchased goods by changing the saved address on a shopping site. To make things worse, Vekaria warns it’s “relatively easy to pull off attacks” given the current state of AI browsers, even with safeguards in place. “Browser vendors have a lot of work to do in order to make them more safe, secure, and private for the end users,” he says.
For some threats, experts say the only real way to keep safe using AI browsers is to simply avoid the marquee features entirely. Li suggests people save AI for “only when they absolutely need it” and know what they’re doing. Browsers should “operate in an AI-free mode by default,” he says. If you must use the AI agent features, Vekaria advises a degree of hand-holding. When setting a task, give the agent verified websites you know to be safe rather than letting it figure them out on its own. “It can end up suggesting and using a scam site,” he warns.
blog.mozilla.org – Mozilla Add-ons Community Blog
Alan Byrne October 23, 2025
As of November 3rd 2025, all new Firefox extensions will be required to specify if they collect or transmit personal data in their manifest.json file using the browser_specific_settings.gecko.data_collection_permissions key. This will apply to new extensions only, and not new versions of existing extensions. Extensions that do not collect or transmit any personal data are required to specify this by setting the none required data collection permission in this property.
This information will then be displayed to the user when they start to install the extension, alongside any permissions it requests.
This information will also be displayed on the addons.mozilla.org page, if it is publicly listed, and in the Permissions and Data section of the Firefox about:addons page for that extension. If an extension supports versions of Firefox prior to 140 for Desktop, or 142 for Android, then the developer will need to continue to provide the user with a clear way to control the add-on’s data collection and transmission immediately after installation of the add-on.
Once any extension starts using these data_collection_permissions keys in a new version, it will need to continue using them for all subsequent versions. Extensions that do not have this property set correctly, and are required to use it, will be prevented from being submitted to addons.mozilla.org for signing with a message explaining why.
In the first half of 2026, Mozilla will require all extensions to adopt this framework. But don’t worry, we’ll give plenty of notice via the add-ons blog. We’re also developing some new features to ease this transition for both extension developers and users, which we will announce here.
techcrunch.com
Jagmeet Singh
6:30 PM PDT · October 28, 2025
A security researcher found the Indian automotive giant exposing personal information of its customers, internal company reports, and dealers’ data. Tata confirmed it fixed the issues.
Indian automotive giant Tata Motors has fixed a series of security flaws that exposed sensitive internal data, including personal information of customers, company reports, and data related to its dealers.
Security researcher Eaton Zveare told TechCrunch that he discovered the flaws in Tata Motors’ E-Dukaan unit, an e-commerce portal for buying spare parts for Tata-made commercial vehicles. Headquartered in Mumbai, Tata Motors produces passenger cars, as well as commercial and defense vehicles. The company has a presence in 125 countries worldwide and seven assembly facilities, per its website.
Zveare said he found that the portal’s web source code included the private keys to access and modify data within Tata Motors’ account on Amazon Web Services, the researcher said in a blog post.
The exposed data, Zveare told TechCrunch, included hundreds of thousands of invoices containing customer information, such as their names, mailing addresses, and permanent account number (PAN), a 10-character unique identifier issued by the Indian government.
“Out of respect for not causing some type of alarm bell or massive egress bill at Tata Motors, there were no attempts to exfiltrate large amounts of data or download excessively large files,” the researcher told TechCrunch.
There were also MySQL database backups and Apache Parquet files that included various bits of private customer information and communication, the researcher noted.
The AWS keys also enabled access to over 70 terabytes of data related to Tata Motors’ FleetEdge fleet-tracking software. Zveare also found backdoor admin access to a Tableau account, which included data of over 8,000 users.
“As server admin, you had access to all of it. This primarily includes things like internal financial reports, performance reports, dealer scorecards, and various dashboards,” the researcher said.
The exposed data also included API access to Tata Motors’ fleet management platform, Azuga, which powers the company’s test drive website.
Shortly after discovering the issues, Zveare reported them to Tata Motors through the Indian computer emergency response team, known as CERT-In, in August 2023. Later in October 2023, Tata Motors told Zveare that it was working on fixing the AWS issues after securing the initial loopholes. However, the company did not say when the issues were fixed.
Tata Motors confirmed to TechCrunch that all the reported flaws were fixed in 2023 but would not say if it notified affected customers that their information was exposed.
“We can confirm that the reported flaws and vulnerabilities were thoroughly reviewed following their identification in 2023 and were promptly and fully addressed,” said Tata Motors communications head Sudeep Bhalla, when contacted by TechCrunch.
“Our infrastructure is regularly audited by leading cybersecurity firms, and we maintain comprehensive access logs to monitor for unauthorized activity. We also actively collaborate with industry experts and security researchers to strengthen our security posture and ensure timely mitigation of potential risks,” said Bhalla.
source: OpenAI openai.com
October 30, 2025
Now in private beta: an AI agent that thinks like a security researcher and scales to meet the demands of modern software.
Today, we’re announcing Aardvark, an agentic security researcher powered by GPT‑5.
Software security is one of the most critical—and challenging—frontiers in technology. Each year, tens of thousands of new vulnerabilities are discovered across enterprise and open-source codebases. Defenders face the daunting tasks of finding and patching vulnerabilities before their adversaries do. At OpenAI, we are working to tip that balance in favor of defenders.
Aardvark represents a breakthrough in AI and security research: an autonomous agent that can help developers and security teams discover and fix security vulnerabilities at scale. Aardvark is now available in private beta to validate and refine its capabilities in the field.
How Aardvark works
Aardvark continuously analyzes source code repositories to identify vulnerabilities, assess exploitability, prioritize severity, and propose targeted patches.
Aardvark works by monitoring commits and changes to codebases, identifying vulnerabilities, how they might be exploited, and proposing fixes. Aardvark does not rely on traditional program analysis techniques like fuzzing or software composition analysis. Instead, it uses LLM-powered reasoning and tool-use to understand code behavior and identify vulnerabilities. Aardvark looks for bugs as a human security researcher might: by reading code, analyzing it, writing and running tests, using tools, and more.
Diagram titled “AARDVARK — Vulnerability Discovery Agent Workflow” showing a process flow from Git repository to threat modeling, vulnerability discovery, validation sandbox, patching with Codex, and human review leading to a pull request.
Aardvark relies on a multi-stage pipeline to identify, explain, and fix vulnerabilities:
Analysis: It begins by analyzing the full repository to produce a threat model reflecting its understanding of the project’s security objectives and design.
Commit scanning: It scans for vulnerabilities by inspecting commit-level changes against the entire repository and threat model as new code is committed. When a repository is first connected, Aardvark will scan its history to identify existing issues. Aardvark explains the vulnerabilities it finds step-by-step, annotating code for human review.
Validation: Once Aardvark has identified a potential vulnerability, it will attempt to trigger it in an isolated, sandboxed environment to confirm its exploitability. Aardvark describes the steps taken to help ensure accurate, high-quality, and low false-positive insights are returned to users.
Patching: Aardvark integrates with OpenAI Codex to help fix the vulnerabilities it finds. It attaches a Codex-generated and Aardvark-scanned patch to each finding for human review and efficient, one-click patching.
Aardvark works alongside engineers, integrating with GitHub, Codex, and existing workflows to deliver clear, actionable insights without slowing development. While Aardvark is built for security, in our testing we’ve found that it can also uncover bugs such as logic flaws, incomplete fixes, and privacy issues.
Real impact, today
Aardvark has been in service for several months, running continuously across OpenAI’s internal codebases and those of external alpha partners. Within OpenAI, it has surfaced meaningful vulnerabilities and contributed to OpenAI’s defensive posture. Partners have highlighted the depth of its analysis, with Aardvark finding issues that occur only under complex conditions.
In benchmark testing on “golden” repositories, Aardvark identified 92% of known and synthetically-introduced vulnerabilities, demonstrating high recall and real-world effectiveness.
Aardvark for Open Source
Aardvark has also been applied to open-source projects, where it has discovered and we have responsibly disclosed numerous vulnerabilities—ten of which have received Common Vulnerabilities and Exposures (CVE) identifiers.
As beneficiaries of decades of open research and responsible disclosure, we’re committed to giving back—contributing tools and findings that make the digital ecosystem safer for everyone. We plan to offer pro-bono scanning to select non-commercial open source repositories to contribute to the security of the open source software ecosystem and supply chain.
We recently updated our outbound coordinated disclosure policy which takes a developer-friendly stance, focused on collaboration and scalable impact, rather than rigid disclosure timelines that can pressure developers. We anticipate tools like Aardvark will result in the discovery of increasing numbers of bugs, and want to sustainably collaborate to achieve long-term resilience.
Why it matters
Software is now the backbone of every industry—which means software vulnerabilities are a systemic risk to businesses, infrastructure, and society. Over 40,000 CVEs were reported in 2024 alone. Our testing shows that around 1.2% of commits introduce bugs—small changes that can have outsized consequences.
Aardvark represents a new defender-first model: an agentic security researcher that partners with teams by delivering continuous protection as code evolves. By catching vulnerabilities early, validating real-world exploitability, and offering clear fixes, Aardvark can strengthen security without slowing innovation. We believe in expanding access to security expertise. We're beginning with a private beta and will broaden availability as we learn.
Private beta now open
We’re inviting select partners to join the Aardvark private beta. Participants will gain early access and work directly with our team to refine detection accuracy, validation workflows, and reporting experience.
We’re looking to validate performance across a variety of environments. If your organization or open source project is interested in joining, you can apply here.
securityweek.com
ByIonut Arghire| October 30, 2025 (9:01 AM ET)
Updated: October 31, 2025 (2:36 AM ET)
The hackers stole names, addresses, dates of birth, Social Security numbers, and health and insurance information.
Business services provider Conduent is notifying more than 10 million people that their personal information was stolen in a January 2025 data breach.
The incident was disclosed publicly in late January, when Conduent confirmed system disruptions that affected government agencies in multiple US states.
In April, the company notified the Securities and Exchange Commission (SEC) that the attackers had stolen personal information from its systems.
Last week, Conduent started notifying users that their personal information was stolen in the incident, and submitted notices to Attorney General’s Offices in multiple states.
The hackers accessed Conduent’s network on October 21, 2024 and were evicted on January 13, 2025, after the attack was identified, the company says in the notification letter to the affected individuals.
During the time frame, the attackers exfiltrated various files from the network, including files containing personal information such as names, addresses, dates of birth, Social Security numbers, health insurance details, and medical information.
Conduent is not providing the affected people with free identity theft protection services, but encourages them to obtain free credit reports, place fraud alerts on their credit files, and place security freezes on their credit reports.
“Upon discovery of the incident, we safely restored our systems and operations and notified law enforcement. We are also notifying you in case you decide to take further steps to protect your information should you feel it appropriate to do so,” the notification letter reads.
Based on the data breach notice submitted with the authorities in Oregon, it appears that 10,515,849 individuals were impacted, with the largest number in Texas (4 million).
Conduent serves over 600 government and transportation organizations, and roughly half of Fortune 100 companies, across financial, pharmaceutical, and automobile sectors. The company supports roughly 100 million US residents across 46 states.
While the company has not shared details on the threat actor behind the attack, the Safepay ransomware group claimed the incident in February.
SecurityWeek has emailed Conduent for additional information and will update this article if the company responds.
*Updated with the number of impacted individuals from the Oregon Department of Justice.
reuters.com By A.J. Vicens
October 29, 202511:10 PM GMT+1Updated October 29, 2025
Hackers accessed Ribbon's network in December 2024
Three customers impacted, according to ongoing investigation
Ribbon's breach part of broader trend targeting telecom firms
Oct 29 (Reuters) - Hackers working for an unnamed nation-state breached networks at Ribbon Communications (RBBN.O), opens new tab, a key U.S. telecommunications services company, and remained within the firm’s systems for nearly a year without being detected, a company spokesperson confirmed in a statement on Wednesday.
Ribbon Communications, a Texas-based company that provides technology to facilitate voice and data communications between separate tech platforms and environments, said in its October 23 10-Q filing, opens new tab with the Securities and Exchange Commission that the company learned early last month that people “reportedly associated with a nation-state actor” gained access to the company’s IT network, with initial access dating to early December 2024.
The hack has not been previously reported. It is perhaps the latest example of technology companies that play a critical role in the global telecommunications ecosystem being targeted as part of nation-state hacking campaigns.
Ribbon did not identify the nation-state actor, or disclose which of its customers were affected by the breach, but told Reuters in the statement that its investigation has so far revealed three “smaller customers” impacted.
“While we do not have evidence at this time that would indicate the threat actor gained access to any material information, we continue to work with our third-party experts to confirm this,” a Ribbon spokesperson said in an email. “We have also taken steps to further harden our network to prevent any future incidents.”
sophos.com
October 30, 2025
The threat group targeted a LANSCOPE zero-day vulnerability (CVE-2025-61932)
In mid-2025, Counter Threat Unit™ (CTU) researchers observed a sophisticated BRONZE BUTLER campaign that exploited a zero-day vulnerability in Motex LANSCOPE Endpoint Manager to steal confidential information. The Chinese state-sponsored BRONZE BUTLER threat group (also known as Tick) has been active since 2010 and previously exploited a zero-day vulnerability in Japanese asset management product SKYSEA Client View in 2016. JPCERT/CC published a notice about the LANSCOPE issue on October 22, 2025.
Exploitation of CVE-2025-61932
In the 2025 campaign, CTU™ researchers confirmed that the threat actors gained initial access by exploiting CVE-2025-61932. This vulnerability allows remote attackers to execute arbitrary commands with SYSTEM privileges. CTU analysis indicates that the number of vulnerable internet-facing devices is low. However, attackers could exploit vulnerable devices within compromised networks to conduct privilege escalation and lateral movement. The U.S. Cybersecurity and Infrastructure Security Agency (CISA) added CVE-2025-61932 to the Known Exploited Vulnerabilities Catalog on October 22.
Command and control
CTU researchers confirmed that the threat actors used the Gokcpdoor malware in this campaign. As reported by a third party in 2023, Gokcpdoor can establish a proxy connection with a command and control (C2) server as a backdoor. The 2025 variant discontinued support for the KCP protocol and added multiplexing communication using a third-party library for its C2 communication (see Figure 1).
Comparison of function names in Gokcpdoor samples
Figure 1: Comparison of internal function names in the 2023 (left) and 2025 (right) Gokcpdoor samples
Furthermore, CTU researchers identified two different types of Gokcpdoor with distinct purposes:
The server type listens for incoming client connections, opening the port specified in its configuration. Some of the analyzed samples used 38000 while others used 38002. The C2 functionality enabled remote access.
The client type initiates connections to hard-coded C2 servers, establishing a communication tunnel to function as a backdoor.
On some compromised hosts, BRONZE BUTLER implemented the Havoc C2 framework instead of Gokcpdoor. Some Gokcpdoor and Havoc samples used the OAED Loader malware, which was also linked to BRONZE BUTLER in the 2023 report, to complicate the execution flow. This malware injects a payload into a legitimate executable according to its embedded configuration (see Figure 2).
Visual representation of execution flow that utilizes OAED Loader
Figure 2: Execution flow utilizing OAED Loader
Abuse of legitimate tools and services
CTU researchers also confirmed that the following tools were used for lateral movement and data exfiltration:
goddi (Go dump domain info) – An open-source Active Directory information dumping tool
Remote desktop – A legitimate remote desktop application used through a backdoor tunnel
7-Zip – An open-source file archiver used for data exfiltration
BRONZE BUTLER also accessed the following cloud storage services via the web browser during remote desktop sessions, potentially attempting to exfiltrate the victim’s confidential information:
file.io
LimeWire
Piping Server
Recommendations
CTU researchers recommend that organizations upgrade vulnerable LANSCOPE servers as appropriate in their environments. Organizations should also review internet-facing LANSCOPE servers that have the LANSCOPE client program (MR) or detection agent (DA) installed to determine if there is a business need for them to be publicly exposed.
Detections and indicators
The following Sophos protections detect activity related to this threat:
Torj/BckDr-SBL
Mal/Generic-S
The threat indicators in Table 1 can be used to detect activity related to this threat. Note that IP addresses can be reallocated. The IP addresses may contain malicious content, so consider the risks before opening them in a browser.
Indicator Type Context
932c91020b74aaa7ffc687e21da0119c MD5 hash Gokcpdoor variant used by BRONZE BUTLER
(oci.dll)
be75458b489468e0acdea6ebbb424bc898b3db29 SHA1 hash Gokcpdoor variant used by BRONZE BUTLER
(oci.dll)
3c96c1a9b3751339390be9d7a5c3694df46212fb97ebddc074547c2338a4c7ba SHA256 hash Gokcpdoor variant used by BRONZE BUTLER
(oci.dll)
4946b0de3b705878c514e2eead096e1e MD5 hash Havoc sample used by BRONZE BUTLER
(MaxxAudioMeters64LOC.dll)
1406b4e905c65ba1599eb9c619c196fa5e1c3bf7 SHA1 hash Havoc sample used by BRONZE BUTLER
(MaxxAudioMeters64LOC.dll)
9e581d0506d2f6ec39226f052a58bc5a020ebc81ae539fa3a6b7fc0db1b94946 SHA256 hash Havoc sample used by BRONZE BUTLER
(MaxxAudioMeters64LOC.dll)
8124940a41d4b7608eada0d2b546b73c010e30b1 SHA1 hash goddi tool used by BRONZE BUTLER
(winupdate.exe)
704e697441c0af67423458a99f30318c57f1a81c4146beb4dd1a88a88a8c97c3 SHA256 hash goddi tool used by BRONZE BUTLER
(winupdate.exe)
38[.]54[.]56[.]57 IP address Gokcpdoor C2 server used by BRONZE BUTLER;
uses TCP port 443
38[.]54[.]88[.]172 IP address Havoc C2 server used by BRONZE BUTLER;
uses TCP port 443
38[.]54[.]56[.]10 IP address Connected to ports opened by Gokcpdoor variant
used by BRONZE BUTLER
38[.]60[.]212[.]85 IP address Connected to ports opened by Gokcpdoor variant
used by BRONZE BUTLER
108[.]61[.]161[.]118 IP address Connected to ports opened by Gokcpdoor variant
used by BRONZE BUTLER
Cellebrite can apparently extract data from most Pixel phones, unless they’re running GrapheneOS.
Despite being a vast repository of personal information, smartphones used to have little by way of security. That has thankfully changed, but companies like Cellebrite offer law enforcement tools that can bypass security on some devices. The company keeps the specifics quiet, but an anonymous individual recently logged in to a Cellebrite briefing and came away with a list of which of Google’s Pixel phones are vulnerable to Cellebrite phone hacking.
This person, who goes by the handle rogueFed, posted screenshots from the recent Microsoft Teams meeting to the GrapheneOS forums (spotted by 404 Media). GrapheneOS is an Android-based operating system that can be installed on select phones, including Pixels. It ships with enhanced security features and no Google services. Because of its popularity among the security-conscious, Cellebrite apparently felt the need to include it in its matrix of Pixel phone support.
The screenshot includes data on the Pixel 6, Pixel 7, Pixel 8, and Pixel 9 family. It does not list the Pixel 10 series, which launched just a few months ago. The phone support is split up into three different conditions: before first unlock, after first unlock, and unlocked. The before first unlock (BFU) state means the phone has not been unlocked since restarting, so all data is encrypted. This is traditionally the most secure state for a phone. In the after first unlock (AFU) state, data extraction is easier. And naturally, an unlocked phone is open season on your data.
At least according to Cellebrite, GrapheneOS is more secure than what Google offers out of the box. The company is telling law enforcement in these briefings that its technology can extract data from Pixel 6, 7, 8, and 9 phones in unlocked, AFU, and BFU states on stock software. However, it cannot brute-force passcodes to enable full control of a device. The leaker also notes law enforcement is still unable to copy an eSIM from Pixel devices. Notably, the Pixel 10 series is moving away from physical SIM cards.
For those same phones running GrapheneOS, police can expect to have a much harder time. The Cellebrite table says that Pixels with GrapheneOS are only accessible when running software from before late 2022—both the Pixel 8 and Pixel 9 were launched after that. Phones in both BFU and AFU states are safe from Cellebrite on updated builds, and as of late 2024, even a fully unlocked GrapheneOS device is immune from having its data copied. An unlocked phone can be inspected in plenty of other ways, but data extraction in this case is limited to what the user can access.
The original leaker claims to have dialed into two calls so far without detection. However, rogueFed also called out the meeting organizer by name (the second screenshot, which we are not reposting). Odds are that Cellebrite will be screening meeting attendees more carefully now.
We’ve reached out to Google to inquire about why a custom ROM created by a small non-profit is more resistant to industrial phone hacking than the official Pixel OS. We’ll update this article if Google has anything to say.
theguardian.com
Harry Davies and Yuval Abraham in Jerusalem
Wed 29 Oct 2025 14.15 CET
The tech giants agreed to extraordinary terms to clinch a lucrative contract with the Israeli government, documents show
When Google and Amazon negotiated a major $1.2bn cloud-computing deal in 2021, their customer – the Israeli government – had an unusual demand: agree to use a secret code as part of an arrangement that would become known as the “winking mechanism”.
The demand, which would require Google and Amazon to effectively sidestep legal obligations in countries around the world, was born out of Israel’s concerns that data it moves into the global corporations’ cloud platforms could end up in the hands of foreign law enforcement authorities.
Like other big tech companies, Google and Amazon’s cloud businesses routinely comply with requests from police, prosecutors and security services to hand over customer data to assist investigations.
This process is often cloaked in secrecy. The companies are frequently gagged from alerting the affected customer their information has been turned over. This is either because the law enforcement agency has the power to demand this or a court has ordered them to stay silent.
For Israel, losing control of its data to authorities overseas was a significant concern. So to deal with the threat, officials created a secret warning system: the companies must send signals hidden in payments to the Israeli government, tipping it off when it has disclosed Israeli data to foreign courts or investigators.
To clinch the lucrative contract, Google and Amazon agreed to the so-called winking mechanism, according to leaked documents seen by the Guardian, as part of a joint investigation with Israeli-Palestinian publication +972 Magazine and Hebrew-language outlet Local Call.
Based on the documents and descriptions of the contract by Israeli officials, the investigation reveals how the companies bowed to a series of stringent and unorthodox “controls” contained within the 2021 deal, known as Project Nimbus. Both Google and Amazon’s cloud businesses have denied evading any legal obligations.
The strict controls include measures that prohibit the US companies from restricting how an array of Israeli government agencies, security services and military units use their cloud services. According to the deal’s terms, the companies cannot suspend or withdraw Israel’s access to its technology, even if it’s found to have violated their terms of service.
Israeli officials inserted the controls to counter a series of anticipated threats. They feared Google or Amazon might bow to employee or shareholder pressure and withdraw Israel’s access to its products and services if linked to human rights abuses in the occupied Palestinian territories.
They were also concerned the companies could be vulnerable to overseas legal action, particularly in cases relating to the use of the technology in the military occupation of the West Bank and Gaza.
The terms of the Nimbus deal would appear to prohibit Google and Amazon from the kind of unilateral action taken by Microsoft last month, when it disabled the Israeli military’s access to technology used to operate an indiscriminate surveillance system monitoring Palestinian phone calls.
Microsoft, which provides a range of cloud services to Israel’s military and public sector, bid for the Nimbus contract but was beaten by its rivals. According to sources familiar with negotiations, Microsoft’s bid suffered as it refused to accept some of Israel’s demands.
As with Microsoft, Google and Amazon’s cloud businesses have faced scrutiny in recent years over the role of their technology – and the Nimbus contract in particular – in Israel’s two-year war on Gaza.
During its offensive in the territory, where a UN commission of inquiry concluded that Israel has committed genocide, the Israeli military has relied heavily on cloud providers to store and analyse large volumes of data and intelligence information.
One such dataset was the vast collection of intercepted Palestinian calls that until August was stored on Microsoft’s cloud platform. According to intelligence sources, the Israeli military planned to move the data to Amazon Web Services (AWS) datacentres.
Amazon did not respond to the Guardian’s questions about whether it knew of Israel’s plan to migrate the mass surveillance data to its cloud platform. A spokesperson for the company said it respected “the privacy of our customers and we do not discuss our relationship without their consent, or have visibility into their workloads” stored in the cloud.
Asked about the winking mechanism, both Amazon and Google denied circumventing legally binding orders. “The idea that we would evade our legal obligations to the US government as a US company, or in any other country, is categorically wrong,” a Google spokesperson said.
During its offensive in the territory, where a UN commission of inquiry concluded that Israel has committed genocide, the Israeli military has relied heavily on cloud providers to store and analyse large volumes of data and intelligence information.
One such dataset was the vast collection of intercepted Palestinian calls that until August was stored on Microsoft’s cloud platform. According to intelligence sources, the Israeli military planned to move the data to Amazon Web Services (AWS) datacentres.
Amazon did not respond to the Guardian’s questions about whether it knew of Israel’s plan to migrate the mass surveillance data to its cloud platform. A spokesperson for the company said it respected “the privacy of our customers and we do not discuss our relationship without their consent, or have visibility into their workloads” stored in the cloud.
Asked about the winking mechanism, both Amazon and Google denied circumventing legally binding orders. “The idea that we would evade our legal obligations to the US government as a US company, or in any other country, is categorically wrong,” a Google spokesperson said.
With this threat in mind, Israeli officials inserted into the Nimbus deal a requirement for the companies to a send coded message – a “wink” – to its government, revealing the identity of the country they had been compelled to hand over Israeli data to, but were gagged from saying so.
Leaked documents from Israel’s finance ministry, which include a finalised version of the Nimbus agreement, suggest the secret code would take the form of payments – referred to as “special compensation” – made by the companies to the Israeli government.
According to the documents, the payments must be made “within 24 hours of the information being transferred” and correspond to the telephone dialing code of the foreign country, amounting to sums between 1,000 and 9,999 shekels.
Under the terms of the deal, the mechanism works like this:
If either Google or Amazon provides information to authorities in the US, where the dialing code is +1, and they are prevented from disclosing their cooperation, they must send the Israeli government 1,000 shekels.
If, for example, the companies receive a request for Israeli data from authorities in Italy, where the dialing code is +39, they must send 3,900 shekels.
If the companies conclude the terms of a gag order prevent them from even signaling which country has received the data, there is a backstop: the companies must pay 100,000 shekels ($30,000) to the Israeli government.
Legal experts, including several former US prosecutors, said the arrangement was highly unusual and carried risks for the companies as the coded messages could violate legal obligations in the US, where the companies are headquartered, to keep a subpoena secret.
“It seems awfully cute and something that if the US government or, more to the point, a court were to understand, I don’t think they would be particularly sympathetic,” a former US government lawyer said.
Several experts described the mechanism as a “clever” workaround that could comply with the letter of the law but not its spirit. “It’s kind of brilliant, but it’s risky,” said a former senior US security official.
Israeli officials appear to have acknowledged this, documents suggest. Their demands about how Google and Amazon respond to a US-issued order “might collide” with US law, they noted, and the companies would have to make a choice between “violating the contract or violating their legal obligations”.
Neither Google nor Amazon responded to the Guardian’s questions about whether they had used the secret code since the Nimbus contract came into effect.
“We have a rigorous global process for responding to lawful and binding orders for requests related to customer data,” Amazon’s spokesperson said. “We do not have any processes in place to circumvent our confidentiality obligations on lawfully binding orders.”
Google declined to comment on which of Israel’s stringent demands it had accepted in the completed Nimbus deal, but said it was “false” to “imply that we somehow were involved in illegal activity, which is absurd”.
A spokesperson for Israel’s finance ministry said: “The article’s insinuation that Israel compels companies to breach the law is baseless.”
‘No restrictions’
Israeli officials also feared a scenario in which its access to the cloud providers’ technology could be blocked or restricted.
In particular, officials worried that activists and rights groups could place pressure on Google and Amazon, or seek court orders in several European countries, to force them to terminate or limit their business with Israel if their technology were linked to human rights violations.
To counter the risks, Israel inserted controls into the Nimbus agreement which Google and Amazon appear to have accepted, according to government documents prepared after the deal was signed.
The documents state that the agreement prohibits the companies from revoking or restricting Israel’s access to their cloud platforms, either due to changes in company policy or because they find Israel’s use of their technology violates their terms of service.
Provided Israel does not infringe on copyright or resell the companies’ technology, “the government is permitted to make use of any service that is permitted by Israeli law”, according to a finance ministry analysis of the deal.
Both companies’ standard “acceptable use” policies state their cloud platforms should not be used to violate the legal rights of others, nor should they be used to engage in or encourage activities that cause “serious harm” to people.
However, according to an Israeli official familiar with the Nimbus project, there can be “no restrictions” on the kind of information moved into Google and Amazon’s cloud platforms, including military and intelligence data. The terms of the deal seen by the Guardian state that Israel is “entitled to migrate to the cloud or generate in the cloud any content data they wish”.
Israel inserted the provisions into the deal to avoid a situation in which the companies “decide that a certain customer is causing them damage, and therefore cease to sell them services”, one document noted.
The Intercept reported last year the Nimbus project was governed by an “amended” set of confidential policies, and cited a leaked internal report suggesting Google understood it would not be permitted to restrict the types of services used by Israel.
Last month, when Microsoft cut off Israeli access to some cloud and artificial intelligence services, it did so after confirming reporting by the Guardian and its partners, +972 and Local Call, that the military had stored a vast trove of intercepted Palestinian calls in the company’s Azure cloud platform.
Notifying the Israeli military of its decision, Microsoft said that using Azure in this way violated its terms of service and it was “not in the business of facilitating the mass surveillance of civilians”.
Under the terms of the Nimbus deal, Google and Amazon are prohibited from taking such action as it would “discriminate” against the Israeli government. Doing so would incur financial penalties for the companies, as well as legal action for breach of contract.
The Israeli finance ministry spokesperson said Google and Amazon are “bound by stringent contractual obligations that safeguard Israel’s vital interests”. They added: “These agreements are confidential and we will not legitimise the article’s claims by disclosing private commercial terms.”
| The Record from Recorded Future News
Daryna Antoniuk
October 31st, 2025
Russia's Interior Ministry posted a video of raids on suspected developers of the Meduza Stealer malware, which has been sold to cybercriminals since 2023.
Russian police said they detained three hackers suspected of developing and selling the Meduza Stealer malware in a rare crackdown on domestic cybercrime.
The suspects were arrested in Moscow and the surrounding region, Russia’s Interior Ministry spokesperson Irina Volk said in a statement on Thursday.
The three “young IT specialists” are suspected of developing, using and selling malicious software designed to steal login credentials, cryptocurrency wallet data and other sensitive information, she added.
Police said they seized computer equipment, phones, and bank cards during raids on the suspects’ homes. A video released by the Interior Ministry shows officers breaking down doors and storming into apartments. When asked by police why he had been detained, one suspect replied in Russian, “I don’t really understand.”
Officials said the suspects began distributing Meduza Stealer through hacker forums roughly two years ago. In one incident earlier this year, the group allegedly used the malware to steal data from an organization in Russia’s Astrakhan region.
Authorities said the group also created another type of malware designed to disable antivirus protection and build botnets for large-scale cyberattacks. The malicious program was not identified. The three face up to four years in prison if convicted.
Meduza Stealer first appeared in 2023, sold on Russian-language hacking forums and Telegram channels as a service for a fee. It has since been used in cyberattacks targeting both personal and financial data.
Ukrainian officials have previously linked the malware to attacks on domestic military and government entities. In one campaign last October, threat actors used a fake Telegram “technical support” bot to distribute the malware to users of Ukraine’s government mobilization app.
Researchers have also observed Meduza Stealer infections in Poland and inside Russia itself — including one 2023 campaign that used phishing emails impersonating an industrial automation company.
Russia’s law enforcement agencies rarely pursue cybercriminals operating inside the country. But researchers say that has begun to change.
According to a recent report by Recorded Future’s Insikt Group, Moscow’s stance has shifted “from passive tolerance to active management” of the hacking ecosystem — a strategy that includes selective arrests and public crackdowns intended to reinforce state authority while preserving useful talent.
Such moves mark a notable shift in a country long seen as a safe haven for financially motivated hackers. Researchers say many of these actors are now decentralizing their operations to evade both Western and domestic surveillance.
The Record is an editorially independent unit of Recorded Future.
techcrunch.com/
Lorenzo Franceschi-Bicchierai
10:00 PM PDT · October 28, 2025
On Monday, researchers at cybersecurity giant Kaspersky published a report identifying a new spyware called Dante that they say targeted Windows victims in Russia and neighboring Belarus. The researchers said the Dante spyware is made by Memento Labs, a Milan-based surveillance tech maker that was formed in 2019 after a new owner acquired and took over early spyware maker Hacking Team.
Memento chief executive Paolo Lezzi confirmed to TechCrunch that the spyware caught by Kaspersky does indeed belong to Memento.
In a call, Lezzi blamed one of the company’s government customers for exposing Dante, saying the customer used an outdated version of the Windows spyware that will no longer be supported by Memento by the end of this year.
“Clearly they used an agent that was already dead,” Lezzi told TechCrunch, referring to an “agent” as the technical word for the spyware planted on the target’s computer.
“I thought [the government customer] didn’t even use it anymore,” said Lezzi.
Lezzi, who said he was not sure which of the company’s customers were caught, added that Memento had already requested that all of its customers stop using the Windows malware. Lezzi said the company had warned customers that Kaspersky had detected Dante spyware infections since December 2024. He added that Memento plans to send a message to all its customers on Wednesday asking them once again to stop using its Windows spyware.
He said that Memento currently only develops spyware for mobile platforms. The company also develops some zero-days — meaning security flaws in software unknown to the vendor that can be used to deliver spyware — though it mostly sources its exploits from outside developers, according to Lezzi.
When reached by TechCrunch, Kaspersky spokesperson Mai Al Akkad would not say which government Kaspersky believes is behind the espionage campaign, but that it was “someone who has been able to use Dante software.”
“The group stands out for its strong command of Russian and knowledge of local nuances, traits that Kaspersky observed in other campaigns linked to this [government-backed] threat. However, occasional errors suggest that the attackers were not native speakers,” Al Akkad told TechCrunch.
In its new report, Kaspersky said it found a hacking group using the Dante spyware that it refers to as “ForumTroll,” describing the targeting of people with invites to Russian politics and economics forum Primakov Readings. Kaspersky said the hackers targeted a broad range of industries in Russia, including media outlets, universities, and government organizations.
Kaspersky’s discovery of Dante came after the Russian cybersecurity firm said it detected a “wave” of cyberattacks with phishing links that were exploiting a zero-day in the Chrome browser. Lezzi said that the Chrome zero-day was not developed by Memento.
In its report, Kaspersky researchers concluded that Memento “kept improving” the spyware originally developed by Hacking Team until 2022, when the spyware was “replaced by Dante.”
Lezzi conceded that it is possible that some “aspects” or “behaviors” of Memento’s Windows spyware were left over from spyware developed by Hacking Team.
A telltale sign that the spyware caught by Kaspersky belonged to Memento was that the developers allegedly left the word “DANTEMARKER” in the spyware’s code, a clear reference to the name Dante, which Memento had previously and publicly disclosed at a surveillance tech conference, per Kaspersky.
Much like Memento’s Dante spyware, some versions of Hacking Team’s spyware, codenamed Remote Control System, were named after historical Italian figures, such as Leonardo da Vinci and Galileo Galilei.
A history of hacks
In 2019, Lezzi purchased Hacking Team and rebranded it to Memento Labs. According to Lezzi, he paid only one euro for the company and the plan was to start over.
“We want to change absolutely everything,” the Memento owner told Motherboard after the acquisition in 2019. “We’re starting from scratch.”
A year later, Hacking Team’s CEO and founder David Vincenzetti announced that Hacking Team was “dead.”
When he acquired Hacking Team, Lezzi told TechCrunch that the company only had three government customers remaining, a far cry from the more than 40 government customers that Hacking Team had in 2015. That same year, a hacktivist called Phineas Fisher broke into the startup’s servers and siphoned off some 400 gigabytes of internal emails, contracts, documents, and the source code for its spyware.
Before the hack, Hacking Team’s customers in Ethiopia, Morocco, and the United Arab Emirates were caught targeting journalists, critics, and dissidents using the company’s spyware. Once Phineas Fisher published the company’s internal data online, journalists revealed that a Mexican regional government used Hacking Team’s spyware to target local politicians and that Hacking Team had sold to countries with human rights abuses, including Bangladesh, Saudi Arabia, and Sudan, among others.
Lezzi declined to tell TechCrunch how many customers Memento currently has but implied it was fewer than 100 customers. He also said that there are only two current Memento employees left from Hacking Team’s former staff.
The discovery of Memento’s spyware shows that this type of surveillance technology keeps proliferating, according to John Scott-Railton, a senior researcher who has investigated spyware abuses for a decade at the University of Toronto’s Citizen Lab.
It also shows that a controversial company can die because of a spectacular hack and several scandals, and yet a new company with brand-new spyware can still come out of its ashes.
“It tells us that we need to keep up the fear of consequences,” Scott-Railton told TechCrunch. “It says a lot that echoes of the most radioactive, embarrassed and hacked brand are still around.”
By Reuters
October 29, 2025
BANGKOK, Oct 29 (Reuters) - India plans to send an airplane to repatriate some 500 of its nationals who fled from a military raid on a scam centre in Myanmar into Thailand, Thai Prime Minister Anutin Charnvirakul said on Wednesday.
Starting last week, the Myanmar military has conducted a series of military operations against the KK Park cybercrime compound, driving more than 1,500 people from 28 countries into the Thai border town of Mae Sot, according to local authorities.
The border areas between Thailand, Myanmar, Laos and Cambodia have become hubs for online fraud since the COVID-19 pandemic, and the United Nations says billions of dollars have been earned from trafficking hundreds of thousands of people forced to work in the compounds.
KK Park is notorious for its involvement in transnational cyberscams. The sprawling compound and others nearby are run primarily by Chinese criminal gangs and guarded by local militia groups aligned to Myanmar's military.
Anutin said the Indian ambassador would meet the head of immigration to discuss speeding up the legal verification process for the 500 Indian nationals ahead of their flight back to India.
"They don't want this to burden us," Anutin said. "They will send a plane to pick these victims up... the plane will land directly in Mae Sot," he said.
Indian foreign ministry spokesperson Randhir Jaiswal said India's embassy was working with Thailand "to verify their nationality and to repatriate them, after necessary legal formalities are completed in Thailand."
Earlier this year India also sent a plane to repatriate its nationals after thousands were freed from cyberscam centres along the Thai-Myanmar border following a regional crackdown.
Breaking Trusted Execution Environments via DDR5 Memory Bus Interposition
TEE.fail:
Breaking Trusted Execution Environments via DDR5 Memory Bus Interposition
With the increasing popularity of remote computation like cloud computing, users are increasingly losing control over their data, uploading it to remote servers that they do not control. Trusted Execution Environments (TEEs) aim to reduce this trust, offering users promises such as privacy and integrity of their data as well as correctness of computation. With the introduction of TEEs and Confidential Computing features to server hardware offered by Intel, AMD, and Nvidia, modern TEE implementations aim to provide hardware-backed integrity and confidentiality to entire virtual machines or GPUs, even when attackers have full control over the system's software, for example via root or hypervisor access. Over the past few years, TEEs have been used to execute confidential cryptocurrency transactions, train proprietary AI models, protect end-to-end encrypted chats, and more.
In this work, we show that the security guarantees of modern TEE offerings by Intel and AMD can be broken cheaply and easily, by building a memory interposition device that allows attackers to physically inspect all memory traffic inside a DDR5 server. Making this worse, despite the increased complexity and speed of DDR5 memory, we show how such an interposition device can be built cheaply and easily, using only off the shelf electronic equipment. This allows us for the first time to extract cryptographic keys from Intel TDX and AMD SEV-SNP with Ciphertext Hiding, including in some cases secret attestation keys from fully updated machines in trusted status. Beyond breaking CPU-based TEEs, we also show how extracted attestation keys can be used to compromise Nvidia's GPU Confidential Computing, allowing attackers to run AI workloads without any TEE protections. Finally, we examine the resilience of existing deployments to TEE compromises, showing how extracted attestation keys can potentially be used by attackers to extract millions of dollars of profit from various cryptocurrency and cloud compute services.
| The Record from Recorded Future News
Daryna Antoniuk
October 27th, 2025
The utility responsible for operating Sweden's power grid is investigating a data breach after a ransomware group threatened to leak hundreds of gigabytes of purportedly stolen internal data.
Sweden’s power grid operator is investigating a data breach after a ransomware group threatened to leak hundreds of gigabytes of purportedly stolen internal data.
State-owned Svenska kraftnät, which operates the country’s electricity transmission system, said the incident affected a “limited external file transfer solution” and did not disrupt Sweden’s power supply.
“We take this breach very seriously and have taken immediate action,” said Chief Information Security Officer Cem Göcgören in a statement. “We understand that this may cause concern, but the electricity supply has not been affected.”
The ransomware gang Everest claimed responsibility for the attack on its leak site over the weekend, alleging it had exfiltrated about 280 gigabytes of data and saying it would publish it unless the agency complied with its demands.
The same group has previously claimed attacks on Dublin Airport, Air Arabia, and U.S. aerospace supplier Collins Aerospace — incidents that disrupted flight operations across several European cities in September. The group’s claims could not be independently verified.
Svenska kraftnät said it is working closely with the police and national cybersecurity authorities to determine the extent of the breach and what data may have been exposed. The utility has not attributed the attack to any specific threat actor.
“Our current assessment is that mission-critical systems have not been affected,” Göcgören said. “At this time, we are not commenting on perpetrators or motives until we have confirmed information.”
vxdb.sh Journalist | Cybercrime News |
It is human nature to be competitive, to try your best when competing against others. It is no different when it comes to video games. Major E-Sports tournament prize pools regularly reach the multi millions. Last year the CS2 PGL Major hosted in Copenhagen had a prize pool of $1.25M.
Outside of the Esports realm cheating is still very prevalent, from games like Fortnite, Apex Legends, CS2, even non competitive games like Minecraft or Roblox have cheating issues. Most if not all the top tier cheats aren't free. Instead they rely on a subscription-based monetization model, where users pay for access to private builds or regular updates designed to evade detection from the games AntiCheat. Cheat developers also utilize what are called resellers who advertise, and sell the cheat on behalf of the developers in exchange for a cut of the profits.
Most players don't want to or can't pay for premium/paid cheats so they hunt for free alternatives or cracked versions of paid cheats on sketchy forums, Youtube, or even Github. While some free cheats do exist, they usually don't have many features, are slower to update, and quickly detected by the AntiCheat, meaning they’ll get you banned fast, sometimes instantly. A significant portion of these “free” alternatives present security risks. In many cases, the download contains typically info stealers, Discord token grabbers, or RATs. In other instances, the advertised download is a working cheat but has malware executed in the background without the user knowing.
How threat actors spread their malware
Cybercriminals weaponize YouTube by posting videos that advertise free cheats, executors, or “cracked” cheats and then use the video description or pinned comments to funnel viewers to a download link. Many videos use the service Linkvertise which makes users go through a handful of ads and suspicious downloads to reach the final download link where the file is hosted on a site like MediaFire or Meganz. These videos are being posted on stolen or fake youtube accounts created and advertised by what are called Traffer Teams.
What are Traffers Teams?
"Traffer teams manage the entire operation, recruiting affiliates (traffers), handling monetization, and managing/crypting stealer builds. Traffer gangs recruit affiliates who spread the malware, often driving app downloads from YouTube, TikTok, and other platforms. Traffers are commonly paid a percentage of these stolen logs or receive a direct payment for installs. Traffer gangs will typically monetize these stolen logs by selling them directly to buyers or cashing out themselves." As per Benjamin Brundage CEO of Synthient.
In a recent upload by researcher Eric Parker, a YouTube channel was discovered repeatedly uploading videos advertising so-called “Valorant Skins Changer,” “Roblox Executor,” and similar “free hacks" all with oddly similar thumbnails. Each video’s description contained a download link that redirected users to a Google Sites page at "sites[.]google[.]com/view/lyteam".
This site is operated by a Traffer Team known as LyTeam, which promotes and distributes info-stealing malware under the guise of free game cheats.
Later in the same video, Eric Parker downloaded and analyzed a .dll file hosted on the LyTeam site. When uploaded to VirusTotal, the sample was identified to be a strain of the Lumma Stealer Malware, a well-known info-stealing malware family known for harvesting browser credentials and crypto wallets.
How to stay safe
Don't click random links and run files you find out on the internet, if needed use and AntiVirus software to scan files on your computer. Run sketchy files you find either in a virtual machine or sandbox, better yet use VirusTotal.
Staying safe doesn't mean you need to be paranoid 24/7, it's about awareness.
Thank you for reading,
vxdb :)
LayerX discovered the first vulnerability impacting OpenAI’s new ChatGPT Atlas browser, allowing bad actors to inject malicious instructions into ChatGPT’s “memory” and execute remote code. This exploit can allow attackers to infect systems with malicious code, grant themselves access privileges, or deploy malware.
The vulnerability affects ChatGPT users on any browser, but it is particularly dangerous for users of OpenAI’s new agentic browser: ChatGPT Atlas. LayerX has found that Atlas currently does not include any meaningful anti-phishing protections, meaning that users of this browser are up to 90% more vulnerable to phishing attacks than users of traditional browsers like Chrome or Edge.
The exploit has been reported to OpenAI under Responsible Disclosure procedures, and a summary is provided below, while withholding technical information that will allow attackers to replicate this attack.
TL/DR: How The Exploit Works:
LayerX discovered how attackers can use a Cross-Site Request Forgery (CSRF) request to “piggyback” on the victim’s ChatGPT access credentials, in order to inject malicious instructions into ChatGPT’s memory. Then, when the user attempts to use ChatGPT for legitimate purposes, the tainted memories will be invoked, and can execute remote code that will allow the attacker to gain control of the user account, their browser, code they are writing, or systems they have access to.
While this vulnerability affects ChatGPT users on any browser, it is particularly dangerous for users of ChatGPT Atlas browser, since they are, by default, logged-in to ChatGPT, and since LayerX testing indicates that the Atlas browser is up to 90% more exposed than Chrome and Edge to phishing attacks.
A Step-by-Step Explanation:
Initially, the user is logged-in to ChatGPT, and holds an authentication cookie or token in their browser.
The user clicks a malicious link, leading them to a compromised web page.
The malicious page invokes a Cross-Site Request Forgery (CSRF) request to take advantage of the user’s pre-existing authentication into ChatGPT
The CSRF exploit injects hidden instructions into ChatGPT’s memory, without the user’s knowledge, thereby “tainting” the core LLM memory.
The next time the user queries ChatGPT, the tainted memories are invokes, allowing deployment of malicious code that can give attackers control over systems or code.
Using Cross-Site Request Forgery (CSRF) To Access LLMs:
A cross-site request forgery (CSRF) attack is when an attacker tricks a user’s browser into sending an unintended, state-changing request to a website where the user is already authenticated, causing the site to perform actions as that user without their consent.
The attack occurs when a victim is logged in to a target site, which has session cookies stored in the browser. The victim visits or is redirected into a malicious page that issues a crafted request (via a form, image tag, link, or script) to the target site. The browser automatically includes the victim’s credentials (cookies, auth headers), so the target site processes the request as if the user initiated it.
In most cases, the impact of a CSRF attack is aimed at activity such as changing account email/password, initiating funds transfers, or making purchases under the user’s session can occur.
However, when it comes to AI systems, using a CSRF attack, attackers can gain access to AI systems that the user is logged-in to, query it, or inject instructions into it.
Infecting ChatGPT’s Core “Memory”
ChatGPT’s “Memory” allows ChatGPT to remember useful details about users’ queries, chat and activities, such as preferences, constraints, projects, style notes, etc., and reuse them across future chats so that users don’t have to repeat themselves. In effect, they act like the LLM’s background memory or subconscious.
Once attackers have access to the user’s ChatGPT via the CSRF request, they can use it to inject hidden instructions to ChatGPT, that will affect future chats.
Like a person’s subconscious, once the right instructions are stored inside ChatGP’s Memory, ChatGPT will be compelled to execute these instructions, effectively becoming a malicious co-conspiritor.
Moreover, once an account’s Memory has been infected, this infection is persistent across all devices that the account is used on – across home and work computers, and across different browsers – whether a user is using them on Chrome, Atlas, or any other browser. This makes the attack extremely “sticky,” and is especially dangerous for users who use the same account for both work and personal purposes.
ChatGPT Atlas Users Up to 90% More Exposed Than Other Browsers
While this vulnerability can be used against ChatGPT users on any browser, users of OpenAI’s ChatGPT browser are particularly vulnerable. This is for two reasons:
When you are using Atlas, you are, by default, logged-in to ChatGPT. This means that ChatGPT credentials are always stored in the browser, where they can be targeted by malicious CSRF requests.
ChatGPT Atlas is particularly bad at stopping phishing attacks. This means that users of Atlas are more exposed than users of other browsers.
LayerX tested Atlas against over 100 in-the-wild web vulnerabilities and phishing attacks. LayerX previously conducted the same test against other AI browsers such as Comet, Dia, and Genspark. The results were uninspiring, to say the least:
In the previous tests, whereas traditional browsers such as Edge and Chrome were able to stop about 50% of phishing attacks using their out-of-the-box protections, Comet and Genspark stopped only 7% (Dia generated results similar to those of Chrome).
Running the same test against Atlas showed even more stark results:
Out of 103 in-the-wild attacks that LayerX tested, ChatGPT Atlas allowed 97 to go through, a whopping 94.2% failure rate.
Compared to Edge (which stopped 53% of attacks in LayerX’s test) and Chrome (which stopped 47% of attacks), ChatGPT Atlas was able to successfully stop only 5.8% of malicious web pages, meaning that users of Atlas were nearly 90% more vulnerable to phishing attacks, compared to users of other browsers.
The implication is that not only users of ChatGPT Atlas are susceptible to malicious attack vectors that can lead to injection of malicious instructions into their ChatGPT accounts, but since Atlas does not include any meaningful anti-phishing protection, Atlas users are at a greater risk of exposure.
Proof of Concept: Injecting Malicious Code To ‘Vibe’ Coding
Below is an illustration of an attack vector exploiting this vulnerability, on an Atlas browser user who is vibe coding:
“Vibe coding” is a collaborative style where the developer treats the AI as a creative partner rather than a line-by-line executor. Instead of prescribing exact syntax, the developer shares the project’s intent and feel (e.g., architecture goals, tone, audience, aesthetic preferences, etc.) and other non-functional requirements.
ChatGPT then uses this holistic brief to produce code that works and matches the requested style, narrowing the gap between high-level ideas and low-level implementation. The developer’s role shifts from hand-coding to steering and refining the AI’s interpretation.
While ChatGPT offers some defenses against malicious instructions, effectiveness can vary with the attack’s sophistication and how the unwanted behavior entered Memory.
In some cases, the user may see a mild warning; in others, the attempt might be blocked. However, if cleverly masked, the code could evade detection altogether. For example, this is the subtle warning that this script received. At most, it’s a sidenote that is easy to miss within the blob of text:
• The Register
Carly Page
Thu 23 Oct 2025 //
Google has taken down thousands of YouTube videos that were quietly spreading password-stealing malware disguised as cracked software and game cheats.
Researchers at Check Point say the so-called "YouTube Ghost Network" hijacked and weaponized legitimate YouTube accounts to post tutorial videos that promised free copies of Photoshop, FL Studio, and Roblox hacks, but instead lured viewers into installing infostealers such as Rhadamanthys and Lumma.
The campaign, which has been running since 2021, surged in 2025, with the number of malicious videos tripling compared to previous years. More than 3,000 malware-laced videos have now been scrubbed from the platform after Check Point worked with Google to dismantle what it called one of the most significant malware delivery operations ever seen on YouTube.
Check Point says the Ghost Network relied on thousands of fake and compromised accounts working in concert to make malicious content look legitimate. Some posted the "tutorial" videos, others flooded comment sections with praise, likes, and emojis to give the illusion of trust, while a third set handled "community posts" that shared download links and passwords for the supposed cracked software.
"This operation took advantage of trust signals, including views, likes, and comments, to make malicious content seem safe," said Eli Smadja, security research group manager at Check Point. "What looks like a helpful tutorial can actually be a polished cyber trap. The scale, modularity, and sophistication of this network make it a blueprint for how threat actors now weaponise engagement tools to spread malware."
Once hooked, victims were typically instructed to disable antivirus software, then download an archive hosted on Dropbox, Google Drive, or MediaFire. Inside was malware rather than a working copy of the promised program, and once opened, the infostealers exfiltrated credentials, crypto wallets, and system data to remote command-and-control servers.
One hijacked channel with 129,000 subscribers posted a cracked version of Adobe Photoshop that racked up nearly 300,000 views and more than 1,000 likes. Another targeted cryptocurrency users, redirecting them to phishing pages hosted on Google Sites.
As Check Point tracked the network, it found the operators frequently rotated payloads and updated download links to outpace takedowns, creating a resilient ecosystem that could quickly regenerate even when accounts were banned.
Check Point says the Ghost Network's modular design, with uploaders, commenters, and link distributors, allowed campaigns to persist for years. The approach mimics a separate operation the firm has dubbed the "Stargazers Ghost Network" on GitHub, where fake developer accounts host malicious repositories.
While most of the malicious videos pushed pirated software, the biggest lure was gaming cheats – particularly for Roblox, which has an estimated 380 million monthly active players. Other videos dangled cracked copies of Microsoft Office, Lightroom, and Adobe tools. The "most viewed" malicious upload targeted Photoshop, drawing almost 300,000 views before Google's cleanup operation.
The surge in 2025 marks a sharp shift in how malware is being distributed. Where phishing emails and drive-by downloads once dominated, attackers are now exploiting the social credibility of mainstream platforms to bypass user skepticism.
"In today's threat landscape, a popular-looking video can be just as dangerous as a phishing email," Smadja said. "This takedown shows that even trusted platforms aren't immune to weaponization, but it also proves that with the right intelligence and partnerships, we can push back."
Check Point doesn't have concrete evidence as to who is operating this network. It said the primary beneficiaries currently appear to be cybercriminals motivated by profit, but this could change if nation-state groups use the same tactics and video content to attract high-value targets.
The YouTube Ghost Network's rise underscores how far online malware peddlers have evolved from spammy inbox bait. The ghosts may have been exorcised this time, but with engagement now an attack vector, the next haunting is only ever a click away.
iverify.io
By Matthias Frielingsdorf, VP of Research
Oct 21, 2025
iOS 26 changes how shutdown logs are handled, erasing key evidence of Pegasus and Predator spyware, creating new challenges for forensic investigators
As iOS 26 is being rolled out, our team noticed a particular change in how the operating system handles the shutdown.log file: it effectively erases crucial evidence of Pegasus and Predator spyware infections. This development poses a serious challenge for forensic investigators and individuals seeking to determine if their devices have been compromised at a time when spyware attacks are becoming more common.
The Power of the shutdown.log
For years, the shutdown.log file has been an invaluable, yet often overlooked, artifact in the detection of iOS malware. Located within the Sysdiagnoses in the Unified Logs section (specifically, Sysdiagnose Folder -> system_logs.logarchive -> Extra -> shutdown.log), it has served as a silent witness to the activities occurring on an iOS device, even during its shutdown sequence.
In 2021, the publicly known version of Pegasus spyware was found to leave discernible traces within this shutdown.log. These traces provided a critical indicator of compromise, allowing security researchers to identify infected devices. However, the developers behind Pegasus, NSO Group, are constantly refining their techniques, and by 2022 Pegasus had evolved.
Pegasus's Evolving Evasion Tactics
While still leaving evidence in the shutdown.log, their methods became more sophisticated. Instead of leaving obvious entries, they began to completely wipe the shutdown.log file. Yet, even with this attempted erasure, their own processes still left behind subtle traces. This meant that even a seemingly clean shutdown.log that began with evidence of a Pegasus sample was, in itself, an indicator of compromise. Multiple cases of this behavior were observed until the end of 2022, highlighting the continuous adaptation of these malicious actors.
Following this period, it is believed that Pegasus developers implemented even more robust wiping mechanisms, likely monitoring device shutdown to ensure a thorough eradication of their presence from the shutdown.log. Researchers have noted instances where devices known to be active had their shutdown.log cleared, alongside other IOCs for Pegasus infections. This led to the conclusion that a cleared shutdown.log could serve as a good heuristic for identifying suspicious devices.
Predator's Similar Footprint
The sophisticated Predator spyware, observed in 2023, also appears to have learned from the past. Given that Predator was actively monitoring the shutdown.log, and considering the similar behavior seen in earlier Pegasus samples, it is highly probable that Predator, too, left traces within this critical log file.
iOS 26: An Unintended Cleanse
With iOS 26 Apple introduced a change—either an intentional design decision or an unforeseen bug—that causes the shutdown.log to be overwritten on every device reboot instead of appended with a new entry every time, preserving each as its own snapshot. This means that any user who updates to iOS 26 and subsequently restarts their device will inadvertently erase all evidence of older Pegasus and Predator detections that might have been present in their shutdown.log.
This automatic overwriting, while potentially intended for system hygiene or performance, effectively sanitizes the very forensic artifact that has been instrumental in identifying these sophisticated threats. It could hardly come at a worse time - spyware attacks have been a constant in the news and recent headlines show that high-power executives and celebrities, not just civil society, are being targeted.
Identifying Pegasus 2022: A Specific IOC
For those still on iOS versions prior to 26, a specific IOC for Pegasus 2022 infections involved the presence of a /private/var/db/com.apple.xpc.roleaccountd.staging/com.apple.WebKit.Networking entry within the shutdown.log. This particular IOC also revealed a significant shift in NSO Group's tactics: they began using normal system process names instead of easily identifiable, similarly named processes, making detection more challenging.
An image of a shutdown.log file
Correlating Logs for Deeper Insight (< iOS 18)
For devices running iOS 18 or earlier, a more comprehensive approach to detection involved correlating containermanagerd log entries with shutdown.log events. Containermanagerd logs contain boot events and can retain data for several weeks. By comparing these boot events with shutdown.log entries, investigators could identify discrepancies. For example, if numerous boot events were observed before shutdown.log entries, it suggested that something was amiss and potentially being hidden.
Before You Update
Given the implications of iOS 26's shutdown.log handling, it is crucial for users to take proactive steps:
Before updating to iOS 26, immediately take and save a sysdiagnose of your device. This will preserve your current shutdown.log and any potential evidence it may contain.
Consider holding off on updating to iOS 26 until Apple addresses this issue, ideally by releasing a bug fix that prevents the overwriting of the shutdown.log on boot.