bleepingcomputer.com
By Bill Toulas
December 1, 2025
The popular open-source SmartTube YouTube client for Android TV was compromised after an attacker gained access to the developer's signing keys, leading to a malicious update being pushed to users.
The compromise became known when multiple users reported that Play Protect, Android's built-in antivirus module, blocked SmartTube on their devices and warned them of a risk.
The developer of SmartTube, Yuriy Yuliskov, admitted that his digital keys were compromised late last week, leading to the injection of malware into the app.
Yuliskov revoked the old signature and said he would soon publish a new version with a separate app ID, urging users to move to that one instead.
SmartTube is one of the most widely downloaded third-party YouTube clients for Android TVs, Fire TV sticks, Android TV boxes, and similar devices.
Its popularity stems from the fact that it is free, can block ads, and performs well on underpowered devices.
A user who reverse-engineered the compromised SmartTube version 30.51 found that it includes a hidden native library named libalphasdk.so [VirusTotal]. This library does not exist in the public source code, indicating it was injected into the release builds.
"Possibly a malware. This file is not part of my project or any SDK I use. Its presence in the APK is unexpected and suspicious. I recommend caution until its origin is verified," cautioned Yuliskov on a GitHub thread.
The library runs silently in the background without user interaction, fingerprints the host device, registers it with a remote backend, and periodically sends metrics and retrieves configuration via an encrypted communications channel.
All this happens without any visible indication to the user. While there is no evidence so far of malicious activity such as account theft or participation in DDoS botnets, the library could be used to enable such activity at any time.
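Because an APK is an ordinary zip archive, one quick way to spot an unexpected native library like this is to list the bundled .so files and compare them against the project's public source tree. A minimal sketch; the APK filename is a placeholder:

```shell
# List native libraries bundled in an APK (an APK is a zip archive).
# "smarttube.apk" is a placeholder filename.
unzip -l smarttube.apk | awk '{print $4}' | grep '\.so$' \
  || echo "no native libraries listed"
# Any library shown here that does not appear in the public source tree
# (such as libalphasdk.so) is a red flag.
```

This only surfaces library names; confirming what a suspicious library actually does still requires reverse engineering, as the user above did.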
Although the developer announced on Telegram the release of safe beta and stable test builds, they have not reached the project's official GitHub repository yet.
Also, the developer has not provided full details of what exactly happened, which has created trust issues in the community.
Yuliskov promised to address all concerns once the final release of the new app is pushed to the F-Droid store.
Until the developer transparently discloses all points publicly in a detailed post-mortem, users are recommended to stay on older, known-to-be-safe builds, avoid logging in with premium accounts, and turn off auto-updates.
Impacted users are also recommended to reset their Google Account passwords, check their account console for unauthorized access, and remove services they don't recognize.
At this time, it is unclear exactly when the compromise occurred or which versions of SmartTube are safe to use. One user reported that Play Protect does not flag version 30.19, which suggests that build predates the compromise.
BleepingComputer has contacted Yuliskov to determine which versions of the SmartTube app were compromised, and he responded with the following:
"Some of the older builds that appeared on GitHub were unintentionally compromised due to malware present on my development machine at the time they were created. As soon as I noticed the issue in late November, I immediately wiped the system and cleaned the environment, including the GitHub repository."
"I became aware of the malware issue around version 30.47, but as users reported lately it started around version 30.43. So, for my understanding the compromised versions are: 30.43-30.47."
"After cleaning the environment, a couple of builds were released using the previous key (prepared on the clean system), but from version 30.55 onward I switched to a new key for full security. The differing hashes for 30.47 Stable v7a are likely the result of attempts to restore that build after cleaning the infected system."
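Differing hashes like the ones the developer mentions for 30.47 can be checked locally before installing anything. A minimal sketch; both the filename and the reference hash below are placeholders, not real published values:

```shell
# Compare a downloaded APK against a published reference hash.
# Both FILE and EXPECTED are placeholders.
FILE="smarttube_30.47_stable_v7a.apk"
EXPECTED="0000000000000000000000000000000000000000000000000000000000000000"
ACTUAL=$(sha256sum "$FILE" 2>/dev/null | awk '{print $1}')
if [ "$ACTUAL" = "$EXPECTED" ]; then
  echo "hash matches"
else
  echo "HASH MISMATCH - do not install"
fi
```

This only helps once the developer publishes authoritative hashes for the clean builds from a trusted channel.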
Update 12/2 - Added developer comment and information.
nextron-systems.com - Nextron Systems
by Marius Benthin, Nov 28, 2025
Over the last weeks we’ve been running a new internal artifact-scanning service across several large ecosystems. It’s still growing feature-wise (LLM scoring and a few other bits are being added), but the core pipeline is already pulling huge amounts of material every week: Docker Hub images, PyPI packages, npm modules, Chrome extensions, VS Code extensions. Everything gets thrown through our signature set, which is built to flag obfuscated JavaScript, encoded payloads, suspicious command stubs, reverse shells, and the usual “why is this here” indicators.
The only reason this works at the scale we need is THOR Thunderstorm running in Docker. That backend handles the heavy lifting for millions of files, so the pipeline just feeds artifacts into it at a steady rate. The same component is available to customers: anyone who wants to plug this kind of scanning into their own CI or ingestion workflow can use Thunderstorm exactly the way we use it internally.
We review millions of files; most of the noise is the classic JS obfuscation that maintainers use to “protect” their code. But buried in that noise you find the things that shouldn’t be there at all, and one of those popped up this week.
Our artifact scanning approach
We published an article this year about blind spots in security tooling and why malicious artifacts keep slipping through standard AV checks. That’s the gap this whole setup is meant to cover. AV engines choke on obfuscated scripts, and LLMs fall over as soon as you throw industrial-scale volume at them. Thunderstorm sits in the middle – signature coverage that hits encoded payloads, weird script constructs, stagers, reverse shells, etc., plus the ability to scale horizontally in containers.
The workflow is simple:
pull artifacts from Docker Hub, PyPI, NPM, the VS Code Marketplace, Chrome Web Store;
unpack them into individual files;
feed them into Thunderstorm;
store all hits;
manually review anything above a certain score.
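The pull-and-unpack steps above are straightforward; a rough sketch of the unpack step for the npm case follows. The tarball filename is a placeholder, and the submission to Thunderstorm is left abstract since its API is not shown in this post:

```shell
# Sketch of the unpack-and-enumerate step for one npm tarball.
# "package.tgz" is a placeholder; in the real pipeline it would be fetched
# from the registry, e.g. curl -sO https://registry.npmjs.org/<name>/-/<name>-<version>.tgz
mkdir -p extracted
tar -xzf package.tgz -C extracted 2>/dev/null || echo "no tarball present"
# Each extracted file would then be submitted to the Thunderstorm endpoint.
find extracted -type f
```

The same pattern applies to the other ecosystems; only the fetch and unpack formats differ (Docker layers, VSIX/CRX zips, wheels).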
We run these scans continuously. The goal is to surface the obviously malicious uploads quickly and not get buried in the endless “maybe suspicious” noise.
The finding: malicious VS Code extension with Rust implants
While reviewing flagged VS Code extensions, Marius stumbled upon an extension named “Icon Theme: Material”, published under the account “IconKiefApp”. It mimics the legitimate and extremely popular Material Icon Theme extension by Philipp Kief: same name pattern, same visuals, but not the same author.
The fake extension had more than 16,000 installs already.
Inside the package we found two Rust implants: one Mach-O, one Windows PE. The paths looked like this:
icon-theme-materiall.5.29.1/extension/dist/extension/desktop/
The Mach-O binary contains a user-path string identical in style to the GlassWorm samples reported recently by Koi (VT sample link below). The PE implant shows the same structure. Both binaries are definitely not part of any real icon-theme extension.
The malicious extension:
https://marketplace.visualstudio.com/items?itemName=Iconkieftwo.icon-theme-materiall
The legitimate one:
https://marketplace.visualstudio.com/items?itemName=PKief.material-icon-theme
Related GlassWorm sample:
https://www.virustotal.com/gui/file/eafeccc6925130db1ebc5150b8922bf3371ab94dbbc2d600d9cf7cd6849b056e
IOCs
VS Code Extension
0878f3c59755ffaf0b639c1b2f6e8fed552724a50eb2878c3ba21cf8eb4e2ab6
icon-theme-materiall.5.29.1.zip
Rust Implants
6ebeb188f3cc3b647c4460c0b8e41b75d057747c662f4cd7912d77deaccfd2f2
(os.node) PE
fb07743d139f72fca4616b01308f1f705f02fda72988027bc68e9316655eadda
(darwin.node) MACHO
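The hashes above can be checked against a local installation. A minimal sketch; the ~/.vscode/extensions path is an assumption for a default VS Code install:

```shell
# Scan the local VS Code extensions directory for files matching the IOC hashes.
# The ~/.vscode/extensions path assumes a default install location.
printf '%s\n' \
  0878f3c59755ffaf0b639c1b2f6e8fed552724a50eb2878c3ba21cf8eb4e2ab6 \
  6ebeb188f3cc3b647c4460c0b8e41b75d057747c662f4cd7912d77deaccfd2f2 \
  fb07743d139f72fca4616b01308f1f705f02fda72988027bc68e9316655eadda > /tmp/iocs.txt
find ~/.vscode/extensions -type f -exec sha256sum {} + 2>/dev/null \
  | grep -Ff /tmp/iocs.txt \
  || echo "no IOC matches found"
```

A match means the malicious 5.29.1 package is present and the extension should be removed immediately.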
Signatures
YARA rules that triggered on the samples:
SUSP_Implant_Indicators_Jul24_1
SUSP_HKTL_Gen_Pattern_Feb25_2
Status
We already reported the malicious extension to Microsoft. The previous version, 5.29.0, didn’t contain any implants. The publisher then pushed a new update, version 5.29.1, on 28 November 2025 at 11:34, and that one does include the two Rust implants.
As of now (28 November, 14:00 CET), the malicious 5.29.1 release is still online. We expect Microsoft to remove the extension from the Marketplace. We’ll share more details once we’ve fully unpacked both binaries and mapped the overlaps with the GlassWorm activity.
Closing
This is exactly the kind of thing the artifact-scanner was built for. Package ecosystems attract opportunistic uploads; VS Code extensions are no different. We’ll keep scanning the big ecosystems and publish findings when they’re clearly malicious. If you maintain an extension or a package registry and want to compare detections with us, feel free to reach out; we’re adding more sources week by week.
Update 29.11.2025
Since we published the initial post, a full technical analysis of the Rust implants contained in the malicious extension has been completed. The detailed breakdown is now available in our follow-up article: “Analysis of the Rust implants found in the malicious VS Code extension”.
That post describes how the implants operate on Windows and macOS, their command-and-control mechanism via a Solana-based wallet, the encrypted-payload delivery, and fallback techniques including a hidden Google Calendar-based channel.
Readers who want full technical context, IOCs and deeper insight are encouraged to review the new analysis.
Post-mortem of Shai-Hulud attack on November 24th, 2025
Oliver Browne
Nov 26, 2025
PostHog news - posthog.com
At 4:11 AM UTC on November 24th, a number of our SDKs and other packages were compromised with a malicious self-replicating worm, Shai-Hulud 2.0. New versions were published to npm containing a preinstall script that:
Scanned the environment the install script was running in for credentials of any kind using Trufflehog, an open-source security tool that searches codebases, Git histories, and other data sources for secrets.
Exfiltrated those credentials by creating a new public repository on GitHub and pushing the credentials to it.
Used any npm credentials found to publish malicious packages to npm, propagating the breach.
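Since the worm ran from a preinstall hook, one quick way to see which installed dependencies declare install-time hooks at all is to grep their manifests. A minimal sketch, run from a project root; this lists hook usage in general, not worm infections specifically:

```shell
# List installed dependencies that declare install-time hooks
# (preinstall/postinstall), the vector this worm used.
# Run from a project root containing node_modules/.
grep -lE '"(preinstall|postinstall)"' node_modules/*/package.json 2>/dev/null \
  || echo "no install hooks found (or no node_modules present)"
```

Many legitimate packages use these hooks, so any output needs manual review rather than automatic deletion.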
By 9:30 AM UTC, we had identified the malicious packages, deleted them, and revoked the tokens used to publish them. We also began the process of rolling all potentially compromised credentials pre-emptively, although we had not at the time established how our own npm credentials had been compromised (we have now, details below).
The attack only affected our JavaScript SDKs published on npm. The most relevant compromised packages and versions were:
posthog-node 4.18.1, 5.13.3 and 5.11.3
posthog-js 1.297.3
posthog-react-native 4.11.1
posthog-docusaurus 2.0.6
posthog-react-native-session-replay@1.2.2
@posthog/agent@1.24.1
@posthog/ai@7.1.2
@posthog/cli@0.5.15
What should you do?
If you are using the script (snippet) version of PostHog, you were not affected: the worm spread via the preinstall step that runs when dependencies are installed on your development/CI/production machines.
If you are using one of our JavaScript SDKs, our recommendations are to:
Look for the malicious files locally, in your home folder, or your document roots:
Terminal
find . -name "setup_bun.js" \
  -o -name "bun_environment.js" \
  -o -name "cloud.json" \
  -o -name "contents.json" \
  -o -name "environment.json" \
  -o -name "truffleSecrets.json"
Check npm logs for suspicious entries:
Terminal
grep -R "shai" ~/.npm/_logs
grep -R "preinstall" ~/.npm/_logs
Delete any cached dependencies:
Terminal
rm -rf node_modules
npm cache clean --force
pnpm cache delete
Pin any dependencies to a known-good version (in our case, all the latest published versions, which have been published after we identified the attack, are known-good), and then reinstall your dependencies.
We also suggest you make use of the minimumReleaseAge setting present both in yarn and pnpm. By setting this to a high enough value (like 3 days), you can make sure you won't be hit by these vulnerabilities before researchers, package managers, and library maintainers have the chance to wipe the malicious packages.
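For pnpm, this setting lives in pnpm-workspace.yaml (or the corresponding .npmrc key). A sketch, assuming a recent pnpm release that supports the option; check your version's documentation for the exact key and units:

```yaml
# pnpm-workspace.yaml — sketch, assuming a pnpm version with minimumReleaseAge support.
# pnpm documents the value in minutes; 4320 minutes ≈ 3 days.
minimumReleaseAge: 4320
```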
How did it happen?
PostHog's own package publishing credentials were not compromised by the worm described above. We were targeted directly, as were a number of other major vendors, to act as a "patient zero" for this attack.
The first step the attacker took was to steal the Github Personal Access Token of one of our bots, and then use that to steal the rest of the Github secrets available in our CI runners, which included this npm token. These steps were done days before the attack on the 24th of November.
At 5:40 PM on November 18th, now-deleted user brwjbowkevj opened a pull request against our posthog repository, including this commit. This PR changed the code of a script executed by a workflow we were running against external contributions, modifying it to send the secrets available during that script's execution to a webhook controlled by the attacker. These secrets included the GitHub Personal Access Token of one of our bots, which had broad repo write permissions across our organization. The PR itself was deleted along with the fork it came from when the user was deleted, but the commit was not.
The PR was opened, the workflow run, and the PR closed within the space of 1 minute (screenshots include timestamps in UTC+2, the author's timezone):
initial PR logs
At 3:28 PM UTC on November 23rd, the attacker used these credentials to delete a workflow run. We believe this was a test, to see if the stolen credentials were still valid (it was successful).
At 3:43 PM, the attacker used these credentials again to create another commit that, by chance, masqueraded as this post's author (we believe the attacker picked a branch at random, and the author simply happened to be the last legitimate contributor to it; the author holds no special privileges on his GitHub account).
This commit was pushed directly as a detached commit, not as part of a pull request. In it, the attacker modified an arbitrary Lint PR workflow to exfiltrate all of our GitHub secrets. Unlike the earlier PR attack, which could only modify the script called from the workflow and so could only exfiltrate our bot PAT, this commit was made with full write access to our repository via the ultra-permissive PAT, which meant the attacker could run arbitrary code within the scope of our GitHub Actions runners.
With that done, the attacker was able to run their modified workflow, and did so at 3:45 PM UTC:
Follow up commit workflow runs
The principal associated with these workflow actions is posthog-bot, our GitHub bot user, whose PAT was stolen in the initial PR. We were only able to identify this specific commit as the pivot after the fact, using GitHub audit logs, because the attacker deleted the workflow run after it completed.
At this point, the attacker had our npm publishing token, and 12 hours later, at 4:11 AM UTC the following morning, published the malicious packages to npm, starting the worm.
As noted, PostHog was not the only vendor used as an initial vector for this broader attack. We expect other vendors will be able to identify similar attack patterns in their own audit logs.
Why did it happen?
PostHog is proudly open-source, and that means a lot of our repositories frequently receive external contributions (thank you).
For external contributions, we want to automatically assign reviewers depending on which parts of our codebase the contribution changed. GitHub's CODEOWNERS file is typically used for this, but we want the review to be a "soft" requirement, rather than blocking the PR for internal contributors who might be working on code they don't own.
We had a workflow, auto-assign-reviewers.yaml, which was supposed to do this, but it never really worked for external contributions, since it required manual approval, defeating the purpose of automatically tagging the right people without manual intervention.
One of our engineers figured out this was because it triggered on: pull_request which means external contributions (which come from forks, rather than branches in the repo like internal contributions) would not have the workflow automatically run. The fix for this was changing the trigger to be on: pull_request_target, which runs the workflow as it's defined in the PR target repo/branch, and is therefore considered safe to auto-run.
Our engineer opened a PR to make this change, and also make some fixes to the script, including checking out the current branch, rather than the PR base branch, so that the diffing would work properly. This change seemed safe, as our understanding of on: pull_request_target was, roughly, "ok, this runs the code as it is in master/the target repo".
This was a dangerous misconception, for a few reasons:
on: pull_request_target only ensures the workflow is being run as defined in the PR target, not the code being run - that's controlled by the checkout step.
This particular workflow executed code from within the repo - a script called assign-reviewers.js, which was initially developed for internal (and crucially, trusted) auto-assignment, but was now being used for external assignment too.
The workflow was modified to manually check out the git commit of the PR head, rather than the PR base, so that the diffing would work correctly for external contributions, but this meant the code being run was controlled by the PR author.
These pieces together meant it was possible for a pull request which modified assign-reviewers.js to run arbitrary code, within the context of a trusted CI run, and therefore steal our bot token.
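Put together, the vulnerable pattern looked roughly like this. This is an illustrative reconstruction, not PostHog's actual workflow file; only the script name assign-reviewers.js comes from the post:

```yaml
# Illustrative reconstruction — not the actual workflow file.
name: Auto-assign reviewers
on: pull_request_target        # workflow *definition* comes from the base repo,
                               # and it runs with the base repo's secrets...
jobs:
  assign:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          # ...but this checks out the PR head, so the *code* is attacker-controlled
          ref: ${{ github.event.pull_request.head.sha }}
      - run: node .github/scripts/assign-reviewers.js  # attacker can modify this file in their PR
```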
Why did this workflow change get merged? Honestly, security is unintuitive.
The engineer making the change thought pull_request_target ensured that the version of assign-reviewers.js being executed, a script stored in .github/scripts in the repository, would be the one on master, rather than the one in the PR.
The engineer reviewing the PR thought the same.
None of us noticed the security hole in the month and a half between the PR being merged and the attack (the PR making this change was merged on the 11th of September). This workflow change was even flagged by one of our static analysis tools before merge, but we explicitly dismissed the alert because we mistakenly thought our usage was safe.
Workflow rules, triggers and execution contexts are hard to reason about - so hard to reason about that Github is actively making changes to make them simpler and closer to our understanding above. Although, in our case, these changes would not have protected us against the initial attack.
Notably, we identified copycat attacks on the following day attempting to leverage the same vulnerability, and while we prevented those, we had to take frustratingly manual and uncertain measures to do so. The changes Github is making to the behaviour of pull_request_target would have prevented those copycats automatically for us.
How are we preventing it from happening again?
This is the largest and most impactful security incident we've ever had. We feel terrible about it, and we're doing everything we can to prevent something like this from happening again.
I won't enumerate all the process and posture changes we're implementing here, beyond saying:
We've significantly tightened our package release workflows (moving to the trusted publisher model).
Increased the scrutiny any PR modifying a workflow file gets (requiring a specific review from someone on our security team).
Switched to pnpm 10 (to disable preinstall/postinstall scripts and use minimumReleaseAge).
Re-worked our Github secrets management to make our response to incidents like this faster and more robust.
PostHog is, in many of our engineers' minds, first and foremost a data company. We've grown a lot in the last few years, and throughout that time our focus has always been on data security - ensuring the data you send us is safe, that our cloud environments are secure, and that we never expose personal information. This kind of attack, being leveraged as an initial vector for an ecosystem-wide worm, simply wasn't something we'd prepared for.
At a higher level, we've started to take broad security a lot more seriously, even prior to this incident. In July, we hired Tom P, who's been fully dedicated to improving our overall security posture. Both our incident response and the analysis in this post-mortem simply wouldn't have been possible without the tools and practices he's put in place, and while there's a huge amount still to do, we feel good about the progress we're making. We have to do better here, and we feel confident we will.
Given the prominence of this attack and our desire to take this work seriously, we wanted to use this as a chance to say that if you'd like to work in our security team, and write post-mortems like these (or, better still, write analysis like this about attacks you stopped), we're always looking for new talent. Email tom.p at posthog dot com, or apply directly on our careers page.
europol.europa.eu | Europol
From 24 to 28 November 2025, Europol supported an action week conducted by law enforcement authorities from Switzerland and Germany in Zurich, Switzerland. The operation focused on taking down the illegal cryptocurrency mixing service ‘Cryptomixer’, which is suspected of facilitating cybercrime and money laundering.
[Seizure banner: "OP Olympia - this domain has been seized"]
Three servers were seized in Switzerland, along with the cryptomixer.io domain. The operation resulted in the confiscation of over 12 terabytes of data and more than EUR 25 million worth of the cryptocurrency Bitcoin. After the illegal service was taken over and shut down, law enforcement placed a seizure banner on the website.
A service to obfuscate the origin of criminal funds
Cryptomixer was a hybrid mixing service accessible via both the clear web and the dark web. It facilitated the obfuscation of criminal funds for ransomware groups, underground economy forums and dark web markets. Its software broke the traceability of funds on the blockchain, making it the platform of choice for cybercriminals seeking to launder illegal proceeds from criminal activities such as drug trafficking, weapons trafficking, ransomware attacks, and payment card fraud. Since its creation in 2016, over EUR 1.3 billion in Bitcoin has been mixed through the service.
Deposited funds from various users were pooled for a long and randomised period before being redistributed to destination addresses, again at random times. As many digital currencies provide a public ledger of all transactions, mixing services make it difficult to trace specific coins, thus concealing the origin of cryptocurrency.
Mixing services such as Cryptomixer offer their clients anonymity and are often used before criminals redirect their laundered assets to cryptocurrency exchanges. This allows ‘cleaned’ cryptocurrency to be exchanged for other cryptocurrencies or for fiat currency through cash machines or bank accounts.
Europol’s support
Europol facilitated the exchange of information in the framework of the Joint Cybercrime Action Taskforce (J-CAT), which is hosted at Europol’s headquarters in The Hague, the Netherlands. One of Europol’s priorities is to act as a broker of law enforcement knowledge, providing a hub through which Member States can connect and benefit from one another’s and Europol’s expertise.
Throughout the operation, the agency provided crucial support, including coordinating the involved partners and hosting operational meetings. On the action day, Europol’s cybercrime experts provided on-the-spot support and forensic assistance.
In March 2023, Europol already supported the takedown of the largest mixing service at that time, ‘Chipmixer’.
Participating countries:
Germany: Federal Criminal Police Office (Bundeskriminalamt); Prosecutor General’s Office Frankfurt am Main, Cyber Crime Centre (Generalstaatsanwaltschaft Frankfurt am Main, Zentralstelle zur Bekämpfung der Internet- und Computerkriminalität)
Switzerland: Zurich City Police (Stadtpolizei Zürich); Zurich Cantonal Police (Kantonspolizei Zürich); Public Prosecutor‘s Office Zurich (Staatsanwaltschaft Zürich)