Unit 42 researchers have discovered that the Muddled Libra group now actively targets software-as-a-service (SaaS) applications and cloud service provider (CSP) environments. Organizations often store a variety of data in SaaS applications and use services from CSPs. The threat actors have begun leveraging some of this data to advance their attacks and to support extortion when monetizing their operations.
In a previous blog post we described a process injection vulnerability affecting all AppKit-based macOS applications. This research was presented at Black Hat USA 2022, DEF CON 30 and Objective by the Sea v5. This vulnerability was actually the second universal process injection vulnerability we reported to Apple, but it was fixed earlier than the first. Because it shared some parts of the exploit chain with the first one, there were a few steps we had to skip in the earlier post and the presentations. Now that the first vulnerability has been fixed in macOS 13.0 (Ventura) and improved in macOS 14.0 (Sonoma), we can detail the first one and thereby fill in the blanks of the previous post.
This vulnerability was independently found by Adam Chester and written up here under the name “DirtyNIB”. While the exploit chain Adam demonstrated bears strong similarities to ours, our attacks trigger automatically and do not require the user to click a button, making them considerably stealthier. Therefore, we decided to publish our own version of this write-up as well.
In February 2024, Rapid7’s vulnerability research team identified two new vulnerabilities affecting JetBrains TeamCity CI/CD server:
Group-IB, a leading creator of cybersecurity technologies to investigate, prevent, and fight digital crime, has uncovered a new iOS Trojan designed to steal users’ facial recognition data and identity documents, and to intercept SMS. The Trojan, dubbed GoldPickaxe.iOS by Group-IB’s Threat Intelligence unit, has been attributed to a Chinese-speaking threat actor codenamed GoldFactory, responsible for developing a suite of highly sophisticated banking Trojans that also includes the previously discovered GoldDigger and the newly identified GoldDiggerPlus, GoldKefu, and GoldPickaxe for Android. To exploit the stolen biometric data, the threat actor uses AI face-swapping services to create deepfakes, replacing their own faces with those of the victims. Cybercriminals could use this method to gain unauthorized access to victims’ banking accounts, a fraud technique previously unseen by Group-IB researchers. The GoldFactory Trojans target the Asia-Pacific region, specifically Thailand and Vietnam, impersonating local banks and government organizations.
Group-IB’s discovery also marks a rare instance of malware targeting Apple’s mobile operating system. The detailed technical description of the Trojans, analysis of their technical capabilities, and the list of relevant indicators of compromise can be found in Group-IB’s latest blog post.
The Qualys Threat Research Unit (TRU) has recently unearthed four significant vulnerabilities in the GNU C Library, a cornerstone for countless applications in the Linux environment.
Before diving into the specific details of the vulnerabilities discovered by the Qualys Threat Research Unit in the GNU C Library, it’s crucial to understand the broader impact and importance of these findings. The GNU C Library, or glibc, is an essential component of virtually every Linux-based system, serving as the core interface between applications and the Linux kernel. The recent discovery of these vulnerabilities is not just a technical concern but a matter of widespread security implications.
It’s been one year since the launch of ChatGPT, and since that time, the market has seen astonishing advancement of large language models (LLMs). Despite the pace of development continuing to outpace model security, enterprises are beginning to deploy LLM-powered applications. Many rely on guardrails implemented by model developers to prevent LLMs from responding to sensitive prompts. However, even with the considerable time and effort spent by the likes of OpenAI, Google, and Meta, these guardrails are not resilient enough to protect enterprises and their users today. Concerns surrounding model risk, biases, and potential adversarial exploits have come to the forefront.
As the use of ChatGPT and other artificial intelligence (AI) technologies becomes more widespread, it is important to consider the possible risks associated with their use. One of the main concerns surrounding these technologies is the potential for malicious use, such as in the development of malware or other harmful software. Our recent reports discussed how cybercriminals are misusing the advanced capabilities of large language models (LLMs):
- We discussed how ChatGPT can be abused to scale manual and time-consuming processes in cybercriminals’ attack chains in virtual kidnapping schemes.
- We also reported on how this tool can be used to automate certain processes in harpoon whaling attacks to discover “signals” or target categories.