Cybercriminals use GhostGPT, an uncensored AI chatbot, for malware creation, BEC scams, and more. Learn about the risks and how AI fights back.
Introduction
Telegram, as previously reported by KELA, is a popular and legitimate messaging platform that has evolved over the past few years into a major hub for cybercriminal activity. Its lack of strict content moderation has made it a playground for cybercriminals, who use the platform to distribute stolen data and hacking tools, publicize their […]
Eight individuals have been arrested as part of an ongoing international crackdown on cybercrime, dealing a major blow to criminal operations in Côte d’Ivoire and Nigeria.
The arrests were made as part of INTERPOL’s Operation Contender 2.0, an initiative aimed at combating cyber-enabled crimes, primarily in West Africa, through enhanced international intelligence sharing.
Phishing scam targets Swiss citizens
In Côte d’Ivoire, authorities dismantled a large-scale phishing scam thanks to a collaborative effort with Swiss police and INTERPOL.
Organizations that get relieved of credentials to their cloud environments can quickly find themselves part of a disturbing new trend: Cybercriminals using stolen cloud credentials to operate and resell sexualized AI-powered chat services. Researchers say these illicit chat bots, which…
Abuse by cybercriminals
Cobalt Strike is a popular commercial tool provided by the cybersecurity software company Fortra. It is designed to help legitimate IT security experts perform attack simulations that identify weaknesses in security operations and incident response. In the wrong hands, however, unlicensed copies of Cobalt Strike can provide a malicious actor with a wide range of attack capabilities. Fortra...
Explore the shift in phishing from the dark web to Telegram, where cybercriminals trade tools and data, and uncover Guardio's insights on countering this menace.
At the end of November 2022, OpenAI released ChatGPT, the new interface for its Large Language Model (LLM), which instantly created a flurry of interest in AI and its possible uses. However, ChatGPT has also added some spice to the modern cyber threat landscape as it quickly became apparent that code generation can help less-skilled threat actors effortlessly launch cyberattacks.
In Check Point Research’s (CPR) previous blog, we described how ChatGPT successfully conducted a full infection flow, from creating a convincing spear-phishing email to running a reverse shell, capable of accepting commands in English. The question at hand is whether this is just a hypothetical threat or if there are already threat actors using OpenAI technologies for malicious purposes.
CPR’s analysis of several major underground hacking communities shows the first instances of cybercriminals using OpenAI to develop malicious tools. As we suspected, some of these cases make clear that many of the cybercriminals using OpenAI have no development skills at all. Although the tools presented in this report are fairly basic, it is only a matter of time until more sophisticated threat actors enhance the way they use AI-based tools for malicious ends.