For much of March 20th, ChatGPT displayed a giant orange warning at the top of its interface stating that it was unable to load chat history.
At the end of November 2022, OpenAI released ChatGPT, the new chat interface for its Large Language Model (LLM), which instantly created a flurry of interest in AI and its possible uses. However, ChatGPT has also added some spice to the modern cyber threat landscape, as it quickly became apparent that its code-generation capabilities can help less-skilled threat actors effortlessly launch cyberattacks.
In Check Point Research’s (CPR) previous blog, we described how ChatGPT successfully generated a full infection flow, from crafting a convincing spear-phishing email to running a reverse shell capable of accepting commands in English. The question at hand is whether this is just a hypothetical threat or whether threat actors are already using OpenAI technologies for malicious purposes.
CPR’s analysis of several major underground hacking communities shows that there are already first instances of cybercriminals using OpenAI to develop malicious tools. As we suspected, some of these cases clearly showed that many of the cybercriminals using OpenAI have no development skills at all. Although the tools presented in this report are pretty basic, it’s only a matter of time until more sophisticated threat actors enhance the way they use AI-based tools for malicious ends.
OpenAI Chat has exploded in popularity over the last couple of weeks, and people are using it to do all sorts of interesting things. If you are unfamiliar with OpenAI Chat and GPT-3, you can find a primer here. The gist is that it’s an artificial intelligence model you can chat with as if it were a person. It can do all kinds of things, like answer questions, write code, find bugs in code, and more. It also remembers context, so you can refer to something you already mentioned and it is able to follow along. I thought this could be a useful tool for building email phishing campaigns in my pentesting work, so I decided to try it out and see what I could get it to do.
Ever since OpenAI launched ChatGPT at the end of November, commentators on all sides have been concerned about the impact AI-driven content creation will have, particularly in the realm of cybersecurity. In fact, many researchers are concerned that generative AI solutions will democratize cybercrime.