As the use of ChatGPT and other artificial intelligence (AI) technologies becomes more widespread, it is important to consider the risks associated with them. One of the main concerns surrounding these technologies is their potential for malicious use, such as in the development of malware or other harmful software. Our recent reports discussed how cybercriminals are misusing the advanced capabilities of large language models (LLMs):
- We discussed how ChatGPT can be abused to scale the manual, time-consuming steps in cybercriminals’ attack chains for virtual kidnapping schemes.
- We also reported on how the tool can be used to automate parts of harpoon whaling attacks, such as discovering “signals” or target categories.