Researchers warn that polymorphic malware created with ChatGPT and other LLMs will force a reinvention of security automation.
The rise of publicly accessible AI models like ChatGPT has produced some interesting attempts to create malware. How seriously should defenders take them?