The number of victims affected by a mass-ransomware attack, caused by a bug in a popular data transfer tool used by businesses around the world, continues to grow as another organization tells TechCrunch that it was also hacked.
The City of Toronto told TechCrunch in a revised statement on March 23: “Today, the City of Toronto has confirmed that unauthorized access to City data did occur through a third party vendor. The access is limited to files that were unable to be processed through the third party secure file transfer system.”
Large Language Models (LLM) have made amazing progress in recent years. Most recently, they have demonstrated to answer natural language questions at a surprising performance level. In addition, by clever prompting, these models can change their behavior. In this way, these models blur the line between data and instruction. From "traditional" cybersecurity, we know that this is a problem. The importance of security boundaries between trusted and untrusted inputs for LLMs was underestimated. We show that Prompt Injection is a serious security threat that needs to be addressed as models are deployed to new use-cases and interface with more systems.
[PDF DOC] https://arxiv.org/pdf/2302.12173.pdf