Since mid-April 2024, Microsoft has observed an increase in defense evasion tactics used in campaigns abusing file hosting services like SharePoint, OneDrive, and Dropbox. These campaigns use sophisticated techniques to perform social engineering, evade detection, and compromise identities, and include business email compromise (BEC) attacks.
Like many companies, Dropbox has been experimenting with large language models (LLMs) as a potential backend for product and research initiatives. As interest in leveraging LLMs has increased in recent months, the Dropbox Security team has been advising on measures to harden internal Dropbox infrastructure for secure usage in accordance with our AI principles. In particular, we’ve been working to mitigate abuse of potential LLM-powered products and features via user-controlled input.
We were recently the target of a phishing campaign that successfully accessed some of the code we store in GitHub. No one’s content, passwords, or payment information was accessed, and the issue was quickly resolved. Our core apps and infrastructure were also unaffected, as access to this code is even more limited and strictly controlled. We believe the risk to customers is minimal. Because we take our commitment to security, privacy, and transparency seriously, we have notified those affected and are sharing more here.