Discord says that approximately 70,000 users may have had their government ID photos exposed as part of a data breach of a third-party service.
Discord has identified approximately 70,000 users who may have had their government ID photos exposed as part of a customer service data breach announced last week, spokesperson Nu Wexler tells The Verge. A tweet by vx-underground said that the company was being extorted over a breach of its Zendesk instance by a group claiming to have “1.5TB of age verification related photos. 2,185,151 photos.”
When we asked about the tweet, Wexler shared this statement:
Following last week’s announcement about a security incident involving a third-party customer service provider, we want to address inaccurate claims by those responsible that are circulating online. First, as stated in our blog post, this was not a breach of Discord, but rather a third-party service we use to support our customer service efforts. Second, the numbers being shared are incorrect and part of an attempt to extort a payment from Discord. Of the accounts impacted globally, we have identified approximately 70,000 users that may have had government-ID photos exposed, which our vendor used to review age-related appeals. Third, we will not reward those responsible for their illegal actions.
All affected users globally have been contacted and we continue to work closely with law enforcement, data protection authorities, and external security experts. We’ve secured the affected systems and ended work with the compromised vendor. We take our responsibility to protect your personal data seriously and understand the concern this may cause.
In its announcement last week, Discord said that information such as names, usernames, email addresses, the last four digits of credit card numbers, and IP addresses may also have been exposed in the breach.
Cybersecurity workers can also start creating their own Security Copilot AI agents.
Microsoft is launching a Security Store that will be full of security software-as-a-service (SaaS) solutions and AI agents. It’s part of a broader effort to sell Microsoft’s Sentinel security platform to businesses, complete with Microsoft Security Copilot AI agents that can be built by security teams to help tackle the latest threats.
The Microsoft Security Store is a storefront designed for security professionals to buy and deploy SaaS solutions and AI agents from Microsoft’s ecosystem partners. Darktrace, Illumio, Netskope, Performanta, and Tanium are all part of the new store, with solutions covering threat protection, identity and device management, and more.
A lot of the solutions will integrate with Microsoft Defender, Sentinel, Entra, Purview, or Security Copilot, making them quick to onboard for businesses that are fully reliant on Microsoft for their security needs. This should cut down on procurement and onboarding times, too.
Alongside the Security Store, Microsoft is also allowing Security Copilot users to build their own AI agents. Microsoft launched some of its own security AI agents earlier this year, and now security teams can use a tool that’s similar to Copilot Studio to build their own. You simply create an AI agent through a set of prompts and then publish it, all with no code required. These Security Copilot agents will also be available in the Security Store today.