Threat actors are advancing their AI strategies faster than traditional security can adapt. CXOs must critically examine how AI is being weaponized across the attack chain.
The integration of AI into adversarial operations is fundamentally reshaping the speed, scale and sophistication of attacks. As AI defense capabilities evolve, so do the AI strategies and tools leveraged by threat actors, creating a rapidly shifting threat landscape that outpaces traditional detection and response methods. This accelerating evolution demands that CXOs critically examine how threat actors will strategically weaponize AI across each phase of the attack chain.
One of the most alarming shifts we have seen since the introduction of AI technologies is the dramatic drop in mean time to exfiltrate (MTTE), the time from initial access to data exfiltration. In 2021, the average MTTE stood at nine days. According to our Unit 42 2025 Global Incident Response Report, by 2024 the average had fallen to two days, and in one in five cases the time from compromise to exfiltration was less than one hour.
In our testing, Unit 42 was able to simulate a ransomware attack, from initial compromise to data exfiltration, in just 25 minutes by using AI at every stage of the attack chain. Against the 2024 average of two days, that is roughly a 100x increase in speed, powered entirely by AI.
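As a quick sanity check on that figure, here is a minimal Python sketch using only the two numbers cited above (the two-day 2024 average MTTE and the 25-minute simulated attack):

```python
# Rough speedup calculation based on the figures cited above.
avg_mtte_minutes = 2 * 24 * 60   # 2024 average MTTE of two days, in minutes (2,880)
simulated_minutes = 25           # Unit 42's AI-assisted simulated attack

speedup = avg_mtte_minutes / simulated_minutes
print(f"Speedup: {speedup:.0f}x")  # ~115x, i.e. on the order of 100x
```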
Recent threat activity observed by Unit 42 has highlighted how adversaries are leveraging AI in attacks:
A Chinese startup, Sand AI, appears to be blocking certain politically sensitive images from its online video generation tool.
A China-based startup, Sand AI, has released an openly licensed, video-generating AI model that’s garnered praise from entrepreneurs like Kai-Fu Lee, the founding director of Microsoft Research Asia. But Sand AI appears to be censoring the hosted version of its model to block images that might raise the ire of Chinese regulators, according to TechCrunch’s testing.
Earlier this week, Sand AI announced Magi-1, a model that generates videos by “autoregressively” predicting sequences of frames. The company claims the model can generate high-quality, controllable footage that captures physics more accurately than rival open models.
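For context, "autoregressive" here means that each new chunk of frames is predicted conditioned on all the frames generated so far. The Python sketch below is a hypothetical illustration of that loop, not Sand AI's code; model_predict_chunk and its parameters are assumptions made purely for illustration:

```python
# Hypothetical sketch of autoregressive video generation: each chunk of
# frames is predicted from the frames generated so far.
from typing import List

def model_predict_chunk(context: List[bytes], chunk_size: int) -> List[bytes]:
    """Stand-in for a learned model returning the next `chunk_size` frames."""
    return [b"frame"] * chunk_size  # placeholder output

def generate_video(num_frames: int, chunk_size: int = 8) -> List[bytes]:
    frames: List[bytes] = []
    while len(frames) < num_frames:
        # Condition each new chunk on the full sequence generated so far.
        frames.extend(model_predict_chunk(frames, chunk_size))
    return frames[:num_frames]

print(len(generate_video(num_frames=24)))  # 24
```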
A new attack technique named Policy Puppetry can bypass the safety guardrails of major gen-AI models and coax them into producing harmful outputs.
Combined with AI, polymorphic phishing emails have become highly sophisticated: attackers generate personalized, continually reworded variants of the same lure that evade signature-based filters and achieve higher success rates, as the sketch below illustrates.
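To see why polymorphism defeats exact-match filtering, note that two reworded variants of the same lure hash differently yet remain measurably similar. This is a generic Python illustration with invented sample texts, not a Unit 42 detection method:

```python
# Two polymorphic variants of one phishing lure: exact-match signatures fail,
# but token-level similarity still links them. Sample texts are invented.
import hashlib
import string

variant_a = "Your invoice #4471 is overdue. Review the attached statement today."
variant_b = "Invoice #4471 is past due. Please review the statement attached."

def digest(text: str) -> str:
    return hashlib.sha256(text.encode()).hexdigest()[:12]

def tokens(text: str) -> set:
    # Lowercase and strip punctuation so rewording, not formatting, drives the score.
    return {w.strip(string.punctuation) for w in text.lower().split()}

def jaccard(a: str, b: str) -> float:
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb)

print(digest(variant_a), digest(variant_b))                   # different digests
print(f"token overlap: {jaccard(variant_a, variant_b):.2f}")  # ~0.54 similarity
```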
A self-contained AI system engineered for offensive cyber operations, Xanthorox AI, has surfaced on darknet forums and encrypted channels.
Introduced in late Q1 2025, it marks a shift in the threat landscape with its autonomous, modular structure designed to support large-scale, highly adaptive cyber-attacks.
Built entirely on private servers, Xanthorox avoids using public APIs or cloud services, significantly reducing its visibility and traceability.
By leveraging Microsoft Security Copilot to expedite the vulnerability discovery process, Microsoft Threat Intelligence uncovered several vulnerabilities in multiple open-source bootloaders, impacting all operating systems that rely on Unified Extensible Firmware Interface (UEFI) Secure Boot, as well as IoT devices. The vulnerabilities, found in the GRUB2 bootloader (commonly used as a Linux bootloader) and in the U-Boot and Barebox bootloaders (commonly used in embedded systems), could allow threat actors to execute arbitrary code.
The FBI is warning the public that criminals are exploiting generative artificial intelligence (AI) to commit fraud on a larger scale and to increase the believability of their schemes. Generative AI reduces the time and effort criminals must expend to deceive their targets: it takes what it has learned from examples input by a user and synthesizes something entirely new from that information. These tools assist with content creation and can correct the human errors that might otherwise serve as warning signs of fraud. The creation or distribution of synthetic content is not inherently illegal; however, synthetic content can be used to facilitate crimes such as fraud and extortion. Since it can be difficult to identify when content is AI-generated, the FBI is providing examples of how criminals may use generative AI in their fraud schemes to increase public recognition and scrutiny.
MITRE’s AI Incident Sharing initiative helps organizations submit and receive data on real-world AI incidents.
Non-profit technology and R&D company MITRE has introduced a new mechanism that enables organizations to share intelligence on real-world AI-related incidents.
Developed in collaboration with more than 15 companies, the new AI Incident Sharing initiative aims to increase community knowledge of threats and defenses involving AI-enabled systems.
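MITRE has published few details about the submission format itself, so the following is a purely hypothetical Python sketch of what a structured, shareable AI incident record might look like; every field name here is an assumption, not MITRE's actual schema:

```python
# Hypothetical AI incident record for community sharing.
# Field names are illustrative assumptions, not MITRE's actual schema.
from dataclasses import dataclass, asdict
import json

@dataclass
class AIIncidentReport:
    incident_id: str
    affected_system: str      # the AI-enabled system involved
    technique_observed: str   # e.g., prompt injection, data poisoning
    impact_summary: str
    anonymized: bool = True   # sharing programs typically strip attribution

report = AIIncidentReport(
    incident_id="2025-0001",
    affected_system="customer-support chatbot",
    technique_observed="prompt injection",
    impact_summary="Model disclosed internal policy text to an external user.",
)
print(json.dumps(asdict(report), indent=2))
```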
Olga Loiek, a University of Pennsylvania student, was looking for an audience on the internet – just not like this.
Shortly after launching a YouTube channel in November last year, Loiek, a 21-year-old from Ukraine, found her image had been taken and spun through artificial intelligence to create alter egos on Chinese social media platforms.
Her digital doppelgangers - like "Natasha" - claimed to be Russian women fluent in Chinese who wanted to thank China for its support of Russia and make a little money on the side selling products such as Russian candies.
A NewsGuard audit found that chatbots repeated misinformation originating from American fugitive John Mark Dougan.