Cyberveillecurated by Decio
Nuage de tags
Mur d'images
Quotidien
Flux RSS
  • Flux RSS
  • Daily Feed
  • Weekly Feed
  • Monthly Feed
Filtres

Liens par page

  • 20 links
  • 50 links
  • 100 links

Filtres

Untagged links
6 résultats taggé LLMs  ✕
When LLMs autonomously attack https://engineering.cmu.edu/news-events/news/2025/07/24-when-llms-autonomously-attack.html
17/08/2025 17:49:46
QRCode
archive.org
thumbnail

engineering.cmu.edu - College of Engineering at Carnegie Mellon University - Carnegie Mellon researchers show how LLMs can be taught to autonomously plan and execute real-world cyberattacks against enterprise-grade network environments—and why this matters for future defenses.

In a groundbreaking development, a team of Carnegie Mellon University researchers has demonstrated that large language models (LLMs) are capable of autonomously planning and executing complex network attacks, shedding light on emerging capabilities of foundation models and their implications for cybersecurity research.

The project, led by Ph.D. candidate Brian SingerOpens in new window, a Ph.D. candidate in electrical and computer engineering (ECE)Opens in new window, explores how LLMs—when equipped with structured abstractions and integrated into a hierarchical system of agents—can function not merely as passive tools, but as active, autonomous red team agents capable of coordinating and executing multi-step cyberattacks without detailed human instruction.

“Our research aimed to understand whether an LLM could perform the high-level planning required for real-world network exploitation, and we were surprised by how well it worked,” said Singer. “We found that by providing the model with an abstracted ‘mental model’ of network red teaming behavior and available actions, LLMs could effectively plan and initiate autonomous attacks through coordinated execution by sub-agents.”

Moving beyond simulated challenges
Prior work in this space had focused on how LLMs perform in simplified “capture-the-flag” (CTF) environments—puzzles commonly used in cybersecurity education.

Singer’s research advances this work by evaluating LLMs in realistic enterprise network environments and considering sophisticated, multi-stage attack plans.

Using state-of-the-art, reasoning-capable LLMs equipped with common knowledge of computer security tools failed miserably at the challenges. However, when these same LLMs and smaller LLMs as well were “taught” a mental model and abstraction of security attack orchestration, they showed dramatic improvement.

Rather than requiring the LLM to execute raw shell commands—often a limiting factor in prior studies—this system provides the LLM with higher-level decision-making capabilities while delegating low-level tasks to a combination of LLM and non-LLM agents.

Experimental evaluation: The Equifax case
To rigorously evaluate the system’s capabilities, the team recreated the network environment associated with the 2017 Equifax data breachOpens in new window—a massive security failure that exposed the personal data of nearly 150 million Americans—by incorporating the same vulnerabilities and topology documented in Congressional reports. Within this replicated environment, the LLM autonomously planned and executed the attack sequence, including exploiting vulnerabilities, installing malware, and exfiltrating data.

“The fact that the model was able to successfully replicate the Equifax breach scenario without human intervention in the planning loop was both surprising and instructive,” said Singer. “It demonstrates that, under certain conditions, these models can coordinate complex actions across a system architecture.”

Implications for security testing and autonomous defense
While the findings underscore potential risks associated with LLM misuse, Singer emphasized the constructive applications for organizations seeking to improve security posture.

“Right now, only big companies can afford to run professional tests on their networks via expensive human red teams, and they might only do that once or twice a year,” he explained. “In the future, AI could run those tests constantly, catching problems before real attackers do. That could level the playing field for smaller organizations.”

The research team features Singer, Keane LucasOpens in new window of AnthropicOpens in new window and a CyLabOpens in new window alumnus, Lakshmi AdigaOpens in new window, an undergraduate ECE student, Meghna Jain, a master’s ECE student, Lujo BauerOpens in new window of ECE and the CMU Software and Societal Systems Department (S3D)Opens in new window, and Vyas SekarOpens in new window of ECE. Bauer and Sekar are co-directors of the CyLab Future Enterprise Security InitiativeOpens in new window, which supported the students involved in this research.

engineering.cmu.edu EN 2025 AnthropicOpens CarnegieMellon LLMs LLM autonomously attack
MCP Prompt Injection: Not Just For Evil https://www.tenable.com/blog/mcp-prompt-injection-not-just-for-evil
04/05/2025 13:54:57
QRCode
archive.org
thumbnail

MCP tools are implicated in several new attack techniques. Here's a look at how they can be manipulated for good, such as logging tool usage and filtering unauthorized commands.

Over the last few months, there has been a lot of activity in the Model Context Protocol (MCP) space, both in terms of adoption as well as security. Developed by Anthropic, MCP has been rapidly gaining traction across the AI ecosystem. MCP allows Large Language Models (LLMs) to interface with tools and for those interfaces to be rapidly created. MCP tools allow for the rapid development of “agentic” systems, or AI systems that autonomously perform tasks.

Beyond adoption, new attack techniques have been shown to allow prompt injection via MCP tool descriptions and responses, MCP tool poisoning, rug pulls and more.

Prompt Injection is a weakness in LLMs that can be used to elicit unintended behavior, circumvent safeguards and produce potentially malicious responses. Prompt injection occurs when an attacker instructs the LLM to disregard other rules and do the attacker’s bidding. In this blog, I show how to use techniques similar to prompt injection to change the LLM’s interaction with MCP tools. Anyone conducting MCP research may find these techniques useful.

tenable EN 2025 MCP Prompt-Injection LLM LLMs technique interface vulnerability research
Keeping GenAI technologies secure is a shared responsibility https://blog.mozilla.org/en/mozilla/keeping-genai-technologies-secure-is-a-shared-responsibility/
09/06/2024 14:49:08
QRCode
archive.org
thumbnail

Today, we are investing in the next generation of GenAI security with the 0Day Investigative Network (0Din) by Mozilla, a bug bounty program for large language models (LLMs) and other deep learning technologies. 0Din expands the scope to identify and fix GenAI security by delving beyond the application layer with a focus on emerging classes of vulnerabilities and weaknesses in these new generations of models.

mozilla EN BugBounty LLMs 0Din GenAI
Using AI to Automatically Jailbreak GPT-4 and Other LLMs in Under a Minute https://www.robustintelligence.com/blog-posts/using-ai-to-automatically-jailbreak-gpt-4-and-other-llms-in-under-a-minute
09/12/2023 12:12:17
QRCode
archive.org
thumbnail

It’s been one year since the launch of ChatGPT, and since that time, the market has seen astonishing advancement of large language models (LLMs). Despite the pace of development continuing to outpace model security, enterprises are beginning to deploy LLM-powered applications. Many rely on guardrails implemented by model developers to prevent LLMs from responding to sensitive prompts. However, even with the considerable time and effort spent by the likes of OpenAI, Google, and Meta, these guardrails are not resilient enough to protect enterprises and their users today. Concerns surrounding model risk, biases, and potential adversarial exploits have come to the forefront.

robustintelligence EN AI Jailbreak GPT-4 chatgpt hacking LLMs research
Don’t you (forget NLP): Prompt injection with control characters in ChatGPT https://dropbox.tech/machine-learning/prompt-injection-with-control-characters-openai-chatgpt-llm
04/08/2023 09:47:15
QRCode
archive.org
thumbnail

Like many companies, Dropbox has been experimenting with large language models (LLMs) as a potential backend for product and research initiatives. As interest in leveraging LLMs has increased in recent months, the Dropbox Security team has been advising on measures to harden internal Dropbox infrastructure for secure usage in accordance with our AI principles. In particular, we’ve been working to mitigate abuse of potential LLM-powered products and features via user-controlled input.

dropbox EN 2023 ChatGPT LLMs prompt-injection
ChatGPT creates mutating malware that evades detection by EDR https://www.csoonline.com/article/3698516/chatgpt-creates-mutating-malware-that-evades-detection-by-edr.html
07/06/2023 19:56:49
QRCode
archive.org
thumbnail

A global sensation since its initial release at the end of last year, ChatGPT's popularity among consumers and IT professionals alike has stirred up cybersecurity nightmares about how it can be used to exploit system vulnerabilities. A key problem, cybersecurity experts have demonstrated, is the ability of ChatGPT and other large language models (LLMs) to generate polymorphic, or mutating, code to evade endpoint detection and response (EDR) systems.

csoonline EN 2023 ChatGPT LLMs EDR BlackMamba
4737 links
Shaarli - The personal, minimalist, super-fast, database free, bookmarking service par la communauté Shaarli - Theme by kalvn - Curated by Decio