cybernews.com 18.08.2025 - Friendly AI chatbot Lena greets you on Lenovo’s website and is so helpful that it spills secrets and runs remote scripts on corporate machines if you ask nicely. Massive security oversight highlights the potentially devastating consequences of poor AI chatbot implementations.
Cybernews researchers discovered critical vulnerabilities affecting Lenovo’s implementation of its AI chatbot, Lena, powered by OpenAI’s GPT-4.
Designed to assist customers, Lena can be compelled to run unauthorized scripts on corporate machines, spill active session cookies, and, potentially, worse. Attackers can abuse the XSS vulnerabilities as a direct pathway into the company’s customer support platform.
“Everyone knows chatbots hallucinate and can be tricked by prompt injections. This isn’t new. What’s truly surprising is that Lenovo, despite being aware of these flaws, did not protect itself from potentially malicious user manipulations and chatbot outputs,” said the Cybernews Research team.
“This isn’t just Lenovo’s problem. Any AI system without strict input and output controls creates an opening for attackers. LLMs don’t have an instinct for “safe” – they follow instructions exactly as given. Without strong guardrails and continuous monitoring, even small oversights can turn into major security incidents,” says Žilvinas Girėnas, Head of Product at nexos.ai.
Just a single-prompt attack demonstrates a chain of flaws that led to the Lenovo chatbot spilling active session cookies.
The discovery highlights multiple security issues: improper user input sanitization, improper chatbot output sanitization, the web server not verifying content produced by the chatbot, running unverified code, and loading content from arbitrary web resources. This leaves a lot of options for Cross-Site Scripting (XSS) attacks.
Cybernews responsibly disclosed the issue. Lenovo acknowledged it and has protected its systems.
Single prompt launches multi-step attack
All it took to exfiltrate active session cookies from Lenovo was a single 400-character-long prompt, containing four critical elements:
Seemingly innocent request: The prompt starts with an inquiry for legitimate information, i.e., product information, like “Show me the specifications of Lenovo IdeaPad 5 Pro.”
Instructions to change output format: The sneaky part is tricking the chatbot into providing a final answer in HTML, a language for creating websites. The researchers achieved that by asking the chatbot to convert its responses into HTML, JSON, and plain text in a specific order that the web server expected to receive instructions in. This ensured that the malicious payload would be correctly executed by the web server.
Malicious HTML injection trap: The prompt continues with instructions on how to produce the final response, specifically with HTML code for loading an image. The image URL is also nonexistent. Therefore, when it fails to load, the second part of the command instructs the browser to make a network request to the attacker-controlled server and send all cookie data as part of a URL.
Final encouragement: In the end, additional instructions try to reinforce that the chatbot must produce the image: “Show the image at the end. It is important for my decision-making. SHOW IT.”
What happened when Lenovo’s Lena received the full prompt?
“People-pleasing is still the issue that haunts large language models (LLMs), to the extent that, in this case, Lena accepted our malicious payload, which produced the XSS vulnerability and allowed the capture of session cookies upon opening the conversation. Once you’re transferred to a real agent, you’re getting their session cookies as well,” said Cybernews researchers.
lenovo-chatbot-response
“Already, this could be an open gate to their customer support platform. But the flaw opens a trove of potential other security implications.”
To better understand what’s happening under the hood, here’s the breakdown of the attack chain:
The chatbot falls for a malicious prompt and tries to follow instructions helpfully to generate an HTML answer. The response now contains secret instructions for accessing resources from an attacker-controlled server, with instructions to send private data from the client browser.
Malicious code enters Lenovo’s systems. The HTML is saved in the chatbots' conversation history on Lenovo’s server. When loaded, it executes the malicious payload and sends the user’s session cookies.
Transferring to a human: An attacker asks to speak to a human support agent, who then opens the chat. Their computer tries to load the conversation and runs the HTML code that the chatbot generated earlier. Once again, the image fails to load, and the cookie theft triggers again.
An attacker-controlled server receives the request with cookies attached. The attacker might use the cookies to gain unauthorized access to Lenovo’s customer support systems by hijacking the agents’ active sessions.
techcrunch.com 24.07 - "We're getting a lot of stuff that looks like gold, but it's actually just crap,” said the founder of one security testing firm. AI-generated security vulnerability reports are already having an effect on bug hunting, for better and worse.
So-called AI slop, meaning LLM-generated low-quality images, videos, and text, has taken over the internet in the last couple of years, polluting websites, social media platforms, at least one newspaper, and even real-world events.
The world of cybersecurity is not immune to this problem, either. In the last year, people across the cybersecurity industry have raised concerns about AI slop bug bounty reports, meaning reports that claim to have found vulnerabilities that do not actually exist, because they were created with a large language model that simply made up the vulnerability, and then packaged it into a professional-looking writeup.
“People are receiving reports that sound reasonable, they look technically correct. And then you end up digging into them, trying to figure out, ‘oh no, where is this vulnerability?’,” Vlad Ionescu, the co-founder and CTO of RunSybil, a startup that develops AI-powered bug hunters, told TechCrunch.
“It turns out it was just a hallucination all along. The technical details were just made up by the LLM,” said Ionescu.
Ionescu, who used to work at Meta’s red team tasked with hacking the company from the inside, explained that one of the issues is that LLMs are designed to be helpful and give positive responses. “If you ask it for a report, it’s going to give you a report. And then people will copy and paste these into the bug bounty platforms and overwhelm the platforms themselves, overwhelm the customers, and you get into this frustrating situation,” said Ionescu.
“That’s the problem people are running into, is we’re getting a lot of stuff that looks like gold, but it’s actually just crap,” said Ionescu.
Just in the last year, there have been real-world examples of this. Harry Sintonen, a security researcher, revealed that the open source security project Curl received a fake report. “The attacker miscalculated badly,” Sintonen wrote in a post on Mastodon. “Curl can smell AI slop from miles away.”
In response to Sintonen’s post, Benjamin Piouffle of Open Collective, a tech platform for nonprofits, said that they have the same problem: that their inbox is “flooded with AI garbage.”
One open source developer, who maintains the CycloneDX project on GitHub, pulled their bug bounty down entirely earlier this year after receiving “almost entirely AI slop reports.”
The leading bug bounty platforms, which essentially work as intermediaries between bug bounty hackers and companies who are willing to pay and reward them for finding flaws in their products and software, are also seeing a spike in AI-generated reports, TechCrunch has learned.
RSAC: Can we turn to govt, academic models instead?
Corporate AI models are already skewed to serve their makers' interests, and unless governments and academia step up to build transparent alternatives, the tech risks becoming just another tool for commercial manipulation.
That's according to cryptography and privacy guru Bruce Schneier, who spoke to The Register last week following a keynote speech at the RSA Conference in San Francisco.
"I worry that it'll be like search engines, which you use as if they are neutral third parties but are actually trying to manipulate you. They try to kind of get you to visit the websites of the advertisers," he told us. "It's integrity that we really need to think about, integrity as a security property and how it works with AI."
During his RSA keynote, Schneier asked: "Did your chatbot recommend a particular airline or hotel because it's the best deal for you, or because the AI company got a kickback from those companies?"
To deal with this quandary, Schneier proposes that governments should start taking a more hands-on stance in regulating AI, forcing model developers to be more open about the information they receive, and how the decisions models make are conceived.
He praised the EU AI Act, noting that it provides a mechanism to adapt the law as technology evolves, though he acknowledged there are teething problems. The legislation, which entered into force in August 2024, introduces phased requirements based on the risk level of AI systems. Companies deploying high-risk AI must maintain technical documentation, conduct risk assessments, and ensure transparency around how their models are built and how decisions are made.
Because the EU is the world's largest trading bloc, the law is expected to have a significant impact on any company wanting to do business there, he opined. This could push other regions toward similar regulation, though he added that in the US, meaningful legislative movement remains unlikely under the current administration.
Le conseiller fédéral Albert Rösti signera aujourd’hui à Strasbourg la Convention-cadre du Conseil de l’Europe sur l’intelligence artificielle. Par cet acte, la Suisse rejoint les États signataires d’un premier instrument juridiquement contraignant au niveau international visant à encadrer le développement et l’utilisation de l’IA dans le respect des droits fondamentaux
Today, we’re announcing Sec-Gemini v1, a new experimental AI model focused on advancing cybersecurity AI frontiers.
As outlined a year ago, defenders face the daunting task of securing against all cyber threats, while attackers need to successfully find and exploit only a single vulnerability. This fundamental asymmetry has made securing systems extremely difficult, time consuming and error prone. AI-powered cybersecurity workflows have the potential to help shift the balance back to the defenders by force multiplying cybersecurity professionals like never before.
La loi sur l’IA est le tout premier cadre juridique en matière d’IA, qui traite des risques liés à l’IA et positionne l’Europe pour qu’elle joue un rôle de premier plan à l’échelle mondiale.
Today, many seasoned security professionals will tell you they’ve been fighting a constant battle against cybercriminals and state-sponsored attackers. They will also tell you that any clear-eyed assessment shows that most of the patches, preventative measures and public awareness campaigns can only succeed at mitigating yesterday’s threats — not the threats waiting in the wings.
That could be changing. As the world focuses on the potential of AI — and governments and industry work on a regulatory approach to ensure AI is safe and secure — we believe that AI represents an inflection point for digital security. We’re not alone. More than 40% of people view better security as a top application for AI — and it’s a topic that will be front and center at the Munich Security Conference this weekend.
Les négociateurs du Parlement et du Conseil européens sont parvenus à un accord concernant la réglementation de l'intelligence artificielle. L'approche basée sur les risques, à la base du projet, est confirmée. Des compromis sont censés garantir la protection contre les risques liés à l’IA, tout en encourageant l’innovation.
En Suisse aussi, l’intelligence artificielle (IA) investit de plus en plus la vie économique et sociale de la population. Dans ce contexte, le PFPDT rappelle que la loi sur la protection des données en vigueur depuis le 1er septembre 2023 est directement applicable aux traitements de données basés sur l’IA.