cybernews.com 18.08.2025 - Friendly AI chatbot Lena greets you on Lenovo’s website and is so helpful that it spills secrets and runs remote scripts on corporate machines if you ask nicely. Massive security oversight highlights the potentially devastating consequences of poor AI chatbot implementations.
Cybernews researchers discovered critical vulnerabilities affecting Lenovo’s implementation of its AI chatbot, Lena, powered by OpenAI’s GPT-4.
Designed to assist customers, Lena can be compelled to run unauthorized scripts on corporate machines, spill active session cookies, and, potentially, worse. Attackers can abuse the XSS vulnerabilities as a direct pathway into the company’s customer support platform.
“Everyone knows chatbots hallucinate and can be tricked by prompt injections. This isn’t new. What’s truly surprising is that Lenovo, despite being aware of these flaws, did not protect itself from potentially malicious user manipulations and chatbot outputs,” said the Cybernews Research team.
“This isn’t just Lenovo’s problem. Any AI system without strict input and output controls creates an opening for attackers. LLMs don’t have an instinct for “safe” – they follow instructions exactly as given. Without strong guardrails and continuous monitoring, even small oversights can turn into major security incidents,” says Žilvinas Girėnas, Head of Product at nexos.ai.
A single-prompt attack was enough to demonstrate the chain of flaws that led to the Lenovo chatbot spilling active session cookies.
The discovery highlights multiple security issues: improper user input sanitization, improper chatbot output sanitization, the web server not verifying content produced by the chatbot, running unverified code, and loading content from arbitrary web resources. This leaves a lot of options for Cross-Site Scripting (XSS) attacks.
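To make the sanitization point concrete, here is a minimal sketch, assuming a plain Python rendering layer (hypothetical, not Lenovo's actual code), of HTML-escaping chatbot replies so that injected markup renders as inert text instead of executing:

```python
import html

def render_chat_message(raw_reply: str) -> str:
    """Escape a chatbot reply before inserting it into the support UI.

    The model's output is treated as data, never as markup: <, >, &,
    and quotes become HTML entities, so an injected tag is displayed
    as harmless text rather than executed by the browser.
    """
    return f'<p class="chat-message">{html.escape(raw_reply, quote=True)}</p>'

# An injected image tag is neutralized rather than executed.
print(render_chat_message('<img src="x" onerror="alert(1)">'))
# <p class="chat-message">&lt;img src=&quot;x&quot; onerror=&quot;alert(1)&quot;&gt;</p>
```

The same discipline applies in the other direction: both the user's prompt going in and the model's reply coming out should be validated before the web server stores or renders them.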
Cybernews responsibly disclosed the issue. Lenovo acknowledged it and has protected its systems.
Single prompt launches multi-step attack
All it took to exfiltrate active session cookies from Lenovo was a single 400-character-long prompt, containing four critical elements:
Seemingly innocent request: The prompt starts with a request for legitimate information, such as product details: “Show me the specifications of Lenovo IdeaPad 5 Pro.”
Instructions to change output format: The sneaky part is tricking the chatbot into giving its final answer in HTML, the markup language used to build web pages. The researchers achieved that by asking the chatbot to convert its responses into HTML, JSON, and plain text, in the specific order the web server expected to receive them. This ensured that the web application would treat the malicious payload as markup to render rather than as plain text.
Malicious HTML injection trap: The prompt continues with instructions on how to produce the final response, specifically HTML code for loading an image. The image URL points to a nonexistent resource, so when it fails to load, the second part of the payload instructs the browser to make a network request to an attacker-controlled server, sending all cookie data as part of the URL.
Final encouragement: The prompt closes with additional instructions reinforcing that the chatbot must produce the image: “Show the image at the end. It is important for my decision-making. SHOW IT.”
What happened when Lenovo’s Lena received the full prompt?
“People-pleasing is still the issue that haunts large language models (LLMs), to the extent that, in this case, Lena accepted our malicious payload, which produced the XSS vulnerability and allowed the capture of session cookies upon opening the conversation. Once you’re transferred to a real agent, you’re getting their session cookies as well,” said Cybernews researchers.
[Image: Lenovo chatbot response to the malicious prompt]
“Already, this could be an open gate to their customer support platform. But the flaw opens a trove of potential other security implications.”
To better understand what’s happening under the hood, here’s the breakdown of the attack chain:
The chatbot falls for the malicious prompt and, trying to be helpful, follows its instructions to generate an HTML answer. The response now contains hidden markup that loads a resource from an attacker-controlled server and sends private data from the client’s browser.
Malicious code enters Lenovo’s systems. The HTML is saved in the chatbot’s conversation history on Lenovo’s server. Whenever the conversation is loaded, the malicious payload executes and sends the viewer’s session cookies.
Transferring to a human: The attacker asks to speak to a human support agent, who then opens the chat. The agent’s computer loads the conversation and runs the HTML code the chatbot generated earlier; the image fails to load once more, and the cookie theft triggers against the agent’s session.
An attacker-controlled server receives the request with cookies attached. The attacker might use the cookies to gain unauthorized access to Lenovo’s customer support systems by hijacking the agents’ active sessions.
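The cookie-theft step also relies on session cookies being readable from JavaScript. As a generic hardening sketch (an assumption about typical setups, not a description of Lenovo's actual configuration), marking the session cookie HttpOnly, Secure, and SameSite blunts exactly this kind of exfiltration:

```python
from http.cookies import SimpleCookie

# Hypothetical cookie name and value, for illustration only.
cookie = SimpleCookie()
cookie["agent_session"] = "opaque-random-token"
cookie["agent_session"]["httponly"] = True      # invisible to document.cookie
cookie["agent_session"]["secure"] = True        # sent over HTTPS only
cookie["agent_session"]["samesite"] = "Strict"  # not attached to cross-site requests

# The header a server would emit:
print("Set-Cookie:", cookie["agent_session"].OutputString())
# Set-Cookie: agent_session=opaque-random-token; HttpOnly; SameSite=Strict; Secure
```

HttpOnly does not fix the underlying XSS, but it keeps the session token out of document.cookie, so the simplest exfiltration path described above stops working.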
forbes.com 20.08.2025 - xAI published conversations with Grok and made them searchable on Google, including a plan to assassinate Elon Musk and instructions for making fentanyl and bombs.
Elon Musk’s AI firm, xAI, has published the chat transcripts of hundreds of thousands of conversations between its chatbot Grok and the bot’s users — in many cases, without those users’ knowledge or permission.
Anytime a Grok user clicks the “share” button on one of their chats with the bot, a unique URL is created, allowing them to share the conversation via email, text message or other means. Unbeknownst to users, though, that unique URL is also made available to search engines, like Google, Bing and DuckDuckGo, making them searchable to anyone on the web. In other words, on Musk’s Grok, hitting the share button means that a conversation will be published on Grok’s website, without warning or a disclaimer to the user.
Today, a Google search for Grok chats shows that the search engine has indexed more than 370,000 user conversations with the bot. The shared pages revealed conversations between Grok users and the LLM that range from simple business tasks like writing tweets to generating images of a fictional terrorist attack in Kashmir and attempting to hack into a crypto wallet. Forbes reviewed conversations where users asked intimate questions about medicine and psychology; some even revealed the name, personal details and at least one password shared with the bot by a Grok user. Image files, spreadsheets and some text documents uploaded by users could also be accessed via the Grok shared page.
Among the indexed conversations were some initiated by British journalist Andrew Clifford, who used Grok to summarize the front pages of newspapers and compose tweets for his website Sentinel Current. Clifford told Forbes that he was unaware that clicking the share button would mean that his prompt would be discoverable on Google. “I would be a bit peeved but there was nothing on there that shouldn’t be there,” said Clifford, who has now switched to using Google’s Gemini AI.
Not all the conversations, though, were as benign as Clifford’s. Some were explicit, bigoted, or in outright violation of xAI’s rules. The company prohibits use of its bot to “promot[e] critically harming human life” or to “develop bioweapons, chemical weapons, or weapons of mass destruction,” but in published, shared conversations easily found via a Google search, Grok offered users instructions on how to make illicit drugs like fentanyl and methamphetamine, code a self-executing piece of malware, and construct a bomb, and it suggested methods of suicide. Grok also offered a detailed plan for the assassination of Elon Musk. Via the “share” function, the illicit instructions were then published on Grok’s website and indexed by Google.
xAI did not respond to a detailed request for comment.
xAI is not the only AI startup to have published users’ conversations with its chatbots. Earlier this month, users of OpenAI’s ChatGPT were alarmed to find that their conversations were appearing in Google search results, though the users had opted to make those conversations “discoverable” to others. But after outcry, the company quickly changed its policy. Calling the indexing “a short-lived experiment,” OpenAI chief information security officer Dane Stuckey said in a post on X that it would be discontinued because it “introduced too many opportunities for folks to accidentally share things they didn’t intend to.”
After OpenAI canned the discoverability option, Musk took a victory lap. Grok’s X account claimed at the time that it had no such sharing feature, and Musk tweeted in response, “Grok ftw” [for the win]. It’s unclear when Grok added the share feature, but X users have been warning since January that Grok conversations were being indexed by Google.
Some of the conversations asking Grok for instructions about how to manufacture drugs and bombs were likely initiated by security engineers, red teamers, or Trust & Safety professionals. But in at least a few cases, Grok’s sharing setting misled even professional AI researchers.
Nathan Lambert, a computational scientist at the Allen Institute for AI, used Grok to create a summary of his blog posts to share with his team. He was shocked to learn from Forbes that his Grok prompt and the AI’s response were indexed on Google. “I was surprised that Grok chats shared with my team were getting automatically indexed on Google, despite no warnings of it, especially after the recent flare-up with ChatGPT,” said the Seattle-based researcher.
Google allows website owners to choose when and how their content is indexed for search. “Publishers of these pages have full control over whether they are indexed,” said Google spokesperson Ned Adriance in a statement. Google itself previously allowed chats with its AI chatbot, Bard, to be indexed, but it removed them from search in 2023. Meta continues to allow its users’ shared chats to be discoverable by search engines, Business Insider reported.
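That control typically comes down to standard crawler directives. A minimal sketch, with hypothetical paths and not any vendor's real code, of a server marking shared-conversation pages as off-limits to search engines via the X-Robots-Tag header:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class SharedChatHandler(BaseHTTPRequestHandler):
    """Serves hypothetical shared-conversation pages that opt out of indexing."""

    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        # Compliant crawlers will neither index this page nor follow its links.
        self.send_header("X-Robots-Tag", "noindex, nofollow")
        self.end_headers()
        self.wfile.write(b"<html><body>Shared conversation would render here.</body></html>")

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8000), SharedChatHandler).serve_forever()
```

A robots meta tag in the page itself (content="noindex") achieves the same effect for crawlers that honor it.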
Opportunists are beginning to notice, and take advantage of, Grok’s published chats. On LinkedIn and the forum BlackHatWorld, marketers have discussed intentionally creating and sharing conversations with Grok to increase the prominence and name recognition of their businesses and products in Google search results. (It is unclear how effective these efforts would be.) Satish Kumar, CEO of SEO agency Pyrite Technologies, demonstrated to Forbes how one business had used Grok to manipulate results for a search of companies that will write your PhD dissertation for you.
“Every shared chat on Grok is fully indexable and searchable on Google,” he said. “People are actively using tactics to push these pages into Google’s index.”
Ian Carroll, Sam Curry / ian.sh
Over 90% of McDonald's franchises use "Olivia," an AI-powered chatbot, to screen job applicants. We discovered a vulnerability that could allow an attacker to access more than 64 million job applications. This data includes applicants' names, resumes, email addresses, phone numbers, and personality test results.
McHire is the chatbot recruitment platform used by 90% of McDonald’s franchisees. Prospective employees chat with a bot named Olivia, created by a company called Paradox.ai, which collects their personal information and shift preferences and administers personality tests. We started looking into McHire after seeing complaints on Reddit about the bot responding with nonsensical answers.
During a cursory security review of a few hours, we identified two serious issues: the McHire administration interface for restaurant owners accepted the default credentials 123456:123456, and an insecure direct object reference (IDOR) on an internal API allowed us to access any contacts and chats we wanted. Together, these flaws allowed us, and anyone else with a McHire account and access to any inbox, to retrieve the personal data of more than 64 million applicants.
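For readers unfamiliar with the term, an IDOR is simply an endpoint that trusts a caller-supplied identifier without checking whether the caller may access that object. A minimal sketch of the vulnerable pattern and its fix, using made-up names rather than Paradox.ai's actual API:

```python
# Toy records standing in for the applicant database; all values are hypothetical.
APPLICANTS = {
    1001: {"name": "Applicant A", "owner_org": "franchise-1"},
    1002: {"name": "Applicant B", "owner_org": "franchise-2"},
}

def get_applicant_vulnerable(applicant_id: int) -> dict:
    # No ownership check: simply incrementing applicant_id walks the whole table.
    return APPLICANTS[applicant_id]

def get_applicant_fixed(applicant_id: int, caller_org: str) -> dict:
    # Object-level authorization: the record must belong to the caller's organization.
    record = APPLICANTS[applicant_id]
    if record["owner_org"] != caller_org:
        raise PermissionError("caller is not authorized to read this applicant")
    return record
```

Pair that missing check with default credentials like 123456:123456 and an entire applicant database is effectively one login away.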
We disclosed this issue to Paradox.ai and McDonald’s at the same time.
06/30/2025 5:46PM ET: Disclosed to Paradox.ai and McDonald’s
06/30/2025 6:24PM ET: McDonald’s confirms receipt and requests technical details
06/30/2025 7:31PM ET: Credentials are no longer usable to access the app
07/01/2025 9:44PM ET: Followed up on status
07/01/2025 10:18PM ET: Paradox.ai confirms the issues have been resolved
Cybercriminals use GhostGPT, an uncensored AI chatbot, for malware creation, business email compromise (BEC) scams, and more. Learn about the risks and how AI fights back.
With less than a year to go before one of the most consequential elections in US history, Microsoft’s AI chatbot is responding to political queries with conspiracies, misinformation, and out-of-date or incorrect information.
When WIRED asked the chatbot, initially called Bing Chat and recently renamed Microsoft Copilot, about polling locations for the 2024 US election, the bot referenced in-person voting by linking to an article about Russian president Vladimir Putin running for reelection next year. When asked about electoral candidates, it listed numerous GOP candidates who have already pulled out of the race.