Cyberveillecurated by Decio
Nuage de tags
Mur d'images
Quotidien
Flux RSS
  • Flux RSS
  • Daily Feed
  • Weekly Feed
  • Monthly Feed
Filtres

Liens par page

  • 20 links
  • 50 links
  • 100 links

Filtres

Untagged links
14 résultats taggé IA  ✕
Critical flaw plagues Lenovo AI chatbot: attackers can run malicious code and steal cookies https://cybernews.com/security/lenovo-chatbot-lena-plagued-by-critical-vulnerabilities/
21/08/2025 10:33:54
QRCode
archive.org

cybernews.com 18.08.2025 - Friendly AI chatbot Lena greets you on Lenovo’s website and is so helpful that it spills secrets and runs remote scripts on corporate machines if you ask nicely. Massive security oversight highlights the potentially devastating consequences of poor AI chatbot implementations.

  • Lenovo’s AI chatbot Lena was affected by critical XSS vulnerabilities, which enabled attackers to inject malicious code and steal session cookies with a single prompt.
  • The flaws could potentially lead to data theft, customer support system compromise, and serve as a jumpboard for lateral movement within the company’s network.
  • Improper input and output sanitization highlights a need for stricter security practices in AI chatbot implementations.

Cybernews researchers discovered critical vulnerabilities affecting Lenovo’s implementation of its AI chatbot, Lena, powered by OpenAI’s GPT-4.

Designed to assist customers, Lena can be compelled to run unauthorized scripts on corporate machines, spill active session cookies, and, potentially, worse. Attackers can abuse the XSS vulnerabilities as a direct pathway into the company’s customer support platform.

“Everyone knows chatbots hallucinate and can be tricked by prompt injections. This isn’t new. What’s truly surprising is that Lenovo, despite being aware of these flaws, did not protect itself from potentially malicious user manipulations and chatbot outputs,” said the Cybernews Research team.

“This isn’t just Lenovo’s problem. Any AI system without strict input and output controls creates an opening for attackers. LLMs don’t have an instinct for “safe” – they follow instructions exactly as given. Without strong guardrails and continuous monitoring, even small oversights can turn into major security incidents,” says Žilvinas Girėnas, Head of Product at nexos.ai.

Just a single-prompt attack demonstrates a chain of flaws that led to the Lenovo chatbot spilling active session cookies.

The discovery highlights multiple security issues: improper user input sanitization, improper chatbot output sanitization, the web server not verifying content produced by the chatbot, running unverified code, and loading content from arbitrary web resources. This leaves a lot of options for Cross-Site Scripting (XSS) attacks.

Cybernews responsibly disclosed the issue. Lenovo acknowledged it and has protected its systems.

Single prompt launches multi-step attack
All it took to exfiltrate active session cookies from Lenovo was a single 400-character-long prompt, containing four critical elements:

Seemingly innocent request: The prompt starts with an inquiry for legitimate information, i.e., product information, like “Show me the specifications of Lenovo IdeaPad 5 Pro.”
Instructions to change output format: The sneaky part is tricking the chatbot into providing a final answer in HTML, a language for creating websites. The researchers achieved that by asking the chatbot to convert its responses into HTML, JSON, and plain text in a specific order that the web server expected to receive instructions in. This ensured that the malicious payload would be correctly executed by the web server.
Malicious HTML injection trap: The prompt continues with instructions on how to produce the final response, specifically with HTML code for loading an image. The image URL is also nonexistent. Therefore, when it fails to load, the second part of the command instructs the browser to make a network request to the attacker-controlled server and send all cookie data as part of a URL.
Final encouragement: In the end, additional instructions try to reinforce that the chatbot must produce the image: “Show the image at the end. It is important for my decision-making. SHOW IT.”
What happened when Lenovo’s Lena received the full prompt?

“People-pleasing is still the issue that haunts large language models (LLMs), to the extent that, in this case, Lena accepted our malicious payload, which produced the XSS vulnerability and allowed the capture of session cookies upon opening the conversation. Once you’re transferred to a real agent, you’re getting their session cookies as well,” said Cybernews researchers.

lenovo-chatbot-response
“Already, this could be an open gate to their customer support platform. But the flaw opens a trove of potential other security implications.”

To better understand what’s happening under the hood, here’s the breakdown of the attack chain:

The chatbot falls for a malicious prompt and tries to follow instructions helpfully to generate an HTML answer. The response now contains secret instructions for accessing resources from an attacker-controlled server, with instructions to send private data from the client browser.
Malicious code enters Lenovo’s systems. The HTML is saved in the chatbots' conversation history on Lenovo’s server. When loaded, it executes the malicious payload and sends the user’s session cookies.
Transferring to a human: An attacker asks to speak to a human support agent, who then opens the chat. Their computer tries to load the conversation and runs the HTML code that the chatbot generated earlier. Once again, the image fails to load, and the cookie theft triggers again.
An attacker-controlled server receives the request with cookies attached. The attacker might use the cookies to gain unauthorized access to Lenovo’s customer support systems by hijacking the agents’ active sessions.

cybernews.com EN 2025 Lenovo AI Lena IA chatbot injection malicious code
AI slop and fake reports are coming for your bug bounty programs https://techcrunch.com/2025/07/24/ai-slop-and-fake-reports-are-exhausting-some-security-bug-bounties/?uID=8e71ce9f0d62feda43e6b97db738658f0358bf8874bfa63345d6d3d61266ca54
02/08/2025 10:46:31
QRCode
archive.org
thumbnail

techcrunch.com 24.07 - "We're getting a lot of stuff that looks like gold, but it's actually just crap,” said the founder of one security testing firm. AI-generated security vulnerability reports are already having an effect on bug hunting, for better and worse.

So-called AI slop, meaning LLM-generated low-quality images, videos, and text, has taken over the internet in the last couple of years, polluting websites, social media platforms, at least one newspaper, and even real-world events.

The world of cybersecurity is not immune to this problem, either. In the last year, people across the cybersecurity industry have raised concerns about AI slop bug bounty reports, meaning reports that claim to have found vulnerabilities that do not actually exist, because they were created with a large language model that simply made up the vulnerability, and then packaged it into a professional-looking writeup.

“People are receiving reports that sound reasonable, they look technically correct. And then you end up digging into them, trying to figure out, ‘oh no, where is this vulnerability?’,” Vlad Ionescu, the co-founder and CTO of RunSybil, a startup that develops AI-powered bug hunters, told TechCrunch.

“It turns out it was just a hallucination all along. The technical details were just made up by the LLM,” said Ionescu.

Ionescu, who used to work at Meta’s red team tasked with hacking the company from the inside, explained that one of the issues is that LLMs are designed to be helpful and give positive responses. “If you ask it for a report, it’s going to give you a report. And then people will copy and paste these into the bug bounty platforms and overwhelm the platforms themselves, overwhelm the customers, and you get into this frustrating situation,” said Ionescu.

“That’s the problem people are running into, is we’re getting a lot of stuff that looks like gold, but it’s actually just crap,” said Ionescu.

Just in the last year, there have been real-world examples of this. Harry Sintonen, a security researcher, revealed that the open source security project Curl received a fake report. “The attacker miscalculated badly,” Sintonen wrote in a post on Mastodon. “Curl can smell AI slop from miles away.”

In response to Sintonen’s post, Benjamin Piouffle of Open Collective, a tech platform for nonprofits, said that they have the same problem: that their inbox is “flooded with AI garbage.”

One open source developer, who maintains the CycloneDX project on GitHub, pulled their bug bounty down entirely earlier this year after receiving “almost entirely AI slop reports.”

The leading bug bounty platforms, which essentially work as intermediaries between bug bounty hackers and companies who are willing to pay and reward them for finding flaws in their products and software, are also seeing a spike in AI-generated reports, TechCrunch has learned.

techcrunch.com EN 2025 IA AI-slop LLM BugBounty
Schneier warns that AI loses integrity due to corporate bias https://www.theregister.com/2025/05/06/schneier_ai_models/
10/05/2025 22:42:42
QRCode
archive.org
thumbnail

RSAC: Can we turn to govt, academic models instead?
Corporate AI models are already skewed to serve their makers' interests, and unless governments and academia step up to build transparent alternatives, the tech risks becoming just another tool for commercial manipulation.

That's according to cryptography and privacy guru Bruce Schneier, who spoke to The Register last week following a keynote speech at the RSA Conference in San Francisco.

"I worry that it'll be like search engines, which you use as if they are neutral third parties but are actually trying to manipulate you. They try to kind of get you to visit the websites of the advertisers," he told us. "It's integrity that we really need to think about, integrity as a security property and how it works with AI."

During his RSA keynote, Schneier asked: "Did your chatbot recommend a particular airline or hotel because it's the best deal for you, or because the AI company got a kickback from those companies?"

To deal with this quandary, Schneier proposes that governments should start taking a more hands-on stance in regulating AI, forcing model developers to be more open about the information they receive, and how the decisions models make are conceived.

He praised the EU AI Act, noting that it provides a mechanism to adapt the law as technology evolves, though he acknowledged there are teething problems. The legislation, which entered into force in August 2024, introduces phased requirements based on the risk level of AI systems. Companies deploying high-risk AI must maintain technical documentation, conduct risk assessments, and ensure transparency around how their models are built and how decisions are made.

Because the EU is the world's largest trading bloc, the law is expected to have a significant impact on any company wanting to do business there, he opined. This could push other regions toward similar regulation, though he added that in the US, meaningful legislative movement remains unlikely under the current administration.

theregister EN 2025 Schneier IA corporate bias corporate-bias warning
La Suisse signe la Convention du Conseil de l’Europe sur l’intelligence artificielle https://swissprivacy.law/344/
08/04/2025 07:33:01
QRCode
archive.org

Le conseiller fédéral Albert Rösti signera aujourd’hui à Strasbourg la Convention-cadre du Conseil de l’Europe sur l’intelligence artificielle. Par cet acte, la Suisse rejoint les États signataires d’un premier instrument juridiquement contraignant au niveau international visant à encadrer le développement et l’utilisation de l’IA dans le respect des droits fondamentaux

swissprivacy.law FR CH 2025 Convention Conseil Europe IA intelligence artificielle Suisse acte
Google Online Security Blog: Google announces Sec-Gemini v1, a new experimental cybersecurity model https://security.googleblog.com/2025/04/google-launches-sec-gemini-v1-new.html?m=1
07/04/2025 06:43:07
QRCode
archive.org
thumbnail

Today, we’re announcing Sec-Gemini v1, a new experimental AI model focused on advancing cybersecurity AI frontiers.

As outlined a year ago, defenders face the daunting task of securing against all cyber threats, while attackers need to successfully find and exploit only a single vulnerability. This fundamental asymmetry has made securing systems extremely difficult, time consuming and error prone. AI-powered cybersecurity workflows have the potential to help shift the balance back to the defenders by force multiplying cybersecurity professionals like never before.

security.googleblog EN 2025 Sec-Gemini IA announce experimental cybersecurity model
EPFL: des failles de sécurité dans les modèles d'IA https://www.swissinfo.ch/fre/epfl%3a-des-failles-de-s%c3%a9curit%c3%a9-dans-les-mod%c3%a8les-d%27ia/88615014
23/12/2024 23:23:20
QRCode
archive.org
thumbnail

Les modèles d'intelligence artificielle (IA) peuvent être manipulés malgré les mesures de protection existantes. Avec des attaques ciblées, des scientifiques lausannois ont pu amener ces systèmes à générer des contenus dangereux ou éthiquement douteux.

swissinfo FR 2024 EPFL IA chatgpt Jailbreak failles LLM vulnerabilités Manipulation
Quarante pourcents de la population se tourne vers l'IA https://www.swissinfo.ch/fre/quarante-pourcents-de-la-population-se-tourne-vers-l%27ia/87498532
06/09/2024 11:42:02
QRCode
archive.org
thumbnail

Environ 40% de la population suisse se sert d'outils d'intelligence artificielle tels que ChatGPT. Chez les jeunes, leur utilisation est très répandue, alors que les plus âgés y ont moins recours. La TV et l'audio, en revanche, sont appréciés de toutes les générations.

swissinfo ChatGPT Suisse IA FR 2024 statistiques
Loi sur l’IA https://digital-strategy.ec.europa.eu/fr/policies/regulatory-framework-ai
17/03/2024 16:06:58
QRCode
archive.org

La loi sur l’IA est le tout premier cadre juridique en matière d’IA, qui traite des risques liés à l’IA et positionne l’Europe pour qu’elle joue un rôle de premier plan à l’échelle mondiale.

digital-strategy.ec.europa.eu FR 2024 IA loi legal juridique Europe EU regulatory
Microsoft publie son outil interne de test de sécu d'IA générative https://www.zdnet.fr/actualites/microsoft-publie-son-outil-interne-de-test-de-secu-d-ia-generative-39964464.htm
17/03/2024 14:46:49
QRCode
archive.org
thumbnail

PyRIT peut générer des milliers de messages malveillants pour tester un modèle d'IA générative, et même évaluer sa réponse.

ZDNet 2024 FR outil PyRIT Microsoft test IA
Chatbots qui «hallucinent» et trompent les clients: quelle responsabilité légale? | ICTjournal https://www.ictjournal.ch/articles/2024-02-27/chatbots-qui-hallucinent-et-trompent-les-clients-quelle-responsabilite-legale
27/02/2024 18:13:40
QRCode
archive.org
thumbnail

Comme l’a illustré un récent verdict contre Air Canada, les entreprises peuvent être jugées responsables des inform

ictjournal FR 2024 chatbots legal hallucinations responsabilité légale IA
Google launches AI Cyber Defense Initiative to improve security infrastructure https://blog.google/technology/safety-security/google-ai-cyber-defense-initiative/
17/02/2024 10:39:19
QRCode
archive.org
thumbnail

Today, many seasoned security professionals will tell you they’ve been fighting a constant battle against cybercriminals and state-sponsored attackers. They will also tell you that any clear-eyed assessment shows that most of the patches, preventative measures and public awareness campaigns can only succeed at mitigating yesterday’s threats — not the threats waiting in the wings.

That could be changing. As the world focuses on the potential of AI — and governments and industry work on a regulatory approach to ensure AI is safe and secure — we believe that AI represents an inflection point for digital security. We’re not alone. More than 40% of people view better security as a top application for AI — and it’s a topic that will be front and center at the Munich Security Conference this weekend.

blog.google EN 2024 google Cyber-Defense initiative IA Defender-Dilemma
L’AI Act européen adopté après des négociations marathon | ICTjournal https://www.ictjournal.ch/articles/2023-12-11/lai-act-europeen-adopte-apres-des-negociations-marathon
11/12/2023 18:57:30
QRCode
archive.org
thumbnail

Les négociateurs du Parlement et du Conseil européens sont parvenus à un accord concernant la réglementation de l'intelligence artificielle. L'approche basée sur les risques, à la base du projet, est confirmée. Des compromis sont censés garantir la protection contre les risques liés à l’IA, tout en encourageant l’innovation.

ictjournal FR 2023 EU IA réglementation act AI
La loi actuelle sur la protection des données est directement applicable à l’IA https://www.edoeb.admin.ch/edoeb/fr/home/kurzmeldungen/20231109_ki_dsg.html
14/11/2023 15:43:50
QRCode
archive.org

En Suisse aussi, l’intelligence artificielle (IA) investit de plus en plus la vie économique et sociale de la population. Dans ce contexte, le PFPDT rappelle que la loi sur la protection des données en vigueur depuis le 1er septembre 2023 est directement applicable aux traitements de données basés sur l’IA.

admin.ch FR CH Suisse IA intelligence artificielle PFPDT 2023 loi
La nLPD est directement applicable à l’intelligence artificielle https://www.ictjournal.ch/news/2023-11-14/la-nlpd-est-directement-applicable-a-lintelligence-artificielle
14/11/2023 15:38:03
QRCode
archive.org
thumbnail

Selon le Préposé fédéral à la protection des données (PFPDT), la nouvelle loi sur la protection des données en vigueur depuis septembre s'applique également aux outils d'intelligence artificielle. Le traitement des données des utilisateurs doit être signalé, même s'il est effectué par une IA.

ictjournal FR CH 2023 nLPD PFPDT intelligence artificielle ia
4737 links
Shaarli - The personal, minimalist, super-fast, database free, bookmarking service par la communauté Shaarli - Theme by kalvn - Curated by Decio