Cyberveille, curated by Decio
24 results tagged OpenAI
Elon Musk’s xAI Published Hundreds Of Thousands Of Grok Chatbot Conversations https://www.forbes.com/sites/iainmartin/2025/08/20/elon-musks-xai-published-hundreds-of-thousands-of-grok-chatbot-conversations/
20/08/2025 13:48:20

forbes.com 20.08.2025 - xAI published conversations with Grok and made them searchable on Google, including a plan to assassinate Elon Musk and instructions for making fentanyl and bombs.
Elon Musk’s AI firm, xAI, has published the chat transcripts of hundreds of thousands of conversations between its chatbot Grok and the bot’s users — in many cases, without those users’ knowledge or permission.

Anytime a Grok user clicks the “share” button on one of their chats with the bot, a unique URL is created, allowing them to share the conversation via email, text message or other means. Unbeknownst to users, though, that unique URL is also made available to search engines, like Google, Bing and DuckDuckGo, making them searchable to anyone on the web. In other words, on Musk’s Grok, hitting the share button means that a conversation will be published on Grok’s website, without warning or a disclaimer to the user.

Today, a Google search for Grok chats shows that the search engine has indexed more than 370,000 user conversations with the bot. The shared pages revealed conversations between Grok users and the LLM that range from simple business tasks like writing tweets to generating images of a fictional terrorist attack in Kashmir and attempting to hack into a crypto wallet. Forbes reviewed conversations where users asked intimate questions about medicine and psychology; some even revealed the name, personal details and at least one password shared with the bot by a Grok user. Image files, spreadsheets and some text documents uploaded by users could also be accessed via the Grok shared page.

Among the indexed conversations were some initiated by British journalist Andrew Clifford, who used Grok to summarize the front pages of newspapers and compose tweets for his website Sentinel Current. Clifford told Forbes that he was unaware that clicking the share button would mean that his prompt would be discoverable on Google. “I would be a bit peeved but there was nothing on there that shouldn’t be there,” said Clifford, who has now switched to using Google’s Gemini AI.

Not all the conversations, though, were as benign as Clifford’s. Some were explicit, bigoted and violated xAI’s rules. The company prohibits use of its bot to “promot[e] critically harming human life” or to “develop bioweapons, chemical weapons, or weapons of mass destruction,” but in published, shared conversations easily found via a Google search, Grok offered users instructions for making illicit drugs like fentanyl and methamphetamine, coding a self-executing piece of malware, and constructing a bomb, as well as methods of suicide. Grok also offered a detailed plan for the assassination of Elon Musk. Via the “share” function, the illicit instructions were then published on Grok’s website and indexed by Google.

xAI did not respond to a detailed request for comment.

xAI is not the only AI startup to have published users’ conversations with its chatbots. Earlier this month, users of OpenAI’s ChatGPT were alarmed to find that their conversations were appearing in Google search results, though the users had opted to make those conversations “discoverable” to others. But after outcry, the company quickly changed its policy. Calling the indexing “a short-lived experiment,” OpenAI chief information security officer Dane Stuckey said in a post on X that it would be discontinued because it “introduced too many opportunities for folks to accidentally share things they didn’t intend to.”

After OpenAI canned its share feature, Musk took a victory lap. Grok’s X account claimed at the time that it had no such sharing feature, and Musk tweeted in response, “Grok ftw” [for the win]. It’s unclear when Grok added the share feature, but X users have been warning since January that Grok conversations were being indexed by Google.

Some of the conversations asking Grok for instructions about how to manufacture drugs and bombs were likely initiated by security engineers, redteamers, or Trust & Safety professionals. But in at least a few cases, Grok’s sharing setting misled even professional AI researchers.

Nathan Lambert, a computational scientist at the Allen Institute for AI, used Grok to create a summary of his blog posts to share with his team. He was shocked to learn from Forbes that his Grok prompt and the AI’s response was indexed on Google. “I was surprised that Grok chats shared with my team were getting automatically indexed on Google, despite no warnings of it, especially after the recent flare-up with ChatGPT,” said the Seattle-based researcher.

Google allows website owners to choose when and how their content is indexed for search. “Publishers of these pages have full control over whether they are indexed,” said Google spokesperson Ned Adriance in a statement. Google itself previously allowed chats with its AI chatbot, Bard, to be indexed, but it removed them from search in 2023. Meta continues to allow its shared searches to be discoverable by search engines, Business Insider reported.
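
Google’s point that publishers have “full control” over indexing refers to standard, documented crawler controls: a robots.txt rule stops crawling of a path, while a noindex directive keeps a page out of the index. A sketch of what an operator of shared-chat pages could serve (the /share/ path here is illustrative, not any vendor’s actual layout):

```
# robots.txt — ask all crawlers not to fetch shared-chat pages
User-agent: *
Disallow: /share/

# HTTP response header on each shared page — excludes it from the index
X-Robots-Tag: noindex
```

One documented subtlety: Disallow only blocks crawling, so a URL linked from elsewhere can still surface in results without a snippet; for reliable exclusion the page must remain crawlable while serving the noindex header (or an equivalent robots meta tag).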

Opportunists are beginning to notice, and take advantage of, Grok’s published chats. On LinkedIn and the forum BlackHatWorld, marketers have discussed intentionally creating and sharing conversations with Grok to increase the prominence and name recognition of their businesses and products in Google search results. (It is unclear how effective these efforts would be.) Satish Kumar, CEO of SEO agency Pyrite Technologies, demonstrated to Forbes how one business had used Grok to manipulate results for a search of companies that will write your PhD dissertation for you.

“Every shared chat on Grok is fully indexable and searchable on Google,” he said. “People are actively using tactics to push these pages into Google’s index.”

forbes.com EN 2025 Google OpenAI Musk Grok ElonMusk Chatbot xAI AI Conversations data-leak
OpenAI removes ChatGPT feature after private conversations leak to Google search https://venturebeat.com/ai/openai-removes-chatgpt-feature-after-private-conversations-leak-to-google-search/
04/08/2025 16:57:45

venturebeat.com - OpenAI abruptly removed a ChatGPT feature that made conversations searchable on Google, sparking privacy concerns and industry-wide scrutiny of AI data handling.
OpenAI made a rare about-face Thursday, abruptly discontinuing a feature that allowed ChatGPT users to make their conversations discoverable through Google and other search engines. The decision came within hours of widespread social media criticism and represents a striking example of how quickly privacy concerns can derail even well-intentioned AI experiments.

The feature, which OpenAI described as a “short-lived experiment,” required users to actively opt in by sharing a chat and then checking a box to make it searchable. Yet the rapid reversal underscores a fundamental challenge facing AI companies: balancing the potential benefits of shared knowledge with the very real risks of unintended data exposure.
How thousands of private ChatGPT conversations became Google search results
The controversy erupted when users discovered they could search Google using the query “site:chatgpt.com/share” to find thousands of strangers’ conversations with the AI assistant. What emerged painted an intimate portrait of how people interact with artificial intelligence — from mundane requests for bathroom renovation advice to deeply personal health questions and professionally sensitive resume rewrites. (Given the personal nature of these conversations, which often contained users’ names, locations, and private circumstances, VentureBeat is not linking to or detailing specific exchanges.)

“Ultimately we think this feature introduced too many opportunities for folks to accidentally share things they didn’t intend to,” OpenAI’s security team explained on X, acknowledging that the guardrails weren’t sufficient to prevent misuse.

venturebeat.com EN 2025 OpenAI ChatGPT Google feature removed
How I used o3 to find CVE-2025-37899, a remote zeroday vulnerability in the Linux kernel’s SMB implementation https://sean.heelan.io/2025/05/22/how-i-used-o3-to-find-cve-2025-37899-a-remote-zeroday-vulnerability-in-the-linux-kernels-smb-implementation/
26/05/2025 06:43:02

In this post I’ll show you how I found a zeroday vulnerability in the Linux kernel using OpenAI’s o3 model. I found the vulnerability with nothing more complicated than the o3 API – no scaffolding, no agentic frameworks, no tool use.

Recently I’ve been auditing ksmbd for vulnerabilities. ksmbd is “a linux kernel server which implements SMB3 protocol in kernel space for sharing files over network”. I started this project specifically to take a break from LLM-related tool development, but after the release of o3 I couldn’t resist using the bugs I had found in ksmbd as a quick benchmark of o3’s capabilities. In a future post I’ll discuss o3’s performance across all of those bugs, but here we’ll focus on how o3 found a zeroday vulnerability during my benchmarking. The vulnerability it found is CVE-2025-37899 (fix here), a use-after-free in the handler for the SMB ‘logoff’ command. Understanding the vulnerability requires reasoning about concurrent connections to the server, and how they may share various objects in specific circumstances. o3 was able to comprehend this and spot a location where a particular object that is not reference counted is freed while still being accessible by another thread. As far as I’m aware, this is the first public discussion of a vulnerability of that nature being found by an LLM.

Before I get into the technical details, the main takeaway from this post is this: with o3, LLMs have made a leap forward in their ability to reason about code, and if you work in vulnerability research you should start paying close attention. If you’re an expert-level vulnerability researcher or exploit developer, the machines aren’t about to replace you. In fact, it is quite the opposite: they are now at a stage where they can make you significantly more efficient and effective. If you have a problem that can be represented in fewer than 10k lines of code, there is a reasonable chance o3 can either solve it or help you solve it.

Benchmarking o3 using CVE-2025-37778
Let’s first discuss CVE-2025-37778, a vulnerability that I found manually and which I was using as a benchmark for o3’s capabilities when it found the zeroday, CVE-2025-37899.

CVE-2025-37778 is a use-after-free vulnerability. The issue occurs during the Kerberos authentication path when handling a “session setup” request from a remote client. To save us referring to CVE numbers, I will refer to this vulnerability as the “Kerberos authentication vulnerability”.

sean.heelan.io EN 2025 CVE-2025-37899 Linux OpenAI CVE 0-day found implementation o3 vulnerability AI
OpenAI helps spammers plaster 80,000 sites with messages that bypassed filters https://arstechnica.com/security/2025/04/openais-gpt-helps-spammers-send-blast-of-80000-messages-that-bypassed-filters/
11/04/2025 07:33:34

Company didn’t notice its chatbot was being abused for (at least) 4 months.

arstechnica EN 2025 OpenAI chatbot spammers Akirabot
OpenAI launches ChatGPT Gov for U.S. government agencies https://www.cnbc.com/2025/01/28/openai-launches-chatgpt-gov-for-us-government-agencies.html
29/01/2025 08:49:50

OpenAI on Tuesday announced the launch of ChatGPT Gov for government agencies in the U.S. … It allows government agencies, as customers, to feed “non-public, sensitive information” into OpenAI’s models while operating within their own secure hosting environments, OpenAI CPO Kevin Weil told reporters during a briefing Monday.

cnbc EN 2025 US OpenAI ChatGPT government sensitive information
Microsoft moves to disrupt hacking-as-a-service scheme that’s bypassing AI safety measures https://cyberscoop.com/microsoft-generative-ai-lawsuit-hacking/
12/01/2025 20:55:44

The defendants used stolen API keys to gain access to devices and accounts with Microsoft’s Azure OpenAI service, which they then used to generate “thousands” of images that violated content restrictions.

cyberscoop EN 2025 Microsoft hacking-as-a-service stolen API keys images Azure OpenAI
Cybercriminals impersonate OpenAI in large-scale phishing attack https://blog.barracuda.com/2024/10/31/impersonate-openai-steal-data
11/11/2024 11:36:47

Since the launch of ChatGPT, OpenAI has sparked significant interest among both businesses and cybercriminals. While companies are increasingly concerned about whether their existing cybersecurity measures can adequately defend against threats curated with generative AI tools, attackers are finding new ways to exploit them. From crafting convincing phishing campaigns to deploying advanced credential harvesting and malware delivery methods, cybercriminals are using AI to target end users and capitalize on potential vulnerabilities.

Barracuda threat researchers recently uncovered a large-scale OpenAI impersonation campaign targeting businesses worldwide. Attackers targeted their victims with a well-known tactic — they impersonated OpenAI with an urgent message requesting updated payment information to process a monthly subscription.

barracuda EN 2024 phishing ChatGPT OpenAI large-scale impersonation
Disrupting a covert Iranian influence operation https://openai.com/index/disrupting-a-covert-iranian-influence-operation/
17/08/2024 02:49:59

We banned accounts linked to an Iranian influence operation using ChatGPT to generate content focused on multiple topics, including the U.S. presidential campaign. We have seen no indication that this content reached a meaningful audience.

openai EN 2024 chatgpt Iran influence-operation US disrupted report
A Hacker Stole OpenAI Secrets, Raising Fears That China Could, Too https://www.nytimes.com/2024/07/04/technology/openai-hack.html?unlocked_article_code=1.400.uQ1I.v-uMLR6dv6TK&smid=url-share
05/07/2024 08:49:17

Early last year, a hacker gained access to the internal messaging systems of OpenAI, the maker of ChatGPT, and stole details about the design of the company’s A.I. technologies.

The hacker lifted details from discussions in an online forum where employees talked about OpenAI’s latest technologies, according to two people familiar with the incident, but did not get into the systems where the company houses and builds its artificial intelligence.

nytimes EN OpenAI data-leak hacked internal-messaging-systems
OpenAI’s ChatGPT Mac app was storing conversations in plain text https://www.theverge.com/2024/7/3/24191636/openai-chatgpt-mac-app-conversations-plain-text
04/07/2024 07:20:32

OpenAI updated its ChatGPT macOS app on Friday after users discovered it stored conversations insecurely in plain text.

theverge EN 2024 OpenAI chatgpt macOS app plain-text
ChatGPT-4, Mistral, other AI chatbots spread Russian propaganda https://www.axios.com/2024/06/18/ai-chatbots-russian-propaganda
19/06/2024 19:45:48

A NewsGuard audit found that chatbots spewed misinformation from American fugitive John Mark Dougan.

axios EN 2024 AI ChatGPT Google Microsoft OpenAI Misinformation genAI
Former head of NSA joins OpenAI board https://www.theverge.com/2024/6/13/24178079/openai-board-paul-nakasone-nsa-safety
16/06/2024 00:03:43

OpenAI has appointed Paul M. Nakasone, a retired general of the US Army and a former head of the National Security Agency, to its board of directors.

theverge 2024 EN OpenAI NSA Nakasone
OpenAI finds Russian, Chinese propaganda campaigns used its tech https://www.washingtonpost.com/technology/2024/05/30/openai-disinfo-influence-operations-china-russia/?pwapi_token=eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJyZWFzb24iOiJnaWZ0IiwibmJmIjoxNzE3MDQxNjAwLCJpc3MiOiJzdWJzY3JpcHRpb25zIiwiZXhwIjoxNzE4NDIzOTk5LCJpYXQiOjE3MTcwNDE2MDAsImp0aSI6IjZmZmEwZWIxLWJiZDItNDBmMi05ZTQ1LWZjYTI3N2U5ODE0MyIsInVybCI6Imh0dHBzOi8vd3d3Lndhc2hpbmd0b25wb3N0LmNvbS90ZWNobm9sb2d5LzIwMjQvMDUvMzAvb3BlbmFpLWRpc2luZm8taW5mbHVlbmNlLW9wZXJhdGlvbnMtY2hpbmEtcnVzc2lhLyJ9.lZy8-t9Wf1mDTHueMt7j0kCTV8XAifSEbK8hmsBd3bk
31/05/2024 08:02:03

Covert propagandists have already begun using generative artificial intelligence to boost their influence operations.

washingtonpost EN 2024 OpenAI chatgpt China Russia propaganda
OpenAI's chatbot store is filling up with spam https://techcrunch.com/2024/03/20/openais-chatbot-store-is-filling-up-with-spam/?guccounter=1
21/03/2024 17:26:19

When OpenAI CEO Sam Altman announced GPTs, custom chatbots powered by OpenAI's generative AI models, onstage at the company's first-ever developer conference…

techcrunch EN 2024 ai apps chatbots chatgpt gpt-store gpts openai copyright legal spam
Here Come the AI Worms https://www.wired.com/story/here-come-the-ai-worms/
01/03/2024 16:26:09

Security researchers created an AI worm in a test environment that can automatically spread between generative AI agents—potentially stealing data and sending spam emails along the way.

wired EN 2024 artificial-intelligence openai google worm
Disrupting malicious uses of AI by state-affiliated threat actors https://openai.com/blog/disrupting-malicious-uses-of-ai-by-state-affiliated-threat-actors
15/02/2024 14:16:51

We terminated accounts associated with state-affiliated threat actors. Our findings show our models offer only limited, incremental capabilities for malicious cybersecurity tasks.

openai EN 2024 malicious AI chatGPT
The Times Sues OpenAI and Microsoft Over A.I. Use of Copyrighted Work https://www.nytimes.com/2023/12/27/business/media/new-york-times-open-ai-microsoft-lawsuit.html
27/12/2023 18:03:32

Millions of articles from The New York Times were used to train chatbots that now compete with it, the lawsuit said.

nytimes EN 2023 chatgpt legal sued openai Microsoft Copyright chatbots
The EU Just Passed Sweeping New Rules to Regulate AI https://www.wired.com/story/eu-ai-act/
11/12/2023 15:51:09

The European Union agreed on terms of the AI Act, a major new set of rules that will govern the building and use of AI and have major implications for Google, OpenAI, and others racing to develop AI systems.

wired EN 2023 artificial intelligence openai EU legal act ai
Microsoft Temporarily Blocked Internal Access to ChatGPT, Citing Data Concerns https://www.wsj.com/tech/microsoft-temporarily-blocked-internal-access-to-chatgpt-citing-data-concerns-c1ca475d
10/11/2023 09:28:23

The company later restored access to the chatbot, which is owned by OpenAI.

wsj EN 2023 Microsoft Temporarily Blocked ChatGPT OpenAI
OpenAI’s regulatory troubles are just beginning https://www.theverge.com/2023/5/5/23709833/openai-chatgpt-gdpr-ai-regulation-europe-eu-italy
06/05/2023 21:18:35

OpenAI managed to appease Italian data authorities and lift the country’s effective ban on ChatGPT last week, but its fight against European regulators is far from over. 

theverge EN 2023 OpenAI ChatGPT European GDPR
Shaarli - The personal, minimalist, super-fast, database-free bookmarking service by the Shaarli community - Theme by kalvn - Curated by Decio