forbes.com 20.08.2025 - xAI published conversations with Grok and made them searchable on Google, including a plan to assassinate Elon Musk and instructions for making fentanyl and bombs.
Elon Musk’s AI firm, xAI, has published the chat transcripts of hundreds of thousands of conversations between its chatbot Grok and the bot’s users — in many cases, without those users’ knowledge or permission.
Anytime a Grok user clicks the “share” button on one of their chats with the bot, a unique URL is created, allowing them to share the conversation via email, text message or other means. Unbeknownst to users, though, that unique URL is also made available to search engines like Google, Bing and DuckDuckGo, making the conversation searchable by anyone on the web. In other words, on Musk’s Grok, hitting the share button means that a conversation will be published on Grok’s website, without warning or a disclaimer to the user.
Today, a Google search for Grok chats shows that the search engine has indexed more than 370,000 user conversations with the bot. The shared pages revealed conversations between Grok users and the LLM that range from simple business tasks like writing tweets to generating images of a fictional terrorist attack in Kashmir and attempting to hack into a crypto wallet. Forbes reviewed conversations where users asked intimate questions about medicine and psychology; some even revealed the name, personal details and at least one password shared with the bot by a Grok user. Image files, spreadsheets and some text documents uploaded by users could also be accessed via the Grok shared page.
Among the indexed conversations were some initiated by British journalist Andrew Clifford, who used Grok to summarize the front pages of newspapers and compose tweets for his website Sentinel Current. Clifford told Forbes that he was unaware that clicking the share button would mean that his prompt would be discoverable on Google. “I would be a bit peeved but there was nothing on there that shouldn’t be there,” said Clifford, who has now switched to using Google’s Gemini AI.
Not all the conversations, though, were as benign as Clifford’s. Some were explicit, bigoted and violated xAI’s rules. The company prohibits use of its bot to “promot[e] critically harming human life” or to “develop bioweapons, chemical weapons, or weapons of mass destruction,” but in published, shared conversations easily found via a Google search, Grok offered users instructions on how to make illicit drugs like fentanyl and methamphetamine, code a self-executing piece of malware and construct a bomb, as well as methods of suicide. Grok also offered a detailed plan for the assassination of Elon Musk. Via the “share” function, the illicit instructions were then published on Grok’s website and indexed by Google.
xAI did not respond to a detailed request for comment.
xAI is not the only AI startup to have published users’ conversations with its chatbots. Earlier this month, users of OpenAI’s ChatGPT were alarmed to find that their conversations were appearing in Google search results, though the users had opted to make those conversations “discoverable” to others. But after outcry, the company quickly changed its policy. Calling the indexing “a short-lived experiment,” OpenAI chief information security officer Dane Stuckey said in a post on X that it would be discontinued because it “introduced too many opportunities for folks to accidentally share things they didn’t intend to.”
After OpenAI pulled the discoverability option, Musk took a victory lap. Grok’s X account claimed at the time that it had no such sharing feature, and Musk tweeted in response, “Grok ftw” [for the win]. It’s unclear when Grok added the share feature, but X users have been warning since January that Grok conversations were being indexed by Google.
Some of the conversations asking Grok for instructions about how to manufacture drugs and bombs were likely initiated by security engineers, red-teamers or Trust & Safety professionals. But in at least a few cases, Grok’s sharing setting misled even professional AI researchers.
Nathan Lambert, a computational scientist at the Allen Institute for AI, used Grok to create a summary of his blog posts to share with his team. He was shocked to learn from Forbes that his Grok prompt and the AI’s response were indexed on Google. “I was surprised that Grok chats shared with my team were getting automatically indexed on Google, despite no warnings of it, especially after the recent flare-up with ChatGPT,” said the Seattle-based researcher.
Google allows website owners to choose when and how their content is indexed for search. “Publishers of these pages have full control over whether they are indexed,” said Google spokesperson Ned Adriance in a statement. Google itself previously allowed chats with its AI chatbot, Bard, to be indexed, but it removed them from search in 2023. Meta continues to allow its users’ shared chats to be discoverable by search engines, Business Insider reported.
Opportunists are beginning to notice, and take advantage of, Grok’s published chats. On LinkedIn and the forum BlackHatWorld, marketers have discussed intentionally creating and sharing conversations with Grok to increase the prominence and name recognition of their businesses and products in Google search results. (It is unclear how effective these efforts would be.) Satish Kumar, CEO of SEO agency Pyrite Technologies, demonstrated to Forbes how one business had used Grok to manipulate results for a search of companies that will write your PhD dissertation for you.
“Every shared chat on Grok is fully indexable and searchable on Google,” he said. “People are actively using tactics to push these pages into Google’s index.”
venturebeat.com - OpenAI abruptly removed a ChatGPT feature that made conversations searchable on Google, sparking privacy concerns and industry-wide scrutiny of AI data handling.
OpenAI made a rare about-face Thursday, abruptly discontinuing a feature that allowed ChatGPT users to make their conversations discoverable through Google and other search engines. The decision came within hours of widespread social media criticism and represents a striking example of how quickly privacy concerns can derail even well-intentioned AI experiments.
The feature, which OpenAI described as a “short-lived experiment,” required users to actively opt in by sharing a chat and then checking a box to make it searchable. Yet the rapid reversal underscores a fundamental challenge facing AI companies: balancing the potential benefits of shared knowledge with the very real risks of unintended data exposure.
How thousands of private ChatGPT conversations became Google search results
The controversy erupted when users discovered they could search Google using the query “site:chatgpt.com/share” to find thousands of strangers’ conversations with the AI assistant. What emerged painted an intimate portrait of how people interact with artificial intelligence — from mundane requests for bathroom renovation advice to deeply personal health questions and professionally sensitive resume rewrites. (Given the personal nature of these conversations, which often contained users’ names, locations, and private circumstances, VentureBeat is not linking to or detailing specific exchanges.)
“Ultimately we think this feature introduced too many opportunities for folks to accidentally share things they didn’t intend to,” OpenAI’s security team explained on X, acknowledging that the guardrails weren’t sufficient to prevent misuse.
In this post I’ll show you how I found a zeroday vulnerability in the Linux kernel using OpenAI’s o3 model. I found the vulnerability with nothing more complicated than the o3 API – no scaffolding, no agentic frameworks, no tool use.
Recently I’ve been auditing ksmbd for vulnerabilities. ksmbd is “a linux kernel server which implements SMB3 protocol in kernel space for sharing files over network.” I started this project specifically to take a break from LLM-related tool development, but after the release of o3 I couldn’t resist using the bugs I had found in ksmbd as a quick benchmark of o3’s capabilities. In a future post I’ll discuss o3’s performance across all of those bugs, but here we’ll focus on how o3 found a zeroday vulnerability during my benchmarking. The vulnerability it found is CVE-2025-37899 (fix here), a use-after-free in the handler for the SMB ‘logoff’ command. Understanding the vulnerability requires reasoning about concurrent connections to the server, and how they may share various objects in specific circumstances. o3 was able to comprehend this and spot a location where a particular object that is not reference counted is freed while still being accessible by another thread. As far as I’m aware, this is the first public discussion of a vulnerability of that nature being found by an LLM.
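To make the bug class concrete, here is a minimal sketch of that pattern in plain C. This is not the real ksmbd code; the type and function names (struct session, struct user_info, handle_logoff, handle_other_command) are illustrative assumptions. It only shows the shape of the race: one worker thread frees a session-owned object that is not reference counted while a second worker thread, servicing another connection bound to the same session, can still dereference it.

```c
/* Minimal sketch of the bug class behind CVE-2025-37899, NOT the real
 * ksmbd code. All names here are illustrative assumptions. */
#include <stdio.h>
#include <stdlib.h>

struct user_info {
    char name[32];
};

struct session {
    /* Shared by every connection bound to this session and, crucially,
     * not reference counted. */
    struct user_info *user;
};

/* Worker thread A: handles the SMB 'logoff' command for one connection. */
void handle_logoff(struct session *sess)
{
    free(sess->user);   /* the user object is freed here...            */
    sess->user = NULL;  /* ...but a thread that has already loaded the
                         * pointer still holds a dangling reference    */
}

/* Worker thread B: services a request on another connection bound to
 * the same session, racing with the logoff above. */
void handle_other_command(struct session *sess)
{
    struct user_info *user = sess->user;         /* read before/while A frees it */
    if (user)
        printf("request from %s\n", user->name); /* use-after-free if A won the race */
}
```

Spotting this requires holding the lifetimes of objects touched by two concurrent request handlers in mind at once, which is exactly the kind of cross-handler reasoning described above.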
Before I get into the technical details, the main takeaway from this post is this: with o3, LLMs have made a leap forward in their ability to reason about code, and if you work in vulnerability research you should start paying close attention. If you’re an expert-level vulnerability researcher or exploit developer, the machines aren’t about to replace you. In fact, it is quite the opposite: they are now at a stage where they can make you significantly more efficient and effective. If you have a problem that can be represented in fewer than 10k lines of code, there is a reasonable chance o3 can either solve it or help you solve it.
Benchmarking o3 using CVE-2025-37778
Let’s first discuss CVE-2025-37778, a vulnerability that I found manually and which I was using as a benchmark for o3’s capabilities when it found the zeroday, CVE-2025-37899.
CVE-2025-37778 is a use-after-free vulnerability. The issue occurs during the Kerberos authentication path when handling a “session setup” request from a remote client. To save us referring to CVE numbers, I will refer to this vulnerability as the “kerberos authentication vulnerability”.
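At a high level, the shape of this bug is a pointer that is freed during authentication handling but never cleared, so later request handling can still see and dereference it. Below is a minimal sketch under that assumption; the names (smb2_sess_setup, krb5_authenticate, sess->user) are stand-ins, not the actual ksmbd functions.

```c
/* Minimal sketch of the shape of the kerberos authentication
 * vulnerability, NOT the real ksmbd session setup code. Function and
 * field names are illustrative stand-ins. */
#include <stdio.h>
#include <stdlib.h>

struct user_info {
    char name[32];
};

struct session {
    struct user_info *user;
};

/* Kerberos authentication step during 'session setup'. */
static int krb5_authenticate(struct session *sess)
{
    /* Re-authentication frees the previously established user object... */
    free(sess->user);
    /* ...but the pointer is not cleared; adding `sess->user = NULL;`
     * here is essentially the shape of the fix. */
    return -1; /* pretend authentication then fails */
}

/* Handler for the 'session setup' request. */
int smb2_sess_setup(struct session *sess)
{
    if (krb5_authenticate(sess) < 0) {
        /* Later handling still sees a stale, non-NULL sess->user and
         * dereferences freed memory: a use-after-free. */
        if (sess->user)
            printf("auth failed for %s\n", sess->user->name);
        return -1;
    }
    return 0;
}
```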
OpenAI on Tuesday announced the launch of ChatGPT for government agencies in the U.S. ...It allows government agencies, as customers, to feed “non-public, sensitive information” into OpenAI’s models while operating within their own secure hosting environments, OpenAI CPO Kevin Weil told reporters during a briefing Monday.
Since the launch of ChatGPT, OpenAI has sparked significant interest among both businesses and cybercriminals. While companies are increasingly concerned about whether their existing cybersecurity measures can adequately defend against threats curated with generative AI tools, attackers are finding new ways to exploit them. From crafting convincing phishing campaigns to deploying advanced credential harvesting and malware delivery methods, cybercriminals are using AI to target end users and capitalize on potential vulnerabilities.
Barracuda threat researchers recently uncovered a large-scale OpenAI impersonation campaign targeting businesses worldwide. Attackers targeted their victims with a well-known tactic — they impersonated OpenAI with an urgent message requesting updated payment information to process a monthly subscription.
We banned accounts linked to an Iranian influence operation using ChatGPT to generate content focused on multiple topics, including the U.S. presidential campaign. We have seen no indication that this content reached a meaningful audience.
Early last year, a hacker gained access to the internal messaging systems of OpenAI, the maker of ChatGPT, and stole details about the design of the company’s A.I. technologies.
The hacker lifted details from discussions in an online forum where employees talked about OpenAI’s latest technologies, according to two people familiar with the incident, but did not get into the systems where the company houses and builds its artificial intelligence.
A NewsGuard audit found that chatbots spewed misinformation from American fugitive John Mark Dougan.
Millions of articles from The New York Times were used to train chatbots that now compete with it, the lawsuit said.