Cyberveille, curated by Decio
7 results tagged "chatbot"
Would you like an IDOR with that? Leaking 64 million McDonald’s job applications https://ian.sh/mcdonalds
10/07/2025 06:50:49

Ian Carroll, Sam Curry / ian.sh
When applying for a job at McDonald's, over 90% of franchises use "Olivia," an AI-powered chatbot. We discovered a vulnerability that could allow an attacker to access more than 64 million job applications. This data includes applicants' names, resumes, email addresses, phone numbers, and personality test results.

McHire is the chatbot recruitment platform used by 90% of McDonald’s franchisees. Prospective employees chat with a bot named Olivia, created by a company called Paradox.ai, which collects their personal information and shift preferences and administers personality tests. We noticed this after seeing complaints on Reddit of the bot responding with nonsensical answers.

During a cursory security review of a few hours, we identified two serious issues: the McHire administration interface for restaurant owners accepted the default credentials 123456:123456, and an insecure direct object reference (IDOR) on an internal API allowed us to access any contacts and chats we wanted. Together they allowed us and anyone else with a McHire account and access to any inbox to retrieve the personal data of more than 64 million applicants.
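The IDOR bug class described above can be illustrated with a minimal in-memory sketch (the data model and function names below are hypothetical, not Paradox.ai's actual API): the vulnerable handler resolves any record by its client-supplied numeric ID, while the fix also verifies that the requester actually owns the object before returning it.

```python
# Minimal model of an IDOR (insecure direct object reference).
# All records, IDs, and names here are illustrative, not McHire's real data.

CHATS = {
    101: {"owner": "restaurant-a", "applicant": "Alice"},
    102: {"owner": "restaurant-b", "applicant": "Bob"},
}

def get_chat_vulnerable(requester: str, chat_id: int):
    # IDOR: the handler trusts the client-supplied ID, so any
    # authenticated requester can read any chat by guessing IDs.
    return CHATS.get(chat_id)

def get_chat_fixed(requester: str, chat_id: int):
    # Fix: resolve the object, then check ownership before returning it.
    chat = CHATS.get(chat_id)
    if chat is None or chat["owner"] != requester:
        return None
    return chat
```

With sequential numeric IDs like these, an attacker only needs one valid account to enumerate every record, which is how a single inbox could expose tens of millions of applications.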

We disclosed this issue to Paradox.ai and McDonald’s at the same time.

06/30/2025 5:46PM ET: Disclosed to Paradox.ai and McDonald’s
06/30/2025 6:24PM ET: McDonald’s confirms receipt and requests technical details
06/30/2025 7:31PM ET: Credentials are no longer usable to access the app
07/01/2025 9:44PM ET: Followed up on status
07/01/2025 10:18PM ET: Paradox.ai confirms the issues have been resolved

ian.sh EN 2025 McHire chatbot recruitment McDonald vulnerabilities
OpenAI helps spammers plaster 80,000 sites with messages that bypassed filters https://arstechnica.com/security/2025/04/openais-gpt-helps-spammers-send-blast-of-80000-messages-that-bypassed-filters/
11/04/2025 07:33:34

Company didn’t notice its chatbot was being abused for (at least) 4 months.

arstechnica EN 2025 OpenAI chatbot spammers Akirabot
How GhostGPT Empowers Cybercriminals with Uncensored AI | Abnormal https://abnormalsecurity.com/blog/ghostgpt-uncensored-ai-chatbot
24/01/2025 09:22:01

Cybercriminals use GhostGPT, an uncensored AI chatbot, for malware creation, BEC scams, and more. Learn about the risks and how AI fights back.

abnormalsecurity EN 2025 GhostGPT chatbot uncensored cybercriminals malware scams
AI girlfriend site breached, user fantasies stolen https://www.malwarebytes.com/blog/news/2024/10/ai-girlfriend-site-breached-user-fantasies-stolen
09/10/2024 19:59:55

Chatbot companion platform muah.ai was hacked and had its chatbot prompts stolen.

malwarebytes EN 2024 Chatbot muah.ai Data-Breach fantasies
Air Canada must honor refund policy invented by airline’s chatbot https://arstechnica.com/tech-policy/2024/02/air-canada-must-honor-refund-policy-invented-by-airlines-chatbot/
18/02/2024 15:11:38

Air Canada appears to have quietly killed its costly chatbot support.

arstechnica EN 2024 chatbot legal AirCanada refund policy invented
Microsoft’s AI Chatbot Replies to Election Questions With Conspiracies, Fake Scandals, and Lies https://www.wired.com/story/microsoft-ai-copilot-chatbot-election-conspiracy/
16/12/2023 10:13:44

With less than a year to go before one of the most consequential elections in US history, Microsoft’s AI chatbot is responding to political queries with conspiracies, misinformation, and out-of-date or incorrect information.

When WIRED asked the chatbot, initially called Bing Chat and recently renamed Microsoft Copilot, about polling locations for the 2024 US election, the bot referenced in-person voting by linking to an article about Russian president Vladimir Putin running for reelection next year. When asked about electoral candidates, it listed numerous GOP candidates who have already pulled out of the race.

wired EN 2023 BingChat Chatbot Election Conspiracies Lies AI
"Fobo" Trojan distributed as ChatGPT client for Windows https://www.kaspersky.com/blog/chatgpt-stealer-win-client/47274/
23/02/2023 09:00:46

Attackers are distributing malware disguised as a ChatGPT desktop client for Windows, offering “precreated accounts”.

kaspersky EN 2023 threats ChatGPT artificial-intelligence AI fraud scam OpenAI chatbot Trojan-stealer TrojanPSW
4560 links
Shaarli - The personal, minimalist, super-fast, database free, bookmarking service by the Shaarli community - Theme by kalvn - Curated by Decio