Cyberveillecurated by Decio
Nuage de tags
Mur d'images
Quotidien
Flux RSS
  • Flux RSS
  • Daily Feed
  • Weekly Feed
  • Monthly Feed
Filtres

Liens par page

  • 20 links
  • 50 links
  • 100 links

Filtres

Untagged links
page 2 / 3
49 résultats taggé ChatGPT  ✕
Personal Information Exploit on OpenAI’s ChatGPT Raise Privacy Concerns https://www.nytimes.com/interactive/2023/12/22/technology/openai-chatgpt-privacy-exploit.html
24/12/2023 12:59:27
QRCode
archive.org
thumbnail

Last month, I received an alarming email from someone I did not know: Rui Zhu, a Ph.D. candidate at Indiana University Bloomington. Mr. Zhu had my email address, he explained, because GPT-3.5 Turbo, one of the latest and most robust large language models (L.L.M.) from OpenAI, had delivered it to him.

nytimes en 2023 exploit LLM AI privacy chatgpt
AI Act, come funziona lo stop al riconoscimento biometrico della prima legge europea sull'intelligenza artificiale | Wired Italia https://www.wired.it/article/ai-act-intelligenza-artificiale-regolamento-riconoscimento-biometrico-eccezioni-polizia-crimini-autorizzazione/
12/12/2023 10:50:50
QRCode
archive.org
thumbnail

Sono previste tre eccezioni per le forze dell'ordine, con una lista di 16 crimini per le cui indagini può essere ammesso. Serve un'autorizzazione dall'autorità giudiziaria, ma si può partire senza e richiederla in 24 ore

wired.it IT 2023 ai-act intelligenza-artificiale big-data europa regole copyright privacy chatgpt google-bard sorveglianza riconoscimento-facciale
Using AI to Automatically Jailbreak GPT-4 and Other LLMs in Under a Minute https://www.robustintelligence.com/blog-posts/using-ai-to-automatically-jailbreak-gpt-4-and-other-llms-in-under-a-minute
09/12/2023 12:12:17
QRCode
archive.org
thumbnail

It’s been one year since the launch of ChatGPT, and since that time, the market has seen astonishing advancement of large language models (LLMs). Despite the pace of development continuing to outpace model security, enterprises are beginning to deploy LLM-powered applications. Many rely on guardrails implemented by model developers to prevent LLMs from responding to sensitive prompts. However, even with the considerable time and effort spent by the likes of OpenAI, Google, and Meta, these guardrails are not resilient enough to protect enterprises and their users today. Concerns surrounding model risk, biases, and potential adversarial exploits have come to the forefront.

robustintelligence EN AI Jailbreak GPT-4 chatgpt hacking LLMs research
Les 10 principales vulnérabilités des modèles GPT https://www.ictjournal.ch/articles/2023-11-17/les-10-principales-vulnerabilites-des-modeles-gpt
17/11/2023 21:08:44
QRCode
archive.org
thumbnail

Les grands modèles de langage peuvent être sujets à des cyberattaques et mettre en danger la sécurité des systèmes

ictjournal FR chatGPT cyberattaques vulnérabilités LLM OWASP top10
A Closer Look at ChatGPT's Role in Automated Malware Creation https://www.trendmicro.com/en_us/research/23/k/a-closer-look-at-chatgpt-s-role-in-automated-malware-creation.html
15/11/2023 15:50:00
QRCode
archive.org
thumbnail

As the use of ChatGPT and other artificial intelligence (AI) technologies becomes more widespread, it is important to consider the possible risks associated with their use. One of the main concerns surrounding these technologies is the potential for malicious use, such as in the development of malware or other harmful software. Our recent reports discussed how cybercriminals are misusing the large language model’s (LLM) advanced capabilities:

We discussed how ChatGPT can be abused to scale manual and time-consuming processes in cybercriminals’ attack chains in virtual kidnapping schemes.
We also reported on how this tool can be used to automate certain processes in harpoon whaling attacks to discover “signals” or target categories.

trendmicro EN 2023 malware articles news reports research ChatGPT
Microsoft Temporarily Blocked Internal Access to ChatGPT, Citing Data Concerns https://www.wsj.com/tech/microsoft-temporarily-blocked-internal-access-to-chatgpt-citing-data-concerns-c1ca475d
10/11/2023 09:28:23
QRCode
archive.org
thumbnail

The company later restored access to the chatbot, which is owned by OpenAI.

wsj EN 2023 Microsoft Temporarily Blocked ChatGPT OpenAI
AI companies have all kinds of arguments against paying for copyrighted content https://www.theverge.com/2023/11/4/23946353/generative-ai-copyright-training-data-openai-microsoft-google-meta-stabilityai
05/11/2023 13:48:35
QRCode
archive.org
thumbnail

The biggest companies in AI aren’t interested in paying to use copyrighted material as training data, and here are their reasons why.

theverge EN 2023 AI copyright companies ChatGPT
ChatGPT fails in languages like Tamil and Bengali https://restofworld.org/2023/chatgpt-problems-global-language-testing/
12/09/2023 22:00:34
QRCode
archive.org
thumbnail

Outside of English, ChatGPT makes up words, fails logic tests, and can't do basic information retrieval.

restofworld EN 2023 ChatGPT fails Tamil Bengali
Don’t you (forget NLP): Prompt injection with control characters in ChatGPT https://dropbox.tech/machine-learning/prompt-injection-with-control-characters-openai-chatgpt-llm
04/08/2023 09:47:15
QRCode
archive.org
thumbnail

Like many companies, Dropbox has been experimenting with large language models (LLMs) as a potential backend for product and research initiatives. As interest in leveraging LLMs has increased in recent months, the Dropbox Security team has been advising on measures to harden internal Dropbox infrastructure for secure usage in accordance with our AI principles. In particular, we’ve been working to mitigate abuse of potential LLM-powered products and features via user-controlled input.

dropbox EN 2023 ChatGPT LLMs prompt-injection
WormGPT - The Generative AI Tool Cybercriminals Are Using to Launch BEC Attacks https://slashnext.com/blog/wormgpt-the-generative-ai-tool-cybercriminals-are-using-to-launch-business-email-compromise-attacks/
16/07/2023 11:57:45
QRCode
archive.org
thumbnail

In this blog post, we'll look at the use of generative AI, including OpenAI's ChatGPT, and the cybercrime tool WormGPT, in BEC attacks.

slashnext EN 2023 WormGPT ChatGPT bec email-protection threat-discovery
WormGPT: New AI Tool Allows Cybercriminals to Launch Sophisticated Cyber Attacks https://thehackernews.com/2023/07/wormgpt-new-ai-tool-allows.html
15/07/2023 14:11:42
QRCode
archive.org
thumbnail

A new generative AI cybercrime tool called WormGPT is making waves in underground forums. It empowers cybercriminals to automate phishing attacks.

thehackernews EN 2023 WormGPT AI ChatGPT cybercrime automate phishing attacks
ChatGPT creates mutating malware that evades detection by EDR https://www.csoonline.com/article/3698516/chatgpt-creates-mutating-malware-that-evades-detection-by-edr.html
07/06/2023 19:56:49
QRCode
archive.org
thumbnail

A global sensation since its initial release at the end of last year, ChatGPT's popularity among consumers and IT professionals alike has stirred up cybersecurity nightmares about how it can be used to exploit system vulnerabilities. A key problem, cybersecurity experts have demonstrated, is the ability of ChatGPT and other large language models (LLMs) to generate polymorphic, or mutating, code to evade endpoint detection and response (EDR) systems.

csoonline EN 2023 ChatGPT LLMs EDR BlackMamba
ChatGPT Plugins: Data Exfiltration via Images & Cross Plugin Request Forgery https://embracethered.com/blog/posts/2023/chatgpt-webpilot-data-exfil-via-markdown-injection/
23/05/2023 22:30:12
QRCode
archive.org

Plugins can return malicious content and hijack your AI.

embracethered EN 2023 ChatGPT Data Exfiltration Cross Plugin Request Forgery
Apple Restricts Employee Use of ChatGPT, Joining Other Companies Wary of Leaks https://archive.ph/g6Irs
21/05/2023 17:02:34
QRCode
archive.org

The iPhone maker is concerned workers could release confidential data as it develops its own similar technology.

wsj 2023 Apple ChatGPT Restricts Leak confidential
“FleeceGPT” mobile apps target AI-curious to rake in cash https://news.sophos.com/en-us/2023/05/17/fleecegpt-mobile-apps-target-ai-curious-to-rake-in-cash/
18/05/2023 01:37:15
QRCode
archive.org
thumbnail

Interest in OpenAI’s latest version of its interactive language model has spurred a new wave of scam apps looking to cash in on the hype

sophos EN 2023 Fleeceware ChatGPT scam apps
OpenAI’s regulatory troubles are just beginning https://www.theverge.com/2023/5/5/23709833/openai-chatgpt-gdpr-ai-regulation-europe-eu-italy
06/05/2023 21:18:35
QRCode
archive.org
thumbnail

OpenAI managed to appease Italian data authorities and lift the country’s effective ban on ChatGPT last week, but its fight against European regulators is far from over. 

theverge EN 2023 OpenAI ChatGPT European GDPR
Bad Actors Are Joining the AI Revolution: Here’s What We’ve Found in the Wild https://hackernoon.com/bad-actors-are-joining-the-ai-revolution-heres-what-weve-found-in-the-wild?source=rss
03/05/2023 10:05:36
QRCode
archive.org
thumbnail

Follow security researchers as they uncover malicious packages on open-source registries, trace bad actors to Discord, and unveil AI-assisted code.

hackernoon EN 2023 python PyPI Supply-Chain-Attack ChatGPT
AI-Powered 'BlackMamba' Keylogging Attack Evades Modern EDR Security https://www.darkreading.com/endpoint/ai-blackmamba-keylogging-edr-security
03/05/2023 09:43:06
QRCode
archive.org
thumbnail

Researchers warn that polymorphic malware created with ChatGPT and other LLMs will force a reinvention of security automation.

darkreading EN 2023 ChatGPT EDR evasion Polymorphic BlackMamba LLM
Samsung Fab Workers Leak Confidential Data While Using ChatGPT https://www.tomshardware.com/news/samsung-fab-workers-leak-confidential-data-to-chatgpt
08/04/2023 01:33:57
QRCode
archive.org
thumbnail

Samsung fab personnel reportedly used ChatGPT to optimize operations and create presentations, leaking confidential data to the third-party AI.

tomshardware EN 2023 Samsung ChatGPT Leak
The criminal use of ChatGPT – a cautionary tale about large language models https://www.europol.europa.eu/media-press/newsroom/news/criminal-use-of-chatgpt-cautionary-tale-about-large-language-models
27/03/2023 13:18:01
QRCode
archive.org
thumbnail

In response to the growing public attention given to ChatGPT, the Europol Innovation Lab organised a number of workshops with subject matter experts from across Europol to explore how criminals can abuse large language models (LLMs) such as ChatGPT, as well as how it may assist investigators in their daily work.

europol 2023 EN ChatGPT criminal use
page 2 / 3
4481 links
Shaarli - The personal, minimalist, super-fast, database free, bookmarking service par la communauté Shaarli - Theme by kalvn - Curated by Decio