
Shaarli Daily

All of one day's links on a single page.

December 9, 2023

23andMe changes terms of service amid legal fallout from data breach

Days after a data breach allowed hackers to steal 6.9 million 23andMe users' personal details, the genetic testing company changed its terms of service to prevent customers from formally suing the firm or pursuing class-action lawsuits against it.

Why it matters: It's unclear if 23andMe is attempting to retroactively shield itself from lawsuits alleging it acted negligently.

Russian Hacker Vladimir Dunaev Pleads Guilty for Creating TrickBot Malware

Russian national Vladimir Dunaev pleaded guilty to developing TrickBot malware and faces up to 35 years in prison.

Inside Job: How a Hacker Helped Cocaine Traffickers Infiltrate Europe’s Biggest Ports

Europe’s commercial ports are top entry points for cocaine flooding in at record rates. The work of a Dutch hacker, hired by drug traffickers to penetrate port IT networks, reveals how this...

Ransomware: an extraordinary November

Overall, the past month stands out for an unprecedented level of observable threat activity, inconsistent with historically observed seasonality. But the same does not hold for France.

Using AI to Automatically Jailbreak GPT-4 and Other LLMs in Under a Minute

It’s been one year since the launch of ChatGPT, and in that time the market has seen astonishing advancement in large language models (LLMs). Even though the pace of development continues to outstrip model security, enterprises are beginning to deploy LLM-powered applications. Many rely on guardrails implemented by model developers to prevent LLMs from responding to sensitive prompts. However, even with the considerable time and effort invested by the likes of OpenAI, Google, and Meta, these guardrails are not resilient enough to protect enterprises and their users today. Concerns surrounding model risk, bias, and potential adversarial exploits have come to the forefront.