MCP Prompt Injection: Not Just For Evil
Anatomy of an LLM RCE
Recent Jailbreaks Demonstrate Emerging Threat to DeepSeek
Many-shot jailbreaking (Anthropic)
Bad Likert Judge: A Novel Multi-Turn Technique to Jailbreak LLMs by Misusing Their Evaluation Capability
EPFL: Security Flaws in AI Models
Project Naptime: Evaluating Offensive Security Capabilities of Large Language Models
Security Brief: TA547 Targets German Organizations with Rhadamanthys Stealer
Diving Deeper into AI Package Hallucinations
Personal Information Exploit on OpenAI’s ChatGPT Raises Privacy Concerns
The Top 10 Vulnerabilities of GPT Models
AI-Powered 'BlackMamba' Keylogging Attack Evades Modern EDR Security