Recent Jailbreaks Demonstrate Emerging Threat to DeepSeek
Many-shot jailbreaking \ Anthropic
Bad Likert Judge: A Novel Multi-Turn Technique to Jailbreak LLMs by Misusing Their Evaluation Capability
EPFL: security flaws in AI models
Here is Apple's official 'jailbroken' iPhone for security researchers | TechCrunch
Using AI to Automatically Jailbreak GPT-4 and Other LLMs in Under a Minute