Cyberveille - curated by Decio
70 results tagged AI
How I used o3 to find CVE-2025-37899, a remote zeroday vulnerability in the Linux kernel’s SMB implementation https://sean.heelan.io/2025/05/22/how-i-used-o3-to-find-cve-2025-37899-a-remote-zeroday-vulnerability-in-the-linux-kernels-smb-implementation/
26/05/2025 06:43:02

In this post I’ll show you how I found a zeroday vulnerability in the Linux kernel using OpenAI’s o3 model. I found the vulnerability with nothing more complicated than the o3 API – no scaffolding, no agentic frameworks, no tool use.
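
A minimal sketch of what "nothing more than the o3 API" can look like in practice, assuming the OpenAI Python SDK; the model name, system prompt, and target file here are illustrative assumptions, not the author's actual setup:

```python
# Hypothetical sketch (not the author's actual harness): send one source file
# to o3 with a system prompt asking it to hunt for memory-safety bugs.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical choice of ksmbd source file under audit.
with open("smb2pdu.c") as f:
    code = f.read()

response = client.chat.completions.create(
    model="o3",
    messages=[
        {"role": "system",
         "content": "You are auditing C code for memory-safety bugs. "
                    "Report any use-after-free and the code path that triggers it."},
        {"role": "user", "content": code},
    ],
)
print(response.choices[0].message.content)
```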

Recently I’ve been auditing ksmbd for vulnerabilities. ksmbd is “a linux kernel server which implements SMB3 protocol in kernel space for sharing files over network”. I started this project specifically to take a break from LLM-related tool development, but after the release of o3 I couldn’t resist using the bugs I had found in ksmbd as a quick benchmark of o3’s capabilities. In a future post I’ll discuss o3’s performance across all of those bugs, but here we’ll focus on how o3 found a zeroday vulnerability during my benchmarking. The vulnerability it found is CVE-2025-37899 (fix here), a use-after-free in the handler for the SMB ‘logoff’ command. Understanding the vulnerability requires reasoning about concurrent connections to the server, and how they may share various objects in specific circumstances. o3 was able to comprehend this and spot a location where a particular object that is not reference counted is freed while still being accessible by another thread. As far as I’m aware, this is the first public discussion of a vulnerability of that nature being found by an LLM.
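
A toy sketch of the bug class, assuming nothing about ksmbd's actual code: one thread clears a shared, non-reference-counted object while another thread is still using it. In Python the loser of the race just sees None; in kernel C the same interleaving dereferences freed memory:

```python
# Toy analogue of the logoff race (illustrative only, not kernel code):
# two handlers share one session; logoff frees the user object without
# any reference counting while another worker is still using it.
import threading
import time

class Session:
    def __init__(self):
        self.user = {"name": "alice"}   # shared object, no refcount

sess = Session()

def logoff_handler(s):
    s.user = None                       # analogue of kfree(sess->user)

def other_worker(s):
    time.sleep(0.01)                    # window in which logoff wins the race
    try:
        print(s.user["name"])           # in C this dereferences freed memory
    except TypeError:
        print("lost the race: session user already freed")

t1 = threading.Thread(target=other_worker, args=(sess,))
t2 = threading.Thread(target=logoff_handler, args=(sess,))
t1.start(); t2.start()
t1.join(); t2.join()
```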

Before I get into the technical details, the main takeaway from this post is this: with o3, LLMs have made a leap forward in their ability to reason about code, and if you work in vulnerability research you should start paying close attention. If you’re an expert-level vulnerability researcher or exploit developer, the machines aren’t about to replace you. In fact, it is quite the opposite: they are now at a stage where they can make you significantly more efficient and effective. If you have a problem that can be represented in fewer than 10k lines of code, there is a reasonable chance o3 can either solve it, or help you solve it.

Benchmarking o3 using CVE-2025-37778
Let’s first discuss CVE-2025-37778, a vulnerability that I found manually and which I was using as a benchmark for o3’s capabilities when it found the zeroday, CVE-2025-37899.

CVE-2025-37778 is a use-after-free vulnerability. The issue occurs during the Kerberos authentication path when handling a “session setup” request from a remote client. To save us referring to CVE numbers, I will refer to this vulnerability as the “kerberos authentication vulnerability”.

sean.heelan.io EN 2025 CVE-2025-37899 Linux OpenAI CVE 0-day found implementation o3 vulnerability AI
Unit 42 Develops Agentic AI Attack Framework https://www.paloaltonetworks.com/blog/2025/05/unit-42-develops-agentic-ai-attack-framework/
21/05/2025 13:31:05

Threat actors are advancing AI strategies and outpacing traditional security. CXOs must critically examine AI weaponization across the attack chain.

The integration of AI into adversarial operations is fundamentally reshaping the speed, scale and sophistication of attacks. As AI defense capabilities evolve, so do the AI strategies and tools leveraged by threat actors, creating a rapidly shifting threat landscape that outpaces traditional detection and response methods. This accelerating evolution means CXOs must critically examine how threat actors will strategically weaponize AI across each phase of the attack chain.

One of the most alarming shifts we have seen following the introduction of AI technologies is the dramatic drop in mean time to exfiltrate (MTTE) data following initial access. In 2021, the average MTTE stood at nine days. According to our Unit 42 2025 Global Incident Response Report, by 2024 MTTE had dropped to two days. In one in five cases, the time from compromise to exfiltration was less than one hour.

In our testing, Unit 42 was able to simulate a ransomware attack (from initial compromise to data exfiltration) in just 25 minutes using AI at every stage of the attack chain. That’s a 100x increase in speed, powered entirely by AI.
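
A back-of-envelope check of the quoted speed-up (the arithmetic below is ours, not Unit 42's):

```python
# Rough arithmetic behind the quoted "100x" figure.
mtte_2021 = 9 * 24 * 60       # 2021 average MTTE: nine days, in minutes
mtte_2024 = 2 * 24 * 60       # 2024 average MTTE: two days, in minutes
simulated = 25                # Unit 42's AI-assisted simulation, in minutes

print(round(mtte_2024 / simulated))   # ~115, i.e. roughly the quoted "100x"
print(round(mtte_2021 / simulated))   # ~518 versus the 2021 baseline
```
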
Recent threat activity observed by Unit 42 has highlighted how adversaries are leveraging AI in attacks:

  • Deepfake-enabled social engineering has been observed in campaigns from groups like Muddled Libra (also known as Scattered Spider), who have used AI-generated audio and video to impersonate employees during help desk scams.
  • North Korean IT workers are using real-time deepfake technology to infiltrate organizations through remote work positions, which poses significant security, legal and compliance risks.
  • Attackers are leveraging generative AI to conduct ransomware negotiations, breaking down language barriers and more effectively negotiating higher ransom payments.
  • AI-powered productivity assistants are being used to identify sensitive credentials in victim environments.
paloaltonetworks EN 2025 Agentic-AI AI attack-chain Attack-Simulations
A Chinese AI video startup appears to be blocking politically sensitive images | TechCrunch https://techcrunch.com/2025/04/22/a-chinese-ai-video-startup-appears-to-be-blocking-politically-sensitive-images/
27/04/2025 11:51:06

A Chinese startup, Sand AI, appears to be blocking certain politically sensitive images from its online video generation tool.

A China-based startup, Sand AI, has released an openly licensed, video-generating AI model that’s garnered praise from entrepreneurs like the founding director of Microsoft Research Asia, Kai-Fu Lee. But Sand AI appears to be censoring the hosted version of its model to block images that might raise the ire of Chinese regulators, according to TechCrunch’s testing.

Earlier this week, Sand AI announced Magi-1, a model that generates videos by “autoregressively” predicting sequences of frames. The company claims the model can generate high-quality, controllable footage that captures physics more accurately than rival open models.

techcrunch EN 2025 AI China censure Sand-AI AI-model Magi-1
All Major Gen-AI Models Vulnerable to ‘Policy Puppetry’ Prompt Injection Attack https://www.securityweek.com/all-major-gen-ai-models-vulnerable-to-policy-puppetry-prompt-injection-attack/
25/04/2025 21:42:03

A new attack technique named Policy Puppetry can break the protections of major gen-AI models to produce harmful outputs.

securityweek EN 2025 technique Gen-AI Models Policy-Puppetry AI vulnerability
AI-Powered Polymorphic Phishing Is Changing the Threat Landscape https://www.securityweek.com/ai-powered-polymorphic-phishing-is-changing-the-threat-landscape/
24/04/2025 15:36:58

Combined with AI, polymorphic phishing emails have become highly sophisticated, creating more personalized and evasive messages that result in higher attack success rates.

securityweek EN 2025 AI polymorphic phishing sophisticated evasive messages
Darknet’s Xanthorox AI Offers Customizable Tools for Hackers https://www.infosecurity-magazine.com/news/darknets-xanthorox-ai-hackers-tools/
13/04/2025 10:50:08

A self-contained AI system engineered for offensive cyber operations, Xanthorox AI, has surfaced on darknet forums and encrypted channels.

Introduced in late Q1 2025, it marks a shift in the threat landscape with its autonomous, modular structure designed to support large-scale, highly adaptive cyber-attacks.

Built entirely on private servers, Xanthorox avoids using public APIs or cloud services, significantly reducing its visibility and traceability.

infosecurity EN 2025 Xanthorox AI self-contained tool
Anatomy of an LLM RCE https://www.cyberark.com/resources/all-blog-posts/anatomy-of-an-llm-rce
09/04/2025 06:45:55

As large language models (LLMs) become more advanced and are granted additional capabilities by developers, security risks increase dramatically. Manipulated LLMs are no longer just a risk of...

cyberark EN 2025 LLM RCE analysis AI
Analyzing open-source bootloaders: Finding vulnerabilities faster with AI https://www.microsoft.com/en-us/security/blog/2025/03/31/analyzing-open-source-bootloaders-finding-vulnerabilities-faster-with-ai/
02/04/2025 06:44:13

By leveraging Microsoft Security Copilot to expedite the vulnerability discovery process, Microsoft Threat Intelligence uncovered several vulnerabilities in multiple open-source bootloaders, impacting all operating systems relying on Unified Extensible Firmware Interface (UEFI) Secure Boot as well as IoT devices. The vulnerabilities found in the GRUB2 bootloader (commonly used as a Linux bootloader) and the U-Boot and Barebox bootloaders (commonly used for embedded systems) could allow threat actors to gain control of affected devices and execute arbitrary code.

microsoft EN 2025 open-source bootloaders UEFI GRUB2 AI
Many-shot jailbreaking \ Anthropic https://www.anthropic.com/research/many-shot-jailbreaking
08/01/2025 12:17:06

Anthropic is an AI safety and research company that's working to build reliable, interpretable, and steerable AI systems.

anthropic EN 2024 AI LLM Jailbreak Many-shot
Criminals Use Generative Artificial Intelligence to Facilitate Financial Fraud https://www.ic3.gov/PSA/2024/PSA241203
04/12/2024 09:10:07

The FBI is warning the public that criminals exploit generative artificial intelligence (AI) to commit fraud on a larger scale, which increases the believability of their schemes. Generative AI reduces the time and effort criminals must expend to deceive their targets. Generative AI takes what it has learned from examples input by a user and synthesizes something entirely new based on that information. These tools assist with content creation and can correct for human errors that might otherwise serve as warning signs of fraud. The creation or distribution of synthetic content is not inherently illegal; however, synthetic content can be used to facilitate crimes, such as fraud and extortion. Since it can be difficult to identify when content is AI-generated, the FBI is providing the following examples of how criminals may use generative AI in their fraud schemes to increase public recognition and scrutiny.

ic3.gov EN 2024 warning Criminals Use Generative AI Financial Fraud recommendations
Exclusive: Chinese researchers develop AI model for military use on back of Meta's Llama https://www.reuters.com/technology/artificial-intelligence/chinese-researchers-develop-ai-model-military-use-back-metas-llama-2024-11-01/
01/11/2024 09:24:34
  • Papers show China reworked Llama model for military tool
  • China's top PLA-linked Academy of Military Science involved
  • Meta says PLA 'unauthorised' to use Llama model
  • Pentagon says it is monitoring competitors' AI capabilities
reuters EN China Llama model military tool Meta AI LLM Pentagon
Researchers say AI transcription tool used in hospitals invents things no one ever said | AP News https://apnews.com/article/ai-artificial-intelligence-health-business-90020cdf5fa16c79ca2e5b6c4c9bbb14
28/10/2024 06:38:32

Whisper is a popular transcription tool powered by artificial intelligence, but it has a major flaw. It makes things up that were never said.

apnews EN 2024 hallucinations transcription Generative AI Health San General Artificial Technology US Whisper
MITRE Announces AI Incident Sharing Project https://www.securityweek.com/mitre-announces-ai-incident-sharing-project/
14/10/2024 09:07:29

MITRE’s AI Incident Sharing initiative helps organizations receive and hand out data on real-world AI incidents.

Non-profit technology and R&D company MITRE has introduced a new mechanism that enables organizations to share intelligence on real-world AI-related incidents.

Shaped in collaboration with over 15 companies, the new AI Incident Sharing initiative aims to increase community knowledge of threats and defenses involving AI-enabled systems.

securityweek EN 2024 MITRE AI-related incidents AI Incident Sharing initiative
Critical flaw in NVIDIA Container Toolkit allows full host takeover https://www.bleepingcomputer.com/news/security/critical-flaw-in-nvidia-container-toolkit-allows-full-host-takeover/
01/10/2024 11:16:27

A critical vulnerability in NVIDIA Container Toolkit impacts all AI applications in a cloud or on-premise environment that rely on it to access GPU resources.

bleepingcomputer EN 2024 AI Artificial-Intelligence Cloud Cloud-Security Container-Escape NVIDIA Vulnerability Security InfoSec Computer-Security
Europe’s privacy watchdog probes Google over data used for AI training https://arstechnica.com/tech-policy/2024/09/europes-privacy-watchdog-probes-google-over-data-used-for-ai-training/
12/09/2024 16:12:53

Meta and X have already paused some AI training over the same set of concerns.

arstechnica EN 2024 Meta AI probe training EU Google watchdog privacy legal
No one’s ready for this https://www.theverge.com/2024/8/22/24225972/ai-photo-era-what-is-reality-google-pixel-9
23/08/2024 09:34:53

With AI photo editing getting easy and convincing, the world isn’t prepared for an era where photographs aren’t to be trusted.

theverge EN 2024 photo-editing AI fake trust images
Websites are Blocking the Wrong AI Scrapers (Because AI Companies Keep Making New Ones) https://www.404media.co/websites-are-blocking-the-wrong-ai-scrapers-because-ai-companies-keep-making-new-ones/
30/07/2024 10:28:49

Hundreds of sites have put old Anthropic scrapers on their blocklist, while leaving a new one unblocked.
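
The failure mode is easy to reproduce with Python's standard-library robots.txt parser; the agent tokens below follow the article's premise (the stale "anthropic-ai" name on the blocklist, the newer "ClaudeBot" crawler left unmatched):

```python
# A blocklist naming only the old Anthropic agent does nothing against the
# newer crawler name. Agent tokens follow the article's premise.
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: anthropic-ai
Disallow: /
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

print(rp.can_fetch("anthropic-ai", "https://example.com/post"))  # False: blocked
print(rp.can_fetch("ClaudeBot", "https://example.com/post"))     # True: sails through
```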

404media EN 2024 robots.txt bots AI scrapers blocklist
Figma Disables AI App Design Tool After It Copied Apple’s Weather App https://www.404media.co/figma-disables-ai-app-design-tool-after-it-copied-apples-weather-app/
03/07/2024 08:26:10

“Ultimately it is my fault for not insisting on a better QA process for this work and pushing our team hard to hit a deadline,” Figma’s CEO said.

404media EN Figma disabled AI copyright legal issue design
Probllama: Ollama Remote Code Execution Vulnerability (CVE-2024-37032) https://www.wiz.io/blog/probllama-ollama-vulnerability-cve-2024-37032
25/06/2024 08:51:44

Wiz Research discovered CVE-2024-37032, an easy-to-exploit Remote Code Execution vulnerability in the open-source AI Infrastructure project Ollama.
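
Wiz describes a path-traversal-style flaw in which attacker-influenced input reaches a filesystem path. A generic sketch of that bug class and its usual fix, assuming nothing about Ollama's actual code:

```python
# Generic path-traversal sketch (illustrative; not Ollama's actual code).
import os

STORE = "/var/lib/models/blobs"          # hypothetical blob directory

def save_blob_unsafe(digest: str, data: bytes) -> None:
    # Vulnerable pattern: an attacker-controlled 'digest' is joined straight
    # into a path, e.g. "../../../../etc/ld.so.preload" escapes STORE.
    path = os.path.join(STORE, digest)
    with open(path, "wb") as f:
        f.write(data)

def save_blob_safe(digest: str, data: bytes) -> None:
    # Usual fix: resolve the path, then confirm it is still inside STORE.
    path = os.path.realpath(os.path.join(STORE, digest))
    if os.path.commonpath([path, STORE]) != STORE:
        raise ValueError("path traversal attempt")
    with open(path, "wb") as f:
        f.write(data)
```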

wiz EN 2024 CVE-2024-37032 Overview Mitigations Ollama AI Infrastructure easy-to-exploit RCE
In China, AI transformed Ukrainian YouTuber into a Russian https://www.reuters.com/technology/artificial-intelligence/china-ai-transformed-ukrainian-youtuber-into-russian-2024-06-21/
21/06/2024 06:40:50

Olga Loiek, a University of Pennsylvania student, was looking for an audience on the internet – just not like this.
Shortly after launching a YouTube channel in November last year, Loiek, a 21-year-old from Ukraine, found her image had been taken and spun through artificial intelligence to create alter egos on Chinese social media platforms.
Her digital doppelgangers - like "Natasha" - claimed to be Russian women fluent in Chinese who wanted to thank China for its support of Russia and make a little money on the side selling products such as Russian candies.

reuters EN 2024 AI Ukrainian YouTuber Russia China fake