
The Daily Shaarli

All of a day's links on one page.

Today - November 18, 2025

Researchers discover security vulnerability in WhatsApp

univie.ac.at
University of Vienna
18.11.2025

IT security researchers from the University of Vienna and SBA Research identified and responsibly disclosed a large-scale privacy weakness in WhatsApp's contact discovery mechanism that allowed the enumeration of 3.5 billion accounts. In collaboration with the researchers, Meta has since addressed and mitigated the issue. The study underscores the importance of continuous, independent security research on widely used communication platforms and highlights the risks associated with the centralization of instant messaging services. The preprint of the study has now been published, and the results will be presented in 2026 at the Network and Distributed System Security (NDSS) Symposium.

WhatsApp's contact discovery mechanism can use a user's address book to find other WhatsApp users by their phone number. Using the same underlying mechanism, the researchers demonstrated that it was possible to query more than 100 million phone numbers per hour through WhatsApp's infrastructure, confirming more than 3.5 billion active accounts across 245 countries. "Normally, a system shouldn't respond to such a high number of requests in such a short time — particularly when originating from a single source," explains lead author Gabriel Gegenhuber from the University of Vienna. "This behavior exposed the underlying flaw, which allowed us to issue an effectively unlimited number of requests to the server and, in doing so, map user data worldwide."
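
To put that query rate in perspective, a simple back-of-envelope calculation shows how quickly such a volume covers a global numbering space. A minimal sketch in Python; the size of the candidate pool is an illustrative assumption, not a figure from the study:

```python
# Back-of-envelope: time to sweep a candidate phone-number space at the
# query rate the researchers demonstrated. The pool size is an
# illustrative assumption, not a number taken from the paper.
QUERIES_PER_HOUR = 100_000_000        # rate reported in the study
CANDIDATE_NUMBERS = 10_000_000_000    # hypothetical pool of plausible mobile numbers

hours = CANDIDATE_NUMBERS / QUERIES_PER_HOUR
print(f"Full sweep: {hours:.0f} hours (~{hours / 24:.1f} days)")
# -> Full sweep: 100 hours (~4.2 days)
```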

The data items accessible in the study are the same ones that are visible to anyone who knows a user's phone number: phone number, public keys, timestamps, and, if set to public, the about text and profile picture. From these data points, the researchers were able to extract additional information, allowing them to infer a user's operating system, account age, and number of linked companion devices. The study shows that even this limited amount of data per user can reveal important information, at both the macroscopic and individual level.

The study also revealed a range of broader insights:
- Millions of active WhatsApp accounts were identified in countries where the platform was officially banned, including China, Iran, and Myanmar.
- Population-level insights into platform usage, such as the global distribution of Android (81%) versus iOS (19%) devices, regional differences in privacy behavior (e.g., use of public profile pictures or "about" taglines), and variations in user growth across countries.
- A small number of cases showed re-use of cryptographic keys across different devices or phone numbers, pointing to potential weaknesses in non-official WhatsApp clients or to fraudulent use.
- Nearly half of all phone numbers that appeared in the 2021 Facebook data leak of 500 million phone numbers (caused by a scraping incident in 2018) were still active on WhatsApp, highlighting the enduring risks of such exposures, e.g., leaked numbers being targeted in scam calls.
The study did not involve access to message content, and no personal data was published or shared. All retrieved data was deleted by the researchers prior to publication. Message content on WhatsApp is “end-to-end encrypted” and was not affected at any time. “This end-to-end encryption protects the content of messages, but not necessarily the associated metadata,” explains last author Aljosha Judmayer from the University of Vienna. “Our work shows that privacy risks can also arise when such metadata is collected and analysed on a large scale.”

“These findings remind us that even mature, widely trusted systems can contain design or implementation flaws that have real-world consequences,” says lead author Gabriel Gegenhuber from the University of Vienna. “They show that security and privacy are not one-time achievements, but must be continuously re-evaluated as technology evolves.”

"Building on our previous findings on delivery receipts and key management, we are contributing to a long-term understanding of how messaging systems evolve and where new risks arise," adds co-author Maximilian Günther from the University of Vienna.

“We are grateful to the University of Vienna researchers for their responsible partnership and diligence under our Bug Bounty program. This collaboration successfully identified a novel enumeration technique that surpassed our intended limits, allowing the researchers to scrape basic publicly available information. We had already been working on industry-leading anti-scraping systems, and this study was instrumental in stress-testing and confirming the immediate efficacy of these new defenses. Importantly, the researchers have securely deleted the data collected as part of the study, and we have found no evidence of malicious actors abusing this vector. As a reminder, user messages remained private and secure thanks to WhatsApp’s default end-to-end encryption, and no non-public data was accessible to the researchers”, says Nitin Gupta, Vice President of Engineering at WhatsApp.

Ethical Handling and Disclosure
The research was conducted with strict ethical guidelines and in accordance with responsible disclosure principles. The findings were promptly reported to Meta, the operator of WhatsApp, which has since implemented countermeasures (e.g., rate-limiting, stricter profile information visibility) to close the identified vulnerability. The authors argue that transparency, academic scrutiny, and independent testing are essential to maintaining trust in global communication services. They emphasize that proactive collaboration between researchers and industry can significantly improve user privacy and prevent abuse.
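
Meta has not published the technical details of these countermeasures, but the rate limiting mentioned above is commonly built on a token bucket. A minimal Python sketch of the general idea, with purely hypothetical parameters:

```python
import time

class TokenBucket:
    """Token-bucket rate limiter: permits short bursts while capping the
    sustained request rate for a given client. Parameters are hypothetical."""

    def __init__(self, rate_per_sec: float, burst: int):
        self.rate = rate_per_sec       # tokens added back per second
        self.capacity = burst          # maximum burst size
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

# Hypothetical policy: 5 contact-discovery lookups per second, bursts of 20.
bucket = TokenBucket(rate_per_sec=5, burst=20)
admitted = sum(bucket.allow() for _ in range(100))
print(f"{admitted} of 100 back-to-back requests admitted")  # roughly the burst size
```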

Research Context
This publication represents the third study by researchers from the University of Vienna and SBA Research examining the security and privacy of prevalent instant messengers such as WhatsApp and Signal. The team investigates how design and implementation choices in end-to-end encrypted messaging services can unintentionally expose user information or weaken privacy guarantees.

Earlier this year, the researchers published "Careless Whisper: Exploiting Silent Delivery Receipts to Monitor Users on Mobile Instant Messengers" (distinguished with the Best Paper Award at RAID 2025), which demonstrated how silent pings and their delivery receipts could be abused to infer user activity patterns and online behavior on WhatsApp and similar messaging platforms. Later that same year, "Prekey Pogo: Investigating Security and Privacy Issues in WhatsApp's Handshake Mechanism" (presented at USENIX WOOT 2025) analyzed the cryptographic foundations of WhatsApp's prekey distribution mechanism, revealing implementation weaknesses of the Signal-based protocol.

"By building on our earlier findings about delivery receipts and key management, we're contributing to a long-term understanding of how messaging systems evolve, and where new risks emerge." said Maximilian Günther (University of Vienna).

The current study, "Hey there! You are using WhatsApp: Enumerating Three Billion Accounts for Security and Privacy", extends this line of research to global scale, showing how contact discovery mechanisms can unintentionally allow large-scale user enumeration at unprecedented magnitude. It will appear in the proceedings of the NDSS Symposium 2026, one of the leading international conferences on computer and network security.

Publication: Gabriel K. Gegenhuber, Philipp É. Frenzel, Maximilian Günther, Johanna Ullrich and Aljosha Judmayer: Hey there! You are using WhatsApp: Enumerating Three Billion Accounts for Security and Privacy. In: Network and Distributed System Security Symposium (NDSS), 2026.

Deepfakes join Russia’s cultural censorship toolkit

The European Correspondent
Dmitriy Beliaev

A Russian series released in October used AI to replace actor Maxim Vitorgan’s face – and removed his name from the credits. Vitorgan reported it himself on social media, while the streaming platform Kion offered no explanation.

It was the second time the actor had been digitally erased and replaced with AI – a punishment for his vocal opposition to the war in Ukraine. On the first day of the invasion in 2022, he posted a black square on Instagram with the caption “Shame” to his 700,000 followers. That led to his removal from another show in 2023.

Erasing “undesirable” actors, writers, and musicians has become routine in Russia, where censorship has tightened its grip on cultural life since the country’s full-scale invasion of Ukraine.

TV channels and streaming platforms now not only blur or replace actors with AI, but also cut entire scenes – scrubbing away unwanted dialogue, characters, or references that the state considers unwelcome.

In April 2025, a TV channel removed a map of Odesa and cut a reference to the 2006 deportation of Georgian citizens from Russia in a 2010 film (which also featured Vitorgan). In June, Russian streaming services removed a line mentioning Putin's death from the 2024 Spanish thriller Rich Flu.

Censorship now extends far beyond politics, reshaping even harmless scenes: in early November, following a law banning so-called “LGBT propaganda”, a Russian online cinema cut a Fight Club (1999) scene showing men kissing.

It goes beyond films. Several broadcasters have been fined for airing music videos deemed “LGBT propaganda”. In January 2023, a court fined the TNT Music channel one million rubles (roughly €10,600) over the music video Hallucination by Regard and Years & Years.

A year later, another broadcaster, Tochka TV, was fined for airing a music video by pro-regime singer Nikolai Baskov that was deemed “LGBT propaganda” because of “the lyrical subject’s relationship with a male”. The video had aired on television without issue before. After the new laws came in, some Russian artists began deleting their old videos from YouTube and social media.

Publishers are also blacking out entire paragraphs in books. Even a biography of Italian director Pier Paolo Pasolini was censored, with about a fifth of the text removed because it described the openly gay filmmaker's personal life.

The invasion of Ukraine has triggered a kind of patriotic cultural revolution. Actors, directors, and musicians who publicly opposed the war have been effectively blacklisted – removed from the big screens, stripped of work, and, in many cases, pushed into exile. Some have been declared “foreign agents”, a status that severely restricts civil rights and professional opportunities.

Some songs by these “agents” are being removed from Russian streaming platforms, and performing them publicly can lead to fines or even arrest. In the most recent case, in October, several young street musicians in St Petersburg were arrested for singing songs by anti-war artists.

The Pentagon Is Spending Millions On AI Hacking From Startup Twenty

forbes.com
By Thomas Brewster, Forbes Staff.
Nov 15, 2025, 08:00am EST. Updated Nov 16, 2025, 06:40am EST.

The U.S. government has been contracting stealth startup Twenty, which is working on AI agents and automated hacking of foreign targets at massive scale.
The U.S. is quietly investing in AI agents for cyberwarfare, spending millions this year on a secretive startup that’s using AI for offensive cyberattacks on American enemies.
According to federal contracting records, a stealth, Arlington, Virginia-based startup called Twenty, or XX, signed a contract with the U.S. Cyber Command this summer worth up to $12.6 million. It scored a $240,000 research contract with the Navy, too. The company has received VC support from In-Q-Tel, the nonprofit venture capital organization founded by the CIA, as well as Caffeinated Capital and General Catalyst. Twenty couldn’t be reached for comment at the time of publication.

Twenty’s contracts are a rare case of a VC-backed AI offensive cyber company landing Cyber Command work; typically, such contracts have gone either to small bespoke firms or to the old guard of defense contracting like Booz Allen Hamilton or L3Harris.

Though the firm hasn’t launched publicly yet, its website states its focus is “transforming workflows that once took weeks of manual effort into automated, continuous operations across hundreds of targets simultaneously.” Twenty claims it is “fundamentally reshaping how the U.S. and its allies engage in cyber conflict.”

Its job ads reveal more. In one, Twenty is seeking a director of offensive cyber research, who will develop “advanced offensive cyber capabilities including attack path frameworks… and AI-powered automation tools.” AI engineer job ads indicate Twenty will be deploying open source tools like CrewAI, which is used to manage multiple autonomous AI agents that collaborate. And an analyst role says the company will be working on “persona development.” Often, government cyberattacks use social engineering, relying on convincing fake online accounts to infiltrate enemy communities and networks. (Forbes has previously reported on police contractors who’ve created such avatars with AI.)
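
CrewAI itself is an open-source Python framework, so the multi-agent pattern the job ads point to is easy to illustrate. A minimal, generic sketch follows; the agents and tasks are placeholders invented for illustration (nothing here comes from Twenty's ads), and an LLM backend (e.g., an OPENAI_API_KEY in the environment) is required for it to run:

```python
from crewai import Agent, Task, Crew

# Two cooperating autonomous agents; CrewAI feeds the output of the first
# task into the second. Roles and tasks are illustrative placeholders.
researcher = Agent(
    role="Researcher",
    goal="Collect background facts on a given topic",
    backstory="A meticulous analyst who gathers and verifies information.",
)
writer = Agent(
    role="Writer",
    goal="Turn research notes into a short summary",
    backstory="A concise technical writer.",
)

research_task = Task(
    description="List three key facts about token-bucket rate limiting.",
    expected_output="A bulleted list of three facts.",
    agent=researcher,
)
write_task = Task(
    description="Summarize the researcher's findings in one paragraph.",
    expected_output="A single-paragraph summary.",
    agent=writer,
)

# Tasks run sequentially by default.
crew = Crew(agents=[researcher, writer], tasks=[research_task, write_task])
print(crew.kickoff())
```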

Twenty’s executive team, according to its website, is stacked with former military and intelligence agents. CEO and cofounder Joe Lin is a former U.S. Navy Reserve officer who was previously VP of product management at cyber giant Palo Alto Networks. He joined Palo Alto after the firm acquired Expanse, where he helped national security clients determine where their networks were vulnerable. CTO Leo Olson also worked on the national security team at Expanse and was a signals intelligence officer in the U.S. Army. VP of engineering Skyler Onken spent over a decade at U.S. Cyber Command and the U.S. Army. The startup’s head of government relations, Adam Howard, spent years on the Hill, most recently working on the National Security Council transition team for the incoming Trump administration.

The U.S. isn’t the only country using AI to build out its hacking capabilities. Last week, AI giant Anthropic released some startling research: Chinese hackers were using its tools to carry out cyberattacks. The company said hackers had deployed Claude to spin up AI agents to do 90% of the work of scouting out targets and coming up with ideas on how to hack them.

It’s possible the U.S. could also be using OpenAI, Anthropic or Elon Musk’s xAI in offensive cyber operations. The Defense Department gave each company contracts worth up to $200 million for unspecified “frontier AI” projects. None have confirmed what they’re working on for the DOD.

Given its focus on simultaneous attacks on hundreds of targets, Twenty’s products appear to be a step up in terms of cyberwarfare automation.

By contrast, beltway contractor Two Six Technologies has received a number of contracts in the AI offensive cyber space, including one for $90 million in 2020, but its tools mostly assist humans rather than replace them. For the last six years, it’s been working on developing automated AI “to assist cyber battlespace” and “support development of cyber warfare strategies” under a project dubbed IKE. Reportedly, its AI was allowed to press ahead with carrying out an attack if the chances of success were high. The contract value was ramped up to $190 million by 2024, but there’s no indication IKE uses agents to carry out operations at the scale Twenty is claiming. Two Six did not respond to requests for comment.

AI is much more commonly used on the defensive side, particularly in enterprises. As Forbes reported earlier this week, an Israeli startup called Tenzai is tweaking AI models from OpenAI and Anthropic, among others, to try to find vulnerabilities in customer software, though its goal is red teaming, not hacking.

Kremlin Propagandists Weaponize OpenAI's Video Generator

NewsGuard's Reality Check
newsguardrealitycheck.com
Nov 17, 2025

What happened: In an effort to discredit the Ukrainian Armed Forces and undermine their morale at a critical juncture of the Russia-Ukraine war, Kremlin propagandists are weaponizing OpenAI’s new Sora 2 text-to-video tool to create fake, viral videos showing Ukrainian soldiers surrendering in tears.

Context: In a recent report, NewsGuard found that OpenAI’s new video generator tool Sora 2, which creates 10-second videos based on the user’s written prompt, advanced provably false claims on topics in the news 80 percent of the time when prompted to do so, demonstrating how the new and powerful technology could be easily weaponized by foreign malign actors.

A closer look: Indeed, so far in November 2025, NewsGuard has identified seven AI-generated videos presented as footage from the front lines in Pokrovsk, a key eastern Ukrainian city that experts expect to soon fall to Russia.

The videos, which received millions of views on X, TikTok, Facebook, and Telegram, showed scenes of Ukrainian soldiers surrendering en masse and begging Russia for forgiveness.

Actually: There is no evidence of mass Ukrainian surrenders in or around Pokrovsk.

The videos contain multiple inconsistencies, including gear and uniforms that do not match those used by the Ukrainian Armed Forces, unnatural faces, and mispronunciations of the names of Ukrainian cities. NewsGuard tested the videos with AI detector Hive, which found with 100 percent certainty that all seven were created with Sora 2. The videos either had the small Sora watermark or a blurry patch in the location where the watermark had been removed. Users shared both types as if they were authentic.

The AI-generated videos were shared by anonymous accounts that NewsGuard has found to regularly spread pro-Kremlin propaganda.

Ukraine’s Center for Countering Disinformation said in a Telegram post that the accounts “show signs of a coordinated network specifically created to promote Kremlin narratives among foreign audiences.”

In response to NewsGuard’s Nov. 12, 2025, emailed request for comment on the videos, OpenAI spokesperson Oscar Haines said “we’ll investigate” and asked for an extension to Nov. 13, 2025, to provide comment, which NewsGuard provided. However, Haines did not respond to follow-up inquiries.

This is not the first time Kremlin propagandists have weaponized OpenAI’s tools for propaganda. In April 2025, NewsGuard found that pro-Kremlin sources used OpenAI’s image generator to create images of action figure dolls depicting Ukrainian President Volodymyr Zelensky as a drug addict and corrupt warmonger.

Defending the cloud: Azure neutralized a record-breaking 15 Tbps DDoS attack | Microsoft Community Hub

techcommunity.microsoft.com
Sean_Whalen
Microsoft
Nov 17, 2025

On October 24, 2025, Azure DDoS Protection automatically detected and mitigated a multi-vector DDoS attack measuring 15.72 Tbps and nearly 3.64 billion packets per second (pps). This was the largest DDoS attack ever observed in the cloud, and it targeted a single endpoint in Australia.
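
Taken together, those two figures imply an average packet size of roughly 540 bytes, a quick sanity check worth making explicit:

```python
# Implied average packet size from the reported attack figures.
bits_per_second = 15.72e12      # 15.72 Tbps
packets_per_second = 3.64e9     # ~3.64 billion pps

bits_per_packet = bits_per_second / packets_per_second
print(f"~{bits_per_packet / 8:.0f} bytes per packet")  # -> ~540 bytes
```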

Azure’s globally distributed DDoS Protection infrastructure and continuous detection capabilities initiated mitigation automatically: malicious traffic was filtered and redirected, maintaining uninterrupted service availability for customer workloads.

The attack originated from the Aisuru botnet. Aisuru is a Turbo Mirai-class IoT botnet that frequently causes record-breaking DDoS attacks by exploiting compromised home routers and cameras, mainly in residential ISPs in the United States and other countries.

The attack involved extremely high-rate UDP floods targeting a specific public IP address, launched from over 500,000 source IPs across various regions. These sudden UDP bursts had minimal source spoofing and used random source ports, which helped simplify traceback and facilitated provider enforcement.
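
Minimal spoofing means the observed source addresses largely correspond to real infected devices, which is what makes traceback and provider enforcement tractable. As a rough illustration, a defender can rank the top UDP talkers toward a victim directly from flow logs. In this sketch the file name and column names are assumptions for illustration; real flow exports (NetFlow/IPFIX) vary:

```python
import csv
from collections import Counter

# Rank the top UDP packet sources targeting a single victim IP from a
# flow log. "flows.csv" and its columns ("src_ip", "dst_ip", "proto",
# "packets") are hypothetical; adapt to your flow-export format.
VICTIM = "203.0.113.10"  # placeholder address from the documentation range

packets_by_source = Counter()
with open("flows.csv", newline="") as f:
    for row in csv.DictReader(f):
        if row["dst_ip"] == VICTIM and row["proto"] == "UDP":
            packets_by_source[row["src_ip"]] += int(row["packets"])

# With little spoofing, these sources map closely to real compromised
# devices, enabling reporting to the originating ISPs.
for src, pkts in packets_by_source.most_common(10):
    print(src, pkts)
```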

Attackers are scaling with the internet itself. As fiber-to-the-home speeds rise and IoT devices get more powerful, the baseline for attack size keeps climbing.

As we approach the upcoming holiday season, it is essential to confirm that all internet-facing applications and workloads are adequately protected against DDoS attacks. Additionally, do not wait for an actual attack to assess your defensive capabilities or operational readiness; conduct regular simulations to identify and address potential issues proactively.