The European Correspondent
Dmitriy Beliaev
A Russian series released in October used AI to replace actor Maxim Vitorgan’s face – and removed his name from the credits. Vitorgan reported it himself on social media, while the streaming platform Kion offered no explanation.
It was the second time the actor had been digitally erased and replaced with AI – a punishment for his vocal opposition to the war in Ukraine. On the first day of the invasion in 2022, he posted a black square on Instagram with the caption “Shame” to his 700,000 followers. That led to his removal from another show in 2023.
Erasing “undesirable” actors, writers, and musicians has become routine in Russia, where censorship has tightened its grip on cultural life since the country’s full-scale invasion of Ukraine.
TV channels and streaming platforms now not only blur or replace actors with AI, but also cut entire scenes – scrubbing away unwanted dialogue, characters, or references that the state considers unwelcome.
In April 2025, a TV channel editing a 2010 film (which also featured Vitorgan) removed a map of Odesa and cut a reference to the 2006 deportation of Georgian citizens from Russia. In June, Russian streaming services removed a line mentioning Putin’s death from the 2024 Spanish thriller Rich Flu.
Censorship now extends far beyond politics, reshaping even harmless scenes: in early November, following a law banning so-called “LGBT propaganda”, a Russian online cinema cut a Fight Club (1999) scene showing men kissing.
It goes beyond films. Several broadcasters have been fined for airing music videos deemed “LGBT propaganda”. In January 2023, a court fined the TNT Music channel one million rubles (roughly €10,600) over the music video Hallucination by Regard and Years & Years.
A year later, another broadcaster, Tochka TV, was fined for airing a music video by pro-regime singer Nikolai Baskov that a court deemed “LGBT propaganda” because of “the lyrical subject’s relationship with a male”. The video had aired on television without issue before. After the new laws came in, some Russian artists began deleting their old videos from YouTube and social media.
Publishers are also blacking out entire paragraphs in books. Even a biography of Italian director Pier Paolo Pasolini was censored, with about a fifth of the text removed because it described the openly gay filmmaker’s personal life.
The invasion of Ukraine has triggered a kind of patriotic cultural revolution. Actors, directors, and musicians who publicly opposed the war have been effectively blacklisted – removed from the big screens, stripped of work, and, in many cases, pushed into exile. Some have been declared “foreign agents”, a status that severely restricts civil rights and professional opportunities.
Some songs by these “agents” are being removed from Russian streaming platforms, and performing them publicly can lead to fines or even arrest. In the most recent case, in October, several young street musicians in St Petersburg were arrested for singing songs by anti-war artists.
forbes.com
By Thomas Brewster, Forbes Staff.
Nov 15, 2025, 08:00am EST. Updated Nov 16, 2025, 06:40am EST.
The U.S. government has been contracting with stealth startup Twenty, which is working on AI agents and the automated hacking of foreign targets at massive scale.
The U.S. is quietly investing in AI agents for cyberwarfare, spending millions this year on a secretive startup that’s using AI for offensive cyberattacks on American enemies.
According to federal contracting records, an Arlington, Virginia-based stealth startup called Twenty, or XX, signed a contract with U.S. Cyber Command this summer worth up to $12.6 million. It scored a $240,000 research contract with the Navy, too. The company has received VC backing from In-Q-Tel, the nonprofit venture capital organization founded by the CIA, as well as Caffeinated Capital and General Catalyst. Twenty couldn’t be reached for comment at the time of publication.
Twenty’s contracts are a rare case of a VC-backed AI offensive cyber company landing Cyber Command work; typically, cyber contracts have gone either to small bespoke firms or to the old guard of defense contracting, like Booz Allen Hamilton or L3Harris.
Though the firm hasn’t launched publicly yet, its website states its focus is “transforming workflows that once took weeks of manual effort into automated, continuous operations across hundreds of targets simultaneously.” Twenty claims it is “fundamentally reshaping how the U.S. and its allies engage in cyber conflict.”
Its job ads reveal more. In one, Twenty is seeking a director of offensive cyber research, who will develop “advanced offensive cyber capabilities including attack path frameworks… and AI-powered automation tools.” AI engineer job ads indicate Twenty will be deploying open source tools like CrewAI, which is used to manage multiple autonomous AI agents that collaborate. And an analyst role says the company will be working on “persona development.” Often, government cyberattacks use social engineering, relying on convincing fake online accounts to infiltrate enemy communities and networks. (Forbes has previously reported on police contractors who’ve created such avatars with AI.)
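To give a sense of what orchestrating such agents involves, here is a minimal sketch of CrewAI’s basic pattern – agents with roles, tasks assigned to them, and a crew that runs the tasks in sequence. It is a deliberately benign, hypothetical example (the roles, goals, and domain here are invented for illustration), not anything from Twenty’s codebase:

```python
# Minimal CrewAI sketch: two cooperating agents, a researcher and a writer.
# Hypothetical illustration only; running it requires an LLM backend
# (e.g., OPENAI_API_KEY set in the environment).
from crewai import Agent, Task, Crew

researcher = Agent(
    role="Research analyst",
    goal="Summarize publicly known facts about a topic",
    backstory="An analyst who compiles open-source information.",
)

writer = Agent(
    role="Report writer",
    goal="Turn research notes into a short briefing",
    backstory="A technical writer who condenses findings.",
)

research_task = Task(
    description="Collect publicly available notes on the topic of IoT botnets",
    expected_output="A bullet list of findings",
    agent=researcher,
)

report_task = Task(
    description="Write a three-paragraph briefing from the findings",
    expected_output="A short briefing document",
    agent=writer,
)

# The crew runs the tasks in order, passing each task's output to the next.
crew = Crew(agents=[researcher, writer], tasks=[research_task, report_task])
result = crew.kickoff()
print(result)
```

The same pattern scales by adding agents and tasks, which is presumably what makes running many parallel, multi-step workflows “across hundreds of targets simultaneously” tractable.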
Twenty’s executive team, according to its website, is stacked with military and intelligence veterans. CEO and cofounder Joe Lin is a former U.S. Navy Reserve officer who was previously VP of product management at cyber giant Palo Alto Networks. He joined Palo Alto after the firm acquired Expanse, where he helped national security clients determine where their networks were vulnerable. CTO Leo Olson also worked on the national security team at Expanse and was a signals intelligence officer in the U.S. Army. VP of engineering Skyler Onken spent over a decade at U.S. Cyber Command and in the U.S. Army. The startup’s head of government relations, Adam Howard, spent years on the Hill, most recently working on the National Security Council transition team for the incoming Trump administration.
The U.S. government isn’t the only country using AI to build out its hacking capabilities. Last week, AI giant Anthropic released some startling research: Chinese hackers were using its tools to carry out cyberattacks. The company said hackers had deployed Claude to spin up AI agents to do 90% of the work on scouting out targets and coming up with ideas on how to hack them.
It’s possible the U.S. could also be using OpenAI, Anthropic or Elon Musk’s xAI in offensive cyber operations. The Defense Department gave each company contracts worth up to $200 million for unspecified “frontier AI” projects. None have confirmed what they’re working on for the DOD.
Given its focus on simultaneous attacks on hundreds of targets, Twenty’s products appear to be a step up in terms of cyberwarfare automation.
By contrast, beltway contractor Two Six Technologies has received a number of contracts in the AI offensive cyber space, including one for $90 million in 2020, but its tools mostly assist humans rather than replace them. For the last six years, it has been developing automated AI “to assist cyber battlespace” and “support development of cyber warfare strategies” under a project dubbed IKE. Reportedly, its AI was allowed to press ahead with an attack if the chances of success were high. The contract value was ramped up to $190 million by 2024, but there is no indication IKE uses agents to carry out operations at the scale Twenty is claiming. Two Six did not respond to requests for comment.
AI is much more commonly used on the defensive side, particularly in enterprises. As Forbes reported earlier this week, an Israeli startup called Tenzai is tweaking AI models from OpenAI and Anthropic, among others, to try to find vulnerabilities in customer software, though its goal is red teaming, not hacking.
NewsGuard's Reality Check
newsguardrealitycheck.com
Nov 17, 2025
What happened: In an effort to discredit the Ukrainian Armed Forces and undermine their morale at a critical juncture of the Russia-Ukraine war, Kremlin propagandists are weaponizing OpenAI’s new Sora 2 text-to-video tool to create fake, viral videos showing Ukrainian soldiers surrendering in tears.
Context: In a recent report, NewsGuard found that OpenAI’s new video generator Sora 2, which creates 10-second videos from a user’s written prompt, advanced provably false claims on topics in the news 80 percent of the time when prompted to do so – demonstrating how easily the powerful new technology can be weaponized by foreign malign actors.
A closer look: Indeed, so far in November 2025, NewsGuard has identified seven AI-generated videos presented as footage from the front lines in Pokrovsk, a key eastern Ukrainian city that experts expect to soon fall to Russia.
The videos, which received millions of views on X, TikTok, Facebook, and Telegram, showed scenes of Ukrainian soldiers surrendering en masse and begging Russia for forgiveness.
Actually: There is no evidence of mass Ukrainian surrenders in or around Pokrovsk.
The videos contain multiple inconsistencies, including gear and uniforms that do not match those used by the Ukrainian Armed Forces, unnatural faces, and mispronunciations of the names of Ukrainian cities. NewsGuard tested the videos with the AI detector Hive, which found with 100 percent certainty that all seven were created with Sora 2. The videos either had the small Sora watermark or a blurry patch where the watermark had been removed; users shared both types as if they were authentic.
The AI-generated videos were shared by anonymous accounts that NewsGuard has found to regularly spread pro-Kremlin propaganda.
Ukraine’s Center for Countering Disinformation said in a Telegram post that the accounts “show signs of a coordinated network specifically created to promote Kremlin narratives among foreign audiences.”
In response to NewsGuard’s Nov. 12, 2025, emailed request for comment on the videos, OpenAI spokesperson Oscar Haines said “we’ll investigate” and asked for an extension to Nov. 13, 2025, to provide comment, which NewsGuard provided. However, Haines did not respond to follow-up inquiries.
This is not the first time Kremlin propagandists have weaponized OpenAI’s tools. In April 2025, NewsGuard found that pro-Kremlin sources used OpenAI’s image generator to create images of action figures depicting Ukrainian President Volodymyr Zelensky as a drug addict and corrupt warmonger.
techcommunity.microsoft.com
Sean_Whalen
Microsoft
Nov 17, 2025
On October 24, 2025, Azure DDoS Protection automatically detected and mitigated a multi-vector DDoS attack measuring 15.72 Tbps and nearly 3.64 billion packets per second (pps). It was the largest DDoS attack ever observed in the cloud, and it targeted a single endpoint in Australia.
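The two peak figures are consistent with each other – a quick back-of-the-envelope division gives the average packet size they imply:

```python
# Average packet size implied by the reported peak figures.
bits_per_second = 15.72e12       # 15.72 Tbps
packets_per_second = 3.64e9      # 3.64 billion pps

avg_bytes = bits_per_second / packets_per_second / 8
print(f"{avg_bytes:.0f} bytes per packet")  # ~540 bytes per packet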
Azure’s globally distributed DDoS Protection infrastructure and continuous detection capabilities initiated mitigation measures. Malicious traffic was filtered and redirected, maintaining uninterrupted service availability for customer workloads.
The attack originated from the Aisuru botnet. Aisuru is a Turbo Mirai-class IoT botnet that frequently causes record-breaking DDoS attacks by exploiting compromised home routers and cameras, mainly in residential ISPs in the United States and other countries.
The attack involved extremely high-rate UDP floods targeting a specific public IP address, launched from over 500,000 source IPs across various regions. The sudden UDP bursts had minimal source spoofing and used random source ports, which simplified traceback and facilitated provider enforcement.
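As a toy illustration of the kind of signal such mitigation keys on – a per-destination packet rate suddenly far above its recent baseline – consider the simplified sketch below. This is not Azure’s detection logic, just the general idea of rate-spike detection:

```python
# Toy flood detector: flag a destination IP whose packet rate in the
# current one-second window far exceeds its recent baseline.
# A simplified sketch of the general idea, not Azure's detection logic.
from collections import defaultdict, deque

WINDOW = 30          # seconds of history kept per destination
SPIKE_FACTOR = 50    # current rate must exceed baseline by this factor
MIN_PPS = 10_000     # ignore destinations below this absolute rate

history = defaultdict(lambda: deque(maxlen=WINDOW))  # dst_ip -> recent pps

def observe_second(dst_ip: str, pps: int) -> bool:
    """Record one second of traffic; return True if it looks like a flood."""
    past = history[dst_ip]
    baseline = (sum(past) / len(past)) if past else 0
    past.append(pps)
    if pps < MIN_PPS:
        return False
    # No history yet: any high absolute rate is treated as suspicious.
    return baseline == 0 or pps > SPIKE_FACTOR * baseline

# Example: a quiet destination suddenly hit by a UDP burst.
for second, rate in enumerate([200, 250, 220, 5_000_000]):
    if observe_second("203.0.113.7", rate):
        print(f"second {second}: possible flood at {rate:,} pps")
```

Real systems work on aggregated flow telemetry across many scrubbing centers rather than a single counter, but the spike-over-baseline principle is the same.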
Attackers are scaling with the internet itself. As fiber-to-the-home speeds rise and IoT devices get more powerful, the baseline for attack size keeps climbing.
As we approach the upcoming holiday season, it is essential to confirm that all internet-facing applications and workloads are adequately protected against DDoS attacks. Do not wait for an actual attack to assess your defensive capabilities or operational readiness; conduct regular simulations to identify and address potential issues proactively.