Google has confirmed that hackers created a fraudulent account in its Law Enforcement Request System (LERS), the platform that law enforcement uses to submit official data requests to the company.
"We have identified that a fraudulent account was created in our system for law enforcement requests and have disabled the account," Google told BleepingComputer.
"No requests were made with this fraudulent account, and no data was accessed."
The FBI declined to comment on the threat actor's claims.
This statement comes after a group of threat actors calling itself "Scattered Lapsus$ Hunters" claimed on Telegram to have gained access to both Google's LERS portal and the FBI's eCheck background check system.
The group posted screenshots of their alleged access shortly after announcing on Thursday that they were "going dark."
The hackers' claims raised concerns as both LERS and the FBI's eCheck system are used by police and intelligence agencies worldwide to submit subpoenas, court orders, and emergency disclosure requests.
Unauthorized access could allow attackers to impersonate law enforcement and gain access to sensitive user data that should normally be protected.
The "Scattered Lapsus$ Hunters" group, which claims to consist of members linked to the Shiny Hunters, Scattered Spider, and Lapsus$ extortion groups, is behind widespread data theft attacks targeting Salesforce data this year.
The threat actors initially utilized social engineering scams to trick employees into connecting Salesforce's Data Loader tool to corporate Salesforce instances, which was then used to steal data and extort companies.
The threat actors later breached Salesloft's GitHub repository and used Trufflehog to scan for secrets exposed in the private source code. This allowed them to find authentication tokens for Salesloft Drift, which were used to conduct further Salesforce data theft attacks.
These attacks have impacted many companies, including Google, Adidas, Qantas, Allianz Life, Cisco, Kering, Louis Vuitton, Dior, Tiffany & Co, Cloudflare, Zscaler, Elastic, Proofpoint, JFrog, Rubrik, Palo Alto Networks, and many more.
Google Threat Intelligence (Mandiant) has been a thorn in the side of these threat actors, being the first to disclose the Salesforce and Salesloft attacks and warning companies to shore up their defenses.
Since then, the threat actors have been taunting the FBI, Google, Mandiant, and security researchers in posts to various Telegram channels.
Late Thursday night, the group posted a lengthy message to a BreachForums-linked domain, causing some to believe the threat actors were retiring.
"This is why we have decided that silence will now be our strength," wrote the threat actors.
"You may see our names in new databreach disclosure reports from the tens of other multi billion dollar companies that have yet to disclose a breach, as well as some governmental agencies, including highly secured ones, that does not mean we are still active."
However, cybersecurity researchers who spoke with BleepingComputer believe the group will continue conducting attacks quietly despite their claims of going dark.
Update 9/15/25: Article title updated as some felt it indicated a breach.
developers.googleblog.com
JULY 18, 2024
Sumit Chandel
Developer Relations Engineer
Understand how you will be impacted by our decision to turn off the serving portion of Google URL Shortener.
Updated August 1, 2025: While we previously announced discontinuing support for all goo.gl URLs after August 25, 2025, we've adjusted our approach in order to preserve actively used links.
We understand these links are embedded in countless documents, videos, posts and more, and we appreciate the input received.
Nine months ago, we redirected URLs that showed no activity in late 2024 to a message specifying that the link would be deactivated in August, and these are the only links targeted to be deactivated. If you get a message that states, “This link will no longer work in the near future”, the link won't work after August 25 and we recommend transitioning to another URL shortener if you haven’t already.
All other goo.gl links will be preserved and will continue to function as normal. To check if your link will be retained, visit the link today. If your link redirects you without a message, it will continue to work.
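The manual check described above can be scripted. Below is a minimal sketch using only Python's standard library; the warning text matched is the interstitial wording quoted above, and the function names are illustrative, not part of any Google tooling:

```python
import urllib.request

# Wording of the interstitial warning, per Google's announcement.
DEACTIVATION_NOTICE = "This link will no longer work in the near future"

def has_deactivation_notice(html: str) -> bool:
    """Return True if the page body contains the deactivation warning."""
    return DEACTIVATION_NOTICE in html

def check_goo_gl_link(url: str) -> bool:
    """Fetch a goo.gl link (following redirects) and report whether it
    is slated for deactivation after August 25, 2025."""
    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode("utf-8", errors="replace")
    return has_deactivation_notice(body)

# Example (requires network access; the short link is a placeholder):
# if check_goo_gl_link("https://goo.gl/example"):
#     print("Transition this link to another shortener.")
```

Links that redirect without serving the warning page should continue to work, per the update above.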
In 2018, we announced the deprecation and transition of Google URL Shortener because of the changes we’ve seen in how people find content on the internet, and the number of new popular URL shortening services that emerged in that time. This meant that we no longer accepted new URLs to shorten but that we would continue serving existing URLs.
Over time, these existing URLs saw less and less traffic as the years went on - in fact more than 99% of them had no activity in the last month.
As such, we will be turning off Google URL Shortener. Please read on below to understand more about how this may impact you.
Who is impacted?
Any developers using links built with the Google URL Shortener in the form https://goo.gl/* will be impacted, and these URLs will no longer return a response after August 25th, 2025. We recommend transitioning these links to another URL shortener provider.
Note that goo.gl links generated via Google apps (such as Maps sharing) will continue to function.
What to expect
Starting August 23, 2024, a percentage of existing goo.gl links will begin displaying an interstitial page, notifying your users that the link will no longer be supported after August 25, 2025, before navigating to the original target page.
Over time the percentage of links that will show the interstitial page will increase until the shutdown date. This interstitial page should help you track and adjust any affected links that you will need to transition as part of this change. We will continue to display this interstitial page until the shutdown date after which all links served will return a 404 response.
Note that the interstitial page may cause disruptions in the current flow of your goo.gl links. For example, if you are using other 302 redirects, the interstitial page may prevent the redirect flow from completing correctly. If you’ve embedded social metadata in your destination page, the interstitial page will likely cause these to no longer show up where the initial link is displayed. For this reason, we advise transitioning these links as soon as possible.
Note: In the event the interstitial page is disrupting your use cases, you can suppress it by adding the query param “si=1” to existing goo.gl links.
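Appending the documented si=1 parameter without clobbering any query string already present on the link can be done with Python's urllib; a small sketch (the function name is illustrative):

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def suppress_interstitial(url: str) -> str:
    """Add the si=1 query parameter to a goo.gl link, preserving
    any query parameters the link already carries."""
    parts = urlsplit(url)
    query = dict(parse_qsl(parts.query))
    query["si"] = "1"
    return urlunsplit(parts._replace(query=urlencode(query)))
```

For example, `suppress_interstitial("https://goo.gl/abc123")` yields `https://goo.gl/abc123?si=1`.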
We understand the transition away from using goo.gl short links may cause some inconvenience. If you have any questions or concerns, please reach out to us at Firebase Support. Thank you for using the service and we hope you join us in moving forward into new and innovative ways for navigating web and app experiences.
cyberscoop.com
By
Tim Starks
August 27, 2025
Google says it is starting a cyber “disruption unit,” a development that arrives in a potentially shifting U.S. landscape toward more offensive-oriented approaches in cyberspace.
But the contours of that larger shift are still unclear, as is whether, or to what extent, such a shift is even possible. While there's some momentum in policymaking and industry circles to put a greater emphasis on more aggressive strategies and tactics to respond to cyberattacks, there are also major barriers.
Sandra Joyce, vice president of Google Threat Intelligence Group, said at a conference Tuesday that more details of the disruption unit would be forthcoming in future months, but the company was looking for “legal and ethical disruption” options as part of the unit’s work.
“What we’re doing in the Google Threat Intelligence Group is intelligence-led proactive identification of opportunities where we can actually take down some type of campaign or operation,” she said at the Center for Cybersecurity Policy and Law event, where she called for partners in the project. “We have to get from a reactive position to a proactive one … if we’re going to make a difference right now.”
The boundaries in the cyber domain between actions considered “cyber offense” and those meant to deter cyberattacks are often unclear. The distinction between “active defense” and “hacking back” is a common dividing line. On the less aggressive end, “active defense” can include tactics like setting up honeypots designed to lure and trick attackers. At the more extreme end, “hacking back” would typically involve actions that attempt to deliberately destroy an attacker’s systems or networks. Disruption operations might fall between the two, like Microsoft taking down botnet infrastructure through the courts or the Justice Department seizing stolen cryptocurrency from hackers.
Trump administration officials and some in Congress have been advocating for the U.S. government to go on offense in cyberspace, saying that foreign hackers and criminals aren’t suffering sufficient consequences. Much-criticized legislation to authorize private sector “hacking back” has long stalled in Congress, but some have recently pushed a version of the idea where the president would give “letters of marque” like those for early-U.S. sea privateers to companies authorizing them to legally conduct offensive cyber operations currently forbidden under U.S. law.
The private sector has some catching up to do if there’s to be a worthy field of firms able to focus on offense, experts say.
John Keefe, a former National Security Council official from 2022 to 2024 and National Security Agency official before that, said there had been government talks about a “narrow” letters of marque approach “with the private sector companies that we thought had the capabilities.” The concept was centered on ransomware, Russia and rules of the road for those companies to operate. “It wasn’t going to be the Wild West,” said Keefe, now founder of Ex Astris Scientia, speaking like others in this story at Tuesday’s conference.
The companies with an emphasis on offense largely have only one customer — and that’s governments, said Joe McCaffrey, chief information security officer at defense tech company Anduril Industries. “It’s a really tough business to be in,” he said. “If you develop an exploit, you get to sell to one person legally, and then it gets burned, and you’re back again.”
By their nature, offensive cyber operations in the federal government are already very time- and manpower-intensive, said Brandon Wales, a former top official at the Cybersecurity and Infrastructure Security Agency and now vice president of cybersecurity at SentinelOne. Private sector companies could make their mark by innovating ways to speed up and expand the number of those operations, he said.
Overall, among the options of companies that could do more offensive work, the “industry doesn’t exist yet, but I think it’s coming,” said Andrew McClure, managing director at Forgepoint Capital.
Certainly Congress would have to clarify what companies are able to do legally as well, Wales said.
But that’s just the industry side. There’s plenty more to weigh when stepping up offense.
“However we start, we need to make sure that we are having the ability to measure impact,” said Megan Stifel, chief strategy officer for the Institute for Security and Technology. “Is this working? How do we know?”
If there was a consensus at the conference, it was that the United States — be it the government or private sector — needs to do more to deter adversaries in cyberspace by going after them more in cyberspace.
One knock on that idea has been that the United States can least afford to get into a cyber shooting match, since it’s more reliant on tech than other nations and an escalation would hurt the U.S. the most by presenting more vulnerable targets for enemies. But Dmitri Alperovitch, chairman of the Silverado Policy Accelerator, said that idea was wrong for a couple reasons, among them that other nations have become just as reliant on tech, too.
And “the very idea that in this current bleak state of affairs, engaging in cyber offense is escalatory, I propose to you, is laughable,” he said. “After all, what are our adversaries going to escalate to in response? Ransom more of our hospitals, penetrate more of our water and electric utilities, steal even more of our IP and financial assets?”
Alperovitch continued: “Not only is engaging in thoughtful and careful cyber offense not escalatory, but not doing so is.”
forbes.com 20.08.2025 - xAI published conversations with Grok and made them searchable on Google, including a plan to assassinate Elon Musk and instructions for making fentanyl and bombs.
Elon Musk’s AI firm, xAI, has published the chat transcripts of hundreds of thousands of conversations between its chatbot Grok and the bot’s users — in many cases, without those users’ knowledge or permission.
Anytime a Grok user clicks the “share” button on one of their chats with the bot, a unique URL is created, allowing them to share the conversation via email, text message or other means. Unbeknownst to users, though, that unique URL is also made available to search engines, like Google, Bing and DuckDuckGo, making them searchable to anyone on the web. In other words, on Musk’s Grok, hitting the share button means that a conversation will be published on Grok’s website, without warning or a disclaimer to the user.
Today, a Google search for Grok chats shows that the search engine has indexed more than 370,000 user conversations with the bot. The shared pages revealed conversations between Grok users and the LLM that range from simple business tasks like writing tweets to generating images of a fictional terrorist attack in Kashmir and attempting to hack into a crypto wallet. Forbes reviewed conversations where users asked intimate questions about medicine and psychology; some even revealed the name, personal details and at least one password shared with the bot by a Grok user. Image files, spreadsheets and some text documents uploaded by users could also be accessed via the Grok shared page.
Among the indexed conversations were some initiated by British journalist Andrew Clifford, who used Grok to summarize the front pages of newspapers and compose tweets for his website Sentinel Current. Clifford told Forbes that he was unaware that clicking the share button would mean that his prompt would be discoverable on Google. “I would be a bit peeved but there was nothing on there that shouldn’t be there,” said Clifford, who has now switched to using Google’s Gemini AI.
Not all the conversations, though, were as benign as Clifford’s. Some were explicit, bigoted and violated xAI’s rules. The company prohibits use of its bot to “promot[e] critically harming human life” or to “develop bioweapons, chemical weapons, or weapons of mass destruction,” but in published, shared conversations easily found via a Google search, Grok offered users instructions on how to make illicit drugs like fentanyl and methamphetamine, code a self-executing piece of malware, and construct a bomb; it also detailed methods of suicide. Grok even offered a detailed plan for the assassination of Elon Musk. Via the “share” function, the illicit instructions were then published on Grok’s website and indexed by Google.
xAI did not respond to a detailed request for comment.
xAI is not the only AI startup to have published users’ conversations with its chatbots. Earlier this month, users of OpenAI’s ChatGPT were alarmed to find that their conversations were appearing in Google search results, though the users had opted to make those conversations “discoverable” to others. But after outcry, the company quickly changed its policy. Calling the indexing “a short-lived experiment,” OpenAI chief information security officer Dane Stuckey said in a post on X that it would be discontinued because it “introduced too many opportunities for folks to accidentally share things they didn’t intend to.”
After OpenAI canned its share feature, Musk took a victory lap. Grok’s X account claimed at the time that it had no such sharing feature, and Musk tweeted in response, “Grok ftw” [for the win]. It’s unclear when Grok added the share feature, but X users have been warning since January that Grok conversations were being indexed by Google.
Some of the conversations asking Grok for instructions about how to manufacture drugs and bombs were likely initiated by security engineers, redteamers, or Trust & Safety professionals. But in at least a few cases, Grok’s sharing setting misled even professional AI researchers.
Nathan Lambert, a computational scientist at the Allen Institute for AI, used Grok to create a summary of his blog posts to share with his team. He was shocked to learn from Forbes that his Grok prompt and the AI’s response was indexed on Google. “I was surprised that Grok chats shared with my team were getting automatically indexed on Google, despite no warnings of it, especially after the recent flare-up with ChatGPT,” said the Seattle-based researcher.
Google allows website owners to choose when and how their content is indexed for search. “Publishers of these pages have full control over whether they are indexed,” said Google spokesperson Ned Adriance in a statement. Google itself previously allowed chats with its AI chatbot, Bard, to be indexed, but it removed them from search in 2023. Meta continues to allow its shared searches to be discoverable by search engines, Business Insider reported.
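The control Google refers to is the standard robots exclusion mechanism: a publisher can keep shared-chat pages out of search results with a robots meta tag (or an equivalent X-Robots-Tag response header). A rough sketch of a checker for the meta-tag case follows; it is a simplification (it assumes the name attribute precedes content, and ignores the header variant), and the function name is illustrative:

```python
import re

# Simplified match for <meta name="robots" content="...">;
# assumes the name attribute appears before content.
_META_ROBOTS = re.compile(
    r'<meta\s+[^>]*name=["\']robots["\'][^>]*content=["\']([^"\']*)["\']',
    re.IGNORECASE,
)

def page_is_noindexed(html: str) -> bool:
    """Return True if the page opts out of search indexing via a
    robots meta tag containing the noindex directive."""
    m = _META_ROBOTS.search(html)
    return bool(m) and "noindex" in m.group(1).lower()
```

A page that omits the tag entirely, as the shared-chat pages at issue apparently did, is eligible for indexing by default.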
Opportunists are beginning to notice, and take advantage of, Grok’s published chats. On LinkedIn and the forum BlackHatWorld, marketers have discussed intentionally creating and sharing conversations with Grok to increase the prominence and name recognition of their businesses and products in Google search results. (It is unclear how effective these efforts would be.) Satish Kumar, CEO of SEO agency Pyrite Technologies, demonstrated to Forbes how one business had used Grok to manipulate results for a search of companies that will write your PhD dissertation for you.
“Every shared chat on Grok is fully indexable and searchable on Google,” he said. “People are actively using tactics to push these pages into Google’s index.”
arstechnica.com - Disclosure comes two months after Google warned the world of ongoing spree.
In June, Google said it unearthed a campaign that was mass-compromising accounts belonging to customers of Salesforce. The means: an attacker pretending to be someone in the customer's IT department feigning some sort of problem that required immediate access to the account. Two months later, Google has disclosed that it, too, was a victim.
The series of hacks is being carried out by financially motivated threat actors out to steal data in hopes of selling it back to the targets at sky-high prices. Rather than exploiting software or website vulnerabilities, they take a much simpler approach: calling the target and asking for access. The technique has proven remarkably successful. Companies whose Salesforce instances have been breached in the campaign, Bleeping Computer reported, include Adidas, Qantas, Allianz Life, Cisco, and the LVMH subsidiaries Louis Vuitton, Dior, and Tiffany & Co.
Better late than never
The attackers abuse a Salesforce feature that allows customers to link their accounts to third-party apps that integrate data with in-house systems for blogging, mapping tools, and similar resources. The attackers in the campaign contact employees and instruct them to connect an external app to their Salesforce instance. As the employee complies, the attackers ask the employee for an eight-digit security code that the Salesforce interface requires before a connection is made. The attackers then use this number to gain access to the instance and all data stored in it.
Google said that its Salesforce instance was among those that were compromised. The breach occurred in June, but Google only disclosed it on Tuesday, presumably because the company only learned of it recently.
“Analysis revealed that data was retrieved by the threat actor during a small window of time before the access was cut off,” the company said.
Data retrieved by the attackers was limited to business information such as business names and contact details, which Google said was “largely public” already.
Google initially attributed the attacks to a group tracked as UNC6040. The company went on to say that a second group, UNC6240, has engaged in extortion activities, “sometimes several months after” the UNC6040 intrusions. This group brands itself under the name ShinyHunters.
“In addition, we believe threat actors using the 'ShinyHunters' brand may be preparing to escalate their extortion tactics by launching a data leak site (DLS),” Google said. “These new tactics are likely intended to increase pressure on victims, including those associated with the recent UNC6040 Salesforce-related data breaches.”
With so many companies falling to this scam—including Google, which only disclosed the breach two months after it happened—the chances are good that there are many more we don’t know about. All Salesforce customers should carefully audit their instances to see what external sources have access to them. They should also implement multifactor authentication and train staff to detect scams before they succeed.
techcrunch.com - Google’s AI-powered bug hunter has just reported its first batch of security vulnerabilities.
Heather Adkins, Google’s vice president of security, announced Monday that its LLM-based vulnerability researcher Big Sleep found and reported 20 flaws in various popular open source software.
Adkins said that Big Sleep, which is developed by the company’s AI department DeepMind and its elite team of hackers, Project Zero, reported its first-ever vulnerabilities, mostly in open source software such as the audio and video library FFmpeg and the image-editing suite ImageMagick.
Because the vulnerabilities are not yet fixed, details of their impact and severity are not available; Google withholds such details until bugs are patched, which is standard policy. But the simple fact that Big Sleep found these vulnerabilities is significant, as it shows these tools are starting to get real results, even if a human was involved in this case.
“To ensure high quality and actionable reports, we have a human expert in the loop before reporting, but each vulnerability was found and reproduced by the AI agent without human intervention,” Google’s spokesperson Kimberly Samra told TechCrunch.
Royal Hansen, Google’s vice president of engineering, wrote on X that the findings demonstrate “a new frontier in automated vulnerability discovery.”
LLM-powered tools that can look for and find vulnerabilities are already a reality. Other than Big Sleep, there’s RunSybil and XBOW, among others.
venturebeat.com - OpenAI abruptly removed a ChatGPT feature that made conversations searchable on Google, sparking privacy concerns and industry-wide scrutiny of AI data handling.
OpenAI made a rare about-face Thursday, abruptly discontinuing a feature that allowed ChatGPT users to make their conversations discoverable through Google and other search engines. The decision came within hours of widespread social media criticism and represents a striking example of how quickly privacy concerns can derail even well-intentioned AI experiments.
The feature, which OpenAI described as a “short-lived experiment,” required users to actively opt in by sharing a chat and then checking a box to make it searchable. Yet the rapid reversal underscores a fundamental challenge facing AI companies: balancing the potential benefits of shared knowledge with the very real risks of unintended data exposure.
How thousands of private ChatGPT conversations became Google search results
The controversy erupted when users discovered they could search Google using the query “site:chatgpt.com/share” to find thousands of strangers’ conversations with the AI assistant. What emerged painted an intimate portrait of how people interact with artificial intelligence — from mundane requests for bathroom renovation advice to deeply personal health questions and professionally sensitive resume rewrites. (Given the personal nature of these conversations, which often contained users’ names, locations, and private circumstances, VentureBeat is not linking to or detailing specific exchanges.)
“Ultimately we think this feature introduced too many opportunities for folks to accidentally share things they didn’t intend to,” OpenAI’s security team explained on X, acknowledging that the guardrails weren’t sufficient to prevent misuse.
techcrunch.com - Google has suspended the account of phone surveillance operator Catwatchful, which was using the tech giant’s servers to host and operate the monitoring software.
Google’s move to shut down the spyware operation comes a month after TechCrunch alerted the technology giant the operator was hosting the operation on Firebase, one of Google’s developer platforms. Catwatchful relied on Firebase to host and store vast amounts of data stolen from thousands of phones compromised by its spyware.
“We’ve investigated these reported Firebase operations and suspended them for violating our terms of service,” Google spokesperson Ed Fernandez told TechCrunch in an email this week.
When asked by TechCrunch, Google would not say why it took a month to investigate and suspend the operation’s Firebase account. The company’s own terms of use broadly prohibit its customers from hosting malicious software or spyware operations on its platforms. As a for-profit company, Google has a commercial interest in retaining customers who pay for its services.
As of Friday, Catwatchful is no longer functioning nor does it appear to transmit or receive data, according to a network traffic analysis of the spyware carried out by TechCrunch.
Catwatchful was an Android-specific spyware that presented itself as a child-monitoring app “undetectable” to the user. Much like other phone spyware apps, Catwatchful required its customers to physically install it on a person’s phone, which usually requires prior knowledge of their passcode. These monitoring apps are often called “stalkerware” (or spouseware) for their propensity to be used for non-consensual surveillance of spouses and romantic partners, which is illegal.
Once installed, the app was designed to stay hidden from the victim’s home screen, and upload the victim’s private messages, photos, location data, and more to a web dashboard viewable by the person who planted the app.
TechCrunch first learned of Catwatchful in mid-June after security researcher Eric Daigle identified a security bug that was exposing the spyware operation’s back-end database.
The bug allowed unauthenticated access to the database, meaning no passwords or credentials were needed to see the data inside. The database contained more than 62,000 Catwatchful customer email addresses and plaintext passwords, as well as records on 26,000 victim devices compromised by the spyware.
The data also exposed the administrator behind the operation, a Uruguay-based developer called Omar Soca Charcov. TechCrunch contacted Charcov to ask if he was aware of the security lapse, or if he planned to notify affected individuals about the breach. Charcov did not respond.
With no clear indication that Charcov would disclose the breach, TechCrunch provided a copy of the Catwatchful database to data breach notification service Have I Been Pwned.
Catwatchful is the latest in a long list of surveillance operations that have experienced a data breach in recent years, in large part due to shoddy coding and poor cybersecurity practices. Catwatchful is by TechCrunch’s count the fifth spyware operation this year to have spilled users’ data, and the most recent entry in a list of more than two-dozen known spyware operations since 2017 that have exposed their banks of data.
As we noted in our previous story: Android users can identify if the Catwatchful spyware is installed, even if the app is hidden, by dialing 543210 into your Android phone app’s keypad and pressing the call button.
Google on Monday released a fresh Chrome 137 update to address three vulnerabilities, including a high-severity bug exploited in the wild.
Tracked as CVE-2025-5419, the zero-day is described as an out-of-bounds read and write issue in the V8 JavaScript engine.
“Google is aware that an exploit for CVE-2025-5419 exists in the wild,” the internet giant’s advisory reads. No further details on the security defect or the exploit have been provided.
However, the company credited Clement Lecigne and Benoît Sevens of Google Threat Analysis Group (TAG) for reporting the issue.
TAG researchers previously reported multiple vulnerabilities exploited by commercial surveillance software vendors, including such bugs in Chrome. Flaws in Google’s browser are often exploited by spyware vendors and CVE-2025-5419 could be no different.
According to a NIST advisory, the exploited zero-day “allowed a remote attacker to potentially exploit heap corruption via a crafted HTML page”. It should be noted that the exploitation of out-of-bounds defects often leads to arbitrary code execution.
The latest browser update also addresses CVE-2025-5068, a medium-severity use-after-free in Blink that earned the reporting researcher a $1,000 bug bounty. No reward will be handed out for the zero-day.
The latest Chrome iteration is now rolling out as version 137.0.7151.68/.69 for Windows and macOS, and as version 137.0.7151.68 for Linux.
This Google Threat Intelligence Group report presents an analysis of detected 2024 zero-day exploits.
Google Threat Intelligence Group (GTIG) tracked 75 zero-day vulnerabilities exploited in the wild in 2024, a decrease from the number we identified in 2023 (98 vulnerabilities), but still an increase from 2022 (63 vulnerabilities). We divided the reviewed vulnerabilities into two main categories: end-user platforms and products (e.g., mobile devices, operating systems, and browsers) and enterprise-focused technologies, such as security software and appliances.
Vendors continue to drive improvements that make some zero-day exploitation harder, demonstrated by both dwindling numbers across multiple categories and reduced observed attacks against previously popular targets. At the same time, commercial surveillance vendors (CSVs) appear to be increasing their operational security practices, potentially leading to decreased attribution and detection.
We see zero-day exploitation targeting a greater number and wider variety of enterprise-specific technologies, although these technologies still remain a smaller proportion of overall exploitation when compared to end-user technologies. While the historic focus on the exploitation of popular end-user technologies and their users continues, the shift toward increased targeting of enterprise-focused products will require a wider and more diverse set of vendors to increase proactive security measures in order to reduce future zero-day exploitation attempts.
Google intelligence report finds UK is a particular target of IT worker ploy that sends wages to Kim Jong Un’s state
British companies are being urged to carry out job interviews for IT workers on video or in person to head off the threat of giving jobs to fake North Korean employees.
The warning was made after analysts said that the UK had become a prime target for hoax IT workers deployed by the Democratic People’s Republic of Korea. They are typically hired to work remotely, enabling them to escape detection and send their wages to Kim Jong-un’s state.
Google said in a report this month that a case uncovered last year involved a single North Korean worker deploying at least 12 personae across Europe and the US. The IT worker was seeking jobs within the defence industry and government sectors. Under a new tactic, the bogus IT professionals have been threatening to release sensitive company data after being fired.
Like any garden, the digital landscape experiences the emergence of unexpected blooms. Among the helpful flora of browser and application extensions, some appear with intentions less than pure. These deceptive ones, often born from a fleeting desire for illicit gain or mischievous disruption, may possess a certain transient beauty in their ingenuity. They arrive, sometimes subtly flawed in their execution, yet are driven by an aspiration to infiltrate our digital lives, to harvest our data, or to simply sow chaos.
Google Threat Intelligence Group (GTIG) has observed increasing efforts from several Russian state-aligned threat actors to compromise Signal Messenger accounts used by individuals of interest to Russia's intelligence services. While this emerging operational interest has likely been sparked by wartime demands to gain access to sensitive government and military communications in the context of Russia's re-invasion of Ukraine, we anticipate the tactics and methods used to target Signal will grow in prevalence in the near-term and proliferate to additional threat actors and regions outside the Ukrainian theater of war.
A shell-shocked owner woke to find a barrage of one-star reviews had dragged her Google rating from 4.9 to 2.3 virtually overnight.