Key findings from our analysis include:
- Advanced Surveillance Capabilities
- Comprehensive Data Exfiltration
- Persistence Mechanisms
- Abuse of Legitimate Services
- Indicators of Compromise (IoCs)
- Need for Collective Protection
Veeam released security updates today to address two Service Provider Console (VSPC) vulnerabilities, including a critical remote code execution (RCE) flaw discovered during internal testing.
VSPC, described by the company as a remotely managed BaaS (Backup as a Service) and DRaaS (Disaster Recovery as a Service) platform, is used by service providers to monitor the health and security of customer backups, as well as to manage their Veeam-protected virtual, Microsoft 365, and public cloud workloads.
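As a rough triage aid, a service provider can compare the build reported by an installed VSPC server against the fixed build listed in Veeam's advisory. The sketch below is a generic version comparison in Python; the PATCHED_BUILD value is a placeholder (not the real fixed build), and how you obtain the installed build (console, registry, or API) is left to the reader.

    # Hypothetical triage helper: compare an installed VSPC build string against
    # the fixed build from Veeam's advisory. PATCHED_BUILD is a placeholder;
    # substitute the build number published in the advisory for your version.

    PATCHED_BUILD = "8.1.0.0"  # placeholder, not the real fixed build

    def parse_build(build: str) -> tuple[int, ...]:
        """Turn a dotted build string like '8.1.0.12345' into a comparable tuple."""
        return tuple(int(part) for part in build.split("."))

    def is_patched(installed: str, patched: str = PATCHED_BUILD) -> bool:
        """Return True if the installed build is at or above the patched build."""
        return parse_build(installed) >= parse_build(patched)

    if __name__ == "__main__":
        installed_build = "8.1.0.0"  # replace with the build shown in your VSPC console
        status = "patched" if is_patched(installed_build) else "needs the December update"
        print(f"VSPC build {installed_build}: {status}")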
Cisco on Dec. 2 updated an advisory from March 18 about a 10-year-old vulnerability in the WebVPN login page of Cisco’s Adaptive Security Appliance (ASA) software that could let an unauthenticated remote attacker conduct a cross-site scripting (XSS) attack.
In its recent update, the Cisco Product Security Incident Response Team (PSIRT) said it became aware of additional attempted exploitation of this vulnerability in the wild last month.
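For readers unfamiliar with the bug class: a reflected XSS flaw of this kind typically arises when a request parameter is echoed into a page's HTML without encoding. The sketch below is a generic Python illustration of the pattern and its mitigation; it is not Cisco's code and does not reproduce the actual ASA WebVPN behavior.

    # Generic illustration of reflected XSS and its mitigation; this is not
    # Cisco's code and does not reproduce the actual ASA WebVPN flaw.
    from html import escape

    def vulnerable_page(username: str) -> str:
        # Unsafe: the parameter is interpolated into HTML verbatim, so a value
        # like '<script>alert(1)</script>' would execute in the victim's browser.
        return f"<p>Login failed for {username}</p>"

    def safer_page(username: str) -> str:
        # Mitigated: HTML-encoding the untrusted value renders it inert text.
        return f"<p>Login failed for {escape(username)}</p>"

    if __name__ == "__main__":
        payload = "<script>alert(1)</script>"
        print(vulnerable_page(payload))  # script tag survives
        print(safer_page(payload))       # script tag is encoded as text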
We uncover macOS lateral movement tactics, such as SSH key misuse and AppleScript exploitation. Strategies to counter this attack trend are also discussed.
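One concrete way to hunt for the SSH key misuse described above is to review authorized_keys files for entries that were added recently or that you do not recognize. The following Python sketch simply lists each user's authorized_keys entries along with the file's modification time; the paths and the 30-day "recent" window are assumptions, and it is a starting point for analyst review rather than a detection rule.

    # Hunting sketch (assumptions: standard macOS home directories, 30-day window):
    # list authorized_keys entries and flag files modified recently so an analyst
    # can review unexpected keys that might enable SSH-based lateral movement.
    import glob
    import os
    import time

    RECENT_DAYS = 30  # assumed review window

    def review_authorized_keys() -> None:
        cutoff = time.time() - RECENT_DAYS * 86400
        for path in glob.glob("/Users/*/.ssh/authorized_keys"):
            mtime = os.path.getmtime(path)
            flag = "RECENTLY MODIFIED" if mtime > cutoff else "older"
            print(f"{path} ({flag}, mtime={time.ctime(mtime)})")
            with open(path, encoding="utf-8", errors="replace") as fh:
                for line in fh:
                    line = line.strip()
                    if line and not line.startswith("#"):
                        # Print key type and trailing comment (often user@host) for review.
                        parts = line.split()
                        comment = parts[2] if len(parts) > 2 else "<no comment>"
                        print(f"    {parts[0]} ... {comment}")

    if __name__ == "__main__":
        review_authorized_keys()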
The FBI is warning the public that criminals are exploiting generative artificial intelligence (AI) to commit fraud on a larger scale, increasing the believability of their schemes. Generative AI reduces the time and effort criminals must expend to deceive their targets: it takes what it has learned from examples supplied by a user and synthesizes entirely new content based on that information. These tools assist with content creation and can correct the human errors that might otherwise serve as warning signs of fraud. Creating or distributing synthetic content is not inherently illegal, but synthetic content can be used to facilitate crimes such as fraud and extortion. Because it can be difficult to identify when content is AI-generated, the FBI is providing the following examples of how criminals may use generative AI in their fraud schemes, to increase public recognition and scrutiny.