RSAC: Can we turn to govt, academic models instead?
Corporate AI models are already skewed to serve their makers' interests, and unless governments and academia step up to build transparent alternatives, the tech risks becoming just another tool for commercial manipulation.
That's according to cryptography and privacy guru Bruce Schneier, who spoke to The Register last week following a keynote speech at the RSA Conference in San Francisco.
"I worry that it'll be like search engines, which you use as if they are neutral third parties but are actually trying to manipulate you. They try to kind of get you to visit the websites of the advertisers," he told us. "It's integrity that we really need to think about, integrity as a security property and how it works with AI."
During his RSA keynote, Schneier asked: "Did your chatbot recommend a particular airline or hotel because it's the best deal for you, or because the AI company got a kickback from those companies?"
To address this quandary, Schneier proposes that governments take a more hands-on role in regulating AI, forcing model developers to be more open about the information their models receive and about how those models reach their decisions.
He praised the EU AI Act, noting that it provides a mechanism to adapt the law as technology evolves, though he acknowledged there are teething problems. The legislation, which entered into force in August 2024, introduces phased requirements based on the risk level of AI systems. Companies deploying high-risk AI must maintain technical documentation, conduct risk assessments, and ensure transparency around how their models are built and how decisions are made.
Because the EU is the world's largest trading bloc, the law is expected to have a significant impact on any company wanting to do business there, he opined. This could push other regions toward similar regulation, though he added that in the US, meaningful legislative movement remains unlikely under the current administration.
Google intelligence report finds UK is a particular target of IT worker ploy that sends wages to Kim Jong-un’s state
British companies are being urged to carry out job interviews for IT workers on video or in person to head off the threat of giving jobs to fake North Korean employees.
The warning came after analysts said the UK had become a prime target for fake IT workers deployed by the Democratic People’s Republic of Korea. They are typically hired to work remotely, enabling them to escape detection and send their wages to Kim Jong-un’s state.
Google said in a report this month that a case uncovered last year involved a single North Korean worker deploying at least 12 personae across Europe and the US. The IT worker was seeking jobs within the defence industry and government sectors. Under a new tactic, the bogus IT professionals have been threatening to release sensitive company data after being fired.
The Federal Bureau of Investigation (FBI) is warning the public about an ongoing fraud scheme in which scammers impersonate FBI Internet Crime Complaint Center (IC3) employees to deceive and defraud individuals. Between December 2023 and February 2025, the FBI received more than 100 reports of IC3 impersonation scams.
The FBI is warning the public that criminals are exploiting generative artificial intelligence (AI) to commit fraud on a larger scale and to make their schemes more believable. Generative AI reduces the time and effort criminals must expend to deceive their targets. Generative AI takes what it has learned from examples input by a user and synthesizes something entirely new based on that information. These tools assist with content creation and can correct for human errors that might otherwise serve as warning signs of fraud. The creation or distribution of synthetic content is not inherently illegal; however, synthetic content can be used to facilitate crimes such as fraud and extortion. Since it can be difficult to identify when content is AI-generated, the FBI is providing the following examples of how criminals may use generative AI in their fraud schemes to increase public recognition and scrutiny.
Over the past few months, we have observed increasing interest from malicious groups in leveraging remote-access VPN environments as an entry point and attack vector into enterprise networks.
Based upon the authoring organizations’ observations during incident response activities and available industry reporting, as supplemented by CISA’s research findings, the authoring organizations recommend that the safest course of action for network defenders is to assume a sophisticated threat actor may deploy rootkit-level persistence on a device that has been reset and lie dormant for an arbitrary amount of time. For example, as outlined in PRC State-Sponsored Actors Compromise and Maintain Persistent Access to U.S. Critical Infrastructure, sophisticated actors may remain silent on compromised networks for long periods. The authoring organizations strongly urge all organizations to consider the significant risk of adversary access to, and persistence on, Ivanti Connect Secure and Ivanti Policy Secure gateways when determining whether to continue operating these devices in an enterprise environment.
Bluetooth Trackers Exploited for Geolocation in Organised Crime
Bluetooth trackers, commonly used for locating personal items and vehicles, have become an unexpected tool in organised crime, according to recent findings reported by Europol in an Early Warning Notification. Typically designed for purposes such as finding lost keys or preventing vehicle theft, Bluetooth trackers are now being leveraged by criminals for geo-locating...
Warning on KIMSUKY Cyber Actor's Recent Cyber Campaigns against Google's Browser and App Store Services