RSAC: Can we turn to govt, academic models instead?
Corporate AI models are already skewed to serve their makers' interests, and unless governments and academia step up to build transparent alternatives, the tech risks becoming just another tool for commercial manipulation.
That's according to cryptography and privacy guru Bruce Schneier, who spoke to The Register last week following a keynote speech at the RSA Conference in San Francisco.
"I worry that it'll be like search engines, which you use as if they are neutral third parties but are actually trying to manipulate you. They try to kind of get you to visit the websites of the advertisers," he told us. "It's integrity that we really need to think about, integrity as a security property and how it works with AI."
During his RSA keynote, Schneier asked: "Did your chatbot recommend a particular airline or hotel because it's the best deal for you, or because the AI company got a kickback from those companies?"
To deal with this quandary, Schneier proposes that governments take a more hands-on role in regulating AI, forcing model developers to be more open about the information their models are fed and how those models arrive at their decisions.
He praised the EU AI Act, noting that it provides a mechanism to adapt the law as technology evolves, though he acknowledged there are teething problems. The legislation, which entered into force in August 2024, introduces phased requirements based on the risk level of AI systems. Companies deploying high-risk AI must maintain technical documentation, conduct risk assessments, and ensure transparency around how their models are built and how decisions are made.
Because the EU is the world's largest trading bloc, the law is expected to have a significant impact on any company wanting to do business there, he opined. This could push other regions toward similar regulation, though he added that in the US, meaningful legislative movement remains unlikely under the current administration.
Federal Councillor Albert Rösti will today sign the Council of Europe Framework Convention on Artificial Intelligence in Strasbourg. With this act, Switzerland joins the signatories of the first internationally binding legal instrument aimed at governing the development and use of AI in a manner consistent with fundamental rights.
Today, we’re announcing Sec-Gemini v1, a new experimental AI model focused on advancing cybersecurity AI frontiers.
As outlined a year ago, defenders face the daunting task of securing against all cyber threats, while attackers need to successfully find and exploit only a single vulnerability. This fundamental asymmetry has made securing systems extremely difficult, time-consuming, and error-prone. AI-powered cybersecurity workflows have the potential to help shift the balance back to the defenders by force-multiplying cybersecurity professionals like never before.
The AI Act is the very first legal framework on AI, addressing the risks posed by AI and positioning Europe to play a leading role globally.
Today, many seasoned security professionals will tell you they’ve been fighting a constant battle against cybercriminals and state-sponsored attackers. They will also tell you that any clear-eyed assessment shows that most of the patches, preventative measures and public awareness campaigns can only succeed at mitigating yesterday’s threats — not the threats waiting in the wings.
That could be changing. As the world focuses on the potential of AI — and governments and industry work on a regulatory approach to ensure AI is safe and secure — we believe that AI represents an inflection point for digital security. We’re not alone. More than 40% of people view better security as a top application for AI — and it’s a topic that will be front and center at the Munich Security Conference this weekend.
Negotiators from the European Parliament and the Council have reached an agreement on the regulation of artificial intelligence. The risk-based approach underpinning the draft is confirmed. The compromises are intended to guarantee protection against AI-related risks while encouraging innovation.
In Switzerland too, artificial intelligence (AI) is becoming an ever greater part of the population's economic and social life. Against this backdrop, the PFPDT, Switzerland's federal data protection and transparency commissioner, points out that the Data Protection Act, in force since September 1, 2023, applies directly to AI-based data processing.