Cyberveille, curated by Decio
6 results tagged Schneier
Schneier warns that AI loses integrity due to corporate bias https://www.theregister.com/2025/05/06/schneier_ai_models/
10/05/2025 22:42:42

RSAC: Can we turn to govt, academic models instead?
Corporate AI models are already skewed to serve their makers' interests, and unless governments and academia step up to build transparent alternatives, the tech risks becoming just another tool for commercial manipulation.

That's according to cryptography and privacy guru Bruce Schneier, who spoke to The Register last week following a keynote speech at the RSA Conference in San Francisco.

"I worry that it'll be like search engines, which you use as if they are neutral third parties but are actually trying to manipulate you. They try to kind of get you to visit the websites of the advertisers," he told us. "It's integrity that we really need to think about, integrity as a security property and how it works with AI."

During his RSA keynote, Schneier asked: "Did your chatbot recommend a particular airline or hotel because it's the best deal for you, or because the AI company got a kickback from those companies?"

To deal with this quandary, Schneier proposes that governments take a more hands-on stance in regulating AI, forcing model developers to be more open about the information their models receive and about how those models reach their decisions.

He praised the EU AI Act, noting that it provides a mechanism to adapt the law as technology evolves, though he acknowledged there are teething problems. The legislation, which entered into force in August 2024, introduces phased requirements based on the risk level of AI systems. Companies deploying high-risk AI must maintain technical documentation, conduct risk assessments, and ensure transparency around how their models are built and how decisions are made.

Because the EU is the world's largest trading bloc, the law is expected to have a significant impact on any company wanting to do business there, he opined. This could push other regions toward similar regulation, though he added that in the US, meaningful legislative movement remains unlikely under the current administration.

theregister EN 2025 Schneier IA corporate-bias warning
DOGE as a National Cyberattack https://www.schneier.com/blog/archives/2025/02/doge-as-a-national.html?ref=metacurity.com
16/02/2025 01:58:09

In the span of just weeks, the US government has experienced what may be the most consequential security breach in its history—not through a sophisticated cyberattack or an act of foreign espionage, but through official orders by a billionaire with a poorly defined government role. And the implications for national security are profound. First, it was reported that people associated with the newly created Department of Government Efficiency (DOGE) had accessed the US Treasury computer system, giving them the ability to collect data on and potentially control the department’s roughly ...

schneier EN 2025 DOGE Cyberattack US
How AI Will Change Democracy https://www.schneier.com/blog/archives/2024/05/how-ai-will-change-democracy.html
01/06/2024 13:53:35

I don’t think it’s an exaggeration to predict that artificial intelligence will affect every aspect of our society. Not by doing new things. But mostly by doing things that are already being done by humans, perfectly competently.

Replacing humans with AIs isn’t necessarily interesting. But when an AI takes over a human task, the task changes.

schneier EN 2024 AI risk Democracy Change analysis
AI Risks https://www.schneier.com/blog/archives/2023/10/ai-risks.html
09/10/2023 19:15:15

There is no shortage of researchers and industry titans willing to warn us about the potential destructive power of artificial intelligence. Reading the headlines, one would hope that the rapid gains in AI technology have also brought forth a unifying realization of the risks—and the steps we need to take to mitigate them.

Schneier EN 2023 AI Risks
Large Language Models and Elections https://www.schneier.com/blog/archives/2023/05/large-language-models-and-elections.html
04/05/2023 16:16:24

Earlier this week, the Republican National Committee released a video that it claims was “built entirely with AI imagery.” The content of the ad isn’t especially novel—a dystopian vision of America under a second term with President Joe Biden—but the deliberate emphasis on the technology used to create it stands out: It’s a “Daisy” moment for the 2020s.

Schneier EN 2023 LLM election disinformation AI
Breaking RSA with a Quantum Computer https://www.schneier.com/blog/archives/2023/01/breaking-rsa-with-a-quantum-computer.html
04/01/2023 09:18:15

A group of Chinese researchers have just published a paper claiming that they can—although they have not yet done so—break 2048-bit RSA. This is something to take seriously. It might not be correct, but it’s not obviously wrong.
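As a side note on what "breaking 2048-bit RSA" would actually mean: RSA's security rests on the difficulty of factoring the public modulus N, which is exactly what Shor's algorithm on a sufficiently large quantum computer would do efficiently. The toy Python sketch below (tiny primes, brute-force trial division standing in for Shor) only illustrates that principle; it is not taken from the paper Schneier discusses.

# Toy illustration: why factoring the RSA modulus recovers the private key.
# Trial division stands in for Shor's algorithm; real keys use ~1024-bit
# primes, far beyond any classical factoring method.

def egcd(a, b):
    # Extended Euclid: returns (g, x, y) with a*x + b*y == g.
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

def modinv(a, m):
    # Modular inverse of a mod m (a and m must be coprime).
    g, x, _ = egcd(a, m)
    assert g == 1
    return x % m

p, q = 61, 53                       # toy primes (real RSA: ~1024 bits each)
N = p * q                           # public modulus
e = 17                              # public exponent
d = modinv(e, (p - 1) * (q - 1))    # private exponent, known only to the key owner

msg = 42
cipher = pow(msg, e, N)             # anyone can encrypt with the public key (N, e)

# "Breaking RSA" = recovering the factors of N from the public key alone.
p_found = next(i for i in range(2, N) if N % i == 0)
q_found = N // p_found
d_recovered = modinv(e, (p_found - 1) * (q_found - 1))

assert pow(cipher, d_recovered, N) == msg   # attacker decrypts without the private key
print(f"N={N} = {p_found} * {q_found}; recovered plaintext: {pow(cipher, d_recovered, N)}")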

Schneier EN 2023 RSA Quantum Computer China break cryptography
Shaarli - The personal, minimalist, super-fast, database free, bookmarking service by the Shaarli community - Theme by kalvn - Curated by Decio