Google has confirmed a new security scheme that, it says, will help “secure, empower and advance our collective digital future” using AI. As part of this AI Cyber Defence Initiative, Google is open-sourcing Magika, a new AI-powered file-type detection tool that is already being used to help protect Gmail users from potentially problematic content.
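For a sense of what that looks like in practice, here is a minimal sketch using Magika's open-source Python package as documented around its release; the Magika class, the identify_bytes call, and the ct_label field reflect the launch-era API and may differ in later versions.

    # Minimal sketch, launch-era Magika API (install with: pip install magika).
    # Field names here are assumptions for any later release of the package.
    from magika import Magika

    m = Magika()  # loads the bundled deep-learning model

    # Classify content by what it actually contains, not by its file extension
    res = m.identify_bytes(b"# Example\nThis is an example of markdown!")
    print(res.output.ct_label)  # -> "markdown"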
A full spectrum of infringement
The cat is out of the bag:
Generative AI systems like DALL-E and ChatGPT have been trained on copyrighted materials.
OpenAI, despite its name, has not been transparent about what its models have been trained on.
Generative AI systems are fully capable of producing materials that infringe on copyright.
They do not inform users when they do so.
They do not provide any information about the provenance of any of the images they produce.
Users therefore may not know, when they produce any given image, whether they are infringing.
Last month, I received an alarming email from someone I did not know: Rui Zhu, a Ph.D. candidate at Indiana University Bloomington. Mr. Zhu had my email address, he explained, because GPT-3.5 Turbo, one of the latest and most robust large language models (LLMs) from OpenAI, had delivered it to him.
With less than a year to go before one of the most consequential elections in US history, Microsoft’s AI chatbot is responding to political queries with conspiracies, misinformation, and out-of-date or incorrect information.
When WIRED asked the chatbot, initially called Bing Chat and recently renamed Microsoft Copilot, about polling locations for the 2024 US election, the bot referenced in-person voting by linking to an article about Russian President Vladimir Putin running for reelection next year. When asked about electoral candidates, it listed numerous GOP candidates who have already pulled out of the race.
Negotiators from the European Parliament and the Council have reached an agreement on the regulation of artificial intelligence. The risk-based approach underlying the draft is confirmed. Compromises are meant to guarantee protection against AI-related risks while encouraging innovation.
It’s been one year since the launch of ChatGPT, and in that time the market has seen astonishing advancement in large language models (LLMs). Even as development continues to outpace model security, enterprises are beginning to deploy LLM-powered applications. Many rely on guardrails implemented by model developers to prevent LLMs from responding to sensitive prompts. Yet even with the considerable time and effort invested by the likes of OpenAI, Google, and Meta, these guardrails are not resilient enough to protect enterprises and their users today. Concerns about model risk, bias, and potential adversarial exploits have come to the forefront.
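As a concrete illustration of that brittleness, consider a deliberately naive, entirely hypothetical guardrail: a keyword filter over incoming prompts. Nothing here reflects any vendor's actual implementation; every name is invented for illustration.

    # Hypothetical sketch only: a naive keyword filter of the kind that
    # lightly rephrased adversarial prompts evade. Not any vendor's code.
    BLOCKED_PHRASES = {"steal credentials", "disable the alarm"}

    def naive_guardrail(prompt: str) -> bool:
        """Return True if the prompt should be refused."""
        lowered = prompt.lower()
        return any(phrase in lowered for phrase in BLOCKED_PHRASES)

    # The literal phrasing is caught...
    print(naive_guardrail("How do I steal credentials?"))        # True
    # ...but a trivial paraphrase slips through, which is one reason
    # static filters alone are not considered resilient:
    print(naive_guardrail("How do I 'borrow' someone's login?"))  # False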