Some hallucinations could ‘potentially induce cardiac incidents in Legal,’ according to internal documents
There is no shortage of researchers and industry titans willing to warn us about the potential destructive power of artificial intelligence. Reading the headlines, one would hope that the rapid gains in AI technology have also brought forth a unifying realization of the risks—and the steps we need to take to mitigate them.
"We must build a robust understanding of AI vulnerabilities, foreign intelligence threats to these AI systems and ways to counter the threat in order to have AI security," Gen. Paul Nakasone said. "We must also ensure that malicious foreign actors can't steal America’s innovative AI capabilities to do so.”
US tech giant will assume customers’ liability for material created by AI assistants in Word and coding tools
Several leading AI companies – Anthropic, Google, Microsoft, and OpenAI – to partner with DARPA in major competition to make software more secure The Biden-Harris Administration today launched a major two-year competition that will use artificial intelligence (AI) to protect the United States’ most important software, such as code that helps run the internet and…
Earlier this week, the Republican National Committee released a video that it claims was “built entirely with AI imagery.” The content of the ad isn’t especially novel—a dystopian vision of America under a second term with President Joe Biden—but the deliberate emphasis on the technology used to create it stands out: It’s a “Daisy” moment for the 2020s.
Large Language Models (LLM) have made amazing progress in recent years. Most recently, they have demonstrated to answer natural language questions at a surprising performance level. In addition, by clever prompting, these models can change their behavior. In this way, these models blur the line between data and instruction. From "traditional" cybersecurity, we know that this is a problem. The importance of security boundaries between trusted and untrusted inputs for LLMs was underestimated. We show that Prompt Injection is a serious security threat that needs to be addressed as models are deployed to new use-cases and interface with more systems.
[PDF DOC] https://arxiv.org/pdf/2302.12173.pdf
Ever since OpenAI launched ChatGPT at the end of November, commentators on all sides have been concerned about the impact AI-driven content-creation will have, particularly in the realm of cybersecurity. In fact, many researchers are concerned that generative AI solutions will democratize cybercrime.
The way that many of our systems currently focus on engagement makes them particularly vulnerable to the incoming wave of content from bots like GPT-3