The FBI is warning the public that criminals exploit generative artificial intelligence (AI) to commit fraud on a larger scale and to increase the believability of their schemes. Generative AI reduces the time and effort criminals must expend to deceive their targets: it takes what it has learned from examples input by a user and synthesizes something entirely new based on that information. These tools assist with content creation and can correct for human errors that might otherwise serve as warning signs of fraud. The creation or distribution of synthetic content is not inherently illegal; however, synthetic content can be used to facilitate crimes such as fraud and extortion. Since it can be difficult to identify when content is AI-generated, the FBI is providing the following examples of how criminals may use generative AI in their fraud schemes, to increase public recognition and scrutiny.
MITRE’s AI Incident Sharing initiative helps organizations receive and share data on real-world AI incidents.
Non-profit technology and R&D company MITRE has introduced a new mechanism that enables organizations to share intelligence on real-world AI-related incidents.
Developed in collaboration with more than 15 companies, the new AI Incident Sharing initiative aims to increase community knowledge of threats and defenses involving AI-enabled systems.
Olga Loiek, a University of Pennsylvania student, was looking for an audience on the internet – just not like this.
Shortly after launching a YouTube channel in November last year, Loiek, a 21-year-old from Ukraine, found her image had been taken and spun through artificial intelligence to create alter egos on Chinese social media platforms.
Her digital doppelgangers – like "Natasha" – claimed to be Russian women fluent in Chinese who wanted to thank China for its support of Russia and make a little money on the side selling products such as Russian candies.
A NewsGuard audit found that chatbots spewed misinformation from American fugitive John Mark Dougan.
We recently rolled out a re-acceptance of our Terms of Use, which has led to concerns about what these terms are and what they mean for our customers. This has caused us to reflect on the language we use in our Terms, and on the opportunity we have to be clearer and to address the concerns raised by the community.
Over the next few days, we will speak with our customers, with a plan to roll out updated terms by June 18, 2024.
Secure and private AI processing in the cloud poses a formidable new challenge. To support advanced features of Apple Intelligence with larger foundation models, we created Private Cloud Compute (PCC), a groundbreaking cloud intelligence system designed specifically for private AI processing. Built with custom Apple silicon and a hardened operating system, Private Cloud Compute extends the industry-leading security and privacy of Apple devices into the cloud, making sure that personal user data sent to PCC isn’t accessible to anyone other than the user — not even to Apple. We believe Private Cloud Compute is the most advanced security architecture ever deployed for cloud AI compute at scale.
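The Apple excerpt describes PCC's guarantees without showing its protocol. Purely as an illustrative sketch of the general pattern it relies on (the device encrypts each request to the key of one specific server node, so no other party, including the operator, can read it), here is a minimal Swift/CryptoKit example. The key names, the session label, and the in-process "node" are hypothetical stand-ins, not Apple's PCC API; real PCC also verifies the node's key through hardware attestation before use, a step omitted here.

```swift
import Foundation
import CryptoKit

do {
    // Hypothetical stand-ins: in PCC the node's public key would be
    // obtained and verified via hardware attestation; here both keys
    // are generated locally purely for illustration.
    let nodeKey = Curve25519.KeyAgreement.PrivateKey()   // "PCC node"
    let deviceKey = Curve25519.KeyAgreement.PrivateKey() // user's device

    // Device side: agree on a shared secret with the node's public key,
    // derive a symmetric session key, and seal the request with AES-GCM.
    let deviceSecret = try deviceKey.sharedSecretFromKeyAgreement(
        with: nodeKey.publicKey)
    let sessionKey = deviceSecret.hkdfDerivedSymmetricKey(
        using: SHA256.self,
        salt: Data(),
        sharedInfo: Data("illustrative-pcc-session".utf8),
        outputByteCount: 32)
    let request = Data("user prompt for the foundation model".utf8)
    let sealedRequest = try AES.GCM.seal(request, using: sessionKey)

    // Node side: the same key agreement, run with the device's public
    // key, yields the same session key. Without the node's private key,
    // nobody else (not even the cloud operator) can open the box.
    let nodeSecret = try nodeKey.sharedSecretFromKeyAgreement(
        with: deviceKey.publicKey)
    let nodeSessionKey = nodeSecret.hkdfDerivedSymmetricKey(
        using: SHA256.self,
        salt: Data(),
        sharedInfo: Data("illustrative-pcc-session".utf8),
        outputByteCount: 32)
    let plaintext = try AES.GCM.open(sealedRequest, using: nodeSessionKey)
    print(String(decoding: plaintext, as: UTF8.self))
} catch {
    print("crypto error: \(error)")
}
```

The point of the sketch is the trust boundary, not the primitives: because decryption requires the node's private key, the claim "not accessible to anyone other than the user" reduces to controlling who can hold that key, which is what PCC's custom silicon and hardened OS are meant to enforce.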
I don’t think it’s an exaggeration to predict that artificial intelligence will affect every aspect of our society. Not by doing new things. But mostly by doing things that are already being done by humans, perfectly competently.
Replacing humans with AIs isn’t necessarily interesting. But when an AI takes over a human task, the task changes.