futurism.com Aug 27, 5:05 PM EDT by Noor Al-Sibai
OpenAI has authorized itself to call law enforcement if users say threatening enough things when talking to ChatGPT.
Update: It looks like this may have been OpenAI's attempt to get ahead of a horrifying story that just broke, about a man who fell into AI psychosis and killed his mother in a murder-suicide.
For the better part of a year, we've watched — and reported — in horror as more and more stories emerge about AI chatbots leading people to self-harm, delusions, hospitalization, arrest, and suicide.
As the loved ones of the people impacted by these dangerous bots rally for change to prevent such harm from happening to anyone else, the companies that run these AIs have been slow to implement safeguards — and OpenAI, whose ChatGPT has been repeatedly implicated in what experts are now calling "AI psychosis," has until recently done little more than offer copy-pasted promises.
In a new blog post admitting certain failures amid its users' mental health crises, OpenAI also quietly disclosed that it's now scanning users' messages for certain types of harmful content, escalating particularly worrying content to human staff for review — and, in some cases, reporting it to the cops.
"When we detect users who are planning to harm others, we route their conversations to specialized pipelines where they are reviewed by a small team trained on our usage policies and who are authorized to take action, including banning accounts," the blog post notes. "If human reviewers determine that a case involves an imminent threat of serious physical harm to others, we may refer it to law enforcement."
That short and vague statement leaves a lot to be desired — and OpenAI's usage policies, referenced as the basis on which the human review team operates, don't provide much more clarity.
When describing its rule against "harm [to] yourself or others," the company listed off some pretty standard examples of prohibited activity, including using ChatGPT "to promote suicide or self-harm, develop or use weapons, injure others or destroy property, or engage in unauthorized activities that violate the security of any service or system."
But in the post warning users that the company will call the authorities if they seem like they're going to hurt someone, OpenAI also acknowledged that it is "currently not referring self-harm cases to law enforcement to respect people’s privacy given the uniquely private nature of ChatGPT interactions."
While ChatGPT has in the past proven itself pretty susceptible to so-called jailbreaks that trick it into spitting out instructions to build neurotoxins or step-by-step instructions to kill yourself, this new rule adds an additional layer of confusion. It remains unclear which exact types of chats could result in user conversations being flagged for human review, much less getting referred to police. We've reached out to OpenAI to ask for clarity.
While it's certainly a relief that conversations about self-harm won't result in police wellness checks — which often end up causing more harm to the person in crisis due to most cops' complete lack of training in handling mental health situations — it's also kind of bizarre that OpenAI even mentions privacy, given that it admitted in the same post that it's monitoring user chats and potentially sharing them with the fuzz.
To make the announcement all the weirder, this new rule seems to contradict the company's pro-privacy stance amid its ongoing lawsuit with the New York Times and other publishers as they seek access to troves of ChatGPT logs to determine whether any of their copyrighted data had been used to train its models.
OpenAI has steadfastly rejected the publishers' request on grounds of protecting user privacy and has, more recently, begun trying to limit the amount of user chats it has to give the plaintiffs.
Last month, the company's CEO Sam Altman admitted during an appearance on a podcast that using ChatGPT as a therapist or attorney doesn't confer the same confidentiality that talking to a flesh-and-blood professional would — and that thanks to the NYT lawsuit, the company may be forced to turn those chats over to courts.
In other words, OpenAI is stuck between a rock and a hard place. The PR blowback from its users spiraling into mental health crises and dying by suicide is appalling — but since it's clearly having trouble controlling its own tech enough to protect users from those harmful scenarios, it's falling back on heavy-handed moderation that flies in the face of its own CEO's promises.
theregister.com - Under oath in French Senate, exec says it would be compelled – however unlikely – to pass local customer info to US admin
Microsoft says it "cannot guarantee" data sovereignty to customers in France – and by implication the wider European Union – should the Trump administration demand access to customer information held on its servers.
The Cloud Act is a law that gives the US government authority to obtain digital data held by US-based tech corporations irrespective of whether that data is stored on servers at home or on foreign soil. It is said to compel these companies, via warrant or subpoena, to comply with such requests.
Testifying on June 18 before a Senate inquiry into public procurement and the role it plays in European digital sovereignty, Microsoft France's Anton Carniaux, director of public and legal affairs, and Pierre Lagarde, technical director for the public sector, were quizzed by local politicians.
Asked whether any technical or legal mechanisms could prevent this access under the Cloud Act, Carniaux said Microsoft had "contractually committed to our clients, including those in the public sector, to resist these requests when they are unfounded."
"We have implemented a very rigorous system, initiated during the Obama era by legal actions against requests from the authorities, which allows us to obtain concessions from the American government. We begin by analyzing very precisely the validity of a request and reject it if it is unfounded."
He said that Microsoft asks the US administration to redirect such requests to the client.
"When this proves impossible, we respond in extremely specific and limited cases. I would like to point out that the government cannot make requests that are not precisely defined."
Carniaux added: "If we must communicate, we ask to be able to notify the client concerned." He said that under the former Obama administration, Microsoft took cases to the US Supreme Court, and in doing so ensured requests are "more focused, precise, justified and legally sound."
The Irish Data Protection Commission announced that TikTok is facing a new European Union privacy investigation into user data sent to China.
TikTok is facing a fresh European Union privacy investigation into user data sent to China, regulators said Thursday.
The Data Protection Commission opened the inquiry as a follow-up to a previous investigation that ended earlier this year with a 530 million euro ($620 million) fine, after it found the video sharing app put users at risk of spying by allowing remote access to their data from China.
The Irish national watchdog serves as TikTok’s lead data privacy regulator in the 27-nation EU because the company’s European headquarters is based in Dublin.
During an earlier investigation, TikTok initially told the regulator it didn’t store European user data in China, and that data was only accessed remotely by staff in China. However, it later backtracked and said that some data had in fact been stored on Chinese servers. The watchdog responded at the time by saying it would consider further regulatory action.
“As a result of that consideration, the DPC has now decided to open this new inquiry into TikTok,” the watchdog said.
“The purpose of the inquiry is to determine whether TikTok has complied with its relevant obligations under the GDPR in the context of the transfers now at issue, including the lawfulness of the transfers,” the regulator said, referring to the European Union’s strict privacy rules, known as the General Data Protection Regulation.
TikTok, which is owned by China’s ByteDance, has been under scrutiny in Europe over how it handles personal user information amid concerns from Western officials that it poses a security risk.
TikTok noted that it was the party that notified the Data Protection Commission of the issue, which it discovered after embarking on a data localization project called Project Clover that involves building three data centers in Europe to ease security concerns.
“Our teams proactively discovered this issue through the comprehensive monitoring TikTok implemented under Project Clover,” the company said in a statement. “We promptly deleted this minimal amount of data from the servers and informed the DPC. Our proactive report to the DPC underscores our commitment to transparency and data security.”
Under GDPR, European user data can only be transferred outside of the bloc if there are safeguards in place to ensure the same level of protection. Only 15 countries or territories are deemed to have the same data privacy standard as the EU, but China is not one of them.
For many years, data brokers have existed in the shadows, exploiting gaps in privacy laws to harvest our information—all for their own profit. They sell our precise movements without our knowledge or meaningful consent to a variety of private and state actors, including law enforcement agencies. And they show no sign of stopping.
This incentivizes other bad actors. If companies collect any kind of personal data and want to make a quick buck, there’s a data broker willing to buy it and sell it to the highest bidder–often law enforcement and intelligence agencies.
One recent investigation by 404 Media revealed that the Airlines Reporting Corporation (ARC), a data broker owned and operated by at least eight major U.S. airlines, including United Airlines and American Airlines, collected travelers’ domestic flight records and secretly sold access to U.S. Customs and Border Protection (CBP). Despite selling passengers’ names, full flight itineraries, and financial details, the data broker prevented U.S. border forces from revealing it as the origin of the information. So, not only is the government doing an end run around the Fourth Amendment to get information where they would otherwise need a warrant—they’ve also been trying to hide how they know these things about us.
ARC’s Travel Intelligence Program (TIP) aggregates passenger data and contains more than one billion records spanning 39 months of past and future travel by both U.S. and non-U.S. citizens. CBP, which sits within the U.S. Department of Homeland Security (DHS), claims it needs this data to support local and state police keeping track of people of interest. But at a time of growing concerns about increased immigration enforcement at U.S. ports of entry, including unjustified searches, law enforcement officials will use this additional surveillance tool to expand the web of suspicion to even larger numbers of innocent travelers.
More than 200 airlines settle tickets through ARC, with information on more than 54% of flights taken globally. ARC’s board of directors includes representatives from U.S. airlines like JetBlue and Delta, as well as international airlines like Lufthansa, Air France, and Air Canada.
In selling law enforcement agencies bulk access to such sensitive information, these airlines—through their data broker—are putting their own profits over travelers' privacy. U.S. Immigration and Customs Enforcement (ICE) recently detailed its own purchase of personal data from ARC. In the current climate, this can have a detrimental impact on people’s lives.
Company will no longer provide its highest security offering in Britain in the wake of a government order to let security officials see protected data.
Our first network security analysis of the popular Chinese social media platform, RedNote, revealed numerous issues with the Android and iOS versions of the app. Most notably, we found that both the Android and iOS versions of RedNote fetch viewed images and videos without any encryption, which enables network eavesdroppers to learn exactly what content users are browsing. We also found a vulnerability in the Android version that enables network attackers to learn the contents of files on users’ devices. We disclosed these vulnerabilities to RedNote and its vendors, NEXTDATA and MobTech, but did not receive a response from any party. This report underscores the importance of using well-supported encryption implementations, such as transport layer security (TLS). We recommend that users who are highly concerned about network surveillance from any party refrain from using RedNote until these security issues are resolved.
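To illustrate the report's core recommendation, the sketch below shows a client-side guard that refuses to fetch media over cleartext HTTP, since anything sent over plain HTTP is readable by anyone on the network path. This is not RedNote's code; the helper name and URL are hypothetical, and it simply demonstrates enforcing TLS for media requests.

```python
from urllib.parse import urlparse

import requests


def fetch_media(url: str, timeout: float = 10.0) -> bytes:
    # Refuse cleartext HTTP: over plain http://, a passive eavesdropper can
    # read the full request and response and learn exactly which images or
    # videos the user is viewing.
    if urlparse(url).scheme != "https":
        raise ValueError(f"refusing cleartext fetch: {url}")
    # requests verifies the server's TLS certificate by default.
    resp = requests.get(url, timeout=timeout)
    resp.raise_for_status()
    return resp.content


if __name__ == "__main__":
    data = fetch_media("https://example.com/")  # placeholder URL
    print(f"fetched {len(data)} bytes over TLS")
```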
The iPhone Mirroring feature in macOS Sequoia and iOS 18 may expose employees’ private applications to corporate IT environments.
To attract users across the Global Majority, many technology companies have introduced “lite” versions of their products: Applications that are designed for lower-bandwidth contexts. TikTok is no exception, with TikTok Lite estimated to have more than 1 billion users.
Mozilla and AI Forensics research reveals that TikTok Lite doesn’t just reduce required bandwidth, however. In our opinion, it also reduces trust and safety. In comparing TikTok Lite with the classic TikTok app, we found several discrepancies between trust and safety features that could have potentially dangerous consequences in the context of elections and public health.
Our research revealed TikTok Lite lacks basic protections that are afforded to other TikTok users, including content labels for graphic, AI-generated, misinformation, and dangerous acts videos. TikTok Lite users also encounter arbitrarily shortened video descriptions that can easily eliminate crucial context.
Further, TikTok Lite users have fewer proactive controls at their disposal. Unlike traditional TikTok users, they cannot filter offensive keywords or implement screen management practices.
Our findings are concerning and reinforce a pattern of double standards. Technology platforms have a history of neglecting users outside of the US and EU, where there is markedly less potential for constraining regulation and enforcement. As part of our research, we discuss the implications of this pattern and also offer concrete recommendations for TikTok Lite to improve.
Secure and private AI processing in the cloud poses a formidable new challenge. To support advanced features of Apple Intelligence with larger foundation models, we created Private Cloud Compute (PCC), a groundbreaking cloud intelligence system designed specifically for private AI processing. Built with custom Apple silicon and a hardened operating system, Private Cloud Compute extends the industry-leading security and privacy of Apple devices into the cloud, making sure that personal user data sent to PCC isn’t accessible to anyone other than the user — not even to Apple. We believe Private Cloud Compute is the most advanced security architecture ever deployed for cloud AI compute at scale.
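As a rough conceptual sketch of that kind of guarantee, one ingredient of such a design is that the client only releases personal data to a server node whose attested software measurement matches a publicly logged release. The sketch below is our own hedged illustration of that idea, not Apple's implementation; the measurement format, log, and function names are hypothetical.

```python
import hashlib
import hmac


def measurement(software_image: bytes) -> str:
    # Hash of a node's software image, as it might appear in a public
    # transparency log of released builds (hypothetical format).
    return hashlib.sha256(software_image).hexdigest()


def node_is_trusted(attested_measurement: str, transparency_log: set[str]) -> bool:
    # Release personal data only to a node whose attestation matches a
    # publicly logged release; constant-time comparison avoids timing leaks.
    return any(
        hmac.compare_digest(attested_measurement, logged)
        for logged in transparency_log
    )


if __name__ == "__main__":
    log = {measurement(b"pcc-release-1"), measurement(b"pcc-release-2")}
    claim = measurement(b"pcc-release-1")
    if node_is_trusted(claim, log):
        print("ok: encrypt the request to this node's attested public key")
    else:
        print("refuse: do not send personal data to an unverified node")
```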