cybernews.com
Paulina Okunytė - Journalist
Published: 29 September 2025
Last updated: 29 September 2025
An EU privacy watchdog has filed a complaint against an AI company for selling creepy “reputation reports” that scrape anyone's sensitive information online.
Noyb, a non-profit organization that enforces data protection and privacy rights in Europe, has filed a complaint against a Lithuania-based AI company.
According to the complaint, the company has been scraping social media data and forming reports that included personality traits, conversation tips, photos taken from internet sources, religious beliefs, alcohol consumption, toxic behaviour, negative press, and flagged people for “dangerous political content” or “sexual nudity.”
Whitebridge AI markets its “reputation reports” as a way to “find everything about you online.”
The company’s ads seem to target the people it profiles, using slogans like “this is kinda scary” and “check your own data.” However, anyone willing to pay for a report could get information about a profiled person without informing them.
“Whitebridge AI just has a very shady business model aimed at scaring people into paying for their own, unlawfully collected data. Under EU law, people have the right to access their own data for free,” said Lisa Steinfeld, data protection lawyer at noyb.
When complainants represented by the NGO asked to see their reports, they received no meaningful response; noyb ultimately had to purchase the reports itself.
According to the noyb representatives who downloaded the reports, the outputs are largely low quality and appear to be randomly generated AI text based on “unlawfully scraped online data.”
Some of the complainants’ reports contained false warnings for “sexual nudity” and “dangerous political content” — categories that touch on specially protected sensitive data under Article 9 of the GDPR.
In its privacy notice, Whitebridge claims that scraping user data is legal thanks to its “freedom to conduct a business.”
The company claims to only process data from “publicly available sources.”
According to the noyb representative, most of this data is taken from social network pages that are not indexed by or discoverable through search engines. Under EU law, entering information into a social networking application does not amount to making it “manifestly public.”
Under the GDPR, any individual can request information about their data and ask for its removal. Both complainants whom noyb represents filed an access request under Article 15 GDPR but did not receive the required response from Whitebridge.ai.
When the complainants asked for corrections, Whitebridge demanded a qualified electronic signature — a requirement that, noyb states, is found nowhere in EU law.
The watchdog demands that Whitebridge comply with the complainants’ access requests and fix the false data in the reports on them.
“We also request the company to comply with its information obligations, to stop all illegal processing, and to notify the complainants of the outcome of a rectification process. Last but not least, we suggest that the authority impose a fine to prevent similar violations in the future,” wrote noyb in the statement.
Cybernews reached out to Whitebridge.ai for comment but has yet to receive a response. We will update the article when we do.