
Android AI app exposes nearly 2m user files – including private videos



Image: Cybernews

A popular Android AI application has left millions of private user files exposed, allowing anyone with the correct link to view private videos and photos without a password.

Researchers from Cybernews discovered that “Video AI Art Generator & Maker,” an app designed to transform media using artificial intelligence, suffered from a critical server misconfiguration. The lapse highlights the growing privacy risks associated with the rapid rise of AI-powered creative tools.

The security failure centered on a misconfigured Google Cloud Storage bucket that lacked any form of authentication. Because the bucket was left open, every piece of media uploaded to the app since its launch in June 2023 was publicly accessible.
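To illustrate what "lacking any form of authentication" means in practice, the sketch below shows how anyone could enumerate an anonymously readable Cloud Storage bucket using its public JSON API, with no credentials at all. This is a minimal illustration, not the researchers' method; the bucket name used is hypothetical.

```python
# Minimal sketch: enumerating a publicly readable Google Cloud Storage bucket
# via the unauthenticated JSON API. A properly secured bucket returns
# HTTP 401/403 for this request; a misconfigured one returns object listings.
import json
import urllib.request


def public_listing_url(bucket: str, max_results: int = 10) -> str:
    """Build the JSON API URL that lists a bucket's objects anonymously."""
    return (
        f"https://storage.googleapis.com/storage/v1/b/{bucket}/o"
        f"?maxResults={max_results}"
    )


def list_public_objects(bucket: str) -> list[str]:
    """Return object names if the bucket allows anonymous reads.

    Raises urllib.error.HTTPError when access is properly restricted.
    """
    with urllib.request.urlopen(public_listing_url(bucket)) as resp:
        data = json.load(resp)
    return [item["name"] for item in data.get("items", [])]


if __name__ == "__main__":
    # Hypothetical bucket name, for illustration only.
    print(list_public_objects("example-exposed-bucket"))
```

With uniform bucket-level access and no `allUsers` IAM binding, the same request fails with an access error, which is why a single missing access control is enough to expose every object in the bucket.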

In total, the exposed bucket contained approximately 8.27 million media files, creating a massive digital footprint of sensitive user data.

Millions of private memories at risk

The breach is particularly severe because it involves nearly 2 million original, private files uploaded by users from their personal devices. Specifically, the leak includes over 1.57 million private images and more than 385,000 personal videos.

Beyond these original uploads, the bucket also exposed millions of AI-generated assets, including 2.87 million generated videos, 2.87 million images, and over 386,000 audio files.

The app was developed by Codeway Dijital Hizmetler Anonim Sirketi, a firm registered in Turkey. While the developers have since secured the bucket, the exposure affects anyone who has used the application to generate AI art over the past several years.

The leak is made more striking by the app’s own privacy documentation, which explicitly warns that shared information “cannot be regarded as 100% secure” and may be subject to unauthorized access.

Legal experts suggest these disclaimers may fall short of strict international privacy standards, such as Europe’s General Data Protection Regulation (GDPR), which mandates that companies provide “material and verifiable” security for user data.

For the affected users, the primary risks include targeted phishing, identity theft, or the potential for private videos to be repurposed for malicious “deepfake” content.

Security researchers advise that users of AI editing tools should regularly audit their app permissions and remain cautious about uploading highly personal or identifying content to cloud-based platforms that do not guarantee end-to-end encryption.

This is not the first time the company’s apps have leaked user data. An independent security researcher reportedly discovered that another Codeway app, Chat & Ask AI, had a misconfigured Google Firebase backend, through which he says he accessed roughly 300 million messages tied to more than 25 million users.

For more information, see the full report: https://cybernews.com/security/android-ai-app-photo-video-editor-leak/


For the latest tech stories, go to TechDigest.tv

