
As artificial intelligence (AI) technologies become ever more deeply integrated into everyday life, the security of personal data has emerged as a pressing global concern. Virtually every algorithm, digital service, and assistant collects, analyzes, and in many cases stores vast amounts of user data.
From mobile phones and messaging apps to medical applications and even music recommendations, AI systems are constantly at work. AI may seem to offer nothing but convenience, yet most users remain unaware of how much personal information they hand over to these systems every day, or of how that data is collected and used behind the scenes.
How is our data collected?
AI systems typically gather users’ digital footprints: browsing behavior, app usage patterns, voice commands, images, and even facial expressions. Combined with facial recognition technologies, such data can reveal a person’s location, health status, or emotional state.
In 2021, the U.S.-based company Clearview AI came under heavy criticism after it was discovered that the firm had collected millions of facial images from social media without user consent and shared them with law enforcement. Both the European Union and Canadian authorities launched investigations into this matter. Although Clearview AI claimed its actions were aimed at enhancing security, its practices were deemed violations of privacy norms.
When AI “leaks” information
In March 2023, a technical malfunction in OpenAI’s popular chatbot ChatGPT exposed some users’ chat history titles and partial billing information to other users. The company acknowledged the security lapse and temporarily took the service offline.
Another incident came to light in 2019, when Amazon’s Alexa voice assistant was found to have recorded snippets of users’ conversations, sometimes without a deliberate wake word. These recordings were reviewed by Amazon employees in the name of improving the system. The company later restricted employee access to such data.
Are medical records safe?
In healthcare, AI is accelerating diagnostics, but this often puts sensitive patient information at risk. In 2016, DeepMind, a subsidiary of Google, partnered with the Royal Free NHS Foundation Trust in the UK to build a medical AI platform. In the course of that collaboration, the health records of 1.6 million patients were transferred to DeepMind without an adequate legal basis. The UK Information Commissioner’s Office later found that the data sharing breached data protection law, and the case sparked widespread public debate.
The “consent” problem in AI
Consent is widely cited as a legal basis for data use. Many platforms require users to click a simple “I agree” button—after which their data is collected, stored, and processed by AI algorithms. However, this consent is often uninformed, superficial, or effectively coerced.
Mobile apps—including messaging, weather, and even flashlight applications—commonly request permissions to access location, contacts, microphone, camera, and files. Frequently, these permissions have no direct relevance to the app’s functionality. Yet without granting them, the user is denied access to the service—turning consent into a form of conditional compulsion.
A 2021 report by the Mozilla Foundation revealed that 80% of mobile apps collect excessive data and fail to clearly inform users how that data is utilized.
Furthermore, many tech companies that cite user consent often obtain it without meaningful disclosure. In 2019, Facebook admitted to collecting the email contact lists of over 1.5 million users without their knowledge. During account registration, users were asked to enter their email passwords—enabling the system to automatically scrape and store their contact data.
Similarly, Google’s Location History feature was found to keep tracking users’ whereabouts even after it was turned off. A 2018 Associated Press investigation showed that several Google services on both Android and iPhone continued to record location data despite users disabling the setting.
What about Uzbekistan?
In Uzbekistan, many government services have been digitized—through platforms such as my.gov.uz, id.gov.uz, and Safe City systems. These platforms maintain extensive user data repositories. In 2022, a technical glitch on the Unified Portal of Interactive Public Services exposed some users’ phone numbers and personal details publicly.
As of now, Uzbekistan has no legislation that specifically regulates the AI sector. The 2020 Law “On Personal Data” sets out general principles but does not address AI-specific risks.
What can be done?
In today’s digital world, rejecting AI entirely is not feasible. However, users must become more aware of the data they share and exercise caution. Governments, too, must enact stronger data protection laws and develop clear ethical and legal norms for AI technologies.
International organizations such as the UN and the European Union are already developing AI safety frameworks. Without such measures, the risk of entering an era of “digital servitude” becomes increasingly real.
AI has the potential to be a source of opportunity, but used irresponsibly or without limits it can just as easily become a source of risk. Personal data is one of the most valuable assets of the modern age.
Global experience shows that independent oversight, robust legal regulation, and adherence to ethical standards are crucial for AI-based services. Otherwise, technology may cease to empower individuals and instead become a tool of surveillance and manipulation.
Prepared by: Mahliyo Hamid