Pivot

Has AI set its sights on your privacy?

by Gulnoza Sobirova
June 18, 2025
in SaaS & AI

As artificial intelligence (AI) technologies become ever more deeply woven into our lives, personal data security has emerged as a pressing global concern. Virtually every algorithm, digital service, and assistant collects, analyzes, and in many cases stores vast amounts of user data.

From mobile phones and messaging apps to medical tools and even music recommendations, AI systems are constantly at work. Yet most users remain unaware of how much personal information they hand over to these systems every day. AI may bring convenience into our lives, but few stop to consider how their data is collected and used behind the scenes.

How is our data collected?

AI systems typically gather users’ digital footprints—browsing behavior, app usage patterns, voice commands, images, and even facial expressions. With facial recognition technologies, even details such as a person’s location, health status, or emotional state can be inferred.

In 2021, the U.S.-based company Clearview AI came under heavy criticism after it was discovered that the firm had collected millions of facial images from social media without user consent and shared them with law enforcement. Both the European Union and Canadian authorities launched investigations into this matter. Although Clearview AI claimed its actions were aimed at enhancing security, its practices were deemed violations of privacy norms.

When AI “leaks” information

In March 2023, a bug in OpenAI's popular AI service ChatGPT exposed some users' chat titles and partial billing information to other users. The company acknowledged the security lapse and temporarily took the service offline.

Another incident came to light in 2019, when Amazon's Alexa voice assistant was found to be recording snippets of user conversations, some of them captured accidentally. These recordings were reviewed by Amazon employees as part of efforts to improve the system. The company later restricted employee access to such data.

Are medical records safe?

In healthcare, AI is accelerating diagnostic processes—but this often puts sensitive patient information at risk. In 2016, DeepMind, a subsidiary of Google, collaborated with the UK’s National Health Service (NHS) to build a medical AI platform. During this collaboration, health data of 1.6 million patients was transferred to Google without proper authorization. The incident was later condemned as a privacy breach and sparked widespread public debate.

The “consent” problem in AI

Consent is widely cited as a legal basis for data use. Many platforms require users to click a simple “I agree” button—after which their data is collected, stored, and processed by AI algorithms. However, this consent is often uninformed, superficial, or effectively coerced.

Mobile apps—including messaging, weather, and even flashlight applications—commonly request permissions to access location, contacts, microphone, camera, and files. Frequently, these permissions have no direct relevance to the app’s functionality. Yet without granting them, the user is denied access to the service—turning consent into a form of conditional compulsion.

A 2021 report by the Mozilla Foundation revealed that 80% of mobile apps collect excessive data and fail to clearly inform users how that data is utilized.

Furthermore, many tech companies that cite user consent often obtain it without meaningful disclosure. In 2019, Facebook admitted to collecting the email contact lists of over 1.5 million users without their knowledge. During account registration, users were asked to enter their email passwords—enabling the system to automatically scrape and store their contact data.

Similarly, Google’s Location History service was found to track users’ geolocation even when the feature was turned off. An Associated Press investigation revealed that both Android and iPhone users were being secretly monitored by certain services despite disabling location tracking.

What about Uzbekistan?

In Uzbekistan, many government services have been digitized—through platforms such as my.gov.uz, id.gov.uz, and Safe City systems. These platforms maintain extensive user data repositories. In 2022, a technical glitch on the Unified Portal of Interactive Public Services exposed some users’ phone numbers and personal details publicly.

As of now, Uzbekistan lacks specific legislation regulating the AI sector. The 2019 Law "On Personal Data" outlines general principles but does not address AI-specific risks from a legal standpoint.

What can be done?

In today’s digital world, rejecting AI entirely is not feasible. However, users must become more aware of the data they share and exercise caution. Governments, too, must enact stronger data protection laws and develop clear ethical and legal norms for AI technologies.

International organizations such as the UN and the European Union are already developing AI safety frameworks. Without such measures, the risk of entering an era of “digital servitude” becomes increasingly real.

AI has the potential to be a source of opportunity, but used irresponsibly or without limits, it can just as easily become a source of risk. Personal data is among the most valuable assets of the modern age.

Global experience shows that independent oversight, robust legal regulation, and adherence to ethical standards are crucial for AI-based services. Otherwise, technology may cease to empower individuals and instead become a tool of surveillance and manipulation.

Prepared by: Mahliyo Hamid

© 2025 Pivot
