In a joint column by IT Park Uzbekistan and The Tech, Sanzhar shared how his journey unfolded—from a boy herding sheep under the stars of Samarkand to the founder of a startup in Silicon Valley that raised $2.5 million to build a foundation for global robotics.
About me
I was born in Samarkand — a city where childhood passes beneath endless stars and where dreams somehow feel closer. From a very early age, I was drawn to engineering: I would take apart everything I could, trying to understand how mechanisms worked. Later, this curiosity turned into a passion for computers and robots.
After finishing school, I moved to South Korea to study at KAIST — one of the world’s leading universities in engineering and robotics. There, I studied computer science with a focus on robotics and built autonomous systems and drones. That experience became my starting point: I realized I wanted to dedicate my entire life to humanoid robots and artificial intelligence.
During my studies at KAIST, I focused primarily on AI and robotics. During academic semesters, I immersed myself in theory, while during breaks I worked as an intern in various companies. In Korea, there are two long breaks every year — summer and winter — each lasting about two and a half months. This gave me ample time to work on real-world engineering projects.
First steps in the profession
One of my first internships was at StoneLab Inc., where I participated in the development of a medical diagnostic application based on computer vision. My work included data annotation, neural network training, and backend integration. Around the same time, I built an API for biometric authentication — my first system used by real users.
My next major milestone was Macroact Inc., where I worked on autonomous navigation for quadruped robots, optimized algorithms, tuned robot behavior, and ran simulations in Gazebo — a virtual environment for testing robots without risking real hardware. This became a powerful launchpad toward more complex robotic systems.
While studying at KAIST, I also worked as a researcher at the university’s AVE Lab and IRiS Lab. At AVE Lab, I developed algorithms for autonomous vehicles. At IRiS Lab, I worked on perception systems for self-driving cars — from processing LiDAR data to lane detection. This stage gave me deep insight into how robots “see” the world.

Industry experience
After my research work, I transitioned into industry. At Digitrack Inc., I worked on autonomous mobile robots (AMRs) for warehouse automation. Later, at Raion Robotics, I focused on quadruped robot locomotion. It was a real challenge to teach a robot to walk stably, adapt to terrain, and perform complex maneuvers.
Education and work in the U.S.
I then moved to Atlanta and enrolled at Georgia Institute of Technology — one of the strongest robotics and AI centers in the world.
My education in the U.S. became a turning point. I gained deep fundamental knowledge in robot control, neural networks, perception systems, and autonomous behavior. I took courses on robotic manipulators, motion optimization, machine learning, and real-time algorithms — everything that forms the backbone of modern robotics.
In parallel, I worked as a robotics engineer at Atlanta Ventures, where I developed mobile robots for security and inspection tasks. That was when I truly felt how theory turns into real code, algorithms, and moving machines.
Later, I continued my research at Georgia Tech, focusing on humanoid loco-manipulation — the ability of humanoid robots to coordinate walking, lifting objects, opening doors, and performing complex sequential tasks.
That was the moment of realization: large language models achieved their breakthrough thanks to massive amounts of internet data. But robots have no such “internet” — they are starving for physical data. I realized the only scalable way to generate high-quality training data was through telepresence — when a human directly operates the robot. Yet no one in the world was doing this in a systematic, industry-wide way.
That idea changed my life. In June 2025, Humanola was born.
The problem Humanola solves
Today, robot training data is isolated inside individual companies and laboratories. Everyone builds their own small datasets, no one shares them, and the entire industry stagnates. It is as if every AI system in 2023 had been trained on its own small, private copy of the internet.
Before Humanola, every company had to build its own data collection infrastructure — expensive, slow, and inefficient. Many relied on limited datasets, which severely slowed down progress. There was no unified ecosystem that could connect data and provide access at scale.
Our team envisions a future where robots perform dangerous, monotonous, and physically demanding work, while humans focus on creativity and strategy. The only real barrier between today and that future is the lack of data for physical AI. By solving this problem, we accelerate that future by years. Humanola develops a platform for remote robot control and a complete infrastructure for collecting, processing, and analyzing physical data.
The platform consists of two key components:
— operators use VR headsets to control robots in real time with minimal latency;
— all session data is automatically collected, cleaned, labeled, and converted into ready-to-use datasets for AI training.
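The second component — turning raw teleoperation sessions into training data — can be sketched roughly as a state–action pairing pipeline. Everything below is illustrative: the type names, fields, and cleaning rules are my assumptions, not Humanola's actual implementation.

```python
from dataclasses import dataclass

# Hypothetical sketch of the session-to-dataset flow described above.
# Field names and cleaning rules are assumptions for illustration only.

@dataclass
class Frame:
    timestamp_ms: int
    joint_positions: list[float]   # robot joint state at this instant
    operator_command: list[float]  # VR controller input at this instant
    valid: bool                    # False if sensors dropped out

def clean(frames: list[Frame]) -> list[Frame]:
    """Drop invalid frames and sort by time (a stand-in for real cleaning)."""
    return sorted((f for f in frames if f.valid), key=lambda f: f.timestamp_ms)

def to_training_pairs(frames: list[Frame]) -> list[tuple[list[float], list[float]]]:
    """Pair each observed robot state with the operator's command: (state, action)."""
    return [(f.joint_positions, f.operator_command) for f in frames]

# A toy three-frame session: one frame has a sensor dropout and is removed.
session = [
    Frame(20, [0.1, 0.2], [0.0, 0.1], True),
    Frame(10, [0.0, 0.0], [0.1, 0.0], True),
    Frame(30, [0.2, 0.3], [0.0, 0.0], False),
]
dataset = to_training_pairs(clean(session))
```

The key design point is that every teleoperation session doubles as imitation-learning data: the operator's commands become the action labels for the states the robot observed.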
As a result, companies receive a powerful tool to dramatically accelerate the development of their robots. Humanola is the only independent platform not tied to any specific hardware and providing a full cycle — from robot control to full-scale data processing. Other solutions either require building proprietary infrastructure or lack this level of integration.
The biggest challenge was achieving ultra-low-latency control over long distances and integrating multiple types of hardware. This required deep technical optimization across all layers — from the VR interface to network protocols and cloud infrastructure.
We solved this using our own real-time networking stack based on UDP with error correction and adaptive streaming, similar to top-tier VR games. For long-distance control, we deployed regional edge servers to keep latency within 80–100 ms even between continents.
A breakthrough came when we combined private 5G networks in test environments with GPU-accelerated video encoding and decoding on both operator and robot sides. This enabled stable sub-100 ms control — for example, from Tashkent to San Francisco. We proved that global, scalable telepresence is real.
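To see why staying under 100 ms is hard, it helps to think of the round trip as a latency budget: every stage from camera capture to the returning control command consumes part of it. The per-stage numbers below are my own illustrative assumptions, not Humanola's measurements; only the sub-100 ms target comes from the article.

```python
# Rough end-to-end latency budget for VR teleoperation between continents.
# All per-stage figures are assumed for illustration; the 100 ms target
# is the only number taken from the article.

BUDGET_MS = {
    "capture_and_gpu_encode": 8,    # camera readout + hardware video encode
    "uplink_to_edge": 15,           # robot -> nearest regional edge server
    "edge_to_operator": 45,         # intercontinental hop between regions
    "gpu_decode_and_display": 10,   # hardware decode + VR headset refresh
    "command_return_path": 15,      # operator input back to the robot
}

def total_latency(budget: dict[str, int]) -> int:
    """Sum all stages of the round trip, in milliseconds."""
    return sum(budget.values())

def within_target(budget: dict[str, int], target_ms: int = 100) -> bool:
    """Check whether the budget fits the teleoperation latency target."""
    return total_latency(budget) <= target_ms
```

With these assumed figures the round trip comes to 93 ms, which shows why each optimization the article mentions matters: GPU encoding/decoding attacks the endpoint stages, while regional edge servers shrink the long intercontinental hop.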
What I am most proud of:
— We have real clients who report radical improvements in their workflow. Today, we have two B2B customers — both humanoid robot developers — actively using our platform for telepresence and data collection. Their development speed increased by 30–40%.
— A rapid transition from idea to an industrial-grade platform. Companies can instantly deploy robots and control them from anywhere in the world without long development cycles.
— A strong, mission-driven team. Today, our team consists of five people: three co-founders and two engineers. I lead the company as CEO, overseeing both technology and strategy.
We completed a $2.5 million seed round from U.S. venture funds and angel investors. First clients are already deploying the platform.
The U.S., Korea, and Uzbekistan
The U.S. is the global leader in building the “brain” of robotics — artificial intelligence and software that enables robots to think and act. Korea has strong teams and consistently pushes hardware innovation, though its market scale is smaller. Uzbekistan is only at the beginning of its robotics journey.
We aim to integrate Humanola into all major humanoid robot platforms in the U.S. and globally, becoming an infrastructure partner for industry leaders such as Nvidia, Tesla, and Google in building physical AGI. Over the next 12 months, we plan large-scale deployments through robot manufacturers. We are already hiring telepresence operators from Uzbekistan who will remotely control robots operating in the U.S. and other countries.
This opens massive economic opportunities — thousands of new remote jobs in Uzbekistan while simultaneously scaling the volume of physical AI data. Operators can work from anywhere, robots can act anywhere, and all data will return to improve future models. It is a win for the global robotics industry, emerging job markets, and the future of physical AI.