Physical AI: the robot revolution starts now
For years we’ve been talking to AI through screens. Typing prompts, reading outputs, copying and pasting between tools. All of it has been virtual. Pixels on glass. But something is shifting. AI is stepping out of the screen and into the physical world. Through robots, drones, self-driving vehicles, and hardware that can see, move, and interact with the world around us.
This is the next chapter. And it might be the biggest one yet.
A slow start
Humanoid robots are not new. Honda unveiled ASIMO back in 2000. It could walk, climb stairs, and wave at crowds. Sony had AIBO, the robot dog that became a collector’s item. Universities around the world poured decades of research into bipedal locomotion, grasping, and navigation.
But for a long time, robots were science projects. Impressive demos at trade shows. Clunky prototypes in labs. They could walk across a stage but couldn’t figure out what to do when they got to the other side.
The hardware was actually decent. Motors, sensors, actuators. All of that progressed steadily. The problem was the brain. Robots could move but they couldn’t think. They couldn’t adapt. They couldn’t understand context or recover from the unexpected. Every action had to be pre-programmed, and the real world is far too messy for that.
So for about two decades, progress was slow. Robots stayed in labs and factories doing repetitive, structured tasks. The revolution kept getting pushed back.
AI changes everything
What changed is exactly what you’d expect: AI got good enough.
Modern AI brought vision models that can actually see and understand what’s in front of them. Large language models gave robots the ability to understand instructions in natural language. Reinforcement learning taught them to figure things out through trial and error, learning to walk, grasp objects, and navigate spaces without being explicitly programmed for every scenario.
The missing piece was never the mechanics. It was the intelligence. We've known how to build robot bodies for years. We just couldn't give them a brain that worked in the real world. Now we can. And that changes everything about the trajectory of physical AI.
When Boston Dynamics showed Atlas doing parkour, people were amazed. But the real breakthrough isn't the backflip. It's that a robot can now walk into an unfamiliar room, understand what it sees, hear a spoken instruction, figure out how to accomplish the task, and recover when something goes wrong. That's intelligence. And it's what makes everything else possible. I wrote about this shift back in 2023 in "The Robots Are Coming," but things have accelerated far beyond what I expected.
Figure's humanoid robots showing what's possible today.
The state of humanoid robots in 2026
This is where it gets really interesting. The humanoid robot landscape has exploded, and most people haven’t noticed yet. If you want a taste of where things are, watch Unitree’s robots performing fully autonomous kung fu at China’s 2026 Spring Festival Gala. Nearly 700 million people saw that live.
Figure is another one to watch. Backed by major investors and partnered with OpenAI, their robots are already working in BMW factories. The Figure 02 combines advanced manipulation with conversational AI, and they’re moving fast.
China is leading on volume and price. Unitree has shipped around 1,400 units across their lineup. Their G1 model runs about $16,000. Their R1, designed for home use, starts at roughly $5,900. That’s less than a used car. The X1 Neo is expected to be the first humanoid robot sold directly to consumers, with deliveries starting this year, likely teleoperated initially but still a milestone.
Factories are already using them. UBTECH has deployed robots at Audi, BYD, and Foxconn factories, with over 500 units delivered in 2025. These aren’t demos. They’re working production lines.
AgiBot in Shanghai has mass-produced over 1,500 units, with models priced between $14,000 and $55,000 depending on capability. Dobot has a model at $27,500 and is in mass production. Fourier's GR-1 sits at the higher end, around $150,000, but pushes the boundary on capability.
Tesla’s Optimus is said to go on sale this year. Whether it ships in volume remains to be seen, but the fact that a company with Tesla’s manufacturing scale is building humanoid robots matters enormously.
Open source is emerging too. X-Humanoid’s Tiangong platform is open source, and multiple companies are releasing open SDKs. This is significant because it means anyone can start building on top of these platforms. The same pattern that accelerated software AI is starting to play out in physical AI.
And the prices are dropping fast. A few years ago, a capable humanoid robot cost $150,000 or more. Now you can get one for under $16,000. According to Goldman Sachs, the global humanoid robot market could reach $38 billion by 2035. I think that estimate might be conservative.
When the dam breaks
Right now, most robots are in factories and research labs. Controlled environments where the tasks are relatively structured. But the dam is about to break.
When it does, we’ll see robots everywhere. In homes. On streets. In gardens. In restaurants. In hospitals and elder care facilities. On construction sites. In warehouses. On battlefields. In environments too dangerous for humans. Underground, underwater, in disaster zones.
The transition from “robots are a curiosity” to “robots are everywhere” will be faster than people expect. And the reason is simple: the AI is already there. The intelligence layer that held everything back for decades is now advancing faster than the hardware. Models are improving every few months. Once a robot has a capable enough body, you can keep upgrading its brain with software updates.
Think about how fast smartphones went from novelty to necessity. Robots will follow a similar curve, but potentially faster because the manufacturing infrastructure is already being built at scale in China. And once multiple robots need to work together, coordination becomes the real challenge.
What I see as a designer and product person
This is where my mind goes as someone who thinks about products, experiences, and how people interact with technology.
Manufacturing becomes personal. When robot labor is cheap and available, you can produce things locally and on demand. Custom furniture built in your neighborhood. Personalized products made in small batches. The economics of mass production flip when the labor cost approaches zero.
Service gets extreme. Imagine a restaurant where every table has attentive staff that never gets tired, never has a bad day, and remembers every customer’s preferences perfectly. Or a hotel where your room is prepared exactly the way you like it every single time. When service labor is abundant, the standard of service goes through the roof.
Home life transforms. A robot that constantly tends your garden, keeping it impeccable. One that handles all the cleaning, cooking, laundry, maintenance. Not as a clunky appliance but as something that understands your home and adapts to your routines. The home becomes a managed environment.
Elder care. An always-on companion that helps with medication, mobility, conversation, and daily tasks. One that can call for help when something is wrong. Robots could give elderly people more independence and let them stay in their own homes longer.
Customer service becomes physical. We’ve automated a lot of customer service online. Now imagine that extending into the physical world. A robot in a store that can walk you to what you’re looking for, explain products, handle returns. Not a kiosk. An actual presence.
And here’s the thing that’s hardest to wrap your head around: when work becomes basically free and local, you can do completely new things. Things we can’t even imagine yet. Just like nobody predicted Instagram or Uber when smartphones launched, the most interesting applications of physical AI will be things we haven’t thought of. The platform enables creativity we can’t forecast.
Start building now
If you’ve been following my thinking on AI, you know what I’m going to say. The same philosophy applies here as with software AI: just do things.
You don’t need to wait for the hardware to be perfect. You can start now. Prototype the interactions, the workflows, the use cases. Build the logic in software first. Design the experiences. Think about what a robot-assisted version of your business or life looks like.
You don’t even need hardware to get started. Open source simulation frameworks let you prototype robot behavior entirely in software. Gazebo and ROS 2 are the established stack for robotics simulation. MuJoCo (now open source from DeepMind) is great for physics-based learning and manipulation tasks. NVIDIA Isaac Sim gives you photorealistic environments for testing perception and navigation. Webots is a lightweight cross-platform option that’s easy to pick up. All free, all actively maintained, and all capable enough to build real prototypes before you ever touch hardware.
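Whichever simulator you pick, the logic you're prototyping usually boils down to the same sense-plan-act loop. Here's a minimal sketch in plain Python, with a toy one-dimensional world standing in for the simulator. Every name and number here is illustrative, not from any of the frameworks above:

```python
# Toy sense-plan-act loop: the same structure you'd wire up against
# Gazebo, MuJoCo, or Webots before ever touching real hardware.

def sense(position: float, target: float) -> float:
    """Measure the error between where the robot is and where it should be."""
    return target - position

def plan(error: float, gain: float = 0.5) -> float:
    """Simple proportional controller: command a velocity toward the target."""
    return gain * error

def act(position: float, velocity: float, dt: float = 0.1) -> float:
    """Integrate the commanded velocity over one timestep."""
    return position + velocity * dt

def run(target: float = 1.0, steps: int = 200) -> float:
    """Drive the toy robot from 0.0 toward the target, stopping when close."""
    position = 0.0
    for _ in range(steps):
        error = sense(position, target)
        if abs(error) < 1e-3:  # close enough: task done
            break
        position = act(position, plan(error))
    return position

print(run())
```

In a real setup, `sense` would read simulated sensor data and `act` would publish motor commands through the simulator's API, but the loop structure carries over unchanged. That's the point of starting in software: you can iterate on the brain while the body is still a stand-in.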
The robot revolution starts this year. Not in ten years. Not in five. The hardware is shipping. The AI is ready. The prices are accessible. The platforms are opening up.
If you’re a designer, start thinking about human-robot interaction. If you’re a developer, start exploring robotics frameworks. If you’re a business owner, start thinking about what becomes possible when physical labor costs drop by an order of magnitude.
Don’t wait for it to be obvious. By then you’re already behind.
The physical world is about to get a lot more intelligent
We spent the last few years making the digital world smarter. AI that writes, codes, designs, analyzes, creates. All incredible. All contained within screens.
Now that intelligence is getting a body. It’s stepping into our world. Into our homes, our streets, our workplaces. And it’s happening right now, not in some distant future.
The physical world is about to get a lot more intelligent. The question is what you’re going to build with it.
