AI for Automation
2026-03-17 · robotics · open-source · Python · AI agents · natural language control · DimOS

Say "Go to the Kitchen" and the Robot Moves

DimOS, an open-source project that dramatically lowers the barrier to robot programming, is trending on GitHub. Previously, developers had to learn the complex ROS (Robot Operating System), but DimOS lets you control quadruped robots with just a few lines of Python and natural language commands...


Until now, programming a robot required years of learning a complex system called ROS (Robot Operating System). Knowledge of C++ and the Linux kernel was a baseline requirement, and connecting a single sensor could take days.

DimOS is an open-source robot operating system that replaces all of that with just a few lines of Python. Say "go to the kitchen" and the robot finds its own way there. Ask "is there a red cup on the desk?" and it checks with its camera and answers.

DimOS agent control interface — 3D environment mapping and real-time monitoring

Control Robots with Python Alone — No ROS Required

The core value proposition of DimOS is simple: dramatically lower the barrier to entry for robot development.

Traditional Approach (ROS): Hundreds of lines of C++ → build configuration → inter-node communication setup → sensor calibration → testing... (weeks to months)

DimOS Approach: Install Python → uv pip install 'dimos[base,unitree]' → give natural language commands → robot moves (minutes to hours)

The list of currently supported hardware is equally impressive.

Quadruped Robots — Unitree Go2 Pro/Air (stable support), Unitree B1 (experimental)

Humanoids — Unitree G1 (beta)

Robot Arms — Xarm, AgileX Piper (beta)

Drones — MAVLink-compatible aircraft, DJI Mavic (alpha)

AI Becomes the Robot's Eyes and Memory

What makes DimOS more than a simple "remote control app" is that an AI agent serves as the robot's brain.

DimOS perception system — 3D point cloud mapping and automatic object detection

The image above shows DimOS's perception system in action. On the left is a 3D map the robot built of its surroundings, and on the bottom right, you can see it automatically detecting and identifying beverage bottles on a convenience store shelf. Each object displays a confidence score (0.677–0.773), enabling the robot to accurately execute commands like "bring me a Coca-Cola."
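The idea behind those confidence scores can be sketched in a few lines of plain Python: given a set of labeled detections, act only on the highest-confidence match for the requested object and ignore low-confidence hits. This is a generic illustration of the pattern, not DimOS's actual perception API; the names and threshold are assumptions.

```python
# Generic sketch: pick the best detection for a requested object,
# ignoring low-confidence hits. Illustrative only -- not the DimOS API.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    confidence: float

def best_match(detections, target, min_conf=0.6):
    """Return the highest-confidence detection whose label matches, or None."""
    candidates = [d for d in detections
                  if d.label == target and d.confidence >= min_conf]
    return max(candidates, key=lambda d: d.confidence, default=None)

# Detections like those on the convenience-store shelf in the screenshot
shelf = [
    Detection("coca-cola", 0.773),
    Detection("coca-cola", 0.677),
    Detection("sprite", 0.701),
]
match = best_match(shelf, "coca-cola")  # the 0.773 detection wins
```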

Here's a summary of the core AI capabilities.

🗺️ SLAM Navigation — The robot builds its own map as it moves, automatically avoiding obstacles and navigating to its destination

👁️ Vision AI — Recognizes objects via camera and answers questions like "what's that over there?" in natural language (powered by a vision-language model)

🧠 Spatial Memory — The robot remembers what it has seen. Ask "where were the keys I saw in the living room earlier?" and it tells you the location

🔌 MCP Integration — Supports MCP (a standard that lets AI connect to and use external tools), allowing you to directly connect AI systems like ChatGPT or Claude to your robot
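To make the spatial-memory idea concrete, here is a minimal self-contained sketch of a store that remembers where each object was last seen, so a question like "where were the keys?" can be answered. The class and method names are hypothetical; DimOS's actual memory is a 3D spatiotemporal map, not a flat dictionary.

```python
# Minimal sketch of a spatial-memory store: remember where each object
# was last seen. Hypothetical names -- not DimOS internals.
from time import time

class SpatialMemory:
    def __init__(self):
        self._sightings = {}  # label -> (location, timestamp)

    def observe(self, label, location):
        """Record that an object was just seen at a location."""
        self._sightings[label] = (location, time())

    def recall(self, label):
        """Return the last known location of an object, or None."""
        entry = self._sightings.get(label)
        return entry[0] if entry else None

memory = SpatialMemory()
memory.observe("keys", "living room table")
memory.observe("red cup", "desk")
location = memory.recall("keys")  # "living room table"
```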

DimOS spatial memory system — the robot remembers explored environments in 3D

Get Started in 5 Minutes

One particularly appealing feature is that you can try it in simulation mode even without a physical robot.

# 1. One-line install
curl -fsSL https://raw.githubusercontent.com/dimensionalOS/dimos/main/scripts/install.sh | bash

# 2. Launch Go2 robot in simulation
dimos --replay run unitree-go2

# 3. Send a natural language command
dimos agent-send "go to the kitchen"

# 4. Run in background
dimos run unitree-go2 --daemon

If you prefer direct control with Python code, you can do this instead.

from dimos.robot import UnitreeGo2
from dimos.agents import NavigationAgent

robot = UnitreeGo2()            # connect to the quadruped
agent = NavigationAgent(robot)  # attach the AI navigation agent
agent.go_to("kitchen")          # natural-language destination
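Conceptually, the agent layer in a setup like this maps a natural-language command onto a registered robot skill. The toy dispatcher below sketches that routing with simple keyword matching; it is purely illustrative (DimOS uses an AI agent for interpretation, and every name here is made up).

```python
# Toy sketch of an agent dispatch layer: route a natural-language command
# to a registered skill. Hypothetical -- DimOS interprets commands with an
# AI agent, not keyword matching.
class CommandAgent:
    def __init__(self):
        self._skills = {}

    def skill(self, keyword):
        """Decorator: register a handler for commands containing keyword."""
        def register(fn):
            self._skills[keyword] = fn
            return fn
        return register

    def send(self, command):
        for keyword, fn in self._skills.items():
            if keyword in command.lower():
                return fn(command)
        return "unknown command"

agent = CommandAgent()

@agent.skill("go to")
def navigate(command):
    destination = command.lower().split("go to", 1)[1].strip()
    return f"navigating to {destination}"

result = agent.send("Go to the kitchen")  # "navigating to the kitchen"
```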

After GTC 2026: Could This Become the "Android" of Robotics?

The timing of this project's rise to prominence is striking. At NVIDIA GTC 2026 this week, CEO Jensen Huang declared "the age of AI robotics has arrived" and announced a wave of robotics partnerships with Disney, Samsung, and Foxconn.

The problem was that while robot hardware was flooding the market, the software barrier to entry remained high. ROS was built in 2007, and its integration with modern AI tools has been far from seamless. DimOS targets this exact gap.

📊 DimOS Status (as of March 17, 2026)

• GitHub Stars: 1,600 (+393 today)

• Forks: 241

• Commits: 568

• Latest Version: v0.0.11 — agent-native architecture, drone support, spatiotemporal memory, and fleet (multi-robot) control added

• License: Open source (free)

Who Should Try This?

Robotics beginners can use DimOS to get a feel for robot programming before diving into ROS. With simulation mode, you can experiment without owning a physical robot.

AI developers can rapidly prototype robot applications using the Python and LLMs (large language models like ChatGPT) they already know. Thanks to MCP support, you can even control robots conversationally from Claude Code.

In education, DimOS serves as an excellent teaching tool for giving students hands-on experience with the combination of AI and robotics. With the Unitree Go2 Air priced at around $1,500, it's well within reach for university labs and makerspaces to consider adopting.

It's worth noting that at v0.0.11, the project is still in its early stages, and drone and humanoid support remain at alpha/beta levels. But whether DimOS can do for the robotics market what Android did for smartphones is a question well worth watching.

