Posts

Motion Planning in 2026: From RRT* to Neural Time Fields (NTFields)

Motion Planning in the Era of Physical AI

In traditional robotics, motion planning was a reactive game of "don't touch the obstacles." Today, as humanoids enter our factories and construction sites, the game has changed. We are moving toward Agentic AI: systems that don't just follow a path but understand the physics of their journey.

1. The Fundamentals: Navigating the Configuration Space

Before a robot can move, it must translate the physical world into a mathematical one.

The Configuration Space (C-Space)

The robot's position is defined by its configuration $q$. The set of all possible $q$ is the configuration space $C$.

- $C_{free}$: the subset of configurations where the robot is not in collision.
- $C_{obs}$: the subset of configurations that lead to a collision.

Mathematically, the goal of motion planning is to find a continuous path $\tau: [0, 1] \to C_{free}$ such that $\tau(0) = q_{start}$ and $\tau(1) = q_{goal}$. Sampling-Based vs. Opt...
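The path definition above can be sketched concretely. Below is a minimal illustration for a hypothetical 2D point robot with circular obstacles (the obstacle list, the straight-line candidate path, and the sampling resolution are all assumptions for this example, not part of any planner from the post): a configuration is checked against $C_{obs}$, and a candidate path $\tau$ is validated by dense sampling over $[0, 1]$.

```python
import numpy as np

# Hypothetical workspace: a 2D point robot with circular obstacles,
# each described as (center_x, center_y, radius). Illustrative values only.
obstacles = [(0.5, 0.5, 0.2), (0.8, 0.2, 0.1)]

def in_c_free(q):
    """True if configuration q = (x, y) lies in C_free (outside every obstacle)."""
    return all(np.hypot(q[0] - cx, q[1] - cy) > r for cx, cy, r in obstacles)

def tau(s, q_start, q_goal):
    """A candidate path tau: [0, 1] -> C; here simply the straight line."""
    return (1 - s) * np.asarray(q_start, float) + s * np.asarray(q_goal, float)

def path_is_valid(q_start, q_goal, n_checks=200):
    """Approximate the continuous condition tau(s) in C_free by dense sampling."""
    return all(in_c_free(tau(s, q_start, q_goal))
               for s in np.linspace(0.0, 1.0, n_checks))

# The diagonal line passes through the obstacle centered at (0.5, 0.5):
print(path_is_valid((0.0, 0.0), (1.0, 1.0)))  # → False
# A path along the bottom edge clears both obstacles:
print(path_is_valid((0.0, 0.0), (1.0, 0.0)))  # → True
```

Dense sampling is only an approximation of the continuous collision-free condition; real planners pair it with a resolution argument or exact continuous collision checking.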

Closing the Loop: Perception and Control for the Franka Emika Panda Robot

In the modern landscape of "Software Defined Hardware," a robot that can only execute pre-programmed, blind trajectories is obsolete. For physical AI to function in dynamic environments, like kitchens, surgical rooms, or unstructured warehouses, robots must be able to "see" their surroundings and adjust their movements in real time. This requires a tightly integrated Perception-Control Pipeline.

In this guide, we break down the fundamentals of visual-motor integration and walk you step by step through setting up a complete perception and control architecture for the industry-favorite Franka Emika Panda robot, using the Perception-Control-Franka-Panda-Robot repository.

1. The Fundamentals: Bridging Vision and Action

To build an autonomous manipulation system, we must bridge two distinct domains of robotics:

The Perception Stack (The "Eyes")

Perception is ho...
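The "see, then act" loop described above can be sketched in a few lines. This is a toy closed-loop sketch, not the repository's actual architecture: `perceive_target` and `send_velocity_command` are hypothetical stand-ins for a real camera pipeline and the Panda's control interface, and the proportional Cartesian controller is an assumed choice for illustration.

```python
import numpy as np

# Hypothetical stand-ins for the real perception and robot interfaces; in a
# real setup these would wrap a camera driver and the Panda's control API.
def perceive_target():
    """Perception stack (the "eyes"): target position in the robot base frame."""
    return np.array([0.4, 0.1, 0.3])  # placeholder detection, in meters

def send_velocity_command(state, v, dt=0.02):
    """Control stack (the "hands"): here we just integrate the commanded
    Cartesian velocity to simulate the end effector moving."""
    state["ee_pos"] = state["ee_pos"] + v * dt

def control_loop(state, kp=2.0, tol=1e-3, max_steps=2000):
    """Closed perception-control loop: see, compute the error, act, repeat."""
    for _ in range(max_steps):
        error = perceive_target() - state["ee_pos"]
        if np.linalg.norm(error) < tol:
            return True  # converged onto the perceived target
        send_velocity_command(state, kp * error)  # proportional controller
    return False

state = {"ee_pos": np.array([0.3, 0.0, 0.5])}
print(control_loop(state))  # → True
```

The essential point is that perception runs inside the control loop, so a moving target simply changes the error term on the next iteration instead of invalidating a pre-planned trajectory.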

Graph-based SLAM Guide 2026: Fundamentals, Optimization, and Examples

Master Graph-based SLAM for modern robotics. Learn about Factor Graphs, Loop Closure, and why Pose-Graph Optimization is the backbone of autonomous vehicle mapping in 2026.

1. Introduction: From Filtering to Smoothing

The fundamental shift in SLAM over the last decade has been the move from Filtering (like the EKF) to Smoothing (graph-based).

- Filtering: maintains only the current state. Once a landmark is passed, its correlation to the robot's starting point is often simplified or lost to save memory.
- Graph-based (Smoothing): maintains the entire history of the robot's trajectory. It treats every pose and landmark as a node in a massive web, allowing the system to "smooth out" errors across the entire path.

This "Global Consistency" is what allows a robot to map a 10-kilometer warehouse without the map slowly curving or drifting into nonsense.

2. Fundamentals: The Anatomy of the...
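The "smoothing out errors across the entire path" idea can be shown on a deliberately tiny example. The sketch below, under assumed illustrative measurements, builds a 1D pose graph with three odometry edges and one loop-closure edge; because residuals are linear in 1D, a single least-squares solve distributes the 0.1 m loop discrepancy evenly over all four edges, which is exactly the global consistency a filter cannot recover after the fact.

```python
import numpy as np

# Toy 1D pose graph over poses x0..x3. Each edge (i, j, z) encodes the
# measurement x_j - x_i ≈ z. Values are illustrative, not from a real dataset.
edges = [
    (0, 1, 1.0),   # odometry
    (1, 2, 1.1),   # odometry (slightly biased step)
    (2, 3, 1.0),   # odometry
    (3, 0, -3.0),  # loop closure: the robot recognizes its starting point
]

def optimize(edges, n_poses):
    """Least-squares pose-graph optimization with x0 anchored at the origin.

    Residual per edge: r = (x_j - x_i) - z. In 1D the problem is linear, so
    one solve "smooths" the loop-closure error over the whole trajectory."""
    A = np.zeros((len(edges) + 1, n_poses))
    b = np.zeros(len(edges) + 1)
    for k, (i, j, z) in enumerate(edges):
        A[k, j] += 1.0
        A[k, i] -= 1.0
        b[k] = z
    A[-1, 0] = 1.0  # prior row fixing the gauge freedom: x0 = 0
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

poses = optimize(edges, n_poses=4)
print(np.round(poses, 3))  # each edge absorbs 0.025 of the 0.1 loop error
```

Raw odometry would place the final pose at 3.1 while the loop closure insists on 3.0; the optimized trajectory lands at 3.025, with the correction spread across every edge rather than dumped onto the last one. In 2D/3D the residuals become nonlinear in the rotations, so libraries such as g2o or GTSAM iterate this linearize-and-solve step.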