Blog posts

2025

Learning-in-the-Loop: When LLMs Meet Control Systems

less than 1 minute read

Published:

Control systems have long relied on math, not language. But what if large language models could sit in the control loop — interpreting signals, reasoning about intent, and tuning behavior in real time? This hybrid paradigm — learning-in-the-loop — could merge symbolic reasoning with continuous control, enabling robots that talk, learn, and act coherently.
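
To make the idea concrete, here is a minimal sketch, assuming a hypothetical `query_llm()` helper that wraps whatever chat model you use: a conventional PID loop runs as usual, and the LLM is only consulted occasionally to reason about recent tracking error and suggest a gain adjustment. Nothing here is a real implementation, just one way the loop could be wired.

```python
# Hypothetical sketch: an LLM periodically re-tunes a PID gain based on
# recent tracking error. `query_llm` is a placeholder for any chat-model call.
import json

def query_llm(prompt: str) -> str:
    raise NotImplementedError("wire this to your LLM provider")

class PID:
    def __init__(self, kp=1.0, ki=0.0, kd=0.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error: float, dt: float) -> float:
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def llm_retune(pid: PID, recent_errors: list[float]) -> None:
    """Ask the LLM to reason about recent tracking error and adjust kp."""
    prompt = (
        f"Recent tracking errors: {recent_errors}\n"
        f"Current kp={pid.kp}. "
        'Reply with JSON like {"kp": <new value>}.'
    )
    try:
        suggestion = json.loads(query_llm(prompt))
        # Clamp the suggestion so the language model cannot destabilize the loop.
        pid.kp = min(max(float(suggestion["kp"]), 0.1), 10.0)
    except Exception:
        pass  # keep the previous gain if the reply is unusable
```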

Debugging Agents That Think Like Engineers

less than 1 minute read

Published:

In most factories, debugging a failure involves staring at logs, cross-checking sensor data, and guessing where things went wrong. My multi-stage debugging agent emulates this process. It first locates the “error tick” in the logs, analyzes correlated sensor data, builds hypotheses, and refines them with context from prior errors. It doesn’t just answer “what went wrong,” but also why — by connecting data, context, and code knowledge.
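
A rough sketch of that pipeline is below. The function names, data structures, and LLM call are placeholders I made up to illustrate the stages, not the agent's actual code.

```python
# Hypothetical outline of the multi-stage pipeline described above.
from dataclasses import dataclass, field

def query_llm(prompt: str) -> str:
    raise NotImplementedError("wire this to your LLM provider")

@dataclass
class Diagnosis:
    error_tick: int
    hypotheses: list[str] = field(default_factory=list)

def find_error_tick(log_lines: list[str]) -> int:
    """Stage 1: locate the first log line that reports a failure."""
    for i, line in enumerate(log_lines):
        if "ERROR" in line or "FAULT" in line:
            return i
    return -1

def correlated_sensors(sensor_data: dict[str, list[float]], tick: int, window: int = 10) -> dict:
    """Stage 2: slice sensor channels around the error tick."""
    return {name: values[max(0, tick - window): tick + window]
            for name, values in sensor_data.items()}

def build_hypotheses(log_lines, sensors, prior_errors) -> Diagnosis:
    """Stages 3-4: ask an LLM for root causes, refined with prior-error context."""
    tick = find_error_tick(log_lines)
    if tick < 0:
        return Diagnosis(error_tick=-1, hypotheses=["no explicit error found in logs"])
    context = correlated_sensors(sensors, tick)
    prompt = (f"Log excerpt: {log_lines[tick]}\n"
              f"Sensor window: {context}\n"
              f"Similar past errors: {prior_errors}\n"
              "List the most likely root causes, one per line.")
    answer = query_llm(prompt)  # placeholder LLM call
    return Diagnosis(error_tick=tick, hypotheses=answer.splitlines())
```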

The Debugger: Teaching Machines to Troubleshoot Themselves

less than 1 minute read

Published:

In robotics and automation, downtime is deadly. I’ve been working on a system called The Debugger — an AI-driven troubleshooting agent that understands logs, analyzes system behavior, and even hypothesizes root causes like an engineer would. It integrates with real robots via Raspberry Pi hardware, pulling logs over MQTT and using an LLM-based reasoning layer to identify issues. Imagine a robotic technician that reads your logs, diagnoses faults, and guides you through recovery — in seconds, not hours.
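
The ingestion side could look roughly like this sketch, assuming logs are published on a topic such as robot/logs and using the paho-mqtt client (1.x callback style); the broker address, topic name, and `diagnose()` helper are assumptions, and the LLM reasoning layer is stubbed out.

```python
# Simplified sketch of the log-ingestion side of The Debugger.
# Assumes paho-mqtt 1.x callback style and a hypothetical `diagnose` LLM helper.
import paho.mqtt.client as mqtt

LOG_TOPIC = "robot/logs"      # assumed topic name, not the actual one
log_buffer: list[str] = []

def diagnose(logs: list[str]) -> str:
    """Placeholder for the LLM-based reasoning layer."""
    raise NotImplementedError

def on_message(client, userdata, msg):
    line = msg.payload.decode("utf-8", errors="replace")
    log_buffer.append(line)
    # Trigger a diagnosis whenever an error-level line arrives.
    if "ERROR" in line:
        print(diagnose(log_buffer[-50:]))

client = mqtt.Client()
client.on_message = on_message
client.connect("raspberrypi.local", 1883)   # assumed broker address
client.subscribe(LOG_TOPIC)
client.loop_forever()
```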

GRAL: The Language Brain of Robots

less than 1 minute read

Published:

I’ve been conceptualizing GRAL (General Robotic API Layer) — a high-level interface that lets robots understand and execute natural-language commands, and that also serves as a reasoning layer to help them make sense of their environment. Instead of diving into ROS2 nodes and topics, you could simply say: “Navigate to the conveyor belt and pick up the red box.” GRAL decomposes this into primitive skills like navigation, picking, and placing, generates the required code, and executes it. The goal: bridge the gap between human intent and robotic action — seamlessly. GRAL can also interpret sensory inputs and supply them as rich contextual information to other nodes and algorithms in the robotics stack. For example, if the robot’s shortest path runs through a crowd while a slightly longer path is clear, GRAL can add extra cost to the crowded path so the planner chooses the longer, uncrowded route instead.
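
A toy sketch of the decomposition idea follows; the skill names and the keyword-matching "planner" are made up for illustration, and a real GRAL would have an LLM produce the plan instead.

```python
# Toy sketch of GRAL-style decomposition: a natural-language command becomes
# a sequence of primitive skill calls. Skill names and the planner are invented;
# a real implementation would let an LLM generate the plan.

PRIMITIVE_SKILLS = {
    "navigate": lambda target: print(f"navigating to {target}"),
    "pick":     lambda obj:    print(f"picking up {obj}"),
    "place":    lambda obj, target: print(f"placing {obj} at {target}"),
}

def plan(command: str) -> list[tuple]:
    """Stand-in for the LLM planner: map an instruction to skill calls."""
    if "conveyor belt" in command and "red box" in command:
        return [("navigate", ("conveyor belt",)), ("pick", ("red box",))]
    return []

def execute(command: str) -> None:
    for skill, args in plan(command):
        PRIMITIVE_SKILLS[skill](*args)

execute("Navigate to the conveyor belt and pick up the red box.")

# The same layer can feed context back into planning: if perception reports a
# crowd along an edge of the navigation graph, inflate that edge's cost so the
# planner prefers the longer, uncrowded route.
def adjust_edge_cost(base_cost: float, crowd_density: float, penalty: float = 5.0) -> float:
    return base_cost + penalty * crowd_density
```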