Autonomous Systems Safety · Verification & Validation
About
I work on risk-aware verification and control of safety-critical autonomous systems, focusing on understanding and formalizing their safety and performance boundaries under uncertainty.
My background spans robust control, observer design, reinforcement learning, and formal methods. My research across these areas has been driven by a consistent thread: how to reason about safety in systems that operate beyond idealized assumptions, such as under disturbances, partial observability, imperfect policies, and complex interactive environments.
Currently, I am building specification-centered verification and simulation tools for autonomous vehicles and robots. Rather than relying solely on scenario enumeration, I emphasize property-based reasoning, risk-aware modeling, and mathematically grounded evaluation of safety claims.
I am particularly interested in making safety validation interpretable, structured, and transparent, enabling rigorous analysis of edge cases, long-tail risks, and rare events in AI-driven decision systems.