Autonomous Systems Safety · Verification & Validation

About

I work on risk-aware verification and control of safety-critical autonomous systems, focusing on understanding and formalizing their safety and performance boundaries under uncertainty.

My background spans robust control, observer design, reinforcement learning, and formal methods. A consistent thread runs through this work: how to reason about safety in systems that operate beyond idealized assumptions, under disturbances, partial observability, imperfect policies, and complex interaction environments.

Currently, I am building specification-centered verification and simulation tools for autonomous vehicles and robots. Rather than relying solely on scenario enumeration, I emphasize property-based reasoning, risk-aware modeling, and mathematically grounded evaluation of safety claims.

I am particularly interested in making safety validation interpretable, structured, and transparent, enabling rigorous analysis of edge cases, long-tail risks, and rare events in AI-driven decision systems.

Full Research & Academic Record →

Technology Blueprint

Ongoing Projects

Specification-Centered Safety Validation Benchmark (2025–present)
I am currently developing a specification-centered testing and validation framework for safety-critical autonomous systems. The goal is to move beyond scenario enumeration and toward property-driven evaluation of system behavior under structured uncertainty. The framework integrates formal specifications, risk-aware modeling, and adversarial scenario generation to construct measurable safety boundaries instead of isolated test cases. The core infrastructure and implementation are under active development as part of a startup initiative. Architectural concepts and methodology can be discussed upon request.
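To give a flavor of what property-driven evaluation means in contrast to scenario enumeration, here is a minimal, self-contained toy sketch (not the framework itself, whose implementation is private): a formal specification ("the ego vehicle always keeps a minimum gap to a lead vehicle") is checked over many randomly disturbed trajectories of a simplified 1-D car-following model, and the result is a worst-case robustness margin rather than a pass/fail verdict on isolated test cases. All names, dynamics, and parameters (`D_MIN`, the toy controller, the disturbance ranges) are illustrative assumptions.

```python
import random

# Illustrative toy: property-driven safety evaluation.
# Specification: "always gap >= D_MIN" over the whole trajectory.
# Instead of enumerating scenarios, we sample disturbances and
# report the worst-case robustness margin of the specification.

D_MIN = 2.0   # required minimum gap in meters (assumed spec parameter)
STEPS = 50    # simulation horizon
DT = 0.1      # time step in seconds


def simulate(brake_noise, lead_decel):
    """Toy 1-D car-following model under a disturbance pair.

    brake_noise: constant deceleration bias acting on the ego vehicle.
    lead_decel:  constant braking rate of the lead vehicle.
    Returns the trace of inter-vehicle gaps.
    """
    ego_pos, ego_vel = 0.0, 15.0
    lead_pos, lead_vel = 20.0, 15.0
    gaps = []
    for _ in range(STEPS):
        lead_vel = max(0.0, lead_vel - lead_decel * DT)
        lead_pos += lead_vel * DT
        # naive ego controller: accelerate toward a 10 m target gap
        gap = lead_pos - ego_pos
        accel = 0.5 * (gap - 10.0) - brake_noise
        ego_vel = max(0.0, ego_vel + accel * DT)
        ego_pos += ego_vel * DT
        gaps.append(lead_pos - ego_pos)
    return gaps


def robustness(gaps):
    """STL-style robustness of 'always gap >= D_MIN':
    the worst margin over the trace (negative means violation)."""
    return min(g - D_MIN for g in gaps)


# Crude adversarial search: sample disturbances, keep the worst case.
random.seed(0)
worst = min(
    robustness(simulate(random.uniform(0.0, 2.0),
                        random.uniform(0.0, 5.0)))
    for _ in range(200)
)
print(f"worst-case robustness margin: {worst:.2f} m")
```

The key design point the sketch illustrates: the specification yields a quantitative margin over an entire behavior trace, so the search can be steered toward the boundary of the safe set rather than checking a fixed list of scenarios.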

Recent Writing