The Lab

The Lab focuses on robust AI for sequential decision-making under epistemic and aleatoric uncertainty. We develop computational frameworks that integrate Machine Learning, Uncertainty Quantification, Reinforcement Learning, and Large-Scale Distributed Computing. Our research addresses the reliability gap in autonomous systems: we aim to build agentic systems that maintain safety and performance guarantees in high-stakes, real-world deployment.

Research Areas

Gradient-Based Optimization

Optimization methods using gradient information for efficient parameter tuning in complex models.
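As a minimal sketch of the idea (not a specific method used by the lab), the snippet below applies plain gradient descent to a simple quadratic objective; the objective, learning rate, and step count are illustrative assumptions.

    import numpy as np

    def loss(w):
        # Illustrative quadratic objective with its minimum at w = [1, -2]
        return 0.5 * np.sum((w - np.array([1.0, -2.0])) ** 2)

    def grad(w):
        # Gradient of the quadratic objective above
        return w - np.array([1.0, -2.0])

    w = np.zeros(2)           # initial parameters
    lr = 0.1                  # illustrative learning rate
    for step in range(100):
        w -= lr * grad(w)     # gradient descent update: w <- w - lr * dL/dw

    print(w, loss(w))         # w approaches [1, -2], loss approaches 0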

Epistemic Uncertainty

Quantifying and reasoning about model uncertainty to make robust decisions under limited data.
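One common way to estimate epistemic uncertainty, shown here purely as an illustration, is to train an ensemble of models on bootstrap resamples and treat their disagreement as a proxy for model uncertainty. The data, ensemble size, and linear models below are hypothetical.

    import numpy as np

    rng = np.random.default_rng(0)
    # Tiny synthetic dataset: y = 2x + noise, observed only on a narrow input range
    x = rng.uniform(-1.0, 1.0, size=20)
    y = 2.0 * x + rng.normal(scale=0.1, size=20)

    # Bootstrap ensemble of linear fits (slope, intercept)
    ensemble = []
    for _ in range(10):
        idx = rng.integers(0, len(x), size=len(x))            # resample with replacement
        slope, intercept = np.polyfit(x[idx], y[idx], deg=1)
        ensemble.append((slope, intercept))

    def predict_with_uncertainty(x_new):
        preds = np.array([s * x_new + b for s, b in ensemble])
        # Mean prediction and ensemble spread (a proxy for epistemic uncertainty)
        return preds.mean(), preds.std()

    print(predict_with_uncertainty(0.5))   # inside the training range: small spread
    print(predict_with_uncertainty(5.0))   # far outside it: noticeably larger spread

The spread grows away from the observed data, which is the behaviour a decision-maker can exploit to act more cautiously where the model knows least.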

Reinforcement Learning

Sequential decision-making algorithms that learn optimal policies through interaction with environments.
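To make the learning-by-interaction loop concrete, here is a minimal tabular Q-learning sketch on a toy chain environment; the environment, hyperparameters, and episode counts are illustrative assumptions, not drawn from the lab's work.

    import numpy as np

    # Toy chain MDP: states 0..4, action 0 = left, 1 = right, reward 1 at state 4
    n_states, n_actions = 5, 2
    Q = np.zeros((n_states, n_actions))
    alpha, gamma, epsilon = 0.1, 0.95, 0.3      # illustrative hyperparameters
    rng = np.random.default_rng(0)

    def step(state, action):
        nxt = min(state + 1, n_states - 1) if action == 1 else max(state - 1, 0)
        reward = 1.0 if nxt == n_states - 1 else 0.0
        return nxt, reward, nxt == n_states - 1   # next state, reward, done

    for episode in range(500):
        state = 0
        for _ in range(1000):                     # cap episode length
            # epsilon-greedy action selection
            if rng.random() < epsilon:
                action = int(rng.integers(n_actions))
            else:
                action = int(np.argmax(Q[state]))
            nxt, reward, done = step(state, action)
            # Q-learning update toward the bootstrapped target
            Q[state, action] += alpha * (reward + gamma * np.max(Q[nxt]) - Q[state, action])
            state = nxt
            if done:
                break

    print(np.argmax(Q, axis=1))   # greedy policy moves right in states 0..3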

Time Series Prediction

Forecasting methods for temporal data with applications in finance and system monitoring.
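As a minimal, hypothetical example of the forecasting setting, the snippet below fits a first-order autoregressive model, AR(1), by least squares and rolls it forward a few steps; real applications would use richer models and proper validation.

    import numpy as np

    rng = np.random.default_rng(0)
    # Synthetic AR(1) series: y_t = 0.8 * y_{t-1} + noise
    y = np.zeros(200)
    for t in range(1, 200):
        y[t] = 0.8 * y[t - 1] + rng.normal(scale=0.5)

    # Estimate the AR(1) coefficient by least squares on lagged pairs
    phi = np.dot(y[:-1], y[1:]) / np.dot(y[:-1], y[:-1])

    # Roll the fitted model forward to forecast the next 5 steps
    forecast, last = [], y[-1]
    for _ in range(5):
        last = phi * last
        forecast.append(last)

    print(round(phi, 3), [round(v, 3) for v in forecast])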

Spatial Uncertainty Modeling

Probabilistic models for spatial data with applications in navigation and environmental sensing.
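One simple probabilistic model for spatial data is an occupancy grid updated with a Bayesian log-odds rule from noisy sensor readings. The grid size and sensor error rates below are illustrative assumptions, shown only to make the idea concrete.

    import numpy as np

    # 10x10 occupancy grid in log-odds form; 0 means "50% probability occupied"
    log_odds = np.zeros((10, 10))

    # Illustrative sensor model: probability of a "hit" given an occupied / free cell
    p_hit_occupied, p_hit_free = 0.9, 0.2
    update_hit = np.log(p_hit_occupied / p_hit_free)                # evidence for occupancy
    update_miss = np.log((1 - p_hit_occupied) / (1 - p_hit_free))   # evidence against

    def observe(cell, hit):
        # Bayesian log-odds update for a single cell observation
        log_odds[cell] += update_hit if hit else update_miss

    def probability(cell):
        return 1.0 / (1.0 + np.exp(-log_odds[cell]))

    observe((3, 4), hit=True)
    observe((3, 4), hit=True)
    observe((7, 7), hit=False)
    print(round(probability((3, 4)), 3), round(probability((7, 7)), 3))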

Robot Navigation

Autonomous navigation systems for robots operating in uncertain and dynamic environments.
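As a simplified illustration of the planning side of navigation, the sketch below runs breadth-first search over a small occupancy map; the map is hypothetical, and a full navigation stack would also handle localization, motion uncertainty, and replanning.

    from collections import deque

    # 0 = free cell, 1 = obstacle (illustrative map)
    grid = [
        [0, 0, 0, 1, 0],
        [1, 1, 0, 1, 0],
        [0, 0, 0, 0, 0],
        [0, 1, 1, 1, 0],
        [0, 0, 0, 0, 0],
    ]

    def shortest_path(start, goal):
        # Breadth-first search over 4-connected free cells
        queue, parents = deque([start]), {start: None}
        while queue:
            cell = queue.popleft()
            if cell == goal:
                path = []
                while cell is not None:
                    path.append(cell)
                    cell = parents[cell]
                return path[::-1]
            r, c = cell
            for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) \
                        and grid[nr][nc] == 0 and (nr, nc) not in parents:
                    parents[(nr, nc)] = cell
                    queue.append((nr, nc))
        return None

    print(shortest_path((0, 0), (4, 4)))   # a shortest collision-free route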

Anomaly Detection

Identifying unusual patterns and outliers in data for security and system monitoring.
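A minimal, assumed example of the task: flag readings whose z-score relative to the rest of the data exceeds a threshold. The data, injected spikes, and cutoff are all illustrative, and production detectors are usually far more sophisticated.

    import numpy as np

    rng = np.random.default_rng(0)
    # Mostly normal sensor readings with a few injected spikes
    readings = rng.normal(loc=10.0, scale=1.0, size=100)
    readings[[20, 55, 80]] = [25.0, -5.0, 30.0]

    mean, std = readings.mean(), readings.std()
    z_scores = np.abs(readings - mean) / std

    threshold = 3.0                         # illustrative cutoff
    anomalies = np.where(z_scores > threshold)[0]
    print(anomalies)                        # should recover the injected spike indices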

Causal Models

Understanding cause-effect relationships for better decision-making and counterfactual reasoning.
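To make the distinction between observing and intervening concrete, here is a toy structural causal model in which the observational contrast is biased by a confounder while simulating an intervention recovers the true effect; all variables and coefficients are invented for the illustration.

    import numpy as np

    rng = np.random.default_rng(0)

    def simulate(n, do_treatment=None):
        # Toy structural causal model:
        #   confounder -> treatment, confounder -> outcome, treatment -> outcome
        confounder = rng.normal(size=n)
        if do_treatment is None:
            treatment = (confounder + rng.normal(size=n) > 0).astype(float)
        else:
            treatment = np.full(n, float(do_treatment))   # intervention: do(T = t)
        outcome = 2.0 * treatment + 3.0 * confounder + rng.normal(size=n)
        return treatment, outcome

    # Observational contrast (biased upward by the confounder)
    t, y = simulate(100_000)
    observational = y[t == 1].mean() - y[t == 0].mean()

    # Interventional contrast (close to the true causal effect of 2.0)
    _, y1 = simulate(100_000, do_treatment=1)
    _, y0 = simulate(100_000, do_treatment=0)
    interventional = y1.mean() - y0.mean()

    print(round(observational, 2), round(interventional, 2))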

Deep Reinforcement Learning

Combining deep learning with RL for complex control and decision problems.
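As a small sketch of how a neural network can parameterize a policy, the snippet below trains a tiny network with a REINFORCE-style update on a made-up two-armed contextual bandit. It assumes PyTorch is available; the network size, task, and hyperparameters are illustrative choices, not the lab's method.

    import torch
    import torch.nn as nn

    # Tiny policy network for a 2-armed contextual bandit (illustrative problem)
    policy = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 2))
    optimizer = torch.optim.Adam(policy.parameters(), lr=1e-2)

    def environment(context, action):
        # Reward 1 if the action matches the larger context feature, else 0
        return 1.0 if action == int(context.argmax()) else 0.0

    for step in range(2000):
        context = torch.rand(2)                                   # observed state
        dist = torch.distributions.Categorical(logits=policy(context))
        action = dist.sample()
        reward = environment(context, action.item())
        # REINFORCE-style update: raise the log-probability of rewarded actions
        loss = -dist.log_prob(action) * reward
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    # After training, the policy should usually pick the larger context feature
    print(policy(torch.tensor([0.9, 0.1])).argmax().item())       # expected: 0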

Activity Recognition

Understanding and classifying human activities from sensor and video data.
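As an illustrative sketch (not the lab's pipeline), the snippet below extracts simple statistical features from windows of synthetic accelerometer-like data and classifies them with a nearest-centroid rule; the activities, signal shapes, and features are hypothetical.

    import numpy as np

    rng = np.random.default_rng(0)

    def make_window(activity):
        # Hypothetical 1-second sensor window (50 samples):
        # "walking" has larger oscillations than "sitting"
        amplitude = 1.5 if activity == "walking" else 0.1
        t = np.linspace(0, 1, 50)
        return amplitude * np.sin(2 * np.pi * 2 * t) + rng.normal(scale=0.05, size=50)

    def features(window):
        # Simple handcrafted features: mean, standard deviation, peak-to-peak range
        return np.array([window.mean(), window.std(), np.ptp(window)])

    # Build one centroid per class from labelled training windows
    centroids = {
        label: np.mean([features(make_window(label)) for _ in range(20)], axis=0)
        for label in ("walking", "sitting")
    }

    def classify(window):
        f = features(window)
        return min(centroids, key=lambda label: np.linalg.norm(f - centroids[label]))

    print(classify(make_window("walking")), classify(make_window("sitting")))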

Publications