Math (Combinatorics & Optimization + Statistics) @ University of Waterloo. Currently focused on world-model reinforcement learning, topology-guided optimization, and efficient LLM fine-tuning.
I work at the intersection of mathematical theory and ML engineering. My research focuses on problems where rigorous foundations meet practical implementation: training stability, loss landscape geometry, and building reproducible research prototypes.
Currently exploring how topological data analysis can inform optimizer behavior, and how world models can learn latent dynamics in high-frequency environments. I care about evaluation discipline and making research code that others can actually run.
Educational resources for deep learning and reinforcement learning.
10-chapter hands-on course from tensors to deployment. Covers CNNs, transfer learning, Vision Transformers (ViT), experiment tracking with TensorBoard, and model deployment with Gradio.
4-phase course from RL fundamentals to robotics-scale world models. Covers DQN, PPO, model-based planning (MPC, CEM), and Isaac Lab integration for sim-to-real robotics.
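The model-based planning methods covered in the course can be illustrated with a toy cross-entropy method (CEM) planner. This is a generic numpy sketch with a made-up quadratic cost, not code from the course itself:

```python
import numpy as np

def cem_plan(cost_fn, horizon, n_samples=64, n_elites=8, n_iters=8, seed=0):
    """Cross-entropy method: repeatedly refit a Gaussian over action
    sequences to the lowest-cost ("elite") samples."""
    rng = np.random.default_rng(seed)
    mean, std = np.zeros(horizon), 2.0 * np.ones(horizon)
    for _ in range(n_iters):
        # Sample candidate action sequences from the current Gaussian.
        samples = rng.normal(mean, std, size=(n_samples, horizon))
        costs = np.array([cost_fn(s) for s in samples])
        # Refit mean/std to the elite (lowest-cost) samples.
        elites = samples[np.argsort(costs)[:n_elites]]
        mean, std = elites.mean(axis=0), elites.std(axis=0) + 1e-6
    return mean  # in MPC, only the first action of this plan is executed

# Toy cost (hypothetical): drive a 1-D action sequence toward 2.0.
plan = cem_plan(lambda a: float(np.sum((a - 2.0) ** 2)), horizon=4)
```

In an MPC loop, the planner would be re-run every step from the current state, executing only the first action of the returned plan.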
Selected work in reinforcement learning, optimization, and LLM systems.
A DreamerV3-style agent for a 60 Hz physics-driven game environment with tight failure constraints. Built on a custom Gymnasium stack with Windows↔WSL synchronization and high-frequency logging for reproducible evaluation.
A PyTorch optimizer that uses GUDHI-based TDA features to probe local loss-landscape geometry (sharp vs. flat regions) and adapt update behavior with stability safeguards.
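As a rough illustration of the sharp-vs-flat idea (not the project's actual GUDHI/TDA pipeline), local sharpness can be estimated with random parameter perturbations and used to damp the step size. The probe below is a simplified numpy stand-in for the persistence-based features:

```python
import numpy as np

def sharpness_probe(loss_fn, params, radius=0.05, n_probes=8, seed=0):
    """Estimate local sharpness as the mean loss increase under small
    random perturbations (a cheap stand-in for TDA-based features)."""
    rng = np.random.default_rng(seed)
    base = loss_fn(params)
    rises = []
    for _ in range(n_probes):
        d = rng.normal(size=params.shape)
        d *= radius / np.linalg.norm(d)  # perturbation of fixed radius
        rises.append(loss_fn(params + d) - base)
    return max(float(np.mean(rises)), 0.0)

def adaptive_step(loss_fn, grad_fn, params, base_lr=0.1):
    """Gradient step whose learning rate is damped in sharp regions
    (the "stability safeguard" in miniature)."""
    s = sharpness_probe(loss_fn, params)
    lr = base_lr / (1.0 + 100.0 * s)  # shrink the step where the landscape is sharp
    return params - lr * grad_fn(params)

# Sharp vs. flat 1-D quadratics: the probe reports higher sharpness
# for the steeper bowl, so its steps get damped more.
sharp = lambda p: 50.0 * float(p[0] ** 2)
flat = lambda p: 0.5 * float(p[0] ** 2)
```

The real project replaces this scalar probe with GUDHI persistence summaries of sampled loss values, but the control flow (probe geometry, then modulate the update) is the same shape.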
Memory-efficient fine-tuning pipelines for Dream-7B and GPT-OSS-20B using QLoRA, gradient checkpointing, and DeepSpeed optimizations.
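The low-rank-adapter math underlying QLoRA can be sketched in plain numpy (quantization, gradient checkpointing, and DeepSpeed are omitted; the shapes and scaling below are illustrative, not the pipeline's actual configuration):

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=16):
    """Frozen base weight W plus a trainable low-rank update B @ A,
    scaled by alpha / r, as in LoRA-style fine-tuning."""
    r = A.shape[0]
    return x @ (W + (alpha / r) * (B @ A)).T

d_out, d_in, r = 64, 128, 4
rng = np.random.default_rng(0)
W = rng.normal(size=(d_out, d_in))     # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))               # trainable up-projection (zero-init,
                                       # so training starts at the base model)

x = rng.normal(size=(2, d_in))
y = lora_forward(x, W, A, B)
# Trainable parameters: r * (d_in + d_out) = 768, vs. d_in * d_out = 8192
# for full fine-tuning of this one layer.
```

Only A and B receive gradients; the memory savings come from freezing (and, in QLoRA, quantizing) W while training the small adapters.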
Open to research collaborations, internship opportunities, and interesting projects. Feel free to reach out if you'd like to discuss ML research or mathematics.