Postdoctoral Scholar at California Institute of Technology
Contact: gqu [at] caltech.edu or gqu [at] andrew.cmu.edu
Office: Annenberg 202, Caltech
About myself: I am a CMI and Resnick postdoc in the CMS Department at the California Institute of Technology, working with Prof. Steven Low and Prof. Adam Wierman. I recently obtained my Ph.D. from Harvard SEAS, where I worked with Prof. Na Li. During Spring 2018, I was also affiliated with the Simons Institute for the Theory of Computing at the University of California, Berkeley. I obtained my B.S. from Tsinghua University in Beijing, China in 2014. My CV can be downloaded here (last updated: December 2020).
Research Interest: I am broadly interested in the theory of control, optimization, and learning, and in the interplay between control and learning. In particular, my recent research focuses on developing frameworks and principles that combine methods from model-based control (LQR, robust control, etc.) with methods from model-free RL (Q-learning, policy gradient methods, etc.). Although these two sets of methods were developed from very different philosophies, I believe each has unique advantages that complement the other, so combining them can yield powerful algorithms that achieve the best of both worlds. During my Ph.D., I mainly worked on distributed optimization, online control, and distributed control. On the practical side, my research is driven by applications such as energy/power systems, IoT, transportation systems, and robot teams.
I am joining the ECE Department at CMU as an assistant professor in September 2021! If you are interested in working with me, feel free to reach out!
- January 2021: New review paper on reinforcement learning for power system control! Available here.
- December 2020: Our proposal "Scalable Reinforcement Learning for Intelligent Multi-Agent Systems" has been funded by CAST ($85k, co-PI: Prof. Adam Wierman).
- November 2020: I won the PIMCO Postdoctoral Fellowship in Data Science ($20k).
- November 2020: Best Student Paper Award at IEEE SmartGridComm!
- June 2020: Four new papers online! Two on RL for multi-agent networked systems (see here and here), one on the intersection of model-based and model-free control (here), and one on learning-based DC-OPF (here).
- May 2020: Our paper on Q-learning has been accepted to the Conference on Learning Theory (COLT) 2020!
- March 2020: Our paper Scalable Reinforcement Learning of Localized Policies for Multi-Agent Networked Systems has been accepted to the 2nd Learning for Dynamics and Control Conference as an oral presentation (top 10%).
- February 2020: New paper online, Finite-Time Analysis of Asynchronous Stochastic Approximation and Q-learning.
- December 2019: New paper online, Scalable Reinforcement Learning of Localized Policies for Multi-Agent Networked Systems.
- November 2019: I received the Simoudis Discovery Award, which is given to one project at Caltech each year at the interface of ML/AI and autonomy.
- August 2019: I started my postdoc at Caltech under a joint CMI and Resnick Institute Fellowship.
- August 2019: Two papers were accepted by IEEE Transactions on Automatic Control and IEEE Transactions on Power Systems, respectively. Check them out here and here.
- May 2019: I defended my dissertation, titled "Distributed Decision Making in Cyber-Physical Network Systems".
Teaching: I was a co-instructor for CS/EE 146 (Control and optimization of networks) at Caltech. I was a teaching fellow for ES 158 (Feedback Systems: Analysis and Design) at Harvard.
How to pronounce my name? "Guannan" is like Gwan-Nan, quite straightforward.
"Qu" is trickier, because the letter "Q" may be misleading. "Qu" is pronounced like "ch-yuu" - try to start with "ch" as in "choose" but end with "yuu".