

Guannan Qu
Postdoctoral Scholar at California Institute of Technology
Contact: gqu [at] caltech.edu
Office: Annenberg 202, Caltech
About myself: I am a CMI and Resnick postdoc in the CMS Department at the California Institute of Technology, working with Prof. Steven Low and Prof. Adam Wierman. I recently obtained my Ph.D. from Harvard SEAS, where I worked with Prof. Na Li. During Spring 2018, I was also affiliated with the Simons Institute for the Theory of Computing at the University of California, Berkeley. I obtained my B.S. degree from Tsinghua University in Beijing, China in 2014. My CV can be downloaded here (last updated: December 2020).
Research Interests: I am broadly interested in the theory of control, optimization, and learning, and in the interplay between control and learning. In particular, my recent research focuses on developing frameworks and principles that combine methods from model-based control (LQR, robust control, etc.) and model-free RL (Q-learning, policy gradient methods, etc.). These two sets of methods are built on very different philosophies, yet each has unique advantages that complement the other, so combining them can yield powerful algorithms that achieve the best of both worlds. During my Ph.D., I mainly worked on distributed optimization, online control, and distributed control. On the practical side, my research is driven by applications such as energy/power systems, IoT, transportation systems, and robot teams.
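To make the contrast concrete, here is a minimal, self-contained sketch (illustrative only, not taken from any of the papers below): the model-based route computes an LQR gain from known dynamics, while the model-free route updates a Q-table from sampled transitions alone. The matrices, step size, and discount factor are placeholder values.

    import numpy as np
    from scipy.linalg import solve_discrete_are

    # Model-based: with known dynamics (A, B) and costs (Q, R), the optimal
    # LQR gain follows from the discrete-time algebraic Riccati equation.
    A = np.array([[1.0, 0.1], [0.0, 1.0]])
    B = np.array([[0.0], [0.1]])
    Q, R = np.eye(2), np.eye(1)
    P = solve_discrete_are(A, B, Q, R)
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # controller: u = -K x

    # Model-free: tabular Q-learning never touches (A, B); it only needs
    # sampled transitions (s, a, r, s') from interacting with the system.
    def q_learning_update(Qtab, s, a, r, s_next, alpha=0.1, gamma=0.99):
        target = r + gamma * Qtab[s_next].max()
        Qtab[s, a] += alpha * (target - Qtab[s, a])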
I am co-organizing the Control Meets Learning Seminar Series! Check out the website here.
Recent Talks:
- Scalable Multi-Agent Reinforcement Learning for Networked Systems, invited seminar at ECE@CMU, November 2020.
- Scalable Reinforcement Learning of Localized Policies for Multi-Agent Networked Systems, 2nd Learning for Dynamics and Control Conference. Video available here.
- Finite-Time Analysis of Asynchronous Stochastic Approximation and Q-Learning, COLT 2020. Video available here.
Representative Publications:
- Guannan Qu, Adam Wierman, Na Li, Scalable Reinforcement Learning for Multi-Agent Networked Systems, submitted to Operations Research. Conference version accepted to the 2nd Learning for Dynamics and Control Conference as an oral presentation (top 10%).
- Guannan Qu, Yiheng Lin, Adam Wierman, Na Li, Scalable Multi-Agent Reinforcement Learning for Networked Systems with Average Reward, accepted to NeurIPS 2020.
- Yiheng Lin, Guannan Qu, Longbo Huang, Adam Wierman, Distributed Reinforcement Learning in Multi-Agent Networked Systems, preprint.
- Guannan Qu, Chenkai Yu, Steven Low, Adam Wierman, Combining Model-Based and Model-Free Methods for Nonlinear Control: A Provably Convergent Policy Gradient Approach, preprint.
- Guannan Qu and Adam Wierman, Finite-Time Analysis of Asynchronous Stochastic Approximation and Q-learning, accepted to Conference on Learning Theory (COLT) 2020.
- Guannan Qu and Na Li, Harnessing Smoothness to Accelerate Distributed Optimization, IEEE Transactions on Control of Network Systems, vol. 5, no. 3, pp. 1245-1260, Sept. 2018.
- Guannan Qu and Na Li, Accelerated Distributed Nesterov Gradient Descent, IEEE Transactions on Automatic Control, vol. 65, no. 6, pp. 2566-2581, June 2020.
- Guannan Qu and Na Li, Optimal Distributed Feedback Voltage Control under Limited Reactive Power, IEEE Transactions on Power Systems, vol. 35, no. 1, pp. 315-331, January 2020.
Updates:
- January 2021: New review paper on reinforcement learning for power system control! Available here.
- December 2020: Our proposal "Scalable Reinforcement Learning for Intelligent Multi-Agent Systems" has been funded by CAST ($85k, co-PI: Prof. Adam Wierman).
- November 2020: I won the PIMCO Postdoctoral Fellowship in Data Science ($20k).
- November 2020: Best Student Paper Award at IEEE SmartGridComm!
- June 2020: Four new papers online! Two on RL for multi-agent networked systems (see here and here), one on the intersection of model-based and model-free control (here), and one on learning-based DC-OPF (here).
- May 2020: Our paper on Q-learning has been accepted to the Conference on Learning Theory (COLT) 2020!
- March 2020: Our paper Scalable Reinforcement Learning of Localized Policies for Multi-Agent Networked Systems has been accepted to the 2nd Learning for Dynamics and Control Conference as an oral presentation (top 10%).
- February 2020: New paper online, Finite-Time Analysis of Asynchronous Stochastic Approximation and Q-learning.
- December 2019: New paper online, Scalable Reinforcement Learning of Localized Policies for Multi-Agent Networked Systems.
- November 2019: I received the Simoudis Discovery Award, which is given to one project at Caltech each year at the interface of ML/AI and autonomy.
- August 2019: I started my postdoc at Caltech under a joint CMI and Resnick Institute Fellowship.
- August 2019: Two papers were accepted by IEEE Transactions on Automatic Control and IEEE Transactions on Power Systems, respectively. Check them out here and here.
- May 2019: I defended my dissertation, titled "Distributed Decision Making in Cyber-Physical Network Systems".
Teaching: I was a co-instructor for CS/EE 146 (Control and Optimization of Networks) at Caltech. I was a teaching fellow for ES 158 (Feedback Systems: Analysis and Design) at Harvard.
How to pronounce my name? "Guannan" is like Gwan-Nan, quite straightforward.
"Qu" is trickier, because the letter "Q" may be misleading. "Qu" is pronounced like "ch-yuu" - try to start with "ch" as in "choose" but end with "yuu".