Andrew Perrault
My research focuses on multi-agent interactions that arise in addressing societal challenges, especially in conservation and public health. These interactions often involve uncertainty in both the environment and the agents' utility functions, necessitating approaches that can handle scarce data. To this end, I combine methodologies from game theory and multi-agent systems with machine learning, robust planning, and optimization.
I am an assistant professor in the Department of Computer Science and Engineering at The Ohio State University. This semester, I’m leading a seminar on reinforcement learning for optimization.
Before that, I was a Postdoctoral Fellow working with Milind Tambe in the Teamcore group at the Center for Research on Computation and Society at Harvard. I completed my PhD in September 2018 under the supervision of Craig Boutilier at the University of Toronto.
Research interests: decision-making in uncertain multi-agent systems with social good applications (game theory, sequential decision-making, machine learning, optimization, agent-based modeling)
Updates:
- Oct. 2024—Attending INFORMS and presenting on Xueqiao and Song’s work in the Decision Analysis in Public Health and Biomedicine session.
- May 2024—Check out Yi’s work on constraints in restless multi-armed bandits at AAMAS!
- Feb. 2024—If you’ll be attending AAAI, check out collaborative work with Sanket and Adam. (I’ll also be there.)
- Jan. 2024—Excited to receive the College of Engineering Research Initiative grant with Profs. Boian Alexandrov and Joel Paulson and to continue our work applying machine learning techniques to optimize additive manufacturing processes.
- Nov. 2023—Check out Xueqiao’s work on reinforcement learning for interpretable contact tracing policies at ML4H!
- Jan. 2023—I’m co-organizing the Midwest Machine Learning Symposium in Chicago, May 16-17. Consider submitting your work!
- Dec. 2022—Check out our work on Decision-Focused Learning Without Differentiable Optimization at NeurIPS 2022.
- Jul. 2022—Brandon Amos, Kai Wang and I are giving a tutorial on differentiable optimization and decision-focused learning at IJCAI 2022.
- Dec. 2021—Kai’s work on solving Stackelberg games with multiple followers and constraints will appear at AAAI 2022.
- Oct. 2021—Ju-Seung and I have a new preprint on training transition policies for complex tasks using distribution matching. Update: this paper will appear at ICLR 2022.
- Sept. 2021—Kai’s paper on learning MDPs from features was accepted at NeurIPS 2021 as a spotlight!
- July 2021—Our paper on minimizing max regret in reinforcement learning appeared at UAI 2021. See Lily’s explainer here.
- Dec. 2020—Two papers on restless-bandit-based scheduling of community health workers were accepted at AAMAS-21. Explainers here and here.
- Dec. 2020—Our paper on dual mandate patrols for green security was accepted at AAAI-21. Update: it was selected as Best Paper Runner-Up!
- Nov. 2020—Our preprint on efficient contact tracing was just released on medRxiv and NBER. Explainer thread here.
- Sept. 2020—I have two papers that will appear at NeurIPS 2020: one on using surrogate models to efficiently differentiate through non-convex optimization and another on restless multi-armed bandits applied to the problem of medication adherence monitoring.
- July 2020—I am hosting the virtual AI for Social Impact Seminar Series at CRCS this fall (2020).
- March 2020—Fei Fang, Bo Li, and I will be giving a tutorial on work at the intersection of machine learning and game theory at IJCAI 2020.
- Feb. 2020—I am co-organizing the AI for Social Good workshop at IJCAI 2020.