IJCAI 2020 Tutorial

Time: Jan. 7, 2021, 4–7:20 pm PST = Jan. 8, 2021, 9 am–12:20 pm JST

Location: Red Wing—North 1

Approximate Schedule

Overview

There is rising interest in connecting machine learning and game theory. Game-theoretic frameworks have been successfully applied to real-world security problems in which security agencies (defenders) allocate limited resources to protect important targets against human adversaries, with a rich body of research publications at IJCAI and other AI venues; recent work in this line makes extensive use of machine learning techniques. On the other hand, advances in machine learning have led to superhuman performance in strategic games such as Go and poker. Furthermore, the wide deployment of machine-learning-based systems raises practical concerns about the robustness of machine learning models, especially when facing strategic attackers.

In this tutorial, we cover several recent research frameworks at the intersection of game theory and machine learning and their applications in domains such as environmental sustainability, cybersecurity, and mobility. After providing introductory material on game theory and machine learning, we will introduce the first framework: prediction-based prescription. We will describe classical behavioral models of game players and how to learn such models from data. We will cover the latest work on predicting attacks from real-world data, and how to prescribe optimal defense strategies given the predictions. The second framework introduced in the tutorial is deep-learning-powered strategy generation. We will show how to learn a good defender strategy for complex settings from simulated game plays using neural networks, and how the defender can learn to play when payoff information is not readily available. The third framework highlights work on differentiable learning of game parameters, going beyond the first framework's approach, in which the behavioral model is learned without reference to the game structure. The fourth framework addresses a different problem: how game-theoretic modeling can improve the robustness of machine learning models used for prediction, through an understanding of the vulnerabilities of current ML systems and strategies to enhance learning robustness. The tutorial will conclude with a discussion of opportunities for future work, including exciting new domains and fundamental theoretical and algorithmic challenges.

Presenters

Fei Fang is an Assistant Professor in the School of Computer Science at Carnegie Mellon University. Before joining CMU, she was a Postdoctoral Fellow at the Center for Research on Computation and Society (CRCS) at Harvard University. She received her Ph.D. from the Department of Computer Science at the University of Southern California in June 2016. Her research lies in the field of artificial intelligence and multi-agent systems, focusing on data-aware game theory and mechanism design with applications to security, sustainability, and mobility domains. Her dissertation was selected as the runner-up for the IFAAMAS-16 Victor Lesser Distinguished Dissertation Award and won the William F. Ballhaus, Jr. Prize for Excellence in Graduate Engineering Research as well as the Best Dissertation Award in Computer Science at the University of Southern California. Her work has won the Innovative Application Award at the Conference on Innovative Applications of Artificial Intelligence (IAAI'16) and the Outstanding Paper Award in the Computational Sustainability Track at the International Joint Conference on Artificial Intelligence (IJCAI'15). Her work has been deployed by the US Coast Guard to protect the Staten Island Ferry in New York City since April 2013, and has led to the deployment of PAWS (Protection Assistant for Wildlife Security) in multiple conservation areas around the world, providing predictive and prescriptive analysis for anti-poaching efforts.

Bo Li is an Assistant Professor of Computer Science at the University of Illinois at Urbana-Champaign. She was a postdoctoral fellow at UC Berkeley from 2017 to 2018 and received her Ph.D. from Vanderbilt University in 2016. She received the Symantec Research Labs Graduate Fellowship in 2015. Her research focuses on machine learning, security, privacy, game theory, social networks, and adversarial deep learning. She has designed several robust learning algorithms, a scalable framework for achieving robustness across a range of learning methods, and privacy-preserving data publishing systems. She is interested in both the theoretical analysis of general machine learning models and the development of practical systems.

Andrew Perrault is a postdoctoral fellow with Milind Tambe at the Center for Research on Computation and Society at the Harvard John A. Paulson School of Engineering and Applied Sciences. His work focuses on strategic interactions (both cooperative and non-cooperative) between agents, collaborating with conservation NGOs to plan anti-poaching patrols in wildlife sanctuaries. He received his Ph.D. from the University of Toronto in 2018 under the supervision of Craig Boutilier and his B.A. from Cornell University.