Decision Making in Non-Stationary Environments with Policy-Augmented Search

Ava Pettet, Yunuo Zhang, Baiting Luo, Kyle Wray, Hendrik Baier, Aron Laszka, Abhishek Dubey, Ayan Mukhopadhyay

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

1 Citation (Scopus)
23 Downloads (Pure)

Abstract

Sequential decision-making under uncertainty arises in many important problems. Two popular approaches for tackling such problems are reinforcement learning and online search (e.g., Monte Carlo tree search). While the former learns a policy by interacting with the environment (typically before execution), the latter uses a generative model of the environment to sample promising action trajectories at decision time. Decision-making is particularly challenging in non-stationary environments, where the environment in which an agent operates can change over time. Both approaches have shortcomings in such settings: on the one hand, policies learned before execution become stale when the environment changes, and relearning takes both time and computational effort; on the other hand, online search can return sub-optimal actions when runtime is limited. In this paper, we introduce Policy-Augmented Monte Carlo tree search (PA-MCTS), which combines action-value estimates from an out-of-date policy with an online search that uses an up-to-date model of the environment. We prove theoretical results showing conditions under which PA-MCTS selects the one-step optimal action, and we also bound the error accrued while following PA-MCTS as a policy. We compare and contrast our approach with AlphaZero, another hybrid planning approach, and Deep Q-Learning on several OpenAI Gym environments. Through extensive experiments, we show that under non-stationary settings with limited time constraints, PA-MCTS outperforms these baselines.
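The core idea sketched in the abstract is to blend the (possibly stale) policy's action-value estimates with fresh estimates from an online search. A minimal illustrative sketch of such a blended action-selection rule is below; the function name, dictionary interface, and blending weight `alpha` are illustrative assumptions, not the authors' implementation.

```python
def pa_mcts_action(q_policy, q_search, alpha=0.5):
    """Pick the action maximizing a convex combination of Q-estimates.

    q_policy: dict action -> Q estimate from the pre-trained (stale) policy
    q_search: dict action -> Q estimate from MCTS using the current model
    alpha:    weight on the policy estimate (0 = pure search, 1 = pure policy)
    """
    # Only consider actions for which both sources provide an estimate.
    actions = q_policy.keys() & q_search.keys()
    combined = {a: alpha * q_policy[a] + (1 - alpha) * q_search[a]
                for a in actions}
    return max(combined, key=combined.get)

# Toy usage: the stale policy prefers 'left', while the up-to-date
# search prefers 'right'; with alpha = 0.3 the search estimate dominates.
qp = {"left": 1.0, "right": 0.2}
qs = {"left": 0.1, "right": 0.9}
print(pa_mcts_action(qp, qs, alpha=0.3))  # -> right
```

Intuitively, `alpha` trades off trust in the pre-trained policy against trust in the limited-budget search, which matches the paper's framing of combining a stale policy with an up-to-date model.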
Original language: English
Title of host publication: AAMAS '24
Subtitle of host publication: Proceedings of the 23rd International Conference on Autonomous Agents and Multiagent Systems
Publisher: International Foundation for Autonomous Agents and Multiagent Systems (IFAAMAS)
Pages: 2417-2419
Number of pages: 3
ISBN (Electronic): 978-1-4007-0486-4
DOIs
Publication status: Published - 6 May 2024
Event: 23rd International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2024 - Auckland, New Zealand
Duration: 6 May 2024 - 10 May 2024

Conference

Conference: 23rd International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2024
Country/Territory: New Zealand
City: Auckland
Period: 6/05/24 - 10/05/24

Bibliographical note

Extended Abstract accepted for presentation at AAMAS 2024.

Funding

This material is based upon work sponsored by the National Science Foundation (NSF) under Grant CNS-2238815, the Defense Advanced Research Projects Agency (DARPA), and the Air Force Research Lab (AFRL). Results presented in this paper were obtained using the Chameleon testbed supported by the National Science Foundation. Any opinions, findings, conclusions, or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the NSF, AFRL, or DARPA.

Funders and funder numbers:

• Defense Advanced Research Projects Agency
• Air Force Research Laboratory
• National Science Foundation (CNS-2238815)

Keywords

• MCTS
• Non-Stationary Environments
• Sequential Decision-Making
