Abstract:
Recent years have witnessed continuous optimization and innovation in reinforcement learning algorithms. Games, as a key application paradigm, have been widely employed to develop superior reinforcement learning models. Different game environments present diverse challenges to reinforcement learning agents; however, mainstream gaming paradigms have not yet specifically addressed issues such as variable action spaces and varying tasks. To address this gap, this paper introduces a two-player adversarial game with a configurable player action space. The game allows task challenges to be diversified by configuring opponent strategies. Additionally, we propose a reinforcement learning method to support the decision-making of the AI player (the agent) in the game. The inclusion of an action masking algorithm enables effective handling of variable action spaces. Experimental results indicate that the agent's decision-making behavior adapts to changes in opponent behavior and continuously improves with policy updates. The trained agent exhibits strong performance in this game, showing that the proposed method can serve as a decision-making baseline for the novel game and providing a robust foundation for further research and applications. © 2024 IEEE.
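The action masking mentioned in the abstract is commonly implemented by assigning invalid actions a logit of negative infinity before the softmax, so they receive zero probability and are never sampled. Below is a minimal, dependency-free sketch of that idea for a discrete action space; `masked_softmax` is a hypothetical helper for illustration, not the paper's actual code.

```python
import math

def masked_softmax(logits, action_mask):
    # Set the logits of invalid actions to -inf so that exp(-inf) = 0
    # gives them zero probability after normalization.
    masked = [l if valid else float("-inf")
              for l, valid in zip(logits, action_mask)]
    # Subtract the max valid logit for numerical stability.
    peak = max(v for v in masked if v != float("-inf"))
    exps = [math.exp(v - peak) if v != float("-inf") else 0.0
            for v in masked]
    total = sum(exps)
    return [e / total for e in exps]

# Example: actions 1 and 3 are currently unavailable to the agent.
probs = masked_softmax([1.0, 2.0, 0.5, 3.0], [True, False, True, False])
```

In a policy-gradient setting, the same mask would also be applied when computing the log-probability of the chosen action, so gradients never flow through invalid actions.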
Year: 2024
Page: 143-148
Language: English