Learning in Real-Time Strategy Games

Date
2016-03-21
Authors
Padmanabhan, Vineet
Goud, Pranay
Pujari, Arun K.
Sethy, Harshit
Abstract
One of the main drawbacks of Real-time strategy (RTS) games is that the built-in artificial intelligence (gamebots) tends to lag behind human players. To make gamebots perform like human players, they should find the best action from the knowledge base (training data) at each time-stamp and should be able to play against any opponent. To this end, we propose a learning approach called Individual Action Plan Learning, in which each plan contains exactly one action during training. While executing, i.e., playing, we use the sensor information from the current game-state (map) to select the best action. This approach has two main advantages over other work in RTS: (1) we can do away with the concept of a simulator, which is often game-specific and usually hard-coded for each type of RTS game; (2) our system can learn merely by observing humans playing games and needs no authoring effort, whereas RTS learning approaches usually require demonstrations to be annotated. Two AI games, Battle City and S3, were used to evaluate our approach.
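The action-selection step sketched in the abstract, matching the current game-state's sensor readings against single-action plans gathered during training, could look roughly like the following. This is a minimal illustrative sketch, not the paper's actual algorithm: the feature encoding, the squared-distance metric, and all names here are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Trace:
    """One training example: sensor features paired with exactly one action."""
    features: tuple  # sensor readings observed from the game-state (map)
    action: str      # the single action the human took in that state

def select_action(traces, current_features):
    """Pick the action whose recorded game-state is closest to the
    current one, using squared distance over sensor features.
    (Distance metric is an illustrative assumption.)"""
    def dist(t):
        return sum((a - b) ** 2 for a, b in zip(t.features, current_features))
    return min(traces, key=dist).action

# Hypothetical toy traces: (enemy_distance, wall_ahead) -> action
traces = [
    Trace((1.0, 0.0), "shoot"),
    Trace((5.0, 1.0), "turn"),
    Trace((5.0, 0.0), "advance"),
]
print(select_action(traces, (0.8, 0.0)))  # -> shoot
```

Because each plan holds only one action, selection reduces to a lookup over observed state-action pairs, which is what lets the system learn from unannotated human play without a game-specific simulator.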
Keywords
Real-time Strategy Games, Reinforcement learning
Citation
Proceedings - 2015 14th International Conference on Information Technology, ICIT 2015