Learning in Real-Time Strategy Games

dc.contributor.author Padmanabhan, Vineet
dc.contributor.author Goud, Pranay
dc.contributor.author Pujari, Arun K.
dc.contributor.author Sethy, Harshit
dc.date.accessioned 2022-03-27T05:51:14Z
dc.date.available 2022-03-27T05:51:14Z
dc.date.issued 2016-03-21
dc.description.abstract One of the main drawbacks of real-time strategy (RTS) games is that the built-in artificial intelligence (gamebots) tends to lag behind human players. To make gamebots perform like human players, a gamebot should find the best action from its knowledge (training data) at each time-stamp and should be able to play against any opponent. To this end, in this paper we propose a learning approach called Individual Action Plan Learning, where each plan has exactly one action during training. While executing, i.e., playing, we use sensor information from the current game-state (map) to select the best action. This approach has two main advantages over other work in RTS: (1) we can do away with the concept of a simulator, which is often game-specific and usually hard-coded for each type of RTS game; (2) our system can learn merely by observing humans playing games and needs no authoring effort, whereas RTS demonstrations usually have to be annotated. Two AI games, Battle City and S3, were used to evaluate our approach.
dc.identifier.citation Proceedings - 2015 14th International Conference on Information Technology, ICIT 2015
dc.identifier.uri https://doi.org/10.1109/ICIT.2015.51
dc.identifier.uri https://ieeexplore.ieee.org/document/7437609
dc.identifier.uri https://dspace.uohyd.ac.in/handle/1/8352
dc.subject Real-time Strategy Games
dc.subject Reinforcement learning
dc.title Learning in Real-Time Strategy Games
dc.type Conference Proceeding. Conference Paper