Swarm Intelligence in Strategy Games
Developing an intelligent player for a game is no easy task. Each Artificial Intelligence Player is created specifically for its context, with very little reusability. However, some lower-level developments have great significance both in games and in other areas, such as search, pathfinding, and optimization algorithms. In this work, we designed and implemented an algorithm that combines Swarm Intelligence concepts with the traditional decision mechanisms of modern Artificial Intelligence Players, specifically those used in Strategy Games. Our main objective was to assess the adequacy of current Swarm Intelligence knowledge for the requirements of an Artificial Intelligence Player, followed by the development of a test algorithm. The basic idea was to move our implementation away from a common centralized solution towards a decentralized one, complemented by some of the currently documented Swarm Intelligence notions. The resulting algorithm was primarily responsible for the communication between the units of an Artificial Intelligence Player. A centralized, scripted Artificial Intelligence was used as a benchmark for our Swarm Intelligence based solution. This work attempts to address the predictability of artificial players, a common issue with scripted implementations, and to improve their adaptability by taking advantage of the emergent behavior that results from Swarm Intelligence concepts.
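The abstract does not detail the communication mechanism, so the following is only a minimal sketch of one common way to decentralize unit coordination: stigmergy, where units communicate indirectly by depositing and sensing signals on a shared pheromone grid instead of receiving orders from a central controller. All names here (PheromoneMap, Unit, deposit, and so on) are hypothetical and not taken from the work itself.

```python
import random

class PheromoneMap:
    """Hypothetical stigmergy layer: units communicate only by
    depositing and reading scalar 'pheromone' values on a grid."""

    def __init__(self, width, height, evaporation=0.1):
        self.width, self.height = width, height
        self.evaporation = evaporation
        self.grid = [[0.0] * width for _ in range(height)]

    def deposit(self, x, y, amount):
        self.grid[y][x] += amount

    def evaporate(self):
        # Old signals fade, so stale information disappears on its own.
        for row in self.grid:
            for x in range(self.width):
                row[x] *= 1.0 - self.evaporation

    def strongest_neighbour(self, x, y):
        # A unit senses only adjacent cells: it needs no global knowledge.
        neighbours = [(nx, ny)
                      for nx in (x - 1, x, x + 1)
                      for ny in (y - 1, y, y + 1)
                      if 0 <= nx < self.width and 0 <= ny < self.height
                      and (nx, ny) != (x, y)]
        return max(neighbours, key=lambda p: self.grid[p[1]][p[0]])

class Unit:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def step(self, pheromones):
        # Signal own position (e.g. "enemy sighted here") ...
        pheromones.deposit(self.x, self.y, 1.0)
        # ... then climb the local gradient left by the other units.
        self.x, self.y = pheromones.strongest_neighbour(self.x, self.y)

# One tick of the decentralized loop: no central controller issues orders.
pheromones = PheromoneMap(32, 32)
units = [Unit(random.randrange(32), random.randrange(32)) for _ in range(10)]
for _ in range(100):
    for unit in units:
        unit.step(pheromones)
    pheromones.evaporate()
```

Because all coordination flows through the shared grid and old deposits evaporate, group behavior emerges from local rules rather than a script, which is the property the work contrasts with its centralized benchmark.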
Believable Interactions Between Synthetic Characters
The human player is often required to interact and cooperate with synthetic characters, which also cooperate and interact with each other. However, unless the action is tightly linear and scripted, the expression of these interactions is often confusing and hard for the human player to follow. This work explores how traditional animation principles can be applied to the expression of interaction between actors, both synthetic and human, to make communication and cooperation more believable and the experience more immersive. To validate our work, we implemented our model in a multiplayer sports game in which each character is an artificial player, and asked participants to evaluate videos of the interactions. The data we collected suggests that our approach not only significantly improves believability, but also makes the interactions between agents easier to understand and the action easier to interpret.
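As a purely illustrative sketch, one traditional animation principle, anticipation, can be applied to agent interaction by having the agent telegraph an action before performing it. The sequence below uses entirely hypothetical names and a stand-in animation back end; it is not the model from the work, only an example of the idea for a pass between two artificial players.

```python
import time

def pass_ball(passer, receiver, animate):
    # 1. Anticipation: turn toward the teammate before acting.
    animate(passer, "look_at", target=receiver, duration=0.4)
    # 2. Staging: hold the pose briefly so the intention is readable.
    animate(passer, "hold_pose", duration=0.2)
    # 3. The action itself starts only after the intent was communicated.
    animate(passer, "kick_toward", target=receiver, duration=0.3)
    # 4. Follow-through: the receiver acknowledges, closing the loop.
    animate(receiver, "raise_hand", duration=0.3)

def log_animation(actor, clip, target=None, duration=0.0):
    # Stand-in back end: a real engine would queue the animation clip here.
    print(f"{actor}: {clip}" + (f" -> {target}" if target else ""))
    time.sleep(duration)

pass_ball("player_7", "player_10", log_animation)
```

The ordering matters more than the clips themselves: an observer watching the anticipation and staging steps can predict the pass before it happens, which is what makes the cooperation readable.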
My Army: Strategy Game Engine
This document describes the process of remaking the game engine of the "My Army" game. The process starts with a detailed review of the game engine to understand how it works, together with a comparison between the current language and the new language to assess each language's capacity to solve the existing issues. Based on these reviews, a solution to the existing issues was then proposed. After weeks of development and debugging, a new version of the simulator was completed. The next step was to analyze this version and introduce parallelism to improve the simulator's performance. After several attempts, a parallel solution was reached that offered better performance and consistent battle outputs. Finally, all versions of the battle simulator were tested with different battle examples from the game, yielding a comparative analysis of the performance of each version.
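The document does not name the languages or libraries involved, so the following is only a generic sketch of the parallelization idea: independent battles can be simulated on separate processes, and giving each battle a fixed random seed keeps the outputs consistent with a sequential run. All names and the toy combat rule are hypothetical.

```python
import random
from concurrent.futures import ProcessPoolExecutor

def simulate_battle(battle):
    battle_id, army_a, army_b, seed = battle
    rng = random.Random(seed)  # per-battle seed => reproducible outcome
    while army_a > 0 and army_b > 0:
        # Toy combat round: each side loses a random number of units.
        army_a -= rng.randint(1, 3)
        army_b -= rng.randint(1, 3)
    winner = "A" if army_a > army_b else "B"
    return battle_id, winner

if __name__ == "__main__":
    battles = [(i, 100, 100, 1000 + i) for i in range(8)]
    # Battles are independent, so they can run on separate processes;
    # for the same seeds, results match the sequential version exactly.
    with ProcessPoolExecutor() as pool:
        for battle_id, winner in pool.map(simulate_battle, battles):
            print(f"battle {battle_id}: winner {winner}")
```

Pinning the random state per battle is one simple way to get the "consistent battle outputs" property the document mentions, since the parallel schedule can no longer influence the result of any individual battle.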
Cooperation through Reinforcement Learning in a Collaborative Game
The objective of this work is to create agents for the collaborative game Geometry Friends that can work together with previously unknown teammates, without any a priori coordination. Starting from an agent for the circle character that uses a reinforcement learning approach, this work continues its development to further improve its behavior and performance. This process goes through several stages, analyzing the agent's components and adjusting their behavior, where necessary and possible, to improve the agent's performance. These mechanisms are then extended to the other character of the game, the square, adapting the components where needed to the specific problems the square character faces. Once both agents are complete, the work focuses on the coordination problems between the agents and on the difficulties the implementation raises when extending the agents to the cooperative problems. Throughout the various phases of development, the agents are tested to determine the impact of each change on their performance. The tests suggest that the internal functionalities introduce some incompatibilities with the intended behavior, since they limit the behaviors that can be added to the agents. While the circle agent improves, the square and cooperative performances remain below expectation.
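The abstract mentions a reinforcement learning approach without naming the algorithm, so the following is only a minimal tabular Q-learning sketch with a toy stand-in level and hypothetical action names, meant to illustrate the kind of update loop such an agent might use; it is not the agent described in the work.

```python
import random
from collections import defaultdict

ACTIONS = ["roll_left", "roll_right", "jump"]  # illustrative circle actions

def q_learning(env_step, env_reset, episodes=500,
               alpha=0.1, gamma=0.99, epsilon=0.1):
    q = defaultdict(float)  # maps (state, action) -> estimated return
    for _ in range(episodes):
        state, done = env_reset(), False
        while not done:
            # Epsilon-greedy policy: mostly exploit, sometimes explore.
            if random.random() < epsilon:
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q[(state, a)])
            next_state, reward, done = env_step(state, action)
            # Standard one-step Q-learning update.
            best_next = max(q[(next_state, a)] for a in ACTIONS)
            q[(state, action)] += alpha * (
                reward + gamma * best_next - q[(state, action)])
            state = next_state
    return q

def toy_reset():
    return 0  # start at position 0 on a tiny one-dimensional track

def toy_step(state, action):
    # Stand-in level: roll right to reach the 'diamond' at position 5.
    if action == "roll_right":
        state = min(state + 1, 5)
    elif action == "roll_left":
        state = max(state - 1, 0)
    done = state == 5
    return state, (1.0 if done else -0.01), done

q = q_learning(toy_step, toy_reset)
print(max(ACTIONS, key=lambda a: q[(0, a)]))  # expected: "roll_right"
```

A tabular learner like this is easy to inspect but hard-codes the action set, which hints at the kind of limitation the abstract reports: behaviors that the agent's internal components do not expose cannot simply be added on top.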