Decision and Behavior Trees

Decision Making:

Decision making is the core of all AI in games: it lets an agent choose and execute actions based on various inputs. Some of the most common decision-making algorithms are decision trees, behavior trees, fuzzy logic, and state machines. We will cover the first two in the article below.

The most important data structures and classes that we will use in the game are ActionManager and Action. An Action encompasses anything the agent can perform. The action manager keeps track of all pending and active actions, and adds or removes them from each queue based on their priority and on whether they can interrupt other actions. The action manager also checks whether an action has been waiting too long and, if so, removes it from the pending list. I also have an AI class which serves as the base class for all the AI present in the game.

The AI class first calls the update function of the decision-making behavior and gets the final action to be performed. This action is then added to the pending queue of that AI's action manager. The action manager then looks at all the tasks in the pending queue and, as discussed above, based on priority either promotes them to the active action or keeps them pending.
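The scheduling described above can be sketched as follows. This is a minimal illustration, not the post's actual implementation: the field names (priority, canInterrupt, expiryTime) and the promotion rule are assumptions.

```cpp
#include <algorithm>
#include <vector>

// Hypothetical minimal Action: higher priority wins, and an action may be
// allowed to interrupt the currently active one.
struct Action
{
    int   priority     = 0;
    bool  canInterrupt = false;
    float queuedTime   = 0.0f; // how long it has waited in the pending queue
    float expiryTime   = 5.0f; // dropped from pending after waiting this long
};

class ActionManager
{
public:
    void ScheduleAction(Action* a) { m_Pending.push_back(a); }

    void Update(float dt)
    {
        // Age pending actions and discard any that have waited too long.
        for (Action* a : m_Pending) a->queuedTime += dt;
        m_Pending.erase(
            std::remove_if(m_Pending.begin(), m_Pending.end(),
                [](Action* a) { return a->queuedTime > a->expiryTime; }),
            m_Pending.end());

        if (m_Pending.empty()) return;

        // Find the highest-priority pending action.
        auto best = std::max_element(m_Pending.begin(), m_Pending.end(),
            [](Action* a, Action* b) { return a->priority < b->priority; });

        // Promote it when nothing is active, or when it outranks the active
        // action and is allowed to interrupt.
        if (!m_Active ||
            ((*best)->priority > m_Active->priority && (*best)->canInterrupt))
        {
            m_Active = *best;
            m_Pending.erase(best);
        }
    }

    Action* GetActive() const { return m_Active; }

private:
    std::vector<Action*> m_Pending;
    Action* m_Active = nullptr;
};
```

A low-priority wander would be promoted first, then displaced by a higher-priority interrupting action the next frame it is scheduled.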

There is also a game manager which manages the player and all the AI present in the game, and which calls each AI's update function.

Decision Trees:

Decision trees are, along with state machines, one of the simpler decision-making behaviors. A decision tree contains only two types of nodes: decision nodes and action nodes. A decision node contains the logic to evaluate a decision, while an action node serves as a leaf and calls a particular action in the game. In my implementation, both node types derive from a common DecisionTreeNode class.

Each decision node has a true child and a false child, and calls the appropriate one depending on what its condition evaluates to. I make extensive use of function pointers to set both the function a decision node uses to check the game condition and the action an action node performs. This way I do not have to create different node types for different decisions and actions; I can reuse the two main classes simply by creating objects of them.
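The two node types could be sketched like this. As an assumption for illustration, this version uses std::function where the post uses raw function pointers, and the class and method names are my own.

```cpp
#include <functional>
#include <utility>

// Common base: evaluating the tree returns the leaf that was reached.
class DecisionTreeNode
{
public:
    virtual ~DecisionTreeNode() = default;
    virtual DecisionTreeNode* MakeDecision() = 0;
};

// Leaf node: runs its stored action callable.
class ActionNode : public DecisionTreeNode
{
public:
    explicit ActionNode(std::function<void()> action)
        : m_Action(std::move(action)) {}

    DecisionTreeNode* MakeDecision() override
    {
        m_Action();
        return this;
    }

private:
    std::function<void()> m_Action;
};

// Interior node: evaluates a condition and forwards to the true or false child.
class DecisionNode : public DecisionTreeNode
{
public:
    DecisionNode(std::function<bool()> condition,
                 DecisionTreeNode* trueChild, DecisionTreeNode* falseChild)
        : m_Condition(std::move(condition)),
          m_True(trueChild), m_False(falseChild) {}

    DecisionTreeNode* MakeDecision() override
    {
        return m_Condition() ? m_True->MakeDecision()
                             : m_False->MakeDecision();
    }

private:
    std::function<bool()> m_Condition;
    DecisionTreeNode* m_True;
    DecisionTreeNode* m_False;
};
```

With callables injected this way, the same two classes express any condition/action pairing, which is the reuse the post describes.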

The decision tree I am using in my game is a very simple guard behavior. The AI checks whether the player is within a certain radius: if so, it moves toward the player; otherwise it performs a random wander action.

Behavior Tree:

Behavior trees are one of the more complex decision-making algorithms. Halo 2 was one of the first games to ship with behavior trees, and since then they have become very popular in the games industry. Behavior trees combine many previously existing concepts such as hierarchical state machines, scheduling, planning, and action execution. Where a decision tree has nodes, a behavior tree has tasks. There are many types of tasks, and each task can take different kinds of inputs and perform many kinds of actions. Some of the tasks are the selector, sequencer, inverter, and condition. There are also action tasks, which are similar to action nodes and usually sit at the leaves of the behavior tree. Let us discuss some of these tasks below.

  1. Selector: A selector picks the first of its children to succeed. It returns success as soon as one child returns success; otherwise it keeps running through the remaining children, and fails only if every child fails. (Figure: example of a selector node in a behavior tree.)
  2. Sequencer: A sequencer is the opposite of a selector: it keeps running as long as its children return success. It exits immediately, without continuing execution, as soon as one child returns failure. (Figure: example of a sequence node in a behavior tree.)
  3. Condition: A condition task checks for a condition and selects a child based on its outcome. The condition can be a simple boolean, or something richer such as a check against multiple float or enum values. (Figure: a behavior tree with composite nodes.)
  4. Inverter: An inverter inverts the status returned by its child. Unlike the nodes above, which can have multiple children, it can therefore have only one.
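The tasks above can be sketched against a shared Task interface. This is an illustrative sketch, not the post's code: the statuses mirror the four the post mentions later (success, failure, running, stopped), and the class names are assumptions.

```cpp
#include <vector>

enum class Status { Success, Failure, Running, Stopped };

class Task
{
public:
    virtual ~Task() = default;
    virtual Status Run() = 0;
};

// Selector: Success at the first child that succeeds, Failure if all fail.
class Selector : public Task
{
public:
    void AddChild(Task* t) { m_Children.push_back(t); }
    Status Run() override
    {
        for (Task* child : m_Children)
            if (child->Run() == Status::Success) return Status::Success;
        return Status::Failure;
    }
private:
    std::vector<Task*> m_Children;
};

// Sequencer: Failure at the first child that fails, Success if all succeed.
class Sequencer : public Task
{
public:
    void AddChild(Task* t) { m_Children.push_back(t); }
    Status Run() override
    {
        for (Task* child : m_Children)
            if (child->Run() == Status::Failure) return Status::Failure;
        return Status::Success;
    }
private:
    std::vector<Task*> m_Children;
};

// Inverter: single child; swaps Success and Failure, passes the rest through.
class Inverter : public Task
{
public:
    explicit Inverter(Task* child) : m_Child(child) {}
    Status Run() override
    {
        Status s = m_Child->Run();
        if (s == Status::Success) return Status::Failure;
        if (s == Status::Failure) return Status::Success;
        return s; // Running/Stopped unchanged
    }
private:
    Task* m_Child;
};
```

Note the symmetry: a selector is an OR over its children while a sequencer is an AND, which is why each is the other's opposite.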

Implementation of Behavior Tree:

All tasks derive from an abstract base class, Task. Each task then implements the interfaces and the logic required for it. The behavior tree class itself derives from the DecisionMakingBehavior class, just like the decision tree, but includes additional information such as a blackboard used to share data between the various tasks in the tree.
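A blackboard can be as simple as shared key/value storage that every task in the tree can read and write. The sketch below is an assumption for illustration (a real one would likely store variants or typed entries rather than only floats).

```cpp
#include <string>
#include <unordered_map>

// Minimal blackboard sketch: float values keyed by name, shared by the
// tasks of one behavior tree.
class Blackboard
{
public:
    void Set(const std::string& key, float value) { m_Data[key] = value; }

    // Returns the stored value, or the supplied fallback when absent.
    float Get(const std::string& key, float fallback = 0.0f) const
    {
        auto it = m_Data.find(key);
        return it != m_Data.end() ? it->second : fallback;
    }

    bool Has(const std::string& key) const { return m_Data.count(key) != 0; }

private:
    std::unordered_map<std::string, float> m_Data;
};
```

A condition task might write a value such as the distance to the player here so that a later action task in the same tree can read it without the two tasks knowing about each other.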

The AI logic behind the behavior tree is similar to that of the decision tree: the AI moves randomly, but seeks and eats the player when the player gets too close. There are, however, many differences between the two implementations.

Instead of the simple move used in the decision tree, I use A* pathfinding to reach the next randomly selected point on the screen. The video below is not long enough to show pathfinding to the next point, but once the AI reaches one point, it selects another point on the graph. I also use pathfinding to move the AI toward the player when the player gets too close.

The behavior tree is also more complex than the decision tree: mine contains a sequencer node, multiple condition nodes, and an inverter node. Below you can see a representation of my behavior tree.

I had to use an inverter node because the way I was retrieving the status of the actions was inverted relative to how the selector works. Return statuses are one of the most important components of behavior trees, as almost all nodes depend on them to perform their tasks. I have four types of statuses: success, failure, running, and stopped.

Decision Tree Learning:

Learning is the process by which the AI looks at existing data and works out how to behave on its own. Instead of hand-writing logic for the AI, we can write code that learns that logic from existing data. Learning AI has the potential to adapt to each player, picking up their tricks and techniques to provide a consistent challenge, and it is a hot topic in the gaming world right now. Many techniques are used for learning: some run locally on one machine, while others use the power of servers to perform much deeper learning and provide a much better AI experience. Machine learning in games is one of the most researched topics in the game AI world today.

In this post, we will keep the learning process simple and create a decision tree out of an existing behavior tree. Decision trees can be learned efficiently, since they encode a series of decisions that produce an action from a set of observations. The trees can be wide or deep and can perform different tasks accordingly. Many algorithms exist for decision tree learning, but most of them derive from ID3, which we will look at now.

The ID3 algorithm uses a set of examples collected from observations and actions; the observations are called attributes. The main routine takes an array of examples, an array of attributes, and a root node for the tree. It splits the examples into groups by attribute value and recursively repeats the same process on each group, creating a node for it. The recursion stops when all examples in a group agree on the same action. The splitting step looks at each attribute and calculates its information gain, and the split with the highest information gain is selected as the one most likely to produce an optimized tree.

To select the best split, ID3 uses the entropy of the actions in each set. Entropy measures the degree to which the actions across the examples agree with each other. If all examples perform the same action, the entropy is zero; if two possible actions are distributed evenly among the examples, the entropy is one. Information gain is defined as the reduction in total entropy, and we select the split that reduces the entropy the most, ideally toward zero.
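The entropy and weighted-set-entropy calculations can be written as below. This is a sketch over plain action-label strings; the post's actual helpers operate on Example objects, so the signatures here are assumptions.

```cpp
#include <cmath>
#include <cstddef>
#include <string>
#include <unordered_map>
#include <vector>

// Shannon entropy of the action labels: 0 when all agree,
// 1 when two actions are split evenly.
float GetEntropy(const std::vector<std::string>& actions)
{
    if (actions.empty()) return 0.0f;

    // Count how many examples choose each action.
    std::unordered_map<std::string, int> counts;
    for (const auto& action : actions) ++counts[action];

    // Sum -p * log2(p) over the action proportions.
    float entropy = 0.0f;
    const float total = static_cast<float>(actions.size());
    for (const auto& entry : counts)
    {
        const float p = entry.second / total;
        entropy -= p * std::log2(p);
    }
    return entropy;
}

// Entropy of a partition: each subset's entropy weighted by its share of
// the total example count. Information gain is the parent set's entropy
// minus this value.
float GetEntropyOfSets(const std::vector<std::vector<std::string>>& sets,
                       std::size_t totalCount)
{
    float entropy = 0.0f;
    for (const auto& set : sets)
        entropy += (set.size() / static_cast<float>(totalCount)) * GetEntropy(set);
    return entropy;
}
```

For example, a four-example set split evenly between "wander" and "seek" has entropy 1; partitioning it into a pure "wander" set and a pure "seek" set gives a weighted entropy of 0, for an information gain of 1, the best possible split.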

In my implementation, once the total entropy reaches zero I create an action node, since that means all examples have converged on the same action. Below is the code for the MakeDecisionTree function.

auto initialEntropy = GetEntropy(i_Examples);
// Zero entropy means every example agrees on the same action:
// emit a leaf action node and stop recursing.
if (initialEntropy <= 0)
{
    ActionNode* action = m_ActionFunction(i_Examples[0]->GetAction());
    i_RootNode->SetNodes(action, nullptr);
    return;
}

// Find the attribute whose split yields the highest information gain.
float bestInfoGain = 0.0f;
std::string bestAttribute;
std::vector<std::vector<Example*>> bestSets;
for (const auto& attribute : i_Attributes)
{
    auto sets = SplitByAttribute(i_Examples, attribute);
    auto overallEntropy = GetEntropyOfSets(sets, i_Examples.size());
    auto infoGain = initialEntropy - overallEntropy;
    if (infoGain > bestInfoGain)
    {
        bestInfoGain = infoGain;
        bestAttribute = attribute;
        bestSets = sets;
    }
}

// Turn the current node into a decision on the best attribute.
i_RootNode = m_DecisionFunction(bestAttribute);

// Remove the chosen attribute so deeper levels do not split on it again.
for (auto it = i_Attributes.begin(); it != i_Attributes.end(); ++it)
{
    if (*it == bestAttribute)
    {
        i_Attributes.erase(it);
        break;
    }
}

// Recurse into each subset, attaching a child decision node for it.
for (const auto& set : bestSets)
{
    DecisionNode* node = new DecisionNode();
    i_RootNode->SetNodes(node, nullptr);
    MakeDecisionTree(set, i_Attributes, node);
}

In the code above, each example contains an action to perform and the attribute associated with it. I created a map of action nodes and decision nodes that can be looked up by the action name stored in the example.

This results in the creation of a decision tree that behaves much like my hand-written decision tree.


Downloads: Assignment3

Controls : Click to move the player. AI moves automatically.
