Decision and Behavior Trees

Decision Making:

Decision making is at the core of all game AI: it lets an agent weigh various inputs and choose which action to execute. Some of the most common decision-making algorithms include decision trees, behavior trees, fuzzy logic, and state machines. We will cover the first two in the article below.

The most important data structures and classes that we will use in the game are ActionManager and Action. An Action encompasses anything the agent can perform. The action manager keeps track of all pending and active actions, adding or removing them from each queue based on their priority and on whether they can interrupt other actions. The action manager also checks whether an action has been waiting too long and, if so, removes it from the pending list. I also have an AI class which serves as the base class for all the AI present in the game.

The AI class first calls the update function of the decision-making behavior and gets the final action to be performed. This action is then added to the pending queue of that AI's action manager. The action manager then looks at all the tasks in the pending queue and, as discussed above, either promotes them to the active action or keeps them pending based on their priority.
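A sketch of how this bookkeeping might look (the field names and the timeout value here are assumptions for illustration, not my exact implementation):

```cpp
#include <cassert>
#include <vector>

// Hypothetical sketch of the pending/active bookkeeping described above.
struct Action {
    int priority = 0;          // higher value wins
    bool canInterrupt = false; // may preempt the active action
    float queuedTime = 0;      // how long it has waited in the pending list
};

class ActionManager {
public:
    void ScheduleAction(Action* a) { m_Pending.push_back(a); }

    // Called once per frame: promote, expire, or keep pending actions.
    void Update(float dt) {
        for (auto it = m_Pending.begin(); it != m_Pending.end();) {
            Action* a = *it;
            a->queuedTime += dt;
            if (a->queuedTime > m_Timeout) {        // waited too long: drop it
                it = m_Pending.erase(it);
            } else if (!m_Active ||
                       (a->canInterrupt && a->priority > m_Active->priority)) {
                m_Active = a;                        // promote to active
                it = m_Pending.erase(it);
            } else {
                ++it;
            }
        }
    }

    Action* GetActive() const { return m_Active; }

private:
    std::vector<Action*> m_Pending;
    Action* m_Active = nullptr;
    float m_Timeout = 5.0f; // assumed expiry window
};
```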

There is also a Game manager which manages the player and all the AI present in the game, and which calls the update function of each AI.

Decision Trees:

Decision trees are one of the simpler decision-making behaviors, along with state machines. A decision tree contains only two types of nodes: decision nodes and action nodes. A decision node contains the logic for checking a condition, while an action node serves as a leaf and triggers a particular action in the game. Both node types derive from DecisionTreeNode in my implementation.

Each decision node has a true child and a false child, and based on what the condition evaluates to, the decision node calls the respective child. I make extensive use of function pointers to set both the function a decision node uses to check the game condition and the action performed by an action node, so that I do not have to create different node subclasses for different decisions and actions; I can configure them just by creating objects of the main classes.
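A minimal sketch of this setup (using std::function in place of raw function pointers; the member names are illustrative, not my exact code):

```cpp
#include <cassert>
#include <functional>
#include <string>

class DecisionTreeNode {
public:
    virtual ~DecisionTreeNode() {}
    virtual DecisionTreeNode* MakeDecision() = 0;
};

// Leaf node: executes a configurable game action.
class ActionNode : public DecisionTreeNode {
public:
    explicit ActionNode(std::function<void()> action) : m_Action(action) {}
    DecisionTreeNode* MakeDecision() override { m_Action(); return this; }
private:
    std::function<void()> m_Action;
};

// Interior node: evaluates a configurable condition and recurses
// into the true or false child accordingly.
class DecisionNode : public DecisionTreeNode {
public:
    DecisionNode(std::function<bool()> condition,
                 DecisionTreeNode* trueNode, DecisionTreeNode* falseNode)
        : m_Condition(condition), m_True(trueNode), m_False(falseNode) {}
    DecisionTreeNode* MakeDecision() override {
        return m_Condition() ? m_True->MakeDecision() : m_False->MakeDecision();
    }
private:
    std::function<bool()> m_Condition;
    DecisionTreeNode* m_True;
    DecisionTreeNode* m_False;
};
```

With this, the guard behavior below is just one DecisionNode wired to two ActionNodes.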

The decision tree I am using in my game is a very simple version of a guard behavior. The AI checks whether the player is within a certain radius; if so, it moves toward the player. Otherwise the AI performs a random wander action.

Behavior Tree:

Behavior trees are one of the most complex decision-making algorithms. Halo 2 was one of the first games to ship with behavior trees, and since then they have become very popular in the games industry. Behavior trees combine many previously existing concepts such as hierarchical state machines, scheduling, planning, and action execution. Where a decision tree has nodes, a behavior tree has tasks. There are many types of tasks, and each can take different inputs and perform many kinds of actions. Some of the common tasks are the selector, sequencer, inverter, and condition. There are also action tasks, which are similar to action nodes and usually sit at the leaves of the behavior tree. Let us discuss some of these tasks below.

  1. Selector: A selector returns success as soon as one of its children returns success; otherwise it keeps running its children in order and fails only if all of them fail. (Figure: example of a selector node in a behavior tree.)
  2. Sequencer: A sequencer is the opposite of a selector: it keeps running as long as its children return success, and exits immediately as soon as one child returns failure. (Figure: example of a sequence node in a behavior tree.)
  3. Condition: A condition task checks a condition and returns a status based on its outcome. The condition can be a simple boolean or something more involved, such as a check against multiple float or enum values. (Figure: a behavior tree with composite nodes.)
  4. Inverter: An inverter inverts the status returned by its child. Hence it can only have one child, unlike the nodes above, which can have multiple children.
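To make the return-status handling concrete, here is a minimal stand-alone sketch of these tasks (class and member names are simplified stand-ins, not my exact implementation):

```cpp
#include <cassert>
#include <vector>

enum class Status { Success, Failure, Running, Stopped };

class Task {
public:
    virtual ~Task() {}
    virtual Status Run() = 0;
    void AddChild(Task* child) { m_Children.push_back(child); }
protected:
    std::vector<Task*> m_Children;
};

// Selector: succeeds on the first child that succeeds.
class Selector : public Task {
public:
    Status Run() override {
        for (Task* child : m_Children)
            if (child->Run() == Status::Success) return Status::Success;
        return Status::Failure;
    }
};

// Sequencer: fails on the first child that fails.
class Sequencer : public Task {
public:
    Status Run() override {
        for (Task* child : m_Children)
            if (child->Run() == Status::Failure) return Status::Failure;
        return Status::Success;
    }
};

// Inverter: flips its single child's status.
class Inverter : public Task {
public:
    Status Run() override {
        Status s = m_Children[0]->Run();
        if (s == Status::Success) return Status::Failure;
        if (s == Status::Failure) return Status::Success;
        return s; // Running and Stopped pass through unchanged
    }
};
```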

Implementation of Behavior Tree:

All the tasks derive from an abstract base class Task. Each task then implements the interfaces and the required logic for that task. The behavior tree class itself derives from the Decisionmakingbehavior class, just like the decision tree, but includes additional information such as a blackboard used to share data between the various tasks in the tree.

The AI logic behind the behavior tree is similar to that of the decision tree: the AI moves randomly, but seeks and eats the player when the player gets too close. There are, however, many differences between the behavior tree and the decision tree.

Instead of the simple move I use in the decision tree, I use A* to pathfind to the next randomly selected point on the screen. The video below is not long enough to show pathfinding across several points, but once the AI reaches a point, it selects a new point on the graph. I also use pathfinding to move the AI toward the player when the player gets too close to the AI.

The behavior tree is also more complex compared to the decision tree with mine containing a sequencer node, multiple condition nodes and an inverter node. Below you can see the representation of my behavior tree.

I had to use an inverter node because the way I was retrieving the status of the actions was inverted relative to how the selector works. Return statuses are one of the most important components of behavior trees, as almost all nodes depend on them to perform their tasks. I have four types of statuses: success, failure, running, and stopped.

Decision Tree Learning

Learning is the process by which the AI looks at existing data and learns how to behave on its own. Instead of writing the AI's logic by hand, we write code that learns that logic from existing data, and the AI creates its own. Learning AI has the potential to adapt to each player, learning their tricks and techniques and providing a consistent challenge, and it is a hot topic in the gaming world right now. There are many techniques used for learning; some run locally on one machine, while others use the power of servers to perform much deeper learning and provide a much better AI experience. Machine learning in games is one of the most researched topics in the game AI world today.

In this post, we will keep the learning process simple and create a decision tree out of an existing behavior tree. Decision trees can be learned efficiently, since they are a series of decisions that produce an action to take based on a set of observations. The trees can be either wide or deep and, depending on that, can perform different tasks. There are many algorithms used to perform decision tree learning, but many of them derive from ID3, which we will look into now.

The ID3 algorithm uses a set of examples collected from observations and actions; the observations are called attributes. The main routine takes an array of examples, an array of attributes, and a root node to start the tree. The algorithm divides the examples into groups and recursively performs the same action, creating a node for each group, so as to produce the most efficient tree. The recursion returns when all the examples in a group agree on the same action. The splitting process looks at each attribute and calculates its information gain, and the division with the highest information gain is selected as the one most likely to yield an optimized tree.

To select the best split, ID3 uses the entropy of the actions in each set. Entropy measures the degree to which the actions across all the examples are in agreement. If all examples perform the same action, the entropy is zero. If all the possible actions are distributed evenly among the examples, the entropy is one. We define information gain as the reduction in total entropy, and we select the division that drives the entropy toward zero.
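The entropy calculation can be sketched as follows (a simplified stand-alone version: my real GetEntropy works on Example objects, while here each example is reduced to just its action name):

```cpp
#include <cassert>
#include <cmath>
#include <map>
#include <string>
#include <vector>

// Shannon entropy of the action distribution: -sum(p * log2 p).
float GetEntropy(const std::vector<std::string>& actions) {
    std::map<std::string, int> counts;
    for (const auto& a : actions) counts[a]++;
    float entropy = 0.0f;
    for (const auto& kv : counts) {
        float p = static_cast<float>(kv.second) / actions.size();
        entropy -= p * std::log2(p);
    }
    return entropy;
}
```

With all examples agreeing on one action the result is 0; with two actions split evenly it is 1, matching the description above.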

In my implementation, once the total entropy reaches zero I create an action node, since that means all examples have arrived at the same action. Below is the code for the MakeDecisionTree function.

auto initialEntropy = GetEntropy(i_Examples);
if (initialEntropy <= 0)
{
    // All examples agree on the same action: create a leaf action node.
    ActionNode* action = m_ActionFunction(i_Examples[0]->GetAction());
    i_RootNode->SetNodes(action, nullptr);
    return;
}
auto exampleCount = i_Examples.size();
float bestInfoGain = 0;
std::string bestAttribute = "";
std::vector<std::vector<Example*>> bestSets;
for (auto attribute : i_Attributes)
{
    auto sets = SplitByAttribute(i_Examples, attribute);
    auto overallEntropy = GetEntropyOfSets(sets, i_Examples.size());
    auto infoGain = initialEntropy - overallEntropy;
    if (infoGain > bestInfoGain)
    {
        bestInfoGain = infoGain;
        bestAttribute = attribute;
        bestSets = sets;
    }
}
i_RootNode = m_DecisionFunction(bestAttribute);
// Remove the attribute we just split on before recursing.
auto it = i_Attributes.begin();
while (it != i_Attributes.end())
{
    if ((*it) == bestAttribute)
        it = i_Attributes.erase(it);
    else
        ++it;
}
for (auto set : bestSets)
{
    auto attributeValue = set[0]->GetAttribute();
    DecisionNode* node = new DecisionNode();
    i_RootNode->SetNodes(node, nullptr);
    MakeDecisionTree(set, i_Attributes, node);
}

In the above code, each example contains an action to perform and an attribute associated with it. I created a map of action nodes and decision nodes that can be looked up by the action name stored in the example.

This results in the creation of a decision tree which behaves much like my hand-built decision tree.


Downloads: Assignment3

Controls : Click to move the player. AI moves automatically.

Pathfinding for AI

Assignment 2 - Write-Up

Pathfinding, along with movement, forms the most basic level of AI techniques and is a fundamental requirement in almost any game. Pathfinding moves a character from one place to another intelligently, along the most efficient path possible. A pathfinding algorithm generates the path from the current position to the final position based on some conditions we will discuss below. The generated path is then resolved using a path-follow algorithm, which translates the path into movement; finally this movement is delegated to either arrive or seek to reach the destination.

Before we go into the details of the pathfinding algorithms, let us look at the data structures we are using. The main one is the Node, the representation of a point on the world graph. We also have edges which connect nodes; a node can have multiple outgoing edges, each with a weight used to decide which route is best. The next data structure is the NodeRecord, which stores the cost to arrive at a particular node and the estimated total cost from that node to the end node. This estimated total is the sum of the cost so far and a value called the heuristic. The final data structure is the Path, a collection of directed weighted edges, which is what the pathfinding algorithm returns.
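A minimal sketch of these structures (the field names are assumptions based on the description above; my actual classes carry more bookkeeping):

```cpp
#include <cassert>
#include <vector>

struct Node {
    int m_Id;
    float x, y; // position in the world
};

struct DirectedWeightedEdge {
    Node* m_Source;
    Node* m_Sink;
    float m_Weight; // cost of traversing this edge
};

struct NodeRecord {
    Node* m_Node;
    DirectedWeightedEdge* m_IncomingEdge; // how we arrived here
    float m_CostSoFar;                    // cost from start to this node
    float m_EstimatedTotalCost;           // m_CostSoFar + heuristic(node, goal)
};

// The pathfinder's output: an ordered list of edges from start to goal.
struct Path {
    std::vector<DirectedWeightedEdge> m_Edges;
};
```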

Different algorithms calculate the heuristic with different criteria, and there are three broad cases. The heuristic can always underestimate the actual remaining cost, which is what A* uses. The heuristic can be 0, in which case we have Dijkstra's algorithm. Finally, the heuristic can overestimate the cost, which results in greedy algorithms. Because of this, Dijkstra's is a special case of A*.

Now let us discuss both Dijkstra’s algorithm and A* in brief.

Dijkstra’s algorithm: Dijkstra works by spreading out from the start node along its connections. As it spreads to more distant nodes, it records the direction each node was reached from, and once it reaches the end node it generates the path by traversing the edges backward to the source. Dijkstra’s algorithm only considers the cost so far and not the estimated future cost, and since we terminate the algorithm as soon as we first reach the end node rather than when it is fully processed, the path it returns might not always be the shortest. The main problem with Dijkstra is that it searches the entire graph for the shortest possible route. This is useful if we’re trying to find the shortest path to every possible node, but not for point-to-point pathfinding.

Dijkstra also has O(n²) execution complexity and O(n²) memory complexity in the worst case, since it has to go through all the nodes and all the connections of each node. Below is a video showing Dijkstra’s pathfinding. The boid is moving from node 0 to node 6. The nodes are modeled after a real-life map, as shown below.



A* is designed to be a better version of Dijkstra with fewer comparisons, as it takes into account not only the cost so far to a node but also the estimated cost from that node to the end node.

A*: The basic algorithm for A* is very similar to Dijkstra, but instead of always considering the open node with the lowest cost so far, we choose the node that is most likely to lead to the shortest overall path, as judged by a heuristic. If the heuristic is accurate, the algorithm is efficient; if the heuristic is terrible, it can perform even worse than Dijkstra. For each node, we calculate the estimated total cost to the end node, which is the sum of the cost so far and the heuristic. The heuristic is generally calculated from the distance between the two nodes. There are many types of heuristics used for A*.

Some of the common heuristics for A*, which I have implemented in my code, are: [1]

1.       Manhattan distance: Manhattan distance is the standard heuristic for square grids, which is what I am using. It is defined as the sum of the differences in x and y between the two nodes.

dx = abs(node.x - goal.x)
dy = abs(node.y - goal.y)
return (dx + dy)

2.       Euclidean distance: Manhattan distance does not work well if we want paths that move diagonally. To allow the path to go in any direction we use Euclidean distance. It is defined as

dx = abs(node.x - goal.x)
dy = abs(node.y - goal.y)
return sqrt(dx * dx + dy * dy)

3.       Diagonal distance: This is particularly useful for diagonal movement but not for straight-line movement.

dx = abs(node.x - goal.x)
dy = abs(node.y - goal.y)
return (dx + dy) - min(dx, dy)

4.       Zero: This returns zero, which essentially turns A* into Dijkstra.

Below is a video showing the A* movement.

The video above and the video for Dijkstra are both generated for the same graph, with the same nodes and the same edge weights, but we can see the difference in how the path is calculated by Dijkstra compared to A*. A* is guaranteed to give the shortest path for any given graph as long as the heuristic is admissible.

The performance of A* can be given as O(lm) for both memory usage and execution time, where l is the number of nodes whose estimated cost is less than that of the goal and m is the number of outgoing connections per node. Since A* takes the future cost into consideration and updates the current node with a new connection when it finds a better one, A* is considerably faster than Dijkstra for the same graph and number of nodes, even though the big O notation looks similar for both.

Implementation Details:

1.       AStar: The algorithm has two priority queues, called the open list and the closed list; I have implemented these lists as vectors. We use priority queues because we want the nodes in these queues sorted by estimated total cost. The open list contains all the nodes currently under consideration and those to be considered next; the closed list contains all the nodes that have already been visited. The algorithm starts by inserting the start node into the open list. Once inserted, we get the connections of that node and then iterate over them, looking at the end node of each connection and the cost to reach it.

We then check if the node is in the closed list; if it is and the new cost to reach it is lower, we remove it from the closed list and insert it into the open list. Next we check if the node is already in the open list: if it is and the new cost is lower than the existing cost, we update the connection to that node; otherwise we insert it into the open list. The current node is then moved to the closed list. Since I am using a vector instead of a custom queue, I sort the data with std::sort.

bool SortHelperFunction(NodeRecord* n1, NodeRecord* n2) { return n1->m_EstimatedTotalCost < n2->m_EstimatedTotalCost; }
std::sort(openList.begin(), openList.end(), SortHelperFunction);

I sort both the open and closed lists so the nodes with the least cost are at the beginning, where they are easy to access.

Once we find the end node, I walk back from it collecting each node and then reverse the vector. The resulting output is a Path structure containing a vector of DirectedWeightedEdges.
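The backtracking step can be sketched like this (with simplified stand-in types rather than my actual NodeRecord and Path):

```cpp
#include <algorithm>
#include <cassert>
#include <map>
#include <vector>

struct Edge { int from, to; };
struct Record { int node; Edge incoming; bool hasIncoming; };

// From the goal record, follow each record's incoming edge back to the
// start, then reverse so the edges run start -> goal.
std::vector<Edge> BuildPath(const std::map<int, Record>& closed, int goal) {
    std::vector<Edge> path;
    int current = goal;
    auto it = closed.find(current);
    while (it != closed.end() && it->second.hasIncoming) {
        path.push_back(it->second.incoming);   // edge used to reach current
        current = it->second.incoming.from;    // step back toward the start
        it = closed.find(current);
    }
    std::reverse(path.begin(), path.end());
    return path;
}
```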

2.       Calculating heuristics: I have an enum that specifies all the heuristics supported in my implementation. All the heuristics are implemented as static functions that take two nodes and return a float. In the AStar class, a GetHeuristic function serves as a wrapper for calling the different heuristic functions.

float AStar::GetHeuristic(Node* i_StartNode, Node* i_EndNode)
{
    switch (m_Heuristic)
    {
    case Heuristic::Euclidean:
        return Heuristics::Euclidean(i_StartNode, i_EndNode);
    case Heuristic::Manhattan:
    case Heuristic::Default:
        return Heuristics::Manhattan(i_StartNode, i_EndNode);
    case Heuristic::Diagonal:
        return Heuristics::Diagonal(i_StartNode, i_EndNode);
    case Heuristic::Zero:
        return 0;
    }
    return 0;
}

enum class Heuristic { Default, Manhattan, Euclidean, Diagonal, Zero };

struct Heuristics
{
    static float Manhattan(Node* i_StartNode, Node* i_EndNode)
    {
        return abs(i_StartNode->m_Position.x - i_EndNode->m_Position.x) + abs(i_StartNode->m_Position.y - i_EndNode->m_Position.y);
    }
    static float Euclidean(Node* i_StartNode, Node* i_EndNode)
    {
        return i_StartNode->m_Position.distance(i_EndNode->m_Position);
    }
    static float Diagonal(Node* i_StartNode, Node* i_EndNode)
    {
        float dx = abs(i_StartNode->m_Position.x - i_EndNode->m_Position.x);
        float dy = abs(i_StartNode->m_Position.y - i_EndNode->m_Position.y);
        return (dx + dy) - std::min(dx, dy);
    }
};


3.       Calculating movement from the path returned by A*: The A* algorithm returns a Path struct as described above. The resulting path is then fed into a new movement algorithm called DynamicPathFollow.

a.       DynamicPathFollow: DynamicPathFollow is a delegated behavior. It delegates the actual job of producing the dynamic output to DynamicArrive and DynamicLookWhereYouGo. My implementation takes each point in the path and uses DynamicArrive to move the boid to that position. I continuously check whether the character's position is inside the slow radius, and once it is, I switch the DynamicArrive target to the next point. This repeats until the final point is reached.

4.       Creating a grid instead of a graph: As discussed above, I created a graph based on a real-world location. Since this graph only had 11 nodes, it was fairly easy to create by hand, but bigger graphs would be impossible to build that way. So I implemented a grid class that reads in an image and translates it into a grid using a technique called tiling. In this method, we divide the entire space into tiles, with the center of each tile acting as a node. When creating a path on such a grid, instead of checking stored connections we fetch the neighbors of a node and use a constant cost between them. When we need to insert node records into the open list, we create a new directed weighted edge between the current node and the neighbor, and when building the path we use this edge to find the source and sink.
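The waypoint switching described in (a) above can be sketched as follows (a plain Vec2 and an index-based helper stand in for my actual DynamicPathFollow class; slowRadius is the DynamicArrive slow radius):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Vec2 { float x, y; };

float Distance(const Vec2& a, const Vec2& b) {
    return std::sqrt((a.x - b.x) * (a.x - b.x) + (a.y - b.y) * (a.y - b.y));
}

// Returns the index of the waypoint the character should currently
// arrive at: advance once the character enters the slow radius.
size_t UpdateTargetIndex(const std::vector<Vec2>& path, size_t current,
                         const Vec2& position, float slowRadius) {
    if (current + 1 < path.size() &&
        Distance(position, path[current]) < slowRadius)
        return current + 1; // inside the slow radius: aim at the next point
    return current;
}
```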

The following is the image that I am using for my grid generation; the code accepts any image and converts it into a grid. Also, to work with grids, I added a boolean that marks whether a particular node can be reached by the boid. This is used for collision avoidance.
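The tiling step can be sketched as follows (an assumed illustration of mapping a pixel to its tile's node index, not my grid class itself):

```cpp
#include <cassert>

// A tile's centre acts as a node; walkable comes from the image data.
struct GridNode { int row, col; bool walkable; };

// Map a pixel coordinate to its row-major tile/node index, assuming the
// image width is a multiple of tileSize.
int TileIndex(int px, int py, int imageWidth, int tileSize) {
    int col = px / tileSize;
    int row = py / tileSize;
    int cols = imageWidth / tileSize;
    return row * cols + col;
}
```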

There are two main ways of doing collision avoidance. One is to actively run collision detection every frame between the boid and all the objects in the scene; for a graph as big as mine, this uses considerable resources. The second is to bake collision information into the nodes when creating the path, which is the approach I take. Even though this method is resource-intensive while the path is being generated, once the path exists it only costs whatever the path following itself needs.

The finer the resolution of each tile, the better the collision detection. I am using tiles of 3×3 pixels. I have experimented with other values such as 2×2, 4×4, 8×8, etc. The higher-resolution grids provide great collision detection but are slow to produce a path, and the boid moves erratically since there are many more nodes in the path.

The one that I have selected has a good balance between speed and performance.


A* and Dijkstra are two of the most famous pathfinding algorithms in use. Even though Dijkstra is rarely used in production games, studying it gives a better look at how pathfinding works and how it relates to A*. As shown above, even for the same weights and nodes, A* performs a lot better than Dijkstra, and I think that is one of the reasons it is so widely used in games. I also found that Dijkstra used a couple of megabytes more memory than A* when creating the path for the grid. Dijkstra also took considerably longer to find paths on huge graphs; sometimes the difference between A* and Dijkstra was more than a few minutes. The difference is far less visible on the smaller graph, where A* is only faster by a couple of milliseconds. I think the reason is that Dijkstra visits every node, and there are very many of them in my grid: about 90,000 nodes in the scene with a 3×3 tile map. This also shows that even though Dijkstra and A* have similar theoretical big O complexity, the practical performance is totally different.


You can download the sample here: Download

Controls on Keyboard

1 – Dijkstra

2 – A* with Manhattan

3 – A* with Grid and click to move.


[1] Amit Patel, Heuristics, [website]. Available:

Basic AI movement algorithms

Movement forms the most basic level of AI techniques and is a fundamental requirement in almost any game. Each character in the game has a position and other properties that help control its movement. A movement algorithm uses these properties to find the next position for the character. All movement algorithms take data about the current state of the character and the world and produce a movement request which is then fed back into the character. Not all movement algorithms require complex input, and not all provide complex output.

There are two basic types of movement algorithms

Kinematic: Kinematic movement algorithms have a single output, namely the direction to move. They normally provide only velocity and direction, not accelerations. The characters tend to move in the required direction at full speed, and this results in movement that is not quite realistic.

Dynamic: Dynamic movement algorithms are also called steering algorithms, as they provide the accelerations necessary to move the character to a required position. Craig Reynolds first described steering behaviors for the movement algorithms he developed. Dynamic algorithms add an extra layer of computational complexity but result in more realistic character movement.

To develop our algorithms, we only consider the character in 2D space, with the 3rd axis representing gravity.

I wrote all the algorithms in C++ with OpenFrameworks. I am depicting the inputs and outputs as classes and structs in the following manner; the classes and structures shown below have evolved as I wrote more algorithms.

struct DynamicSteeringOutput
{
    DynamicSteeringOutput() {}
    ~DynamicSteeringOutput() {}
    DynamicSteeringOutput(ofVec2f linear, float angular) : LinearAcceleration(linear), AngularAcceleration(angular) {}
    ofVec2f LinearAcceleration = ofVec2f(0, 0);
    float AngularAcceleration = 0;

    DynamicSteeringOutput operator+ (DynamicSteeringOutput& i_Dynamic);
    DynamicSteeringOutput operator- (DynamicSteeringOutput& i_Dynamic);
    template <typename T>
    DynamicSteeringOutput operator* (T i_Other);
};

struct KinematicSteeringOutput
{
    ofVec2f Velocity = ofVec2f(0, 0);
    float Rotation = 0;
};

class Kinematic
{
public:
    ofVec2f Position;
    float Orientation = 0;
    ofVec2f Velocity;
    float Rotation = 0;

    void Update(KinematicSteeringOutput i_SteeringInput);
    void Update(DynamicSteeringOutput i_SteeringInput);

    float mMaxVelocity;
};
The Kinematic class is the main class, holding the Position, Rotation, Orientation, and Velocity of an object. The Update functions take the respective steering outputs and update these variables accordingly. I also have a boid class which draws a circle and triangle on screen, checks whether the boid is out of bounds, and wraps it around if it is. It also leaves breadcrumbs on the screen so that we can follow the boid around. All algorithms are implemented in their own classes and share a common function named GetSteering, which takes various inputs depending on the algorithm.

Implementing the algorithms:

There are quite a few algorithms that I’ve written to create different behaviors. Kinematic algorithms update both the velocity and rotation, while dynamic algorithms only update one of linear or angular acceleration. So, to create complex dynamic behaviors, we sum the outputs of different simple algorithms.

The first algorithm I will discuss is seek. Seek is one of the basic movement algorithms; it essentially moves the character toward a specific target. We will discuss both the kinematic and dynamic seek algorithms. First, kinematic seek: it takes the positions of the character and the target, calculates the direction between them, and moves the character in that direction at a maximum velocity that we provide. We can directly set the orientation, if needed, using the arctangent of the velocity. Since the velocity is never set to zero, the character keeps moving, so this algorithm is used for chasing a moving target.

Dynamic Seek: Dynamic seek keeps everything the same as kinematic seek but updates acceleration instead of velocity. Since constant acceleration grows the velocity without bound, we clamp the maximum velocity the character can reach.
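A minimal sketch of dynamic seek (a plain Vec2 stands in for ofVec2f here, and maxAcceleration is the assumed tunable limit):

```cpp
#include <cassert>
#include <cmath>

struct Vec2 { float x, y; };

// Full acceleration along the normalized direction to the target.
Vec2 DynamicSeek(const Vec2& character, const Vec2& target, float maxAcceleration) {
    Vec2 dir{ target.x - character.x, target.y - character.y };
    float len = std::sqrt(dir.x * dir.x + dir.y * dir.y);
    if (len == 0) return Vec2{ 0, 0 }; // already at the target
    return Vec2{ dir.x / len * maxAcceleration, dir.y / len * maxAcceleration };
}
```

Velocity clamping happens afterward, in the Kinematic Update step.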

Kinematic and Dynamic Flee: Flee is similar to seek, but instead of applying acceleration or velocity toward the target, we apply them in the opposite direction.

Kinematic Arrive: Since kinematic seek never stops once it reaches the destination, we need an algorithm that stops the character at the target: arrive. The arrive algorithms use a target radius surrounding the target and stop the character as soon as it enters that radius. If the radius is small, the character overshoots and doubles back repeatedly, which results in a wiggling motion. Hence I used a large radius, such as 100, which eliminates the wiggle but results in the character stopping short of the target.

Dynamic Arrive: Like kinematic arrive, dynamic arrive has a target radius, but it also has a larger slow radius within which the character decelerates. The rate of deceleration is proportional to the character's distance between the slow and target radii. Once the character reaches the target radius, we stop the movement. In my case, I used a very large slow radius with a very small target radius, resulting in a slow, smooth stop at the target. Below is a video in which the character can be seen moving to the target, which is the mouse-click position.

Dynamic Look Where You Are Going (LWYG): As discussed above, all basic dynamic algorithms generate either linear acceleration or angular acceleration. To rotate the character toward a target as seen above, we use the LWYG algorithm along with dynamic align, which we will discuss next. LWYG takes the character's movement and, from its velocity, calculates the orientation the character has to align itself to.

Dynamic Align: Dynamic align is the main algorithm that produces the angular acceleration required to rotate. It calculates the rotation between the character's orientation and a target orientation, which can be supplied by behaviors other than LWYG. It then maps that rotation into the range -180 to 180 degrees, calculates the required acceleration, clamps it to the maximum speed, and returns it. As with dynamic arrive, I used a large slow angle and a small target angle and, based on the calling algorithm, set the value of max speed to achieve the desired results, as seen in the above video.
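The mapping into the -180 to 180 range can be sketched as follows (degrees assumed, matching the description above):

```cpp
#include <cassert>
#include <cmath>

// Wrap an angle in degrees into the (-180, 180] range so the character
// always turns the shorter way.
float MapToRange(float rotation) {
    rotation = std::fmod(rotation, 360.0f);
    if (rotation > 180.0f)   rotation -= 360.0f;
    if (rotation <= -180.0f) rotation += 360.0f;
    return rotation;
}
```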

Kinematic wander: The wander algorithm generates random movement. The kinematic version does this by perturbing the current orientation and moving the character along the new orientation at maximum speed. To change the orientation, we use a random binomial, which generates a random number between -1 and 1 but produces values near 0 more often. This keeps each change in orientation random yet small. The same technique is used in dynamic wander too.
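The random binomial can be sketched as the difference of two uniform samples (std::rand is used here for brevity):

```cpp
#include <cassert>
#include <cstdlib>

// Difference of two uniform [0, 1] samples: lies in [-1, 1] and
// clusters around 0, giving small but random orientation changes.
float RandomBinomial() {
    auto rand01 = [] { return static_cast<float>(std::rand()) / RAND_MAX; };
    return rand01() - rand01();
}
```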


Dynamic Wander: The dynamic wander algorithm is part of a group called delegated behaviors, of which LWYG is also a part. All delegated behaviors work in much the same way: they generate a target, either a position or an acceleration, and then delegate the steering to other basic dynamic behaviors — for example, LWYG delegates to align. In kinematic wander, we randomly selected an orientation, which can be thought of as selecting a point on a circle around the character. In dynamic wander, we instead place the target on a circle held at a fixed distance in front of the character.


I am using arrive to move to the point and LWYG to rotate toward it. We can also implement the same behavior using dynamic face and then applying maximum acceleration in that direction. I implemented both methods and found that arrive + LWYG gave me the best results, as seen in the video below.

Blending: Blending is the process in which we linearly combine different behaviors. The usual way is to assign weights to the different algorithms based on our requirements; we multiply each algorithm's output by its weight and sum the results to get the final output for the scenario. The most common blended behavior is flocking, developed by Craig Reynolds and used in many games and movies. Flocking mimics the movement of a flock of birds. It has three main component behaviors: separation, velocity and alignment matching, and moving toward the center of mass. In our case, we assign a mass of 1 to all the boids, so the center of mass is simply the average of their positions. We will discuss the individual algorithms below.

Dynamic Separation: In its basic form, dynamic separation checks the distance from each boid to every other boid in the scene and, if a boid is too close, applies an acceleration in the opposite direction so the boids move apart. The total steering applied to a boid is the sum of the individual forces from all surrounding boids, which can even be zero when the boid is surrounded by symmetrically placed, equidistant boids whose forces cancel out. To make the forces more realistic, we use the inverse square law: the strength of each repulsive force is inversely proportional to the square of the distance between the two boids.
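A sketch of inverse-square separation (the threshold, decay coefficient, and cap are the usual tuning parameters; the names are illustrative, not my actual variable names):

```python
import math

def separation(position, neighbors, threshold, decay, max_acceleration):
    """Sum inverse-square repulsions from every neighbor closer than threshold."""
    ax = ay = 0.0
    for nx, ny in neighbors:
        dx, dy = position[0] - nx, position[1] - ny
        dist = math.hypot(dx, dy)
        if 0.0 < dist < threshold:
            # Repulsion strength falls off with the square of the distance,
            # capped so very close boids don't produce huge accelerations.
            strength = min(decay / (dist * dist), max_acceleration)
            ax += strength * dx / dist  # push along the away direction
            ay += strength * dy / dist
    return ax, ay
```

Note how two neighbors at equal distance on opposite sides cancel exactly, which is the zero-net-force case mentioned above.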

Dynamic Velocity Match: This algorithm takes the difference between the target's velocity and the character's velocity, divides it by a time-to-target, and clamps the result to the maximum acceleration. It then sets that as the character's linear acceleration.
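A minimal sketch of velocity match; the time_to_target parameter controls how aggressively the character closes the velocity gap:

```python
import math

def velocity_match(char_velocity, target_velocity, time_to_target, max_acceleration):
    """Accelerate so the character's velocity approaches the target's."""
    ax = (target_velocity[0] - char_velocity[0]) / time_to_target
    ay = (target_velocity[1] - char_velocity[1]) / time_to_target
    # Clamp to the maximum acceleration so the correction can't spike.
    mag = math.hypot(ax, ay)
    if mag > max_acceleration:
        ax = ax / mag * max_acceleration
        ay = ay / mag * max_acceleration
    return ax, ay
```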

I’ve implemented moving to the center using Dynamic Align and LWYG algorithms.

Flocking can be done either as a free group or as a group following a leader; in our assignment we do the latter. To calculate the center of mass, I first average the positions of all the boids except the leader and then average that result with the leader's position. This gives the leader's position more weight and lets the followers trail a little behind the leader. All the boids, including the leader, avoid each other. I've made the leader click-to-move, and even when the leader reaches the required position, you can still see it moving around trying to avoid the other boids.
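The leader-weighted center described above can be computed like this (a sketch of my averaging scheme, not the actual code):

```python
def leader_weighted_center(leader_pos, follower_positions):
    """Average the followers' positions, then average that with the leader's
    position, giving the leader half the total weight."""
    n = len(follower_positions)
    fx = sum(p[0] for p in follower_positions) / n
    fy = sum(p[1] for p in follower_positions) / n
    return ((fx + leader_pos[0]) / 2.0, (fy + leader_pos[1]) / 2.0)
```

With, say, nine followers, the leader alone contributes as much as all followers combined, which drags the flock's target center toward the leader and keeps the followers trailing behind.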

You can see the flocking in the above video. I've also implemented changing the leader. If you watch carefully, once a new leader is selected, the old leader joins the pack and the pack moves away from the new leader. Increasing the number of followers can make the screen a bit crowded, since all the boids tend to repel each other. I experimented with 5 to 10 followers, and with 10 it became harder for the leader to reach the designated position when we click on the screen.

I also added the ability to add a different leader who moves randomly. Each follower then follows whichever leader is closest to it. You can enable and switch between secondary leaders by pressing 'k' and disable the secondary leader with 'l'.


Most of the algorithms I've implemented are dynamic. I also implemented kinematic versions of seek, arrive, and wander, but the dynamic algorithms are much better, as they produce much smoother motion. Most of my time was spent finding the right values for the maximum speeds, accelerations, and other parameters in these algorithms. Since even a small change in acceleration or speed drastically changed the output, I had to search for values I was satisfied with. I think I could still tweak the values for even more precise movement, but as my professor said in class, we are not trying to find perfect movement; if it is good enough to pass without close inspection, we are in a good place.

For flocking, I assigned weights of 2, 1, 3, and 2 to separation, velocity match, arrive, and LWYG respectively. I think that for a good flock, even if some boids occasionally fall behind the main group, they should primarily avoid colliding with others and try to reach the center. Changing these values produces noticeably different behaviors: I tried fractional weights for separation, and this caused some of the boids to collide with each other while trying to reach the center. I also give the leader and followers different speeds and different avoidance weights. Since the followers always follow the leader, I halved the leader's avoidance weight and increased its maximum speed so it can reach the destination more reliably.

Adding a secondary leader to flocking changes a lot, as each boid has to continuously check which leader it is closest to while still staying close to the overall center of mass.


You can download the sample here: Download. All code is open source and is available at my GitHub.

Controls on Keyboard

press 1 – Basic movement

press 2 – Move to mouse click

press 3 – Wander

press 4 – Flocking

press N – When flocking, to change the leader.

press K – When flocking, to enable secondary leader and change secondary leader.

press L – When flocking, to disable the secondary leader.

When flocking or seek to mouse, use the mouse left button to set the position.


[1] I. Millington and J. Funge, Artificial Intelligence for Games, 2nd ed. CRC Press, 2009. [E-book] Available: Safari e-book.