Synchronized Behavior Trees

Motivation

I had spent the better part of a year and a half thinking about and researching what I would want a next-gen AI framework to be. This framework was intended to live within the engine layer and provide a generalized architecture for writing and executing any kind of game AI; I think the best description of the intent is that I was trying to build a “genre and game agnostic” AI system. My primary concerns with the framework’s design were ones of production. I don’t believe AI can continue to exist only in code; we need to embrace more data-driven approaches to authoring AI. This viewpoint is not shared by the vast majority of my peers, which often makes discussions about the topic quite difficult.

As part of this generalized framework research, I did a fair amount of work on behavior trees (BT) and went through numerous iterations on the core BT architecture. Through this work, I came to some conclusions about overall decision making and behavior authoring. For the most part, this work was abandoned and I’ve now resigned myself to the fact that I will probably not get a chance to move forward with it. I feel the information is still useful, so I would like to share my ideas and conclusions in the hope that they might trigger a discussion or inspire someone to do something awesome and crazy. This information sharing will be split into two articles: the first discusses my final BT architecture, and the second will delve into my conclusions and why I feel that way.

NOTE: It is also extremely important to make it clear that what I am about to discuss is not a set of new techniques or ideas, but rather an attempt to formalize some concepts and architectures of behavior trees. This information is necessary for the follow-up article.

Introduction

I don’t really want to rehash old information, so I will assume that anyone reading this is familiar with the basic concepts of behavior trees. If not, I would direct you to the following resources:

Now that that’s out of the way, let’s move forward. The first thing you might notice when discussing BTs is that a lot of people will refer to them as reactive planners. What they actually mean by reactive planner is simply a “BIG IF STATEMENT”. The reactive part means that each time you evaluate the tree, the result could change based on the environment, i.e. it will react. The planning part simply implies that for a given set of values (the conditions for the tree), it will select some authored plan of actions to execute. I’m sure a lot of programmers (and potentially academics) are probably raging at my gross oversimplification of BTs, so let me give a basic example:

[Figure 1]

Traditionally, behavior tree evaluation starts at the root and proceeds, depth first, down the tree until it reaches a successful node. If an environmental change occurs that affects the tree, it will be detected on the next evaluation and the behavior will change accordingly. This implies that you need to be careful when authoring the tree to ensure higher priority behaviors are placed before lower priority behaviors so that we get the proper “reactivity”.
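
To make the traversal concrete, below is a minimal sketch of this evaluation model. The node interface and names are my own invention for illustration, not taken from any particular implementation.

#include <vector>

enum class Status { Failed, Succeeded, Running };

struct Node
{
	virtual ~Node() {}
	virtual Status Evaluate() = 0;
};

// A selector evaluates its children in order (highest priority first) and
// returns the result of the first child that doesn't fail. Re-evaluating
// from the root each update is what makes the tree "reactive": a higher
// priority branch can take over at any time.
struct Selector : public Node
{
	std::vector<Node*> m_children; // ordered from highest to lowest priority

	virtual Status Evaluate() override
	{
		for ( Node* pChild : m_children )
		{
			Status const result = pChild->Evaluate();
			if ( result != Status::Failed )
			{
				return result;
			}
		}
		return Status::Failed;
	}
};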

One thing that initially bothered me when looking at various behavior tree implementations was that whereas I saw them as a tool to author actual behaviors, a lot of programmers saw BTs as a way to describe the transitions between behaviors. What I mean by that is that the granularity of actions in their trees was quite low. You would see leaf nodes such as “FightFromCover”, “GoToCover” or, even worse, things like “RangedCombat” and “InvestigateSuspiciousNoises”. That’s not to say it’s always the case; I have seen examples of BTs with a much higher granularity in their nodes, such as “moveto”, “shoot”, “reload”, “enter cover”, etc. This, at least for me, seemed like a more attractive alternative as it allows designers to actually prototype and author new behaviors without depending on a programmer to build each behavior. I envisioned a workflow where game designers could, to a large degree, script with BTs. The reality is that a lot of newer designers in the industry are not able to script using code, and since I don’t want to artificially lock them out, I wanted to provide a toolset that gives them some degree of autonomy with their prototyping and designs.

There is one downside to increasing the granularity: massive growth of the tree size, and this can have a pretty bad effect on the performance profile of a tree evaluation, especially for the lowest priority behaviors. Given the simple tree below, to reach our currently running behavior we needed to evaluate numerous nodes earlier in the graph. For each subsequent update of the behavior we still need to traverse the entire tree, checking whether anything has changed and whether we need to switch behavior. With massive trees, the cost of this traversal can quickly add up, and it depends heavily on the cost of your condition checks, as they make up the bulk of the operations performed. I hope the irony that the most expensive BT update occurs when the agent is doing nothing is not lost on you.

RANT: To be frank, I find a lot of performance discussions regarding BTs to be misleading. I would love to be in a situation where the most expensive thing in my AI update was the actual tree traversal and not the work being done by the nodes. The condition nodes alone will almost certainly incur cache misses when querying your knowledge structure (be it a blackboard or whatever), as will any calls into other game systems. Barring some sort of idiotic implementation, I think your time optimizing a BT is better spent minimizing the cost of the work being done by the nodes rather than the traversal of said nodes.

[Figure 2]

Naively, my first thought to get around this problem was that I didn’t need to re-evaluate the entire tree each time; I could simply resume evaluation from my last point. This is similar to AIGameDev’s “event driven behavior trees”, but there are some problems with this approach, some of which are not covered in the AIGameDev (http://www.youtube.com/watch?v=n4aREFb3SsU) materials. The main problem is that you lose the reactivity of the tree: you will only react to new events once the tree finishes an evaluation, i.e. once a behavior fully completes. It’s also unclear what happens in the case of parallel nodes, as we could at any given point be running N behaviors in parallel across multiple branches of the tree.

Stored State Behavior Trees

After some thought, I decided that there was no way around repeating the evaluation from the root each frame without losing basic functionality. Instead, I decided to limit the amount of work done by storing the state of each node across subsequent evaluations. I would store the state of each node as it was calculated, then on subsequent evaluations I would skip all the nodes that had previously completed (succeeded or failed). The stored state of the BT nodes would only be reset once the tree fully completes an evaluation. For clarity’s sake, I’m gonna refer to this technique as a Stored State BT (SSBT).
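
As a rough sketch of the idea (again, the names and interface are my own, for illustration only), the stored state simply short-circuits the evaluation of any node that has already completed:

enum class Status { Failed, Succeeded, Running };

struct SSNode
{
	Status m_storedStatus = Status::Running; // Running doubles as "not yet completed"

	virtual ~SSNode() {}

	Status Evaluate()
	{
		// Skip any node that completed (succeeded or failed) on a previous evaluation
		if ( m_storedStatus != Status::Running )
		{
			return m_storedStatus;
		}

		m_storedStatus = EvaluateInternal();
		return m_storedStatus;
	}

	// Called once the tree fully completes an evaluation
	virtual void Reset() { m_storedStatus = Status::Running; }

protected:
	virtual Status EvaluateInternal() = 0;
};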

At this point, I’m sure a lot of you are asking yourselves how this is any different from “event driven BTs”. Well, let’s work through an example and the differences should become clear. Take the following tree, which allows our agent to investigate a sound:

[Figure 3]

In the traditional BT model, we will evaluate the entire tree each AI update, which means we will detect the noise as soon as it happens. Great, so what happens with my approach? Well, everything works great on the first AI update. We correctly select the smoking idle behavior and we store each node’s state. The tree state looks as follows:

[Figure 4]

On the second update, nothing has changed, so we evaluate the tree skipping all nodes that have completed (i.e. succeeded or failed) and we re-enter the smoking behavior and update it. So far so good. On the third update, a noise is detected, but since our evaluation skips completed nodes, we never enter the first sub-branch until the smoking behavior completes, and so we will not react to the sound. This is obviously broken.

Before I discuss a solution, I also want to point out a big problem with standard BTs. Let’s take a look at the middle branch (the actual investigation behavior). Each time we evaluate the tree, we will detect that we have an investigation target and run the “move to investigation target” behavior, and it is when that behavior completes that things start becoming nasty. Not only are we running the “has investigation target” check each subsequent update, but now we also need to verify that we are in fact at the correct required position to be able to progress past the “move to investigation target” node in the tree. Now imagine that the “investigate” behavior actually moves the character. The next time you update the tree, the “move to investigation target” node kicks back in and tries to adjust the position of the character back to the required point. Congratulations! You are now officially an AI programmer, as you have encountered the dreaded behavior oscillation problem. There are a bunch of ways to fix this but all are, in my opinion, workarounds and hacks due to sub-optimal decision making. One of my biggest problems with traditional BTs is that most people simply treat them as a glorified decision tree. Anyway, I’m getting side-tracked, but this problem is a very common one, and one that’s not really mentioned in a lot of the BT literature ;)

Back to my broken BT model! A naïve approach to fixing the problem that we don’t enter the first branch (the first selector node) could be to put a “repeat” decorator on it. A “repeat” decorator will cause its decorated node to be reset and re-evaluated each AI update. This decorator is entirely useless in the traditional BT approach but does have some value in the SSBT model. Does it fix our problem? Well, sort of: it does fix it, but only in that specific case. Consider the following modification to the tree:

[Figure 5]

The repeat decorator was moved to after the first selector node and a sequence was added before it. Granted, this example is a bit contrived, but it does highlight a very real problem. As a result of this setup, after the first evaluation the first sequence node’s status is set to “failed”, and on the subsequent evaluation the repeat node will not be reached, so we are back to the original problem. The solution I found to this problem was actually quite simple. I created a new decorator called a “Monitor”. When encountering a Monitor decorator, we would evaluate its child node and store the result of that evaluation (just as before), but the monitor would also register itself in a “monitored nodes list”. Due to the depth first nature of the tree, we would only register monitor nodes with a higher priority than our current branch in the tree. On the AI update after we registered the monitor nodes, we would evaluate all registered monitor nodes before the tree evaluation step. This simple mechanism allows us to maintain the reactivity of the tree but also prevents us from doing a lot of redundant work. The monitor concept is shown in the figure below.

[Figure 6]
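
A sketch of the mechanism described above; the BehaviorTree interface here is an assumption for illustration (the declared-but-undefined functions stand in for the rest of the assumed implementation), reusing the Status enum from the earlier sketches:

#include <vector>

enum class Status { Failed, Succeeded, Running };

struct MonitorNode
{
	Status EvaluateChild(); // evaluates the decorated node and stores its result, as before
};

struct BehaviorTree
{
	std::vector<MonitorNode*> m_monitoredNodes; // registered during the previous evaluation

	// Called by a monitor decorator when encountered during traversal. Depth first
	// order guarantees only monitors of higher priority than the current branch
	// end up in this list.
	void RegisterMonitor( MonitorNode* pMonitor ) { m_monitoredNodes.push_back( pMonitor ); }

	void Reset();            // resets all stored node state and clears the monitor list
	void EvaluateFromRoot(); // the regular stored state evaluation step

	void Update()
	{
		// Evaluate all registered monitor nodes before the tree evaluation step.
		// A succeeding monitor returns an interrupt result: reset the whole tree
		// and restart the evaluation from the root.
		for ( MonitorNode* pMonitor : m_monitoredNodes )
		{
			if ( pMonitor->EvaluateChild() == Status::Succeeded )
			{
				Reset();
				break;
			}
		}

		EvaluateFromRoot();
	}
};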

As with any technique, there is a bit of a catch; well, not a catch, but rather a need to change your way of thinking about the behavior tree. The monitor node concept is powerful, but there are some more details concerning monitor nodes and the SSBT model that need mentioning. Remember I mentioned that we have the concept of resetting the tree? This is similar to the “OnTerminate” call that Alex shows in his BT video. We reset nodes primarily when the tree has been fully evaluated, i.e. all active leaf nodes complete; in that case we call reset on the root node, which traverses the tree and resets all child nodes that require resetting.

There is also the issue of the resulting state of the monitor nodes. Since we evaluate the list of monitored nodes before we evaluate the tree (essentially evaluating them as standalone trees), what do we do with their return state? Well, this is where the reset comes in. If a given monitored node succeeds, it will return an interrupt result, which will cause us to reset the tree and restart the evaluation. This allows us to maintain the reactivity of the tree but also gives us the added benefit of clearing the entire state of the tree (more on that in a bit). Since monitor nodes can reset the tree and will always be evaluated, I would strongly suggest that no actual behavior be placed within them. I used these nodes to perform knowledge and world state queries to see if anything had occurred that I needed to react to. I also tended to use generalized cases for the monitor nodes I had, e.g. did I hear a new sound of some class, did I see something I need to react to, etc. In my opinion, the monitor decorator gives the AI programmer/designer a bit more control over the performance characteristics of the tree than before.

It’s also worth getting into a bit more detail with regards to the reset mechanism of the BT. Resetting a node allows us to actually perform some state clean-up operations before re-evaluating the tree. I found this concept extremely powerful in that it allowed me to keep the nodes’ actions synchronized with the agent’s state. What I mean by that is that certain action nodes would change the agent’s knowledge, e.g. a talk node would set an “isSpeaking” flag, a movement node would set an “isMoving” flag, and so on. This meant that during the reset of the tree, each node could clean up after itself in a sensible manner, i.e. clearing any flags it had set or sending termination commands to the animation layer.
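
For example, a hypothetical talk node might look something like this (the knowledge structure and names are illustrative, not from an actual implementation):

struct AgentKnowledge
{
	bool m_isSpeaking = false;
	bool m_isMoving = false;
};

struct TalkNode
{
	void OnEnter( AgentKnowledge& knowledge )
	{
		knowledge.m_isSpeaking = true;
		// ... kick off the dialog and the relevant animations ...
	}

	// Called during the tree reset: the node undoes exactly the state it set,
	// e.g. clearing its flag and sending a termination command to the animation layer
	void Reset( AgentKnowledge& knowledge )
	{
		knowledge.m_isSpeaking = false;
	}
};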

A good example of this would be if we were interrupted during a full body interaction with the environment. We can, during the reset of the tree, send an interrupt command to the animation layer and have it sensibly blend out of the interaction. This would normally have been achieved with checks at a higher level in the tree, i.e. if (in animation && animation can be interrupted) then interrupt. With the reset concept we get two benefits for free:

  1. We don’t have to worry about interrupting behaviors, since they can clean up after themselves, removing the need for additional checks earlier in the tree. This really helps in reducing issues arising from bad state due to tree interruption.
  2. All clean-up is localized to the nodes; each node knows exactly how to clean up after itself. This means you can have truly modular, plug-and-play nodes within the tree, without having to worry about their potential effects in the context of multiple traversals.

Honestly, I think the reset concept was one of the nicest things about this BT, and the resulting implementation code ended up being really simple and elegant.

Lastly, the SSBT model has one additional benefit: it actually solves the oscillation problem we mentioned earlier. Since we store the state for a node, once the “move to investigation target” behavior completes, we never have to run it again. Since we track the state, we know that we’ve completed it and can simply let the “investigate” behavior take over. I found this to be quite an elegant solution compared to the alternatives.

So now we find ourselves back at the same functional behavior as with the original BT model. I feel that the SSBT model actually has some functional and authoring improvements over the traditional model but hey, I’m biased! As I mentioned earlier, I believe that any performance problems with BTs are a result of the work done by individual nodes rather than the traversal of the actual tree structure, and I feel that the stored state evaluation approach really goes a long way towards limiting the amount of work that needs to be performed during each tree evaluation.

Note: Maybe some keen readers have noticed what the SSBT model has actually resulted in? It wasn’t immediately obvious to me either. When I finally realized it, I was shocked. Don’t get me wrong, it works exactly as intended and I really liked the elegance of the solution and implementation, but it led me down a rabbit hole that changed my entire perspective on the topic.

Synchronized Behavior Trees

So let’s actually get to the synchronized part of the post title. One thing I didn’t mention is that I only started playing with the evaluation model after I had already done some of the work discussed in this part; it just made more sense for the write-up to discuss it in this order.

When I started thinking about the AI framework, my target platforms were both current and next gen. As such, memory savings in the order of a few MBs were still really important. Now, behavior trees are probably not the most memory intensive things you’ll encounter, but I also had the idea of having the entire crowd run in the new AI framework. In fact, I didn’t want to make any distinction between crowd characters and NPCs from the AI’s perspective. As such, I was considering having several thousand behavior trees instantiated simultaneously. Let’s say I wanted 5000 characters and each tree took 3KB; that’s around 15MB of data, and there is no way that memory budget would ever be approved on the 360/PS3 platforms. So I decided to share the same tree across all agents. This approach has already been touched upon with Alex’s data-oriented trees.

My approach was pretty simple: each BT node would have a method called “GetMemoryRequirements” that would return the memory size needed to store its required instance state data, as well as all the memory needed for the node instance data of its children. For a given tree, I would then calculate the space needed to store all of the nodes’ instance data for an agent by calling “GetMemoryRequirements” on the root. I also chose to tightly pack the needed memory into one big block; I would store each node’s index into the memory block and use that to retrieve the relevant instance data for the node during traversal. During tree evaluation, I would pass in the agent’s node instance data for the specific tree, and each node would index into the memory and pull its state data as required. As it turned out, I didn’t need a lot of data per node; I think my heaviest node was around 16 bytes, with the vast majority weighing in at around 4 bytes. Now, this in itself is nothing special, but having only a single instance of the tree, which all agents were evaluating, gave me an idea.
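
A rough sketch of this scheme, with illustrative types and names, might look as follows:

#include <cstdint>
#include <vector>

struct BTNode
{
	std::vector<BTNode*> m_children;
	uint32_t m_instanceDataOffset = 0; // this node's index into an agent's memory block

	virtual ~BTNode() {}
	virtual uint32_t GetNodeStateSize() const { return 4; } // most nodes needed ~4 bytes, the heaviest ~16

	// Recursively assigns offsets into the packed block and returns the total
	// size needed for this node and all of its children
	uint32_t GetMemoryRequirements( uint32_t offset )
	{
		m_instanceDataOffset = offset;
		offset += GetNodeStateSize();

		for ( BTNode* pChild : m_children )
		{
			offset = pChild->GetMemoryRequirements( offset );
		}

		return offset;
	}
};

// Per agent, per tree: a single tightly packed block sized by calling the root
std::vector<uint8_t> AllocateAgentInstanceData( BTNode* pRoot )
{
	return std::vector<uint8_t>( pRoot->GetMemoryRequirements( 0 ) );
}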

It is also worth mentioning that the approach I took in designing the BT was quite animation-inspired, and so I made use of child trees quite extensively. A child tree is simply a behavior tree that is injected into another tree at runtime. All of our trees were statically allocated, so by injected I simply mean that we would store a pointer to the child tree in a “child tree connector” BT node. During evaluation, we would retrieve that pointer from each agent’s node instance data and trigger an evaluation of that child tree with the agent’s node instance data for the child tree. We would not actually dynamically compose trees in any way; all BTs had a fixed memory size. The child tree concept also gave us the ability to have multiple agents running different child trees from within the same parent tree connector node. I relied heavily on this mechanism to inject contextual behaviors originating from the environment.
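
A sketch of what a connector node’s per-agent instance data might contain (again, names are illustrative):

#include <cstdint>

struct BehaviorTree; // the shared, statically allocated tree

// Stored per agent in its node instance data block: which child tree this agent
// is running under the connector, and the agent's own state block for that tree
struct ChildTreeConnectorInstanceData
{
	BehaviorTree* m_pChildTree;             // injected at runtime, may differ per agent
	uint8_t*      m_pChildTreeInstanceData; // the agent's node instance data for the child tree
};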

Coming back to the synchronized trees, I will once again use a contrived example to discuss the concept. You will notice the use of heavyweight behaviors like “RangedCombat” in the example tree; these are simply examples and I don’t recommend using that level of granularity in your BTs. Anyway, given the very basic combat tree below, we immediately realize that we will encounter some problems when multiple agents run this tree. Can anyone see what they are?

[Figure 7]

When we have multiple agents running the tree, they will all throw a grenade if they can and will taunt the enemy if they can. Imagine a battle with 50 enemies: you probably don’t want 50 grenades in the air and 50 taunts at the same time. Design would immediately want to limit the number of agents that can run these behaviors and maybe add some cool-downs. A potential quick fix would be to use the blackboard/knowledge: you would give the node an ID, then keep track of how many agents are using it via a counter in the blackboard. The same approach can be used for the cool-down values. I’ve seen other, even more convoluted, approaches based on specific knowledge flags or even using external systems as synchronization points. Then comes the matter of thread safety and other concurrency issues, which is a whole other discussion. In any case, the main problem I find with a lot of these fixes is one of visibility and accessibility: some of them lock designers out of the equation, while others make the setup fragile, since when reordering the tree the blackboard flags may not be correctly set/unset. In fact, since I believe that the tree should be used to author behavior at a very high granularity, I also believe the tree should be used to solve exactly these problems. So, like I said, having a single tree gave me an idea: what’s stopping us from having a global tree state and using the tree as a synchronization object? Taking that idea a bit further, we end up with the following tree:

[Figure 8]

We can use what I termed “gate nodes” to help with these problems of synchronization. I was inspired by the talk given by the Yager guys at the Vienna AI game conference, and I borrowed the name from them. I’m not sure that their nodes operate in the same manner as mine, but I really liked the idea and the name. The basic premise is that with these gate nodes you can globally lock down portions of the tree. The gate nodes have global state, which is stored in the tree, as well as potentially having agent-specific state. They contain a semaphore which allows us to safely control access to sub-branches of the tree. This means that I can now have “allow only X agents” nodes in the tree, letting me limit behaviors globally to a certain number of agents. As each agent evaluates a gate node, it will try to take a lock on the node; if it succeeds, it will enter that branch. For example, if I only want a single grenade in the air at any time, or only 3 guys shooting at me while the rest of the enemies charge me, then it is relatively easy to do now. Best of all, this can be done without having to touch any agent knowledge or build any external machinery, and it’s clearly visible in the behavior tree.
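
Below is a hedged sketch of an “allow only X agents” gate, shown single-threaded for simplicity (a real implementation would need an actual semaphore or atomic counter for concurrent evaluation); the split between global and per-agent state mirrors the shared-tree scheme above:

#include <cstdint>

struct GateGlobalState // stored once, in the shared tree
{
	uint32_t m_maxAgents = 1;
	uint32_t m_numLockedAgents = 0;
};

struct GateAgentState // stored per agent, in its node instance data block
{
	bool m_hasLock = false;
};

// Returns true if the agent may enter the gated branch
bool TryEnterGate( GateGlobalState& globalState, GateAgentState& agentState )
{
	if ( agentState.m_hasLock )
	{
		return true; // we already hold a slot from a previous evaluation
	}

	if ( globalState.m_numLockedAgents < globalState.m_maxAgents )
	{
		globalState.m_numLockedAgents++;
		agentState.m_hasLock = true;
		return true;
	}

	return false; // the gate is full, fail the node so the selector moves on
}

// Called when the gated branch completes or the agent's tree is reset
void ReleaseGate( GateGlobalState& globalState, GateAgentState& agentState )
{
	if ( agentState.m_hasLock )
	{
		globalState.m_numLockedAgents--;
		agentState.m_hasLock = false;
	}
}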

Furthermore, the global state can be used for things like global cool-downs, giving us the flexibility to easily distinguish between local agent cool-downs (e.g. special abilities) and global cool-downs (e.g. combat gameplay control by limiting how often grenades are thrown). Best of all, since we only ever have a single instance of each BT, these global locks also work for all child trees, no matter when and where they are injected, since all child trees are shared as well.
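
A global cool-down falls out of the same idea; the timestamp simply lives in the shared tree rather than in an agent’s instance data (again, a sketch with illustrative names):

struct CooldownGlobalState // stored once, in the shared tree
{
	float m_lastTriggerTime = -1.0e30f; // effectively "never triggered"
	float m_cooldownDuration = 10.0f;   // e.g. minimum seconds between any two grenade throws
};

bool IsCooldownElapsed( CooldownGlobalState const& state, float currentTime )
{
	return ( currentTime - state.m_lastTriggerTime ) >= state.m_cooldownDuration;
}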

I was also playing around with one last use case for the synchronization ability of these trees, which I found to be quite elegant: using the global tree state to actually coordinate group behaviors. Let’s take a look at the following example:

[Figure 9]

By using global tree state for synchronization, we can restrict certain behavior branches to only execute when we have exactly 2 agents. We can then assign roles to the agents (for clarity’s sake I’ve left out the conditions for the role assignments). The agents can then perform the first part of their behaviors, and once each agent completes its part, it will enter a waiting loop until we can synchronize their states again, after which they can perform their follow-up behaviors. We also have the option to reset the tree for both agents if one agent breaks out of the behavior for any reason (e.g. due to an environmental interruption), but I will leave the how as an exercise for the reader in an effort to wrap up this document.

NOTE: One thing that’s necessary to mention is that the above behavior tree would be extremely difficult to construct without using the SSBT evaluation model. In fact, it would be such a pain that I wouldn’t even recommend trying it.

I personally feel that having global tree state is an incredible tool to have when building behaviors. Unfortunately, I have met a lot of opposition to the idea. In fact, when the BT was evaluated by someone else, the very first thing they did was remove the child tree feature and the global tree state. I don’t know, maybe I’m just an idiot and am missing something? I still believe it to be a good idea, and this is in part why I felt I should write this post. If you disagree or spot some obvious problems, please let me know.

Conclusion

In discussing this article with several people, it seems that there is a bit of misunderstanding about what I’m trying to say. I’ve come to the opinion that fundamentally there are two ways to look at behavior trees:

  1. As a tree of behaviors, where the tree represents the decision making structure (i.e. a decision tree).
  2. As a tree that describes a specific behavior, i.e. the set of steps needed to perform said behavior.

Personally, I think of BTs in terms of option 2; this is where they really shine! I think of BTs as a visual scripting language, and I think it’s a very elegant productivity solution to use them as such; this is the context for the material I covered in this article. As for using BTs as a tool for defining a character’s overall behavior, I feel that BTs are a sub-optimal solution to that problem and I would never recommend them for it. This is exactly what my next article will be about: the problems associated with trying to build a full character behavior with just a BT, and why using BTs for that is potentially a bad idea.

THANKS: Big thanks to Mika Vehkala for his follow-up on this post and our nice long Skype discussion, which highlighted some potentially ambiguous portions of the article.

Update report and a bit of an AI rant

I have been extremely quiet for the last couple of months and I feel bad that I’ve neglected this blog (especially since I promised I wouldn’t). I’ve been extremely busy with work and my AI prototype, and I haven’t really had much time to work on anything at home. To be entirely honest, after staring at Visual Studio for 10+ hours a day, the last thing I want to do when I get home is fire up Visual Studio again.

It was a lot easier when I was in academia; to a large degree academic work is tedious and boring, so I couldn’t wait to fire up VS and mess around with some ideas or whatever. Now I pretty much get to do that at work (and get paid for it). Don’t get me wrong, I’m still working on things outside of work (more on that later), but unfortunately it’s pretty hard to discuss them since they are built in our in-house game engine, and as you can imagine I can’t really post any of that stuff here.

I’ve spent the last 10 months working on Hitman: Absolution (out 20 Nov 2012). I started out on the crowd team and helped to get the base system built. After that I jumped around between systems but firmly stayed in AI land. I now find myself the owner of our navmesh, pathfinding and avoidance systems. The last few months have really opened my eyes to the reality of game AI and the actual problems that need to be solved. The picture painted by hobbyist game dev books, research articles and academic work is extremely misleading, especially for larger projects. I almost want to go back and slap myself for some of the academic nonsense that I wrote in my thesis. There are much bigger, unsolved problems out there than faster pathfinding (or how to build a behavior tree), but you can’t really see them from the outside.

It is these problems that primarily interest me. I find myself not really caring about how blazingly fast my A* is, but rather worrying about the extensibility and scalability of the AI systems. I worry about code-data dependencies more than my decision making data structure. I worry about workflows and designer control more than how to rate a cover node with respect to a threat. I’ve realized that I’ve moved closer to the engine team than the game team, and that is where my primary interest lies.

I’ve always been more about the frameworks than the content, even in my PHP (*shudder*) days. Back then, I built a complex template framework with auto SQL generation and access control systems; I didn’t really focus too much on the content but rather on the system and the overall architecture. I find myself in the same position now: I’m not as interested in making games as I am in building the tech behind games. I guess I always knew that, but it never really hit home until I read a forum thread about an article I had written about higher education and game dev. A childhood friend described me as never really being interested in the local indie game scene and someone that never really built anything. And that was true: I found building little game prototypes boring when compared to screwing around with graphics programming and engine components. So why am I mentioning this? Well, because I’ve noticed a rather disturbing trend where game AI is indistinguishable from gameplay code.

In a lot of games there is no real separation between the AI systems and the gameplay systems. The AI systems are built hand in hand with gameplay code and extended/tweaked as the need arises. Decision making systems are integrated with game-specific knowledge systems, dialog systems and animation networks. At the end of the project, it’s extremely hard, if at all possible, to decouple the AI systems from the game code. It can potentially be a lot more effort to try and untangle the mess than it would be to rewrite the systems for the next project. If you take a look at the major licensable game engines, none of them feature any AI systems past maybe a basic navmesh generator; hell, even the bulk of the game AI middleware is just a navmesh generator, some steering behaviors and, if you’re lucky, some flavor of state machine editor. Considering the large reliance on middleware systems in the industry, AI code can often become the glue code that ties everything together, and so the term AI programmer can actually mean middleware integration programmer. I guess the distinction between gameplay programmers and AI programmers can be quite fuzzy, and I’ve heard of some companies that don’t even have any “AI programmers” on staff, just gameplay programmers.

All of this, in my (possibly naive) opinion, is a waste of effort and can be greatly reduced with some forethought and tech effort. We can, and should, try to generalize and separate certain AI systems away from the game code. These systems should be moved to the game engine layer and be extended and maintained by a core group of tech programmers. Gameplay programmers should just focus on working with designers to pump out awesome behaviors and make a kick-ass game, without having to waste time building debuggers, decision systems, job systems, schedulers and so on.

Now the trick is how to identify and separate out these systems. There is always the danger that you over-generalize a system and thereby make it too rigid to achieve the required results, hurting the game; on the flip side, if you under-generalize, then it’s super hard to reuse anything for another project.

This is exactly the problem I’m working on solving in my spare time. I’m basically focusing on improving AI workflows, performance and debugging tools. I think by now it is obvious that data-oriented programming and parallelism are the way forward, and so I’m focusing on building what I like to call a deferred AI framework, with a dependency-aware asynchronous job scheduler at the core! I’m also super lucky that I have some awesome guys at work that I can bounce my ideas off of. I tend to get excited and run wild (I’ve also come up with some amazingly stupid ideas), and my colleagues have been an amazing sanity check for me (i.e. slapped some sense into me :) ). Anyway, my progress so far is that I’ve built a very basic AI framework in the Glacier 2 engine as well as a generic AI knowledge system, and while I’m not really ready to demo anything yet (I have a to-do list as long as my arm), I can at least start to discuss the overall system architecture.

I’m really hoping to throw out some design ideas over the next couple of weeks and see what people think. There are a couple of problems that I haven’t exactly figured out how to solve, as well as a couple of crazy ideas that might need additional sanity checks.

DirectX 10/11 – Basic Shader Reflection – Automatic Input Layout Creation

This is gonna be a very brief post on a pretty simple but powerful tool for any hobby programmers tinkering with Direct3D. One of the most annoying things, at least for me, was always having to manually create input layouts for each different vertex shader I used in my hobby projects. I have now restarted work on my hobby game engine and have upgraded the renderer to DX11, and in doing so ripped out the DirectX effects framework. I’m actually planning on writing a post on moving from DX10 to DX11, but that’s for another time. Right now, let’s get back to input layouts. If you recall, the input layout basically describes the memory layout of the input to the vertex shader program (i.e. the input HLSL struct). In all my previous tutorials, we had to describe this input layout by hand and then create it using the compiled vertex shader for validation. We used to do this as follows:

const D3D10_INPUT_ELEMENT_DESC vertexInputLayout[] =
{
	{ "ANCHOR", 0, DXGI_FORMAT_R32G32_FLOAT, 0, 0, D3D10_INPUT_PER_VERTEX_DATA, 0 },
	{ "DIMENSIONS", 0, DXGI_FORMAT_R32G32_FLOAT, 0, 8, D3D10_INPUT_PER_VERTEX_DATA, 0 },
	{ "OPACITY", 0, DXGI_FORMAT_R32_FLOAT, 0, 16, D3D10_INPUT_PER_VERTEX_DATA, 0 }
};

const int vertexInputLayoutNumElements = sizeof(vertexInputLayout)/sizeof(vertexInputLayout[0]);

D3D10_PASS_DESC PassDesc;

pTechnique->GetPassByIndex( 0 )->GetDesc( &PassDesc );
if ( FAILED( pD3DDevice->CreateInputLayout( vertexInputLayout,
											vertexInputLayoutNumElements,
											PassDesc.pIAInputSignature,
											PassDesc.IAInputSignatureSize,
											&pVertexLayout ) ) )
{
	return fatalError("Could not create Input Layout!");
}

The above sample is still using the effects framework, but that’s irrelevant. There is another problem with this approach with regards to flexibility: if you wish to swap shaders on the fly, they either all need to use the same vertex shader input layout, or you need to somehow create all your input layouts in advance and then link them with the shaders you wish to load. Well, considering that we use the compiled shader to validate the input layout, the compiled shader must contain all the information necessary for us to create that input layout ourselves. Basically, we’re just going to reverse engineer the validation step to create an input layout based on the shader.

If we don’t use the effects framework, then we have to manually compile our shaders. I’m not going to go into much detail regarding this, as the info is readily available in the SDK tutorials. Once you have compiled your shader, you have the compiled shader blob, which we can then use to create the final vertex shader program as shown below:

if( FAILED( D3DX11CompileFromFile( pFilename, NULL, NULL, pFunctionName, "vs_4_0", shaderFlags, NULL, NULL, &pCompiledShaderBlob, &pErrorBlob, NULL ) ) )
{
	if ( pErrorBlob != NULL )
	{
		PRINT_ERROR_STRING( pErrorBlob->GetBufferPointer() );
		pErrorBlob->Release();
	}
	return false;
}

HRESULT hr = pD3DDevice->CreateVertexShader( pCompiledShaderBlob->GetBufferPointer(), pCompiledShaderBlob->GetBufferSize(), NULL, &shader.pVertexShader);

We can very easily create our input layout using the following function:

HRESULT CreateInputLayoutDescFromVertexShaderSignature( ID3DBlob* pShaderBlob, ID3D11Device* pD3DDevice, ID3D11InputLayout** pInputLayout )
{
	// Reflect shader info
	ID3D11ShaderReflection* pVertexShaderReflection = NULL;
	HRESULT hr = D3DReflect( pShaderBlob->GetBufferPointer(), pShaderBlob->GetBufferSize(), IID_ID3D11ShaderReflection, (void**) &pVertexShaderReflection );
	if ( FAILED( hr ) )
	{
		// Propagate the actual error code (returning S_FALSE here would incorrectly pass a FAILED() check)
		return hr;
	}

	// Get shader info
	D3D11_SHADER_DESC shaderDesc;
	pVertexShaderReflection->GetDesc( &shaderDesc );

	// Read input layout description from shader info
	std::vector<D3D11_INPUT_ELEMENT_DESC> inputLayoutDesc;
	for ( UINT i = 0; i < shaderDesc.InputParameters; i++ )
	{
		D3D11_SIGNATURE_PARAMETER_DESC paramDesc;
		pVertexShaderReflection->GetInputParameterDesc(i, &paramDesc );

		// fill out input element desc
		D3D11_INPUT_ELEMENT_DESC elementDesc;
		elementDesc.SemanticName = paramDesc.SemanticName;
		elementDesc.SemanticIndex = paramDesc.SemanticIndex;
		elementDesc.InputSlot = 0;
		elementDesc.AlignedByteOffset = D3D11_APPEND_ALIGNED_ELEMENT;
		elementDesc.InputSlotClass = D3D11_INPUT_PER_VERTEX_DATA;
		elementDesc.InstanceDataStepRate = 0;	

		// determine DXGI format
		if ( paramDesc.Mask == 1 )
		{
			if ( paramDesc.ComponentType == D3D_REGISTER_COMPONENT_UINT32 ) elementDesc.Format = DXGI_FORMAT_R32_UINT;
			else if ( paramDesc.ComponentType == D3D_REGISTER_COMPONENT_SINT32 ) elementDesc.Format = DXGI_FORMAT_R32_SINT;
			else if ( paramDesc.ComponentType == D3D_REGISTER_COMPONENT_FLOAT32 ) elementDesc.Format = DXGI_FORMAT_R32_FLOAT;
		}
		else if ( paramDesc.Mask <= 3 )
		{
			if ( paramDesc.ComponentType == D3D_REGISTER_COMPONENT_UINT32 ) elementDesc.Format = DXGI_FORMAT_R32G32_UINT;
			else if ( paramDesc.ComponentType == D3D_REGISTER_COMPONENT_SINT32 ) elementDesc.Format = DXGI_FORMAT_R32G32_SINT;
			else if ( paramDesc.ComponentType == D3D_REGISTER_COMPONENT_FLOAT32 ) elementDesc.Format = DXGI_FORMAT_R32G32_FLOAT;
		}
		else if ( paramDesc.Mask <= 7 )
		{
			if ( paramDesc.ComponentType == D3D_REGISTER_COMPONENT_UINT32 ) elementDesc.Format = DXGI_FORMAT_R32G32B32_UINT;
			else if ( paramDesc.ComponentType == D3D_REGISTER_COMPONENT_SINT32 ) elementDesc.Format = DXGI_FORMAT_R32G32B32_SINT;
			else if ( paramDesc.ComponentType == D3D_REGISTER_COMPONENT_FLOAT32 ) elementDesc.Format = DXGI_FORMAT_R32G32B32_FLOAT;
		}
		else if ( paramDesc.Mask <= 15 )
		{
			if ( paramDesc.ComponentType == D3D_REGISTER_COMPONENT_UINT32 ) elementDesc.Format = DXGI_FORMAT_R32G32B32A32_UINT;
			else if ( paramDesc.ComponentType == D3D_REGISTER_COMPONENT_SINT32 ) elementDesc.Format = DXGI_FORMAT_R32G32B32A32_SINT;
			else if ( paramDesc.ComponentType == D3D_REGISTER_COMPONENT_FLOAT32 ) elementDesc.Format = DXGI_FORMAT_R32G32B32A32_FLOAT;
		}

		//save element desc
		inputLayoutDesc.push_back(elementDesc);
	}		

	// Try to create Input Layout
	hr = pD3DDevice->CreateInputLayout( &inputLayoutDesc[0], (UINT)inputLayoutDesc.size(), pShaderBlob->GetBufferPointer(), pShaderBlob->GetBufferSize(), pInputLayout );

	//Free allocation shader reflection memory
	pVertexShaderReflection->Release();
	return hr;
}

What this function does is peek inside the compiled shader using shader reflection. Shader reflection is simply an interface for reading all the shader details at runtime. We first reflect the shader information using the D3DReflect function and then get the description of the shader via the reflection interface. From this description we can see how many input elements the vertex shader takes, and for each one we can get its description. Using this data we can fill out our input layout description structure. The above function is very simple and is not robust enough to handle any shader thrown at it. I quickly slapped it together just to get things up and running, with the intention of extending it as needed in the future. I just figured it’s something that, at least to my knowledge, isn’t readily known or mentioned, and pointing it out would be useful for any hobbyist programmers :P
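
For completeness, here is how you would call it with the compiled shader blob from the earlier snippet (following the same error handling style used above):

ID3D11InputLayout* pInputLayout = NULL;
if ( FAILED( CreateInputLayoutDescFromVertexShaderSignature( pCompiledShaderBlob, pD3DDevice, &pInputLayout ) ) )
{
	return fatalError( "Could not create Input Layout!" );
}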

Oh, by the way, before I forget: to actually use the shader reflection functions you need to include the D3DCompiler.h header and link against the D3DCompiler.lib static library.

A Guide to Higher-Education for Aspiring Game Programmers

DISCLAIMER: I wrote this piece for a South African game development magazine prior to leaving South Africa. It has been some time since I sent the piece in for copy-editing and I have heard nothing back regarding it, so I’m simply going to post it here. The topic is a matter of some debate and the below article is simply my personal opinion! It was mentioned during the initial review by the game development magazine that certain sections of this article are rather inflammatory, and we had agreed to remove them from the final piece. Seeing as this is my personal blog, I figured I might as well post the entire unedited version.

Once again, this is simply a personal opinion and should be taken as such!

Introduction

When asked to write an article for a local game dev magazine, I was initially apprehensive, as writing is not exactly a strong point of mine. It was only once I realized that the article might actually benefit some prospective game programmers that I felt it was well worth the effort. That being said, trying to find a topic for the article proved quite challenging. My initial plan was to write a brief article describing the basics of vehicle steering and waypoint following. That idea got scrapped once I built a simple test bed and realized that I don’t know nearly enough about the physics of car motion to write an in-depth article on the topic.

While I was still deciding on a topic, huge discussions (read: arguments) started popping up in various game dev communities I followed, all based around the same theme: education.

There seems to be quite a lot of debate on what prospective game programmers should study, as well as what current college curricula should include. The most common questions still being asked by prospective game programmers are: “What do I need to study to become a game programmer?” and “What do I need to do to help me land a game industry job as a programmer?”. It is these questions that I wish to address in this article.

A brief update…

I’ve been rather quiet the last 2 months. It’s taken a while to sort out a lot of things here and I’ve been quite busy with work, but I will keep this blog going, I promise. :) I finally managed to get a personal computer around a week ago, and by PC I mean the most basic components needed just to get it up and running. It’s gonna be a while before I complete the build and even longer still before I can watercool it like my last rig. As I mentioned, I’ve been working at IO Interactive as a game programmer for the last two months. I’ve been working on Hitman: Absolution (the upcoming Hitman game that blew everyone away at E3 – check out the trailer below).

I can’t really talk about what I’ve been doing, but what I can say is that the stuff I’ve been lucky enough to contribute to is amazing. From what I’ve seen, Hitman: Absolution is gonna rock. I’ve ended up working with some really cool guys and I’ve learnt so much in the last two months. I’ve also re-learnt and remembered quite a ton of info that I had forgotten. I can’t believe how rusty I got over the course of completing my thesis. I really feel that I need to keep pushing myself to improve and learn. Speaking of my thesis, I’ve gotten back the initial feedback from my examiner and I have a few additions and corrections to make before I’ll be able to pass. I also have to finish writing the journal paper I started before I left South Africa. In addition to work, I’m gonna be quite busy the next few weeks, so don’t expect any major updates. After that I plan to just screw around with some hobby projects and stupid ideas for a bit and, hopefully, turn one of those into a new blog post.

Personal life wise, Denmark is pretty cool, but at this point I’m rather homesick, which I guess is only natural. Everything here is quite different, and even though I’ve been here 2 months, it isn’t even close to feeling like home yet. The internet is quite amazing here though :P It’s been a huge culture shock for me, as a lot of things that I took for granted aren’t available or popular locally. Things like skate brands, energy drinks or even silly things like white bread are quite hard to find. Though there is more to life than clothes and food, right? :P I’ve also found myself riding a bicycle to work (around 6 km each day), and combined with karate twice a week, I don’t think I’ve ever had this much exercise in my life before. Damn it, I miss my car, hahaha! Anyway, it’s midnight right now and I need to get to bed since I need to be up early tomorrow.

A wannabe game developer no more

So after all these years of hard work, I’ve finally managed to get into the game industry, and into a well-known AAA studio to boot. I’ve accepted an offer from IO Interactive (Hitman, Kane & Lynch), who are based in Copenhagen, Denmark. I will be moving to Denmark and starting at IOI in August!

This whole thing still feels a bit like a dream. I can’t believe that I’ve finally managed to achieve what honestly felt like an impossible goal when I set it all those years ago. I guess it goes to show that with hard work anything is possible. As excited as I am, I can’t help but feel a bit nervous; I’m a little fish getting thrown into the deep end. Then again, I don’t think I’d want it any other way. I’ve been dying to improve and to learn, and I’m finally going to be working with people that I can really learn a lot from; I honestly can’t wait for that!!! To everyone that’s helped me along the way, you know who you are: thank you so much! I honestly don’t think I’d have gotten this far without your encouragement and support.

On the downside, I don’t know how active this blog will be once I start working, but I will do my best to try and keep the content coming. Game dev blogs have been a huge help to me over the years and I would love to be able to pay the favor forward!

Pathfinding Thesis Complete

WOOT! My thesis, the bane of my existence for the last 2 years, is finally done! It is basically a review of the video game pathfinding field, as well as a presentation of a novel grid map search technique: the spatial grid A*. The version linked below is the final draft that is being submitted to my faculty.

A Robust Explosion Hit Check Technique

In a recent technical interview I was asked the following question: “If an explosion occurs, e.g. from a grenade, how would you determine which characters in the game are affected?” This was a question that I couldn’t answer at the time, which annoyed the hell out of me, and it’s been sitting in the back of my head for the last few weeks. So I decided to discuss it (and my proposed solution) on my blog as an “academic” exercise. I would really appreciate any feedback or comments on this discussion. This is far from being a solved problem for me, and while my solution may potentially work quite well, there may be a simpler technique available which I don’t know about.

Coming back to the question: well, the naive answer is to simply do a collision detection check between the explosion’s area of effect/blast radius (a sphere) and any nearby characters’ bounding boxes.
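
For reference, that naive check is a standard sphere vs. AABB overlap test; a minimal sketch is given below (clamping the blast center to the box to find the closest point). The types and names are illustrative.

struct AABB
{
	float min[3];
	float max[3];
};

// Returns true if the blast sphere overlaps the character's bounding box
bool SphereIntersectsAABB( const float center[3], float radius, const AABB& box )
{
	// Accumulate the squared distance from the sphere center to the closest point on the box
	float distSq = 0.0f;
	for ( int i = 0; i < 3; i++ )
	{
		if ( center[i] < box.min[i] )
		{
			float const d = box.min[i] - center[i];
			distSq += d * d;
		}
		else if ( center[i] > box.max[i] )
		{
			float const d = center[i] - box.max[i];
			distSq += d * d;
		}
	}
	return distSq <= radius * radius;
}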

DirectX10 Tutorial 10: Shadow Mapping Part 2

This is the second part of my shadow mapping tutorial (find part 1 here). I am going to discuss some of the issues with the shadow mapping technique, as well as some basic solutions to those problems. I’m also going to attach a demo application in which you can play around with some of the shadow mapping variables and techniques. I do have to mention that the demo was a very quickly programmed prototype. It serves its purpose as a demo application, but I don’t advise cutting and pasting the code into your own projects, as a lot of the techniques used aren’t exactly optimal. Not to mention that I did take a few shortcuts here and there.

DirectX10 Tutorial 10: Shadow Mapping Part 1

I’ve had some downtime lately, and seeing as I wrote a basic shadow mapping demo, I figured I’d write a short tutorial on the theory and implementation of shadow mapping. Shadow mapping is one of those topics that tends to get explained in an overly complicated manner when in fact the concept is rather simple. It is expected that you understand the basics of lighting before attempting this tutorial; if you want to learn more about some basic lighting models, please read my lighting tutorial. The figure below shows a sample scene with a single light illuminating a cube.

How shadows are formed
