DirectX10 Tutorial 7: Viewports

This is going to be a very brief tutorial; the idea for it came from a comment on my very first tutorial about using multiple viewports. I assumed that using multiple viewports would be a simple matter of calling a SetViewport method just like in DX9, but it isn't. I tried finding some information online, but there is almost nothing available, so I had to figure it out on my own. There are two methods to get multiple viewports working. The first requires a state change when selecting each viewport, but I don't think that cost is prohibitive since you would probably only switch viewports once per viewport during scene rendering. The second method involves using a geometry shader to specify which viewport to use during the clipping/screen-mapping stage of the pipeline.

What is a viewport?

Well, let's first discuss what a viewport actually is. If you Google a bit you'll find almost no information regarding viewports or what they actually are (and there is very little in the DX documentation). A viewport is a rectangle that defines the area of the frame buffer that you are rendering to. Viewports do have depth values, which affect the projected z range of any primitives in the viewport, but this is only used in very advanced cases, so you should always set the near depth to 0 and the far depth to 1.

If we imagine a car game in which we have a rear-view mirror, a simple way to draw the mirror's contents is to set the viewport to the mirror area, rotate the camera to face backwards and render the scene. Another common use in games is when you see another player's viewpoint within your HUD (Ghost Recon does this quite often); once again, all that is required is to set the viewport to the area of the final image you want to render to, and then render the scene from the other player's viewpoint.
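To make the screen-mapping role concrete, here is a minimal sketch of the viewport transform, using a plain struct whose fields mirror `D3D10_VIEWPORT` (the struct and function names here are my own for illustration, not part of the API):

```cpp
#include <cassert>

// Plain stand-in with the same fields as D3D10_VIEWPORT.
struct Viewport {
    float topLeftX, topLeftY; // top-left corner of the rectangle, in pixels
    float width, height;      // size of the rectangle, in pixels
    float minDepth, maxDepth; // almost always 0 and 1
};

struct Vec3 { float x, y, z; };

// Map a clip-space point (already divided by w, so x/y in [-1,1] and
// z in [0,1] for D3D) onto the viewport rectangle.
Vec3 viewportTransform(const Viewport& vp, const Vec3& ndc) {
    Vec3 s;
    s.x = vp.topLeftX + (ndc.x + 1.0f) * 0.5f * vp.width;
    s.y = vp.topLeftY + (1.0f - ndc.y) * 0.5f * vp.height; // y flips: +1 is the top
    s.z = vp.minDepth + ndc.z * (vp.maxDepth - vp.minDepth);
    return s;
}
```

With a full-screen 800x600 viewport the NDC origin lands at pixel (400, 300); a rear-view-mirror viewport just changes `topLeftX/Y` and `width/height`, so the same geometry gets mapped into the mirror rectangle instead.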

ColladaX Progress – Nearing Pre-Alpha Stage

I’ve made some progress on my Collada loader; it can now fully load geometry from Collada DAE files. I don’t want to release the code until I have proper error checking in place and have materials and texture loading working as well…

Currently the loader takes a Collada DAE (digital asset exchange) file and loads all the geometry data sources from the file. Once that is complete, you are given the option of combining the defined objects into a DirectX-ready struct (see the figure below) which you can plug straight into the D3DxMesh class; it even provides you with an input layout struct for each object type. :)

Progress on ColladaX: irrXML issues

Well, I’ve made progress with my Collada loader, using irrXML (which is by far the best and simplest SAX-style XML parser I’ve found). I have loaded in all object geometries and submeshes and reformatted them into DX data structures. What my loader does is parse a Collada file and return a struct for each object defined in the file; the struct contains an input layout and vertex/index/attribute buffers ready for use.

Now, I’m having a fair amount of trouble with the irrXML library though. Firstly, I’ve found a few bugs: it has no way of checking whether a file exists or not. The documentation states that upon failure to open the file the create method will return null, but it doesn’t do that; it creates the irrXML reader anyway, whether the file exists or not. If you then try to call the read() function, that fails, so I can hack in a basic file check that way, or do a separate fopen() check prior to creating the irrXML object. Both methods are ugly hacks (and with the first method I can’t tell whether the file is missing or simply empty).
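A standalone pre-check along these lines avoids relying on irrXML's create call entirely, and can also distinguish a missing file from an empty one (the helper name and enum are my own, pure standard C++):

```cpp
#include <fstream>
#include <string>

enum class FileStatus { Missing, Empty, Ok };

// Check a file before handing it to createIrrXMLReader(), since the
// reader object gets created even when the file cannot be opened.
FileStatus checkXmlFile(const std::string& path) {
    std::ifstream file(path, std::ios::binary);
    if (!file.is_open())
        return FileStatus::Missing;
    // peek() hits end-of-file immediately on a zero-byte file
    return (file.peek() == std::ifstream::traits_type::eof())
        ? FileStatus::Empty
        : FileStatus::Ok;
}
```

Only when this returns `Ok` would you go ahead and create the irrXML reader.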

Then the parser itself has some pretty quirky behavior in that it reads tabs and whitespace as nodes. For example, take this test XML file:

You would expect the parser to read only three nodes, right? Unfortunately it doesn’t; it treats runs of tabs and whitespace as nodes too. So the following occurs:
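Since irrXML reports these formatting runs as text nodes, a simple workaround is to skip any text node whose data is whitespace only. The helper below is my own sketch (pure C++, no irrXML dependency):

```cpp
// True if the node text consists solely of spaces, tabs, carriage
// returns and newlines -- i.e. the formatting "nodes" the parser
// emits between real elements.
bool isIgnorableWhitespace(const char* text) {
    if (!text) return true;
    for (const char* p = text; *p; ++p) {
        if (*p != ' ' && *p != '\t' && *p != '\r' && *p != '\n')
            return false;
    }
    return true;
}
```

In the read loop you would then `continue` whenever the node type is `EXN_TEXT` and `isIgnorableWhitespace(xml->getNodeData())` holds, leaving only the three real nodes.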

DirectX10 Tutorial 6: Blending and Alpha Blending

It’s been ages since my last DirectX 10 tutorial and I apologize; I’ve been buried under a ton of work and haven’t had much free time lately. This is going to be a very short tutorial on pretty much the final stage in the rendering pipeline: color blending. If you recall, each frame displayed on the screen is simply a bitmap that gets displayed and updated multiple times a second. This bitmap is called the frame buffer. Technically the frame buffer is the image we see at any given point, while the back buffer (assuming you are double buffering) is what you actually draw to (referred to as your render target); only once you finish drawing do you display the back buffer to the screen, by swapping the frame buffer and the back buffer using the Present method of the DX10 swap chain class.

Now think back to the depth testing tutorial, where we displayed a cube and had to enable depth testing for it to render properly. A cube is made up of 6 sides with 2 triangles per side, so that is 12 triangles we have to draw for each cube. The graphics API draws one triangle at a time to the back buffer, and uses the depth buffer to check whether it may overwrite a pixel in the back buffer, i.e. whether the new pixel to be drawn is in front of the one already there. If this test passes then the API is allowed to overwrite that pixel’s value, but it’s not as simple as that!
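The standard alpha blend the API applies at this stage, with the source colour weighted by its alpha and the destination by one minus alpha, can be sketched per channel as below. This mirrors the common SRC_ALPHA / INV_SRC_ALPHA configuration rather than any single mandatory mode:

```cpp
// Blend one colour channel: src is the incoming fragment's value,
// dst is the value already in the back buffer, srcAlpha in [0,1].
float alphaBlend(float src, float dst, float srcAlpha) {
    return src * srcAlpha + dst * (1.0f - srcAlpha);
}
```

At `srcAlpha = 1` the new fragment fully replaces the buffer value, and at `0` the buffer is untouched, which is exactly why fully opaque geometry can skip blending entirely.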

3D DirectX10 Free Look Camera (Timer based)


Okay, so I promised I’d write a tutorial on writing a simple free look, vector-based camera. This tutorial doesn’t only apply to DirectX10 but to pretty much any graphics API. We want to keep things simple initially, so the simplest possible camera we can implement short of a first-person-style camera is a free look camera (without roll): basically only two degrees of freedom, left/right and up/down. We are also going to implement some basic movement controls: forwards/backwards and strafe left/right.
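Those two degrees of freedom boil down to a yaw and a pitch angle from which the camera's forward vector is rebuilt each frame. A minimal sketch, under my own convention (yaw about the world Y axis, pitch up/down, looking down +Z when both angles are zero):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Rebuild the look direction from yaw/pitch in radians. With both
// angles at zero the camera looks down +Z; in a real camera the pitch
// would be clamped elsewhere to avoid flipping over the poles.
Vec3 forwardFromYawPitch(float yaw, float pitch) {
    Vec3 f;
    f.x = std::cos(pitch) * std::sin(yaw);
    f.y = std::sin(pitch);
    f.z = std::cos(pitch) * std::cos(yaw);
    return f;
}
```

Movement then becomes `position += forward * speed * dt` for forwards/backwards, and the strafe axis is just the cross product of the forward vector with the world up vector.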

My attempt at a DX10 game engine… Name Ideas

So I’ve started developing the AI test bed for my Masters experiments, and since I wanted something that looked nice, I basically started developing a game engine without knowing it 😛

I’ve been working on it for around a week now, and have a very basic renderer and a basic camera system going… The next step will be developing the scene graph and spatial data structures needed for rendering. I’ve been doing so much reading on scene graphs and so on that it’s coming out of my ears, and yet I’m not any closer to a good solution. I could probably do my entire Masters on scene graphs and spatial sorting.

Anyway, I’m going to discontinue my DX10 tutorials, since all the future tutorials will be based off of my engine; instead I’m going to start a new series of tutorials on building a very basic DX10 game engine.

The number of files in the project is growing, and I need to come up with a nice name so I can start encapsulating the classes in namespaces and have nice uniform naming across the components. Since the engine is going to be super, super simple, I was thinking of using one of the following as the engine name:

  • Cimplicity
  • basikEngine
  • CimplEngine
  • SimplEngine
  • engineBasix

Any other suggestions?

DirectX10 Tutorial 5: Basic Meshes

Since my car has been broken for the last two days, I’ve taken time off work and have been working on my Masters degree. Part of my Masters involves building a small “game engine” for AI testing, so I’ve been doing some more DX10 work anyway, and it’s convenient for me to quickly slap together a few more tutorials.

I covered the basics of indexed buffers and depth testing in the last tutorial; in this short tut I’m going to cover the basics of DirectX meshes. A mesh is a data structure that contains all the vertex and index buffers needed to draw an object. It’s a neater method of drawing objects, as we’ll see.

DirectX10 Tutorial 4: Indexed Buffers and Depth Testing

Okay, so it’s been a while since my last tutorial, and I apologize for that. We dealt with textures in the last tutorial, and many of you might be wondering why I handled that so early. Well, mainly because D3D 10 isn’t exactly an API designed for beginners: a critical feature required for any scene rendering (depth testing, or z-buffering) is done in D3D by means of a depth stencil texture, so covering textures before depth testing makes sense in this case. Remember guys, I’m not going to spoon feed you; these tutorials expect you to read the SDK docs for details on the variable types and the methods. These tutorials are just to give you a running start.

Before I get to depth testing, let’s draw something a little more complicated than a quad: how about a cube? Using the same method as in tutorial 3, the code to draw a six-sided cube is as follows:
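The payoff of indexing on a cube is easy to count: 8 unique corner vertices instead of 36, with a 36-entry index buffer describing the 12 triangles. A sketch with positions only (the winding order here is illustrative and not checked against any particular culling mode):

```cpp
#include <vector>

struct Vertex { float x, y, z; };

// The 8 corners of a cube centred on the origin.
std::vector<Vertex> makeCubeVertices() {
    return {
        {-1,-1,-1}, { 1,-1,-1}, { 1, 1,-1}, {-1, 1,-1}, // back face corners
        {-1,-1, 1}, { 1,-1, 1}, { 1, 1, 1}, {-1, 1, 1}, // front face corners
    };
}

// 6 sides x 2 triangles x 3 indices = 36 entries into the vertex buffer.
std::vector<unsigned> makeCubeIndices() {
    return {
        0,1,2, 0,2,3,  4,6,5, 4,7,6,  // back, front
        0,3,7, 0,7,4,  1,5,6, 1,6,2,  // left, right
        3,2,6, 3,6,7,  0,4,5, 0,5,1,  // top, bottom
    };
}
```

Without an index buffer each of those 12 triangles would need its own 3 vertices, duplicating every corner several times over.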

DirectX10 Tutorial 3: Textures

So it’s been some time since the last tutorial and I apologize for that; I’ve been busy wrapping up my exams for my second degree and finishing off a mini-thesis for one of my subjects. Now that it’s all over with, I’ve sat down and done a small tutorial on DX10 texturing.

A lot of other tutorials leave texturing for later on, but I’m going to do it now because it’s so simple, and it further illustrates the point of shader programs and what role they play.

DirectX10 Tutorial 2: Basic Primitive Rendering

Okay I managed to find some time and wrote a very basic second tutorial that introduces the main concepts behind primitive rendering in DX10. This tutorial builds upon the base code of tutorial 1 so check that out if you haven’t already.

Also I need to mention that I’m not writing these tutorials for complete beginners, I expect you to at least have a very basic understanding of graphics programming and some of the terminology involved. I’m not going to go into a lot of detail regarding terms like culling, rasterizing, fragments etc.

One last aside before the tutorial: what makes DX10 different from DX9 and OpenGL is the removal of the fixed-function pipeline. Now what the hell does that all mean? Well, DirectX 9 and OpenGL had default ways of handling vertices, colors, texture co-ordinates etc. You’d pass through a vertex and a color and the API would know what to do. It also handled lighting and other effects. In DX10 these defaults were removed and the core API has been simplified and reduced. This gives you full control over each pipeline stage and removes past limitations on things like the number of light sources, but it has a small downside: the code complexity has increased a little.

If we take basic lighting for example, in the past a hobbyist could enable lighting with a few simple function calls, get a satisfactory result and call it a day. Now, for the same effect, the hobbyist has to write all the pixel and vertex shaders necessary and use the Phong (or another) reflection model to manually calculate the effect of lighting on the scene.
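As a taste of what that hand-written lighting involves, the diffuse term common to Phong-style models is just a clamped dot product between the surface normal and the direction to the light, both assumed to be normalized here:

```cpp
#include <algorithm>

struct Vec3 { float x, y, z; };

float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Lambertian diffuse factor: full brightness when the normal points
// straight at the light, fading to zero as the surface turns away.
float diffuse(const Vec3& normal, const Vec3& toLight) {
    return std::max(0.0f, dot(normal, toLight));
}
```

In a DX10 pixel shader this ends up as a one-liner like `saturate(dot(N, L))` multiplied by the light and material colours; the point is that nothing in the API computes it for you any more.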