This tutorial is part of a Collection: 03. DirectX 11 - Braynzar Soft Tutorials
28. Skeletal Animation (based on the MD5 format)

Here is the second part of the lesson: animating the MD5 model. The MD5 format uses a skeletal system (more specifically, joints) to do its animation, so we will learn how to loop through the animation stored in the ".md5anim" file and apply it to our model. Skeletal systems are nice because they take up less memory than storing keyframe animations, where it's basically a new model for every frame of animation. The joint system (which we will be using) just stores the orientation and position of the bones at each frame. Bone systems are also handy when you want to do "rag-doll" physics! After this lesson, you should be able to load skeletal animations for any model, and even create your own animations or a rag-doll effect at runtime!

##Introduction## This is the second of our MD5 lessons. In the last lesson we covered loading the MD5 model and using the skeletal system to compute the vertex positions and normals. In this lesson, we will learn how to load the MD5 animation file (".md5anim") and use the data stored in that file to animate our model. I'm aware that my modeling skills are poor, but I'm planning on improving them soon. I think after this lesson I'll get a real server and a real domain name, haha, then start to work on the other parts of this site (forum, audio, vision).

##Animating the Model (Using the Joint Structure)## We briefly covered skeletal animation in the last lesson, so now we will focus on how to use the skeletal structure (joints) to animate a model. We can take this in steps. First we load the animation file (".md5anim"), which is covered below. After that, we need to compute the skeleton based on the current time in the animation. We do this by "interpolating" between two skeletons based on the current time between two frames (the frame that has already passed and the frame that is coming up). After we create the interpolated skeleton, we recompute the vertex positions and normals using the weights. Finally, once we have our vertex positions and normals (we can find tangents and bitangents the same way as normals), we update our vertex buffer. That makes four steps. The first step (loading the animation) is covered shortly, so we can skip ahead to the second step: creating the interpolated skeleton.

**Creating the Interpolated Skeleton** When we load the animation, we create a skeleton for every frame of animation. We can then "interpolate" between two of these frames (e.g. frame 1 and frame 2) to get the current frame of animation.
We could of course just use the frame skeletons themselves without creating an interpolated skeleton, but the animation might look "choppy", and if there is ever a time when time in the game slows down (like in The Matrix, where you can see the bullets flying), the animation would look even choppier. The animation file also says how many frames per second the animation should run at; the model we are loading uses 30 frames per second. We can multiply the current animation time (the time since the animation started) by the frame rate to get the current frame we are on. This calculation produces a floating-point number, for example "5.3667". In this example we are currently on frame "5", so we can use the function floorf() to round down to the nearest whole number, which is "5" in this case. That is what we store in frame0, the first of the two frame skeletons to interpolate. We get the second frame by simply adding "1" to frame0, which gives us frame1, or "6" in the current example. Now we have both frame skeletons we will be interpolating, but when interpolating two things, we need a value between 0 and 1 ("0" being all frame0, "0.5" being half frame0 and half frame1, and "1" being completely frame1) to know how much of each to use. What value can we use as an interpolation factor? Probably a pretty easy question: we use the fractional part of the value we computed above, which in the current example is "0.3667" (found by subtracting frame0 from that value). Now let's do the interpolating and find the actual interpolated skeleton. This is actually pretty easy: we loop through each joint of the model (the animation uses the same joints, just at different positions and orientations) and interpolate their positions and orientations.
To update the positions, we can use the following equation:

interpolatedJoint.position = joint0.pos + (interpolation * (joint1.pos - joint0.pos))

To find the interpolated orientation between the two frames' joints, we use a technique called "slerp", or spherical linear interpolation. Conveniently, there is an XNA Math function that does this for us, so we can get the interpolated orientation with the following line:

interpolatedJoint.orientation = XMQuaternionSlerp(joint0.orientation, joint1.orientation, interpolation)

We do the above for every joint, and the end result is our interpolated skeleton.

**Updating the Vertex Positions and Normals** The next thing we need to do is update our vertices and normals using the interpolated skeleton. We do this basically the same way we got our vertex positions in the last lesson. We loop through each vertex, and in that loop, we loop through each of the vertex's weights. We first rotate the weight's position around the joint using the joint's orientation, with the following equation:

weightPos = XMQuaternionMultiply(XMQuaternionMultiply(jointOrientation, weightPos), jointOrientationConjugate)

This gives us the weight's position in joint space. We now need to move the weight to the joint's position in model space, which we do simply by adding the joint's position to the weight's position. Finally, we multiply the weight's position by the weight's bias factor and add the result to the vertex's final position. We will also be calculating the normal. This hasn't been explained yet, but we will be using the weight's normal (which we get from the updated function that loads the .md5mesh file). We rotate the weight's normal the same way we rotated the weight's position, using the joint's orientation. Since normals do not have a position, we do not need to add the joint's position to the normal. Then we multiply the weight's normal by the weight's bias factor and add it to the vertex's final normal.
**Updating Direct3D's Buffers** The last thing we have to do is update the vertex buffer. There are three ways to do this (one for each type of updatable buffer: D3D11_USAGE_DEFAULT together with D3D11_USAGE_STAGING, D3D11_USAGE_DEFAULT on its own, or D3D11_USAGE_DYNAMIC. A D3D11_USAGE_IMMUTABLE buffer cannot be updated at all).

The first way is to use a D3D11_USAGE_DEFAULT buffer together with a D3D11_USAGE_STAGING buffer. Staging buffers are used to transfer data between the CPU and the buffer that is sent to the GPU, since the CPU cannot access a default buffer directly. To update the buffer, you first fill the staging buffer using ID3D11DeviceContext::Map and ID3D11DeviceContext::Unmap, then copy the contents of the staging buffer to the default buffer using ID3D11DeviceContext::CopyResource. Map fills in a D3D11_MAPPED_SUBRESOURCE structure, whose pData member is a pointer to the start of the data in the buffer. We can then use the memcpy() function to store the vertices (or whatever data you want) in the buffer. Note that a staging buffer is mapped with D3D11_MAP_WRITE (D3D11_MAP_WRITE_DISCARD is only for dynamic resources). We can do it something like this (d3d11DevCon being our ID3D11DeviceContext pointer):

D3D11_MAPPED_SUBRESOURCE mappedVertBuff;
d3d11DevCon->Map(stagingVertexBuffer, 0, D3D11_MAP_WRITE, 0, &mappedVertBuff);
memcpy(mappedVertBuff.pData, &vertices[0], (sizeof(Vertex) * vertices.size()));
d3d11DevCon->Unmap(stagingVertexBuffer, 0);
d3d11DevCon->CopyResource(defaultVertexBuffer, stagingVertexBuffer);

The second way is to update the default buffer directly, without a staging buffer. We only need one line to do this, and it uses the ID3D11DeviceContext::UpdateSubresource method. You can update a default buffer with the following line:

d3d11DevCon->UpdateSubresource( defaultVertexBuffer, 0, NULL, &vertices[0], 0, 0 );

The last way is for a buffer created with D3D11_USAGE_DYNAMIC usage. This is the way we will be updating our buffers, since it's made for fast updating (although it's read a little more slowly by the GPU because of it).
We can update a dynamic buffer like we did the staging buffer, using Map and Unmap on our device context (here called d3d11DevCon), this time with D3D11_MAP_WRITE_DISCARD. The following is an example:

D3D11_MAPPED_SUBRESOURCE mappedVertBuff;
d3d11DevCon->Map(dynamicVertexBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mappedVertBuff);
memcpy(mappedVertBuff.pData, &vertices[0], (sizeof(Vertex) * vertices.size()));
d3d11DevCon->Unmap(dynamicVertexBuffer, 0);

The first two approaches, using staging and default buffers, update much more slowly than a dynamic buffer, so you should only use them if the buffer is updated LESS than once per frame. Default buffers are read by the GPU faster but updated more slowly; dynamic buffers are read by the GPU more slowly but updated much faster. You can find more information on the buffer usage types in the Direct3D 11 documentation.

##The .md5anim Format## The .md5anim format is sectioned into basically five parts: the header, hierarchy, bounds, baseframe, and frames. The header holds information about the file and animation, like frames per second, file version, number of joints, total number of frames, etc. The hierarchy is a list of the joints used in the model. The joints in the hierarchy section must match the .md5mesh file in number and in parent-child bindings. After the hierarchy is a section called the bounds. This section defines an axis-aligned bounding box (AABB) for each frame of the mesh. This AABB can be used for quick-and-dirty calculations such as collision detection or picking. After the bounds is the baseframe, which stores the position and orientation of each joint in its default pose. All frames are built off this baseframe, or base skeleton. Finally we have the frames. There is one section per frame, and each section contains a list of floating-point values which describe how the joints are to be moved and rotated.

**"MD5Version"** Following this string is a number describing the version of the file.
Our loader was designed specifically for version "10" (although I decided to skip the actual check in the loader). You can find information about other versions, although every MD5 model I've downloaded (which, I won't lie, is not many) has been version 10.

MD5Version 10

**"commandline"** This line contains something that we will not have to worry about ;)

commandline ""

**"numFrames"** This is the number of frames in our animation. This lesson's model uses 46 frames of animation, so there will be 46 frame sections in the file.

numFrames 46

**"numJoints"** This is the number of joints in our model and in the hierarchy section. This number should match the one in the .md5mesh file.

numJoints 31

**"frameRate"** The frame rate is how many frames per second the animation should run at. We take this into account in our program to keep the animation smooth and running at the constant speed the file intended.

frameRate 30

**"numAnimatedComponents"** This is the number of components, or floating-point numbers, in each of the frame sections. Each of these components will replace one of the components of a base-frame joint's position or orientation. This will be described later.

numAnimatedComponents 186

**"hierarchy"** This is the start of the joint descriptions. The joint descriptions start on the next line (after "hierarchy {") and continue until a line containing a closing bracket ("}") is reached. The number of joints, and the parent index and name of each joint, must match those in the .md5mesh file. If they didn't match, the animation would have very unexpected results. Each line after "hierarchy {" is a new joint. Each of these lines starts with a string inside quotation marks, which is the name of the joint. Following the name of the joint is the parent index. A parent index of "-1" means this joint is at the top of the hierarchy and has no parent.
The next number is a flag which describes which parts of the joint (the position's and orientation's x, y and z) will be updated in the frames sections later. The last value is the start index: the index into the frame data at which to start updating this specific joint. As you can see in the piece of code below, "Bip01"'s start index is "0", which means it will be updated starting with the very first value in the frame data. The flags define how to update the joint, and how many values from the start index onward to use from the frame data (between 1 and 6: up to 3 for position and 3 for orientation).

hierarchy {
"Bip01" -1 63 0
...
}

**"bounds"** These are the bounds: an axis-aligned bounding box for each of the frames. Each line after "bounds" is a frame's AABB, which consists of a min and max point describing the smallest and largest values on each of the x, y and z axes. The first three floats inside the parentheses describe the min point, while the next three describe the max point.

( -30.4137325286 -23.4689254760 -42.4965248107 ) ( 35.7306137084 6.3094730377 48.7827911376 )

**"baseframe"** This is the default position of each joint in the animation. This is not necessarily the same as the bind pose described in the .md5mesh. Each frame builds its frame skeleton from this baseframe, since not every frame describes every joint's position and rotation. Because of this, it is possible that the baseframe is filled with some or even all zeros, in the case that some or all of the joints' positions or orientations are updated every frame. Each line after "baseframe" describes the corresponding joint's position and orientation, where the first three values on the line are the position and the last three are the orientation.

( 0.5196304321 1.7362534999 4.6482505798 ) ( 0.0000000000 0.0000000000 0.7071063041 )

**"frame"** The last section to talk about in the .md5anim file is the frames section.
Each frame of animation has its own section, and each section is a list of floats. The number of floats is defined by "numAnimatedComponents" in the header. Each float will replace one of the x, y, or z values of the position or orientation of a baseframe joint. Which joint, and which component of the joint (x, y, or z of the position or orientation), is decided by the joint's flags and start index.

0.5196304321 1.7362534999 4.6482505798 0.0000000000 0.0000000000 0.7071063041

##Animating the Normals## Computing the normals for every vertex can be pretty slow, so we have a way around doing it for every frame of the animation. We compute the normals in joint space (for each weight) when we load the original model from the .md5mesh file (the loading function has been updated to do this in this lesson). Once we have the normals in joint space, we can easily rotate them each frame depending on the joint's orientation, exactly the same way we rotate the weight's position, except that we don't even have to reposition the normal. To compute the normals in joint space, we first compute the normals for each vertex like we normally would for the model. We then take each vertex normal and rotate it by the inverse of the joint's orientation, going through each of the vertex's weights and doing this computation with the joint that each weight is bound to.

##Right/Left Handed Coordinate Systems## You will probably notice that I have swapped the "z" and "y" values when loading the .md5mesh and .md5anim. This is because the file was exported from a right-handed coordinate system, while DirectX uses a left-handed coordinate system. If your model happens to be in a left-handed coordinate system already, you can just un-swap the y and z values when loading the data.

##New Structures## We have a couple of new structures here.
The first one you can see is for the bounding box, loaded from the bounds section of the .md5anim file. The next structure is the frame data structure. It contains a number identifying the frame, and an array of floats, which is the frame data. These floats will be used to update the baseframe skeleton for each frame. Next we have a new structure for our joints. This structure holds different data than the other joint structure, since the joints for each animation (an md5mesh file can be paired with one or more md5anim files) can differ from animation to animation. So this structure holds animation-specific data, like the flags (which components of the joint should be updated per frame) and the start index (the first value in the frame data array to start updating with). Next is our ModelAnimation structure. You will see below that we have a vector of these as a member of our Model3D structure, because each model might contain one or more animations. This structure holds information about the animation, along with the animation itself (skeleton poses for each frame). The members have been explained above or are easy enough to understand without my explanation, but I will explain the last member, frameSkeleton. Each skeleton is defined by a vector of joints, and in the animation, each frame has a single skeleton. So we have an outer vector with one entry per frame of animation, and the inner vector holds the joints of that frame's skeleton. We can later access a single joint from this member like this: "frameSkeleton[frame][joint]", or an entire frame skeleton like this: "frameSkeleton[frame]".
struct BoundingBox
{
    XMFLOAT3 min;
    XMFLOAT3 max;
};

struct FrameData
{
    int frameID;
    std::vector<float> frameData;
};

struct AnimJointInfo
{
    std::wstring name;
    int parentID;

    int flags;
    int startIndex;
};

struct ModelAnimation
{
    int numFrames;
    int numJoints;
    int frameRate;
    int numAnimatedComponents;

    float frameTime;
    float totalAnimTime;
    float currAnimTime;

    std::vector<AnimJointInfo> jointInfo;
    std::vector<BoundingBox> frameBounds;
    std::vector<Joint> baseFrameJoints;
    std::vector<FrameData> frameData;
    std::vector<std::vector<Joint>> frameSkeleton;
};

##Updated Weight Structure## We have updated our Weight structure to include a normal. This is because if we define the normal in joint space, we can easily update it as our animation progresses, instead of having to recompute every single normal for every frame.

struct Weight
{
    int jointID;
    float bias;
    XMFLOAT3 pos;
    ///////////////**************new**************////////////////////
    XMFLOAT3 normal;
    ///////////////**************new**************////////////////////
};

##Updated Model3D Structure## Here you can see we have updated the Model3D structure to hold an array of animations.

struct Model3D
{
    int numSubsets;
    int numJoints;

    std::vector<Joint> joints;
    std::vector<ModelSubset> subsets;
    ///////////////**************new**************////////////////////
    std::vector<ModelAnimation> animations;
    ///////////////**************new**************////////////////////
};

##Two New Functions## These are our new functions. The first one loads an animation from a .md5anim file, and the second one updates the vertex buffer based on the time passed in the animation and which animation to use.

bool LoadMD5Anim(std::wstring filename, Model3D& MD5Model);
void UpdateMD5Model(Model3D& MD5Model, float deltaTime, int animation);

##The DetectInput() Function## There is a new key detection here. That key is the letter "R". While we press this key, our animation will update.
I have added a float called "timeFactor" which you can use to speed up or slow down time, or at least time within the animation's scope. If you really are speeding up or slowing down time throughout the game, you will want to put this time factor in a larger scope.

void DetectInput(double time)
{
    DIMOUSESTATE mouseCurrState;

    BYTE keyboardState[256];

    DIKeyboard->Acquire();
    DIMouse->Acquire();

    DIMouse->GetDeviceState(sizeof(DIMOUSESTATE), &mouseCurrState);

    DIKeyboard->GetDeviceState(sizeof(keyboardState),(LPVOID)&keyboardState);

    if(keyboardState[DIK_ESCAPE] & 0x80)
        PostMessage(hwnd, WM_DESTROY, 0, 0);

    float speed = 10.0f * time;

    if(keyboardState[DIK_A] & 0x80)
    {
        moveLeftRight -= speed;
    }
    if(keyboardState[DIK_D] & 0x80)
    {
        moveLeftRight += speed;
    }
    if(keyboardState[DIK_W] & 0x80)
    {
        moveBackForward += speed;
    }
    if(keyboardState[DIK_S] & 0x80)
    {
        moveBackForward -= speed;
    }

    ///////////////**************new**************////////////////////
    if(keyboardState[DIK_R] & 0x80)
    {
        float timeFactor = 1.0f;    // You can speed up or slow down time by changing this
        UpdateMD5Model(NewMD5Model, time*timeFactor, 0);
    }
    ///////////////**************new**************////////////////////

    if((mouseCurrState.lX != mouseLastState.lX) || (mouseCurrState.lY != mouseLastState.lY))
    {
        camYaw += mouseLastState.lX * 0.001f;

        camPitch += mouseCurrState.lY * 0.001f;

        mouseLastState = mouseCurrState;
    }

    UpdateCamera();

    return;
}

##The LoadMD5Anim() Function## Here we will load the animation. This function will also create the skeletons for every frame of animation. I will go through this function chunk by chunk.
bool LoadMD5Anim(std::wstring filename, Model3D& MD5Model)
{
    ModelAnimation tempAnim;                        // Temp animation to later store in our model's animation array

    std::wifstream fileIn (filename.c_str());       // Open file

    std::wstring checkString;                       // Stores the next string from our file

    if(fileIn)                                      // Check if the file was opened
    {
        while(fileIn)                               // Loop until the end of the file is reached
        {
            fileIn >> checkString;                  // Get next string from file

            if ( checkString == L"MD5Version" )     // Get MD5 version (this function supports version 10)
            {
                fileIn >> checkString;
                /*MessageBox(0, checkString.c_str(),    //display message
                L"MD5Version", MB_OK);*/
            }
            else if ( checkString == L"commandline" )
            {
                std::getline(fileIn, checkString);  // Ignore the rest of this line
            }
            else if ( checkString == L"numFrames" )
            {
                fileIn >> tempAnim.numFrames;       // Store number of frames in this animation
            }
            else if ( checkString == L"numJoints" )
            {
                fileIn >> tempAnim.numJoints;       // Store number of joints (must match .md5mesh)
            }
            else if ( checkString == L"frameRate" )
            {
                fileIn >> tempAnim.frameRate;       // Store animation's frame rate (frames per second)
            }
            else if ( checkString == L"numAnimatedComponents" )
            {
                fileIn >> tempAnim.numAnimatedComponents;   // Number of components in each frame section
            }
            else if ( checkString == L"hierarchy" )
            {
                fileIn >> checkString;              // Skip opening bracket "{"

                for(int i = 0; i < tempAnim.numJoints; i++) // Load in each joint
                {
                    AnimJointInfo tempJoint;

                    fileIn >> tempJoint.name;       // Get joint's name

                    // Sometimes the names might contain spaces. If that is the case, we need to continue
                    // to read the name until we get to the closing " (quotation mark)
                    if(tempJoint.name[tempJoint.name.size()-1] != '"')
                    {
                        wchar_t checkChar;
                        bool jointNameFound = false;
                        while(!jointNameFound)
                        {
                            checkChar = fileIn.get();

                            if(checkChar == '"')
                                jointNameFound = true;

                            tempJoint.name += checkChar;
                        }
                    }

                    // Remove the quotation marks from the joint's name
                    tempJoint.name.erase(0, 1);
                    tempJoint.name.erase(tempJoint.name.size()-1, 1);

                    fileIn >> tempJoint.parentID;   // Get joint's parent ID
                    fileIn >> tempJoint.flags;      // Get flags
                    fileIn >> tempJoint.startIndex; // Get joint's start index

                    // Make sure the joint exists in the model, and that the parent IDs match up,
                    // because the bind pose (md5mesh) joint hierarchy and the animation's (md5anim)
                    // joint hierarchy must match up
                    bool jointMatchFound = false;
                    for(int k = 0; k < MD5Model.numJoints; k++)
                    {
                        if(MD5Model.joints[k].name == tempJoint.name)
                        {
                            if(MD5Model.joints[k].parentID == tempJoint.parentID)
                            {
                                jointMatchFound = true;
                                tempAnim.jointInfo.push_back(tempJoint);
                            }
                        }
                    }
                    if(!jointMatchFound)            // If the skeletons do not match up, return false
                        return false;               // You might want to add an error message here

                    std::getline(fileIn, checkString);  // Skip the rest of this line
                }
            }
            else if ( checkString == L"bounds" )    // Load in the AABB for each frame of the animation
            {
                fileIn >> checkString;              // Skip opening bracket "{"

                for(int i = 0; i < tempAnim.numFrames; i++)
                {
                    BoundingBox tempBB;

                    fileIn >> checkString;          // Skip "("
                    fileIn >> tempBB.min.x >> tempBB.min.z >> tempBB.min.y;
                    fileIn >> checkString >> checkString;   // Skip ") ("
                    fileIn >> tempBB.max.x >> tempBB.max.z >> tempBB.max.y;
                    fileIn >> checkString;          // Skip ")"

                    tempAnim.frameBounds.push_back(tempBB);
                }
            }
            else if ( checkString == L"baseframe" ) // This is the default position for the animation
            {                                       // All frames will build their skeletons off this
                fileIn >> checkString;              // Skip opening bracket "{"

                for(int i = 0; i < tempAnim.numJoints; i++)
                {
                    Joint tempBFJ;

                    fileIn >> checkString;          // Skip "("
                    fileIn >> tempBFJ.pos.x >> tempBFJ.pos.z >> tempBFJ.pos.y;
                    fileIn >> checkString >> checkString;   // Skip ") ("
                    fileIn >> tempBFJ.orientation.x >> tempBFJ.orientation.z >> tempBFJ.orientation.y;
                    fileIn >> checkString;          // Skip ")"

                    tempAnim.baseFrameJoints.push_back(tempBFJ);
                }
            }
            else if ( checkString == L"frame" )     // Load in each frame's skeleton (the parts of each joint that changed from the base frame)
            {
                FrameData tempFrame;

                fileIn >> tempFrame.frameID;        // Get the frame ID

                fileIn >> checkString;              // Skip opening bracket "{"

                for(int i = 0; i < tempAnim.numAnimatedComponents; i++)
                {
                    float tempData;
                    fileIn >> tempData;             // Get the data

                    tempFrame.frameData.push_back(tempData);
                }

                tempAnim.frameData.push_back(tempFrame);

                ///*** build the frame skeleton ***///
                std::vector<Joint> tempSkeleton;

                for(int i = 0; i < tempAnim.jointInfo.size(); i++)
                {
                    int k = 0;  // Keep track of the position in the frameData array

                    // Start the frame's joint with the base frame's joint
                    Joint tempFrameJoint = tempAnim.baseFrameJoints[i];

                    tempFrameJoint.parentID = tempAnim.jointInfo[i].parentID;

                    // Notice how I have been flipping y and z. This is because some modeling programs, such as
                    // 3ds Max (which is what I use), use a right-handed coordinate system. Because of this, we
                    // need to flip the y and z axes. If you're having problems loading some models, it's possible
                    // the model was created in a left-handed coordinate system. In that case, just un-flip all the
                    // y and z axes in our MD5 mesh and anim loaders.
                    if(tempAnim.jointInfo[i].flags & 1)     // pos.x   ( 000001 )
                        tempFrameJoint.pos.x = tempFrame.frameData[tempAnim.jointInfo[i].startIndex + k++];

                    if(tempAnim.jointInfo[i].flags & 2)     // pos.y   ( 000010 )
                        tempFrameJoint.pos.z = tempFrame.frameData[tempAnim.jointInfo[i].startIndex + k++];

                    if(tempAnim.jointInfo[i].flags & 4)     // pos.z   ( 000100 )
                        tempFrameJoint.pos.y = tempFrame.frameData[tempAnim.jointInfo[i].startIndex + k++];

                    if(tempAnim.jointInfo[i].flags & 8)     // orientation.x   ( 001000 )
                        tempFrameJoint.orientation.x = tempFrame.frameData[tempAnim.jointInfo[i].startIndex + k++];

                    if(tempAnim.jointInfo[i].flags & 16)    // orientation.y   ( 010000 )
                        tempFrameJoint.orientation.z = tempFrame.frameData[tempAnim.jointInfo[i].startIndex + k++];

                    if(tempAnim.jointInfo[i].flags & 32)    // orientation.z   ( 100000 )
                        tempFrameJoint.orientation.y = tempFrame.frameData[tempAnim.jointInfo[i].startIndex + k++];

                    // Compute the quaternion's w
                    float t = 1.0f - ( tempFrameJoint.orientation.x * tempFrameJoint.orientation.x )
                                   - ( tempFrameJoint.orientation.y * tempFrameJoint.orientation.y )
                                   - ( tempFrameJoint.orientation.z * tempFrameJoint.orientation.z );
                    if ( t < 0.0f )
                    {
                        tempFrameJoint.orientation.w = 0.0f;
                    }
                    else
                    {
                        tempFrameJoint.orientation.w = -sqrtf(t);
                    }

                    // Now, if the upper arm of your skeleton moves, you also need to move the lower part of the arm, then the hand, and finally the fingers (and possibly a weapon or tool too).
                    // This is where the joint hierarchy comes in. We start at the top of the hierarchy and move down to each joint's children, rotating and translating them based on their parent's
                    // rotation and translation. We can assume that by the time we get to a child, its parent has already been rotated and translated based on its own parent, because
                    // a child never comes before its parent in the files we load.
if(tempFrameJoint.parentID >= 0) { Joint parentJoint = tempSkeleton[tempFrameJoint.parentID]; // Turn the XMFLOAT3 and 4's into vectors for easier computation XMVECTOR parentJointOrientation = XMVectorSet(parentJoint.orientation.x, parentJoint.orientation.y, parentJoint.orientation.z, parentJoint.orientation.w); XMVECTOR tempJointPos = XMVectorSet(tempFrameJoint.pos.x, tempFrameJoint.pos.y, tempFrameJoint.pos.z, 0.0f); XMVECTOR parentOrientationConjugate = XMVectorSet(-parentJoint.orientation.x, -parentJoint.orientation.y, -parentJoint.orientation.z, parentJoint.orientation.w); // Calculate current joints position relative to its parents position XMFLOAT3 rotatedPos; XMStoreFloat3(&rotatedPos, XMQuaternionMultiply(XMQuaternionMultiply(parentJointOrientation, tempJointPos), parentOrientationConjugate)); // Translate the joint to model space by adding the parent joint's pos to it tempFrameJoint.pos.x = rotatedPos.x + parentJoint.pos.x; tempFrameJoint.pos.y = rotatedPos.y + parentJoint.pos.y; tempFrameJoint.pos.z = rotatedPos.z + parentJoint.pos.z; // Currently the joint is oriented in its parent joints space, we now need to orient it in // model space by multiplying the two orientations together (parentOrientation * childOrientation) <- In that order XMVECTOR tempJointOrient = XMVectorSet(tempFrameJoint.orientation.x, tempFrameJoint.orientation.y, tempFrameJoint.orientation.z, tempFrameJoint.orientation.w); tempJointOrient = XMQuaternionMultiply(parentJointOrientation, tempJointOrient); // Normalize the orienation quaternion tempJointOrient = XMQuaternionNormalize(tempJointOrient); XMStoreFloat4(&tempFrameJoint.orientation, tempJointOrient); } // Store the joint into our temporary frame skeleton tempSkeleton.push_back(tempFrameJoint); } // Push back our newly created frame skeleton into the animation's frameSkeleton array tempAnim.frameSkeleton.push_back(tempSkeleton); fileIn >> checkString; // Skip closing bracket "}" } } // Calculate and store some usefull 
animation data tempAnim.frameTime = 1.0f / tempAnim.frameRate; // Set the time per frame tempAnim.totalAnimTime = tempAnim.numFrames * tempAnim.frameTime; // Set the total time the animation takes tempAnim.currAnimTime = 0.0f; // Set the current time to zero MD5Model.animations.push_back(tempAnim); // Push back the animation into our model object } else // If the file was not loaded { SwapChain->SetFullscreenState(false, NULL); // Make sure we are out of fullscreen // create message std::wstring message = L"Could not open: "; message += filename; MessageBox(0, message.c_str(), // display message L"Error", MB_OK); return false; } return true; } ##Opening the File and Reading the Header Info## I don't want to spend too much time explaining things that have been explained in earlier lessons, so if you have at least read the last lesson you will know whats happening here. bool LoadMD5Anim(std::wstring filename, Model3D& MD5Model) { ModelAnimation tempAnim; // Temp animation to later store in our model's animation array std::wifstream fileIn (filename.c_str()); // Open file std::wstring checkString; // Stores the next string from our file if(fileIn) // Check if the file was opened { while(fileIn) // Loop until the end of the file is reached { fileIn >> checkString; // Get next string from file if ( checkString == L"MD5Version" ) // Get MD5 version (this function supports version 10) { fileIn >> checkString; /*MessageBox(0, checkString.c_str(), //display message L"MD5Version", MB_OK);*/ } else if ( checkString == L"commandline" ) { std::getline(fileIn, checkString); // Ignore the rest of this line } else if ( checkString == L"numFrames" ) { fileIn >> tempAnim.numFrames; // Store number of frames in this animation } else if ( checkString == L"numJoints" ) { fileIn >> tempAnim.numJoints; // Store number of joints (must match .md5mesh) } else if ( checkString == L"frameRate" ) { fileIn >> tempAnim.frameRate; // Store animation's frame rate (frames per second) } else if ( 
checkString == L"numAnimatedComponents" ) { fileIn >> tempAnim.numAnimatedComponents; // Number of components in each frame section } ... } else // If the file was not loaded { SwapChain->SetFullscreenState(false, NULL); // Make sure we are out of fullscreen // create message std::wstring message = L"Could not open: "; message += filename; MessageBox(0, message.c_str(), // display message L"Error", MB_OK); return false; } return true; } ##Load the Joint Hierarchy## Next we load in the joint hierarchy. Like we mentioned above, we check to make sure this hierarchy matches up with the one from the .md5mesh file. If not, we return false. else if ( checkString == L"hierarchy" ) { fileIn >> checkString; // Skip opening bracket "{" for(int i = 0; i < tempAnim.numJoints; i++) // Load in each joint { AnimJointInfo tempJoint; fileIn >> tempJoint.name; // Get joints name // Sometimes the names might contain spaces. If that is the case, we need to continue // to read the name until we get to the closing " (quotation marks) if(tempJoint.name[tempJoint.name.size()-1] != '"') { wchar_t checkChar; bool jointNameFound = false; while(!jointNameFound) { checkChar = fileIn.get(); if(checkChar == '"') jointNameFound = true; tempJoint.name += checkChar; } } // Remove the quotation marks from joints name tempJoint.name.erase(0, 1); tempJoint.name.erase(tempJoint.name.size()-1, 1); fileIn >> tempJoint.parentID; // Get joints parent ID fileIn >> tempJoint.flags; // Get flags fileIn >> tempJoint.startIndex; // Get joints start index // Make sure the joint exists in the model, and the parent ID's match up // because the bind pose (md5mesh) joint hierarchy and the animations (md5anim) // joint hierarchy must match up bool jointMatchFound = false; for(int k = 0; k < MD5Model.numJoints; k++) { if(MD5Model.joints[k].name == tempJoint.name) { if(MD5Model.joints[k].parentID == tempJoint.parentID) { jointMatchFound = true; tempAnim.jointInfo.push_back(tempJoint); } } } if(!jointMatchFound) // If 
the skeleton system does not match up, return false return false; // You might want to add an error message here std::getline(fileIn, checkString); // Skip rest of this line } } ##Load Each Frame's Bounding Box (AABB)## Now we load in each frame's bounding box, or the min and max points. We will not use the bounding boxes in this lesson, but they can be used for any purpose you see fit; collision detection is the first use that comes to mind. else if ( checkString == L"bounds" ) // Load in the AABB for each frame of animation { fileIn >> checkString; // Skip opening bracket "{" for(int i = 0; i < tempAnim.numFrames; i++) { BoundingBox tempBB; fileIn >> checkString; // Skip "(" fileIn >> tempBB.min.x >> tempBB.min.z >> tempBB.min.y; fileIn >> checkString >> checkString; // Skip ") (" fileIn >> tempBB.max.x >> tempBB.max.z >> tempBB.max.y; fileIn >> checkString; // Skip ")" tempAnim.frameBounds.push_back(tempBB); } } ##Loading the Baseframe## Here we load in the baseframe. There's not much to say about it other than it's the skeleton that all the frame skeletons build off of. else if ( checkString == L"baseframe" ) // This is the default position for the animation { // All frames will build their skeletons off this fileIn >> checkString; // Skip opening bracket "{" for(int i = 0; i < tempAnim.numJoints; i++) { Joint tempBFJ; fileIn >> checkString; // Skip "(" fileIn >> tempBFJ.pos.x >> tempBFJ.pos.z >> tempBFJ.pos.y; fileIn >> checkString >> checkString; // Skip ") (" fileIn >> tempBFJ.orientation.x >> tempBFJ.orientation.z >> tempBFJ.orientation.y; fileIn >> checkString; // Skip ")" tempAnim.baseFrameJoints.push_back(tempBFJ); } } ##Loading the Frame## Here we get to the frame sections. Each frame section contains an array of float values (the count is given by numAnimatedComponents in the header).
else if ( checkString == L"frame" ) // Load in each frame's skeleton (the parts of each joint that changed from the base frame) { FrameData tempFrame; fileIn >> tempFrame.frameID; // Get the frame ID fileIn >> checkString; // Skip opening bracket "{" for(int i = 0; i < tempAnim.numAnimatedComponents; i++) { float tempData; fileIn >> tempData; // Get the data tempFrame.frameData.push_back(tempData); } tempAnim.frameData.push_back(tempFrame); ##Creating the Frame Skeleton## Now that we have loaded a frame's array of floats, we can build the frame skeleton for that frame using this data. This is where the joint's "flag" comes in, along with its start index. I'll start by explaining the flag. The flag is a bit mask: a six-bit number that says which parts of a joint should be updated. The bits go in order, from the first bit meaning the joint's position x value should be updated, to the sixth meaning the joint's orientation z value should be updated. For example, "1" or (000001) says that the joint's position x should be replaced by the next float in the frame data array. "7" or (000111) says that all x, y, and z of the joint's position should be updated with the next three values in the frame data. "63" or (111111) says that the x, y, and z values of both the position and the orientation should be updated. The joint's start index says where in the frame data array to start reading values for this joint. We check each of the six possible flags for the joint, and if one is set, we replace the x, y, or z of the position or orientation, then increment "k", which tracks our offset into the frame data array from the joint's start index. Once you see it, you will realize it's not so difficult; I just can't seem to explain it without being so "wordy".
After we update the joints of the frame skeleton from the baseframe, we calculate the "w" value of the orientation quaternion before updating the frame's joints based on their parents' position and orientation. ///*** build the frame skeleton ***/// std::vector<Joint> tempSkeleton; for(int i = 0; i < tempAnim.jointInfo.size(); i++) { int k = 0; // Keep track of position in frameData array // Start the frame's joint with the base frame's joint Joint tempFrameJoint = tempAnim.baseFrameJoints[i]; tempFrameJoint.parentID = tempAnim.jointInfo[i].parentID; // Notice how I have been flipping y and z. This is because some modeling programs such as // 3ds max (which is what I use) use a right handed coordinate system. Because of this, we // need to flip the y and z axes. If you're having problems loading some models, it's possible // the model was created in a left hand coordinate system. In that case, just flip all the // y and z axes back in our md5 mesh and anim loaders. if(tempAnim.jointInfo[i].flags & 1) // pos.x ( 000001 ) tempFrameJoint.pos.x = tempFrame.frameData[tempAnim.jointInfo[i].startIndex + k++]; if(tempAnim.jointInfo[i].flags & 2) // pos.y ( 000010 ) tempFrameJoint.pos.z = tempFrame.frameData[tempAnim.jointInfo[i].startIndex + k++]; if(tempAnim.jointInfo[i].flags & 4) // pos.z ( 000100 ) tempFrameJoint.pos.y = tempFrame.frameData[tempAnim.jointInfo[i].startIndex + k++]; if(tempAnim.jointInfo[i].flags & 8) // orientation.x ( 001000 ) tempFrameJoint.orientation.x = tempFrame.frameData[tempAnim.jointInfo[i].startIndex + k++]; if(tempAnim.jointInfo[i].flags & 16) // orientation.y ( 010000 ) tempFrameJoint.orientation.z = tempFrame.frameData[tempAnim.jointInfo[i].startIndex + k++]; if(tempAnim.jointInfo[i].flags & 32) // orientation.z ( 100000 ) tempFrameJoint.orientation.y = tempFrame.frameData[tempAnim.jointInfo[i].startIndex + k++]; // Compute the quaternion's w float t = 1.0f - ( tempFrameJoint.orientation.x * tempFrameJoint.orientation.x ) - ( tempFrameJoint.orientation.y * tempFrameJoint.orientation.y ) - ( tempFrameJoint.orientation.z * tempFrameJoint.orientation.z ); if ( t < 0.0f ) { tempFrameJoint.orientation.w = 0.0f; } else { tempFrameJoint.orientation.w = -sqrtf(t); } ##Update Frame Skeleton Joints Based On Parent's Position and Orientation## Next we do what was just described: update each joint based on its parent. We can assume a child never comes before its parent in the hierarchy list, so a joint's parent is always fully updated before the joint itself is processed. We go through each joint and find its parent joint (as long as it has a parent; the root joint doesn't have one). We then rotate the joint's position by the parent's orientation. Then we add the parent's position to the child's position to put it into model space. After that we reorient the child joint based on the parent joint. To do that, we simply multiply the two orientations together (parent * child). // Now, if the upper arm of your skeleton moves, you need to also move the lower part of your arm, and then the hands, and then finally the fingers (possibly weapon or tool too) // This is where joint hierarchy comes in. We start at the top of the hierarchy, and move down to each joint's children, rotating and translating them based on their parent's rotation // and translation. We can assume that by the time we get to a child, its parent has already been rotated and transformed based on its own parent. We can assume this because // a child never comes before its parent in the files we loaded in.
if(tempFrameJoint.parentID >= 0) { Joint parentJoint = tempSkeleton[tempFrameJoint.parentID]; // Turn the XMFLOAT3 and 4's into vectors for easier computation XMVECTOR parentJointOrientation = XMVectorSet(parentJoint.orientation.x, parentJoint.orientation.y, parentJoint.orientation.z, parentJoint.orientation.w); XMVECTOR tempJointPos = XMVectorSet(tempFrameJoint.pos.x, tempFrameJoint.pos.y, tempFrameJoint.pos.z, 0.0f); XMVECTOR parentOrientationConjugate = XMVectorSet(-parentJoint.orientation.x, -parentJoint.orientation.y, -parentJoint.orientation.z, parentJoint.orientation.w); // Calculate current joints position relative to its parents position XMFLOAT3 rotatedPos; XMStoreFloat3(&rotatedPos, XMQuaternionMultiply(XMQuaternionMultiply(parentJointOrientation, tempJointPos), parentOrientationConjugate)); // Translate the joint to model space by adding the parent joint's pos to it tempFrameJoint.pos.x = rotatedPos.x + parentJoint.pos.x; tempFrameJoint.pos.y = rotatedPos.y + parentJoint.pos.y; tempFrameJoint.pos.z = rotatedPos.z + parentJoint.pos.z; // Currently the joint is oriented in its parent joints space, we now need to orient it in // model space by multiplying the two orientations together (parentOrientation * childOrientation) <- In that order XMVECTOR tempJointOrient = XMVectorSet(tempFrameJoint.orientation.x, tempFrameJoint.orientation.y, tempFrameJoint.orientation.z, tempFrameJoint.orientation.w); tempJointOrient = XMQuaternionMultiply(parentJointOrientation, tempJointOrient); // Normalize the orienation quaternion tempJointOrient = XMQuaternionNormalize(tempJointOrient); XMStoreFloat4(&tempFrameJoint.orientation, tempJointOrient); } // Store the joint into our temporary frame skeleton tempSkeleton.push_back(tempFrameJoint); } // Push back our newly created frame skeleton into the animation's frameSkeleton array tempAnim.frameSkeleton.push_back(tempSkeleton); fileIn >> checkString; // Skip closing bracket "}" } } ##Calculating Some Frame Stuff## We will 
need to know the length in time of each frame, and the total animation time, so we calculate those quickly. We also make sure that the current animation time is set to zero. After we do all that, we push back the temp animation into our model's animation vector, and exit the function successfully. // Calculate and store some useful animation data tempAnim.frameTime = 1.0f / tempAnim.frameRate; // Set the time per frame tempAnim.totalAnimTime = tempAnim.numFrames * tempAnim.frameTime; // Set the total time the animation takes tempAnim.currAnimTime = 0.0f; // Set the current time to zero MD5Model.animations.push_back(tempAnim); // Push back the animation into our model object ##The UpdateMD5Model() Function## Here's the function that we will call whenever we want to update our model using the animation. I'll go through this function in chunks too. void UpdateMD5Model(Model3D& MD5Model, float deltaTime, int animation) { MD5Model.animations[animation].currAnimTime += deltaTime; // Update the current animation time if(MD5Model.animations[animation].currAnimTime > MD5Model.animations[animation].totalAnimTime) MD5Model.animations[animation].currAnimTime = 0.0f; // Which frame are we on float currentFrame = MD5Model.animations[animation].currAnimTime * MD5Model.animations[animation].frameRate; int frame0 = floorf( currentFrame ); int frame1 = frame0 + 1; // Make sure we don't go over the number of frames if(frame0 == MD5Model.animations[animation].numFrames-1) frame1 = 0; float interpolation = currentFrame - frame0; // Get the remainder (in time) between frame0 and frame1 to use as interpolation factor std::vector<Joint> interpolatedSkeleton; // Create a frame skeleton to store the interpolated skeletons in // Compute the interpolated skeleton for( int i = 0; i < MD5Model.animations[animation].numJoints; i++) { Joint tempJoint; Joint joint0 = MD5Model.animations[animation].frameSkeleton[frame0][i]; // Get the i'th joint of frame0's skeleton Joint joint1 =
MD5Model.animations[animation].frameSkeleton[frame1][i]; // Get the i'th joint of frame1's skeleton tempJoint.parentID = joint0.parentID; // Set the tempJoints parent id // Turn the two quaternions into XMVECTORs for easy computations XMVECTOR joint0Orient = XMVectorSet(joint0.orientation.x, joint0.orientation.y, joint0.orientation.z, joint0.orientation.w); XMVECTOR joint1Orient = XMVectorSet(joint1.orientation.x, joint1.orientation.y, joint1.orientation.z, joint1.orientation.w); // Interpolate positions tempJoint.pos.x = joint0.pos.x + (interpolation * (joint1.pos.x - joint0.pos.x)); tempJoint.pos.y = joint0.pos.y + (interpolation * (joint1.pos.y - joint0.pos.y)); tempJoint.pos.z = joint0.pos.z + (interpolation * (joint1.pos.z - joint0.pos.z)); // Interpolate orientations using spherical interpolation (Slerp) XMStoreFloat4(&tempJoint.orientation, XMQuaternionSlerp(joint0Orient, joint1Orient, interpolation)); interpolatedSkeleton.push_back(tempJoint); // Push the joint back into our interpolated skeleton } for ( int k = 0; k < MD5Model.numSubsets; k++) { for ( int i = 0; i < MD5Model.subsets[k].vertices.size(); ++i ) { Vertex tempVert = MD5Model.subsets[k].vertices[i]; tempVert.pos = XMFLOAT3(0, 0, 0); // Make sure the vertex's pos is cleared first tempVert.normal = XMFLOAT3(0,0,0); // Clear vertices normal // Sum up the joints and weights information to get vertex's position and normal for ( int j = 0; j < tempVert.WeightCount; ++j ) { Weight tempWeight = MD5Model.subsets[k].weights[tempVert.StartWeight + j]; Joint tempJoint = interpolatedSkeleton[tempWeight.jointID]; // Convert joint orientation and weight pos to vectors for easier computation XMVECTOR tempJointOrientation = XMVectorSet(tempJoint.orientation.x, tempJoint.orientation.y, tempJoint.orientation.z, tempJoint.orientation.w); XMVECTOR tempWeightPos = XMVectorSet(tempWeight.pos.x, tempWeight.pos.y, tempWeight.pos.z, 0.0f); // We will need to use the conjugate of the joint orientation quaternion XMVECTOR 
tempJointOrientationConjugate = XMQuaternionInverse(tempJointOrientation); // Calculate vertex position (in joint space, eg. rotate the point around (0,0,0)) for this weight using the joint orientation quaternion and its conjugate // We can rotate a point using a quaternion with the equation "rotatedPoint = quaternion * point * quaternionConjugate" XMFLOAT3 rotatedPoint; XMStoreFloat3(&rotatedPoint, XMQuaternionMultiply(XMQuaternionMultiply(tempJointOrientation, tempWeightPos), tempJointOrientationConjugate)); // Now move the verices position from joint space (0,0,0) to the joints position in world space, taking the weights bias into account tempVert.pos.x += ( tempJoint.pos.x + rotatedPoint.x ) * tempWeight.bias; tempVert.pos.y += ( tempJoint.pos.y + rotatedPoint.y ) * tempWeight.bias; tempVert.pos.z += ( tempJoint.pos.z + rotatedPoint.z ) * tempWeight.bias; // Compute the normals for this frames skeleton using the weight normals from before // We can comput the normals the same way we compute the vertices position, only we don't have to translate them (just rotate) XMVECTOR tempWeightNormal = XMVectorSet(tempWeight.normal.x, tempWeight.normal.y, tempWeight.normal.z, 0.0f); // Rotate the normal XMStoreFloat3(&rotatedPoint, XMQuaternionMultiply(XMQuaternionMultiply(tempJointOrientation, tempWeightNormal), tempJointOrientationConjugate)); // Add to vertices normal and ake weight bias into account tempVert.normal.x -= rotatedPoint.x * tempWeight.bias; tempVert.normal.y -= rotatedPoint.y * tempWeight.bias; tempVert.normal.z -= rotatedPoint.z * tempWeight.bias; } MD5Model.subsets[k].positions[i] = tempVert.pos; // Store the vertices position in the position vector instead of straight into the vertex vector MD5Model.subsets[k].vertices[i].normal = tempVert.normal; // Store the vertices normal XMStoreFloat3(&MD5Model.subsets[k].vertices[i].normal, XMVector3Normalize(XMLoadFloat3(&MD5Model.subsets[k].vertices[i].normal))); } // Put the positions into the vertices for this 
subset for(int i = 0; i < MD5Model.subsets[k].vertices.size(); i++) { MD5Model.subsets[k].vertices[i].pos = MD5Model.subsets[k].positions[i]; } // Update the subset's vertex buffer // First lock the buffer D3D11_MAPPED_SUBRESOURCE mappedVertBuff; hr = d3d11DevCon->Map(MD5Model.subsets[k].vertBuff, 0, D3D11_MAP_WRITE_DISCARD, 0, &mappedVertBuff); // Copy the data into the vertex buffer. memcpy(mappedVertBuff.pData, &MD5Model.subsets[k].vertices[0], (sizeof(Vertex) * MD5Model.subsets[k].vertices.size())); d3d11DevCon->Unmap(MD5Model.subsets[k].vertBuff, 0); // The line below is another way to update a buffer. You will use this when you want to update a buffer less // than once per frame, since the GPU reads will be faster (the buffer was created as a DEFAULT buffer instead // of a DYNAMIC buffer), and the CPU writes will be slower. You can try both methods to find out which one is faster // for you. If you want to use the line below, you will have to create the buffer with D3D11_USAGE_DEFAULT instead // of D3D11_USAGE_DYNAMIC //d3d11DevCon->UpdateSubresource( MD5Model.subsets[k].vertBuff, 0, NULL, &MD5Model.subsets[k].vertices[0], 0, 0 ); } } ##Which Frames to Use## We start the function by updating the animation's current time. If the current animation time is over the total animation time, we reset the current animation time to zero to restart the animation. After that we find out which frames we are on. We can do this by multiplying the current animation time by the frameRate. Rounding this answer down to the nearest whole number gives us the frame we are currently on, and adding one to this number gives us the next frame. We will interpolate these two frames to find the current frame skeleton, in order to keep a nice smooth animation. When interpolating two things, we need an interpolation factor, which we can get by taking the fractional part of the current animation time multiplied by the frame rate (e.g.
currentFrameTime = 5.3667, currentFrame = 5, interpolationFactor = 0.3667). void UpdateMD5Model(Model3D& MD5Model, float deltaTime, int animation) { MD5Model.animations[animation].currAnimTime += deltaTime; // Update the current animation time if(MD5Model.animations[animation].currAnimTime > MD5Model.animations[animation].totalAnimTime) MD5Model.animations[animation].currAnimTime = 0.0f; // Which frame are we on float currentFrame = MD5Model.animations[animation].currAnimTime * MD5Model.animations[animation].frameRate; int frame0 = floorf( currentFrame ); int frame1 = frame0 + 1; // Make sure we don't go over the number of frames if(frame0 == MD5Model.animations[animation].numFrames-1) frame1 = 0; float interpolation = currentFrame - frame0; // Get the remainder (in time) between frame0 and frame1 to use as interpolation factor std::vector<Joint> interpolatedSkeleton; // Create a frame skeleton to store the interpolated skeletons in ##Computing the Interpolated Skeleton## Now we can create the interpolated skeleton. This is actually pretty easy to do. You can see by the code the equation we use to find the interpolated position of the joints, and then below use a technique called spherical linear interpolation (Slerp) to interpolate the two quaternions representing the two joints' orientation. We can use a function called XMQuaternionSlerp() to interpolate two quaternions based on the interpolation factor. 
// Compute the interpolated skeleton for( int i = 0; i < MD5Model.animations[animation].numJoints; i++) { Joint tempJoint; Joint joint0 = MD5Model.animations[animation].frameSkeleton[frame0][i]; // Get the i'th joint of frame0's skeleton Joint joint1 = MD5Model.animations[animation].frameSkeleton[frame1][i]; // Get the i'th joint of frame1's skeleton tempJoint.parentID = joint0.parentID; // Set the tempJoints parent id // Turn the two quaternions into XMVECTORs for easy computations XMVECTOR joint0Orient = XMVectorSet(joint0.orientation.x, joint0.orientation.y, joint0.orientation.z, joint0.orientation.w); XMVECTOR joint1Orient = XMVectorSet(joint1.orientation.x, joint1.orientation.y, joint1.orientation.z, joint1.orientation.w); // Interpolate positions tempJoint.pos.x = joint0.pos.x + (interpolation * (joint1.pos.x - joint0.pos.x)); tempJoint.pos.y = joint0.pos.y + (interpolation * (joint1.pos.y - joint0.pos.y)); tempJoint.pos.z = joint0.pos.z + (interpolation * (joint1.pos.z - joint0.pos.z)); // Interpolate orientations using spherical interpolation (Slerp) XMStoreFloat4(&tempJoint.orientation, XMQuaternionSlerp(joint0Orient, joint1Orient, interpolation)); interpolatedSkeleton.push_back(tempJoint); // Push the joint back into our interpolated skeleton } ##Calculate the Vertex Position and Normal## Now we loop through each subset in the model, then another loop for each vertex, and finally another one for each of the vertex's weights. To calculate the vertex position, we do the same thing we did in the last lesson when creating our bind-pose model. After we calculate the vertex's position, we calculate its normal. We can calculate the normal by first finding the weights normal. To find the weights normal, we just rotate the normal based on the joints orientation. After we have the weight normal, we multiply it by the weights bias factor, and finally add it to the vertex normal. 
for ( int k = 0; k < MD5Model.numSubsets; k++) { for ( int i = 0; i < MD5Model.subsets[k].vertices.size(); ++i ) { Vertex tempVert = MD5Model.subsets[k].vertices[i]; tempVert.pos = XMFLOAT3(0, 0, 0); // Make sure the vertex's pos is cleared first tempVert.normal = XMFLOAT3(0,0,0); // Clear vertices normal // Sum up the joints and weights information to get vertex's position and normal for ( int j = 0; j < tempVert.WeightCount; ++j ) { Weight tempWeight = MD5Model.subsets[k].weights[tempVert.StartWeight + j]; Joint tempJoint = interpolatedSkeleton[tempWeight.jointID]; // Convert joint orientation and weight pos to vectors for easier computation XMVECTOR tempJointOrientation = XMVectorSet(tempJoint.orientation.x, tempJoint.orientation.y, tempJoint.orientation.z, tempJoint.orientation.w); XMVECTOR tempWeightPos = XMVectorSet(tempWeight.pos.x, tempWeight.pos.y, tempWeight.pos.z, 0.0f); // We will need to use the conjugate of the joint orientation quaternion XMVECTOR tempJointOrientationConjugate = XMQuaternionInverse(tempJointOrientation); // Calculate vertex position (in joint space, eg. 
rotate the point around (0,0,0)) for this weight using the joint orientation quaternion and its conjugate // We can rotate a point using a quaternion with the equation "rotatedPoint = quaternion * point * quaternionConjugate" XMFLOAT3 rotatedPoint; XMStoreFloat3(&rotatedPoint, XMQuaternionMultiply(XMQuaternionMultiply(tempJointOrientation, tempWeightPos), tempJointOrientationConjugate)); // Now move the vertex's position from joint space (0,0,0) to the joint's position in world space, taking the weight's bias into account tempVert.pos.x += ( tempJoint.pos.x + rotatedPoint.x ) * tempWeight.bias; tempVert.pos.y += ( tempJoint.pos.y + rotatedPoint.y ) * tempWeight.bias; tempVert.pos.z += ( tempJoint.pos.z + rotatedPoint.z ) * tempWeight.bias; // Compute the normals for this frame's skeleton using the weight normals from before // We can compute the normals the same way we compute the vertex position, only we don't have to translate them (just rotate) XMVECTOR tempWeightNormal = XMVectorSet(tempWeight.normal.x, tempWeight.normal.y, tempWeight.normal.z, 0.0f); // Rotate the normal XMStoreFloat3(&rotatedPoint, XMQuaternionMultiply(XMQuaternionMultiply(tempJointOrientation, tempWeightNormal), tempJointOrientationConjugate)); // Add to the vertex normal and take the weight bias into account tempVert.normal.x -= rotatedPoint.x * tempWeight.bias; tempVert.normal.y -= rotatedPoint.y * tempWeight.bias; tempVert.normal.z -= rotatedPoint.z * tempWeight.bias; } MD5Model.subsets[k].positions[i] = tempVert.pos; // Store the vertex position in the position vector instead of straight into the vertex vector MD5Model.subsets[k].vertices[i].normal = tempVert.normal; // Store the vertex normal XMStoreFloat3(&MD5Model.subsets[k].vertices[i].normal, XMVector3Normalize(XMLoadFloat3(&MD5Model.subsets[k].vertices[i].normal))); } ##Updating the Buffer## The last thing we do in this function is update the vertex buffer. As you may remember from the last lesson, we created a dynamic vertex buffer.
To update dynamic buffers in DirectX, we need to first lock (map) them, then copy the contents that we want into the buffer. The Map function fills out a D3D11_MAPPED_SUBRESOURCE object, whose pData member is a pointer to the beginning of the data in the buffer. We can use this pointer to copy our vertex array into the buffer, then finally unmap the buffer. Notice that when copying a vector into a buffer, you must take the address of the first element you want to start copying from (&vec[0]). This is because the address of the vector object itself points at the vector's internal bookkeeping data (such as its size and a pointer to its elements), not at the elements themselves. I kept the commented-out line in the code for reference: if you are using a default buffer, you can update it with the UpdateSubresource function, although you would only want to do that for buffers you update infrequently, since the CPU writes are slower than with a dynamic buffer. // Put the positions into the vertices for this subset for(int i = 0; i < MD5Model.subsets[k].vertices.size(); i++) { MD5Model.subsets[k].vertices[i].pos = MD5Model.subsets[k].positions[i]; } // Update the subset's vertex buffer // First lock the buffer D3D11_MAPPED_SUBRESOURCE mappedVertBuff; hr = d3d11DevCon->Map(MD5Model.subsets[k].vertBuff, 0, D3D11_MAP_WRITE_DISCARD, 0, &mappedVertBuff); // Copy the data into the vertex buffer. memcpy(mappedVertBuff.pData, &MD5Model.subsets[k].vertices[0], (sizeof(Vertex) * MD5Model.subsets[k].vertices.size())); d3d11DevCon->Unmap(MD5Model.subsets[k].vertBuff, 0); // The line below is another way to update a buffer. You will use this when you want to update a buffer less // than once per frame, since the GPU reads will be faster (the buffer was created as a DEFAULT buffer instead // of a DYNAMIC buffer), and the CPU writes will be slower. You can try both methods to find out which one is faster // for you.
If you want to use the line below, you will have to create the buffer with D3D11_USAGE_DEFAULT instead // of D3D11_USAGE_DYNAMIC //d3d11DevCon->UpdateSubresource( MD5Model.subsets[k].vertBuff, 0, NULL, &MD5Model.subsets[k].vertices[0], 0, 0 ); } } ##Call the LoadMD5Anim() Function## Now all that's left is to call the function that will load in our model's animation! if(!LoadMD5Anim(L"boy.md5anim", NewMD5Model)) return false; This should give you at least some useful information! I hope you enjoyed it! ##Exercise:## 1. Try to update the structures so that you can use a loaded animation for multiple models (as long as they have the same joint hierarchy). 2. Instead of having the guy run in a single spot, have him run in circles or a pattern of your choosing! 3. Let me know what you think! 4. Have a great day! Here's the final code: main.cpp //Include and link appropriate libraries and headers// #pragma comment(lib, "d3d11.lib") #pragma comment(lib, "d3dx11.lib") #pragma comment(lib, "d3dx10.lib") #pragma comment (lib, "D3D10_1.lib") #pragma comment (lib, "DXGI.lib") #pragma comment (lib, "D2D1.lib") #pragma comment (lib, "dwrite.lib") #pragma comment (lib, "dinput8.lib") #pragma comment (lib, "dxguid.lib") #include <windows.h> #include <d3d11.h> #include <d3dx11.h> #include <d3dx10.h> #include <xnamath.h> #include <D3D10_1.h> #include <DXGI.h> #include <D2D1.h> #include <sstream> #include <dwrite.h> #include <dinput.h> #include <vector> #include <fstream> #include <istream> //Global Declarations - Interfaces// IDXGISwapChain* SwapChain; ID3D11Device* d3d11Device; ID3D11DeviceContext* d3d11DevCon; ID3D11RenderTargetView* renderTargetView; ID3D11DepthStencilView* depthStencilView; ID3D11Texture2D* depthStencilBuffer; ID3D11VertexShader* VS; ID3D11PixelShader* PS; ID3D11PixelShader* D2D_PS; ID3D10Blob* D2D_PS_Buffer; ID3D10Blob* VS_Buffer; ID3D10Blob* PS_Buffer; ID3D11InputLayout* vertLayout; ID3D11Buffer* cbPerObjectBuffer; ID3D11BlendState* d2dTransparency;
ID3D11RasterizerState* CCWcullMode; ID3D11RasterizerState* CWcullMode; ID3D11SamplerState* CubesTexSamplerState; ID3D11Buffer* cbPerFrameBuffer; ID3D10Device1 *d3d101Device; IDXGIKeyedMutex *keyedMutex11; IDXGIKeyedMutex *keyedMutex10; ID2D1RenderTarget *D2DRenderTarget; ID2D1SolidColorBrush *Brush; ID3D11Texture2D *BackBuffer11; ID3D11Texture2D *sharedTex11; ID3D11Buffer *d2dVertBuffer; ID3D11Buffer *d2dIndexBuffer; ID3D11ShaderResourceView *d2dTexture; IDWriteFactory *DWriteFactory; IDWriteTextFormat *TextFormat; IDirectInputDevice8* DIKeyboard; IDirectInputDevice8* DIMouse; ID3D11Buffer* sphereIndexBuffer; ID3D11Buffer* sphereVertBuffer; ID3D11VertexShader* SKYMAP_VS; ID3D11PixelShader* SKYMAP_PS; ID3D10Blob* SKYMAP_VS_Buffer; ID3D10Blob* SKYMAP_PS_Buffer; ID3D11ShaderResourceView* smrv; ID3D11DepthStencilState* DSLessEqual; ID3D11RasterizerState* RSCullNone; ID3D11BlendState* Transparency; //Mesh variables. Each loaded mesh will need its own set of these ID3D11Buffer* meshVertBuff; ID3D11Buffer* meshIndexBuff; XMMATRIX meshWorld; int meshSubsets = 0; std::vector<int> meshSubsetIndexStart; std::vector<int> meshSubsetTexture; //Textures and material variables, used for all mesh's loaded std::vector<ID3D11ShaderResourceView*> meshSRV; std::vector<std::wstring> textureNameArray; std::wstring printText; //Global Declarations - Others// LPCTSTR WndClassName = L"firstwindow"; HWND hwnd = NULL; HRESULT hr; int Width = 300; int Height = 300; DIMOUSESTATE mouseLastState; LPDIRECTINPUT8 DirectInput; float rotx = 0; float rotz = 0; float scaleX = 1.0f; float scaleY = 1.0f; XMMATRIX Rotationx; XMMATRIX Rotationz; XMMATRIX Rotationy; XMMATRIX WVP; XMMATRIX camView; XMMATRIX camProjection; XMMATRIX d2dWorld; XMVECTOR camPosition; XMVECTOR camTarget; XMVECTOR camUp; XMVECTOR DefaultForward = XMVectorSet(0.0f,0.0f,1.0f, 0.0f); XMVECTOR DefaultRight = XMVectorSet(1.0f,0.0f,0.0f, 0.0f); XMVECTOR camForward = XMVectorSet(0.0f,0.0f,1.0f, 0.0f); XMVECTOR camRight = 
XMVectorSet(1.0f,0.0f,0.0f, 0.0f); XMMATRIX camRotationMatrix; float moveLeftRight = 0.0f; float moveBackForward = 0.0f; float camYaw = 0.0f; float camPitch = 0.0f; int NumSphereVertices; int NumSphereFaces; XMMATRIX sphereWorld; XMMATRIX Rotation; XMMATRIX Scale; XMMATRIX Translation; float rot = 0.01f; double countsPerSecond = 0.0; __int64 CounterStart = 0; int frameCount = 0; int fps = 0; __int64 frameTimeOld = 0; double frameTime; //Function Prototypes// bool InitializeDirect3d11App(HINSTANCE hInstance); void CleanUp(); bool InitScene(); void DrawScene(); bool InitD2D_D3D101_DWrite(IDXGIAdapter1 *Adapter); void InitD2DScreenTexture(); void UpdateScene(double time); void UpdateCamera(); void RenderText(std::wstring text, int inInt); void StartTimer(); double GetTime(); double GetFrameTime(); bool InitializeWindow(HINSTANCE hInstance, int ShowWnd, int width, int height, bool windowed); int messageloop(); bool InitDirectInput(HINSTANCE hInstance); void DetectInput(double time); void CreateSphere(int LatLines, int LongLines); LRESULT CALLBACK WndProc(HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam); //Create effects constant buffer's structure// struct cbPerObject { XMMATRIX WVP; XMMATRIX World; //These will be used for the pixel shader XMFLOAT4 difColor; BOOL hasTexture; //Because of HLSL structure packing, we will use windows BOOL //instead of bool because HLSL packs things into 4 bytes, and //bool is only one byte, where BOOL is 4 bytes BOOL hasNormMap; }; cbPerObject cbPerObj; //Create material structure struct SurfaceMaterial { std::wstring matName; XMFLOAT4 difColor; int texArrayIndex; int normMapTexArrayIndex; bool hasNormMap; bool hasTexture; bool transparent; }; std::vector<SurfaceMaterial> material; //Define LoadObjModel function after we create surfaceMaterial structure bool LoadObjModel(std::wstring filename, //.obj filename ID3D11Buffer** vertBuff, //mesh vertex buffer ID3D11Buffer** indexBuff, //mesh index buffer std::vector<int>& subsetIndexStart, 
//start index of each subset std::vector<int>& subsetMaterialArray, //index value of material for each subset std::vector<SurfaceMaterial>& material, //vector of material structures int& subsetCount, //Number of subsets in mesh bool isRHCoordSys, //true if model was created in right hand coord system bool computeNormals); //true to compute the normals, false to use the files normals struct Light { Light() { ZeroMemory(this, sizeof(Light)); } XMFLOAT3 pos; float range; XMFLOAT3 dir; float cone; XMFLOAT3 att; float pad2; XMFLOAT4 ambient; XMFLOAT4 diffuse; }; Light light; struct cbPerFrame { Light light; }; cbPerFrame constbuffPerFrame; struct Vertex //Overloaded Vertex Structure { Vertex(){} Vertex(float x, float y, float z, float u, float v, float nx, float ny, float nz, float tx, float ty, float tz) : pos(x,y,z), texCoord(u, v), normal(nx, ny, nz), tangent(tx, ty, tz){} XMFLOAT3 pos; XMFLOAT2 texCoord; XMFLOAT3 normal; XMFLOAT3 tangent; XMFLOAT3 biTangent; // Will not be sent to shader int StartWeight; int WeightCount; }; D3D11_INPUT_ELEMENT_DESC layout[] = { { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0, D3D11_INPUT_PER_VERTEX_DATA, 0 }, { "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT, 0, 12, D3D11_INPUT_PER_VERTEX_DATA, 0 }, { "NORMAL", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 20, D3D11_INPUT_PER_VERTEX_DATA, 0}, { "TANGENT", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 32, D3D11_INPUT_PER_VERTEX_DATA, 0} }; UINT numElements = ARRAYSIZE(layout); struct Joint { std::wstring name; int parentID; XMFLOAT3 pos; XMFLOAT4 orientation; }; ///////////////**************new**************//////////////////// struct BoundingBox { XMFLOAT3 min; XMFLOAT3 max; }; struct FrameData { int frameID; std::vector<float> frameData; }; struct AnimJointInfo { std::wstring name; int parentID; int flags; int startIndex; }; struct ModelAnimation { int numFrames; int numJoints; int frameRate; int numAnimatedComponents; float frameTime; float totalAnimTime; float currAnimTime; std::vector<AnimJointInfo> 
jointInfo; std::vector<BoundingBox> frameBounds; std::vector<Joint> baseFrameJoints; std::vector<FrameData> frameData; std::vector<std::vector<Joint>> frameSkeleton; }; ///////////////**************new**************//////////////////// struct Weight { int jointID; float bias; XMFLOAT3 pos; ///////////////**************new**************//////////////////// XMFLOAT3 normal; ///////////////**************new**************//////////////////// }; struct ModelSubset { int texArrayIndex; int numTriangles; std::vector<Vertex> vertices; std::vector<XMFLOAT3> jointSpaceNormals; std::vector<DWORD> indices; std::vector<Weight> weights; std::vector<XMFLOAT3> positions; ID3D11Buffer* vertBuff; ID3D11Buffer* indexBuff; }; struct Model3D { int numSubsets; int numJoints; std::vector<Joint> joints; std::vector<ModelSubset> subsets; ///////////////**************new**************//////////////////// std::vector<ModelAnimation> animations; ///////////////**************new**************//////////////////// }; XMMATRIX smilesWorld; Model3D NewMD5Model; //LoadMD5Model() function prototype bool LoadMD5Model(std::wstring filename, Model3D& MD5Model, std::vector<ID3D11ShaderResourceView*>& shaderResourceViewArray, std::vector<std::wstring> texFileNameArray); ///////////////**************new**************//////////////////// bool LoadMD5Anim(std::wstring filename, Model3D& MD5Model); void UpdateMD5Model(Model3D& MD5Model, float deltaTime, int animation); ///////////////**************new**************//////////////////// int WINAPI WinMain(HINSTANCE hInstance, //Main windows function HINSTANCE hPrevInstance, LPSTR lpCmdLine, int nShowCmd) { if(!InitializeWindow(hInstance, nShowCmd, Width, Height, true)) { MessageBox(0, L"Window Initialization - Failed", L"Error", MB_OK); return 0; } if(!InitializeDirect3d11App(hInstance)) //Initialize Direct3D { MessageBox(0, L"Direct3D Initialization - Failed", L"Error", MB_OK); return 0; } if(!InitScene()) //Initialize our scene { MessageBox(0, L"Scene 
Initialization - Failed", L"Error", MB_OK); return 0; } if(!InitDirectInput(hInstance)) { MessageBox(0, L"Direct Input Initialization - Failed", L"Error", MB_OK); return 0; } messageloop(); CleanUp(); return 0; } bool InitializeWindow(HINSTANCE hInstance, int ShowWnd, int width, int height, bool windowed) { typedef struct _WNDCLASS { UINT cbSize; UINT style; WNDPROC lpfnWndProc; int cbClsExtra; int cbWndExtra; HANDLE hInstance; HICON hIcon; HCURSOR hCursor; HBRUSH hbrBackground; LPCTSTR lpszMenuName; LPCTSTR lpszClassName; } WNDCLASS; WNDCLASSEX wc; wc.cbSize = sizeof(WNDCLASSEX); wc.style = CS_HREDRAW | CS_VREDRAW; wc.lpfnWndProc = WndProc; wc.cbClsExtra = NULL; wc.cbWndExtra = NULL; wc.hInstance = hInstance; wc.hIcon = LoadIcon(NULL, IDI_APPLICATION); wc.hCursor = LoadCursor(NULL, IDC_ARROW); wc.hbrBackground = (HBRUSH)(COLOR_WINDOW + 1); wc.lpszMenuName = NULL; wc.lpszClassName = WndClassName; wc.hIconSm = LoadIcon(NULL, IDI_APPLICATION); if (!RegisterClassEx(&wc)) { MessageBox(NULL, L"Error registering class", L"Error", MB_OK | MB_ICONERROR); return 1; } hwnd = CreateWindowEx( NULL, WndClassName, L"Lesson 4 - Begin Drawing", WS_OVERLAPPEDWINDOW, CW_USEDEFAULT, CW_USEDEFAULT, width, height, NULL, NULL, hInstance, NULL ); if (!hwnd) { MessageBox(NULL, L"Error creating window", L"Error", MB_OK | MB_ICONERROR); return 1; } ShowWindow(hwnd, ShowWnd); UpdateWindow(hwnd); return true; } bool InitializeDirect3d11App(HINSTANCE hInstance) { //Describe our SwapChain Buffer DXGI_MODE_DESC bufferDesc; ZeroMemory(&bufferDesc, sizeof(DXGI_MODE_DESC)); bufferDesc.Width = Width; bufferDesc.Height = Height; bufferDesc.RefreshRate.Numerator = 60; bufferDesc.RefreshRate.Denominator = 1; bufferDesc.Format = DXGI_FORMAT_B8G8R8A8_UNORM; bufferDesc.ScanlineOrdering = DXGI_MODE_SCANLINE_ORDER_UNSPECIFIED; bufferDesc.Scaling = DXGI_MODE_SCALING_UNSPECIFIED; //Describe our SwapChain DXGI_SWAP_CHAIN_DESC swapChainDesc; ZeroMemory(&swapChainDesc, sizeof(DXGI_SWAP_CHAIN_DESC)); 
swapChainDesc.BufferDesc = bufferDesc; swapChainDesc.SampleDesc.Count = 1; swapChainDesc.SampleDesc.Quality = 0; swapChainDesc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT; swapChainDesc.BufferCount = 1; swapChainDesc.OutputWindow = hwnd; ///////////////**************new**************//////////////////// swapChainDesc.Windowed = true; ///////////////**************new**************//////////////////// swapChainDesc.SwapEffect = DXGI_SWAP_EFFECT_DISCARD; // Create DXGI factory to enumerate adapters/////////////////////////////////////////////////////////////////////////// IDXGIFactory1 *DXGIFactory; HRESULT hr = CreateDXGIFactory1(__uuidof(IDXGIFactory1), (void**)&DXGIFactory); // Use the first adapter IDXGIAdapter1 *Adapter; hr = DXGIFactory->EnumAdapters1(0, &Adapter); DXGIFactory->Release(); //Create our Direct3D 11 Device and SwapChain////////////////////////////////////////////////////////////////////////// hr = D3D11CreateDeviceAndSwapChain(Adapter, D3D_DRIVER_TYPE_UNKNOWN, NULL, D3D11_CREATE_DEVICE_BGRA_SUPPORT, NULL, NULL, D3D11_SDK_VERSION, &swapChainDesc, &SwapChain, &d3d11Device, NULL, &d3d11DevCon); //Initialize Direct2D, Direct3D 10.1, DirectWrite InitD2D_D3D101_DWrite(Adapter); //Release the Adapter interface Adapter->Release(); //Create our BackBuffer and Render Target hr = SwapChain->GetBuffer( 0, __uuidof( ID3D11Texture2D ), (void**)&BackBuffer11 ); hr = d3d11Device->CreateRenderTargetView( BackBuffer11, NULL, &renderTargetView ); //Describe our Depth/Stencil Buffer D3D11_TEXTURE2D_DESC depthStencilDesc; depthStencilDesc.Width = Width; depthStencilDesc.Height = Height; depthStencilDesc.MipLevels = 1; depthStencilDesc.ArraySize = 1; depthStencilDesc.Format = DXGI_FORMAT_D24_UNORM_S8_UINT; depthStencilDesc.SampleDesc.Count = 1; depthStencilDesc.SampleDesc.Quality = 0; depthStencilDesc.Usage = D3D11_USAGE_DEFAULT; depthStencilDesc.BindFlags = D3D11_BIND_DEPTH_STENCIL; depthStencilDesc.CPUAccessFlags = 0; depthStencilDesc.MiscFlags = 0; //Create the 
Depth/Stencil View d3d11Device->CreateTexture2D(&depthStencilDesc, NULL, &depthStencilBuffer); d3d11Device->CreateDepthStencilView(depthStencilBuffer, NULL, &depthStencilView); return true; } bool InitD2D_D3D101_DWrite(IDXGIAdapter1 *Adapter) { //Create our Direc3D 10.1 Device/////////////////////////////////////////////////////////////////////////////////////// hr = D3D10CreateDevice1(Adapter, D3D10_DRIVER_TYPE_HARDWARE, NULL,D3D10_CREATE_DEVICE_BGRA_SUPPORT, D3D10_FEATURE_LEVEL_9_3, D3D10_1_SDK_VERSION, &d3d101Device ); //Create Shared Texture that Direct3D 10.1 will render on////////////////////////////////////////////////////////////// D3D11_TEXTURE2D_DESC sharedTexDesc; ZeroMemory(&sharedTexDesc, sizeof(sharedTexDesc)); sharedTexDesc.Width = Width; sharedTexDesc.Height = Height; sharedTexDesc.Format = DXGI_FORMAT_B8G8R8A8_UNORM; sharedTexDesc.MipLevels = 1; sharedTexDesc.ArraySize = 1; sharedTexDesc.SampleDesc.Count = 1; sharedTexDesc.Usage = D3D11_USAGE_DEFAULT; sharedTexDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE | D3D11_BIND_RENDER_TARGET; sharedTexDesc.MiscFlags = D3D11_RESOURCE_MISC_SHARED_KEYEDMUTEX; hr = d3d11Device->CreateTexture2D(&sharedTexDesc, NULL, &sharedTex11); // Get the keyed mutex for the shared texture (for D3D11)/////////////////////////////////////////////////////////////// hr = sharedTex11->QueryInterface(__uuidof(IDXGIKeyedMutex), (void**)&keyedMutex11); // Get the shared handle needed to open the shared texture in D3D10.1/////////////////////////////////////////////////// IDXGIResource *sharedResource10; HANDLE sharedHandle10; hr = sharedTex11->QueryInterface(__uuidof(IDXGIResource), (void**)&sharedResource10); hr = sharedResource10->GetSharedHandle(&sharedHandle10); sharedResource10->Release(); // Open the surface for the shared texture in D3D10.1/////////////////////////////////////////////////////////////////// IDXGISurface1 *sharedSurface10; hr = d3d101Device->OpenSharedResource(sharedHandle10, __uuidof(IDXGISurface1), 
(void**)(&sharedSurface10)); hr = sharedSurface10->QueryInterface(__uuidof(IDXGIKeyedMutex), (void**)&keyedMutex10); // Create D2D factory/////////////////////////////////////////////////////////////////////////////////////////////////// ID2D1Factory *D2DFactory; hr = D2D1CreateFactory(D2D1_FACTORY_TYPE_SINGLE_THREADED, __uuidof(ID2D1Factory), (void**)&D2DFactory); D2D1_RENDER_TARGET_PROPERTIES renderTargetProperties; ZeroMemory(&renderTargetProperties, sizeof(renderTargetProperties)); renderTargetProperties.type = D2D1_RENDER_TARGET_TYPE_HARDWARE; renderTargetProperties.pixelFormat = D2D1::PixelFormat(DXGI_FORMAT_UNKNOWN, D2D1_ALPHA_MODE_PREMULTIPLIED); hr = D2DFactory->CreateDxgiSurfaceRenderTarget(sharedSurface10, &renderTargetProperties, &D2DRenderTarget); sharedSurface10->Release(); D2DFactory->Release(); // Create a solid color brush to draw something with hr = D2DRenderTarget->CreateSolidColorBrush(D2D1::ColorF(1.0f, 1.0f, 1.0f, 1.0f), &Brush); //DirectWrite/////////////////////////////////////////////////////////////////////////////////////////////////////////// hr = DWriteCreateFactory(DWRITE_FACTORY_TYPE_SHARED, __uuidof(IDWriteFactory), reinterpret_cast<IUnknown**>(&DWriteFactory)); hr = DWriteFactory->CreateTextFormat( L"Script", NULL, DWRITE_FONT_WEIGHT_REGULAR, DWRITE_FONT_STYLE_NORMAL, DWRITE_FONT_STRETCH_NORMAL, 24.0f, L"en-us", &TextFormat ); hr = TextFormat->SetTextAlignment(DWRITE_TEXT_ALIGNMENT_LEADING); hr = TextFormat->SetParagraphAlignment(DWRITE_PARAGRAPH_ALIGNMENT_NEAR); d3d101Device->IASetPrimitiveTopology(D3D10_PRIMITIVE_TOPOLOGY_POINTLIST); return true; } bool InitDirectInput(HINSTANCE hInstance) { hr = DirectInput8Create(hInstance, DIRECTINPUT_VERSION, IID_IDirectInput8, (void**)&DirectInput, NULL); hr = DirectInput->CreateDevice(GUID_SysKeyboard, &DIKeyboard, NULL); hr = DirectInput->CreateDevice(GUID_SysMouse, &DIMouse, NULL); hr = DIKeyboard->SetDataFormat(&c_dfDIKeyboard); hr = DIKeyboard->SetCooperativeLevel(hwnd, DISCL_FOREGROUND 
| DISCL_NONEXCLUSIVE); hr = DIMouse->SetDataFormat(&c_dfDIMouse); hr = DIMouse->SetCooperativeLevel(hwnd, DISCL_NONEXCLUSIVE | DISCL_NOWINKEY | DISCL_FOREGROUND); return true; } void UpdateCamera() { camRotationMatrix = XMMatrixRotationRollPitchYaw(camPitch, camYaw, 0); camTarget = XMVector3TransformCoord(DefaultForward, camRotationMatrix ); camTarget = XMVector3Normalize(camTarget); XMMATRIX RotateYTempMatrix; RotateYTempMatrix = XMMatrixRotationY(camYaw); // Walk //camRight = XMVector3TransformCoord(DefaultRight, RotateYTempMatrix); //camUp = XMVector3TransformCoord(camUp, RotateYTempMatrix); //camForward = XMVector3TransformCoord(DefaultForward, RotateYTempMatrix); // Free Cam camRight = XMVector3TransformCoord(DefaultRight, camRotationMatrix); camForward = XMVector3TransformCoord(DefaultForward, camRotationMatrix); camUp = XMVector3Cross(camForward, camRight); camPosition += moveLeftRight*camRight; camPosition += moveBackForward*camForward; moveLeftRight = 0.0f; moveBackForward = 0.0f; camTarget = camPosition + camTarget; camView = XMMatrixLookAtLH( camPosition, camTarget, camUp ); } void DetectInput(double time) { DIMOUSESTATE mouseCurrState; BYTE keyboardState[256]; DIKeyboard->Acquire(); DIMouse->Acquire(); DIMouse->GetDeviceState(sizeof(DIMOUSESTATE), &mouseCurrState); DIKeyboard->GetDeviceState(sizeof(keyboardState),(LPVOID)&keyboardState); if(keyboardState[DIK_ESCAPE] & 0x80) PostMessage(hwnd, WM_DESTROY, 0, 0); float speed = 10.0f * time; if(keyboardState[DIK_A] & 0x80) { moveLeftRight -= speed; } if(keyboardState[DIK_D] & 0x80) { moveLeftRight += speed; } if(keyboardState[DIK_W] & 0x80) { moveBackForward += speed; } if(keyboardState[DIK_S] & 0x80) { moveBackForward -= speed; } ///////////////**************new**************//////////////////// if(keyboardState[DIK_R] & 0X80) { float timeFactor = 1.0f; // You can speed up or slow down time by changing this UpdateMD5Model(NewMD5Model, time*timeFactor, 0); } 
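As an aside before the mouse-look code: UpdateMD5Model() (defined further down) turns the accumulated animation time into a pair of frame indices plus a blend factor. Here is a minimal standalone sketch of just that mapping (TimeToFrames is a made-up helper name, not part of the tutorial's code):

```cpp
#include <cassert>
#include <cmath>

// Hypothetical helper: maps an animation time (in seconds) to the two
// surrounding frame indices and the 0..1 blend factor between them.
// frameRate is in frames per second, numFrames is the animation's frame count.
void TimeToFrames(float animTime, int frameRate, int numFrames,
                  int& frame0, int& frame1, float& lerpFactor)
{
    float currentFrame = animTime * frameRate; // e.g. 0.5s * 24fps = frame 12.0
    frame0 = (int)floorf(currentFrame);
    frame1 = frame0 + 1;
    if (frame0 == numFrames - 1)        // wrap the animation around
        frame1 = 0;
    lerpFactor = currentFrame - frame0; // fractional part = blend amount
}
```

With a 24 fps animation, a time of 0.5 seconds lands exactly on frame 12, so the blend factor is 0; a time halfway between two frames gives a factor of about 0.5.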
///////////////**************new**************//////////////////// if((mouseCurrState.lX != mouseLastState.lX) || (mouseCurrState.lY != mouseLastState.lY)) { camYaw += mouseLastState.lX * 0.001f; camPitch += mouseCurrState.lY * 0.001f; mouseLastState = mouseCurrState; } UpdateCamera(); return; } void CleanUp() { SwapChain->SetFullscreenState(false, NULL); PostMessage(hwnd, WM_DESTROY, 0, 0); //Release the COM Objects we created SwapChain->Release(); d3d11Device->Release(); d3d11DevCon->Release(); renderTargetView->Release(); VS->Release(); PS->Release(); VS_Buffer->Release(); PS_Buffer->Release(); vertLayout->Release(); depthStencilView->Release(); depthStencilBuffer->Release(); cbPerObjectBuffer->Release(); Transparency->Release(); CCWcullMode->Release(); CWcullMode->Release(); d3d101Device->Release(); keyedMutex11->Release(); keyedMutex10->Release(); D2DRenderTarget->Release(); Brush->Release(); BackBuffer11->Release(); sharedTex11->Release(); DWriteFactory->Release(); TextFormat->Release(); d2dTexture->Release(); cbPerFrameBuffer->Release(); DIKeyboard->Unacquire(); DIMouse->Unacquire(); DirectInput->Release(); sphereIndexBuffer->Release(); sphereVertBuffer->Release(); SKYMAP_VS->Release(); SKYMAP_PS->Release(); SKYMAP_VS_Buffer->Release(); SKYMAP_PS_Buffer->Release(); smrv->Release(); DSLessEqual->Release(); RSCullNone->Release(); meshVertBuff->Release(); meshIndexBuff->Release(); for(int i = 0; i < NewMD5Model.numSubsets; i++) { NewMD5Model.subsets[i].indexBuff->Release(); NewMD5Model.subsets[i].vertBuff->Release(); } } ///////////////**************new**************//////////////////// bool LoadMD5Anim(std::wstring filename, Model3D& MD5Model) { ModelAnimation tempAnim; // Temp animation to later store in our model's animation array std::wifstream fileIn (filename.c_str()); // Open file std::wstring checkString; // Stores the next string from our file if(fileIn) // Check if the file was opened { while(fileIn) // Loop until the end of the file is reached { 
fileIn >> checkString; // Get next string from file if ( checkString == L"MD5Version" ) // Get MD5 version (this function supports version 10) { fileIn >> checkString; /*MessageBox(0, checkString.c_str(), //display message L"MD5Version", MB_OK);*/ } else if ( checkString == L"commandline" ) { std::getline(fileIn, checkString); // Ignore the rest of this line } else if ( checkString == L"numFrames" ) { fileIn >> tempAnim.numFrames; // Store number of frames in this animation } else if ( checkString == L"numJoints" ) { fileIn >> tempAnim.numJoints; // Store number of joints (must match .md5mesh) } else if ( checkString == L"frameRate" ) { fileIn >> tempAnim.frameRate; // Store animation's frame rate (frames per second) } else if ( checkString == L"numAnimatedComponents" ) { fileIn >> tempAnim.numAnimatedComponents; // Number of components in each frame section } else if ( checkString == L"hierarchy" ) { fileIn >> checkString; // Skip opening bracket "{" for(int i = 0; i < tempAnim.numJoints; i++) // Load in each joint { AnimJointInfo tempJoint; fileIn >> tempJoint.name; // Get joints name // Sometimes the names might contain spaces. 
If that is the case, we need to continue // to read the name until we get to the closing " (quotation marks) if(tempJoint.name[tempJoint.name.size()-1] != '"') { wchar_t checkChar; bool jointNameFound = false; while(!jointNameFound) { checkChar = fileIn.get(); if(checkChar == '"') jointNameFound = true; tempJoint.name += checkChar; } } // Remove the quotation marks from the joint's name tempJoint.name.erase(0, 1); tempJoint.name.erase(tempJoint.name.size()-1, 1); fileIn >> tempJoint.parentID; // Get the joint's parent ID fileIn >> tempJoint.flags; // Get flags fileIn >> tempJoint.startIndex; // Get the joint's start index // Make sure the joint exists in the model, and the parent IDs match up // because the bind pose (md5mesh) joint hierarchy and the animation's (md5anim) // joint hierarchy must match up bool jointMatchFound = false; for(int k = 0; k < MD5Model.numJoints; k++) { if(MD5Model.joints[k].name == tempJoint.name) { if(MD5Model.joints[k].parentID == tempJoint.parentID) { jointMatchFound = true; tempAnim.jointInfo.push_back(tempJoint); } } } if(!jointMatchFound) // If the skeleton systems do not match up, return false return false; // You might want to add an error message here std::getline(fileIn, checkString); // Skip rest of this line } } else if ( checkString == L"bounds" ) // Load in the AABB for each frame of the animation { fileIn >> checkString; // Skip opening bracket "{" for(int i = 0; i < tempAnim.numFrames; i++) { BoundingBox tempBB; fileIn >> checkString; // Skip "(" fileIn >> tempBB.min.x >> tempBB.min.z >> tempBB.min.y; fileIn >> checkString >> checkString; // Skip ") (" fileIn >> tempBB.max.x >> tempBB.max.z >> tempBB.max.y; fileIn >> checkString; // Skip ")" tempAnim.frameBounds.push_back(tempBB); } } else if ( checkString == L"baseframe" ) // This is the default position for the animation { // All frames will build their skeletons off of this fileIn >> checkString; // Skip opening bracket "{" for(int i = 0; i < tempAnim.numJoints; i++) { Joint tempBFJ; fileIn >>
checkString; // Skip "(" fileIn >> tempBFJ.pos.x >> tempBFJ.pos.z >> tempBFJ.pos.y; fileIn >> checkString >> checkString; // Skip ") (" fileIn >> tempBFJ.orientation.x >> tempBFJ.orientation.z >> tempBFJ.orientation.y; fileIn >> checkString; // Skip ")" tempAnim.baseFrameJoints.push_back(tempBFJ); } } else if ( checkString == L"frame" ) // Load in each frame's skeleton (the parts of each joint that changed from the base frame) { FrameData tempFrame; fileIn >> tempFrame.frameID; // Get the frame ID fileIn >> checkString; // Skip opening bracket "{" for(int i = 0; i < tempAnim.numAnimatedComponents; i++) { float tempData; fileIn >> tempData; // Get the data tempFrame.frameData.push_back(tempData); } tempAnim.frameData.push_back(tempFrame); ///*** build the frame skeleton ***/// std::vector<Joint> tempSkeleton; for(int i = 0; i < tempAnim.jointInfo.size(); i++) { int k = 0; // Keep track of position in frameData array // Start the frame's joint with the base frame's joint Joint tempFrameJoint = tempAnim.baseFrameJoints[i]; tempFrameJoint.parentID = tempAnim.jointInfo[i].parentID; // Notice how I have been flipping y and z. This is because some modeling programs such as // 3ds max (which is what I use) use a right handed coordinate system. Because of this, we // need to flip the y and z axes. If you're having problems loading some models, it's possible // the model was created in a left handed coordinate system. In that case, just re-flip all the // y and z axes in our md5 mesh and anim loader.
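Before the flag checks below, it may help to see the base-frame/flags idea in isolation. This is a hedged sketch with made-up names (`SimpleJoint`, `DecodeFrameJoint`) and no y/z axis swap, purely to illustrate how the flag bits decide which joint components a frame overrides:

```cpp
#include <cassert>
#include <vector>

// Minimal stand-in for the tutorial's joint data (illustration only).
struct SimpleJoint { float px, py, pz, ox, oy, oz; };

// Overwrite only the components whose flag bit is set, reading them in
// order from frameData starting at startIndex. Everything not flagged
// keeps its base frame value - that is the core of the MD5 "frame" section.
SimpleJoint DecodeFrameJoint(SimpleJoint base, int flags, int startIndex,
                             const std::vector<float>& frameData)
{
    int k = startIndex;
    if (flags & 1)  base.px = frameData[k++]; // pos.x    ( 000001 )
    if (flags & 2)  base.py = frameData[k++]; // pos.y    ( 000010 )
    if (flags & 4)  base.pz = frameData[k++]; // pos.z    ( 000100 )
    if (flags & 8)  base.ox = frameData[k++]; // orient.x ( 001000 )
    if (flags & 16) base.oy = frameData[k++]; // orient.y ( 010000 )
    if (flags & 32) base.oz = frameData[k++]; // orient.z ( 100000 )
    return base;
}
```

So with flags 5 ( 000101 ), only pos.x and pos.z come from the frame data; the rest stays at the base frame.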
if(tempAnim.jointInfo[i].flags & 1) // pos.x ( 000001 ) tempFrameJoint.pos.x = tempFrame.frameData[tempAnim.jointInfo[i].startIndex + k++]; if(tempAnim.jointInfo[i].flags & 2) // pos.y ( 000010 ) tempFrameJoint.pos.z = tempFrame.frameData[tempAnim.jointInfo[i].startIndex + k++]; if(tempAnim.jointInfo[i].flags & 4) // pos.z ( 000100 ) tempFrameJoint.pos.y = tempFrame.frameData[tempAnim.jointInfo[i].startIndex + k++]; if(tempAnim.jointInfo[i].flags & 8) // orientation.x ( 001000 ) tempFrameJoint.orientation.x = tempFrame.frameData[tempAnim.jointInfo[i].startIndex + k++]; if(tempAnim.jointInfo[i].flags & 16) // orientation.y ( 010000 ) tempFrameJoint.orientation.z = tempFrame.frameData[tempAnim.jointInfo[i].startIndex + k++]; if(tempAnim.jointInfo[i].flags & 32) // orientation.z ( 100000 ) tempFrameJoint.orientation.y = tempFrame.frameData[tempAnim.jointInfo[i].startIndex + k++]; // Compute the quaternion's w component float t = 1.0f - ( tempFrameJoint.orientation.x * tempFrameJoint.orientation.x ) - ( tempFrameJoint.orientation.y * tempFrameJoint.orientation.y ) - ( tempFrameJoint.orientation.z * tempFrameJoint.orientation.z ); if ( t < 0.0f ) { tempFrameJoint.orientation.w = 0.0f; } else { tempFrameJoint.orientation.w = -sqrtf(t); } // Now, if the upper arm of your skeleton moves, you need to also move the lower part of your arm, and then the hands, and then finally the fingers (possibly weapon or tool too) // This is where joint hierarchy comes in. We start at the top of the hierarchy, and move down to each joint's children, rotating and translating them based on their parent's rotation // and translation. We can assume that by the time we get to a child, its parent has already been rotated and transformed based on its own parent. We can assume this because // a child should never come before its parent in the files we loaded in.
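The parent-child transform that follows relies on rotating a point with a quaternion: rotatedPoint = quaternion * point * quaternionConjugate. Here is a standalone, DirectX-free sketch of that operation (`Quat`, `QMul` and `RotatePoint` are names made up for this illustration; note that XMQuaternionMultiply's argument order follows DirectXMath's concatenation convention, so the real code is not a literal transcription of this):

```cpp
#include <cassert>
#include <cmath>

// A tiny quaternion type with the textbook (Hamilton) product, used to
// show the "q * p * conjugate(q)" point rotation performed in the loader.
struct Quat { float x, y, z, w; };

Quat QMul(Quat a, Quat b) // Hamilton product a*b
{
    return {
        a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y,
        a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x,
        a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w,
        a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z
    };
}

// Rotate point p by unit quaternion q: p' = q * p * conjugate(q)
void RotatePoint(Quat q, float& px, float& py, float& pz)
{
    Quat p    = { px, py, pz, 0.0f };       // the point as a pure quaternion
    Quat conj = { -q.x, -q.y, -q.z, q.w };  // conjugate = inverse for unit q
    Quat r    = QMul(QMul(q, p), conj);
    px = r.x; py = r.y; pz = r.z;
}
```

For example, the unit quaternion (0, 0, sin 45°, cos 45°) encodes a 90 degree rotation about the z axis, so it carries the point (1, 0, 0) to (0, 1, 0).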
if(tempFrameJoint.parentID >= 0) { Joint parentJoint = tempSkeleton[tempFrameJoint.parentID]; // Turn the XMFLOAT3 and 4's into vectors for easier computation XMVECTOR parentJointOrientation = XMVectorSet(parentJoint.orientation.x, parentJoint.orientation.y, parentJoint.orientation.z, parentJoint.orientation.w); XMVECTOR tempJointPos = XMVectorSet(tempFrameJoint.pos.x, tempFrameJoint.pos.y, tempFrameJoint.pos.z, 0.0f); XMVECTOR parentOrientationConjugate = XMVectorSet(-parentJoint.orientation.x, -parentJoint.orientation.y, -parentJoint.orientation.z, parentJoint.orientation.w); // Calculate the current joint's position relative to its parent's position XMFLOAT3 rotatedPos; XMStoreFloat3(&rotatedPos, XMQuaternionMultiply(XMQuaternionMultiply(parentJointOrientation, tempJointPos), parentOrientationConjugate)); // Translate the joint to model space by adding the parent joint's pos to it tempFrameJoint.pos.x = rotatedPos.x + parentJoint.pos.x; tempFrameJoint.pos.y = rotatedPos.y + parentJoint.pos.y; tempFrameJoint.pos.z = rotatedPos.z + parentJoint.pos.z; // Currently the joint is oriented in its parent joint's space; we now need to orient it in // model space by multiplying the two orientations together (parentOrientation * childOrientation) <- In that order XMVECTOR tempJointOrient = XMVectorSet(tempFrameJoint.orientation.x, tempFrameJoint.orientation.y, tempFrameJoint.orientation.z, tempFrameJoint.orientation.w); tempJointOrient = XMQuaternionMultiply(parentJointOrientation, tempJointOrient); // Normalize the orientation quaternion tempJointOrient = XMQuaternionNormalize(tempJointOrient); XMStoreFloat4(&tempFrameJoint.orientation, tempJointOrient); } // Store the joint into our temporary frame skeleton tempSkeleton.push_back(tempFrameJoint); } // Push back our newly created frame skeleton into the animation's frameSkeleton array tempAnim.frameSkeleton.push_back(tempSkeleton); fileIn >> checkString; // Skip closing bracket "}" } } // Calculate and store some useful
animation data tempAnim.frameTime = 1.0f / tempAnim.frameRate; // Set the time per frame tempAnim.totalAnimTime = tempAnim.numFrames * tempAnim.frameTime; // Set the total time the animation takes tempAnim.currAnimTime = 0.0f; // Set the current time to zero MD5Model.animations.push_back(tempAnim); // Push back the animation into our model object } else // If the file was not loaded { SwapChain->SetFullscreenState(false, NULL); // Make sure we are out of fullscreen // create message std::wstring message = L"Could not open: "; message += filename; MessageBox(0, message.c_str(), // display message L"Error", MB_OK); return false; } return true; } void UpdateMD5Model(Model3D& MD5Model, float deltaTime, int animation) { MD5Model.animations[animation].currAnimTime += deltaTime; // Update the current animation time if(MD5Model.animations[animation].currAnimTime > MD5Model.animations[animation].totalAnimTime) MD5Model.animations[animation].currAnimTime = 0.0f; // Which frame are we on? float currentFrame = MD5Model.animations[animation].currAnimTime * MD5Model.animations[animation].frameRate; int frame0 = (int)floorf( currentFrame ); int frame1 = frame0 + 1; // Make sure we don't go over the number of frames if(frame0 == MD5Model.animations[animation].numFrames-1) frame1 = 0; float interpolation = currentFrame - frame0; // Get the remainder (in time) between frame0 and frame1 to use as interpolation factor std::vector<Joint> interpolatedSkeleton; // Create a frame skeleton to store the interpolated joints in // Compute the interpolated skeleton for( int i = 0; i < MD5Model.animations[animation].numJoints; i++) { Joint tempJoint; Joint joint0 = MD5Model.animations[animation].frameSkeleton[frame0][i]; // Get the i'th joint of frame0's skeleton Joint joint1 = MD5Model.animations[animation].frameSkeleton[frame1][i]; // Get the i'th joint of frame1's skeleton tempJoint.parentID = joint0.parentID; // Set the tempJoint's parent ID // Turn the two quaternions into XMVECTORs for easy
computations XMVECTOR joint0Orient = XMVectorSet(joint0.orientation.x, joint0.orientation.y, joint0.orientation.z, joint0.orientation.w); XMVECTOR joint1Orient = XMVectorSet(joint1.orientation.x, joint1.orientation.y, joint1.orientation.z, joint1.orientation.w); // Interpolate positions tempJoint.pos.x = joint0.pos.x + (interpolation * (joint1.pos.x - joint0.pos.x)); tempJoint.pos.y = joint0.pos.y + (interpolation * (joint1.pos.y - joint0.pos.y)); tempJoint.pos.z = joint0.pos.z + (interpolation * (joint1.pos.z - joint0.pos.z)); // Interpolate orientations using spherical interpolation (Slerp) XMStoreFloat4(&tempJoint.orientation, XMQuaternionSlerp(joint0Orient, joint1Orient, interpolation)); interpolatedSkeleton.push_back(tempJoint); // Push the joint back into our interpolated skeleton } for ( int k = 0; k < MD5Model.numSubsets; k++) { for ( int i = 0; i < MD5Model.subsets[k].vertices.size(); ++i ) { Vertex tempVert = MD5Model.subsets[k].vertices[i]; tempVert.pos = XMFLOAT3(0, 0, 0); // Make sure the vertex's pos is cleared first tempVert.normal = XMFLOAT3(0,0,0); // Clear vertices normal // Sum up the joints and weights information to get vertex's position and normal for ( int j = 0; j < tempVert.WeightCount; ++j ) { Weight tempWeight = MD5Model.subsets[k].weights[tempVert.StartWeight + j]; Joint tempJoint = interpolatedSkeleton[tempWeight.jointID]; // Convert joint orientation and weight pos to vectors for easier computation XMVECTOR tempJointOrientation = XMVectorSet(tempJoint.orientation.x, tempJoint.orientation.y, tempJoint.orientation.z, tempJoint.orientation.w); XMVECTOR tempWeightPos = XMVectorSet(tempWeight.pos.x, tempWeight.pos.y, tempWeight.pos.z, 0.0f); // We will need to use the conjugate of the joint orientation quaternion XMVECTOR tempJointOrientationConjugate = XMQuaternionInverse(tempJointOrientation); // Calculate vertex position (in joint space, eg. 
rotate the point around (0,0,0)) for this weight using the joint orientation quaternion and its conjugate // We can rotate a point using a quaternion with the equation "rotatedPoint = quaternion * point * quaternionConjugate" XMFLOAT3 rotatedPoint; XMStoreFloat3(&rotatedPoint, XMQuaternionMultiply(XMQuaternionMultiply(tempJointOrientation, tempWeightPos), tempJointOrientationConjugate)); // Now move the vertex's position from joint space (0,0,0) to the joint's position in world space, taking the weight's bias into account tempVert.pos.x += ( tempJoint.pos.x + rotatedPoint.x ) * tempWeight.bias; tempVert.pos.y += ( tempJoint.pos.y + rotatedPoint.y ) * tempWeight.bias; tempVert.pos.z += ( tempJoint.pos.z + rotatedPoint.z ) * tempWeight.bias; // Compute the normals for this frame's skeleton using the weight normals from before // We can compute the normals the same way we compute the vertex's position, only we don't have to translate them (just rotate) XMVECTOR tempWeightNormal = XMVectorSet(tempWeight.normal.x, tempWeight.normal.y, tempWeight.normal.z, 0.0f); // Rotate the normal XMStoreFloat3(&rotatedPoint, XMQuaternionMultiply(XMQuaternionMultiply(tempJointOrientation, tempWeightNormal), tempJointOrientationConjugate)); // Add to the vertex's normal, taking the weight's bias into account tempVert.normal.x -= rotatedPoint.x * tempWeight.bias; tempVert.normal.y -= rotatedPoint.y * tempWeight.bias; tempVert.normal.z -= rotatedPoint.z * tempWeight.bias; } MD5Model.subsets[k].positions[i] = tempVert.pos; // Store the vertex's position in the position vector instead of straight into the vertex vector MD5Model.subsets[k].vertices[i].normal = tempVert.normal; // Store the vertex's normal XMStoreFloat3(&MD5Model.subsets[k].vertices[i].normal, XMVector3Normalize(XMLoadFloat3(&MD5Model.subsets[k].vertices[i].normal))); } // Put the positions into the vertices for this subset for(int i = 0; i < MD5Model.subsets[k].vertices.size(); i++) { MD5Model.subsets[k].vertices[i].pos =
MD5Model.subsets[k].positions[i]; } // Update the subsets vertex buffer // First lock the buffer D3D11_MAPPED_SUBRESOURCE mappedVertBuff; hr = d3d11DevCon->Map(MD5Model.subsets[k].vertBuff, 0, D3D11_MAP_WRITE_DISCARD, 0, &mappedVertBuff); // Copy the data into the vertex buffer. memcpy(mappedVertBuff.pData, &MD5Model.subsets[k].vertices[0], (sizeof(Vertex) * MD5Model.subsets[k].vertices.size())); d3d11DevCon->Unmap(MD5Model.subsets[k].vertBuff, 0); // The line below is another way to update a buffer. You will use this when you want to update a buffer less // than once per frame, since the GPU reads will be faster (the buffer was created as a DEFAULT buffer instead // of a DYNAMIC buffer), and the CPU writes will be slower. You can try both methods to find out which one is faster // for you. if you want to use the line below, you will have to create the buffer with D3D11_USAGE_DEFAULT instead // of D3D11_USAGE_DYNAMIC //d3d11DevCon->UpdateSubresource( MD5Model.subsets[k].vertBuff, 0, NULL, &MD5Model.subsets[k].vertices[0], 0, 0 ); } } ///////////////**************new**************//////////////////// bool LoadMD5Model(std::wstring filename, Model3D& MD5Model, std::vector<ID3D11ShaderResourceView*>& shaderResourceViewArray, std::vector<std::wstring> texFileNameArray) { std::wifstream fileIn (filename.c_str()); // Open file std::wstring checkString; // Stores the next string from our file if(fileIn) // Check if the file was opened { while(fileIn) // Loop until the end of the file is reached { fileIn >> checkString; // Get next string from file if(checkString == L"MD5Version") // Get MD5 version (this function supports version 10) { /*fileIn >> checkString; MessageBox(0, checkString.c_str(), //display message L"MD5Version", MB_OK);*/ } else if ( checkString == L"commandline" ) { std::getline(fileIn, checkString); // Ignore the rest of this line } else if ( checkString == L"numJoints" ) { fileIn >> MD5Model.numJoints; // Store number of joints } else if ( checkString == 
L"numMeshes" ) { fileIn >> MD5Model.numSubsets; // Store number of meshes (which we will call subsets) } else if ( checkString == L"joints" ) { Joint tempJoint; fileIn >> checkString; // Skip the "{" for(int i = 0; i < MD5Model.numJoints; i++) { fileIn >> tempJoint.name; // Store joint's name // Sometimes the names might contain spaces. If that is the case, we need to continue // to read the name until we get to the closing " (quotation marks) if(tempJoint.name[tempJoint.name.size()-1] != '"') { wchar_t checkChar; bool jointNameFound = false; while(!jointNameFound) { checkChar = fileIn.get(); if(checkChar == '"') jointNameFound = true; tempJoint.name += checkChar; } } fileIn >> tempJoint.parentID; // Store parent joint's ID fileIn >> checkString; // Skip the "(" // Store position of this joint (swap y and z axis if model was made in RH Coord Sys) fileIn >> tempJoint.pos.x >> tempJoint.pos.z >> tempJoint.pos.y; fileIn >> checkString >> checkString; // Skip the ")" and "(" // Store orientation of this joint fileIn >> tempJoint.orientation.x >> tempJoint.orientation.z >> tempJoint.orientation.y; // Remove the quotation marks from joint's name tempJoint.name.erase(0, 1); tempJoint.name.erase(tempJoint.name.size()-1, 1); // Compute the w component of the quaternion (The MD5 format only stores the x, y, and z // components of the orientation quaternion. Since the quaternion has unit length, we can // recover w from w*w = 1 - (x*x + y*y + z*z); by MD5 convention we take the negative root. // The xyz values encode the axis of rotation scaled by the sine of half the rotation angle, // while w is the cosine of half the rotation angle) float t = 1.0f - ( tempJoint.orientation.x * tempJoint.orientation.x ) - ( tempJoint.orientation.y * tempJoint.orientation.y ) - ( tempJoint.orientation.z * tempJoint.orientation.z ); if ( t < 0.0f ) { tempJoint.orientation.w = 0.0f; } else { tempJoint.orientation.w = -sqrtf(t); } std::getline(fileIn, checkString); // Skip rest of this line MD5Model.joints.push_back(tempJoint); // Store the joint into this model's joint vector } fileIn >> checkString; // Skip the "}" } else if ( checkString == L"mesh") { ModelSubset subset; int numVerts, numTris, numWeights; fileIn >> checkString; // Skip the "{" fileIn >> checkString; while ( checkString != L"}" ) // Read until '}' { // In this lesson, for the sake of simplicity, we will assume a texture's filename is given here. // Usually though, the name of a material (stored in a material library. Think back to the lesson on // loading .obj files, where the material library was contained in the file .mtl) is given. Let this // be an exercise to load the material from a material library such as obj's .mtl file, instead of // just the texture like we will do here.
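As an aside, the w reconstruction performed on each joint above is easy to check in isolation. Below is a minimal standalone sketch (plain C++, no DirectX; `ComputeQuatW` is a hypothetical helper name, not part of the tutorial code):

```cpp
#include <cmath>

// Recover the w component of a unit quaternion from its x, y, and z,
// the same way the joint loader does. Because the quaternion has unit
// length, w*w = 1 - (x*x + y*y + z*z); MD5 takes the negative root
// by convention, and clamps to 0 if rounding pushes t below zero.
float ComputeQuatW(float x, float y, float z)
{
    float t = 1.0f - (x * x) - (y * y) - (z * z);
    return (t < 0.0f) ? 0.0f : -sqrtf(t);
}
```

For example, a joint stored as (0.5, 0.5, 0.5) yields w = -0.5, which gives a quaternion of unit length.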
if(checkString == L"shader") // Load the texture or material { std::wstring fileNamePath; fileIn >> fileNamePath; // Get texture's filename // Take spaces into account if filename or material name has a space in it if(fileNamePath[fileNamePath.size()-1] != '"') { wchar_t checkChar; bool fileNameFound = false; while(!fileNameFound) { checkChar = fileIn.get(); if(checkChar == '"') fileNameFound = true; fileNamePath += checkChar; } } // Remove the quotation marks from texture path fileNamePath.erase(0, 1); fileNamePath.erase(fileNamePath.size()-1, 1); //check if this texture has already been loaded bool alreadyLoaded = false; for(int i = 0; i < texFileNameArray.size(); ++i) { if(fileNamePath == texFileNameArray[i]) { alreadyLoaded = true; subset.texArrayIndex = i; } } //if the texture is not already loaded, load it now if(!alreadyLoaded) { ID3D11ShaderResourceView* tempMeshSRV; hr = D3DX11CreateShaderResourceViewFromFile( d3d11Device, fileNamePath.c_str(), NULL, NULL, &tempMeshSRV, NULL ); if(SUCCEEDED(hr)) { texFileNameArray.push_back(fileNamePath.c_str()); subset.texArrayIndex = shaderResourceViewArray.size(); shaderResourceViewArray.push_back(tempMeshSRV); } else { MessageBox(0, fileNamePath.c_str(), //display message L"Could Not Open:", MB_OK); return false; } } std::getline(fileIn, checkString); // Skip rest of this line } else if ( checkString == L"numverts") { fileIn >> numVerts; // Store number of vertices std::getline(fileIn, checkString); // Skip rest of this line for(int i = 0; i < numVerts; i++) { Vertex tempVert; fileIn >> checkString // Skip "vert # (" >> checkString >> checkString; fileIn >> tempVert.texCoord.x // Store tex coords >> tempVert.texCoord.y; fileIn >> checkString; // Skip ")" fileIn >> tempVert.StartWeight; // Index of first weight this vert will be weighted to fileIn >> tempVert.WeightCount; // Number of weights for this vertex std::getline(fileIn, checkString); // Skip rest of this line subset.vertices.push_back(tempVert); // Push back 
this vertex into subsets vertex vector } } else if ( checkString == L"numtris") { fileIn >> numTris; subset.numTriangles = numTris; std::getline(fileIn, checkString); // Skip rest of this line for(int i = 0; i < numTris; i++) // Loop through each triangle { DWORD tempIndex; fileIn >> checkString; // Skip "tri" fileIn >> checkString; // Skip tri counter for(int k = 0; k < 3; k++) // Store the 3 indices { fileIn >> tempIndex; subset.indices.push_back(tempIndex); } std::getline(fileIn, checkString); // Skip rest of this line } } else if ( checkString == L"numweights") { fileIn >> numWeights; std::getline(fileIn, checkString); // Skip rest of this line for(int i = 0; i < numWeights; i++) { Weight tempWeight; fileIn >> checkString >> checkString; // Skip "weight #" fileIn >> tempWeight.jointID; // Store weight's joint ID fileIn >> tempWeight.bias; // Store weight's influence over a vertex fileIn >> checkString; // Skip "(" fileIn >> tempWeight.pos.x // Store weight's pos in joint's local space >> tempWeight.pos.z >> tempWeight.pos.y; std::getline(fileIn, checkString); // Skip rest of this line subset.weights.push_back(tempWeight); // Push back tempWeight into subsets Weight array } } else std::getline(fileIn, checkString); // Skip anything else fileIn >> checkString; // Skip "}" } //*** find each vertex's position using the joints and weights ***// for ( int i = 0; i < subset.vertices.size(); ++i ) { Vertex tempVert = subset.vertices[i]; tempVert.pos = XMFLOAT3(0, 0, 0); // Make sure the vertex's pos is cleared first // Sum up the joints and weights information to get vertex's position for ( int j = 0; j < tempVert.WeightCount; ++j ) { Weight tempWeight = subset.weights[tempVert.StartWeight + j]; Joint tempJoint = MD5Model.joints[tempWeight.jointID]; // Convert joint orientation and weight pos to vectors for easier computation // When converting a 3d vector to a quaternion, you should put 0 for "w", and // When converting a quaternion to a 3d vector, you can just ignore 
the "w" XMVECTOR tempJointOrientation = XMVectorSet(tempJoint.orientation.x, tempJoint.orientation.y, tempJoint.orientation.z, tempJoint.orientation.w); XMVECTOR tempWeightPos = XMVectorSet(tempWeight.pos.x, tempWeight.pos.y, tempWeight.pos.z, 0.0f); // We will need to use the conjugate of the joint orientation quaternion // To get the conjugate of a quaternion, all you have to do is negate the x, y, and z XMVECTOR tempJointOrientationConjugate = XMVectorSet(-tempJoint.orientation.x, -tempJoint.orientation.y, -tempJoint.orientation.z, tempJoint.orientation.w); // Calculate vertex position (in joint space, eg. rotate the point around (0,0,0)) for this weight using the joint orientation quaternion and its conjugate // We can rotate a point using a quaternion with the equation "rotatedPoint = quaternion * point * quaternionConjugate" XMFLOAT3 rotatedPoint; XMStoreFloat3(&rotatedPoint, XMQuaternionMultiply(XMQuaternionMultiply(tempJointOrientation, tempWeightPos), tempJointOrientationConjugate)); // Now move the vertex's position from joint space (0,0,0) to the joint's position in world space, taking the weight's bias into account // The weight bias is used because multiple weights might have an effect on the vertex's final position. Each weight is attached to one joint. tempVert.pos.x += ( tempJoint.pos.x + rotatedPoint.x ) * tempWeight.bias; tempVert.pos.y += ( tempJoint.pos.y + rotatedPoint.y ) * tempWeight.bias; tempVert.pos.z += ( tempJoint.pos.z + rotatedPoint.z ) * tempWeight.bias; // Basically what has happened above, is we have taken the weight's position relative to the joint's position // we then rotate the weight's position (so that the weight is actually being rotated around (0, 0, 0) in world space) using // the quaternion describing the joint's rotation.
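The "rotatedPoint = quaternion * point * quaternionConjugate" step can also be reproduced without the XNA Math library. Here is a hand-rolled sketch using the textbook Hamilton product (`Quat`, `QMul`, and `RotatePoint` are hypothetical names; this is the standard convention, not a drop-in replacement for the calls above, since XMQuaternionMultiply concatenates its arguments in the reverse order):

```cpp
#include <cmath>

// A plain quaternion and the Hamilton product, used to rotate a point p
// by a unit quaternion q via: rotated = q * p * conjugate(q).
struct Quat { float x, y, z, w; };

Quat QMul(const Quat& a, const Quat& b)
{
    return Quat{
        a.w * b.x + a.x * b.w + a.y * b.z - a.z * b.y,  // x
        a.w * b.y - a.x * b.z + a.y * b.w + a.z * b.x,  // y
        a.w * b.z + a.x * b.y - a.y * b.x + a.z * b.w,  // z
        a.w * b.w - a.x * b.x - a.y * b.y - a.z * b.z   // w
    };
}

// Conjugate: negate x, y, and z, keep w
Quat QConjugate(const Quat& q) { return Quat{ -q.x, -q.y, -q.z, q.w }; }

// Treat the point as a quaternion with w = 0, rotate, and read back xyz
Quat RotatePoint(const Quat& q, float px, float py, float pz)
{
    Quat p{ px, py, pz, 0.0f };
    return QMul(QMul(q, p), QConjugate(q));
}
```

Rotating (1, 0, 0) by 90 degrees about the z axis, i.e. q = (0, 0, sin 45°, cos 45°), yields (0, 1, 0).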
We have stored this rotated point in rotatedPoint, which we then add to // the joint's position (because we rotated the weight's position around (0,0,0) in world space, and now need to translate it // so that it appears to have been rotated around the joint's position). Finally we multiply the answer with the weight's bias, // or how much control the weight has over the final vertex position. All the weight biases affecting a single vertex's position // must add up to 1. } subset.positions.push_back(tempVert.pos); // Store the vertex's position in the positions vector instead of straight into the vertex vector // since we can use the positions vector for certain things like collision detection or picking // without having to work with the entire vertex structure. } // Put the positions into the vertices for this subset for(int i = 0; i < subset.vertices.size(); i++) { subset.vertices[i].pos = subset.positions[i]; } //*** Calculate vertex normals using normal averaging ***// std::vector<XMFLOAT3> tempNormal; //normalized and unnormalized normals XMFLOAT3 unnormalized = XMFLOAT3(0.0f, 0.0f, 0.0f); //Used to get vectors (sides) from the position of the verts float vecX, vecY, vecZ; //Two edges of our triangle XMVECTOR edge1 = XMVectorSet(0.0f, 0.0f, 0.0f, 0.0f); XMVECTOR edge2 = XMVectorSet(0.0f, 0.0f, 0.0f, 0.0f); //Compute face normals for(int i = 0; i < subset.numTriangles; ++i) { //Get the vector describing one edge of our triangle (edge 0,2) vecX = subset.vertices[subset.indices[(i*3)]].pos.x - subset.vertices[subset.indices[(i*3)+2]].pos.x; vecY = subset.vertices[subset.indices[(i*3)]].pos.y - subset.vertices[subset.indices[(i*3)+2]].pos.y; vecZ = subset.vertices[subset.indices[(i*3)]].pos.z - subset.vertices[subset.indices[(i*3)+2]].pos.z; edge1 = XMVectorSet(vecX, vecY, vecZ, 0.0f); //Create our first edge //Get the vector describing another edge of our triangle (edge 2,1) vecX = subset.vertices[subset.indices[(i*3)+2]].pos.x -
subset.vertices[subset.indices[(i*3)+1]].pos.x; vecY = subset.vertices[subset.indices[(i*3)+2]].pos.y - subset.vertices[subset.indices[(i*3)+1]].pos.y; vecZ = subset.vertices[subset.indices[(i*3)+2]].pos.z - subset.vertices[subset.indices[(i*3)+1]].pos.z; edge2 = XMVectorSet(vecX, vecY, vecZ, 0.0f); //Create our second edge //Cross multiply the two edge vectors to get the un-normalized face normal XMStoreFloat3(&unnormalized, XMVector3Cross(edge1, edge2)); tempNormal.push_back(unnormalized); } //Compute vertex normals (normal Averaging) XMVECTOR normalSum = XMVectorSet(0.0f, 0.0f, 0.0f, 0.0f); int facesUsing = 0; float tX, tY, tZ; //temp axis variables //Go through each vertex for(int i = 0; i < subset.vertices.size(); ++i) { //Check which triangles use this vertex for(int j = 0; j < subset.numTriangles; ++j) { if(subset.indices[j*3] == i || subset.indices[(j*3)+1] == i || subset.indices[(j*3)+2] == i) { tX = XMVectorGetX(normalSum) + tempNormal[j].x; tY = XMVectorGetY(normalSum) + tempNormal[j].y; tZ = XMVectorGetZ(normalSum) + tempNormal[j].z; normalSum = XMVectorSet(tX, tY, tZ, 0.0f); //If a face is using the vertex, add the un-normalized face normal to the normalSum facesUsing++; } } //Get the actual normal by dividing the normalSum by the number of faces sharing the vertex normalSum = normalSum / facesUsing; //Normalize the normalSum vector normalSum = XMVector3Normalize(normalSum); //Store the normal in our current vertex subset.vertices[i].normal.x = -XMVectorGetX(normalSum); subset.vertices[i].normal.y = -XMVectorGetY(normalSum); subset.vertices[i].normal.z = -XMVectorGetZ(normalSum); ///////////////**************new**************//////////////////// // Create the joint space normal for easy normal calculations in animation Vertex tempVert = subset.vertices[i]; // Get the current vertex subset.jointSpaceNormals.push_back(XMFLOAT3(0,0,0)); // Push back a blank normal XMVECTOR normal = XMVectorSet(0.0f, 0.0f, 0.0f, 0.0f); // Clear normal for ( int k
= 0; k < tempVert.WeightCount; k++) // Loop through each of the vertex's weights { Joint tempJoint = MD5Model.joints[subset.weights[tempVert.StartWeight + k].jointID]; // Get the joint's orientation XMVECTOR jointOrientation = XMVectorSet(tempJoint.orientation.x, tempJoint.orientation.y, tempJoint.orientation.z, tempJoint.orientation.w); // Rotate the normal into joint space using the inverse of the joint's orientation normal = XMQuaternionMultiply(XMQuaternionMultiply(XMQuaternionInverse(jointOrientation), normalSum), jointOrientation); XMStoreFloat3(&subset.weights[tempVert.StartWeight + k].normal, XMVector3Normalize(normal)); // Store the normalized vector as this weight's joint-space normal ///////////////**************new**************//////////////////// //Clear normalSum, facesUsing for next vertex normalSum = XMVectorSet(0.0f, 0.0f, 0.0f, 0.0f); facesUsing = 0; } // Create index buffer D3D11_BUFFER_DESC indexBufferDesc; ZeroMemory( &indexBufferDesc, sizeof(indexBufferDesc) ); indexBufferDesc.Usage = D3D11_USAGE_DEFAULT; indexBufferDesc.ByteWidth = sizeof(DWORD) * subset.numTriangles * 3; indexBufferDesc.BindFlags = D3D11_BIND_INDEX_BUFFER; indexBufferDesc.CPUAccessFlags = 0; indexBufferDesc.MiscFlags = 0; D3D11_SUBRESOURCE_DATA iinitData; iinitData.pSysMem = &subset.indices[0]; d3d11Device->CreateBuffer(&indexBufferDesc, &iinitData, &subset.indexBuff); //Create Vertex Buffer D3D11_BUFFER_DESC vertexBufferDesc; ZeroMemory( &vertexBufferDesc, sizeof(vertexBufferDesc) ); vertexBufferDesc.Usage = D3D11_USAGE_DYNAMIC; // We will be updating this buffer, so we must set as dynamic vertexBufferDesc.ByteWidth = sizeof( Vertex ) * subset.vertices.size(); vertexBufferDesc.BindFlags = D3D11_BIND_VERTEX_BUFFER; vertexBufferDesc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE; // Give CPU power to write to buffer vertexBufferDesc.MiscFlags = 0; D3D11_SUBRESOURCE_DATA vertexBufferData; ZeroMemory( &vertexBufferData, sizeof(vertexBufferData) ); vertexBufferData.pSysMem = &subset.vertices[0]; hr =
d3d11Device->CreateBuffer( &vertexBufferDesc, &vertexBufferData, &subset.vertBuff); // Push back the temp subset into the models subset vector MD5Model.subsets.push_back(subset); } } } else { SwapChain->SetFullscreenState(false, NULL); // Make sure we are out of fullscreen // create message std::wstring message = L"Could not open: "; message += filename; MessageBox(0, message.c_str(), // display message L"Error", MB_OK); return false; } return true; } bool LoadObjModel(std::wstring filename, ID3D11Buffer** vertBuff, ID3D11Buffer** indexBuff, std::vector<int>& subsetIndexStart, std::vector<int>& subsetMaterialArray, std::vector<SurfaceMaterial>& material, int& subsetCount, bool isRHCoordSys, bool computeNormals) { HRESULT hr = 0; std::wifstream fileIn (filename.c_str()); //Open file std::wstring meshMatLib; //String to hold our obj material library filename //Arrays to store our model's information std::vector<DWORD> indices; std::vector<XMFLOAT3> vertPos; std::vector<XMFLOAT3> vertNorm; std::vector<XMFLOAT2> vertTexCoord; std::vector<std::wstring> meshMaterials; //Vertex definition indices std::vector<int> vertPosIndex; std::vector<int> vertNormIndex; std::vector<int> vertTCIndex; //Make sure we have a default if no tex coords or normals are defined bool hasTexCoord = false; bool hasNorm = false; //Temp variables to store into vectors std::wstring meshMaterialsTemp; int vertPosIndexTemp; int vertNormIndexTemp; int vertTCIndexTemp; wchar_t checkChar; //The variable we will use to store one char from file at a time std::wstring face; //Holds the string containing our face vertices int vIndex = 0; //Keep track of our vertex index count int triangleCount = 0; //Total Triangles int totalVerts = 0; int meshTriangles = 0; //Check to see if the file was opened if (fileIn) { while(fileIn) { checkChar = fileIn.get(); //Get next char switch (checkChar) { case '#': checkChar = fileIn.get(); while(checkChar != '\n') checkChar = fileIn.get(); break; case 'v': //Get Vertex 
Descriptions checkChar = fileIn.get(); if(checkChar == ' ') //v - vert position { float vz, vy, vx; fileIn >> vx >> vy >> vz; //Store the next three types if(isRHCoordSys) //If model is from an RH Coord System vertPos.push_back(XMFLOAT3( vx, vy, vz * -1.0f)); //Invert the Z axis else vertPos.push_back(XMFLOAT3( vx, vy, vz)); } if(checkChar == 't') //vt - vert tex coords { float vtcu, vtcv; fileIn >> vtcu >> vtcv; //Store next two types if(isRHCoordSys) //If model is from an RH Coord System vertTexCoord.push_back(XMFLOAT2(vtcu, 1.0f-vtcv)); //Reverse the "v" axis else vertTexCoord.push_back(XMFLOAT2(vtcu, vtcv)); hasTexCoord = true; //We know the model uses texture coords } //Since we compute the normals later, we don't need to check for normals //In the file, but i'll do it here anyway if(checkChar == 'n') //vn - vert normal { float vnx, vny, vnz; fileIn >> vnx >> vny >> vnz; //Store next three types if(isRHCoordSys) //If model is from an RH Coord System vertNorm.push_back(XMFLOAT3( vnx, vny, vnz * -1.0f )); //Invert the Z axis else vertNorm.push_back(XMFLOAT3( vnx, vny, vnz )); hasNorm = true; //We know the model defines normals } break; //New group (Subset) case 'g': //g - defines a group checkChar = fileIn.get(); if(checkChar == ' ') { subsetIndexStart.push_back(vIndex); //Start index for this subset subsetCount++; } break; //Get Face Index case 'f': //f - defines the faces checkChar = fileIn.get(); if(checkChar == ' ') { face = L""; std::wstring VertDef; //Holds one vertex definition at a time triangleCount = 0; checkChar = fileIn.get(); while(checkChar != '\n') { face += checkChar; //Add the char to our face string checkChar = fileIn.get(); //Get the next Character if(checkChar == ' ') //If its a space... 
triangleCount++; //Increase our triangle count } //Check for space at the end of our face string if(face[face.length()-1] == ' ') triangleCount--; //Each space adds to our triangle count triangleCount -= 1; //Every vertex in the face after the first two makes up a new triangle std::wstringstream ss(face); if(face.length() > 0) { int firstVIndex, lastVIndex; //Holds the first and last vertex's index for(int i = 0; i < 3; ++i) //First three vertices (first triangle) { ss >> VertDef; //Get vertex definition (vPos/vTexCoord/vNorm) std::wstring vertPart; int whichPart = 0; //(vPos, vTexCoord, or vNorm) //Parse this string for(int j = 0; j < VertDef.length(); ++j) { if(VertDef[j] != '/') //If there is no divider "/", add a char to our vertPart vertPart += VertDef[j]; //If the current char is a divider "/", or it's the last character in the string if(VertDef[j] == '/' || j == VertDef.length()-1) { std::wistringstream wstringToInt(vertPart); //Used to convert wstring to int if(whichPart == 0) //If vPos { wstringToInt >> vertPosIndexTemp; vertPosIndexTemp -= 1; //subtract one since C++ arrays start with 0, and obj starts with 1 //Check to see if the vert pos was the only thing specified if(j == VertDef.length()-1) { vertNormIndexTemp = 0; vertTCIndexTemp = 0; } } else if(whichPart == 1) //If vTexCoord { if(vertPart != L"") //Check to see if there even is a tex coord { wstringToInt >> vertTCIndexTemp; vertTCIndexTemp -= 1; //subtract one since C++ arrays start with 0, and obj starts with 1 } else //If there is no tex coord, make a default vertTCIndexTemp = 0; //If the cur.
char is the second to last in the string, then //there must be no normal, so set a default normal if(j == VertDef.length()-1) vertNormIndexTemp = 0; } else if(whichPart == 2) //If vNorm { std::wistringstream wstringToInt(vertPart); wstringToInt >> vertNormIndexTemp; vertNormIndexTemp -= 1; //subtract one since c++ arrays start with 0, and obj start with 1 } vertPart = L""; //Get ready for next vertex part whichPart++; //Move on to next vertex part } } //Check to make sure there is at least one subset if(subsetCount == 0) { subsetIndexStart.push_back(vIndex); //Start index for this subset subsetCount++; } //Avoid duplicate vertices bool vertAlreadyExists = false; if(totalVerts >= 3) //Make sure we at least have one triangle to check { //Loop through all the vertices for(int iCheck = 0; iCheck < totalVerts; ++iCheck) { //If the vertex position and texture coordinate in memory are the same //As the vertex position and texture coordinate we just now got out //of the obj file, we will set this faces vertex index to the vertex's //index value in memory. This makes sure we don't create duplicate vertices if(vertPosIndexTemp == vertPosIndex[iCheck] && !vertAlreadyExists) { if(vertTCIndexTemp == vertTCIndex[iCheck]) { indices.push_back(iCheck); //Set index for this vertex vertAlreadyExists = true; //If we've made it here, the vertex already exists } } } } //If this vertex is not already in our vertex arrays, put it there if(!vertAlreadyExists) { vertPosIndex.push_back(vertPosIndexTemp); vertTCIndex.push_back(vertTCIndexTemp); vertNormIndex.push_back(vertNormIndexTemp); totalVerts++; //We created a new vertex indices.push_back(totalVerts-1); //Set index for this vertex } //If this is the very first vertex in the face, we need to //make sure the rest of the triangles use this vertex if(i == 0) { firstVIndex = indices[vIndex]; //The first vertex index of this FACE } //If this was the last vertex in the first triangle, we will make sure //the next triangle uses this one (eg. 
tri1(1,2,3) tri2(1,3,4) tri3(1,4,5)) if(i == 2) { lastVIndex = indices[vIndex]; //The last vertex index of this TRIANGLE } vIndex++; //Increment index count } meshTriangles++; //One triangle down //If there are more than three vertices in the face definition, we need to make sure //we convert the face to triangles. We created our first triangle above, now we will //create a new triangle for every new vertex in the face, using the very first vertex //of the face, and the last vertex from the triangle before the current triangle for(int l = 0; l < triangleCount-1; ++l) //Loop through the next vertices to create new triangles { //First vertex of this triangle (the very first vertex of the face too) indices.push_back(firstVIndex); //Set index for this vertex vIndex++; //Second Vertex of this triangle (the last vertex used in the tri before this one) indices.push_back(lastVIndex); //Set index for this vertex vIndex++; //Get the third vertex for this triangle ss >> VertDef; std::wstring vertPart; int whichPart = 0; //Parse this string (same as above) for(int j = 0; j < VertDef.length(); ++j) { if(VertDef[j] != '/') vertPart += VertDef[j]; if(VertDef[j] == '/' || j == VertDef.length()-1) { std::wistringstream wstringToInt(vertPart); if(whichPart == 0) { wstringToInt >> vertPosIndexTemp; vertPosIndexTemp -= 1; //Check to see if the vert pos was the only thing specified if(j == VertDef.length()-1) { vertTCIndexTemp = 0; vertNormIndexTemp = 0; } } else if(whichPart == 1) { if(vertPart != L"") { wstringToInt >> vertTCIndexTemp; vertTCIndexTemp -= 1; } else vertTCIndexTemp = 0; if(j == VertDef.length()-1) vertNormIndexTemp = 0; } else if(whichPart == 2) { std::wistringstream wstringToInt(vertPart); wstringToInt >> vertNormIndexTemp; vertNormIndexTemp -= 1; } vertPart = L""; whichPart++; } } //Check for duplicate vertices bool vertAlreadyExists = false; if(totalVerts >= 3) //Make sure we at least have one triangle to check { for(int iCheck = 0; iCheck < totalVerts; ++iCheck) { 
if(vertPosIndexTemp == vertPosIndex[iCheck] && !vertAlreadyExists) { if(vertTCIndexTemp == vertTCIndex[iCheck]) { indices.push_back(iCheck); //Set index for this vertex vertAlreadyExists = true; //If we've made it here, the vertex already exists } } } } if(!vertAlreadyExists) { vertPosIndex.push_back(vertPosIndexTemp); vertTCIndex.push_back(vertTCIndexTemp); vertNormIndex.push_back(vertNormIndexTemp); totalVerts++; //New vertex created, add to total verts indices.push_back(totalVerts-1); //Set index for this vertex } //Set the second vertex for the next triangle to the last vertex we got lastVIndex = indices[vIndex]; //The last vertex index of this TRIANGLE meshTriangles++; //New triangle defined vIndex++; } } } break; case 'm': //mtllib - material library filename checkChar = fileIn.get(); if(checkChar == 't') { checkChar = fileIn.get(); if(checkChar == 'l') { checkChar = fileIn.get(); if(checkChar == 'l') { checkChar = fileIn.get(); if(checkChar == 'i') { checkChar = fileIn.get(); if(checkChar == 'b') { checkChar = fileIn.get(); if(checkChar == ' ') { //Store the material libraries file name fileIn >> meshMatLib; } } } } } } break; case 'u': //usemtl - which material to use checkChar = fileIn.get(); if(checkChar == 's') { checkChar = fileIn.get(); if(checkChar == 'e') { checkChar = fileIn.get(); if(checkChar == 'm') { checkChar = fileIn.get(); if(checkChar == 't') { checkChar = fileIn.get(); if(checkChar == 'l') { checkChar = fileIn.get(); if(checkChar == ' ') { meshMaterialsTemp = L""; //Make sure this is cleared fileIn >> meshMaterialsTemp; //Get next type (string) meshMaterials.push_back(meshMaterialsTemp); } } } } } } break; default: break; } } } else //If we could not open the file { SwapChain->SetFullscreenState(false, NULL); //Make sure we are out of fullscreen //create message std::wstring message = L"Could not open: "; message += filename; MessageBox(0, message.c_str(), //display message L"Error", MB_OK); return false; } 
subsetIndexStart.push_back(vIndex); //There won't be another index start after our last subset, so set it here //sometimes "g" is defined at the very top of the file, then again before the first group of faces. //This makes sure the first subset does not contain "0" indices. if(subsetIndexStart[1] == 0) { subsetIndexStart.erase(subsetIndexStart.begin()+1); subsetCount--; } //Make sure we have a default for the tex coord and normal //if one or both are not specified if(!hasNorm) vertNorm.push_back(XMFLOAT3(0.0f, 0.0f, 0.0f)); if(!hasTexCoord) vertTexCoord.push_back(XMFLOAT2(0.0f, 0.0f)); //Close the obj file, and open the mtl file fileIn.close(); fileIn.open(meshMatLib.c_str()); std::wstring lastStringRead; int matCount = material.size(); //total materials //kdset - If our diffuse color was not set, we can use the ambient color (which is usually the same) //If the diffuse color WAS set, then we don't need to set our diffuse color to ambient bool kdset = false; if (fileIn) { while(fileIn) { checkChar = fileIn.get(); //Get next char switch (checkChar) { //Check for comment case '#': checkChar = fileIn.get(); while(checkChar != '\n') checkChar = fileIn.get(); break; //Set diffuse color case 'K': checkChar = fileIn.get(); if(checkChar == 'd') //Diffuse Color { checkChar = fileIn.get(); //remove space fileIn >> material[matCount-1].difColor.x; fileIn >> material[matCount-1].difColor.y; fileIn >> material[matCount-1].difColor.z; kdset = true; } //Ambient Color (We'll store it in diffuse if there isn't a diffuse already) if(checkChar == 'a') { checkChar = fileIn.get(); //remove space if(!kdset) { fileIn >> material[matCount-1].difColor.x; fileIn >> material[matCount-1].difColor.y; fileIn >> material[matCount-1].difColor.z; } } break; //Check for transparency case 'T': checkChar = fileIn.get(); if(checkChar == 'r') { checkChar = fileIn.get(); //remove space float Transparency; fileIn >> Transparency; material[matCount-1].difColor.w = Transparency; if(Transparency > 0.0f)
material[matCount-1].transparent = true; } break; //Some obj files specify d for transparency case 'd': checkChar = fileIn.get(); if(checkChar == ' ') { float Transparency; fileIn >> Transparency; //'d' - 0 being most transparent, and 1 being opaque, opposite of Tr Transparency = 1.0f - Transparency; material[matCount-1].difColor.w = Transparency; if(Transparency > 0.0f) material[matCount-1].transparent = true; } break; //Get the diffuse map (texture) case 'm': checkChar = fileIn.get(); if(checkChar == 'a') { checkChar = fileIn.get(); if(checkChar == 'p') { checkChar = fileIn.get(); if(checkChar == '_') { //map_Kd - Diffuse map checkChar = fileIn.get(); if(checkChar == 'K') { checkChar = fileIn.get(); if(checkChar == 'd') { std::wstring fileNamePath; fileIn.get(); //Remove whitespace between map_Kd and file //Get the file path - We read the pathname char by char since //pathnames can sometimes contain spaces, so we will read until //we find the file extension bool texFilePathEnd = false; while(!texFilePathEnd) { checkChar = fileIn.get(); fileNamePath += checkChar; if(checkChar == '.') { for(int i = 0; i < 3; ++i) fileNamePath += fileIn.get(); texFilePathEnd = true; } } //check if this texture has already been loaded bool alreadyLoaded = false; for(int i = 0; i < textureNameArray.size(); ++i) { if(fileNamePath == textureNameArray[i]) { alreadyLoaded = true; material[matCount-1].texArrayIndex = i; material[matCount-1].hasTexture = true; } } //if the texture is not already loaded, load it now if(!alreadyLoaded) { ID3D11ShaderResourceView* tempMeshSRV; hr = D3DX11CreateShaderResourceViewFromFile( d3d11Device, fileNamePath.c_str(), NULL, NULL, &tempMeshSRV, NULL ); if(SUCCEEDED(hr)) { textureNameArray.push_back(fileNamePath.c_str()); material[matCount-1].texArrayIndex = meshSRV.size(); meshSRV.push_back(tempMeshSRV); material[matCount-1].hasTexture = true; } } } } //map_d - alpha map else if(checkChar == 'd') { //Alpha maps are usually the same as the diffuse map //So 
we will assume that for now by only enabling //transparency for this material, as we will already //be using the alpha channel in the diffuse map material[matCount-1].transparent = true; } //map_bump - bump map (we're using a normal map though) else if(checkChar == 'b') { checkChar = fileIn.get(); if(checkChar == 'u') { checkChar = fileIn.get(); if(checkChar == 'm') { checkChar = fileIn.get(); if(checkChar == 'p') { std::wstring fileNamePath; fileIn.get(); //Remove whitespace between map_bump and file //Get the file path - We read the pathname char by char since //pathnames can sometimes contain spaces, so we will read until //we find the file extension bool texFilePathEnd = false; while(!texFilePathEnd) { checkChar = fileIn.get(); fileNamePath += checkChar; if(checkChar == '.') { for(int i = 0; i < 3; ++i) fileNamePath += fileIn.get(); texFilePathEnd = true; } } //check if this texture has already been loaded bool alreadyLoaded = false; for(int i = 0; i < textureNameArray.size(); ++i) { if(fileNamePath == textureNameArray[i]) { alreadyLoaded = true; material[matCount-1].normMapTexArrayIndex = i; material[matCount-1].hasNormMap = true; } } //if the texture is not already loaded, load it now if(!alreadyLoaded) { ID3D11ShaderResourceView* tempMeshSRV; hr = D3DX11CreateShaderResourceViewFromFile( d3d11Device, fileNamePath.c_str(), NULL, NULL, &tempMeshSRV, NULL ); if(SUCCEEDED(hr)) { textureNameArray.push_back(fileNamePath.c_str()); material[matCount-1].normMapTexArrayIndex = meshSRV.size(); meshSRV.push_back(tempMeshSRV); material[matCount-1].hasNormMap = true; } } } } } } } } } break; case 'n': //newmtl - Declare new material checkChar = fileIn.get(); if(checkChar == 'e') { checkChar = fileIn.get(); if(checkChar == 'w') { checkChar = fileIn.get(); if(checkChar == 'm') { checkChar = fileIn.get(); if(checkChar == 't') { checkChar = fileIn.get(); if(checkChar == 'l') { checkChar = fileIn.get(); if(checkChar == ' ') { //New material, set its defaults SurfaceMaterial
tempMat; material.push_back(tempMat); fileIn >> material[matCount].matName; material[matCount].transparent = false; material[matCount].hasTexture = false; material[matCount].hasNormMap = false; material[matCount].normMapTexArrayIndex = 0; material[matCount].texArrayIndex = 0; matCount++; kdset = false; } } } } } } break; default: break; } } } else { SwapChain->SetFullscreenState(false, NULL); //Make sure we are out of fullscreen std::wstring message = L"Could not open: "; message += meshMatLib; MessageBox(0, message.c_str(), L"Error", MB_OK); return false; } //Set each subset's material to the index value //of its material in our material array for(int i = 0; i < subsetCount; ++i) { bool hasMat = false; for(int j = 0; j < material.size(); ++j) { if(meshMaterials[i] == material[j].matName) { subsetMaterialArray.push_back(j); hasMat = true; } } if(!hasMat) subsetMaterialArray.push_back(0); //Use first material in array } std::vector<Vertex> vertices; Vertex tempVert; //Create our vertices using the information we got //from the file and store them in a vector for(int j = 0 ; j < totalVerts; ++j) { tempVert.pos = vertPos[vertPosIndex[j]]; tempVert.normal = vertNorm[vertNormIndex[j]]; tempVert.texCoord = vertTexCoord[vertTCIndex[j]]; vertices.push_back(tempVert); } //////////////////////Compute Normals/////////////////////////// //If computeNormals was set to true then we will create our own //normals, if it was set to false we will use the obj file's normals if(computeNormals) { std::vector<XMFLOAT3> tempNormal; //normalized and unnormalized normals XMFLOAT3 unnormalized = XMFLOAT3(0.0f, 0.0f, 0.0f); //tangent stuff std::vector<XMFLOAT3> tempTangent; XMFLOAT3 tangent = XMFLOAT3(0.0f, 0.0f, 0.0f); float tcU1, tcV1, tcU2, tcV2; //Used to get vectors (sides) from the position of the verts float vecX, vecY, vecZ; //Two edges of our triangle XMVECTOR edge1 = XMVectorSet(0.0f, 0.0f, 0.0f, 0.0f); XMVECTOR edge2 = XMVectorSet(0.0f, 0.0f, 0.0f, 0.0f); //Compute face normals
//And Tangents for(int i = 0; i < meshTriangles; ++i) { //Get the vector describing one edge of our triangle (edge 0,2) vecX = vertices[indices[(i*3)]].pos.x - vertices[indices[(i*3)+2]].pos.x; vecY = vertices[indices[(i*3)]].pos.y - vertices[indices[(i*3)+2]].pos.y; vecZ = vertices[indices[(i*3)]].pos.z - vertices[indices[(i*3)+2]].pos.z; edge1 = XMVectorSet(vecX, vecY, vecZ, 0.0f); //Create our first edge //Get the vector describing another edge of our triangle (edge 2,1) vecX = vertices[indices[(i*3)+2]].pos.x - vertices[indices[(i*3)+1]].pos.x; vecY = vertices[indices[(i*3)+2]].pos.y - vertices[indices[(i*3)+1]].pos.y; vecZ = vertices[indices[(i*3)+2]].pos.z - vertices[indices[(i*3)+1]].pos.z; edge2 = XMVectorSet(vecX, vecY, vecZ, 0.0f); //Create our second edge //Cross multiply the two edge vectors to get the un-normalized face normal XMStoreFloat3(&unnormalized, XMVector3Cross(edge1, edge2)); tempNormal.push_back(unnormalized); //Find first texture coordinate edge 2d vector tcU1 = vertices[indices[(i*3)]].texCoord.x - vertices[indices[(i*3)+2]].texCoord.x; tcV1 = vertices[indices[(i*3)]].texCoord.y - vertices[indices[(i*3)+2]].texCoord.y; //Find second texture coordinate edge 2d vector tcU2 = vertices[indices[(i*3)+2]].texCoord.x - vertices[indices[(i*3)+1]].texCoord.x; tcV2 = vertices[indices[(i*3)+2]].texCoord.y - vertices[indices[(i*3)+1]].texCoord.y; //Find tangent using both tex coord edges and position edges tangent.x = (tcV1 * XMVectorGetX(edge1) - tcV2 * XMVectorGetX(edge2)) * (1.0f / (tcU1 * tcV2 - tcU2 * tcV1)); tangent.y = (tcV1 * XMVectorGetY(edge1) - tcV2 * XMVectorGetY(edge2)) * (1.0f / (tcU1 * tcV2 - tcU2 * tcV1)); tangent.z = (tcV1 * XMVectorGetZ(edge1) - tcV2 * XMVectorGetZ(edge2)) * (1.0f / (tcU1 * tcV2 - tcU2 * tcV1)); tempTangent.push_back(tangent); } //Compute vertex normals (normal Averaging) XMVECTOR normalSum = XMVectorSet(0.0f, 0.0f, 0.0f, 0.0f); XMVECTOR tangentSum = XMVectorSet(0.0f, 0.0f, 0.0f, 0.0f); int facesUsing = 0; float tX, 
tY, tZ; //temp axis variables //Go through each vertex for(int i = 0; i < totalVerts; ++i) { //Check which triangles use this vertex for(int j = 0; j < meshTriangles; ++j) { if(indices[j*3] == i || indices[(j*3)+1] == i || indices[(j*3)+2] == i) { tX = XMVectorGetX(normalSum) + tempNormal[j].x; tY = XMVectorGetY(normalSum) + tempNormal[j].y; tZ = XMVectorGetZ(normalSum) + tempNormal[j].z; normalSum = XMVectorSet(tX, tY, tZ, 0.0f); //If a face is using the vertex, add the un-normalized face normal to the normalSum //We can reuse tX, tY, tZ to sum up tangents tX = XMVectorGetX(tangentSum) + tempTangent[j].x; tY = XMVectorGetY(tangentSum) + tempTangent[j].y; tZ = XMVectorGetZ(tangentSum) + tempTangent[j].z; tangentSum = XMVectorSet(tX, tY, tZ, 0.0f); //sum up face tangents using this vertex facesUsing++; } } //Get the actual normal by dividing the normalSum by the number of faces sharing the vertex normalSum = normalSum / facesUsing; tangentSum = tangentSum / facesUsing; //Normalize the normalSum vector and tangent normalSum = XMVector3Normalize(normalSum); tangentSum = XMVector3Normalize(tangentSum); //Store the normal and tangent in our current vertex vertices[i].normal.x = XMVectorGetX(normalSum); vertices[i].normal.y = XMVectorGetY(normalSum); vertices[i].normal.z = XMVectorGetZ(normalSum); vertices[i].tangent.x = XMVectorGetX(tangentSum); vertices[i].tangent.y = XMVectorGetY(tangentSum); vertices[i].tangent.z = XMVectorGetZ(tangentSum); //Clear normalSum, tangentSum and facesUsing for next vertex normalSum = XMVectorSet(0.0f, 0.0f, 0.0f, 0.0f); tangentSum = XMVectorSet(0.0f, 0.0f, 0.0f, 0.0f); facesUsing = 0; } } //Create index buffer D3D11_BUFFER_DESC indexBufferDesc; ZeroMemory( &indexBufferDesc, sizeof(indexBufferDesc) ); indexBufferDesc.Usage = D3D11_USAGE_DEFAULT; indexBufferDesc.ByteWidth = sizeof(DWORD) * meshTriangles*3; indexBufferDesc.BindFlags = D3D11_BIND_INDEX_BUFFER; indexBufferDesc.CPUAccessFlags = 0; indexBufferDesc.MiscFlags = 0; 
D3D11_SUBRESOURCE_DATA iinitData; iinitData.pSysMem = &indices[0]; d3d11Device->CreateBuffer(&indexBufferDesc, &iinitData, indexBuff); //Create Vertex Buffer D3D11_BUFFER_DESC vertexBufferDesc; ZeroMemory( &vertexBufferDesc, sizeof(vertexBufferDesc) ); vertexBufferDesc.Usage = D3D11_USAGE_DEFAULT; vertexBufferDesc.ByteWidth = sizeof( Vertex ) * totalVerts; vertexBufferDesc.BindFlags = D3D11_BIND_VERTEX_BUFFER; vertexBufferDesc.CPUAccessFlags = 0; vertexBufferDesc.MiscFlags = 0; D3D11_SUBRESOURCE_DATA vertexBufferData; ZeroMemory( &vertexBufferData, sizeof(vertexBufferData) ); vertexBufferData.pSysMem = &vertices[0]; hr = d3d11Device->CreateBuffer( &vertexBufferDesc, &vertexBufferData, vertBuff); return true; } void CreateSphere(int LatLines, int LongLines) { NumSphereVertices = ((LatLines-2) * LongLines) + 2; NumSphereFaces = ((LatLines-3)*(LongLines)*2) + (LongLines*2); float sphereYaw = 0.0f; float spherePitch = 0.0f; std::vector<Vertex> vertices(NumSphereVertices); XMVECTOR currVertPos = XMVectorSet(0.0f, 0.0f, 1.0f, 0.0f); vertices[0].pos.x = 0.0f; vertices[0].pos.y = 0.0f; vertices[0].pos.z = 1.0f; for(DWORD i = 0; i < LatLines-2; ++i) { spherePitch = (i+1) * (3.14f/(LatLines-1)); Rotationx = XMMatrixRotationX(spherePitch); for(DWORD j = 0; j < LongLines; ++j) { sphereYaw = j * (6.28f/(LongLines)); Rotationy = XMMatrixRotationZ(sphereYaw); currVertPos = XMVector3TransformNormal( XMVectorSet(0.0f, 0.0f, 1.0f, 0.0f), (Rotationx * Rotationy) ); currVertPos = XMVector3Normalize( currVertPos ); vertices[i*LongLines+j+1].pos.x = XMVectorGetX(currVertPos); vertices[i*LongLines+j+1].pos.y = XMVectorGetY(currVertPos); vertices[i*LongLines+j+1].pos.z = XMVectorGetZ(currVertPos); } } vertices[NumSphereVertices-1].pos.x = 0.0f; vertices[NumSphereVertices-1].pos.y = 0.0f; vertices[NumSphereVertices-1].pos.z = -1.0f; D3D11_BUFFER_DESC vertexBufferDesc; ZeroMemory( &vertexBufferDesc, sizeof(vertexBufferDesc) ); vertexBufferDesc.Usage = D3D11_USAGE_DEFAULT; 
vertexBufferDesc.ByteWidth = sizeof( Vertex ) * NumSphereVertices; vertexBufferDesc.BindFlags = D3D11_BIND_VERTEX_BUFFER; vertexBufferDesc.CPUAccessFlags = 0; vertexBufferDesc.MiscFlags = 0; D3D11_SUBRESOURCE_DATA vertexBufferData; ZeroMemory( &vertexBufferData, sizeof(vertexBufferData) ); vertexBufferData.pSysMem = &vertices[0]; hr = d3d11Device->CreateBuffer( &vertexBufferDesc, &vertexBufferData, &sphereVertBuffer); std::vector<DWORD> indices(NumSphereFaces * 3); int k = 0; for(DWORD l = 0; l < LongLines-1; ++l) { indices[k] = 0; indices[k+1] = l+1; indices[k+2] = l+2; k += 3; } indices[k] = 0; indices[k+1] = LongLines; indices[k+2] = 1; k += 3; for(DWORD i = 0; i < LatLines-3; ++i) { for(DWORD j = 0; j < LongLines-1; ++j) { indices[k] = i*LongLines+j+1; indices[k+1] = i*LongLines+j+2; indices[k+2] = (i+1)*LongLines+j+1; indices[k+3] = (i+1)*LongLines+j+1; indices[k+4] = i*LongLines+j+2; indices[k+5] = (i+1)*LongLines+j+2; k += 6; // next quad } indices[k] = (i*LongLines)+LongLines; indices[k+1] = (i*LongLines)+1; indices[k+2] = ((i+1)*LongLines)+LongLines; indices[k+3] = ((i+1)*LongLines)+LongLines; indices[k+4] = (i*LongLines)+1; indices[k+5] = ((i+1)*LongLines)+1; k += 6; } for(DWORD l = 0; l < LongLines-1; ++l) { indices[k] = NumSphereVertices-1; indices[k+1] = (NumSphereVertices-1)-(l+1); indices[k+2] = (NumSphereVertices-1)-(l+2); k += 3; } indices[k] = NumSphereVertices-1; indices[k+1] = (NumSphereVertices-1)-LongLines; indices[k+2] = NumSphereVertices-2; D3D11_BUFFER_DESC indexBufferDesc; ZeroMemory( &indexBufferDesc, sizeof(indexBufferDesc) ); indexBufferDesc.Usage = D3D11_USAGE_DEFAULT; indexBufferDesc.ByteWidth = sizeof(DWORD) * NumSphereFaces * 3; indexBufferDesc.BindFlags = D3D11_BIND_INDEX_BUFFER; indexBufferDesc.CPUAccessFlags = 0; indexBufferDesc.MiscFlags = 0; D3D11_SUBRESOURCE_DATA iinitData; iinitData.pSysMem = &indices[0]; d3d11Device->CreateBuffer(&indexBufferDesc, &iinitData, &sphereIndexBuffer); } void InitD2DScreenTexture() { //Create the 
vertex buffer Vertex v[] = { // Front Face Vertex(-1.0f, -1.0f, -1.0f, 0.0f, 1.0f,-1.0f, -1.0f, -1.0f, 0.0f, 0.0f, 0.0f), Vertex(-1.0f, 1.0f, -1.0f, 0.0f, 0.0f,-1.0f, 1.0f, -1.0f, 0.0f, 0.0f, 0.0f), Vertex( 1.0f, 1.0f, -1.0f, 1.0f, 0.0f, 1.0f, 1.0f, -1.0f, 0.0f, 0.0f, 0.0f), Vertex( 1.0f, -1.0f, -1.0f, 1.0f, 1.0f, 1.0f, -1.0f, -1.0f, 0.0f, 0.0f, 0.0f), }; DWORD indices[] = { // Front Face 0, 1, 2, 0, 2, 3, }; D3D11_BUFFER_DESC indexBufferDesc; ZeroMemory( &indexBufferDesc, sizeof(indexBufferDesc) ); indexBufferDesc.Usage = D3D11_USAGE_DEFAULT; indexBufferDesc.ByteWidth = sizeof(DWORD) * 2 * 3; indexBufferDesc.BindFlags = D3D11_BIND_INDEX_BUFFER; indexBufferDesc.CPUAccessFlags = 0; indexBufferDesc.MiscFlags = 0; D3D11_SUBRESOURCE_DATA iinitData; iinitData.pSysMem = indices; d3d11Device->CreateBuffer(&indexBufferDesc, &iinitData, &d2dIndexBuffer); D3D11_BUFFER_DESC vertexBufferDesc; ZeroMemory( &vertexBufferDesc, sizeof(vertexBufferDesc) ); vertexBufferDesc.Usage = D3D11_USAGE_DEFAULT; vertexBufferDesc.ByteWidth = sizeof( Vertex ) * 4; vertexBufferDesc.BindFlags = D3D11_BIND_VERTEX_BUFFER; vertexBufferDesc.CPUAccessFlags = 0; vertexBufferDesc.MiscFlags = 0; D3D11_SUBRESOURCE_DATA vertexBufferData; ZeroMemory( &vertexBufferData, sizeof(vertexBufferData) ); vertexBufferData.pSysMem = v; hr = d3d11Device->CreateBuffer( &vertexBufferDesc, &vertexBufferData, &d2dVertBuffer); //Create A shader resource view from the texture D2D will render to, //So we can use it to texture a square which overlays our scene d3d11Device->CreateShaderResourceView(sharedTex11, NULL, &d2dTexture); } bool InitScene() { InitD2DScreenTexture(); CreateSphere(10, 10); if(!LoadObjModel(L"ground.obj", &meshVertBuff, &meshIndexBuff, meshSubsetIndexStart, meshSubsetTexture, material, meshSubsets, true, true)) return false; if(!LoadMD5Model(L"boy.md5mesh", NewMD5Model, meshSRV, textureNameArray)) return false; ///////////////**************new**************//////////////////// 
if(!LoadMD5Anim(L"boy.md5anim", NewMD5Model)) return false; ///////////////**************new**************//////////////////// //Compile Shaders from shader file hr = D3DX11CompileFromFile(L"Effects.fx", 0, 0, "VS", "vs_4_0", 0, 0, 0, &VS_Buffer, 0, 0); hr = D3DX11CompileFromFile(L"Effects.fx", 0, 0, "PS", "ps_4_0", 0, 0, 0, &PS_Buffer, 0, 0); hr = D3DX11CompileFromFile(L"Effects.fx", 0, 0, "D2D_PS", "ps_4_0", 0, 0, 0, &D2D_PS_Buffer, 0, 0); hr = D3DX11CompileFromFile(L"Effects.fx", 0, 0, "SKYMAP_VS", "vs_4_0", 0, 0, 0, &SKYMAP_VS_Buffer, 0, 0); hr = D3DX11CompileFromFile(L"Effects.fx", 0, 0, "SKYMAP_PS", "ps_4_0", 0, 0, 0, &SKYMAP_PS_Buffer, 0, 0); //Create the Shader Objects hr = d3d11Device->CreateVertexShader(VS_Buffer->GetBufferPointer(), VS_Buffer->GetBufferSize(), NULL, &VS); hr = d3d11Device->CreatePixelShader(PS_Buffer->GetBufferPointer(), PS_Buffer->GetBufferSize(), NULL, &PS); hr = d3d11Device->CreatePixelShader(D2D_PS_Buffer->GetBufferPointer(), D2D_PS_Buffer->GetBufferSize(), NULL, &D2D_PS); hr = d3d11Device->CreateVertexShader(SKYMAP_VS_Buffer->GetBufferPointer(), SKYMAP_VS_Buffer->GetBufferSize(), NULL, &SKYMAP_VS); hr = d3d11Device->CreatePixelShader(SKYMAP_PS_Buffer->GetBufferPointer(), SKYMAP_PS_Buffer->GetBufferSize(), NULL, &SKYMAP_PS); //Set Vertex and Pixel Shaders d3d11DevCon->VSSetShader(VS, 0, 0); d3d11DevCon->PSSetShader(PS, 0, 0); light.pos = XMFLOAT3(0.0f, 7.0f, 0.0f); light.dir = XMFLOAT3(-0.5f, 0.75f, -0.5f); light.range = 1000.0f; light.cone = 12.0f; light.att = XMFLOAT3(0.4f, 0.02f, 0.000f); light.ambient = XMFLOAT4(0.2f, 0.2f, 0.2f, 1.0f); light.diffuse = XMFLOAT4(1.0f, 1.0f, 1.0f, 1.0f); //Create the Input Layout hr = d3d11Device->CreateInputLayout( layout, numElements, VS_Buffer->GetBufferPointer(), VS_Buffer->GetBufferSize(), &vertLayout ); //Set the Input Layout d3d11DevCon->IASetInputLayout( vertLayout ); //Set Primitive Topology d3d11DevCon->IASetPrimitiveTopology( D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST ); //Create the Viewport 
D3D11_VIEWPORT viewport; ZeroMemory(&viewport, sizeof(D3D11_VIEWPORT)); viewport.TopLeftX = 0; viewport.TopLeftY = 0; viewport.Width = Width; viewport.Height = Height; viewport.MinDepth = 0.0f; viewport.MaxDepth = 1.0f; //Set the Viewport d3d11DevCon->RSSetViewports(1, &viewport); //Create the buffer to send to the cbuffer in effect file D3D11_BUFFER_DESC cbbd; ZeroMemory(&cbbd, sizeof(D3D11_BUFFER_DESC)); cbbd.Usage = D3D11_USAGE_DEFAULT; cbbd.ByteWidth = sizeof(cbPerObject); cbbd.BindFlags = D3D11_BIND_CONSTANT_BUFFER; cbbd.CPUAccessFlags = 0; cbbd.MiscFlags = 0; hr = d3d11Device->CreateBuffer(&cbbd, NULL, &cbPerObjectBuffer); //Create the buffer to send to the cbuffer per frame in effect file ZeroMemory(&cbbd, sizeof(D3D11_BUFFER_DESC)); cbbd.Usage = D3D11_USAGE_DEFAULT; cbbd.ByteWidth = sizeof(cbPerFrame); cbbd.BindFlags = D3D11_BIND_CONSTANT_BUFFER; cbbd.CPUAccessFlags = 0; cbbd.MiscFlags = 0; hr = d3d11Device->CreateBuffer(&cbbd, NULL, &cbPerFrameBuffer); //Camera information camPosition = XMVectorSet( 0.0f, 5.0f, -8.0f, 0.0f ); camTarget = XMVectorSet( 0.0f, 0.5f, 0.0f, 0.0f ); camUp = XMVectorSet( 0.0f, 1.0f, 0.0f, 0.0f ); //Set the View matrix camView = XMMatrixLookAtLH( camPosition, camTarget, camUp ); //Set the Projection matrix camProjection = XMMatrixPerspectiveFovLH( 0.4f*3.14f, (float)Width/Height, 1.0f, 1000.0f); D3D11_BLEND_DESC blendDesc; ZeroMemory( &blendDesc, sizeof(blendDesc) ); D3D11_RENDER_TARGET_BLEND_DESC rtbd; ZeroMemory( &rtbd, sizeof(rtbd) ); rtbd.BlendEnable = true; rtbd.SrcBlend = D3D11_BLEND_SRC_COLOR; rtbd.DestBlend = D3D11_BLEND_INV_SRC_ALPHA; rtbd.BlendOp = D3D11_BLEND_OP_ADD; rtbd.SrcBlendAlpha = D3D11_BLEND_ONE; rtbd.DestBlendAlpha = D3D11_BLEND_ZERO; rtbd.BlendOpAlpha = D3D11_BLEND_OP_ADD; rtbd.RenderTargetWriteMask = D3D10_COLOR_WRITE_ENABLE_ALL; blendDesc.AlphaToCoverageEnable = false; blendDesc.RenderTarget[0] = rtbd; d3d11Device->CreateBlendState(&blendDesc, &d2dTransparency); ZeroMemory( &rtbd, sizeof(rtbd) ); 
rtbd.BlendEnable = true; rtbd.SrcBlend = D3D11_BLEND_INV_SRC_ALPHA; rtbd.DestBlend = D3D11_BLEND_SRC_ALPHA; rtbd.BlendOp = D3D11_BLEND_OP_ADD; rtbd.SrcBlendAlpha = D3D11_BLEND_INV_SRC_ALPHA; rtbd.DestBlendAlpha = D3D11_BLEND_SRC_ALPHA; rtbd.BlendOpAlpha = D3D11_BLEND_OP_ADD; rtbd.RenderTargetWriteMask = D3D10_COLOR_WRITE_ENABLE_ALL; blendDesc.AlphaToCoverageEnable = false; blendDesc.RenderTarget[0] = rtbd; d3d11Device->CreateBlendState(&blendDesc, &Transparency); ///Load Skymap's cube texture/// D3DX11_IMAGE_LOAD_INFO loadSMInfo; loadSMInfo.MiscFlags = D3D11_RESOURCE_MISC_TEXTURECUBE; ID3D11Texture2D* SMTexture = 0; hr = D3DX11CreateTextureFromFile(d3d11Device, L"skymap.dds", &loadSMInfo, 0, (ID3D11Resource**)&SMTexture, 0); D3D11_TEXTURE2D_DESC SMTextureDesc; SMTexture->GetDesc(&SMTextureDesc); D3D11_SHADER_RESOURCE_VIEW_DESC SMViewDesc; SMViewDesc.Format = SMTextureDesc.Format; SMViewDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURECUBE; SMViewDesc.TextureCube.MipLevels = SMTextureDesc.MipLevels; SMViewDesc.TextureCube.MostDetailedMip = 0; hr = d3d11Device->CreateShaderResourceView(SMTexture, &SMViewDesc, &smrv); // Describe the Sample State D3D11_SAMPLER_DESC sampDesc; ZeroMemory( &sampDesc, sizeof(sampDesc) ); sampDesc.Filter = D3D11_FILTER_MIN_MAG_MIP_LINEAR; sampDesc.AddressU = D3D11_TEXTURE_ADDRESS_WRAP; sampDesc.AddressV = D3D11_TEXTURE_ADDRESS_WRAP; sampDesc.AddressW = D3D11_TEXTURE_ADDRESS_WRAP; sampDesc.ComparisonFunc = D3D11_COMPARISON_NEVER; sampDesc.MinLOD = 0; sampDesc.MaxLOD = D3D11_FLOAT32_MAX; //Create the Sample State hr = d3d11Device->CreateSamplerState( &sampDesc, &CubesTexSamplerState ); D3D11_RASTERIZER_DESC cmdesc; ZeroMemory(&cmdesc, sizeof(D3D11_RASTERIZER_DESC)); cmdesc.FillMode = D3D11_FILL_SOLID; cmdesc.CullMode = D3D11_CULL_BACK; cmdesc.FrontCounterClockwise = true; hr = d3d11Device->CreateRasterizerState(&cmdesc, &CCWcullMode); cmdesc.FrontCounterClockwise = false; hr = d3d11Device->CreateRasterizerState(&cmdesc, &CWcullMode); 
cmdesc.CullMode = D3D11_CULL_NONE; //cmdesc.FillMode = D3D11_FILL_WIREFRAME; hr = d3d11Device->CreateRasterizerState(&cmdesc, &RSCullNone); D3D11_DEPTH_STENCIL_DESC dssDesc; ZeroMemory(&dssDesc, sizeof(D3D11_DEPTH_STENCIL_DESC)); dssDesc.DepthEnable = true; dssDesc.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ALL; dssDesc.DepthFunc = D3D11_COMPARISON_LESS_EQUAL; d3d11Device->CreateDepthStencilState(&dssDesc, &DSLessEqual); return true; } void StartTimer() { LARGE_INTEGER frequencyCount; QueryPerformanceFrequency(&frequencyCount); countsPerSecond = double(frequencyCount.QuadPart); QueryPerformanceCounter(&frequencyCount); CounterStart = frequencyCount.QuadPart; } double GetTime() { LARGE_INTEGER currentTime; QueryPerformanceCounter(&currentTime); return double(currentTime.QuadPart-CounterStart)/countsPerSecond; } double GetFrameTime() { LARGE_INTEGER currentTime; __int64 tickCount; QueryPerformanceCounter(&currentTime); tickCount = currentTime.QuadPart-frameTimeOld; frameTimeOld = currentTime.QuadPart; if(tickCount < 0) tickCount = 0; return double(tickCount)/countsPerSecond; } void UpdateScene(double time) { //Reset sphereWorld sphereWorld = XMMatrixIdentity(); //Define sphereWorld's world space matrix Scale = XMMatrixScaling( 5.0f, 5.0f, 5.0f ); //Make sure the sphere is always centered around camera Translation = XMMatrixTranslation( XMVectorGetX(camPosition), XMVectorGetY(camPosition), XMVectorGetZ(camPosition) ); //Set sphereWorld's world space using the transformations sphereWorld = Scale * Translation; //the loaded model's world space meshWorld = XMMatrixIdentity(); Rotation = XMMatrixRotationY(3.14f); Scale = XMMatrixScaling( 1.0f, 1.0f, 1.0f ); Translation = XMMatrixTranslation( 0.0f, 0.0f, 0.0f ); meshWorld = Rotation * Scale * Translation; Scale = XMMatrixScaling( 0.04f, 0.04f, 0.04f ); // The model is a bit too large for our scene, so make it smaller Translation = XMMatrixTranslation( 0.0f, 3.0f, 0.0f); smilesWorld = Scale * Translation; } void 
RenderText(std::wstring text, int inInt) { d3d11DevCon->PSSetShader(D2D_PS, 0, 0); //Release the D3D 11 Device keyedMutex11->ReleaseSync(0); //Use D3D10.1 device keyedMutex10->AcquireSync(0, 5); //Draw D2D content D2DRenderTarget->BeginDraw(); //Clear D2D Background D2DRenderTarget->Clear(D2D1::ColorF(0.0f, 0.0f, 0.0f, 0.0f)); //Create our string std::wostringstream printString; printString << text << inInt; printText = printString.str(); //Set the Font Color D2D1_COLOR_F FontColor = D2D1::ColorF(1.0f, 1.0f, 1.0f, 1.0f); //Set the brush color D2D will use to draw with Brush->SetColor(FontColor); //Create the D2D Render Area D2D1_RECT_F layoutRect = D2D1::RectF(0, 0, Width, Height); //Draw the Text D2DRenderTarget->DrawText( printText.c_str(), wcslen(printText.c_str()), TextFormat, layoutRect, Brush ); D2DRenderTarget->EndDraw(); //Release the D3D10.1 Device keyedMutex10->ReleaseSync(1); //Use the D3D11 Device keyedMutex11->AcquireSync(1, 5); //Use the shader resource representing the direct2d render target //to texture a square which is rendered in screen space so it //overlays on top of our entire scene. 
We use alpha blending so //that the entire background of the D2D render target is "invisible", //And only the stuff we draw with D2D will be visible (the text) //Set the blend state for D2D render target texture objects d3d11DevCon->OMSetBlendState(d2dTransparency, NULL, 0xffffffff); //Set the d2d Index buffer d3d11DevCon->IASetIndexBuffer( d2dIndexBuffer, DXGI_FORMAT_R32_UINT, 0); //Set the d2d vertex buffer UINT stride = sizeof( Vertex ); UINT offset = 0; d3d11DevCon->IASetVertexBuffers( 0, 1, &d2dVertBuffer, &stride, &offset ); WVP = XMMatrixIdentity(); cbPerObj.WVP = XMMatrixTranspose(WVP); d3d11DevCon->UpdateSubresource( cbPerObjectBuffer, 0, NULL, &cbPerObj, 0, 0 ); d3d11DevCon->VSSetConstantBuffers( 0, 1, &cbPerObjectBuffer ); d3d11DevCon->PSSetShaderResources( 0, 1, &d2dTexture ); d3d11DevCon->PSSetSamplers( 0, 1, &CubesTexSamplerState ); d3d11DevCon->RSSetState(CWcullMode); d3d11DevCon->DrawIndexed( 6, 0, 0 ); } void DrawScene() { //Clear our render target and depth/stencil view float bgColor[4] = { 0.5f, 0.5f, 0.5f, 1.0f }; d3d11DevCon->ClearRenderTargetView(renderTargetView, bgColor); d3d11DevCon->ClearDepthStencilView(depthStencilView, D3D11_CLEAR_DEPTH|D3D11_CLEAR_STENCIL, 1.0f, 0); constbuffPerFrame.light = light; d3d11DevCon->UpdateSubresource( cbPerFrameBuffer, 0, NULL, &constbuffPerFrame, 0, 0 ); d3d11DevCon->PSSetConstantBuffers(0, 1, &cbPerFrameBuffer); //Set our Render Target d3d11DevCon->OMSetRenderTargets( 1, &renderTargetView, depthStencilView ); //Set the default blend state (no blending) for opaque objects d3d11DevCon->OMSetBlendState(0, 0, 0xffffffff); //Set Vertex and Pixel Shaders d3d11DevCon->VSSetShader(VS, 0, 0); d3d11DevCon->PSSetShader(PS, 0, 0); UINT stride = sizeof( Vertex ); UINT offset = 0; ///***Draw MD5 Model***/// for(int i = 0; i < NewMD5Model.numSubsets; i ++) { //Set the grounds index buffer d3d11DevCon->IASetIndexBuffer( NewMD5Model.subsets[i].indexBuff, DXGI_FORMAT_R32_UINT, 0); //Set the grounds vertex buffer 
d3d11DevCon->IASetVertexBuffers( 0, 1, &NewMD5Model.subsets[i].vertBuff, &stride, &offset ); //Set the WVP matrix and send it to the constant buffer in effect file WVP = smilesWorld * camView * camProjection; cbPerObj.WVP = XMMatrixTranspose(WVP); cbPerObj.World = XMMatrixTranspose(smilesWorld); cbPerObj.hasTexture = true; // We'll assume all md5 subsets have textures cbPerObj.hasNormMap = false; // We'll also assume md5 models have no normal map (easy to change later though) d3d11DevCon->UpdateSubresource( cbPerObjectBuffer, 0, NULL, &cbPerObj, 0, 0 ); d3d11DevCon->VSSetConstantBuffers( 0, 1, &cbPerObjectBuffer ); d3d11DevCon->PSSetConstantBuffers( 1, 1, &cbPerObjectBuffer ); d3d11DevCon->PSSetShaderResources( 0, 1, &meshSRV[NewMD5Model.subsets[i].texArrayIndex] ); d3d11DevCon->PSSetSamplers( 0, 1, &CubesTexSamplerState ); d3d11DevCon->RSSetState(RSCullNone); d3d11DevCon->DrawIndexed( NewMD5Model.subsets[i].indices.size(), 0, 0 ); } /////Draw our model's NON-transparent subsets///// for(int i = 0; i < meshSubsets; ++i) { //Set the grounds index buffer d3d11DevCon->IASetIndexBuffer( meshIndexBuff, DXGI_FORMAT_R32_UINT, 0); //Set the grounds vertex buffer d3d11DevCon->IASetVertexBuffers( 0, 1, &meshVertBuff, &stride, &offset ); //Set the WVP matrix and send it to the constant buffer in effect file WVP = meshWorld * camView * camProjection; cbPerObj.WVP = XMMatrixTranspose(WVP); cbPerObj.World = XMMatrixTranspose(meshWorld); cbPerObj.difColor = material[meshSubsetTexture[i]].difColor; cbPerObj.hasTexture = material[meshSubsetTexture[i]].hasTexture; cbPerObj.hasNormMap = material[meshSubsetTexture[i]].hasNormMap; d3d11DevCon->UpdateSubresource( cbPerObjectBuffer, 0, NULL, &cbPerObj, 0, 0 ); d3d11DevCon->VSSetConstantBuffers( 0, 1, &cbPerObjectBuffer ); d3d11DevCon->PSSetConstantBuffers( 1, 1, &cbPerObjectBuffer ); if(material[meshSubsetTexture[i]].hasTexture) d3d11DevCon->PSSetShaderResources( 0, 1, &meshSRV[material[meshSubsetTexture[i]].texArrayIndex] ); 
if(material[meshSubsetTexture[i]].hasNormMap) d3d11DevCon->PSSetShaderResources( 1, 1, &meshSRV[material[meshSubsetTexture[i]].normMapTexArrayIndex] ); d3d11DevCon->PSSetSamplers( 0, 1, &CubesTexSamplerState ); d3d11DevCon->RSSetState(RSCullNone); int indexStart = meshSubsetIndexStart[i]; int indexDrawAmount = meshSubsetIndexStart[i+1] - meshSubsetIndexStart[i]; if(!material[meshSubsetTexture[i]].transparent) d3d11DevCon->DrawIndexed( indexDrawAmount, indexStart, 0 ); } /////Draw the Sky's Sphere////// //Set the spheres index buffer d3d11DevCon->IASetIndexBuffer( sphereIndexBuffer, DXGI_FORMAT_R32_UINT, 0); //Set the spheres vertex buffer d3d11DevCon->IASetVertexBuffers( 0, 1, &sphereVertBuffer, &stride, &offset ); //Set the WVP matrix and send it to the constant buffer in effect file WVP = sphereWorld * camView * camProjection; cbPerObj.WVP = XMMatrixTranspose(WVP); cbPerObj.World = XMMatrixTranspose(sphereWorld); d3d11DevCon->UpdateSubresource( cbPerObjectBuffer, 0, NULL, &cbPerObj, 0, 0 ); d3d11DevCon->VSSetConstantBuffers( 0, 1, &cbPerObjectBuffer ); //Send our skymap resource view to pixel shader d3d11DevCon->PSSetShaderResources( 0, 1, &smrv ); d3d11DevCon->PSSetSamplers( 0, 1, &CubesTexSamplerState ); //Set the new VS and PS shaders d3d11DevCon->VSSetShader(SKYMAP_VS, 0, 0); d3d11DevCon->PSSetShader(SKYMAP_PS, 0, 0); //Set the new depth/stencil and RS states d3d11DevCon->OMSetDepthStencilState(DSLessEqual, 0); d3d11DevCon->RSSetState(RSCullNone); d3d11DevCon->DrawIndexed( NumSphereFaces * 3, 0, 0 ); //Set the default VS, PS shaders and depth/stencil state d3d11DevCon->VSSetShader(VS, 0, 0); d3d11DevCon->PSSetShader(PS, 0, 0); d3d11DevCon->OMSetDepthStencilState(NULL, 0); /////Draw our model's TRANSPARENT subsets now///// //Set our blend state d3d11DevCon->OMSetBlendState(Transparency, NULL, 0xffffffff); for(int i = 0; i < meshSubsets; ++i) { //Set the grounds index buffer d3d11DevCon->IASetIndexBuffer( meshIndexBuff, DXGI_FORMAT_R32_UINT, 0); //Set the 
grounds vertex buffer d3d11DevCon->IASetVertexBuffers( 0, 1, &meshVertBuff, &stride, &offset ); //Set the WVP matrix and send it to the constant buffer in effect file WVP = meshWorld * camView * camProjection; cbPerObj.WVP = XMMatrixTranspose(WVP); cbPerObj.World = XMMatrixTranspose(meshWorld); cbPerObj.difColor = material[meshSubsetTexture[i]].difColor; cbPerObj.hasTexture = material[meshSubsetTexture[i]].hasTexture; cbPerObj.hasNormMap = material[meshSubsetTexture[i]].hasNormMap; d3d11DevCon->UpdateSubresource( cbPerObjectBuffer, 0, NULL, &cbPerObj, 0, 0 ); d3d11DevCon->VSSetConstantBuffers( 0, 1, &cbPerObjectBuffer ); d3d11DevCon->PSSetConstantBuffers( 1, 1, &cbPerObjectBuffer ); if(material[meshSubsetTexture[i]].hasTexture) d3d11DevCon->PSSetShaderResources( 0, 1, &meshSRV[material[meshSubsetTexture[i]].texArrayIndex] ); if(material[meshSubsetTexture[i]].hasNormMap) d3d11DevCon->PSSetShaderResources( 1, 1, &meshSRV[material[meshSubsetTexture[i]].normMapTexArrayIndex] ); d3d11DevCon->PSSetSamplers( 0, 1, &CubesTexSamplerState ); d3d11DevCon->RSSetState(RSCullNone); int indexStart = meshSubsetIndexStart[i]; int indexDrawAmount = meshSubsetIndexStart[i+1] - meshSubsetIndexStart[i]; if(material[meshSubsetTexture[i]].transparent) d3d11DevCon->DrawIndexed( indexDrawAmount, indexStart, 0 ); } RenderText(L"FPS: ", fps); //Present the backbuffer to the screen SwapChain->Present(0, 0); } int messageloop(){ MSG msg; ZeroMemory(&msg, sizeof(MSG)); while(true) { if (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE)) { if (msg.message == WM_QUIT) break; TranslateMessage(&msg); DispatchMessage(&msg); } else{ // run game code frameCount++; if(GetTime() > 1.0f) { fps = frameCount; frameCount = 0; StartTimer(); } frameTime = GetFrameTime(); DetectInput(frameTime); UpdateScene(frameTime); DrawScene(); } } return (int)msg.wParam; } LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, 
LPARAM lParam) { switch( msg ) { case WM_KEYDOWN: if( wParam == VK_ESCAPE ){ DestroyWindow(hwnd); } return 0; case WM_DESTROY: PostQuitMessage(0); return 0; } return DefWindowProc(hwnd, msg, wParam, lParam); } Effects.fx struct Light { float3 pos; float range; float3 dir; float cone; float3 att; float4 ambient; float4 diffuse; }; cbuffer cbPerFrame { Light light; }; cbuffer cbPerObject { float4x4 WVP; float4x4 World; float4 difColor; bool hasTexture; bool hasNormMap; }; Texture2D ObjTexture; Texture2D ObjNormMap; SamplerState ObjSamplerState; TextureCube SkyMap; struct VS_OUTPUT { float4 Pos : SV_POSITION; float4 worldPos : POSITION; float2 TexCoord : TEXCOORD; float3 normal : NORMAL; float3 tangent : TANGENT; }; struct SKYMAP_VS_OUTPUT //output structure for skymap vertex shader { float4 Pos : SV_POSITION; float3 texCoord : TEXCOORD; }; VS_OUTPUT VS(float4 inPos : POSITION, float2 inTexCoord : TEXCOORD, float3 normal : NORMAL, float3 tangent : TANGENT) { VS_OUTPUT output; output.Pos = mul(inPos, WVP); output.worldPos = mul(inPos, World); output.normal = mul(normal, World); output.tangent = mul(tangent, World); output.TexCoord = inTexCoord; return output; } SKYMAP_VS_OUTPUT SKYMAP_VS(float3 inPos : POSITION, float2 inTexCoord : TEXCOORD, float3 normal : NORMAL, float3 tangent : TANGENT) { SKYMAP_VS_OUTPUT output = (SKYMAP_VS_OUTPUT)0; //Set Pos to xyww instead of xyzw, so that z will always be 1 (furthest from camera) output.Pos = mul(float4(inPos, 1.0f), WVP).xyww; output.texCoord = inPos; return output; } float4 PS(VS_OUTPUT input) : SV_TARGET { input.normal = normalize(input.normal); //Set diffuse color of material float4 diffuse = difColor; //If material has a diffuse texture map, set it now if(hasTexture == true) diffuse = ObjTexture.Sample( ObjSamplerState, input.TexCoord ); //If material has a normal map, we can set it now if(hasNormMap == true) { //Load normal from normal map float4 normalMap = ObjNormMap.Sample( ObjSamplerState, input.TexCoord ); //Change 
normal map range from [0, 1] to [-1, 1] normalMap = (2.0f*normalMap) - 1.0f; //Make sure tangent is completely orthogonal to normal input.tangent = normalize(input.tangent - dot(input.tangent, input.normal)*input.normal); //Create the biTangent float3 biTangent = cross(input.normal, input.tangent); //Create the "Texture Space" float3x3 texSpace = float3x3(input.tangent, biTangent, input.normal); //Transform the normal from the normal map out of texture space and store it in input.normal input.normal = normalize(mul(normalMap.xyz, texSpace)); } float3 finalColor; finalColor = diffuse * light.ambient; finalColor += saturate(dot(light.dir, input.normal) * light.diffuse * diffuse); return float4(finalColor, diffuse.a); } float4 SKYMAP_PS(SKYMAP_VS_OUTPUT input) : SV_Target { return SkyMap.Sample(ObjSamplerState, input.texCoord); } float4 D2D_PS(VS_OUTPUT input) : SV_TARGET { float4 diffuse = ObjTexture.Sample( ObjSamplerState, input.TexCoord ); return diffuse; }
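The StartTimer/GetTime/GetFrameTime helpers above rely on QueryPerformanceCounter and a couple of globals. The same frame-timing logic can be sketched portably with std::chrono; the FrameTimer class below is our own illustrative name and wrapper, not part of the tutorial code:

```cpp
#include <chrono>

// Portable equivalent of the StartTimer/GetTime/GetFrameTime trio above,
// using std::chrono::steady_clock instead of QueryPerformanceCounter.
class FrameTimer
{
public:
    using clock = std::chrono::steady_clock;

    FrameTimer() : start(clock::now()), last(start) {}

    // Seconds since the timer was (re)started - used for the FPS counter
    double Elapsed() const
    {
        return std::chrono::duration<double>(clock::now() - start).count();
    }

    // Seconds since the previous call - used to scale movement per frame
    double FrameDelta()
    {
        clock::time_point now = clock::now();
        double dt = std::chrono::duration<double>(now - last).count();
        last = now;
        return dt < 0.0 ? 0.0 : dt; // clamp, mirroring the tickCount < 0 check
    }

    // Equivalent of calling StartTimer() again once a second has passed
    void Restart() { start = clock::now(); last = start; }

private:
    clock::time_point start;
    clock::time_point last;
};
```

steady_clock is monotonic, so unlike a wall-clock source the delta can never go backwards; the clamp is kept only to mirror the original code's defensive check.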
Comments
Hello! Thank you for your tutorials! They are great! I've stumbled upon a problem in this one and the animation tutorial. If you look in the .mesh file, you'll notice that the "Bip01 Spine" joint has the parent id 1, but in the .anim file it says that the same joint has the parent id 2. This makes the animation loader exit early and never finish.
on Mar 17 `16
draculavid
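The parent-id mismatch draculavid describes is exactly what the loader's joint check trips on. That check can be pulled out into a standalone validation pass so the failure is easy to report instead of silently bailing out; a minimal sketch, where JointInfo and SkeletonsMatch are our illustrative names rather than types from the tutorial:

```cpp
#include <string>
#include <vector>

// Minimal joint description shared by the .md5mesh and .md5anim hierarchies
struct JointInfo
{
    std::string name;
    int parentID;
};

// Returns true if the anim file's joint hierarchy matches the mesh's.
// MD5 requires the same joints, in the same order, with the same parents.
bool SkeletonsMatch(const std::vector<JointInfo>& meshJoints,
                    const std::vector<JointInfo>& animJoints)
{
    if (meshJoints.size() != animJoints.size())
        return false;
    for (size_t i = 0; i < meshJoints.size(); ++i)
    {
        if (meshJoints[i].name != animJoints[i].name ||
            meshJoints[i].parentID != animJoints[i].parentID)
            return false;
    }
    return true;
}
```

Running a check like this right after LoadMD5Anim parses the "hierarchy" block makes it obvious which joint disagrees when an exported mesh/anim pair is inconsistent.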
Hi, I've followed this tutorial with Dx11 on Windows 8 (with differences that prevent you from using some of the older DirectX API functions). I can render the model in bind pose, but as soon as I use the animation code the model becomes a complete mess (I'm using a different model). I've checked the loading and animation code and I appear to be following the steps exactly, without success. My code is available from: https://github.com/jdg534/APF3DA Please let me know what steps I need to perform to fix the animation code.
on Apr 13 `16
jdgnintendogit