16. Simple Lighting
There are four types of light and three kinds of light sources. We will go over them in this lesson and learn how to implement simple lighting using a "Directional" light source. We will cover the other two kinds of light sources, as well as specular lighting, in later lessons.
DX11_Lesson_16_0_Dir...zip 97.34 kb
##Introduction##

I don't believe I need to stress how important light is to the realistic look and feel of video games. Since the fixed-function pipeline was removed in Direct3D 10, we now need to implement our lights manually. Typically, the more accurate a lighting model is, the more computationally expensive and complex it is, and believe me, it can get REALLY expensive. In video games we need to implement light while keeping the game running at 30 fps and above. When modeling scenes for a movie, each frame is pre-rendered, so they can afford to spend hours, if not days, rendering a single frame. In this lesson we will learn how to implement the simplest of lights, the directional light, also known as the parallel light. When doing the calculations to find the color of a pixel being hit by light, the resulting color values may end up greater than 1 or less than 0. To fix this, we use an HLSL function called saturate, which clamps each component to the [0, 1] range.

##Types of Light##

We can put the different kinds of light into the following four groups:

**Ambient**

Ambient light is used so that parts of our scene that are not touched directly by a light source do not end up pitch black. Ambient light is basically the lowest amount of light any part of our scene can have. In real life, ambient light is all the light that bounces around off every surface, lighting up the entire area even where the direct light from a light source does not reach. As we said a moment ago, animated movies can afford a complex lighting model that calculates all the light flying around a room, but in video games we don't have time for those calculations, so we use a simple hack which makes sure every part of our scene gets a little bit of light. To find this value, we do a component-wise multiplication of the ambient color of our light and the diffuse color of our material, e.g.:

    LightAmbient(0.5, 0.5, 0.5) * MaterialDiffuse(0.75, 1.0, 0.5) = (0.375, 0.5, 0.25)

**Diffuse**

To keep things simple and performance friendly, we assume that when light strikes a surface, it is reflected off the surface equally at all angles. This way we do not need to take the position of our camera into account: no matter what angle we look at the object from, the same amount of light bounces off it and enters our eye. To find this value, we first find out whether the pixel is directly in the light from the light source, and how much, using Lambert's cosine law:

    inLight = max(dot(lightDirection, surfaceNormal), 0)

Then we do a component-wise multiplication of the diffuse value of our light, the diffuse value of our material, and how much the surface is in the light (if it is), e.g.:

    inLight = dot(lightDirection, surfaceNormal) = 1
    (to keep it simple, lightDirection and surfaceNormal are unit vectors pointing the same way here)
    1 * LightDiffuse(1.0, 1.0, 1.0) * MaterialDiffuse(0.75, 1.0, 0.5) = (0.75, 1.0, 0.5)

A small C++ sketch of this ambient and diffuse math appears at the end of this section.

**Specular**

This is the most expensive kind of light, but it can make a scene beautifully realistic when used correctly. This is the one that gives you a "glare" off a glossy object when the light source's light bounces directly off the glossy surface into your eye. We'll leave this one out of this lesson to keep it simple, and cover it in a later lesson.

**Emissive**

Emissive light is different from the other light types in that it doesn't actually send out light and light up other objects. It's the "glow" that lights up an object which is itself sending out light. An example of an object using this kind of light is a light bulb. In a video game, when you look at a turned-on light bulb, you can see it glowing, but that glow is independent of the light the bulb is actually casting on the rest of the scene.
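To make the ambient and diffuse math above concrete, here is a minimal, hedged C++ sketch that reproduces the two worked examples on the CPU. The names (Float3, Mul, Saturate, and so on) are made up for illustration; in the actual lesson this work happens in the pixel shader, shown later.

    #include <algorithm>
    #include <cstdio>

    // Tiny stand-in for a 3-component color/vector (for illustration only).
    struct Float3 { float x, y, z; };

    // Component-wise multiply, dot product, and a clamp to [0, 1] like HLSL's saturate.
    Float3 Mul(Float3 a, Float3 b) { return { a.x * b.x, a.y * b.y, a.z * b.z }; }
    float  Dot(Float3 a, Float3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
    float  Saturate(float v)       { return std::min(std::max(v, 0.0f), 1.0f); }

    int main()
    {
        Float3 lightAmbient    = { 0.5f,  0.5f, 0.5f };
        Float3 lightDiffuse    = { 1.0f,  1.0f, 1.0f };
        Float3 materialDiffuse = { 0.75f, 1.0f, 0.5f };

        // Ambient term: LightAmbient * MaterialDiffuse = (0.375, 0.5, 0.25)
        Float3 ambient = Mul(lightAmbient, materialDiffuse);

        // Lambert term: unit normal and unit light direction pointing the same way, so dot = 1
        Float3 surfaceNormal  = { 0.0f, 1.0f, 0.0f };
        Float3 lightDirection = { 0.0f, 1.0f, 0.0f };
        float inLight = Saturate(Dot(lightDirection, surfaceNormal));

        // Diffuse term: inLight * LightDiffuse * MaterialDiffuse = (0.75, 1.0, 0.5)
        Float3 diffuse = Mul(lightDiffuse, materialDiffuse);
        diffuse = { inLight * diffuse.x, inLight * diffuse.y, inLight * diffuse.z };

        printf("ambient: (%.3f, %.3f, %.3f)\n", ambient.x, ambient.y, ambient.z);
        printf("diffuse: (%.3f, %.3f, %.3f)\n", diffuse.x, diffuse.y, diffuse.z);
        return 0;
    }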
Its the "glow" that lights up an object sending out light. An example of an object using this kind of light is a light bulb. In a video game, when you look at a turned on light bulb, you can see it glowing, but the glowing is independent of the light that the light bulb is actually sending out. ##Types of Light Sources## There are three different kinds of light sources: Directional, Spotlights, and Point Lights. **Directional Lights (A.K.A. Parallel Lights)** +[http://www.braynzarsoft.net/image/100035][Directional Lighting] Directional lights, also known as Parallel lights, are the simplest implimentation of a light. They have no source, only a direction and color. A good example of a directional light in the real world would be the sun. Although on a massive scale, the sun would be more like a point light. This is the type of lighting model we will use for this lesson. **Point Lights** +[http://www.braynzarsoft.net/image/100036][Point Lights] Point Lights have a position, a fall off point or range, and a color. There is no need for a single direction in point lights, as they send light in all directions. They are very similar to directional lights, but the difference is they can have a fall off point, which means if something is too far away from the source of the pointlight, it will not receive light, and the other thing is, when computing directional light, the light vector used to find out if a surface is in the light is the same for all objects in a scene, but when using a pointlight, the light vector changes from point to point. for example, if you have a lightbulb in the middle of the room, and a box on either side of the room, the left side of the box on the right side of the room will be in the light, and the right side of the box on the left side of the room will be lit up. Using directional lighting, the same side of both boxes will be lit up (that was a little wordy, sorry). An example of a point light would be a light bulb, or on a massive scale, the sun. **Spotlights** +[http://www.braynzarsoft.net/image/100037][Spot Lights] Spotlights are the most expensive light source, as they take the most to compute. They have a direction, a position, a color, two angles, and a fall off point or range. The reason for having two cones is to give us extra flexibility, as a point directly in the center of the cone should be brighter than a point on the edge of the cone. The inner cone is brighter than the outer cone, and there should be a falloff from the center to the edge of the cone. To find how bright a surface is in while inside the cone of light, if its actually inside, we can do the same type of calculation as we do with the specular light. However, to keep this lesson simple, we'll just cover these lights in a later lesson. An example of a Spotlight would be a flashlight. ##Normals## Normals are used to define which direction a surface is facing. Here we will use them to determine if the surface is in the light from a light source, and how much light it should recieve. Translating the vertices in world space makes the normal vector not of unit lenght, meaning the values in the normal vector are greater than 1 or less than zero. To fix this, we use a function in the HLSL language called normalize to fix them. **Vertex Normals** +[http://www.braynzarsoft.net/image/100038][Vertex Normals] We define a normal when defining a vertex. Then we use this normal for each pixel on the surface to dertermine how much light that pixel will recieve. 
##Normals##

Normals are used to define which direction a surface is facing. Here we will use them to determine whether a surface is lit by a light source, and how much light it should receive. Transforming the vertices into world space can leave the normal vector no longer of unit length (for example when the world matrix contains scaling). To fix this, we use the HLSL function normalize.

**Vertex Normals**

+[http://www.braynzarsoft.net/image/100038][Vertex Normals]

We define a normal when defining a vertex. This normal is then used for each pixel on the surface to determine how much light that pixel will receive. However, if we create an object such as a sphere, we use a technique called normal averaging to make the sphere appear smooth. Without normal averaging, the entire surface of each triangle would be lit independently of the surrounding triangles, making the sphere look "choppy".

**Face Normals**

+[http://www.braynzarsoft.net/image/100039][Face Normals]

Face normals are the same idea as vertex normals, but instead of defining the direction a vertex is facing, they define the direction a surface (face) is facing. We do not actually define face normals when creating an object, only vertex normals; we average the vertex normals of a face to get its face normal.

**Normal Averaging**

+[http://www.braynzarsoft.net/image/100040][Normal Averaging]

Normal averaging is a technique used to make lighting look less "choppy". To get the averaged normal for a vertex, we take the normal of each surface that uses that vertex and average them. This way the light changes gradually across each face, instead of jumping an entire face at a time.

##Global Declarations##

The only new thing here is a buffer to hold our cbPerFrame data, so we can send it to the pixel shader.

    ID3D11Buffer* cbPerFrameBuffer;

##Updated Constant Buffer##

Now we are going to start sending our object's world space matrix to the effect file. We do this to make sure the lighting strikes the object according to its orientation and position in world space; we will use the World matrix to transform the object's normals into world space. The best way to explain why is with an example. When we implement a camera, what really happens is that the entire world moves around the camera instead of the camera moving through the world. Because of this, the direction of a directional light does not change with the rest of the world, so we need to calculate the lighting in world space; otherwise it would look like the direction of the light moves with the camera.

    struct cbPerObject
    {
        XMMATRIX WVP;
        XMMATRIX World;
    };

##The Light Structure##

We need to create a light structure which matches the Light structure we will put in our effect file. We will send this structure to the pixel shader to fill in the effect file's light structure. First we clear the memory when a new Light structure is created. Then we have a direction, a pad, an ambient, and a diffuse member. We know what the dir, ambient, and diffuse members are for, but what is the pad for? HLSL packs structure members into 4D (16-byte) vectors, and a single variable is not allowed to be split across two of those 4D vectors. That is why we use the pad variable. Look at the code: the first member of the structure is an XMFLOAT3, a 3D vector. The member below it is an XMFLOAT4, a 4D vector. Without the pad variable between them, the first component of the ambient member would be packed into the same 4D vector as dir when the data is sent to HLSL. To stop this from happening, the pad variable fills the fourth slot of the 4D vector containing dir, so ambient starts cleanly in the next one. I hope that makes sense. After we define the structure, we declare a Light object.

    struct Light
    {
        Light()
        {
            ZeroMemory(this, sizeof(Light));
        }
        XMFLOAT3 dir;
        float pad;
        XMFLOAT4 ambient;
        XMFLOAT4 diffuse;
    };

    Light light;
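To make the padding rule concrete, here is a small, hedged C++ sketch of compile-time checks for the layout described above. It uses stand-in float types so it is self-contained; the real code uses XMFLOAT3/XMFLOAT4, which have the same size and layout.

    #include <cstddef>

    // Stand-ins for XMFLOAT3 / XMFLOAT4 (same size and layout, illustration only).
    struct Float3 { float x, y, z; };
    struct Float4 { float x, y, z, w; };

    struct LightMirror
    {
        Float3 dir;      // bytes  0..11 : fills 3 of the 4 slots of the first 16-byte vector
        float  pad;      // bytes 12..15 : fills the 4th slot so 'ambient' starts a new vector
        Float4 ambient;  // bytes 16..31
        Float4 diffuse;  // bytes 32..47
    };

    // With the pad in place, each float4-sized member starts on a 16-byte boundary,
    // which matches how HLSL packs the cbuffer version of this struct.
    static_assert(offsetof(LightMirror, ambient) == 16, "ambient must start a new 16-byte vector");
    static_assert(offsetof(LightMirror, diffuse) == 32, "diffuse must start a new 16-byte vector");
    static_assert(sizeof(LightMirror) == 48, "three 16-byte vectors in total");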
##cbPerFrame Structure##

This is the structure we will send to the pixel shader's constant buffer. As the name says, we will update it and send it to the PS constant buffer once every frame, unlike the other constant buffer, which we send to the VS once per object. The only thing in it right now is a Light object.

    struct cbPerFrame
    {
        Light light;
    };

    cbPerFrame constbuffPerFrame;

##The UpdateScene() Function's New Parameter##

time is the new parameter UpdateScene() takes. We will pass in the value returned by GetFrameTime() and use it to update our scene.

    void UpdateScene(double time)

##Updated Vertex Structure & Vertex Layout##

Since we are implementing lighting, we need to add a new member to our vertex structure. This member holds the normal data, so we can decide whether and how much light is striking the surface.

    struct Vertex    //Overloaded Vertex Structure
    {
        Vertex(){}
        Vertex(float x, float y, float z,
            float u, float v,
            float nx, float ny, float nz)
            : pos(x,y,z), texCoord(u, v), normal(nx, ny, nz){}

        XMFLOAT3 pos;
        XMFLOAT2 texCoord;
        XMFLOAT3 normal;
    };

    D3D11_INPUT_ELEMENT_DESC layout[] =
    {
        { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0,  D3D11_INPUT_PER_VERTEX_DATA, 0 },
        { "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT,    0, 12, D3D11_INPUT_PER_VERTEX_DATA, 0 },
        { "NORMAL",   0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 20, D3D11_INPUT_PER_VERTEX_DATA, 0 }
    };

##Clean Up##

Don't forget to release the new buffer.

    void CleanUp()
    {
        //Release the COM Objects we created
        SwapChain->Release();
        d3d11Device->Release();
        d3d11DevCon->Release();
        renderTargetView->Release();
        squareVertBuffer->Release();
        squareIndexBuffer->Release();
        VS->Release();
        PS->Release();
        VS_Buffer->Release();
        PS_Buffer->Release();
        vertLayout->Release();
        depthStencilView->Release();
        depthStencilBuffer->Release();
        cbPerObjectBuffer->Release();
        Transparency->Release();
        CCWcullMode->Release();
        CWcullMode->Release();
        d3d101Device->Release();
        keyedMutex11->Release();
        keyedMutex10->Release();
        D2DRenderTarget->Release();
        Brush->Release();
        BackBuffer11->Release();
        sharedTex11->Release();
        DWriteFactory->Release();
        TextFormat->Release();
        d2dTexture->Release();

        ///////////////**************new**************////////////////////
        cbPerFrameBuffer->Release();
        ///////////////**************new**************////////////////////
    }

##Define the Light##

Next, go down to the InitScene() function, where we define the direction, ambient, and diffuse members of our light. We give it a white diffuse color and a dark ambient color, so that surfaces the light is not striking appear much darker than surfaces it IS striking.

    light.dir = XMFLOAT3(0.25f, 0.5f, -1.0f);
    light.ambient = XMFLOAT4(0.2f, 0.2f, 0.2f, 1.0f);
    light.diffuse = XMFLOAT4(1.0f, 1.0f, 1.0f, 1.0f);

##Adding Normal Data to the Cube##

Now we need to add normal data to our cube. You may wonder how I got these normals: all I did was use the position of each vertex, since every vertex is an equal distance from the center of the cube at (0, 0, 0). Most of the time you won't be that lucky, but when we get to loading models, the program you export the model from will usually export normals too. Often it will export face normals, which you will need to average to get the vertex normals, as sketched below. The full cube vertex list follows after the sketch.
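Here is a minimal, hedged C++ sketch of that averaging step. It assumes the lesson's Vertex structure and the headers already included by main.cpp (windows.h, xnamath.h); ComputeAveragedNormals is a made-up helper name, not part of the tutorial's code, and depending on your winding order you may need to swap the cross product arguments.

    #include <vector>

    // For every vertex, sum the face normals of the triangles that use it,
    // then normalize the sum to get the averaged vertex normal.
    void ComputeAveragedNormals(std::vector<Vertex>& verts, const std::vector<DWORD>& indices)
    {
        // Start every normal at zero.
        for (auto& v : verts)
            v.normal = XMFLOAT3(0.0f, 0.0f, 0.0f);

        for (size_t i = 0; i + 2 < indices.size(); i += 3)
        {
            Vertex& a = verts[indices[i]];
            Vertex& b = verts[indices[i + 1]];
            Vertex& c = verts[indices[i + 2]];

            // Face normal = cross product of two triangle edges.
            XMVECTOR e1 = XMVectorSubtract(XMLoadFloat3(&b.pos), XMLoadFloat3(&a.pos));
            XMVECTOR e2 = XMVectorSubtract(XMLoadFloat3(&c.pos), XMLoadFloat3(&a.pos));
            XMVECTOR faceNormal = XMVector3Cross(e1, e2);

            // Add this face's normal to each of its three vertices.
            for (size_t j = 0; j < 3; ++j)
            {
                Vertex& v = verts[indices[i + j]];
                XMVECTOR sum = XMVectorAdd(XMLoadFloat3(&v.normal), faceNormal);
                XMStoreFloat3(&v.normal, sum);
            }
        }

        // Normalize the accumulated sums to get unit-length vertex normals.
        for (auto& v : verts)
        {
            XMVECTOR n = XMVector3Normalize(XMLoadFloat3(&v.normal));
            XMStoreFloat3(&v.normal, n);
        }
    }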
    Vertex v[] =
    {
        // Front Face
        Vertex(-1.0f, -1.0f, -1.0f, 0.0f, 1.0f,-1.0f, -1.0f, -1.0f),
        Vertex(-1.0f,  1.0f, -1.0f, 0.0f, 0.0f,-1.0f,  1.0f, -1.0f),
        Vertex( 1.0f,  1.0f, -1.0f, 1.0f, 0.0f, 1.0f,  1.0f, -1.0f),
        Vertex( 1.0f, -1.0f, -1.0f, 1.0f, 1.0f, 1.0f, -1.0f, -1.0f),

        // Back Face
        Vertex(-1.0f, -1.0f, 1.0f, 1.0f, 1.0f,-1.0f, -1.0f, 1.0f),
        Vertex( 1.0f, -1.0f, 1.0f, 0.0f, 1.0f, 1.0f, -1.0f, 1.0f),
        Vertex( 1.0f,  1.0f, 1.0f, 0.0f, 0.0f, 1.0f,  1.0f, 1.0f),
        Vertex(-1.0f,  1.0f, 1.0f, 1.0f, 0.0f,-1.0f,  1.0f, 1.0f),

        // Top Face
        Vertex(-1.0f, 1.0f, -1.0f, 0.0f, 1.0f,-1.0f, 1.0f, -1.0f),
        Vertex(-1.0f, 1.0f,  1.0f, 0.0f, 0.0f,-1.0f, 1.0f,  1.0f),
        Vertex( 1.0f, 1.0f,  1.0f, 1.0f, 0.0f, 1.0f, 1.0f,  1.0f),
        Vertex( 1.0f, 1.0f, -1.0f, 1.0f, 1.0f, 1.0f, 1.0f, -1.0f),

        // Bottom Face
        Vertex(-1.0f, -1.0f, -1.0f, 1.0f, 1.0f,-1.0f, -1.0f, -1.0f),
        Vertex( 1.0f, -1.0f, -1.0f, 0.0f, 1.0f, 1.0f, -1.0f, -1.0f),
        Vertex( 1.0f, -1.0f,  1.0f, 0.0f, 0.0f, 1.0f, -1.0f,  1.0f),
        Vertex(-1.0f, -1.0f,  1.0f, 1.0f, 0.0f,-1.0f, -1.0f,  1.0f),

        // Left Face
        Vertex(-1.0f, -1.0f,  1.0f, 0.0f, 1.0f,-1.0f, -1.0f,  1.0f),
        Vertex(-1.0f,  1.0f,  1.0f, 0.0f, 0.0f,-1.0f,  1.0f,  1.0f),
        Vertex(-1.0f,  1.0f, -1.0f, 1.0f, 0.0f,-1.0f,  1.0f, -1.0f),
        Vertex(-1.0f, -1.0f, -1.0f, 1.0f, 1.0f,-1.0f, -1.0f, -1.0f),

        // Right Face
        Vertex( 1.0f, -1.0f, -1.0f, 0.0f, 1.0f, 1.0f, -1.0f, -1.0f),
        Vertex( 1.0f,  1.0f, -1.0f, 0.0f, 0.0f, 1.0f,  1.0f, -1.0f),
        Vertex( 1.0f,  1.0f,  1.0f, 1.0f, 0.0f, 1.0f,  1.0f,  1.0f),
        Vertex( 1.0f, -1.0f,  1.0f, 1.0f, 1.0f, 1.0f, -1.0f,  1.0f),
    };

Go down a little further in the function and you will see we have created a new buffer to hold our cbPerFrame structure, which contains our light data. We have already covered buffer creation, so I won't go through it again. Later we will send this buffer to the pixel shader to implement the light for each pixel.

    ZeroMemory(&cbbd, sizeof(D3D11_BUFFER_DESC));

    cbbd.Usage = D3D11_USAGE_DEFAULT;
    cbbd.ByteWidth = sizeof(cbPerFrame);
    cbbd.BindFlags = D3D11_BIND_CONSTANT_BUFFER;
    cbbd.CPUAccessFlags = 0;
    cbbd.MiscFlags = 0;

    hr = d3d11Device->CreateBuffer(&cbbd, NULL, &cbPerFrameBuffer);
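One hedged side note, not from the tutorial itself: Direct3D 11 requires a constant buffer's ByteWidth to be a multiple of 16 bytes. Our cbPerFrame happens to be exactly 48 bytes thanks to the pad member in Light, but a small compile-time check like the sketch below can catch layout mistakes early.

    // Compile-time sanity check for the per-frame constant buffer layout.
    // sizeof(Light) is 48 bytes (12 + 4 + 16 + 16), so sizeof(cbPerFrame) is 48 as well,
    // which satisfies D3D11's requirement that constant buffers be a multiple of 16 bytes.
    static_assert(sizeof(cbPerFrame) % 16 == 0,
        "cbPerFrame must be a multiple of 16 bytes to be used as a constant buffer");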
##D2D's FPS Square's World Space Matrix##

Make sure you set the World matrix for D2D's square texture as well, or it will use whatever World matrix was set last. Since the square is drawn in screen space, its World matrix is just the identity matrix, so we can reuse the identity WVP matrix.

    WVP = XMMatrixIdentity();

    ///////////////**************new**************////////////////////
    cbPerObj.World = XMMatrixTranspose(WVP);
    ///////////////**************new**************////////////////////

    cbPerObj.WVP = XMMatrixTranspose(WVP);
    d3d11DevCon->UpdateSubresource( cbPerObjectBuffer, 0, NULL, &cbPerObj, 0, 0 );
    d3d11DevCon->VSSetConstantBuffers( 0, 1, &cbPerObjectBuffer );
    d3d11DevCon->PSSetShaderResources( 0, 1, &d2dTexture );
    d3d11DevCon->PSSetSamplers( 0, 1, &CubesTexSamplerState );

##Setting the Light##

The last thing we need to do is send the buffer containing our light data to the pixel shader's constant buffer. We only need to do this once per frame, since the lighting is the same for every object in the scene. In fact, since our light never changes, we could have done this when initializing the scene, but doing it per frame lets you change the light if you want. Now on to the effect file.

    constbuffPerFrame.light = light;
    d3d11DevCon->UpdateSubresource( cbPerFrameBuffer, 0, NULL, &constbuffPerFrame, 0, 0 );
    d3d11DevCon->PSSetConstantBuffers(0, 1, &cbPerFrameBuffer);

##Setting the World Matrix##

Now we need to send both the WVP and World matrices to the effect file. We will use the World matrix to transform the object's normals into world space, so that the lighting is computed correctly.

    WVP = cube1World * camView * camProjection;
    cbPerObj.World = XMMatrixTranspose(cube1World);
    cbPerObj.WVP = XMMatrixTranspose(WVP);
    d3d11DevCon->UpdateSubresource( cbPerObjectBuffer, 0, NULL, &cbPerObj, 0, 0 );
    d3d11DevCon->VSSetConstantBuffers( 0, 1, &cbPerObjectBuffer );
    d3d11DevCon->PSSetShaderResources( 0, 1, &CubesTexture );
    d3d11DevCon->PSSetSamplers( 0, 1, &CubesTexSamplerState );
    d3d11DevCon->RSSetState(CWcullMode);
    d3d11DevCon->DrawIndexed( 36, 0, 0 );

    WVP = cube2World * camView * camProjection;
    cbPerObj.World = XMMatrixTranspose(cube2World);
    cbPerObj.WVP = XMMatrixTranspose(WVP);
    d3d11DevCon->UpdateSubresource( cbPerObjectBuffer, 0, NULL, &cbPerObj, 0, 0 );
    d3d11DevCon->VSSetConstantBuffers( 0, 1, &cbPerObjectBuffer );
    d3d11DevCon->PSSetShaderResources( 0, 1, &CubesTexture );
    d3d11DevCon->PSSetSamplers( 0, 1, &CubesTexSamplerState );
    d3d11DevCon->RSSetState(CWcullMode);
    d3d11DevCon->DrawIndexed( 36, 0, 0 );

##Effect File##

Here we create a Light structure to hold the light data our program sends to the pixel shader. This structure is very simple, containing only a direction, an ambient color, and a diffuse color.

    struct Light
    {
        float3 dir;
        float4 ambient;
        float4 diffuse;
    };

Here is our new cbPerObject constant buffer, which now also holds the object's World matrix.

    cbuffer cbPerObject
    {
        float4x4 WVP;
        float4x4 World;
    };

Remember we said to separate constant buffers by how often they are updated? We create a new constant buffer for the light, since it only needs to be updated once per frame: all objects in the scene use the same light.

    cbuffer cbPerFrame
    {
        Light light;
    };

We need to add a new member to our VS_OUTPUT structure: the normal of each vertex, which is passed on to the pixel shader. We will use these normals to calculate how much light is hitting the surface of an object. In the vertex shader we multiply the object's normals by the object's World matrix, so that the lighting is computed in world space. If we multiplied the normals by the WVP matrix instead, the lighting would not be correct, and once we implement a camera it would look like the directional light moves with the camera, since the WVP matrix includes the camera (view and projection) transforms.

    struct VS_OUTPUT
    {
        float4 Pos : SV_POSITION;
        float2 TexCoord : TEXCOORD;
        float3 normal : NORMAL;
    };

    VS_OUTPUT VS(float4 inPos : POSITION, float2 inTexCoord : TEXCOORD, float3 normal : NORMAL)
    {
        VS_OUTPUT output;

        output.Pos = mul(inPos, WVP);
        output.normal = mul(normal, World);
        output.TexCoord = inTexCoord;

        return output;
    }
Now, finally, on to actually calculating the lighting in our scene. First we normalize the interpolated normal, since it may no longer be of unit length. Then, like before, we sample the diffuse color from our texture. After that we create a new variable to hold the final color once the lighting calculations are done. We start by multiplying the texture color with the ambient color of our light; this makes sure we can still see the parts of our scene that are not directly in the light (of course, some games may want everything outside the light to stay black). We then compute how much light is hitting the pixel by taking the dot product of the light's direction and the surface normal, multiply that by the diffuse color of our light and the diffuse color of the pixel's texture, and saturate the result to make sure no value is higher than 1.0f. We add this to finalColor, which already holds the ambient contribution. And at last, we return the color. Notice how we return it: we must always return a float4 from the pixel shader, which contains the color along with the alpha value, so we return finalColor for the first three components and the alpha value of the texture for the fourth.

    float4 PS(VS_OUTPUT input) : SV_TARGET
    {
        input.normal = normalize(input.normal);

        float4 diffuse = ObjTexture.Sample( ObjSamplerState, input.TexCoord );

        float3 finalColor;

        finalColor = diffuse * light.ambient;
        finalColor += saturate(dot(light.dir, input.normal) * light.diffuse * diffuse);

        return float4(finalColor, diffuse.a);
    }

I hope you understand lighting better now. This is a very simple light, but soon we will cover specular lighting and the other two light sources, spotlights and point lights.

##Exercise:##

1. Make the light spin around your scene, like the sun. (One possible approach is sketched after the final code listing below.)

Here's the final code:

main.cpp:

//Include and link appropriate libraries and headers// #pragma comment(lib, "d3d11.lib") #pragma comment(lib, "d3dx11.lib") #pragma comment(lib, "d3dx10.lib") #pragma comment (lib, "D3D10_1.lib") #pragma comment (lib, "DXGI.lib") #pragma comment (lib, "D2D1.lib") #pragma comment (lib, "dwrite.lib") #include <windows.h> #include <d3d11.h> #include <d3dx11.h> #include <d3dx10.h> #include <xnamath.h> #include <D3D10_1.h> #include <DXGI.h> #include <D2D1.h> #include <sstream> #include <dwrite.h> //Global Declarations - Interfaces// IDXGISwapChain* SwapChain; ID3D11Device* d3d11Device; ID3D11DeviceContext* d3d11DevCon; ID3D11RenderTargetView* renderTargetView; ID3D11Buffer* squareIndexBuffer; ID3D11DepthStencilView* depthStencilView; ID3D11Texture2D* depthStencilBuffer; ID3D11Buffer* squareVertBuffer; ID3D11VertexShader* VS; ID3D11PixelShader* PS; ID3D10Blob* VS_Buffer; ID3D10Blob* PS_Buffer; ID3D11InputLayout* vertLayout; ID3D11Buffer* cbPerObjectBuffer; ID3D11BlendState* Transparency; ID3D11RasterizerState* CCWcullMode; ID3D11RasterizerState* CWcullMode; ID3D11ShaderResourceView* CubesTexture; ID3D11SamplerState* CubesTexSamplerState; ///////////////**************new**************//////////////////// ID3D11Buffer* cbPerFrameBuffer; ID3D11PixelShader* D2D_PS; ID3D10Blob* D2D_PS_Buffer; ///////////////**************new**************//////////////////// ID3D10Device1 *d3d101Device; IDXGIKeyedMutex *keyedMutex11; IDXGIKeyedMutex *keyedMutex10; ID2D1RenderTarget *D2DRenderTarget; ID2D1SolidColorBrush *Brush; ID3D11Texture2D *BackBuffer11; ID3D11Texture2D *sharedTex11; ID3D11Buffer *d2dVertBuffer; ID3D11Buffer *d2dIndexBuffer; ID3D11ShaderResourceView *d2dTexture; IDWriteFactory *DWriteFactory; IDWriteTextFormat *TextFormat; std::wstring printText; //Global Declarations - Others// LPCTSTR WndClassName = L"firstwindow"; HWND hwnd = NULL; HRESULT hr; const int Width = 300; const int Height = 300; XMMATRIX WVP; XMMATRIX cube1World; XMMATRIX cube2World; XMMATRIX camView; XMMATRIX camProjection; XMMATRIX d2dWorld; XMVECTOR camPosition; XMVECTOR camTarget; XMVECTOR camUp; XMMATRIX Rotation; XMMATRIX Scale; XMMATRIX Translation; float rot = 0.01f; double countsPerSecond = 0.0; __int64 CounterStart = 0; int frameCount = 0; int
fps = 0; __int64 frameTimeOld = 0; double frameTime; //Function Prototypes// bool InitializeDirect3d11App(HINSTANCE hInstance); void CleanUp(); bool InitScene(); void DrawScene(); bool InitD2D_D3D101_DWrite(IDXGIAdapter1 *Adapter); void InitD2DScreenTexture(); void UpdateScene(double time); void RenderText(std::wstring text, int inInt); void StartTimer(); double GetTime(); double GetFrameTime(); bool InitializeWindow(HINSTANCE hInstance, int ShowWnd, int width, int height, bool windowed); int messageloop(); LRESULT CALLBACK WndProc(HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam); ///////////////**************new**************//////////////////// //Create effects constant buffer's structure// struct cbPerObject { XMMATRIX WVP; XMMATRIX World; }; cbPerObject cbPerObj; struct Light { Light() { ZeroMemory(this, sizeof(Light)); } XMFLOAT3 dir; float pad; XMFLOAT4 ambient; XMFLOAT4 diffuse; }; Light light; struct cbPerFrame { Light light; }; cbPerFrame constbuffPerFrame; //Vertex Structure and Vertex Layout (Input Layout)// struct Vertex //Overloaded Vertex Structure { Vertex(){} Vertex(float x, float y, float z, float u, float v, float nx, float ny, float nz) : pos(x,y,z), texCoord(u, v), normal(nx, ny, nz){} XMFLOAT3 pos; XMFLOAT2 texCoord; XMFLOAT3 normal; }; D3D11_INPUT_ELEMENT_DESC layout[] = { { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0, D3D11_INPUT_PER_VERTEX_DATA, 0 }, { "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT, 0, 12, D3D11_INPUT_PER_VERTEX_DATA, 0 }, { "NORMAL", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 20, D3D11_INPUT_PER_VERTEX_DATA, 0} }; ///////////////**************new**************//////////////////// UINT numElements = ARRAYSIZE(layout); int WINAPI WinMain(HINSTANCE hInstance, //Main windows function HINSTANCE hPrevInstance, LPSTR lpCmdLine, int nShowCmd) { if(!InitializeWindow(hInstance, nShowCmd, Width, Height, true)) { MessageBox(0, L"Window Initialization - Failed", L"Error", MB_OK); return 0; } if(!InitializeDirect3d11App(hInstance)) //Initialize Direct3D { MessageBox(0, L"Direct3D Initialization - Failed", L"Error", MB_OK); return 0; } if(!InitScene()) //Initialize our scene { MessageBox(0, L"Scene Initialization - Failed", L"Error", MB_OK); return 0; } messageloop(); CleanUp(); return 0; } bool InitializeWindow(HINSTANCE hInstance, int ShowWnd, int width, int height, bool windowed) { typedef struct _WNDCLASS { UINT cbSize; UINT style; WNDPROC lpfnWndProc; int cbClsExtra; int cbWndExtra; HANDLE hInstance; HICON hIcon; HCURSOR hCursor; HBRUSH hbrBackground; LPCTSTR lpszMenuName; LPCTSTR lpszClassName; } WNDCLASS; WNDCLASSEX wc; wc.cbSize = sizeof(WNDCLASSEX); wc.style = CS_HREDRAW | CS_VREDRAW; wc.lpfnWndProc = WndProc; wc.cbClsExtra = NULL; wc.cbWndExtra = NULL; wc.hInstance = hInstance; wc.hIcon = LoadIcon(NULL, IDI_APPLICATION); wc.hCursor = LoadCursor(NULL, IDC_ARROW); wc.hbrBackground = (HBRUSH)(COLOR_WINDOW + 2); wc.lpszMenuName = NULL; wc.lpszClassName = WndClassName; wc.hIconSm = LoadIcon(NULL, IDI_APPLICATION); if (!RegisterClassEx(&wc)) { MessageBox(NULL, L"Error registering class", L"Error", MB_OK | MB_ICONERROR); return 1; } hwnd = CreateWindowEx( NULL, WndClassName, L"Lesson 4 - Begin Drawing", WS_OVERLAPPEDWINDOW, CW_USEDEFAULT, CW_USEDEFAULT, width, height, NULL, NULL, hInstance, NULL ); if (!hwnd) { MessageBox(NULL, L"Error creating window", L"Error", MB_OK | MB_ICONERROR); return 1; } ShowWindow(hwnd, ShowWnd); UpdateWindow(hwnd); return true; } bool InitializeDirect3d11App(HINSTANCE hInstance) { //Describe our SwapChain Buffer DXGI_MODE_DESC bufferDesc; 
ZeroMemory(&bufferDesc, sizeof(DXGI_MODE_DESC)); bufferDesc.Width = Width; bufferDesc.Height = Height; bufferDesc.RefreshRate.Numerator = 60; bufferDesc.RefreshRate.Denominator = 1; bufferDesc.Format = DXGI_FORMAT_B8G8R8A8_UNORM; bufferDesc.ScanlineOrdering = DXGI_MODE_SCANLINE_ORDER_UNSPECIFIED; bufferDesc.Scaling = DXGI_MODE_SCALING_UNSPECIFIED; //Describe our SwapChain DXGI_SWAP_CHAIN_DESC swapChainDesc; ZeroMemory(&swapChainDesc, sizeof(DXGI_SWAP_CHAIN_DESC)); swapChainDesc.BufferDesc = bufferDesc; swapChainDesc.SampleDesc.Count = 1; swapChainDesc.SampleDesc.Quality = 0; swapChainDesc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT; swapChainDesc.BufferCount = 1; swapChainDesc.OutputWindow = hwnd; swapChainDesc.Windowed = TRUE; swapChainDesc.SwapEffect = DXGI_SWAP_EFFECT_DISCARD; // Create DXGI factory to enumerate adapters/////////////////////////////////////////////////////////////////////////// IDXGIFactory1 *DXGIFactory; HRESULT hr = CreateDXGIFactory1(__uuidof(IDXGIFactory1), (void**)&DXGIFactory); // Use the first adapter IDXGIAdapter1 *Adapter; hr = DXGIFactory->EnumAdapters1(0, &Adapter); DXGIFactory->Release(); //Create our Direct3D 11 Device and SwapChain////////////////////////////////////////////////////////////////////////// hr = D3D11CreateDeviceAndSwapChain(Adapter, D3D_DRIVER_TYPE_UNKNOWN, NULL, D3D11_CREATE_DEVICE_BGRA_SUPPORT, NULL, NULL, D3D11_SDK_VERSION, &swapChainDesc, &SwapChain, &d3d11Device, NULL, &d3d11DevCon); //Initialize Direct2D, Direct3D 10.1, DirectWrite InitD2D_D3D101_DWrite(Adapter); //Release the Adapter interface Adapter->Release(); //Create our BackBuffer and Render Target hr = SwapChain->GetBuffer( 0, __uuidof( ID3D11Texture2D ), (void**)&BackBuffer11 ); hr = d3d11Device->CreateRenderTargetView( BackBuffer11, NULL, &renderTargetView ); //Describe our Depth/Stencil Buffer D3D11_TEXTURE2D_DESC depthStencilDesc; depthStencilDesc.Width = Width; depthStencilDesc.Height = Height; depthStencilDesc.MipLevels = 1; depthStencilDesc.ArraySize = 1; depthStencilDesc.Format = DXGI_FORMAT_D24_UNORM_S8_UINT; depthStencilDesc.SampleDesc.Count = 1; depthStencilDesc.SampleDesc.Quality = 0; depthStencilDesc.Usage = D3D11_USAGE_DEFAULT; depthStencilDesc.BindFlags = D3D11_BIND_DEPTH_STENCIL; depthStencilDesc.CPUAccessFlags = 0; depthStencilDesc.MiscFlags = 0; //Create the Depth/Stencil View d3d11Device->CreateTexture2D(&depthStencilDesc, NULL, &depthStencilBuffer); d3d11Device->CreateDepthStencilView(depthStencilBuffer, NULL, &depthStencilView); return true; } bool InitD2D_D3D101_DWrite(IDXGIAdapter1 *Adapter) { //Create our Direc3D 10.1 Device/////////////////////////////////////////////////////////////////////////////////////// hr = D3D10CreateDevice1(Adapter, D3D10_DRIVER_TYPE_HARDWARE, NULL,D3D10_CREATE_DEVICE_BGRA_SUPPORT, D3D10_FEATURE_LEVEL_9_3, D3D10_1_SDK_VERSION, &d3d101Device ); //Create Shared Texture that Direct3D 10.1 will render on////////////////////////////////////////////////////////////// D3D11_TEXTURE2D_DESC sharedTexDesc; ZeroMemory(&sharedTexDesc, sizeof(sharedTexDesc)); sharedTexDesc.Width = Width; sharedTexDesc.Height = Height; sharedTexDesc.Format = DXGI_FORMAT_B8G8R8A8_UNORM; sharedTexDesc.MipLevels = 1; sharedTexDesc.ArraySize = 1; sharedTexDesc.SampleDesc.Count = 1; sharedTexDesc.Usage = D3D11_USAGE_DEFAULT; sharedTexDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE | D3D11_BIND_RENDER_TARGET; sharedTexDesc.MiscFlags = D3D11_RESOURCE_MISC_SHARED_KEYEDMUTEX; hr = d3d11Device->CreateTexture2D(&sharedTexDesc, NULL, &sharedTex11); // Get the keyed mutex 
for the shared texture (for D3D11)/////////////////////////////////////////////////////////////// hr = sharedTex11->QueryInterface(__uuidof(IDXGIKeyedMutex), (void**)&keyedMutex11); // Get the shared handle needed to open the shared texture in D3D10.1/////////////////////////////////////////////////// IDXGIResource *sharedResource10; HANDLE sharedHandle10; hr = sharedTex11->QueryInterface(__uuidof(IDXGIResource), (void**)&sharedResource10); hr = sharedResource10->GetSharedHandle(&sharedHandle10); sharedResource10->Release(); // Open the surface for the shared texture in D3D10.1/////////////////////////////////////////////////////////////////// IDXGISurface1 *sharedSurface10; hr = d3d101Device->OpenSharedResource(sharedHandle10, __uuidof(IDXGISurface1), (void**)(&sharedSurface10)); hr = sharedSurface10->QueryInterface(__uuidof(IDXGIKeyedMutex), (void**)&keyedMutex10); // Create D2D factory/////////////////////////////////////////////////////////////////////////////////////////////////// ID2D1Factory *D2DFactory; hr = D2D1CreateFactory(D2D1_FACTORY_TYPE_SINGLE_THREADED, __uuidof(ID2D1Factory), (void**)&D2DFactory); D2D1_RENDER_TARGET_PROPERTIES renderTargetProperties; ZeroMemory(&renderTargetProperties, sizeof(renderTargetProperties)); renderTargetProperties.type = D2D1_RENDER_TARGET_TYPE_HARDWARE; renderTargetProperties.pixelFormat = D2D1::PixelFormat(DXGI_FORMAT_UNKNOWN, D2D1_ALPHA_MODE_PREMULTIPLIED); hr = D2DFactory->CreateDxgiSurfaceRenderTarget(sharedSurface10, &renderTargetProperties, &D2DRenderTarget); sharedSurface10->Release(); D2DFactory->Release(); // Create a solid color brush to draw something with hr = D2DRenderTarget->CreateSolidColorBrush(D2D1::ColorF(1.0f, 1.0f, 1.0f, 1.0f), &Brush); //DirectWrite/////////////////////////////////////////////////////////////////////////////////////////////////////////// hr = DWriteCreateFactory(DWRITE_FACTORY_TYPE_SHARED, __uuidof(IDWriteFactory), reinterpret_cast<IUnknown**>(&DWriteFactory)); hr = DWriteFactory->CreateTextFormat( L"Script", NULL, DWRITE_FONT_WEIGHT_REGULAR, DWRITE_FONT_STYLE_NORMAL, DWRITE_FONT_STRETCH_NORMAL, 24.0f, L"en-us", &TextFormat ); hr = TextFormat->SetTextAlignment(DWRITE_TEXT_ALIGNMENT_LEADING); hr = TextFormat->SetParagraphAlignment(DWRITE_PARAGRAPH_ALIGNMENT_NEAR); d3d101Device->IASetPrimitiveTopology(D3D10_PRIMITIVE_TOPOLOGY_POINTLIST); return true; } void CleanUp() { //Release the COM Objects we created SwapChain->Release(); d3d11Device->Release(); d3d11DevCon->Release(); renderTargetView->Release(); squareVertBuffer->Release(); squareIndexBuffer->Release(); VS->Release(); PS->Release(); VS_Buffer->Release(); PS_Buffer->Release(); vertLayout->Release(); depthStencilView->Release(); depthStencilBuffer->Release(); cbPerObjectBuffer->Release(); Transparency->Release(); CCWcullMode->Release(); CWcullMode->Release(); d3d101Device->Release(); keyedMutex11->Release(); keyedMutex10->Release(); D2DRenderTarget->Release(); Brush->Release(); BackBuffer11->Release(); sharedTex11->Release(); DWriteFactory->Release(); TextFormat->Release(); d2dTexture->Release(); ///////////////**************new**************//////////////////// cbPerFrameBuffer->Release(); ///////////////**************new**************//////////////////// } void InitD2DScreenTexture() { //Create the vertex buffer Vertex v[] = { // Front Face Vertex(-1.0f, -1.0f, -1.0f, 0.0f, 1.0f,-1.0f, -1.0f, -1.0f), Vertex(-1.0f, 1.0f, -1.0f, 0.0f, 0.0f,-1.0f, 1.0f, -1.0f), Vertex( 1.0f, 1.0f, -1.0f, 1.0f, 0.0f, 1.0f, 1.0f, -1.0f), Vertex( 1.0f, -1.0f, -1.0f, 1.0f, 
1.0f, 1.0f, -1.0f, -1.0f), }; DWORD indices[] = { // Front Face 0, 1, 2, 0, 2, 3, }; D3D11_BUFFER_DESC indexBufferDesc; ZeroMemory( &indexBufferDesc, sizeof(indexBufferDesc) ); indexBufferDesc.Usage = D3D11_USAGE_DEFAULT; indexBufferDesc.ByteWidth = sizeof(DWORD) * 2 * 3; indexBufferDesc.BindFlags = D3D11_BIND_INDEX_BUFFER; indexBufferDesc.CPUAccessFlags = 0; indexBufferDesc.MiscFlags = 0; D3D11_SUBRESOURCE_DATA iinitData; iinitData.pSysMem = indices; d3d11Device->CreateBuffer(&indexBufferDesc, &iinitData, &d2dIndexBuffer); D3D11_BUFFER_DESC vertexBufferDesc; ZeroMemory( &vertexBufferDesc, sizeof(vertexBufferDesc) ); vertexBufferDesc.Usage = D3D11_USAGE_DEFAULT; vertexBufferDesc.ByteWidth = sizeof( Vertex ) * 4; vertexBufferDesc.BindFlags = D3D11_BIND_VERTEX_BUFFER; vertexBufferDesc.CPUAccessFlags = 0; vertexBufferDesc.MiscFlags = 0; D3D11_SUBRESOURCE_DATA vertexBufferData; ZeroMemory( &vertexBufferData, sizeof(vertexBufferData) ); vertexBufferData.pSysMem = v; hr = d3d11Device->CreateBuffer( &vertexBufferDesc, &vertexBufferData, &d2dVertBuffer); //Create A shader resource view from the texture D2D will render to, //So we can use it to texture a square which overlays our scene d3d11Device->CreateShaderResourceView(sharedTex11, NULL, &d2dTexture); } bool InitScene() { InitD2DScreenTexture(); //Compile Shaders from shader file hr = D3DX11CompileFromFile(L"Effects.fx", 0, 0, "VS", "vs_4_0", 0, 0, 0, &VS_Buffer, 0, 0); hr = D3DX11CompileFromFile(L"Effects.fx", 0, 0, "PS", "ps_4_0", 0, 0, 0, &PS_Buffer, 0, 0); ///////////////**************new**************//////////////////// hr = D3DX11CompileFromFile(L"Effects.fx", 0, 0, "D2D_PS", "ps_4_0", 0, 0, 0, &D2D_PS_Buffer, 0, 0); ///////////////**************new**************//////////////////// //Create the Shader Objects hr = d3d11Device->CreateVertexShader(VS_Buffer->GetBufferPointer(), VS_Buffer->GetBufferSize(), NULL, &VS); hr = d3d11Device->CreatePixelShader(PS_Buffer->GetBufferPointer(), PS_Buffer->GetBufferSize(), NULL, &PS); ///////////////**************new**************//////////////////// hr = d3d11Device->CreatePixelShader(D2D_PS_Buffer->GetBufferPointer(), D2D_PS_Buffer->GetBufferSize(), NULL, &D2D_PS); ///////////////**************new**************//////////////////// //Set Vertex and Pixel Shaders d3d11DevCon->VSSetShader(VS, 0, 0); d3d11DevCon->PSSetShader(PS, 0, 0); ///////////////**************new**************//////////////////// light.dir = XMFLOAT3(0.25f, 0.5f, -1.0f); light.ambient = XMFLOAT4(0.2f, 0.2f, 0.2f, 1.0f); light.diffuse = XMFLOAT4(1.0f, 1.0f, 1.0f, 1.0f); //Create the vertex buffer Vertex v[] = { // Front Face Vertex(-1.0f, -1.0f, -1.0f, 0.0f, 1.0f,-1.0f, -1.0f, -1.0f), Vertex(-1.0f, 1.0f, -1.0f, 0.0f, 0.0f,-1.0f, 1.0f, -1.0f), Vertex( 1.0f, 1.0f, -1.0f, 1.0f, 0.0f, 1.0f, 1.0f, -1.0f), Vertex( 1.0f, -1.0f, -1.0f, 1.0f, 1.0f, 1.0f, -1.0f, -1.0f), // Back Face Vertex(-1.0f, -1.0f, 1.0f, 1.0f, 1.0f,-1.0f, -1.0f, 1.0f), Vertex( 1.0f, -1.0f, 1.0f, 0.0f, 1.0f, 1.0f, -1.0f, 1.0f), Vertex( 1.0f, 1.0f, 1.0f, 0.0f, 0.0f, 1.0f, 1.0f, 1.0f), Vertex(-1.0f, 1.0f, 1.0f, 1.0f, 0.0f,-1.0f, 1.0f, 1.0f), // Top Face Vertex(-1.0f, 1.0f, -1.0f, 0.0f, 1.0f,-1.0f, 1.0f, -1.0f), Vertex(-1.0f, 1.0f, 1.0f, 0.0f, 0.0f,-1.0f, 1.0f, 1.0f), Vertex( 1.0f, 1.0f, 1.0f, 1.0f, 0.0f, 1.0f, 1.0f, 1.0f), Vertex( 1.0f, 1.0f, -1.0f, 1.0f, 1.0f, 1.0f, 1.0f, -1.0f), // Bottom Face Vertex(-1.0f, -1.0f, -1.0f, 1.0f, 1.0f,-1.0f, -1.0f, -1.0f), Vertex( 1.0f, -1.0f, -1.0f, 0.0f, 1.0f, 1.0f, -1.0f, -1.0f), Vertex( 1.0f, -1.0f, 1.0f, 0.0f, 0.0f, 1.0f, -1.0f, 1.0f), 
Vertex(-1.0f, -1.0f, 1.0f, 1.0f, 0.0f,-1.0f, -1.0f, 1.0f), // Left Face Vertex(-1.0f, -1.0f, 1.0f, 0.0f, 1.0f,-1.0f, -1.0f, 1.0f), Vertex(-1.0f, 1.0f, 1.0f, 0.0f, 0.0f,-1.0f, 1.0f, 1.0f), Vertex(-1.0f, 1.0f, -1.0f, 1.0f, 0.0f,-1.0f, 1.0f, -1.0f), Vertex(-1.0f, -1.0f, -1.0f, 1.0f, 1.0f,-1.0f, -1.0f, -1.0f), // Right Face Vertex( 1.0f, -1.0f, -1.0f, 0.0f, 1.0f, 1.0f, -1.0f, -1.0f), Vertex( 1.0f, 1.0f, -1.0f, 0.0f, 0.0f, 1.0f, 1.0f, -1.0f), Vertex( 1.0f, 1.0f, 1.0f, 1.0f, 0.0f, 1.0f, 1.0f, 1.0f), Vertex( 1.0f, -1.0f, 1.0f, 1.0f, 1.0f, 1.0f, -1.0f, 1.0f), }; ///////////////**************new**************//////////////////// DWORD indices[] = { // Front Face 0, 1, 2, 0, 2, 3, // Back Face 4, 5, 6, 4, 6, 7, // Top Face 8, 9, 10, 8, 10, 11, // Bottom Face 12, 13, 14, 12, 14, 15, // Left Face 16, 17, 18, 16, 18, 19, // Right Face 20, 21, 22, 20, 22, 23 }; D3D11_BUFFER_DESC indexBufferDesc; ZeroMemory( &indexBufferDesc, sizeof(indexBufferDesc) ); indexBufferDesc.Usage = D3D11_USAGE_DEFAULT; indexBufferDesc.ByteWidth = sizeof(DWORD) * 12 * 3; indexBufferDesc.BindFlags = D3D11_BIND_INDEX_BUFFER; indexBufferDesc.CPUAccessFlags = 0; indexBufferDesc.MiscFlags = 0; D3D11_SUBRESOURCE_DATA iinitData; iinitData.pSysMem = indices; d3d11Device->CreateBuffer(&indexBufferDesc, &iinitData, &squareIndexBuffer); D3D11_BUFFER_DESC vertexBufferDesc; ZeroMemory( &vertexBufferDesc, sizeof(vertexBufferDesc) ); vertexBufferDesc.Usage = D3D11_USAGE_DEFAULT; vertexBufferDesc.ByteWidth = sizeof( Vertex ) * 24; vertexBufferDesc.BindFlags = D3D11_BIND_VERTEX_BUFFER; vertexBufferDesc.CPUAccessFlags = 0; vertexBufferDesc.MiscFlags = 0; D3D11_SUBRESOURCE_DATA vertexBufferData; ZeroMemory( &vertexBufferData, sizeof(vertexBufferData) ); vertexBufferData.pSysMem = v; hr = d3d11Device->CreateBuffer( &vertexBufferDesc, &vertexBufferData, &squareVertBuffer); //Create the Input Layout hr = d3d11Device->CreateInputLayout( layout, numElements, VS_Buffer->GetBufferPointer(), VS_Buffer->GetBufferSize(), &vertLayout ); //Set the Input Layout d3d11DevCon->IASetInputLayout( vertLayout ); //Set Primitive Topology d3d11DevCon->IASetPrimitiveTopology( D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST ); //Create the Viewport D3D11_VIEWPORT viewport; ZeroMemory(&viewport, sizeof(D3D11_VIEWPORT)); viewport.TopLeftX = 0; viewport.TopLeftY = 0; viewport.Width = Width; viewport.Height = Height; viewport.MinDepth = 0.0f; viewport.MaxDepth = 1.0f; //Set the Viewport d3d11DevCon->RSSetViewports(1, &viewport); //Create the buffer to send to the cbuffer in effect file D3D11_BUFFER_DESC cbbd; ZeroMemory(&cbbd, sizeof(D3D11_BUFFER_DESC)); cbbd.Usage = D3D11_USAGE_DEFAULT; cbbd.ByteWidth = sizeof(cbPerObject); cbbd.BindFlags = D3D11_BIND_CONSTANT_BUFFER; cbbd.CPUAccessFlags = 0; cbbd.MiscFlags = 0; hr = d3d11Device->CreateBuffer(&cbbd, NULL, &cbPerObjectBuffer); ///////////////**************new**************//////////////////// //Create the buffer to send to the cbuffer per frame in effect file ZeroMemory(&cbbd, sizeof(D3D11_BUFFER_DESC)); cbbd.Usage = D3D11_USAGE_DEFAULT; cbbd.ByteWidth = sizeof(cbPerFrame); cbbd.BindFlags = D3D11_BIND_CONSTANT_BUFFER; cbbd.CPUAccessFlags = 0; cbbd.MiscFlags = 0; hr = d3d11Device->CreateBuffer(&cbbd, NULL, &cbPerFrameBuffer); ///////////////**************new**************//////////////////// //Camera information camPosition = XMVectorSet( 0.0f, 3.0f, -8.0f, 0.0f ); camTarget = XMVectorSet( 0.0f, 0.0f, 0.0f, 0.0f ); camUp = XMVectorSet( 0.0f, 1.0f, 0.0f, 0.0f ); //Set the View matrix camView = XMMatrixLookAtLH( camPosition, camTarget, 
camUp ); //Set the Projection matrix camProjection = XMMatrixPerspectiveFovLH( 0.4f*3.14f, (float)Width/Height, 1.0f, 1000.0f); D3D11_BLEND_DESC blendDesc; ZeroMemory( &blendDesc, sizeof(blendDesc) ); D3D11_RENDER_TARGET_BLEND_DESC rtbd; ZeroMemory( &rtbd, sizeof(rtbd) ); rtbd.BlendEnable = true; rtbd.SrcBlend = D3D11_BLEND_SRC_COLOR; rtbd.DestBlend = D3D11_BLEND_INV_SRC_ALPHA; rtbd.BlendOp = D3D11_BLEND_OP_ADD; rtbd.SrcBlendAlpha = D3D11_BLEND_ONE; rtbd.DestBlendAlpha = D3D11_BLEND_ZERO; rtbd.BlendOpAlpha = D3D11_BLEND_OP_ADD; rtbd.RenderTargetWriteMask = D3D10_COLOR_WRITE_ENABLE_ALL; blendDesc.AlphaToCoverageEnable = false; blendDesc.RenderTarget[0] = rtbd; hr = D3DX11CreateShaderResourceViewFromFile( d3d11Device, L"braynzar.jpg", NULL, NULL, &CubesTexture, NULL ); // Describe the Sample State D3D11_SAMPLER_DESC sampDesc; ZeroMemory( &sampDesc, sizeof(sampDesc) ); sampDesc.Filter = D3D11_FILTER_MIN_MAG_MIP_LINEAR; sampDesc.AddressU = D3D11_TEXTURE_ADDRESS_WRAP; sampDesc.AddressV = D3D11_TEXTURE_ADDRESS_WRAP; sampDesc.AddressW = D3D11_TEXTURE_ADDRESS_WRAP; sampDesc.ComparisonFunc = D3D11_COMPARISON_NEVER; sampDesc.MinLOD = 0; sampDesc.MaxLOD = D3D11_FLOAT32_MAX; //Create the Sample State hr = d3d11Device->CreateSamplerState( &sampDesc, &CubesTexSamplerState ); d3d11Device->CreateBlendState(&blendDesc, &Transparency); D3D11_RASTERIZER_DESC cmdesc; ZeroMemory(&cmdesc, sizeof(D3D11_RASTERIZER_DESC)); cmdesc.FillMode = D3D11_FILL_SOLID; cmdesc.CullMode = D3D11_CULL_BACK; cmdesc.FrontCounterClockwise = true; hr = d3d11Device->CreateRasterizerState(&cmdesc, &CCWcullMode); cmdesc.FrontCounterClockwise = false; hr = d3d11Device->CreateRasterizerState(&cmdesc, &CWcullMode); return true; } void StartTimer() { LARGE_INTEGER frequencyCount; QueryPerformanceFrequency(&frequencyCount); countsPerSecond = double(frequencyCount.QuadPart); QueryPerformanceCounter(&frequencyCount); CounterStart = frequencyCount.QuadPart; } double GetTime() { LARGE_INTEGER currentTime; QueryPerformanceCounter(&currentTime); return double(currentTime.QuadPart-CounterStart)/countsPerSecond; } double GetFrameTime() { LARGE_INTEGER currentTime; __int64 tickCount; QueryPerformanceCounter(&currentTime); tickCount = currentTime.QuadPart-frameTimeOld; frameTimeOld = currentTime.QuadPart; if(tickCount < 0.0f) tickCount = 0.0f; return float(tickCount)/countsPerSecond; } void UpdateScene(double time) { //Keep the cubes rotating rot += 1.0f * time; if(rot > 6.28f) rot = 0.0f; //Reset cube1World cube1World = XMMatrixIdentity(); //Define cube1's world space matrix XMVECTOR rotaxis = XMVectorSet(0.0f, 1.0f, 0.0f, 0.0f); Rotation = XMMatrixRotationAxis( rotaxis, rot); Translation = XMMatrixTranslation( 0.0f, 0.0f, 4.0f ); //Set cube1's world space using the transformations cube1World = Translation * Rotation; //Reset cube2World cube2World = XMMatrixIdentity(); //Define cube2's world space matrix Rotation = XMMatrixRotationAxis( rotaxis, -rot); Scale = XMMatrixScaling( 1.3f, 1.3f, 1.3f ); //Set cube2's world space matrix cube2World = Rotation * Scale; } void RenderText(std::wstring text, int inInt) { //Release the D3D 11 Device keyedMutex11->ReleaseSync(0); //Use D3D10.1 device keyedMutex10->AcquireSync(0, 5); //Draw D2D content D2DRenderTarget->BeginDraw(); //Clear D2D Background D2DRenderTarget->Clear(D2D1::ColorF(0.0f, 0.0f, 0.0f, 0.0f)); //Create our string std::wostringstream printString; printString << text << inInt; printText = printString.str(); //Set the Font Color D2D1_COLOR_F FontColor = D2D1::ColorF(1.0f, 1.0f, 1.0f, 1.0f); //Set the brush
color D2D will use to draw with Brush->SetColor(FontColor); //Create the D2D Render Area D2D1_RECT_F layoutRect = D2D1::RectF(0, 0, Width, Height); //Draw the Text D2DRenderTarget->DrawText( printText.c_str(), wcslen(printText.c_str()), TextFormat, layoutRect, Brush ); D2DRenderTarget->EndDraw(); //Release the D3D10.1 Device keyedMutex10->ReleaseSync(1); //Use the D3D11 Device keyedMutex11->AcquireSync(1, 5); //Use the shader resource representing the direct2d render target //to texture a square which is rendered in screen space so it //overlays on top of our entire scene. We use alpha blending so //that the entire background of the D2D render target is "invisible", //And only the stuff we draw with D2D will be visible (the text) //Set the blend state for D2D render target texture objects d3d11DevCon->OMSetBlendState(Transparency, NULL, 0xffffffff); //Set d2d's pixel shader so lighting calculations are not done d3d11DevCon->PSSetShader(D2D_PS, 0, 0); //Set the d2d Index buffer d3d11DevCon->IASetIndexBuffer( d2dIndexBuffer, DXGI_FORMAT_R32_UINT, 0); //Set the d2d vertex buffer UINT stride = sizeof( Vertex ); UINT offset = 0; d3d11DevCon->IASetVertexBuffers( 0, 1, &d2dVertBuffer, &stride, &offset ); WVP = XMMatrixIdentity(); ///////////////**************new**************//////////////////// cbPerObj.World = XMMatrixTranspose(WVP); ///////////////**************new**************//////////////////// cbPerObj.WVP = XMMatrixTranspose(WVP); d3d11DevCon->UpdateSubresource( cbPerObjectBuffer, 0, NULL, &cbPerObj, 0, 0 ); d3d11DevCon->VSSetConstantBuffers( 0, 1, &cbPerObjectBuffer ); d3d11DevCon->PSSetShaderResources( 0, 1, &d2dTexture ); d3d11DevCon->PSSetSamplers( 0, 1, &CubesTexSamplerState ); d3d11DevCon->RSSetState(CWcullMode); //Draw the second cube d3d11DevCon->DrawIndexed( 6, 0, 0 ); } void DrawScene() { //Clear our render target and depth/stencil view float bgColor[4] = {(0.0f, 0.0f, 0.0f, 0.0f)}; d3d11DevCon->ClearRenderTargetView(renderTargetView, bgColor); d3d11DevCon->ClearDepthStencilView(depthStencilView, D3D11_CLEAR_DEPTH|D3D11_CLEAR_STENCIL, 1.0f, 0); ///////////////**************new**************//////////////////// constbuffPerFrame.light = light; d3d11DevCon->UpdateSubresource( cbPerFrameBuffer, 0, NULL, &constbuffPerFrame, 0, 0 ); d3d11DevCon->PSSetConstantBuffers(0, 1, &cbPerFrameBuffer); //Reset Vertex and Pixel Shaders d3d11DevCon->VSSetShader(VS, 0, 0); d3d11DevCon->PSSetShader(PS, 0, 0); ///////////////**************new**************//////////////////// //Set our Render Target d3d11DevCon->OMSetRenderTargets( 1, &renderTargetView, depthStencilView ); //Set the default blend state (no blending) for opaque objects d3d11DevCon->OMSetBlendState(0, 0, 0xffffffff); //Set the cubes index buffer d3d11DevCon->IASetIndexBuffer( squareIndexBuffer, DXGI_FORMAT_R32_UINT, 0); //Set the cubes vertex buffer UINT stride = sizeof( Vertex ); UINT offset = 0; d3d11DevCon->IASetVertexBuffers( 0, 1, &squareVertBuffer, &stride, &offset ); ///////////////**************new**************//////////////////// //Set the WVP matrix and send it to the constant buffer in effect file WVP = cube1World * camView * camProjection; cbPerObj.World = XMMatrixTranspose(cube1World); cbPerObj.WVP = XMMatrixTranspose(WVP); d3d11DevCon->UpdateSubresource( cbPerObjectBuffer, 0, NULL, &cbPerObj, 0, 0 ); d3d11DevCon->VSSetConstantBuffers( 0, 1, &cbPerObjectBuffer ); d3d11DevCon->PSSetShaderResources( 0, 1, &CubesTexture ); d3d11DevCon->PSSetSamplers( 0, 1, &CubesTexSamplerState ); d3d11DevCon->RSSetState(CWcullMode); 
d3d11DevCon->DrawIndexed( 36, 0, 0 ); WVP = cube2World * camView * camProjection; cbPerObj.World = XMMatrixTranspose(cube2World); cbPerObj.WVP = XMMatrixTranspose(WVP); d3d11DevCon->UpdateSubresource( cbPerObjectBuffer, 0, NULL, &cbPerObj, 0, 0 ); d3d11DevCon->VSSetConstantBuffers( 0, 1, &cbPerObjectBuffer ); d3d11DevCon->PSSetShaderResources( 0, 1, &CubesTexture ); d3d11DevCon->PSSetSamplers( 0, 1, &CubesTexSamplerState ); d3d11DevCon->RSSetState(CWcullMode); d3d11DevCon->DrawIndexed( 36, 0, 0 ); ///////////////**************new**************//////////////////// RenderText(L"FPS: ", fps); //Present the backbuffer to the screen SwapChain->Present(0, 0); } int messageloop(){ MSG msg; ZeroMemory(&msg, sizeof(MSG)); while(true) { BOOL PeekMessageL( LPMSG lpMsg, HWND hWnd, UINT wMsgFilterMin, UINT wMsgFilterMax, UINT wRemoveMsg ); if (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE)) { if (msg.message == WM_QUIT) break; TranslateMessage(&msg); DispatchMessage(&msg); } else{ // run game code frameCount++; if(GetTime() > 1.0f) { fps = frameCount; frameCount = 0; StartTimer(); } frameTime = GetFrameTime(); UpdateScene(frameTime); DrawScene(); } } return msg.wParam; } LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam) { switch( msg ) { case WM_KEYDOWN: if( wParam == VK_ESCAPE ){ DestroyWindow(hwnd); } return 0; case WM_DESTROY: PostQuitMessage(0); return 0; } return DefWindowProc(hwnd, msg, wParam, lParam); } Effects.fx: struct Light { float3 dir; float4 ambient; float4 diffuse; }; cbuffer cbPerFrame { Light light; }; cbuffer cbPerObject { float4x4 WVP; float4x4 World; }; Texture2D ObjTexture; SamplerState ObjSamplerState; struct VS_OUTPUT { float4 Pos : SV_POSITION; float2 TexCoord : TEXCOORD; float3 normal : NORMAL; }; VS_OUTPUT VS(float4 inPos : POSITION, float2 inTexCoord : TEXCOORD, float3 normal : NORMAL) { VS_OUTPUT output; output.Pos = mul(inPos, WVP); output.normal = mul(normal, World); output.TexCoord = inTexCoord; return output; } float4 PS(VS_OUTPUT input) : SV_TARGET { input.normal = normalize(input.normal); float4 diffuse = ObjTexture.Sample( ObjSamplerState, input.TexCoord ); float3 finalColor; finalColor = diffuse * light.ambient; finalColor += saturate(dot(light.dir, input.normal) * light.diffuse * diffuse); return float4(finalColor, diffuse.a); } float4 D2D_PS(VS_OUTPUT input) : SV_TARGET { float4 diffuse = ObjTexture.Sample( ObjSamplerState, input.TexCoord ); return diffuse; }
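As a possible approach to the exercise (a hedged sketch, not part of the tutorial's code): you could rotate the light's direction a little every frame inside UpdateScene(), rebuilding it from the original direction and an accumulated angle. Something along these lines, reusing the tutorial's globals and XNA math helpers:

    // Hypothetical addition to UpdateScene(double time): spin the light around the Y axis.
    // 'lightRot' is a new variable (it could also be a global like 'rot'); 'light' is the
    // tutorial's Light object.
    static float lightRot = 0.0f;
    lightRot += 0.5f * time;               // assumed speed, in radians per second
    if(lightRot > 6.28f) lightRot = 0.0f;

    XMMATRIX lightRotation = XMMatrixRotationY(lightRot);

    // Rotate the original direction (0.25, 0.5, -1.0) each frame rather than repeatedly
    // rotating the already-rotated vector, which would accumulate error.
    XMVECTOR lightDir = XMVectorSet(0.25f, 0.5f, -1.0f, 0.0f);
    lightDir = XMVector3TransformNormal(lightDir, lightRotation);
    XMStoreFloat3(&light.dir, lightDir);

Because DrawScene() already copies light into constbuffPerFrame and updates cbPerFrameBuffer every frame, no other change should be needed for the rotation to show up.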
Why reset the shader object in final code(DrawScene) while there's nothing changed with shader object?
on Feb 21 `20
paco