This tutorial is part of a Collection: 03. DirectX 11 - Braynzar Soft Tutorials
08. World View and Local Spaces (static Camera)
We will learn about the world, view, and local spaces in a 3D world, which will enable us to create a camera so that only the things the camera sees are drawn to the screen. We will learn how to implement a static (non-moving) camera, and how to work with a shader's constant buffers, which are variables in an effect file that shaders can use and that we can update from our code.
DX11_Lesson_08_World...zip 15.79 kb
##Introduction##

We will learn about the different spaces in a 3D world, which consist of World, View, Projection, Local, and Screen spaces. We can use these spaces to create a camera effect, which will only show what the camera sees. To use these spaces, we will send them to a variable in a constant buffer inside an effect file, which the Vertex Shader will use to determine the coordinates of the vertices which make up an object.

##Local (Object) Space##

Local space is the space relative to an object. When we create an object, we will usually center it around the point (0, 0, 0), which makes it much easier to create and define the vertices. Think of a cube. If we were to create the cube relative to the center point (0, 0, 0) of our actual 3D scene, we would have a very tough time defining its vertices, especially if the cube was tilted a little. Not only that, but maybe we need to make a forest. It would be a waste to create a ton of unique trees to fill the forest, when all we have to do is create one tree and make copies, repositioning each copy. Local Space defines the vertices' positions relative to the other vertices in that object. The vertex positions in local space are usually defined in a file containing the 3D object, created in a 3D modeling program.

##World Space##

World space is used to position each object relative to the other objects in the scene. Each object will have its own unique world space matrix. The world space matrix describes the object's position, size, and rotation in the 3D scene. All objects are positioned around a single central point (0, 0, 0), which is the world space center. To create a world space matrix, we apply the transformations for the object we are creating the matrix for (translation, rotation, and scaling, covered in the next lesson). We will use the world space matrix to transform the object's vertices from their Local Space to World Space, where the vertex positions are relative to the other objects in the scene. We will talk about these transformations in the next lesson.
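To make that idea a little more concrete, here is a small sketch (just a preview of the transformations covered in the next lesson, with made-up scale, angle, and position values) of how two copies of the same tree model could each get their own world matrix using the XNA math functions XMMatrixScaling(), XMMatrixRotationY(), and XMMatrixTranslation():

    // Hypothetical example: two copies of one tree model, each with its own
    // world matrix built from scale * rotation * translation (values made up)
    XMMATRIX treeScale    = XMMatrixScaling( 1.0f, 1.0f, 1.0f );

    XMMATRIX treeOneWorld = treeScale * XMMatrixRotationY( 0.5f ) * XMMatrixTranslation(  5.0f, 0.0f, 10.0f );
    XMMATRIX treeTwoWorld = treeScale * XMMatrixRotationY( 2.0f ) * XMMatrixTranslation( -3.0f, 0.0f, 20.0f );

    // Each copy would be drawn with its own world matrix, while the
    // tree's vertices themselves are only defined once, in local space.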
##View Space##

The View space is basically the camera's space. The camera is positioned at the point (0, 0, 0), looking down the z-axis, with the y-axis as its up direction, and the world is translated into the camera's space. So when we do transformations, it will look like the camera is moving around the world, when in fact the world is moving and the camera is still. The View Space is defined by creating a matrix describing our camera's position, view direction (target), and up direction (the y-axis of the camera). We can easily create a View matrix from three vectors, Position, Target, and Up, using the function XMMatrixLookAtLH().

##Projection Space##

This is basically the space that decides what gets rendered: objects inside it are drawn to the screen, and objects outside it are discarded. This space is different, as it is defined by six planes: the near plane, far plane, top, left, bottom, and right planes. If we were to render the Projection Space as geometry, it would look like a pyramid with its tip cut off. The tip of the pyramid would be the position of the camera, where the tip was cut off would be the near z-plane, and the base of the pyramid would be the far z-plane. The near and far planes are each defined by a float value, and the other four planes are defined by the aspect ratio and FOV (field of view in radians).

The Projection Space defines the area in the 3D scene in which objects are visible from the camera's point of view (the objects that will be displayed on the screen). We can easily define the Projection matrix using the function XMMatrixPerspectiveFovLH(), passing it the FOV (field of view in radians), aspect ratio, near z-plane, and far z-plane. Let me explain what the aspect ratio and FOV do a little better. The aspect ratio is a value used to find the width and height of the near and far planes; usually you will want to use the width of your screen divided by its height (Width/Height). A 1:1 ratio (setting the aspect ratio to 1) would give you an exact square. Of course the far plane will be bigger than the near plane (like the pyramid idea), and how much bigger it is depends on the FOV. The larger the FOV, the larger the far plane is compared to the near plane, thus putting more objects on the screen.

##Screen Space##

This last space is basically the x and y values, in pixels, of the backbuffer, where (0, 0) marks the top left of the space and (width, height) marks the bottom right. This is the 2D space that is actually displayed on your monitor. We do not have to define this space; it is more of an idea of the physical space of our monitor. However, we will use the idea of this space when we get to picking with the mouse. We will take the x and y coordinates of our mouse in screen space to see if we are clicking on a 3D object.

##Transforming Spaces##

Transforming spaces usually means transforming vertices from one space to another. The rendering pipeline uses three spaces which we will define: World, View, and Projection. We will combine these spaces and put the resulting matrix into another matrix called WVP (World View Projection). To combine them, we multiply them together, but remember, the order in which we multiply matrices changes the resulting matrix, so we will multiply them in the order World * View * Projection. We will then send the WVP matrix to a constant buffer in the effect file, which the VS will use to transform the object's vertices. So the order is like this: the object's vertices in Local Space are sent to the Vertex Shader. The VS uses the WVP matrix we passed in right before we called the draw function, and multiplies each vertex position by it. This results in the object being positioned where we want in the world, and clipped from rendering if it is not in view of the camera.

##Constant Buffers##

A constant buffer is basically a structure in an effect file which holds variables we are able to update from our game code. We can create a constant buffer using the cbuffer type. An example, and the one we will use, looks like this:

    cbuffer cbPerObject
    {
        float4x4 WVP;
    };

Constant buffers should be separated by how often they are updated. This way we can update each of them as seldom as possible, since updating a constant buffer takes processing time. Examples of different frequencies at which we update a constant buffer are: per scene (update the buffer only once per scene, such as lighting that does not change throughout the scene), per frame (such as lighting that changes position every frame, like the sun moving across the sky), and per object (like what we will do: we update the WVP per object, since each object has a different World matrix, meaning its own position, rotation, and scaling).
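To illustrate that idea of separating constant buffers by update frequency, here is a rough HLSL sketch; the cbPerScene and cbPerFrame buffers and their members are hypothetical names invented for this example, not something we use in this lesson's effect file:

    cbuffer cbPerScene           // updated once, when the scene is loaded
    {
        float4 ambientColor;     // hypothetical: lighting that never changes
    };

    cbuffer cbPerFrame           // updated once per frame
    {
        float3 sunDirection;     // hypothetical: the sun moving across the sky
        float  pad;              // cbuffer members are packed into 16-byte registers
    };

    cbuffer cbPerObject          // updated once per object, right before its draw call
    {
        float4x4 WVP;
    };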
##Global Declarations##

This is a Direct3D buffer interface, which we will use to store our constant buffer variables (the WVP matrix) and send them to the actual constant buffer in the effect file.

    ID3D11Buffer* cbPerObjectBuffer;

Now we define four matrices and three vectors. The four matrices represent each of the different spaces our object's vertices need to go through to get onto the screen, and the three vectors are used to define the position, target, and up direction of our camera.

    XMMATRIX WVP;
    XMMATRIX World;
    XMMATRIX camView;
    XMMATRIX camProjection;

    XMVECTOR camPosition;
    XMVECTOR camTarget;
    XMVECTOR camUp;

##Constant buffer structure##

We need to make sure that our constant buffer structure in code has the exact same layout as the constant buffer structure in the effect file. Then we create a constant buffer object.

    struct cbPerObject
    {
        XMMATRIX  WVP;
    };

    cbPerObject cbPerObj;

##Release the Buffer interface##

Don't forget:

    cbPerObjectBuffer->Release();

##Create the Constant Buffer in code##

Here we create a buffer which will hold the information we want to pass to the constant buffer in the effect file. We first create the buffer description, like we already know how to do. The only difference here is that we set the bind flags member to D3D11_BIND_CONSTANT_BUFFER. This says that the buffer will be bound to a constant buffer in the effect file. After that, we create the buffer cbPerObjectBuffer using the description we just made.

    D3D11_BUFFER_DESC cbbd;
    ZeroMemory(&cbbd, sizeof(D3D11_BUFFER_DESC));

    cbbd.Usage = D3D11_USAGE_DEFAULT;
    cbbd.ByteWidth = sizeof(cbPerObject);
    cbbd.BindFlags = D3D11_BIND_CONSTANT_BUFFER;
    cbbd.CPUAccessFlags = 0;
    cbbd.MiscFlags = 0;

    hr = d3d11Device->CreateBuffer(&cbbd, NULL, &cbPerObjectBuffer);

##Camera##

This is where we define our camera's Position, Target, and Up vectors. Pretty self-explanatory, except that there is a fourth parameter. We do not use it, so set it to 0.0f. (It is there because the XNA math library Microsoft wants us to use works with four-component vectors, so we are just going with the flow here.)

    camPosition = XMVectorSet( 0.0f, 0.0f, -0.5f, 0.0f );
    camTarget = XMVectorSet( 0.0f, 0.0f, 0.0f, 0.0f );
    camUp = XMVectorSet( 0.0f, 1.0f, 0.0f, 0.0f );

##Create the View Space##

We can create the view space matrix using the XMMatrixLookAtLH() function from the XNA library. The parameters shouldn't be too hard to figure out. We are initializing our camera's matrix here, but later, when we create a first and third person camera, this will be updated every frame.

    camView = XMMatrixLookAtLH( camPosition, camTarget, camUp );
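As a small preview of what that per-frame update could look like, here is a sketch only; the camX variable and the way it changes every frame are made up for illustration, and the later first and third person camera lessons will do this properly. The idea is simply to recompute camPosition and camView inside UpdateScene() every frame:

    // Hypothetical example: slide the camera to the right a little every frame
    float camX = 0.0f;

    void UpdateScene()
    {
        camX += 0.0005f;    // made-up speed

        camPosition = XMVectorSet( camX, 0.0f, -0.5f, 0.0f );
        camView = XMMatrixLookAtLH( camPosition, camTarget, camUp );
    }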
##Creating the Projection Space##

Now we create the projection space matrix using the XMMatrixPerspectiveFovLH() XNA function. This usually does not have to be updated every frame, but sometimes you might want to update it for a certain effect. Here is the function's prototype:

    XMMATRIX XMMatrixPerspectiveFovLH(
        FLOAT FovAngleY,
        FLOAT AspectRatio,
        FLOAT NearZ,
        FLOAT FarZ
    )

Where each parameter is described below:

**FovAngleY -** *The field of view in radians along the y-axis.*

**AspectRatio -** *The aspect ratio, usually Width/Height.*

**NearZ -** *A float describing the distance from the camera to the near z-plane.*

**FarZ -** *A float describing the distance from the camera to the far z-plane.*

If an object is farther from the camera than the far plane, it will not be rendered, and if an object is closer to the camera than the near plane, it will also not be rendered. Notice how we cast Width to a float type. This is because Width and Height are integers, or whole numbers, so dividing them would also result in a whole number. We want an aspect ratio represented by a decimal, unless the width and height happen to be the same, which would be a 1:1 ratio, or a result of 1.0f when we do Width/Height.

    camProjection = XMMatrixPerspectiveFovLH( 0.4f*3.14f, (float)Width/Height, 1.0f, 1000.0f);

##Creating the WVP Matrix##

Here we create the WVP matrix which will be sent to the vertex shader to reposition the object's vertices correctly. Every object will have its own world space, so this should be done for each object in the scene. We reset the World matrix by using the XMMatrixIdentity() function, which returns an identity (blank) matrix. Then we define the WVP by multiplying the world, view, and projection matrices, in that order.

    World = XMMatrixIdentity();
    WVP = World * camView * camProjection;

##Update the Constant Buffer##

This is important to remember: when sending matrices to the effect file in Direct3D 11, we must send the transpose of the matrices, where the rows and columns are switched. We set our buffer's WVP matrix to the transpose of our WVP matrix. We then update our application's constant buffer with the cbPerObj structure containing our updated WVP matrix, using the UpdateSubresource() method of the ID3D11DeviceContext interface. After that, we set the Vertex Shader's constant buffer to our application's constant buffer using the method ID3D11DeviceContext::VSSetConstantBuffers().

    cbPerObj.WVP = XMMatrixTranspose(WVP);
    d3d11DevCon->UpdateSubresource( cbPerObjectBuffer, 0, NULL, &cbPerObj, 0, 0 );
    d3d11DevCon->VSSetConstantBuffers( 0, 1, &cbPerObjectBuffer );

##The Constant Buffer##

This is the constant buffer in our effect file. We create a constant buffer structure using cbuffer. Remember to separate them, and name them, by the frequency at which they are updated. A matrix variable in an effect file is represented by the float4x4 type. You can have other sizes too, by changing the numbers.

    cbuffer cbPerObject
    {
        float4x4 WVP;
    };

##Updated Vertex Shader##

Here is our vertex shader. The only thing we have done here is add a new line, which will transform the object's vertices according to the WVP matrix we last updated. We multiply the vertex position with the WVP matrix using the mul() function.

    VS_OUTPUT VS(float4 inPos : POSITION, float4 inColor : COLOR)
    {
        VS_OUTPUT output;

        output.Pos = mul(inPos, WVP);
        output.Color = inColor;

        return output;
    }

Now we have a simple static camera. I hope you understand the 3D spaces a little better; I know I didn't do a great job explaining, but I didn't want to bore you with the specifics.

##Exercise:##

1. Change the camera's position.

2. Add another variable to the constant buffer which will update the pixel's color (see the sketch below for one possible approach).
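If you get stuck on the second exercise, here is a rough sketch of one possible starting point; the pixelColor member is a made-up name, and the important part is that the C++ structure and the HLSL cbuffer keep the exact same layout (HLSL packs cbuffer members into 16-byte registers, which is why a float4 is a convenient choice here):

    // In Effects.fx (hypothetical extra member):
    cbuffer cbPerObject
    {
        float4x4 WVP;
        float4   pixelColor;    // a color we can change from our game code
    };

    // In main.cpp, the structure must mirror that layout:
    struct cbPerObject
    {
        XMMATRIX WVP;
        XMFLOAT4 pixelColor;
    };

    // Before drawing, fill it in and update the buffer like before:
    // cbPerObj.pixelColor = XMFLOAT4( 1.0f, 0.0f, 0.0f, 1.0f );
    // d3d11DevCon->UpdateSubresource( cbPerObjectBuffer, 0, NULL, &cbPerObj, 0, 0 );

The pixel shader could then return pixelColor, or multiply it with input.Color, instead of just the interpolated vertex color.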
Here's the final code:

main.cpp:

    //Include and link appropriate libraries and headers//
    #pragma comment(lib, "d3d11.lib")
    #pragma comment(lib, "d3dx11.lib")
    #pragma comment(lib, "d3dx10.lib")

    #include <windows.h>
    #include <d3d11.h>
    #include <d3dx11.h>
    #include <d3dx10.h>
    #include <xnamath.h>

    //Global Declarations - Interfaces//
    IDXGISwapChain* SwapChain;
    ID3D11Device* d3d11Device;
    ID3D11DeviceContext* d3d11DevCon;
    ID3D11RenderTargetView* renderTargetView;
    ID3D11Buffer* squareIndexBuffer;
    ID3D11DepthStencilView* depthStencilView;
    ID3D11Texture2D* depthStencilBuffer;
    ID3D11Buffer* squareVertBuffer;
    ID3D11VertexShader* VS;
    ID3D11PixelShader* PS;
    ID3D10Blob* VS_Buffer;
    ID3D10Blob* PS_Buffer;
    ID3D11InputLayout* vertLayout;

    ///////////////**************new**************////////////////////
    ID3D11Buffer* cbPerObjectBuffer;
    ///////////////**************new**************////////////////////

    //Global Declarations - Others//
    LPCTSTR WndClassName = L"firstwindow";
    HWND hwnd = NULL;
    HRESULT hr;

    const int Width  = 300;
    const int Height = 300;

    ///////////////**************new**************////////////////////
    XMMATRIX WVP;
    XMMATRIX World;
    XMMATRIX camView;
    XMMATRIX camProjection;

    XMVECTOR camPosition;
    XMVECTOR camTarget;
    XMVECTOR camUp;
    ///////////////**************new**************////////////////////

    //Function Prototypes//
    bool InitializeDirect3d11App(HINSTANCE hInstance);
    void CleanUp();
    bool InitScene();
    void UpdateScene();
    void DrawScene();
    bool InitializeWindow(HINSTANCE hInstance, int ShowWnd, int width, int height, bool windowed);
    int messageloop();
    LRESULT CALLBACK WndProc(HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam);

    ///////////////**************new**************////////////////////
    //Create effects constant buffer's structure//
    struct cbPerObject
    {
        XMMATRIX  WVP;
    };

    cbPerObject cbPerObj;
    ///////////////**************new**************////////////////////

    //Vertex Structure and Vertex Layout (Input Layout)//
    struct Vertex    //Overloaded Vertex Structure
    {
        Vertex(){}
        Vertex(float x, float y, float z,
            float cr, float cg, float cb, float ca)
            : pos(x,y,z), color(cr, cg, cb, ca){}

        XMFLOAT3 pos;
        XMFLOAT4 color;
    };

    D3D11_INPUT_ELEMENT_DESC layout[] =
    {
        { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0, D3D11_INPUT_PER_VERTEX_DATA, 0 },
        { "COLOR", 0, DXGI_FORMAT_R32G32B32A32_FLOAT, 0, 12, D3D11_INPUT_PER_VERTEX_DATA, 0 },
    };
    UINT numElements = ARRAYSIZE(layout);

    int WINAPI WinMain(HINSTANCE hInstance,    //Main windows function
        HINSTANCE hPrevInstance,
        LPSTR lpCmdLine,
        int nShowCmd)
    {
        if(!InitializeWindow(hInstance, nShowCmd, Width, Height, true))
        {
            MessageBox(0, L"Window Initialization - Failed", L"Error", MB_OK);
            return 0;
        }

        if(!InitializeDirect3d11App(hInstance))    //Initialize Direct3D
        {
            MessageBox(0, L"Direct3D Initialization - Failed", L"Error", MB_OK);
            return 0;
        }

        if(!InitScene())    //Initialize our scene
        {
            MessageBox(0, L"Scene Initialization - Failed", L"Error", MB_OK);
            return 0;
        }

        messageloop();

        CleanUp();

        return 0;
    }

    bool InitializeWindow(HINSTANCE hInstance, int ShowWnd, int width, int height, bool windowed)
    {
        WNDCLASSEX wc;

        wc.cbSize = sizeof(WNDCLASSEX);
        wc.style = CS_HREDRAW | CS_VREDRAW;
        wc.lpfnWndProc = WndProc;
        wc.cbClsExtra = NULL;
        wc.cbWndExtra = NULL;
        wc.hInstance = hInstance;
        wc.hIcon = LoadIcon(NULL, IDI_APPLICATION);
        wc.hCursor = LoadCursor(NULL, IDC_ARROW);
        wc.hbrBackground = (HBRUSH)(COLOR_WINDOW + 2);
        wc.lpszMenuName = NULL;
        wc.lpszClassName = WndClassName;
        wc.hIconSm = LoadIcon(NULL, IDI_APPLICATION);

        if (!RegisterClassEx(&wc))
        {
            MessageBox(NULL, L"Error registering class", L"Error", MB_OK | MB_ICONERROR);
            return 1;
        }

        hwnd = CreateWindowEx(
            NULL,
            WndClassName,
            L"Lesson 4 - Begin Drawing",
            WS_OVERLAPPEDWINDOW,
            CW_USEDEFAULT, CW_USEDEFAULT,
            width, height,
            NULL,
            NULL,
            hInstance,
            NULL
            );

        if (!hwnd)
        {
            MessageBox(NULL, L"Error creating window", L"Error", MB_OK | MB_ICONERROR);
            return 1;
        }

        ShowWindow(hwnd, ShowWnd);
        UpdateWindow(hwnd);

        return true;
    }

    bool InitializeDirect3d11App(HINSTANCE hInstance)
    {
        //Describe our SwapChain Buffer
        DXGI_MODE_DESC bufferDesc;

        ZeroMemory(&bufferDesc, sizeof(DXGI_MODE_DESC));

        bufferDesc.Width = Width;
        bufferDesc.Height = Height;
        bufferDesc.RefreshRate.Numerator = 60;
        bufferDesc.RefreshRate.Denominator = 1;
        bufferDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
        bufferDesc.ScanlineOrdering = DXGI_MODE_SCANLINE_ORDER_UNSPECIFIED;
        bufferDesc.Scaling = DXGI_MODE_SCALING_UNSPECIFIED;

        //Describe our SwapChain
        DXGI_SWAP_CHAIN_DESC swapChainDesc;

        ZeroMemory(&swapChainDesc, sizeof(DXGI_SWAP_CHAIN_DESC));

        swapChainDesc.BufferDesc = bufferDesc;
        swapChainDesc.SampleDesc.Count = 1;
        swapChainDesc.SampleDesc.Quality = 0;
        swapChainDesc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
        swapChainDesc.BufferCount = 1;
        swapChainDesc.OutputWindow = hwnd;
        swapChainDesc.Windowed = TRUE;
        swapChainDesc.SwapEffect = DXGI_SWAP_EFFECT_DISCARD;

        //Create our SwapChain
        hr = D3D11CreateDeviceAndSwapChain(NULL, D3D_DRIVER_TYPE_HARDWARE, NULL, NULL, NULL, NULL,
            D3D11_SDK_VERSION, &swapChainDesc, &SwapChain, &d3d11Device, NULL, &d3d11DevCon);

        //Create our BackBuffer
        ID3D11Texture2D* BackBuffer;
        hr = SwapChain->GetBuffer( 0, __uuidof( ID3D11Texture2D ), (void**)&BackBuffer );

        //Create our Render Target
        hr = d3d11Device->CreateRenderTargetView( BackBuffer, NULL, &renderTargetView );
        BackBuffer->Release();

        //Describe our Depth/Stencil Buffer
        D3D11_TEXTURE2D_DESC depthStencilDesc;

        depthStencilDesc.Width = Width;
        depthStencilDesc.Height = Height;
        depthStencilDesc.MipLevels = 1;
        depthStencilDesc.ArraySize = 1;
        depthStencilDesc.Format = DXGI_FORMAT_D24_UNORM_S8_UINT;
        depthStencilDesc.SampleDesc.Count = 1;
        depthStencilDesc.SampleDesc.Quality = 0;
        depthStencilDesc.Usage = D3D11_USAGE_DEFAULT;
        depthStencilDesc.BindFlags = D3D11_BIND_DEPTH_STENCIL;
        depthStencilDesc.CPUAccessFlags = 0;
        depthStencilDesc.MiscFlags = 0;

        //Create the Depth/Stencil View
        d3d11Device->CreateTexture2D(&depthStencilDesc, NULL, &depthStencilBuffer);
        d3d11Device->CreateDepthStencilView(depthStencilBuffer, NULL, &depthStencilView);

        //Set our Render Target
        d3d11DevCon->OMSetRenderTargets( 1, &renderTargetView, depthStencilView );

        return true;
    }

    void CleanUp()
    {
        //Release the COM Objects we created
        SwapChain->Release();
        d3d11Device->Release();
        d3d11DevCon->Release();
        renderTargetView->Release();
        squareVertBuffer->Release();
        squareIndexBuffer->Release();
        VS->Release();
        PS->Release();
        VS_Buffer->Release();
        PS_Buffer->Release();
        vertLayout->Release();
        depthStencilView->Release();
        depthStencilBuffer->Release();
        ///////////////**************new**************////////////////////
        cbPerObjectBuffer->Release();
        ///////////////**************new**************////////////////////
    }

    bool InitScene()
    {
        //Compile Shaders from shader file
        hr = D3DX11CompileFromFile(L"Effects.fx", 0, 0, "VS", "vs_4_0", 0, 0, 0, &VS_Buffer, 0, 0);
        hr = D3DX11CompileFromFile(L"Effects.fx", 0, 0, "PS", "ps_4_0", 0, 0, 0, &PS_Buffer, 0, 0);

        //Create the Shader Objects
        hr = d3d11Device->CreateVertexShader(VS_Buffer->GetBufferPointer(), VS_Buffer->GetBufferSize(), NULL, &VS);
        hr = d3d11Device->CreatePixelShader(PS_Buffer->GetBufferPointer(), PS_Buffer->GetBufferSize(), NULL, &PS);

        //Set Vertex and Pixel Shaders
        d3d11DevCon->VSSetShader(VS, 0, 0);
        d3d11DevCon->PSSetShader(PS, 0, 0);

        //Create the vertex buffer
        Vertex v[] =
        {
            Vertex( -0.5f, -0.5f, 0.5f, 1.0f, 0.0f, 0.0f, 1.0f ),
            Vertex( -0.5f,  0.5f, 0.5f, 0.0f, 1.0f, 0.0f, 1.0f ),
            Vertex(  0.5f,  0.5f, 0.5f, 0.0f, 0.0f, 1.0f, 1.0f ),
            Vertex(  0.5f, -0.5f, 0.5f, 0.0f, 1.0f, 0.0f, 1.0f ),
        };

        DWORD indices[] = {
            0, 1, 2,
            0, 2, 3,
        };

        D3D11_BUFFER_DESC indexBufferDesc;
        ZeroMemory( &indexBufferDesc, sizeof(indexBufferDesc) );

        indexBufferDesc.Usage = D3D11_USAGE_DEFAULT;
        indexBufferDesc.ByteWidth = sizeof(DWORD) * 2 * 3;
        indexBufferDesc.BindFlags = D3D11_BIND_INDEX_BUFFER;
        indexBufferDesc.CPUAccessFlags = 0;
        indexBufferDesc.MiscFlags = 0;

        D3D11_SUBRESOURCE_DATA iinitData;

        iinitData.pSysMem = indices;
        d3d11Device->CreateBuffer(&indexBufferDesc, &iinitData, &squareIndexBuffer);

        d3d11DevCon->IASetIndexBuffer( squareIndexBuffer, DXGI_FORMAT_R32_UINT, 0);

        D3D11_BUFFER_DESC vertexBufferDesc;
        ZeroMemory( &vertexBufferDesc, sizeof(vertexBufferDesc) );

        vertexBufferDesc.Usage = D3D11_USAGE_DEFAULT;
        vertexBufferDesc.ByteWidth = sizeof( Vertex ) * 4;
        vertexBufferDesc.BindFlags = D3D11_BIND_VERTEX_BUFFER;
        vertexBufferDesc.CPUAccessFlags = 0;
        vertexBufferDesc.MiscFlags = 0;

        D3D11_SUBRESOURCE_DATA vertexBufferData;

        ZeroMemory( &vertexBufferData, sizeof(vertexBufferData) );
        vertexBufferData.pSysMem = v;
        hr = d3d11Device->CreateBuffer( &vertexBufferDesc, &vertexBufferData, &squareVertBuffer);

        //Set the vertex buffer
        UINT stride = sizeof( Vertex );
        UINT offset = 0;
        d3d11DevCon->IASetVertexBuffers( 0, 1, &squareVertBuffer, &stride, &offset );

        //Create the Input Layout
        hr = d3d11Device->CreateInputLayout( layout, numElements, VS_Buffer->GetBufferPointer(),
            VS_Buffer->GetBufferSize(), &vertLayout );

        //Set the Input Layout
        d3d11DevCon->IASetInputLayout( vertLayout );

        //Set Primitive Topology
        d3d11DevCon->IASetPrimitiveTopology( D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST );

        //Create the Viewport
        D3D11_VIEWPORT viewport;
        ZeroMemory(&viewport, sizeof(D3D11_VIEWPORT));

        viewport.TopLeftX = 0;
        viewport.TopLeftY = 0;
        viewport.Width = Width;
        viewport.Height = Height;
        viewport.MinDepth = 0.0f;
        viewport.MaxDepth = 1.0f;

        //Set the Viewport
        d3d11DevCon->RSSetViewports(1, &viewport);

        ///////////////**************new**************////////////////////
        //Create the buffer to send to the cbuffer in effect file
        D3D11_BUFFER_DESC cbbd;
        ZeroMemory(&cbbd, sizeof(D3D11_BUFFER_DESC));

        cbbd.Usage = D3D11_USAGE_DEFAULT;
        cbbd.ByteWidth = sizeof(cbPerObject);
        cbbd.BindFlags = D3D11_BIND_CONSTANT_BUFFER;
        cbbd.CPUAccessFlags = 0;
        cbbd.MiscFlags = 0;

        hr = d3d11Device->CreateBuffer(&cbbd, NULL, &cbPerObjectBuffer);

        //Camera information
        camPosition = XMVectorSet( 0.0f, 0.0f, -0.5f, 0.0f );
        camTarget = XMVectorSet( 0.0f, 0.0f, 0.0f, 0.0f );
        camUp = XMVectorSet( 0.0f, 1.0f, 0.0f, 0.0f );

        //Set the View matrix
        camView = XMMatrixLookAtLH( camPosition, camTarget, camUp );

        //Set the Projection matrix
        camProjection = XMMatrixPerspectiveFovLH( 0.4f*3.14f, (float)Width/Height, 1.0f, 1000.0f);
        ///////////////**************new**************////////////////////

        return true;
    }

    void UpdateScene()
    {

    }

    void DrawScene()
    {
        //Clear our backbuffer
        float bgColor[4] = { 0.0f, 0.0f, 0.0f, 0.0f };
        d3d11DevCon->ClearRenderTargetView(renderTargetView, bgColor);

        //Refresh the Depth/Stencil view
        d3d11DevCon->ClearDepthStencilView(depthStencilView, D3D11_CLEAR_DEPTH|D3D11_CLEAR_STENCIL, 1.0f, 0);

        ///////////////**************new**************////////////////////
        //Set the World/View/Projection matrix, then send it to constant buffer in effect file
        World = XMMatrixIdentity();

        WVP = World * camView * camProjection;

        cbPerObj.WVP = XMMatrixTranspose(WVP);

        d3d11DevCon->UpdateSubresource( cbPerObjectBuffer, 0, NULL, &cbPerObj, 0, 0 );

        d3d11DevCon->VSSetConstantBuffers( 0, 1, &cbPerObjectBuffer );
        ///////////////**************new**************////////////////////

        //Draw the triangle
        d3d11DevCon->DrawIndexed( 6, 0, 0 );

        //Present the backbuffer to the screen
        SwapChain->Present(0, 0);
    }

    int messageloop(){
        MSG msg;
        ZeroMemory(&msg, sizeof(MSG));

        while(true)
        {
            if (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE))
            {
                if (msg.message == WM_QUIT)
                    break;
                TranslateMessage(&msg);
                DispatchMessage(&msg);
            }
            else{
                // run game code
                UpdateScene();
                DrawScene();
            }
        }
        return msg.wParam;
    }

    LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
    {
        switch( msg )
        {
        case WM_KEYDOWN:
            if( wParam == VK_ESCAPE ){
                DestroyWindow(hwnd);
            }
            return 0;

        case WM_DESTROY:
            PostQuitMessage(0);
            return 0;
        }
        return DefWindowProc(hwnd, msg, wParam, lParam);
    }

Effects.fx:

    cbuffer cbPerObject
    {
        float4x4 WVP;
    };

    struct VS_OUTPUT
    {
        float4 Pos : SV_POSITION;
        float4 Color : COLOR;
    };

    VS_OUTPUT VS(float4 inPos : POSITION, float4 inColor : COLOR)
    {
        VS_OUTPUT output;

        output.Pos = mul(inPos, WVP);
        output.Color = inColor;

        return output;
    }

    float4 PS(VS_OUTPUT input) : SV_TARGET
    {
        return input.Color;
    }
This might be a silly question but I don't understand why we need 3 vectors (Position, Target and Up) for XMMatrixLookAtLH()?
I understand the use of Position and Target, but Up seems pretty useless...
Could someone explain it to me, please?
PS: Sorry for my bad English.
on Jul 20 `16
Kavarna
Hi kavarna, next time please ask questions in the questions section. There is a link to the questions section in the header tabs next to tutorials. Up is needed to complete a view matrix. Without up, you will not know the roll of the camera. If you only have target and position, you wouldn't know if your camera should be upside down or sideways or just normal.
on Jul 20 `16
iedoc