This tutorial is part of a Collection: 04. DirectX 12 - Braynzar Soft Tutorials
10. Textures From File

This tutorial will teach you how to create textures from a file. We will learn how to load an image from a file using the Windows Imaging Component (WIC) API. Once we have the image loaded in, we will upload it to a default resource heap using an upload heap, create an SRV, then sample from that SRV in our pixel shader to color our cube.

BzTuts10.rar 207.1 kb
####Introduction####

In this tutorial, we will learn how to texture our cubes with an image loaded from a file. There are three steps we have to take to get the SRV we can use to texture our cubes:

1. Load the data from the image file and decode it to a bitmap format compatible with DXGI formats (RGBA).
2. Create an upload (intermediate) heap, a default heap, and a resource to store the bitmap data.
3. Create a Shader Resource View (SRV) that describes and points to the bitmap image data.

Once we have an SRV (which lives inside a descriptor heap), we set the root parameter that is a descriptor table to the area of the descriptor heap where the SRV is located. Our pixel shader can then sample from this texture to get the pixel colors for the triangles in our cube.

####Windows Imaging Component (WIC)####

.[https://msdn.microsoft.com/en-us/library/windows/desktop/ee719902(v=vs.85).aspx][Windows Imaging Component]

We will start off with how to load an image file and decode the data to a DXGI-compatible bitmap format. We will take advantage of the Windows Imaging Component API, which Microsoft has kindly provided for us; all we have to include in our project is **wincodec.h**. The WIC API is a low-level API for working with digital images, covering things such as encoding, decoding, format conversions, and metadata. WIC includes a number of built-in codecs for these formats:

- BMP
- GIF
- ICO
- JPEG
- PNG
- TIFF
- DDS

The API is designed so that everything uses the same set of API calls, no matter what codec or format is being used. It allows developers to create their own codecs, and any machine can decode that format using the same set of API calls one would use for the built-in codecs such as PNG; all that needs to happen is that the codec must be installed on the client's machine. All we care about here is decoding an image to a bitmap format compatible with a DXGI format.
If you are so inclined, you could write your own codec using WIC; you would just have to distribute the codec with your application so that client computers could decode the images that were encoded using your custom codec.

This is the flow we will follow to get the image into a format we can use:

1. Create a WIC Factory.
2. Create a WIC Bitmap Decoder (we can do this by providing a filename).
3. Grab a "Frame" from the decoder, which is an interface containing the decoded image data from the file.
4. Get image information, such as the WIC pixel format and the size of the image (width, height).
5. Get the compatible DXGI format (the DXGI format that is compatible with the WIC pixel format).
6. If a compatible DXGI format was not found, convert the image to a WIC pixel format that IS compatible with a DXGI format.
7. Finally, copy the pixels from the WIC Frame to a BYTE array. This is the bitmap data we will use to texture our geometry.

##Create a WIC Factory##

To initialize WIC, we must first initialize the COM library on the current thread with a function called .[https://msdn.microsoft.com/en-us/library/windows/desktop/ms678543(v=vs.85).aspx][CoInitialize()] so that we can call the CoCreateInstance() function to create a WIC Factory. The .[https://msdn.microsoft.com/en-us/library/windows/desktop/ms686615(v=vs.85).aspx][CoCreateInstance()] function creates an uninitialized object of the class we provide it with; in our case we want a WIC Factory, so we provide it with **CLSID_WICImagingFactory**. We use the WIC Factory to create instances of the WIC Bitmap Decoder, which we use to load an image from a file and grab "Frames" from.

##Create a WIC Bitmap Decoder##

The WIC Bitmap Decoder (.[https://msdn.microsoft.com/en-us/library/windows/desktop/ee690086(v=vs.85).aspx][IWICBitmapDecoder]) is an interface which we create an instance of for the image format of the file.
This interface allows you to read metadata as well as get "Frames" from the image file.

##Grab a "Frame" from the decoder##

A WIC Frame (.[https://msdn.microsoft.com/en-us/library/windows/desktop/ee690134(v=vs.85).aspx][IWICBitmapFrameDecode]) interface contains the information of a single frame from the image. Formats such as GIF usually have multiple "Frames", while other image formats like PNG and JPEG have only one. We can grab the first frame from the WIC Bitmap Decoder with its GetFrame() method. The first parameter is the frame index (0 for the first frame), and the second parameter is the WIC Frame that will store the frame.

##Get image information##

Now that we have a frame from the image, we need to get the pixel format and the size of the image. We can get the .[https://msdn.microsoft.com/en-us/library/windows/desktop/ee719797(v=vs.85).aspx][WIC pixel format] with the GetPixelFormat() method of the WIC Frame interface; the format is a GUID (Globally Unique Identifier), WICPixelFormatGUID. We can get the size of the image with the GetSize() method. The first parameter is a UINT to store the width in, and the second is a UINT to store the height in.

##Get the compatible DXGI Format##

We need to get the DXGI format of the image so that we can create a texture description for our SRV. You can create a function that returns a DXGI format based on the input WICPixelFormatGUID. If no format was found, you can return DXGI_FORMAT_UNKNOWN.

##Convert Image if not already a DXGI compatible format##

When I first wrote the code for this tutorial, I skipped this step. I then tried a couple of other images taken from the internet to make sure this part was not needed, but found that some of them were not in DXGI-compatible formats, so I decided it would be best to include this section.
If the DXGI format we got from the function mentioned in the section above was DXGI_FORMAT_UNKNOWN, the image format is not DXGI compatible, and we now need to use WIC to convert the image to a format that is. We can create a WIC Format Converter interface instance (.[https://msdn.microsoft.com/en-us/library/windows/desktop/ee690274(v=vs.85).aspx][IWICFormatConverter]) from the WIC Factory using its CreateFormatConverter() method.

Once we know we have to convert the image to a DXGI-compatible format, we need to find out which WIC pixel format we should convert it to. We have a function which takes in a WICPixelFormatGUID (the current pixel format) and returns a WICPixelFormatGUID that is compatible with DXGI formats. It is possible that there is no compatible format to convert to; in that case, we are not able to load the image.

To convert the image, we first need to find out whether the conversion is even possible, which we can do with the CanConvert() method of the WIC Converter interface. If we can convert the image, we do so by calling the Initialize() method of the converter. Once this call has completed, the converter interface is where the converted image data lives.

##Copy the pixels from the WIC Frame to a BYTE array##

Once we have the WIC Frame, and have converted it if needed, we need to copy the pixel data to a BYTE array using the CopyPixels() method. If we did not have to convert the image, we call the CopyPixels() method of the WIC Frame interface that contains the image data from the file. If we had to convert the image, we call the CopyPixels() method of the WIC Converter interface, which now contains the converted image data. We will also need to get the bits per pixel of the image, for which we have a function that takes in the DXGI format and returns the number of bits per pixel.
We will use this when determining the image size as well as the bytes per row of the image. At the end of our function to load in the image, we fill out a resource description for our texture. We will use this structure to create our SRV and resource heaps.

Here are the functions we have in the code for reference:

**GetDXGIFormatFromWICFormat()** function:

```cpp
// get the dxgi format equivalent of a wic format
DXGI_FORMAT GetDXGIFormatFromWICFormat(WICPixelFormatGUID& wicFormatGUID)
{
    if (wicFormatGUID == GUID_WICPixelFormat128bppRGBAFloat) return DXGI_FORMAT_R32G32B32A32_FLOAT;
    else if (wicFormatGUID == GUID_WICPixelFormat64bppRGBAHalf) return DXGI_FORMAT_R16G16B16A16_FLOAT;
    else if (wicFormatGUID == GUID_WICPixelFormat64bppRGBA) return DXGI_FORMAT_R16G16B16A16_UNORM;
    else if (wicFormatGUID == GUID_WICPixelFormat32bppRGBA) return DXGI_FORMAT_R8G8B8A8_UNORM;
    else if (wicFormatGUID == GUID_WICPixelFormat32bppBGRA) return DXGI_FORMAT_B8G8R8A8_UNORM;
    else if (wicFormatGUID == GUID_WICPixelFormat32bppBGR) return DXGI_FORMAT_B8G8R8X8_UNORM;
    else if (wicFormatGUID == GUID_WICPixelFormat32bppRGBA1010102XR) return DXGI_FORMAT_R10G10B10_XR_BIAS_A2_UNORM;
    else if (wicFormatGUID == GUID_WICPixelFormat32bppRGBA1010102) return DXGI_FORMAT_R10G10B10A2_UNORM;
    else if (wicFormatGUID == GUID_WICPixelFormat16bppBGRA5551) return DXGI_FORMAT_B5G5R5A1_UNORM;
    else if (wicFormatGUID == GUID_WICPixelFormat16bppBGR565) return DXGI_FORMAT_B5G6R5_UNORM;
    else if (wicFormatGUID == GUID_WICPixelFormat32bppGrayFloat) return DXGI_FORMAT_R32_FLOAT;
    else if (wicFormatGUID == GUID_WICPixelFormat16bppGrayHalf) return DXGI_FORMAT_R16_FLOAT;
    else if (wicFormatGUID == GUID_WICPixelFormat16bppGray) return DXGI_FORMAT_R16_UNORM;
    else if (wicFormatGUID == GUID_WICPixelFormat8bppGray) return DXGI_FORMAT_R8_UNORM;
    else if (wicFormatGUID == GUID_WICPixelFormat8bppAlpha) return DXGI_FORMAT_A8_UNORM;
    else return DXGI_FORMAT_UNKNOWN;
}
```

**GetConvertToWICFormat()** function:

```cpp
// get a dxgi compatible wic format from another wic format
WICPixelFormatGUID GetConvertToWICFormat(WICPixelFormatGUID& wicFormatGUID)
{
    if (wicFormatGUID == GUID_WICPixelFormatBlackWhite) return GUID_WICPixelFormat8bppGray;
    else if (wicFormatGUID == GUID_WICPixelFormat1bppIndexed) return GUID_WICPixelFormat32bppRGBA;
    else if (wicFormatGUID == GUID_WICPixelFormat2bppIndexed) return GUID_WICPixelFormat32bppRGBA;
    else if (wicFormatGUID == GUID_WICPixelFormat4bppIndexed) return GUID_WICPixelFormat32bppRGBA;
    else if (wicFormatGUID == GUID_WICPixelFormat8bppIndexed) return GUID_WICPixelFormat32bppRGBA;
    else if (wicFormatGUID == GUID_WICPixelFormat2bppGray) return GUID_WICPixelFormat8bppGray;
    else if (wicFormatGUID == GUID_WICPixelFormat4bppGray) return GUID_WICPixelFormat8bppGray;
    else if (wicFormatGUID == GUID_WICPixelFormat16bppGrayFixedPoint) return GUID_WICPixelFormat16bppGrayHalf;
    else if (wicFormatGUID == GUID_WICPixelFormat32bppGrayFixedPoint) return GUID_WICPixelFormat32bppGrayFloat;
    else if (wicFormatGUID == GUID_WICPixelFormat16bppBGR555) return GUID_WICPixelFormat16bppBGRA5551;
    else if (wicFormatGUID == GUID_WICPixelFormat32bppBGR101010) return GUID_WICPixelFormat32bppRGBA1010102;
    else if (wicFormatGUID == GUID_WICPixelFormat24bppBGR) return GUID_WICPixelFormat32bppRGBA;
    else if (wicFormatGUID == GUID_WICPixelFormat24bppRGB) return GUID_WICPixelFormat32bppRGBA;
    else if (wicFormatGUID == GUID_WICPixelFormat32bppPBGRA) return GUID_WICPixelFormat32bppRGBA;
    else if (wicFormatGUID == GUID_WICPixelFormat32bppPRGBA) return GUID_WICPixelFormat32bppRGBA;
    else if (wicFormatGUID == GUID_WICPixelFormat48bppRGB) return GUID_WICPixelFormat64bppRGBA;
    else if (wicFormatGUID == GUID_WICPixelFormat48bppBGR) return GUID_WICPixelFormat64bppRGBA;
    else if (wicFormatGUID == GUID_WICPixelFormat64bppBGRA) return GUID_WICPixelFormat64bppRGBA;
    else if (wicFormatGUID == GUID_WICPixelFormat64bppPRGBA) return GUID_WICPixelFormat64bppRGBA;
    else if (wicFormatGUID == GUID_WICPixelFormat64bppPBGRA) return GUID_WICPixelFormat64bppRGBA;
    else if (wicFormatGUID == GUID_WICPixelFormat48bppRGBFixedPoint) return GUID_WICPixelFormat64bppRGBAHalf;
    else if (wicFormatGUID == GUID_WICPixelFormat48bppBGRFixedPoint) return GUID_WICPixelFormat64bppRGBAHalf;
    else if (wicFormatGUID == GUID_WICPixelFormat64bppRGBAFixedPoint) return GUID_WICPixelFormat64bppRGBAHalf;
    else if (wicFormatGUID == GUID_WICPixelFormat64bppBGRAFixedPoint) return GUID_WICPixelFormat64bppRGBAHalf;
    else if (wicFormatGUID == GUID_WICPixelFormat64bppRGBFixedPoint) return GUID_WICPixelFormat64bppRGBAHalf;
    else if (wicFormatGUID == GUID_WICPixelFormat64bppRGBHalf) return GUID_WICPixelFormat64bppRGBAHalf;
    else if (wicFormatGUID == GUID_WICPixelFormat48bppRGBHalf) return GUID_WICPixelFormat64bppRGBAHalf;
    else if (wicFormatGUID == GUID_WICPixelFormat128bppPRGBAFloat) return GUID_WICPixelFormat128bppRGBAFloat;
    else if (wicFormatGUID == GUID_WICPixelFormat128bppRGBFloat) return GUID_WICPixelFormat128bppRGBAFloat;
    else if (wicFormatGUID == GUID_WICPixelFormat128bppRGBAFixedPoint) return GUID_WICPixelFormat128bppRGBAFloat;
    else if (wicFormatGUID == GUID_WICPixelFormat128bppRGBFixedPoint) return GUID_WICPixelFormat128bppRGBAFloat;
    else if (wicFormatGUID == GUID_WICPixelFormat32bppRGBE) return GUID_WICPixelFormat128bppRGBAFloat;
    else if (wicFormatGUID == GUID_WICPixelFormat32bppCMYK) return GUID_WICPixelFormat32bppRGBA;
    else if (wicFormatGUID == GUID_WICPixelFormat64bppCMYK) return GUID_WICPixelFormat64bppRGBA;
    else if (wicFormatGUID == GUID_WICPixelFormat40bppCMYKAlpha) return GUID_WICPixelFormat64bppRGBA;
    else if (wicFormatGUID == GUID_WICPixelFormat80bppCMYKAlpha) return GUID_WICPixelFormat64bppRGBA;

#if (_WIN32_WINNT >= _WIN32_WINNT_WIN8) || defined(_WIN7_PLATFORM_UPDATE)
    else if (wicFormatGUID == GUID_WICPixelFormat32bppRGB) return GUID_WICPixelFormat32bppRGBA;
    else if (wicFormatGUID == GUID_WICPixelFormat64bppRGB) return GUID_WICPixelFormat64bppRGBA;
    else if (wicFormatGUID == GUID_WICPixelFormat64bppPRGBAHalf) return GUID_WICPixelFormat64bppRGBAHalf;
#endif

    else return GUID_WICPixelFormatDontCare;
}
```

**GetDXGIFormatBitsPerPixel()** function:

```cpp
// get the number of bits per pixel for a dxgi format
int GetDXGIFormatBitsPerPixel(DXGI_FORMAT& dxgiFormat)
{
    if (dxgiFormat == DXGI_FORMAT_R32G32B32A32_FLOAT) return 128;
    else if (dxgiFormat == DXGI_FORMAT_R16G16B16A16_FLOAT) return 64;
    else if (dxgiFormat == DXGI_FORMAT_R16G16B16A16_UNORM) return 64;
    else if (dxgiFormat == DXGI_FORMAT_R8G8B8A8_UNORM) return 32;
    else if (dxgiFormat == DXGI_FORMAT_B8G8R8A8_UNORM) return 32;
    else if (dxgiFormat == DXGI_FORMAT_B8G8R8X8_UNORM) return 32;
    else if (dxgiFormat == DXGI_FORMAT_R10G10B10_XR_BIAS_A2_UNORM) return 32;
    else if (dxgiFormat == DXGI_FORMAT_R10G10B10A2_UNORM) return 32;
    else if (dxgiFormat == DXGI_FORMAT_B5G5R5A1_UNORM) return 16;
    else if (dxgiFormat == DXGI_FORMAT_B5G6R5_UNORM) return 16;
    else if (dxgiFormat == DXGI_FORMAT_R32_FLOAT) return 32;
    else if (dxgiFormat == DXGI_FORMAT_R16_FLOAT) return 16;
    else if (dxgiFormat == DXGI_FORMAT_R16_UNORM) return 16;
    else if (dxgiFormat == DXGI_FORMAT_R8_UNORM) return 8;
    else if (dxgiFormat == DXGI_FORMAT_A8_UNORM) return 8;
    else return 0; // unknown format. not reached for formats returned by GetDXGIFormatFromWICFormat, but avoids falling off the end of the function
}
```

**LoadImageDataFromFile()** function:

```cpp
// load and decode image from file
int LoadImageDataFromFile(BYTE** imageData, D3D12_RESOURCE_DESC& resourceDescription, LPCWSTR filename, int &bytesPerRow)
{
    HRESULT hr;

    // we only need one instance of the imaging factory to create decoders and frames
    static IWICImagingFactory *wicFactory;

    // reset decoder, frame and converter since these will be different for each image we load
    IWICBitmapDecoder *wicDecoder = NULL;
    IWICBitmapFrameDecode *wicFrame = NULL;
    IWICFormatConverter *wicConverter = NULL;

    bool imageConverted = false;

    if (wicFactory == NULL)
    {
        // Initialize the COM library
        CoInitialize(NULL);

        // create the WIC factory
        hr = CoCreateInstance(
            CLSID_WICImagingFactory,
            NULL,
            CLSCTX_INPROC_SERVER,
            IID_PPV_ARGS(&wicFactory)
            );
        if (FAILED(hr)) return 0;
    }

    // load a decoder for the image
    hr = wicFactory->CreateDecoderFromFilename(
        filename,                     // Image we want to load in
        NULL,                         // This is a vendor ID, we do not prefer a specific one so set to null
        GENERIC_READ,                 // We want to read from this file
        WICDecodeMetadataCacheOnLoad, // We will cache the metadata right away, rather than when needed, which might be unknown
        &wicDecoder                   // the wic decoder to be created
        );
    if (FAILED(hr)) return 0;

    // get image from decoder (this will decode the "frame")
    hr = wicDecoder->GetFrame(0, &wicFrame);
    if (FAILED(hr)) return 0;

    // get wic pixel format of image
    WICPixelFormatGUID pixelFormat;
    hr = wicFrame->GetPixelFormat(&pixelFormat);
    if (FAILED(hr)) return 0;

    // get size of image
    UINT textureWidth, textureHeight;
    hr = wicFrame->GetSize(&textureWidth, &textureHeight);
    if (FAILED(hr)) return 0;

    // we are not handling sRGB types in this tutorial, so if you need that support, you'll have to figure
    // out how to implement the support yourself

    // convert wic pixel format to dxgi pixel format
    DXGI_FORMAT dxgiFormat = GetDXGIFormatFromWICFormat(pixelFormat);

    // if the format of the image is not a supported dxgi format, try to convert it
    if (dxgiFormat == DXGI_FORMAT_UNKNOWN)
    {
        // get a dxgi compatible wic format from the current image format
        WICPixelFormatGUID convertToPixelFormat = GetConvertToWICFormat(pixelFormat);

        // return if no dxgi compatible format was found
        if (convertToPixelFormat == GUID_WICPixelFormatDontCare) return 0;

        // set the dxgi format
        dxgiFormat = GetDXGIFormatFromWICFormat(convertToPixelFormat);

        // create the format converter
        hr = wicFactory->CreateFormatConverter(&wicConverter);
        if (FAILED(hr)) return 0;

        // make sure we can convert to the dxgi compatible format
        BOOL canConvert = FALSE;
        hr = wicConverter->CanConvert(pixelFormat, convertToPixelFormat, &canConvert);
        if (FAILED(hr) || !canConvert) return 0;

        // do the conversion (wicConverter will contain the converted image)
        hr = wicConverter->Initialize(wicFrame, convertToPixelFormat, WICBitmapDitherTypeErrorDiffusion, 0, 0, WICBitmapPaletteTypeCustom);
        if (FAILED(hr)) return 0;

        // this is so we know to get the image data from the wicConverter (otherwise we will get from wicFrame)
        imageConverted = true;
    }

    int bitsPerPixel = GetDXGIFormatBitsPerPixel(dxgiFormat); // number of bits per pixel
    bytesPerRow = (textureWidth * bitsPerPixel) / 8; // number of bytes in each row of the image data
    int imageSize = bytesPerRow * textureHeight; // total image size in bytes

    // allocate enough memory for the raw image data, and set imageData to point to that memory
    *imageData = (BYTE*)malloc(imageSize);

    // copy (decoded) raw image data into the newly allocated memory (imageData)
    if (imageConverted)
    {
        // if image format needed to be converted, the wic converter will contain the converted image
        hr = wicConverter->CopyPixels(0, bytesPerRow, imageSize, *imageData);
        if (FAILED(hr)) return 0;
    }
    else
    {
        // no need to convert, just copy data from the wic frame
        hr = wicFrame->CopyPixels(0, bytesPerRow, imageSize, *imageData);
        if (FAILED(hr)) return 0;
    }

    // now describe the texture with the information we have obtained from the image
    resourceDescription = {};
    resourceDescription.Dimension = D3D12_RESOURCE_DIMENSION_TEXTURE2D;
    resourceDescription.Alignment = 0; // may be 0, 4KB, 64KB, or 4MB. 0 will let runtime decide between 64KB and 4MB (4MB for multi-sampled textures)
    resourceDescription.Width = textureWidth; // width of the texture
    resourceDescription.Height = textureHeight; // height of the texture
    resourceDescription.DepthOrArraySize = 1; // if 3d image, depth of 3d image. Otherwise an array of 1D or 2D textures (we only have one image, so we set 1)
    resourceDescription.MipLevels = 1; // Number of mipmaps. We are not generating mipmaps for this texture, so we have only one level
    resourceDescription.Format = dxgiFormat; // This is the dxgi format of the image (format of the pixels)
    resourceDescription.SampleDesc.Count = 1; // This is the number of samples per pixel, we just want 1 sample
    resourceDescription.SampleDesc.Quality = 0; // The quality level of the samples. Higher is better quality, but worse performance
    resourceDescription.Layout = D3D12_TEXTURE_LAYOUT_UNKNOWN; // The arrangement of the pixels. Setting to unknown lets the driver choose the most efficient one
    resourceDescription.Flags = D3D12_RESOURCE_FLAG_NONE; // no flags

    // return the size of the image. remember to delete the image once you're done with it (in this tutorial, once it's uploaded to the gpu)
    return imageSize;
}
```

####Creating Resources####

Alright, let's talk about how we get the texture data to the GPU. I want to first get a few definitions/ideas out of the way.

**Heap**

A heap is just a contiguous chunk of physical memory. This memory may be video memory (GPU), system memory (CPU RAM), or page memory (data is placed on the hard disk when RAM fills up). Whether the heap is in video memory or system memory depends on the creation flags, but if video memory is filled up, heaps that would have been placed in video memory are placed in RAM instead, or, as a last resort, in page memory when RAM fills up.

**Physical Address**

A physical address is an address to a specific location on the hardware. To directly access system memory, we would read directly from the physical address of the location in memory we want to read. Think of system memory (RAM) as a street, a really long street. Each block of memory is then a house, and each house has an address. This physical address is unique across the hardware and RAM.

**Virtual Address**

A virtual address is similar to a physical address, in that they are both addresses.
But instead of a virtual address being the direct location of a block of memory, virtual addresses are **mapped** to physical memory. When you access a virtual address once it has been mapped, the access is directed to the physical address that the virtual address is mapped to.

**Mapping**

Basically explained above: mapping creates a link between a virtual address and a physical address.

**Resource**

The basic idea of a resource is that it is a virtual address range. A resource is just a pointer to a window or section of a heap, where the resource's data resides.

**Residency**

When a heap is created, it starts out as resident, which means the physical memory associated with the heap is accessible to the GPU. This heap could be in either video memory or system memory. A default heap will reside in video memory if there is enough space; otherwise it will be moved to system memory, or to page memory on disk if there is not enough system memory. Of course, you want the default heap to reside in video memory so that the GPU can access the data faster, which is why D3D12 allows us to "Evict" or "MakeResident" heaps.

**Evict**

The .[https://msdn.microsoft.com/en-us/library/windows/desktop/dn788676(v=vs.85).aspx][Evict()] method of the device interface allows you to set one or more heaps as non-resident. This only sets a flag, so the CPU/GPU cost may not be immediate. When a new heap becomes resident (either through creation or the MakeResident method) and needs the space a non-resident heap occupies, the non-resident heap is moved out to a slower section of memory (if the evicted heap was in video memory, it moves to system memory; if it was already in system memory, it moves to page memory), and the newly resident heap takes the space the evicted heap was in.

**MakeResident**

The opposite of Evict. You must use fences to make sure that any memory the GPU tries to access is resident.
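The Evict/MakeResident behaviour described above can be sketched as a toy model in plain C++. This is only an illustration of the idea, not the D3D12 API (the real calls are the device's Evict() and MakeResident() methods); the names and tiers here are hypothetical.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Toy model of heap residency: Evict() only flags a heap as non-resident;
// the actual demotion to slower memory happens later, when another
// resident heap needs the space.
enum class Tier { Video, System, Page };

struct Heap {
    std::string name;
    Tier tier = Tier::Video;
    bool resident = true; // heaps start out resident
};

// Flag the heap as evictable; nothing moves yet.
void Evict(Heap& h) { h.resident = false; }

// Make the heap usable by the GPU again.
void MakeResident(Heap& h) { h.resident = true; }

// Called when a resident heap needs space: demote one non-resident heap
// a tier down, mimicking video -> system -> page.
void ReclaimSpace(std::vector<Heap*>& heaps) {
    for (Heap* h : heaps)
        if (!h->resident && h->tier == Tier::Video) { h->tier = Tier::System; return; }
    for (Heap* h : heaps)
        if (!h->resident && h->tier == Tier::System) { h->tier = Tier::Page; return; }
}
```

The point of the model is the deferred cost: the evicted heap keeps its fast memory until something else actually needs it.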
**Available Memory**

Before you create any heaps, you should query the memory available to your application. D3D12 provides a way to do this with the .[https://msdn.microsoft.com/en-us/library/windows/desktop/dn933223(v=vs.85).aspx][QueryVideoMemoryInfo()] method of the adapter interface. The results can change very frequently; you will likely not get the exact same results even if you call this function twice a second apart. The result is a .[https://msdn.microsoft.com/en-us/library/windows/desktop/dn933220(v=vs.85).aspx][DXGI_QUERY_VIDEO_MEMORY_INFO] structure, which gives you the total amount of video memory available to your process. If there is not enough space, or you're coming close to using all the available video memory, you might decide not to load and render insignificant resources.

**Descriptors**

A descriptor is D3D12's word for View, although View is still used in the names of the types of resources, such as Shader Resource View or Constant Buffer View. You must create a descriptor for a resource in order to use the resource in the graphics pipeline.

There are three types of resources in D3D12:

##Committed Resource##

A committed resource is both an **implicit** heap and a resource. The heap is large enough to hold the resource, and the resource is mapped to the entire heap. This is probably the most common and simplest way to get data to the GPU. It is called an implicit heap because you will not have direct access to the heap apart from the resource; to do anything with the heap, such as evict or make resident, you do it on the resource. Because creating a committed resource creates a heap (allocates memory), creates a resource, and maps that resource to the heap, it is slower to create and destroy than the other two resource types. Resources that you will use often throughout the level or game are good candidates for a committed resource, such as the HUD, user interface, or main character.
We can create a committed resource with the .[https://msdn.microsoft.com/en-us/library/windows/desktop/dn899178(v=vs.85).aspx][CreateCommittedResource()] method of the device interface. This method creates both a resource and a heap.

##Placed Resource##

Placed resources are faster to create and destroy, as they are only a resource, mapped to a section of an **explicit** heap. An explicit heap is a heap that you create yourself, apart from any resources. You can explicitly create a heap using the .[https://msdn.microsoft.com/en-us/library/windows/desktop/dn788664(v=vs.85).aspx][CreateHeap()] method of the device interface. This creates a heap, allocating the amount of memory you require in either video or system memory depending on flags and available memory. If you run out of all memory, this method fails with an out-of-memory error.

When you create a placed resource, you must specify the heap and the offset into the heap from its beginning, as well as the size of the resource. You can use placed resources to better manage your memory: a placed resource that you no longer need can be destroyed, and a new placed resource can be created to reuse the section of the heap the destroyed placed resource had been using.

Placed resources are created in the "inactive" state. You can only use an "active" placed resource on the GPU, so you must use aliasing barriers to change the placed resource's state. Placed resources can overlap, but only one overlapping placed resource can be in the "active" state; when activating a placed resource, all other placed resources that share the same physical memory (overlapping resources) automatically become inactive. Placed resources are good for more fine-grained memory management, and should be used for resources that change often.

To create a placed resource, use the .[https://msdn.microsoft.com/en-us/library/windows/desktop/dn899180(v=vs.85).aspx][CreatePlacedResource()] method of the device interface.
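The offset bookkeeping behind placed resources can be sketched in plain C++. The 64KB value matches D3D12's default resource placement alignment (D3D12_DEFAULT_RESOURCE_PLACEMENT_ALIGNMENT); the linear allocator itself is a hypothetical illustration, not part of the API.

```cpp
#include <cassert>
#include <cstdint>

// Placed resources must start at an aligned offset inside the explicit
// heap. A simple linear sub-allocator rounds the running offset up before
// handing out each resource's position within the heap.
constexpr uint64_t kPlacementAlignment = 65536; // 64KB, D3D12's default

// Round value up to the next multiple of alignment (alignment is a power of two).
constexpr uint64_t AlignUp(uint64_t value, uint64_t alignment) {
    return (value + alignment - 1) & ~(alignment - 1);
}

struct LinearHeapAllocator {
    uint64_t heapSize;   // total size of the explicit heap
    uint64_t offset = 0; // running offset of the next free byte

    // Returns the aligned offset you would pass to CreatePlacedResource,
    // or UINT64_MAX if the heap does not have enough room left.
    uint64_t Allocate(uint64_t resourceSize) {
        uint64_t aligned = AlignUp(offset, kPlacementAlignment);
        if (aligned + resourceSize > heapSize) return UINT64_MAX;
        offset = aligned + resourceSize;
        return aligned;
    }
};
```

Note how even a tiny resource consumes a full 64KB slot's worth of alignment padding before the next one, which is one reason sub-allocating many small resources into one heap takes some planning.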
##Reserved Resource##

Finally we have reserved resources. Reserved resources are similar to .[https://msdn.microsoft.com/en-us/library/windows/desktop/dn786477(v=vs.85).aspx][tiled resources] in D3D11; I suggest having a look at that URL if you don't already know what tiled resources are or why they can be useful. A reserved resource is literally just a virtual address range. You can map and unmap this resource to heaps as much as you want, and it is very quick. Reserved resources are not available on all D3D12 hardware yet (they can only be created when the adapter supports tiled resources tier 1 or greater).

Streaming data or very large terrain maps are good candidates for a reserved resource. Imagine your camera moving around the world: you do not need the entire world's terrain at once, so you create a reserved resource that gets remapped every frame over the window of terrain around your camera. The resource stays the same size and just slides around the heap depending on where your camera is.

Reserved resources are not mapped to physical memory on creation like placed and committed resources, so it is up to you to explicitly map reserved resources to physical memory (a heap). You can do that using the .[https://msdn.microsoft.com/en-us/library/windows/desktop/dn788629(v=vs.85).aspx][CopyTileMappings()] and .[https://msdn.microsoft.com/en-us/library/windows/desktop/dn788641(v=vs.85).aspx][UpdateTileMappings()] methods of a command list. You can create a reserved resource with the .[https://msdn.microsoft.com/en-us/library/windows/desktop/dn899181(v=vs.85).aspx][CreateReservedResource()] method of the device interface. Unlike committed and placed resources, you can map and unmap this resource as much as you want, to different heaps or different parts of heaps (committed resources are mapped strictly to their implicit heap on creation, and placed resources are mapped to a part of an explicit heap on creation).
To improve performance, you will want a background thread mapping, unmapping, and creating resources while your game is running. You have to make sure a resource is available to the GPU before using it in the graphics or compute pipelines.

####Texture Space####

Texture space is the space we use in the pixel shader to get the color of a pixel from a texture based on a texture coordinate. Texture coordinates are generally represented by **U** for the "x" axis and **V** for the "y" axis. Sometimes, however, you might see the UV axes represented by **S** for the "x" axis and **T** for the "y" axis. For 3D textures, the third axis is represented by **W**, meaning the depth of the texture.

In Direct3D, the texture coordinates for a texture start at the top-left corner of the texture, at point (0,0), and go to the bottom-right corner of the texture, (1,1).

+[http://www.braynzarsoft.net/image/100027][Texture UV]

Depending on the sampling you have configured, you might have it set up for texture wrapping. This means that if you go over 1 or under 0, the texture repeats itself, like this:

+[http://www.braynzarsoft.net/image/100028][Texture UV over 1]

Another sampling configuration might be a border of a certain color: if you go over 1 or under 0, you see a solid color for the amount of space you have gone "out of bounds". In OpenGL, along with many 3D modeling programs, the V axis is flipped, so that the texture coordinates start at the bottom left (0,0) and move to the top right (1,1).

####Samplers####

A sampler tells the shader how it should read a texture, given a texture coordinate, to get a pixel color. Samplers are a different type of resource, which you can bind to the pipeline through the root signature. You can create **Static Samplers**, which are samplers "baked" into the root signature.
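The wrap and border behaviours described under Texture Space can be sketched in plain C++. The real work is done by the GPU sampler (the corresponding D3D12 address modes are D3D12_TEXTURE_ADDRESS_MODE_WRAP and D3D12_TEXTURE_ADDRESS_MODE_BORDER); these helper functions are just an illustration of the coordinate math.

```cpp
#include <cassert>
#include <cmath>
#include <optional>

// WRAP: the texture repeats, so only the fractional part of the coordinate
// matters. A coordinate of 1.25 samples the same spot as 0.25.
float WrapCoord(float u) {
    return u - std::floor(u); // floor handles negative coordinates too
}

// BORDER: anything outside [0,1] returns no texel at all; the sampler
// substitutes the configured border color instead.
std::optional<float> BorderCoord(float u) {
    if (u < 0.0f || u > 1.0f) return std::nullopt; // out of bounds -> border color
    return u;
}
```

For example, WrapCoord(1.25f) and WrapCoord(-0.75f) both land on 0.25, which is exactly the repeating pattern shown in the "Texture UV over 1" image above.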
Static samplers cannot be changed once the root signature has been created, but since they are part of the root signature, they are generally more performant: you do not need to upload a sampler resource and bind it as a root argument. Static samplers also do not add to the total memory used by the root signature the way the other root parameters do (e.g. a root constant costs 1 DWORD of the 64-DWORD maximum). In this tutorial, we will be using a static sampler as it makes the code a little easier.

On to the code~

####New Globals####

The first new global is a resource object called textureBuffer. This resource will hold our texture data. The next few are function prototypes; these functions will be used to load in our texture from a file using the WIC API. We have a new descriptor heap that will store the SRV for our texture, and after that we have an upload heap to upload our texture and copy it to the textureBuffer resource (default heap).

```cpp
ID3D12Resource* textureBuffer; // the resource heap containing our texture

int LoadImageDataFromFile(BYTE** imageData, D3D12_RESOURCE_DESC& resourceDescription, LPCWSTR filename, int &bytesPerRow);
DXGI_FORMAT GetDXGIFormatFromWICFormat(WICPixelFormatGUID& wicFormatGUID);
WICPixelFormatGUID GetConvertToWICFormat(WICPixelFormatGUID& wicFormatGUID);
int GetDXGIFormatBitsPerPixel(DXGI_FORMAT& dxgiFormat);

ID3D12DescriptorHeap* mainDescriptorHeap;
ID3D12Resource* textureBufferUploadHeap;
```

####New Vertex Structure####

We removed the XMFLOAT4 color member and added an XMFLOAT2 member to the vertex structure to represent the texture coordinate data for each vertex.

```cpp
struct Vertex {
    Vertex(float x, float y, float z, float u, float v) : pos(x, y, z), texCoord(u, v) {}
    XMFLOAT3 pos;
    XMFLOAT2 texCoord;
};
```

####Descriptor Table####

We have added two new parameters to our root signature: a descriptor table and a static sampler.
We have talked about descriptor tables in a previous .[http://www.braynzarsoft.net/viewtutorial/q16390-directx-12-constant-buffers-root-descriptor-tables][tutorial], so I will not re-explain them here. The first root parameter [0] is our constant buffer, the second [1] is our descriptor table which will contain a descriptor to our SRV. // create a descriptor range (descriptor table) and fill it out // this is a range of descriptors inside a descriptor heap D3D12_DESCRIPTOR_RANGE descriptorTableRanges[1]; // only one range right now descriptorTableRanges[0].RangeType = D3D12_DESCRIPTOR_RANGE_TYPE_SRV; // this is a range of shader resource views (descriptors) descriptorTableRanges[0].NumDescriptors = 1; // we only have one texture right now, so the range is only 1 descriptorTableRanges[0].BaseShaderRegister = 0; // start index of the shader registers in the range descriptorTableRanges[0].RegisterSpace = 0; // space 0. can usually be zero descriptorTableRanges[0].OffsetInDescriptorsFromTableStart = D3D12_DESCRIPTOR_RANGE_OFFSET_APPEND; // this appends the range to the end of the previous range in this descriptor table // create a descriptor table D3D12_ROOT_DESCRIPTOR_TABLE descriptorTable; descriptorTable.NumDescriptorRanges = _countof(descriptorTableRanges); // we only have one range descriptorTable.pDescriptorRanges = &descriptorTableRanges[0]; // the pointer to the beginning of our ranges array // create a root parameter for the root descriptor and fill it out D3D12_ROOT_PARAMETER rootParameters[2]; // two root parameters rootParameters[0].ParameterType = D3D12_ROOT_PARAMETER_TYPE_CBV; // this is a constant buffer view root descriptor rootParameters[0].Descriptor = rootCBVDescriptor; // this is the root descriptor for this root parameter rootParameters[0].ShaderVisibility = D3D12_SHADER_VISIBILITY_VERTEX; // our vertex shader will be the only shader accessing this parameter for now // fill out the parameter for our descriptor table.
Remember it's a good idea to sort parameters by frequency of change. Our constant // buffer will be changed multiple times per frame, while our descriptor table will not be changed at all (in this tutorial) rootParameters[1].ParameterType = D3D12_ROOT_PARAMETER_TYPE_DESCRIPTOR_TABLE; // this is a descriptor table rootParameters[1].DescriptorTable = descriptorTable; // this is our descriptor table for this root parameter rootParameters[1].ShaderVisibility = D3D12_SHADER_VISIBILITY_PIXEL; // our pixel shader will be the only shader accessing this parameter for now ####The static sampler#### Here we create a static sampler by filling out a D3D12_STATIC_SAMPLER_DESC structure. typedef struct .[https://msdn.microsoft.com/en-us/library/windows/desktop/dn986748(v=vs.85).aspx][D3D12_STATIC_SAMPLER_DESC] { D3D12_FILTER Filter; D3D12_TEXTURE_ADDRESS_MODE AddressU; D3D12_TEXTURE_ADDRESS_MODE AddressV; D3D12_TEXTURE_ADDRESS_MODE AddressW; FLOAT MipLODBias; UINT MaxAnisotropy; D3D12_COMPARISON_FUNC ComparisonFunc; D3D12_STATIC_BORDER_COLOR BorderColor; FLOAT MinLOD; FLOAT MaxLOD; UINT ShaderRegister; UINT RegisterSpace; D3D12_SHADER_VISIBILITY ShaderVisibility; } D3D12_STATIC_SAMPLER_DESC; - **Filter** - *This is a .[https://msdn.microsoft.com/en-us/library/windows/desktop/dn770367(v=vs.85).aspx][D3D12_FILTER] enumeration value. This explains what type of filter we would like to use when sampling from the texture.* - **AddressU** - *This is a .[https://msdn.microsoft.com/en-us/library/windows/desktop/dn770441(v=vs.85).aspx][D3D12_TEXTURE_ADDRESS_MODE] enumeration value. This explains what should happen when a texture coordinate is out of the 0-1 bounds (greater than 1 or less than 0).* - **AddressV** - *Same as above* - **AddressW** - *Same as above* - **MipLODBias** - *For mipmapped textures, this is the offset from what D3D thinks the mip level should be. 
If we set this to 1 for example, and D3D decided the mip level that should be sampled is level 2, then we would sample from mip level 3.* - **MaxAnisotropy** - *A clamping value used for when the filter is set to D3D12_FILTER_ANISOTROPIC or D3D12_FILTER_COMPARISON_ANISOTROPIC. Value must be between 1 and 16.* - **ComparisonFunc** - *A .[https://msdn.microsoft.com/en-us/library/windows/desktop/dn770349(v=vs.85).aspx][D3D12_COMPARISON_FUNC] enumeration value. This is a function used to compare the sampled value with existing sampled data. To tell you the truth, I'm not exactly sure what sampled data the D3D12 docs are talking about, so I just set it to D3D12_COMPARISON_FUNC_NEVER.* - **BorderColor** - *If border is specified for one of the address members above, this is the value that will be sampled for values outside the 0-1 texture coordinate range. It can only be transparent black, opaque black, or opaque white, a .[https://msdn.microsoft.com/en-us/library/windows/desktop/dn903815(v=vs.85).aspx][D3D12_STATIC_BORDER_COLOR] enumeration value.* - **MinLOD** - *This is the mipmap level to clamp the lower end of the mipmap range to, where 0 is the lowest level and the most detailed mipmap. You might set this to higher than zero if you find that the available video memory is too low when you create the sampler and decide to cut off the more detailed mipmaps.* - **MaxLOD** - *The opposite of the above. This is the upper level to clamp the mipmap levels to, where the mipmaps are smaller images and less detailed. Set this to D3D12_FLOAT32_MAX to not have the mipmaps clamped.* - **ShaderRegister** - *This is the **s** register you are binding this sampler to. We have a sampler at s0, for example, in this tutorial. You will use this sampler to sample a value from the bound SRV (t register) for a pixel.* - **RegisterSpace** - *This is the register space. Register spaces were discussed in a previous tutorial.* - **ShaderVisibility** - *This is the visibility of this sampler to shaders.
Only one shader can have visibility to the sampler, or all of them with the D3D12_SHADER_VISIBILITY_ALL value. We are sampling the texture from the pixel shader, so we set this to D3D12_SHADER_VISIBILITY_PIXEL* // create a static sampler D3D12_STATIC_SAMPLER_DESC sampler = {}; sampler.Filter = D3D12_FILTER_MIN_MAG_MIP_POINT; sampler.AddressU = D3D12_TEXTURE_ADDRESS_MODE_BORDER; sampler.AddressV = D3D12_TEXTURE_ADDRESS_MODE_BORDER; sampler.AddressW = D3D12_TEXTURE_ADDRESS_MODE_BORDER; sampler.MipLODBias = 0; sampler.MaxAnisotropy = 0; sampler.ComparisonFunc = D3D12_COMPARISON_FUNC_NEVER; sampler.BorderColor = D3D12_STATIC_BORDER_COLOR_TRANSPARENT_BLACK; sampler.MinLOD = 0.0f; sampler.MaxLOD = D3D12_FLOAT32_MAX; sampler.ShaderRegister = 0; sampler.RegisterSpace = 0; sampler.ShaderVisibility = D3D12_SHADER_VISIBILITY_PIXEL; ####Updated Root Signature#### We now create the root signature description. This root signature differs from the last tutorial's in that we now have a static sampler, as well as a second root parameter for the descriptor table. The first parameter of the Init function of the root signature description (in the d3dx12.h header) is the number of parameters we want to create this root signature with. We have 2 parameters, one for the root descriptor for our constant buffer, and one for the descriptor table that will store our texture's SRV. The second parameter is the array of D3D12_ROOT_PARAMETER structures, which define each of the parameters. The third parameter is the number of static samplers we are creating this root signature with. We only have one, so we set that parameter to 1. The fourth parameter is a reference to our sampler description. The last parameter is the flags we are creating the root signature with. We want to deny root signature access to as many shader stages as possible; stages that do not need access to the root signature will not know about it, which saves a little performance.
Our vertex shader, as well as our pixel shader both need access to the root signature, so we have denied all shaders access except for those two. CD3DX12_ROOT_SIGNATURE_DESC rootSignatureDesc; rootSignatureDesc.Init(_countof(rootParameters), // we have 2 root parameters rootParameters, // a pointer to the beginning of our root parameters array 1, // we have one static sampler &sampler, // a pointer to our static sampler (array) D3D12_ROOT_SIGNATURE_FLAG_ALLOW_INPUT_ASSEMBLER_INPUT_LAYOUT | // we can deny shader stages here for better performance D3D12_ROOT_SIGNATURE_FLAG_DENY_HULL_SHADER_ROOT_ACCESS | D3D12_ROOT_SIGNATURE_FLAG_DENY_DOMAIN_SHADER_ROOT_ACCESS | D3D12_ROOT_SIGNATURE_FLAG_DENY_GEOMETRY_SHADER_ROOT_ACCESS); ####New Input Layout#### We removed the COLOR input element and replaced it with a TEXCOORD element which is a DXGI_FORMAT_R32G32_FLOAT. D3D12_INPUT_ELEMENT_DESC inputLayout[] = { { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0, D3D12_INPUT_CLASSIFICATION_PER_VERTEX_DATA, 0 }, { "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT, 0, 12, D3D12_INPUT_CLASSIFICATION_PER_VERTEX_DATA, 0 } }; ####New Cube Data#### We have replaced the 4 color values for each vertex with 2 values for the texture coordinate Vertex vList[] = { // front face { -0.5f, 0.5f, -0.5f, 0.0f, 0.0f }, { 0.5f, -0.5f, -0.5f, 1.0f, 1.0f }, { -0.5f, -0.5f, -0.5f, 0.0f, 1.0f }, { 0.5f, 0.5f, -0.5f, 1.0f, 0.0f }, // right side face { 0.5f, -0.5f, -0.5f, 0.0f, 1.0f }, { 0.5f, 0.5f, 0.5f, 1.0f, 0.0f }, { 0.5f, -0.5f, 0.5f, 1.0f, 1.0f }, { 0.5f, 0.5f, -0.5f, 0.0f, 0.0f }, // left side face { -0.5f, 0.5f, 0.5f, 0.0f, 0.0f }, { -0.5f, -0.5f, -0.5f, 1.0f, 1.0f }, { -0.5f, -0.5f, 0.5f, 0.0f, 1.0f }, { -0.5f, 0.5f, -0.5f, 1.0f, 0.0f }, // back face { 0.5f, 0.5f, 0.5f, 0.0f, 0.0f }, { -0.5f, -0.5f, 0.5f, 1.0f, 1.0f }, { 0.5f, -0.5f, 0.5f, 0.0f, 1.0f }, { -0.5f, 0.5f, 0.5f, 1.0f, 0.0f }, // top face { -0.5f, 0.5f, -0.5f, 0.0f, 1.0f }, { 0.5f, 0.5f, 0.5f, 1.0f, 0.0f }, { 0.5f, 0.5f, -0.5f, 1.0f, 1.0f }, { 
-0.5f, 0.5f, 0.5f, 0.0f, 0.0f }, // bottom face { 0.5f, -0.5f, 0.5f, 0.0f, 0.0f }, { -0.5f, -0.5f, -0.5f, 1.0f, 1.0f }, { 0.5f, -0.5f, -0.5f, 0.0f, 1.0f }, { -0.5f, -0.5f, 0.5f, 1.0f, 0.0f }, }; ####Loading the texture from a file#### We have created a couple of functions that will load the image data from a file into memory, decoded and laid out in a DXGI compatible format (e.g. 32-bit RGBA). The function is called *LoadImageDataFromFile()*, and takes 4 parameters. The first parameter is a reference to a BYTE array. This BYTE array (BYTE*) will store the actual image data once the image has been decoded into a DXGI compatible format. The second parameter is a reference to a D3D12_RESOURCE_DESC structure. This structure will be filled out in the function as we load and decode the image. It will contain all the texture information such as width, height, and pixel format. This function will not create mipmaps, although once you have the data and the initial texture description, you could calculate and create your own mipmaps and update the texture description with the number of mipmaps. The third parameter is the filename we want to load. In this case, we are loading a JPEG file called braynzar.jpg. Finally, we have a reference to the number of bytes per row in the image. This will get filled out by the function. The function returns the size in bytes of the image. // Load the image from file D3D12_RESOURCE_DESC textureDesc; int imageBytesPerRow; BYTE* imageData; int imageSize = LoadImageDataFromFile(&imageData, textureDesc, L"braynzar.jpg", imageBytesPerRow); ####LoadImageDataFromFile()#### This is a custom function which uses the WIC API to load and decode an image file to a DXGI compatible format. It fills out a D3D12_RESOURCE_DESC, and returns the image data, the image size in bytes, and the bytes per row of the image. You will see later why we need the bytes per row. We must correctly align data ourselves in D3D12 in a way that is compatible with the D3D drivers and GPU.
Let's start at the top of this function. Some of the WIC API calls return an HRESULT, so we declare one at the top of the function. We only need one IWICImagingFactory for our application, which we will use to create a decoder, frame, and converter, so we make this variable static. The decoder, frame, and converter are all per image, so we must recreate each of these for every image we load in. Not all WIC formats are compatible with DXGI, so in the cases where we must convert the format of the image to a compatible DXGI format, we set imageConverted to true, which we will use in a bit. If we have not yet created a WIC factory, we do that now. This should only happen once, when we load our first image in. // load and decode image from file int LoadImageDataFromFile(BYTE** imageData, D3D12_RESOURCE_DESC& resourceDescription, LPCWSTR filename, int &bytesPerRow) { HRESULT hr; // we only need one instance of the imaging factory to create decoders and frames static IWICImagingFactory *wicFactory; // reset decoder, frame and converter since these will be different for each image we load IWICBitmapDecoder *wicDecoder = NULL; IWICBitmapFrameDecode *wicFrame = NULL; IWICFormatConverter *wicConverter = NULL; bool imageConverted = false; if (wicFactory == NULL) { // Initialize the COM library CoInitialize(NULL); // create the WIC factory hr = CoCreateInstance( CLSID_WICImagingFactory, NULL, CLSCTX_INPROC_SERVER, IID_PPV_ARGS(&wicFactory) ); if (FAILED(hr)) return 0; } ####Create the Bitmap Decoder#### We can use the .[https://msdn.microsoft.com/en-us/library/windows/desktop/ee690307(v=vs.85).aspx][CreateDecoderFromFilename()] method of the WIC factory to create a decoder from a file.
// load a decoder for the image hr = wicFactory->CreateDecoderFromFilename( filename, // Image we want to load in NULL, // This is a vendor ID, we do not prefer a specific one so set to null GENERIC_READ, // We want to read from this file WICDecodeMetadataCacheOnLoad, // We will cache the metadata right away, rather than when needed, which might be unknown &wicDecoder // the wic decoder to be created ); if (FAILED(hr)) return 0; ####Grab the first frame in the image#### Some image formats, such as GIF, have multiple images, or frames. Other image formats, like JPEG, only have one image, or frame, so we just grab the first frame from the decoder, using the .[https://msdn.microsoft.com/en-us/library/windows/desktop/ee690098(v=vs.85).aspx][GetFrame()] method of the WIC decoder. The first argument is the frame number we wish to grab, the second is the IWICBitmapFrameDecode we wish to store this frame in. // get image from decoder (this will decode the "frame") hr = wicDecoder->GetFrame(0, &wicFrame); if (FAILED(hr)) return 0; ####Get WIC Pixel Format From Frame#### Now we get the pixel format of the decoded image. We can do this with the .[https://msdn.microsoft.com/en-us/library/windows/desktop/ee690181(v=vs.85).aspx][GetPixelFormat()] method of the IWICBitmapSource interface (IWICBitmapFrameDecode inherits from IWICBitmapSource). // get wic pixel format of image WICPixelFormatGUID pixelFormat; hr = wicFrame->GetPixelFormat(&pixelFormat); if (FAILED(hr)) return 0; ####Get Size of the image#### We need to get the width and height of the image, which we can do with the .[https://msdn.microsoft.com/en-us/library/windows/desktop/ee690185(v=vs.85).aspx][GetSize()] method of the IWICBitmapSource interface. // get size of image UINT textureWidth, textureHeight; hr = wicFrame->GetSize(&textureWidth, &textureHeight); if (FAILED(hr)) return 0; ####Get Compatible DXGI Format#### Now we call another custom function we have, which is just a bunch of if statements.
This function will return a DXGI format that is compatible with the WIC format of the image. If there is no compatible DXGI format, it will return DXGI_FORMAT_UNKNOWN, in which case we know we must try to convert the image to a DXGI compatible format. // convert wic pixel format to dxgi pixel format DXGI_FORMAT dxgiFormat = GetDXGIFormatFromWICFormat(pixelFormat); ####Convert image to DXGI format if necessary#### Basically what we are doing here is finding a WIC format that can be converted to, based on the current WIC pixel format of the image. Once we have a format we know is DXGI compatible, we convert the image. We do this by first creating an IWICFormatConverter with the WIC factory, checking if the current format can be converted to the new format, then finally performing the conversion with the IWICFormatConverter object. The newly converted image will now reside in the IWICFormatConverter object, so when we get the pixel data, we will have to either get it from the IWICBitmapFrameDecode if the image did not need to be converted, or the IWICFormatConverter if the image did need to be converted. This is where the imageConverted boolean variable comes in. If the image was converted, we set this to true.
// if the format of the image is not a supported dxgi format, try to convert it if (dxgiFormat == DXGI_FORMAT_UNKNOWN) { // get a dxgi compatible wic format from the current image format WICPixelFormatGUID convertToPixelFormat = GetConvertToWICFormat(pixelFormat); // return if no dxgi compatible format was found if (convertToPixelFormat == GUID_WICPixelFormatDontCare) return 0; // set the dxgi format dxgiFormat = GetDXGIFormatFromWICFormat(convertToPixelFormat); // create the format converter hr = wicFactory->CreateFormatConverter(&wicConverter); if (FAILED(hr)) return 0; // make sure we can convert to the dxgi compatible format BOOL canConvert = FALSE; hr = wicConverter->CanConvert(pixelFormat, convertToPixelFormat, &canConvert); if (FAILED(hr) || !canConvert) return 0; // do the conversion (wicConverter will contain the converted image) hr = wicConverter->Initialize(wicFrame, convertToPixelFormat, WICBitmapDitherTypeErrorDiffusion, 0, 0, WICBitmapPaletteTypeCustom); if (FAILED(hr)) return 0; // this is so we know to get the image data from the wicConverter (otherwise we will get from wicFrame) imageConverted = true; } ####Get image size information#### Now we get the bits per pixel based on the dxgi format by calling another custom function which is just a bunch of if statements which will return the bits per pixel based on the given DXGI format. Once we have the bits per pixel, we can calculate the bytes per row. Bytes per row is calculated by (width * bitsperpixel) / 8. We must get the bytes per row because of how the D3D drivers manage caching resources, which gives us alignment responsibilities. Then we get the actual image size (in bytes). 
int bitsPerPixel = GetDXGIFormatBitsPerPixel(dxgiFormat); // number of bits per pixel bytesPerRow = (textureWidth * bitsPerPixel) / 8; // number of bytes in each row of the image data int imageSize = bytesPerRow * textureHeight; // total image size in bytes ####Get the pixel data#### The first thing we do is allocate enough memory to store the image in. The image at this point has been decoded and will most likely be much larger than the file it was stored in (because it was encoded). We do this with malloc, passing in the size of the image in bytes. Once we have allocated enough memory, we copy the pixel data from either the wic frame or wic format converter (depending on if the image was converted) to the allocated memory. We copy the data from these interfaces using the CopyPixels method of the interface. // allocate enough memory for the raw image data, and set imageData to point to that memory *imageData = (BYTE*)malloc(imageSize); // copy (decoded) raw image data into the newly allocated memory (imageData) if (imageConverted) { // if image format needed to be converted, the wic converter will contain the converted image hr = wicConverter->CopyPixels(0, bytesPerRow, imageSize, *imageData); if (FAILED(hr)) return 0; } else { // no need to convert, just copy data from the wic frame hr = wicFrame->CopyPixels(0, bytesPerRow, imageSize, *imageData); if (FAILED(hr)) return 0; } ####Filling out a texture description#### Now that we have our image stored in memory, and we have information about the image, we must fill out a D3D12_RESOURCE_DESC structure. I have decided to pass a reference to this structure through a parameter so that we can have this function return multiple pieces of data. The parameter name is resourceDescription, which is the structure we will be filling out. 
typedef struct .[https://msdn.microsoft.com/en-us/library/windows/desktop/dn903813(v=vs.85).aspx][D3D12_RESOURCE_DESC] { D3D12_RESOURCE_DIMENSION Dimension; UINT64 Alignment; UINT64 Width; UINT Height; UINT16 DepthOrArraySize; UINT16 MipLevels; DXGI_FORMAT Format; DXGI_SAMPLE_DESC SampleDesc; D3D12_TEXTURE_LAYOUT Layout; D3D12_RESOURCE_FLAGS Flags; } D3D12_RESOURCE_DESC; - **Dimension** - *This is a .[https://msdn.microsoft.com/en-us/library/windows/desktop/dn770396(v=vs.85).aspx][D3D12_RESOURCE_DIMENSION] enumeration value. Basically this just says what type of resource this is. We are creating a 2D texture, so we set this to D3D12_RESOURCE_DIMENSION_TEXTURE2D* - **Alignment** - *This is the alignment of the resource. Values can be 0, 4KB (4096, or D3D12_SMALL_RESOURCE_PLACEMENT_ALIGNMENT), 64KB (65536, or D3D12_DEFAULT_RESOURCE_PLACEMENT_ALIGNMENT), or 4MB (4194304, or D3D12_DEFAULT_MSAA_RESOURCE_PLACEMENT_ALIGNMENT). If 0 is set, the D3D drivers will automatically decide the alignment based on the size of the image and the number of mipmaps. If you are using placed or reserved resources, you will want to explicitly set this yourself so that you can better map your resources to heaps.* - **Width** - *Width of the texture* - **Height** - *Height of the texture* - **DepthOrArraySize** - *Set this to 1 for a 1D or 2D texture. Otherwise this is the depth of the texture when working with a 3D texture.* - **MipLevels** - *Number of mip levels in this resource. We are not creating any mipmaps here, so we set this to 1, so only one mipmap level.* - **Format** - *This is the DXGI format of the texture.* - **SampleDesc** - *This is a DXGI_SAMPLE_DESC structure, which we talked about in a previous tutorial. There are only two members, Count and Quality. We set these to 1 and 0 respectively.* - **Layout** - *A .[https://msdn.microsoft.com/en-us/library/windows/desktop/dn770442(v=vs.85).aspx][D3D12_TEXTURE_LAYOUT] enumeration value.
We can set this to D3D12_TEXTURE_LAYOUT_UNKNOWN to let the D3D driver choose the most efficient pixel layout. This is how the data is laid out in memory.* - **Flags** - *A .[https://msdn.microsoft.com/en-us/library/windows/desktop/dn986742(v=vs.85).aspx][D3D12_RESOURCE_FLAGS] enumeration value. We will set this to D3D12_RESOURCE_FLAG_NONE here.* // now describe the texture with the information we have obtained from the image resourceDescription = {}; resourceDescription.Dimension = D3D12_RESOURCE_DIMENSION_TEXTURE2D; resourceDescription.Alignment = 0; // may be 0, 4KB, 64KB, or 4MB. 0 will let runtime decide between 64KB and 4MB (4MB for multi-sampled textures) resourceDescription.Width = textureWidth; // width of the texture resourceDescription.Height = textureHeight; // height of the texture resourceDescription.DepthOrArraySize = 1; // if 3d image, depth of 3d image. Otherwise an array of 1D or 2D textures (we only have one image, so we set 1) resourceDescription.MipLevels = 1; // Number of mipmaps. We are not generating mipmaps for this texture, so we have only one level resourceDescription.Format = dxgiFormat; // This is the dxgi format of the image (format of the pixels) resourceDescription.SampleDesc.Count = 1; // This is the number of samples per pixel, we just want 1 sample resourceDescription.SampleDesc.Quality = 0; // The quality level of the samples. Higher is better quality, but worse performance resourceDescription.Layout = D3D12_TEXTURE_LAYOUT_UNKNOWN; // The arrangement of the pixels. Setting to unknown lets the driver choose the most efficient one resourceDescription.Flags = D3D12_RESOURCE_FLAG_NONE; // no flags ####Returning from the function#### Finally we return from the function, explicitly returning the size of the image (the other return values were passed out through the parameters). // return the size of the image.
remember to free the image data once you're done with it (in this tutorial, once it's uploaded to the GPU) return imageSize; } ####Make sure there is an image#### Going back to the InitD3D() function, once we load in our texture from the file, we first check its size to make sure we have data // make sure we have data if(imageSize <= 0) { Running = false; return false; } ####Create the texture resource#### Now that we have a texture description and texture data, we need to create a resource. We will create a committed resource here, in video memory (a default heap), and we will use an upload heap to copy the image data into that resource. The texture description defines how large the resource must be. // create a default heap where the upload heap will copy its contents into (contents being the texture) hr = device->CreateCommittedResource( &CD3DX12_HEAP_PROPERTIES(D3D12_HEAP_TYPE_DEFAULT), // a default heap D3D12_HEAP_FLAG_NONE, // no flags &textureDesc, // the description of our texture D3D12_RESOURCE_STATE_COPY_DEST, // We will copy the texture from the upload heap to here, so we start it out in a copy dest state nullptr, // used for render targets and depth/stencil buffers IID_PPV_ARGS(&textureBuffer)); if (FAILED(hr)) { Running = false; return false; } textureBuffer->SetName(L"Texture Buffer Resource Heap"); ####Create an upload heap#### Now we need to create an upload heap large enough to fit the data in, and copy the image data to the texture resource. The first thing we will do is find out how large the upload heap must be to store our texture. This is where an alignment requirement must be met. The row pitch is the size in bytes of each row of pixels in the image. For an upload heap, the row pitch must be 256 byte aligned for every row, except for the very last row, which does not need to meet this alignment requirement. So how do we calculate this?
We take the size in bytes of each row, pad it up to a 256-byte alignment for every row except the last, then add the unpadded size in bytes of the last row. You can do it in code like this: int textureHeapSize = ((((width * numBytesPerPixel) + 255) & ~255) * (height - 1)) + (width * numBytesPerPixel); Or we can use an API the D3D device offers us, .[https://msdn.microsoft.com/en-us/library/windows/desktop/dn986878(v=vs.85).aspx][GetCopyableFootprints()], which is what we will do. Try the equation above and compare it to the result of GetCopyableFootprints; they should end up with the same value. Once we have the required upload heap size for the texture, we create the upload heap (as a committed resource). UINT64 textureUploadBufferSize; // this function gets the size an upload buffer needs to be to upload a texture to the gpu. // each row must be 256 byte aligned except for the last row, which can just be the size in bytes of the row // eg. textureUploadBufferSize = ((((width * numBytesPerPixel) + 255) & ~255) * (height - 1)) + (width * numBytesPerPixel); //textureUploadBufferSize = (((imageBytesPerRow + 255) & ~255) * (textureDesc.Height - 1)) + imageBytesPerRow; device->GetCopyableFootprints(&textureDesc, 0, 1, 0, nullptr, nullptr, nullptr, &textureUploadBufferSize); // now we create an upload heap to upload our texture to the GPU hr = device->CreateCommittedResource( &CD3DX12_HEAP_PROPERTIES(D3D12_HEAP_TYPE_UPLOAD), // upload heap D3D12_HEAP_FLAG_NONE, // no flags &CD3DX12_RESOURCE_DESC::Buffer(textureUploadBufferSize), // resource description for a buffer (storing the image data in this heap just to copy to the default heap) D3D12_RESOURCE_STATE_GENERIC_READ, // We will copy the contents from this heap to the default heap above nullptr, IID_PPV_ARGS(&textureBufferUploadHeap)); if (FAILED(hr)) { Running = false; return false; } textureBufferUploadHeap->SetName(L"Texture Buffer Upload Resource Heap"); ####Upload the texture#### We now have a default heap that the pixel
shader can read from, and an upload heap we can use to upload the texture data and copy it to the default heap. Let's do that now. We will use the UpdateSubresources() function from the d3dx12.h helper header to copy the resource data through the upload heap to the default heap. This function does a lot of stuff for us and then eventually calls .[https://msdn.microsoft.com/en-us/library/windows/desktop/dn903862(v=vs.85).aspx][CopyTextureRegion()] to copy the resource from the upload heap to the default heap. UINT64 inline .[https://msdn.microsoft.com/en-us/library/windows/desktop/dn899213(v=vs.85).aspx][UpdateSubresources]( _In_ ID3D12GraphicsCommandList *pCmdList, _In_ ID3D12Resource *pDestinationResource, _In_ ID3D12Resource *pIntermediate, UINT64 IntermediateOffset, _In_ UINT FirstSubresource, _In_ UINT NumSubresources, _In_ D3D12_SUBRESOURCE_DATA *pSrcData ); - **pCmdList** - *The command list that will be executing this command* - **pDestinationResource** - *The resource we wish to update* - **pIntermediate** - *The intermediate (upload heap) resource we will update the destination resource with* - **IntermediateOffset** - *The offset in bytes to the resource data in the upload heap we want to copy from. The upload heap only has this one resource, so we do not need to offset. Set this to 0* - **FirstSubresource** - *The index of the first subresource in the intermediate resource we want to copy from. We only have one resource, so we set this to 0* - **NumSubresources** - *The number of subresources we want to copy. We only have one resource, and we want to copy the entire thing, so we set this to 1* - **pSrcData** - *The actual data we will be copying.* After we copy the texture data to the default heap, we must transition the state of the texture resource from copy destination to pixel shader resource. We do this with a resource barrier.
// store texture data in upload heap D3D12_SUBRESOURCE_DATA textureData = {}; textureData.pData = &imageData[0]; // pointer to our image data textureData.RowPitch = imageBytesPerRow; // size in bytes of each row of the image data textureData.SlicePitch = imageBytesPerRow * textureDesc.Height; // total size in bytes of the image data // Now we copy the upload buffer contents to the default heap UpdateSubresources(commandList, textureBuffer, textureBufferUploadHeap, 0, 0, 1, &textureData); // transition the texture default heap to a pixel shader resource (we will be sampling from this heap in the pixel shader to get the color of pixels) commandList->ResourceBarrier(1, &CD3DX12_RESOURCE_BARRIER::Transition(textureBuffer, D3D12_RESOURCE_STATE_COPY_DEST, D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE)); ####Create the SRV Descriptor Heap#### Now we create a descriptor heap to store the descriptor describing our texture resource (SRV). We've created descriptor heaps in a previous tutorial, so if you don't know what this is all about, you can go read the Constant Buffer tutorial. // create the descriptor heap that will store our srv D3D12_DESCRIPTOR_HEAP_DESC heapDesc = {}; heapDesc.NumDescriptors = 1; heapDesc.Flags = D3D12_DESCRIPTOR_HEAP_FLAG_SHADER_VISIBLE; heapDesc.Type = D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV; hr = device->CreateDescriptorHeap(&heapDesc, IID_PPV_ARGS(&mainDescriptorHeap)); if (FAILED(hr)) { Running = false; } ####Create the SRV descriptor#### Here we create the Shader Resource View which describes our texture and where to find it. We will store this SRV in the descriptor heap we just created above, then use a descriptor table to point to this SRV, so that the pixel shader can use the texture. To create an SRV, we fill out a D3D12_SHADER_RESOURCE_VIEW_DESC structure.
typedef struct D3D12_SHADER_RESOURCE_VIEW_DESC { DXGI_FORMAT Format; D3D12_SRV_DIMENSION ViewDimension; UINT Shader4ComponentMapping; union { D3D12_BUFFER_SRV Buffer; D3D12_TEX1D_SRV Texture1D; D3D12_TEX1D_ARRAY_SRV Texture1DArray; D3D12_TEX2D_SRV Texture2D; D3D12_TEX2D_ARRAY_SRV Texture2DArray; D3D12_TEX2DMS_SRV Texture2DMS; D3D12_TEX2DMS_ARRAY_SRV Texture2DMSArray; D3D12_TEX3D_SRV Texture3D; D3D12_TEXCUBE_SRV TextureCube; D3D12_TEXCUBE_ARRAY_SRV TextureCubeArray; }; } D3D12_SHADER_RESOURCE_VIEW_DESC; - **Format** - *This is the DXGI format of the resource, a DXGI_FORMAT enumeration value.* - **ViewDimension** - *This is a .[https://msdn.microsoft.com/en-us/library/windows/desktop/dn770408(v=vs.85).aspx][D3D12_SRV_DIMENSION] enumeration value. For a 2D texture, we specify D3D12_SRV_DIMENSION_TEXTURE2D.* - **Shader4ComponentMapping** - *This is a .[https://msdn.microsoft.com/en-us/library/windows/desktop/dn903814(v=vs.85).aspx][D3D12_SHADER_COMPONENT_MAPPING] enumeration value. Specifying D3D12_DEFAULT_SHADER_4_COMPONENT_MAPPING indicates a default 1:1 mapping. Other values let you remap which memory component each of the 4 shader components reads from, such as D3D12_SHADER_COMPONENT_MAPPING_FROM_MEMORY_COMPONENT_0 to read from the red channel.* - **Buffer - TextureCubeArray** - *These members share a union, so we only set the one that matches our resource type. Since we are working with a single 2D texture, we set the Texture2D member's MipLevels to 1, since we are not mipmapping and only have one level.
We can leave the other members of the Texture2D structure at their default values.* // now we create a shader resource view (a descriptor that points to the texture and describes it) D3D12_SHADER_RESOURCE_VIEW_DESC srvDesc = {}; srvDesc.Shader4ComponentMapping = D3D12_DEFAULT_SHADER_4_COMPONENT_MAPPING; srvDesc.Format = textureDesc.Format; srvDesc.ViewDimension = D3D12_SRV_DIMENSION_TEXTURE2D; srvDesc.Texture2D.MipLevels = 1; device->CreateShaderResourceView(textureBuffer, &srvDesc, mainDescriptorHeap->GetCPUDescriptorHandleForHeapStart()); ####Upload the texture#### Now that we have recorded some commands, specifically uploading the texture and transitioning the texture buffer from the copy dest state to the pixel shader resource state, we need to execute these commands before we can use the texture. Remember that you must make sure your resources are fully uploaded, resident, and GPU visible before you attempt to use them in any of the shaders. After we upload the texture data, we can delete our local copy of it since we have no further use for it. Remember the idea of residency. We do not need to worry about it in this tutorial, but in your own applications, if you are finished using a resource for the time being but plan on using it again, instead of releasing the resource you can "Evict" it, which makes the memory available if another resource needs it, moving the data to a lower memory tier such as system memory or the page file if needed. When you need it again, you can "MakeResident" the resource instead of loading it from the file, decoding and uploading it all over again. Now I said "resource" in the above paragraph, but the truth is you can only Evict and MakeResident ENTIRE heaps, which may have many resources inside them. You cannot evict or make resident part of a heap, so you will want to group your resources into heaps based on their usage patterns, or just use committed resources.
// Now we execute the command list to upload the initial assets (triangle and texture data) commandList->Close(); ID3D12CommandList* ppCommandLists[] = { commandList }; commandQueue->ExecuteCommandLists(_countof(ppCommandLists), ppCommandLists); // increment the fence value now, otherwise the buffer might not be uploaded by the time we start drawing fenceValue[frameIndex]++; hr = commandQueue->Signal(fence[frameIndex], fenceValue[frameIndex]); if (FAILED(hr)) { Running = false; return false; } // we are done with the image data now that we've uploaded it to the gpu, so free it up // (make sure the deallocation matches the allocation: free() for malloc, delete[] for new[]) delete imageData; ####Set the SRV's descriptor heap as a Root Parameter#### We've done this before. We create an array of descriptor heaps (containing only our SRV descriptor heap) and set them as the current descriptor heaps. // set the descriptor heap ID3D12DescriptorHeap* descriptorHeaps[] = { mainDescriptorHeap }; commandList->SetDescriptorHeaps(_countof(descriptorHeaps), descriptorHeaps); ####Set the root descriptor table#### Now we get a GPU descriptor handle to the start of the descriptor heap and pass it as the root descriptor table argument. // set the descriptor table to the descriptor heap (parameter 1, as the constant buffer root descriptor is parameter index 0) commandList->SetGraphicsRootDescriptorTable(1, mainDescriptorHeap->GetGPUDescriptorHandleForHeapStart()); ####Draw the cubes#### We set up the pipeline and finally draw the cubes.
commandList->RSSetViewports(1, &viewport); // set the viewports commandList->RSSetScissorRects(1, &scissorRect); // set the scissor rects commandList->IASetPrimitiveTopology(D3D_PRIMITIVE_TOPOLOGY_TRIANGLELIST); // set the primitive topology commandList->IASetVertexBuffers(0, 1, &vertexBufferView); // set the vertex buffer (using the vertex buffer view) commandList->IASetIndexBuffer(&indexBufferView); // first cube // set cube1's constant buffer commandList->SetGraphicsRootConstantBufferView(0, constantBufferUploadHeaps[frameIndex]->GetGPUVirtualAddress()); // draw first cube commandList->DrawIndexedInstanced(numCubeIndices, 1, 0, 0, 0); // second cube // set cube2's constant buffer. You can see we are adding the aligned size of ConstantBufferPerObject to the constant buffer // resource heap's address. This is because cube1's constant buffer is stored at the beginning of the resource heap, while // cube2's constant buffer data is stored after it (256 bytes from the start of the heap). commandList->SetGraphicsRootConstantBufferView(0, constantBufferUploadHeaps[frameIndex]->GetGPUVirtualAddress() + ConstantBufferPerObjectAlignedSize); // draw second cube commandList->DrawIndexedInstanced(numCubeIndices, 1, 0, 0, 0); ####Passing texture coordinates from the vertex shader#### We have a new input parameter to the vertex shader, Texture Coordinates. We add a float2 parameter at the TEXCOORD semantic in the VS_INPUT structure: struct VS_INPUT { float4 pos : POSITION; float2 texCoord: TEXCOORD; }; We also have to pass the texture coordinates to the pixel shader, so we add a float2 to the VS_OUTPUT structure as well.
struct VS_OUTPUT { float4 pos: SV_POSITION; float2 texCoord: TEXCOORD; }; In our vertex shader, all we have to do is set the output texture coordinates to the values of the input texture coordinates. VS_OUTPUT main(VS_INPUT input) { VS_OUTPUT output; output.pos = mul(input.pos, wvpMat); output.texCoord = input.texCoord; return output; } ####Sample the texture in the pixel shader#### We have two global resource variables here, a Texture2D and a SamplerState, bound to registers t0 and s0 respectively. The Texture2D will be the SRV that we have set in the root descriptor table for register t0, and the SamplerState is the static sampler we created in the root signature and bound to register s0. To get a color for a pixel from the texture, we call the Sample method of the Texture2D, providing the SamplerState as the first argument and the 2D texture coordinate as the second argument. The result is a float4 containing the rgba values sampled from the texture at that texture coordinate, using the sampler we provided. Texture2D t1 : register(t0); SamplerState s1 : register(s0); struct VS_OUTPUT { float4 pos: SV_POSITION; float2 texCoord: TEXCOORD; }; float4 main(VS_OUTPUT input) : SV_TARGET { // return interpolated color return t1.Sample(s1, input.texCoord); } That's it!
You should end up with something like this: +[http://www.braynzarsoft.net/image/100267][D3D12 Texture Coordinate Tutorial Finale] ####Final Code#### ##VertexShader.hlsl## struct VS_INPUT { float4 pos : POSITION; float2 texCoord: TEXCOORD; }; struct VS_OUTPUT { float4 pos: SV_POSITION; float2 texCoord: TEXCOORD; }; cbuffer ConstantBuffer : register(b0) { float4x4 wvpMat; }; VS_OUTPUT main(VS_INPUT input) { VS_OUTPUT output; output.pos = mul(input.pos, wvpMat); output.texCoord = input.texCoord; return output; } ##PixelShader.hlsl## Texture2D t1 : register(t0); SamplerState s1 : register(s0); struct VS_OUTPUT { float4 pos: SV_POSITION; float2 texCoord: TEXCOORD; }; float4 main(VS_OUTPUT input) : SV_TARGET { // return interpolated color return t1.Sample(s1, input.texCoord); } ##stdafx.h## #pragma once #ifndef WIN32_LEAN_AND_MEAN #define WIN32_LEAN_AND_MEAN // Exclude rarely-used stuff from Windows headers. #endif #include <windows.h> #include <d3d12.h> #include <dxgi1_4.h> #include <D3Dcompiler.h> #include <DirectXMath.h> #include "d3dx12.h" #include <string> #include <wincodec.h> // this will only call release if an object exists (prevents exceptions calling release on non-existent objects) #define SAFE_RELEASE(p) { if ( (p) ) { (p)->Release(); (p) = 0; } } using namespace DirectX; // we will be using the directxmath library // Handle to the window HWND hwnd = NULL; // name of the window (not the title) LPCTSTR WindowName = L"BzTutsApp"; // title of the window LPCTSTR WindowTitle = L"Bz Window"; // width and height of the window int Width = 800; int Height = 600; // is window full screen?
bool FullScreen = false; // we will exit the program when this becomes false bool Running = true; // create a window bool InitializeWindow(HINSTANCE hInstance, int ShowWnd, bool fullscreen); // main application loop void mainloop(); // callback function for windows messages LRESULT CALLBACK WndProc(HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam); // direct3d stuff const int frameBufferCount = 3; // number of buffers we want, 2 for double buffering, 3 for triple buffering ID3D12Device* device; // direct3d device IDXGISwapChain3* swapChain; // swapchain used to switch between render targets ID3D12CommandQueue* commandQueue; // container for command lists ID3D12DescriptorHeap* rtvDescriptorHeap; // a descriptor heap to hold resources like the render targets ID3D12Resource* renderTargets[frameBufferCount]; // number of render targets equal to buffer count ID3D12CommandAllocator* commandAllocator[frameBufferCount]; // we want enough allocators for each buffer * number of threads (we only have one thread) ID3D12GraphicsCommandList* commandList; // a command list we can record commands into, then execute them to render the frame ID3D12Fence* fence[frameBufferCount]; // an object that is locked while our command list is being executed by the gpu. We need as many // as we have allocators (more if we want to know when the gpu is finished with an asset) HANDLE fenceEvent; // a handle to an event when our fence is unlocked by the gpu UINT64 fenceValue[frameBufferCount]; // this value is incremented each frame.
each fence will have its own value int frameIndex; // current rtv we are on int rtvDescriptorSize; // size of the rtv descriptor on the device (all front and back buffers will be the same size) // function declarations bool InitD3D(); // initializes direct3d 12 void Update(); // update the game logic void UpdatePipeline(); // update the direct3d pipeline (update command lists) void Render(); // execute the command list void Cleanup(); // release COM objects and clean up memory void WaitForPreviousFrame(); // wait until gpu is finished with command list ID3D12PipelineState* pipelineStateObject; // pso containing a pipeline state ID3D12RootSignature* rootSignature; // root signature defines data shaders will access D3D12_VIEWPORT viewport; // area that output from rasterizer will be stretched to. D3D12_RECT scissorRect; // the area to draw in. pixels outside that area will not be drawn onto ID3D12Resource* vertexBuffer; // a default buffer in GPU memory that we will load vertex data for our cube into ID3D12Resource* indexBuffer; // a default buffer in GPU memory that we will load index data for our cube into D3D12_VERTEX_BUFFER_VIEW vertexBufferView; // a structure containing a pointer to the vertex data in gpu memory // the total size of the buffer, and the size of each element (vertex) D3D12_INDEX_BUFFER_VIEW indexBufferView; // a structure holding information about the index buffer ID3D12Resource* depthStencilBuffer; // This is the memory for our depth buffer. It will also be used for a stencil buffer in a later tutorial ID3D12DescriptorHeap* dsDescriptorHeap; // This is a heap for our depth/stencil buffer descriptor // this is the structure of our constant buffer. struct ConstantBufferPerObject { XMFLOAT4X4 wvpMat; }; // Constant buffers must be 256-byte aligned which has to do with constant reads on the GPU.
// We are only able to read at 256 byte intervals from the start of a resource heap, so we will // make sure that we add padding between the two constant buffers in the heap (one for cube1 and one for cube2) // Another way to do this would be to add a float array to the constant buffer structure for padding. In this case // we would need to add a float padding[48]; after the wvpMat variable. This would pad our structure out to 256 bytes // (the matrix is 64 bytes, and 48 floats at 4 bytes each add the remaining 192). // The reason I didn't go with this approach is that there would be wasted cpu cycles when we memcpy our constant // buffer data to the gpu virtual address. Currently we memcpy the size of our structure, which is 64 bytes here, but if we // were to add the padding array, we would memcpy 256 bytes, which is 192 wasted bytes // being copied. int ConstantBufferPerObjectAlignedSize = (sizeof(ConstantBufferPerObject) + 255) & ~255; ConstantBufferPerObject cbPerObject; // this is the constant buffer data we will send to the gpu // (which will be placed in the resource we created above) ID3D12Resource* constantBufferUploadHeaps[frameBufferCount]; // this is the memory on the gpu where constant buffers for each frame will be placed UINT8* cbvGPUAddress[frameBufferCount]; // this is a pointer to each of the constant buffer resource heaps XMFLOAT4X4 cameraProjMat; // this will store our projection matrix XMFLOAT4X4 cameraViewMat; // this will store our view matrix XMFLOAT4 cameraPosition; // this is our camera's position vector XMFLOAT4 cameraTarget; // a vector describing the point in space our camera is looking at XMFLOAT4 cameraUp; // the world's up vector XMFLOAT4X4 cube1WorldMat; // our first cube's world matrix (transformation matrix) XMFLOAT4X4 cube1RotMat; // this will keep track of our rotation for the first cube XMFLOAT4 cube1Position; // our first cube's position in space XMFLOAT4X4 cube2WorldMat; // our second cube's world matrix (transformation matrix) XMFLOAT4X4 cube2RotMat;
// this will keep track of our rotation for the second cube XMFLOAT4 cube2PositionOffset; // our second cube will rotate around the first cube, so this is the position offset from the first cube int numCubeIndices; // the number of indices to draw the cube ID3D12Resource* textureBuffer; // the resource heap containing our texture int LoadImageDataFromFile(BYTE** imageData, D3D12_RESOURCE_DESC& resourceDescription, LPCWSTR filename, int &bytesPerRow); DXGI_FORMAT GetDXGIFormatFromWICFormat(WICPixelFormatGUID& wicFormatGUID); WICPixelFormatGUID GetConvertToWICFormat(WICPixelFormatGUID& wicFormatGUID); int GetDXGIFormatBitsPerPixel(DXGI_FORMAT& dxgiFormat); ID3D12DescriptorHeap* mainDescriptorHeap; ID3D12Resource* textureBufferUploadHeap; ##main.cpp## #include "stdafx.h" struct Vertex { Vertex(float x, float y, float z, float u, float v) : pos(x, y, z), texCoord(u, v) {} XMFLOAT3 pos; XMFLOAT2 texCoord; }; int WINAPI WinMain(HINSTANCE hInstance, //Main windows function HINSTANCE hPrevInstance, LPSTR lpCmdLine, int nShowCmd) { // create the window if (!InitializeWindow(hInstance, nShowCmd, FullScreen)) { MessageBox(0, L"Window Initialization - Failed", L"Error", MB_OK); return 1; } // initialize direct3d if (!InitD3D()) { MessageBox(0, L"Failed to initialize direct3d 12", L"Error", MB_OK); Cleanup(); return 1; } // start the main loop mainloop(); // we want to wait for the gpu to finish executing the command list before we start releasing everything WaitForPreviousFrame(); // close the fence event CloseHandle(fenceEvent); // clean up everything Cleanup(); return 0; } // create and show the window bool InitializeWindow(HINSTANCE hInstance, int ShowWnd, bool fullscreen) { if (fullscreen) { HMONITOR hmon = MonitorFromWindow(hwnd, MONITOR_DEFAULTTONEAREST); MONITORINFO mi = { sizeof(mi) }; GetMonitorInfo(hmon, &mi); Width = mi.rcMonitor.right - mi.rcMonitor.left; Height = mi.rcMonitor.bottom - mi.rcMonitor.top; } WNDCLASSEX wc; wc.cbSize = sizeof(WNDCLASSEX); wc.style = 
CS_HREDRAW | CS_VREDRAW; wc.lpfnWndProc = WndProc; wc.cbClsExtra = NULL; wc.cbWndExtra = NULL; wc.hInstance = hInstance; wc.hIcon = LoadIcon(NULL, IDI_APPLICATION); wc.hCursor = LoadCursor(NULL, IDC_ARROW); wc.hbrBackground = (HBRUSH)(COLOR_WINDOW + 2); wc.lpszMenuName = NULL; wc.lpszClassName = WindowName; wc.hIconSm = LoadIcon(NULL, IDI_APPLICATION); if (!RegisterClassEx(&wc)) { MessageBox(NULL, L"Error registering class", L"Error", MB_OK | MB_ICONERROR); return false; } hwnd = CreateWindowEx(NULL, WindowName, WindowTitle, WS_OVERLAPPEDWINDOW, CW_USEDEFAULT, CW_USEDEFAULT, Width, Height, NULL, NULL, hInstance, NULL); if (!hwnd) { MessageBox(NULL, L"Error creating window", L"Error", MB_OK | MB_ICONERROR); return false; } if (fullscreen) { SetWindowLong(hwnd, GWL_STYLE, 0); } ShowWindow(hwnd, ShowWnd); UpdateWindow(hwnd); return true; } void mainloop() { MSG msg; ZeroMemory(&msg, sizeof(MSG)); while (Running) { if (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE)) { if (msg.message == WM_QUIT) break; TranslateMessage(&msg); DispatchMessage(&msg); } else { // run game code Update(); // update the game logic Render(); // execute the command queue (rendering the scene is the result of the gpu executing the command lists) } } } LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam) { switch (msg) { case WM_KEYDOWN: if (wParam == VK_ESCAPE) { if (MessageBox(0, L"Are you sure you want to exit?", L"Really?", MB_YESNO | MB_ICONQUESTION) == IDYES) { Running = false; DestroyWindow(hwnd); } } return 0; case WM_DESTROY: // x button on top right corner of window was pressed Running = false; PostQuitMessage(0); return 0; } return DefWindowProc(hwnd, msg, wParam, lParam); } bool InitD3D() { HRESULT hr; // -- Create the Device -- // IDXGIFactory4* dxgiFactory; hr = CreateDXGIFactory1(IID_PPV_ARGS(&dxgiFactory)); if (FAILED(hr)) { return false; } IDXGIAdapter1* adapter; // adapters are the graphics card (this includes the embedded graphics on the motherboard) int 
adapterIndex = 0; // we'll start looking for directx 12 compatible graphics devices starting at index 0 bool adapterFound = false; // set this to true when a good one was found // find first hardware gpu that supports d3d 12 while (dxgiFactory->EnumAdapters1(adapterIndex, &adapter) != DXGI_ERROR_NOT_FOUND) { DXGI_ADAPTER_DESC1 desc; adapter->GetDesc1(&desc); if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE) { // we don't want a software device adapterIndex++; // remember to increment the index here too, or we would test this same adapter forever continue; } // we want a device that is compatible with direct3d 12 (feature level 11 or higher) hr = D3D12CreateDevice(adapter, D3D_FEATURE_LEVEL_11_0, _uuidof(ID3D12Device), nullptr); if (SUCCEEDED(hr)) { adapterFound = true; break; } adapterIndex++; } if (!adapterFound) { Running = false; return false; } // Create the device hr = D3D12CreateDevice( adapter, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device) ); if (FAILED(hr)) { Running = false; return false; } // -- Create a direct command queue -- // D3D12_COMMAND_QUEUE_DESC cqDesc = {}; cqDesc.Flags = D3D12_COMMAND_QUEUE_FLAG_NONE; cqDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT; // direct means the gpu can directly execute this command queue hr = device->CreateCommandQueue(&cqDesc, IID_PPV_ARGS(&commandQueue)); // create the command queue if (FAILED(hr)) { Running = false; return false; } // -- Create the Swap Chain (double/triple buffering) -- // DXGI_MODE_DESC backBufferDesc = {}; // this is to describe our display mode backBufferDesc.Width = Width; // buffer width backBufferDesc.Height = Height; // buffer height backBufferDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM; // format of the buffer (rgba 32 bits, 8 bits for each channel) // describe our multi-sampling. We are not multi-sampling, so we set the count to 1 (we need at least one sample of course) DXGI_SAMPLE_DESC sampleDesc = {}; sampleDesc.Count = 1; // multisample count (no multisampling, so we just put 1, since we still need 1 sample) // Describe and create the swap chain.
DXGI_SWAP_CHAIN_DESC swapChainDesc = {}; swapChainDesc.BufferCount = frameBufferCount; // number of buffers we have swapChainDesc.BufferDesc = backBufferDesc; // our back buffer description swapChainDesc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT; // this says the pipeline will render to this swap chain swapChainDesc.SwapEffect = DXGI_SWAP_EFFECT_FLIP_DISCARD; // dxgi will discard the buffer (data) after we call present swapChainDesc.OutputWindow = hwnd; // handle to our window swapChainDesc.SampleDesc = sampleDesc; // our multi-sampling description swapChainDesc.Windowed = !FullScreen; // true for windowed mode; for real fullscreen you must also call SetFullscreenState with true to get uncapped fps IDXGISwapChain* tempSwapChain; dxgiFactory->CreateSwapChain( commandQueue, // the queue will be flushed once the swap chain is created &swapChainDesc, // give it the swap chain description we created above &tempSwapChain // store the created swap chain in a temp IDXGISwapChain interface ); swapChain = static_cast<IDXGISwapChain3*>(tempSwapChain); frameIndex = swapChain->GetCurrentBackBufferIndex(); // -- Create the Back Buffers (render target views) Descriptor Heap -- // // describe an rtv descriptor heap and create it D3D12_DESCRIPTOR_HEAP_DESC rtvHeapDesc = {}; rtvHeapDesc.NumDescriptors = frameBufferCount; // number of descriptors for this heap. rtvHeapDesc.Type = D3D12_DESCRIPTOR_HEAP_TYPE_RTV; // this heap is a render target view heap // This heap will not be directly referenced by the shaders (not shader visible), as this will store the output from the pipeline // otherwise we would set the heap's flag to D3D12_DESCRIPTOR_HEAP_FLAG_SHADER_VISIBLE rtvHeapDesc.Flags = D3D12_DESCRIPTOR_HEAP_FLAG_NONE; hr = device->CreateDescriptorHeap(&rtvHeapDesc, IID_PPV_ARGS(&rtvDescriptorHeap)); if (FAILED(hr)) { Running = false; return false; } // get the size of a descriptor in this heap (this is an rtv heap, so only rtv descriptors should be stored in it.
// descriptor sizes may vary from device to device, which is why there is no set size and we must ask the // device to give us the size. we will use this size to increment a descriptor handle offset rtvDescriptorSize = device->GetDescriptorHandleIncrementSize(D3D12_DESCRIPTOR_HEAP_TYPE_RTV); // get a handle to the first descriptor in the descriptor heap. a handle is basically a pointer, // but we cannot literally use it like a c++ pointer. CD3DX12_CPU_DESCRIPTOR_HANDLE rtvHandle(rtvDescriptorHeap->GetCPUDescriptorHandleForHeapStart()); // Create an RTV for each buffer (double buffering is two buffers, triple buffering is 3). for (int i = 0; i < frameBufferCount; i++) { // first we get the n'th buffer in the swap chain and store it in the n'th // position of our ID3D12Resource array hr = swapChain->GetBuffer(i, IID_PPV_ARGS(&renderTargets[i])); if (FAILED(hr)) { Running = false; return false; } // then we "create" a render target view which binds the swap chain buffer (ID3D12Resource[n]) to the rtv handle device->CreateRenderTargetView(renderTargets[i], nullptr, rtvHandle); // we increment the rtv handle by the rtv descriptor size we got above rtvHandle.Offset(1, rtvDescriptorSize); } // -- Create the Command Allocators -- // for (int i = 0; i < frameBufferCount; i++) { hr = device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT, IID_PPV_ARGS(&commandAllocator[i])); if (FAILED(hr)) { Running = false; return false; } } // -- Create a Command List -- // // create the command list with the first allocator hr = device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT, commandAllocator[frameIndex], NULL, IID_PPV_ARGS(&commandList)); if (FAILED(hr)) { Running = false; return false; } // -- Create a Fence & Fence Event -- // // create the fences for (int i = 0; i < frameBufferCount; i++) { hr = device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence[i])); if (FAILED(hr)) { Running = false; return false; } fenceValue[i] = 0; // set the initial fence value
to 0 } // create a handle to a fence event fenceEvent = CreateEvent(nullptr, FALSE, FALSE, nullptr); if (fenceEvent == nullptr) { Running = false; return false; } // create root signature // create a root descriptor, which explains where to find the data for this root parameter D3D12_ROOT_DESCRIPTOR rootCBVDescriptor; rootCBVDescriptor.RegisterSpace = 0; rootCBVDescriptor.ShaderRegister = 0; // create a descriptor range (descriptor table) and fill it out // this is a range of descriptors inside a descriptor heap D3D12_DESCRIPTOR_RANGE descriptorTableRanges[1]; // only one range right now descriptorTableRanges[0].RangeType = D3D12_DESCRIPTOR_RANGE_TYPE_SRV; // this is a range of shader resource views (descriptors) descriptorTableRanges[0].NumDescriptors = 1; // we only have one texture right now, so the range is only 1 descriptorTableRanges[0].BaseShaderRegister = 0; // start index of the shader registers in the range descriptorTableRanges[0].RegisterSpace = 0; // space 0. can usually be zero descriptorTableRanges[0].OffsetInDescriptorsFromTableStart = D3D12_DESCRIPTOR_RANGE_OFFSET_APPEND; // this appends the range directly after the previous range in the table // create a descriptor table D3D12_ROOT_DESCRIPTOR_TABLE descriptorTable; descriptorTable.NumDescriptorRanges = _countof(descriptorTableRanges); // we only have one range descriptorTable.pDescriptorRanges = &descriptorTableRanges[0]; // the pointer to the beginning of our ranges array // create a root parameter for the root descriptor and fill it out D3D12_ROOT_PARAMETER rootParameters[2]; // two root parameters rootParameters[0].ParameterType = D3D12_ROOT_PARAMETER_TYPE_CBV; // this is a constant buffer view root descriptor rootParameters[0].Descriptor = rootCBVDescriptor; // this is the root descriptor for this root parameter rootParameters[0].ShaderVisibility = D3D12_SHADER_VISIBILITY_VERTEX; // our vertex shader will be the only shader accessing this parameter for now // fill out the parameter for our
descriptor table. Remember it's a good idea to sort parameters by frequency of change. Our constant // buffer will be changed multiple times per frame, while our descriptor table will not be changed at all (in this tutorial) rootParameters[1].ParameterType = D3D12_ROOT_PARAMETER_TYPE_DESCRIPTOR_TABLE; // this is a descriptor table rootParameters[1].DescriptorTable = descriptorTable; // this is our descriptor table for this root parameter rootParameters[1].ShaderVisibility = D3D12_SHADER_VISIBILITY_PIXEL; // our pixel shader will be the only shader accessing this parameter for now // create a static sampler D3D12_STATIC_SAMPLER_DESC sampler = {}; sampler.Filter = D3D12_FILTER_MIN_MAG_MIP_POINT; sampler.AddressU = D3D12_TEXTURE_ADDRESS_MODE_BORDER; sampler.AddressV = D3D12_TEXTURE_ADDRESS_MODE_BORDER; sampler.AddressW = D3D12_TEXTURE_ADDRESS_MODE_BORDER; sampler.MipLODBias = 0; sampler.MaxAnisotropy = 0; sampler.ComparisonFunc = D3D12_COMPARISON_FUNC_NEVER; sampler.BorderColor = D3D12_STATIC_BORDER_COLOR_TRANSPARENT_BLACK; sampler.MinLOD = 0.0f; sampler.MaxLOD = D3D12_FLOAT32_MAX; sampler.ShaderRegister = 0; sampler.RegisterSpace = 0; sampler.ShaderVisibility = D3D12_SHADER_VISIBILITY_PIXEL; CD3DX12_ROOT_SIGNATURE_DESC rootSignatureDesc; rootSignatureDesc.Init(_countof(rootParameters), // we have 2 root parameters rootParameters, // a pointer to the beginning of our root parameters array 1, // we have one static sampler &sampler, // a pointer to our static sampler (array) D3D12_ROOT_SIGNATURE_FLAG_ALLOW_INPUT_ASSEMBLER_INPUT_LAYOUT | // we can deny shader stages here for better performance D3D12_ROOT_SIGNATURE_FLAG_DENY_HULL_SHADER_ROOT_ACCESS | D3D12_ROOT_SIGNATURE_FLAG_DENY_DOMAIN_SHADER_ROOT_ACCESS | D3D12_ROOT_SIGNATURE_FLAG_DENY_GEOMETRY_SHADER_ROOT_ACCESS); ID3DBlob* errorBuff; // a buffer holding the error data if any ID3DBlob* signature; hr = D3D12SerializeRootSignature(&rootSignatureDesc, D3D_ROOT_SIGNATURE_VERSION_1, &signature, &errorBuff); if (FAILED(hr)) 
{ OutputDebugStringA((char*)errorBuff->GetBufferPointer()); return false; } hr = device->CreateRootSignature(0, signature->GetBufferPointer(), signature->GetBufferSize(), IID_PPV_ARGS(&rootSignature)); if (FAILED(hr)) { return false; } // create vertex and pixel shaders // when debugging, we can compile the shader files at runtime. // but for release versions, we can compile the hlsl shaders // with fxc.exe to create .cso files, which contain the shader // bytecode. We can load the .cso files at runtime to get the // shader bytecode, which of course is faster than compiling // them at runtime // compile vertex shader ID3DBlob* vertexShader; // d3d blob for holding vertex shader bytecode hr = D3DCompileFromFile(L"VertexShader.hlsl", nullptr, nullptr, "main", "vs_5_0", D3DCOMPILE_DEBUG | D3DCOMPILE_SKIP_OPTIMIZATION, 0, &vertexShader, &errorBuff); if (FAILED(hr)) { OutputDebugStringA((char*)errorBuff->GetBufferPointer()); Running = false; return false; } // fill out a shader bytecode structure, which is basically just a pointer // to the shader bytecode and the size of the shader bytecode D3D12_SHADER_BYTECODE vertexShaderBytecode = {}; vertexShaderBytecode.BytecodeLength = vertexShader->GetBufferSize(); vertexShaderBytecode.pShaderBytecode = vertexShader->GetBufferPointer(); // compile pixel shader ID3DBlob* pixelShader; hr = D3DCompileFromFile(L"PixelShader.hlsl", nullptr, nullptr, "main", "ps_5_0", D3DCOMPILE_DEBUG | D3DCOMPILE_SKIP_OPTIMIZATION, 0, &pixelShader, &errorBuff); if (FAILED(hr)) { OutputDebugStringA((char*)errorBuff->GetBufferPointer()); Running = false; return false; } // fill out shader bytecode structure for pixel shader D3D12_SHADER_BYTECODE pixelShaderBytecode = {}; pixelShaderBytecode.BytecodeLength = pixelShader->GetBufferSize(); pixelShaderBytecode.pShaderBytecode = pixelShader->GetBufferPointer(); // create input layout // The input layout is used by the Input Assembler so that it knows // how to read the vertex data bound to it. 
D3D12_INPUT_ELEMENT_DESC inputLayout[] = { { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0, D3D12_INPUT_CLASSIFICATION_PER_VERTEX_DATA, 0 }, { "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT, 0, 12, D3D12_INPUT_CLASSIFICATION_PER_VERTEX_DATA, 0 } }; // fill out an input layout description structure D3D12_INPUT_LAYOUT_DESC inputLayoutDesc = {}; // we can get the number of elements in an array by "sizeof(array) / sizeof(arrayElementType)" inputLayoutDesc.NumElements = sizeof(inputLayout) / sizeof(D3D12_INPUT_ELEMENT_DESC); inputLayoutDesc.pInputElementDescs = inputLayout; // create a pipeline state object (PSO) // In a real application you will have many PSOs: for each different shader // or combination of shaders, each different blend state or rasterizer state, // each different topology type (point, line, triangle, patch), and each different number // of render targets, you will need a PSO // VS is the only required shader for a PSO. You might wonder when you would ever // only set the VS. It's possible to have a PSO that only outputs data with the stream // output, and not to a render target, which means you would not need anything after the stream // output.
D3D12_GRAPHICS_PIPELINE_STATE_DESC psoDesc = {}; // a structure to define a pso psoDesc.InputLayout = inputLayoutDesc; // the structure describing our input layout psoDesc.pRootSignature = rootSignature; // the root signature that describes the input data this pso needs psoDesc.VS = vertexShaderBytecode; // structure describing where to find the vertex shader bytecode and how large it is psoDesc.PS = pixelShaderBytecode; // same as VS but for pixel shader psoDesc.PrimitiveTopologyType = D3D12_PRIMITIVE_TOPOLOGY_TYPE_TRIANGLE; // type of topology we are drawing psoDesc.RTVFormats[0] = DXGI_FORMAT_R8G8B8A8_UNORM; // format of the render target psoDesc.SampleDesc = sampleDesc; // must be the same sample description as the swapchain and depth/stencil buffer psoDesc.SampleMask = 0xffffffff; // the sample mask selects which multisample samples are active; 0xffffffff enables all of them psoDesc.RasterizerState = CD3DX12_RASTERIZER_DESC(D3D12_DEFAULT); // a default rasterizer state. psoDesc.BlendState = CD3DX12_BLEND_DESC(D3D12_DEFAULT); // a default blend state.
psoDesc.NumRenderTargets = 1; // we are only binding one render target psoDesc.DepthStencilState = CD3DX12_DEPTH_STENCIL_DESC(D3D12_DEFAULT); // a default depth stencil state // create the pso hr = device->CreateGraphicsPipelineState(&psoDesc, IID_PPV_ARGS(&pipelineStateObject)); if (FAILED(hr)) { Running = false; return false; } // Create vertex buffer // a cube Vertex vList[] = { // front face { -0.5f, 0.5f, -0.5f, 0.0f, 0.0f }, { 0.5f, -0.5f, -0.5f, 1.0f, 1.0f }, { -0.5f, -0.5f, -0.5f, 0.0f, 1.0f }, { 0.5f, 0.5f, -0.5f, 1.0f, 0.0f }, // right side face { 0.5f, -0.5f, -0.5f, 0.0f, 1.0f }, { 0.5f, 0.5f, 0.5f, 1.0f, 0.0f }, { 0.5f, -0.5f, 0.5f, 1.0f, 1.0f }, { 0.5f, 0.5f, -0.5f, 0.0f, 0.0f }, // left side face { -0.5f, 0.5f, 0.5f, 0.0f, 0.0f }, { -0.5f, -0.5f, -0.5f, 1.0f, 1.0f }, { -0.5f, -0.5f, 0.5f, 0.0f, 1.0f }, { -0.5f, 0.5f, -0.5f, 1.0f, 0.0f }, // back face { 0.5f, 0.5f, 0.5f, 0.0f, 0.0f }, { -0.5f, -0.5f, 0.5f, 1.0f, 1.0f }, { 0.5f, -0.5f, 0.5f, 0.0f, 1.0f }, { -0.5f, 0.5f, 0.5f, 1.0f, 0.0f }, // top face { -0.5f, 0.5f, -0.5f, 0.0f, 1.0f }, { 0.5f, 0.5f, 0.5f, 1.0f, 0.0f }, { 0.5f, 0.5f, -0.5f, 1.0f, 1.0f }, { -0.5f, 0.5f, 0.5f, 0.0f, 0.0f }, // bottom face { 0.5f, -0.5f, 0.5f, 0.0f, 0.0f }, { -0.5f, -0.5f, -0.5f, 1.0f, 1.0f }, { 0.5f, -0.5f, -0.5f, 0.0f, 1.0f }, { -0.5f, -0.5f, 0.5f, 1.0f, 0.0f }, }; int vBufferSize = sizeof(vList); // create default heap // default heap is memory on the GPU. Only the GPU has access to this memory // To get data into this heap, we will have to upload the data using // an upload heap hr = device->CreateCommittedResource( &CD3DX12_HEAP_PROPERTIES(D3D12_HEAP_TYPE_DEFAULT), // a default heap D3D12_HEAP_FLAG_NONE, // no flags &CD3DX12_RESOURCE_DESC::Buffer(vBufferSize), // resource description for a buffer D3D12_RESOURCE_STATE_COPY_DEST, // we will start this heap in the copy destination state since we will copy data // from the upload heap to this heap nullptr, // optimized clear value must be null for this type of resource. 
used for render targets and depth/stencil buffers IID_PPV_ARGS(&vertexBuffer)); if (FAILED(hr)) { Running = false; return false; } // we can give resource heaps a name so when we debug with the graphics debugger we know what resource we are looking at vertexBuffer->SetName(L"Vertex Buffer Resource Heap"); // create upload heap // upload heaps are used to upload data to the GPU. CPU can write to it, GPU can read from it // We will upload the vertex buffer using this heap to the default heap ID3D12Resource* vBufferUploadHeap; hr = device->CreateCommittedResource( &CD3DX12_HEAP_PROPERTIES(D3D12_HEAP_TYPE_UPLOAD), // upload heap D3D12_HEAP_FLAG_NONE, // no flags &CD3DX12_RESOURCE_DESC::Buffer(vBufferSize), // resource description for a buffer D3D12_RESOURCE_STATE_GENERIC_READ, // GPU will read from this buffer and copy its contents to the default heap nullptr, IID_PPV_ARGS(&vBufferUploadHeap)); if (FAILED(hr)) { Running = false; return false; } vBufferUploadHeap->SetName(L"Vertex Buffer Upload Resource Heap"); // store vertex buffer in upload heap D3D12_SUBRESOURCE_DATA vertexData = {}; vertexData.pData = reinterpret_cast<BYTE*>(vList); // pointer to our vertex array vertexData.RowPitch = vBufferSize; // size of all our triangle vertex data vertexData.SlicePitch = vBufferSize; // also the size of our triangle vertex data // we are now creating a command with the command list to copy the data from // the upload heap to the default heap UpdateSubresources(commandList, vertexBuffer, vBufferUploadHeap, 0, 0, 1, &vertexData); // transition the vertex buffer data from copy destination state to vertex buffer state commandList->ResourceBarrier(1, &CD3DX12_RESOURCE_BARRIER::Transition(vertexBuffer, D3D12_RESOURCE_STATE_COPY_DEST, D3D12_RESOURCE_STATE_VERTEX_AND_CONSTANT_BUFFER)); // Create index buffer // a cube (6 faces, 12 triangles) DWORD iList[] = { // front face 0, 1, 2, // first triangle 0, 3, 1, // second triangle // right side face 4, 5, 6, // first triangle 4, 7, 5, // second 
triangle // left side face 8, 9, 10, // first triangle 8, 11, 9, // second triangle // back face 12, 13, 14, // first triangle 12, 15, 13, // second triangle // top face 16, 17, 18, // first triangle 16, 19, 17, // second triangle // bottom face 20, 21, 22, // first triangle 20, 23, 21, // second triangle }; int iBufferSize = sizeof(iList); numCubeIndices = sizeof(iList) / sizeof(DWORD); // create default heap to hold index buffer hr = device->CreateCommittedResource( &CD3DX12_HEAP_PROPERTIES(D3D12_HEAP_TYPE_DEFAULT), // a default heap D3D12_HEAP_FLAG_NONE, // no flags &CD3DX12_RESOURCE_DESC::Buffer(iBufferSize), // resource description for a buffer D3D12_RESOURCE_STATE_COPY_DEST, // start in the copy destination state nullptr, // optimized clear value must be null for this type of resource IID_PPV_ARGS(&indexBuffer)); if (FAILED(hr)) { Running = false; return false; } // we can give resource heaps a name so when we debug with the graphics debugger we know what resource we are looking at indexBuffer->SetName(L"Index Buffer Resource Heap"); // create upload heap to upload index buffer ID3D12Resource* iBufferUploadHeap; hr = device->CreateCommittedResource( &CD3DX12_HEAP_PROPERTIES(D3D12_HEAP_TYPE_UPLOAD), // upload heap D3D12_HEAP_FLAG_NONE, // no flags &CD3DX12_RESOURCE_DESC::Buffer(iBufferSize), // resource description for a buffer D3D12_RESOURCE_STATE_GENERIC_READ, // GPU will read from this buffer and copy its contents to the default heap nullptr, IID_PPV_ARGS(&iBufferUploadHeap)); if (FAILED(hr)) { Running = false; return false; } iBufferUploadHeap->SetName(L"Index Buffer Upload Resource Heap"); // store index buffer in upload heap D3D12_SUBRESOURCE_DATA indexData = {}; indexData.pData = reinterpret_cast<BYTE*>(iList); // pointer to our index array indexData.RowPitch = iBufferSize; // size of our index buffer data indexData.SlicePitch = iBufferSize; // also the size of our index buffer data // we are now creating a command with the command list to copy the data from // 
the upload heap to the default heap UpdateSubresources(commandList, indexBuffer, iBufferUploadHeap, 0, 0, 1, &indexData); // transition the index buffer data from copy destination state to index buffer state commandList->ResourceBarrier(1, &CD3DX12_RESOURCE_BARRIER::Transition(indexBuffer, D3D12_RESOURCE_STATE_COPY_DEST, D3D12_RESOURCE_STATE_INDEX_BUFFER)); // Create the depth/stencil buffer // create a depth stencil descriptor heap so we can get a pointer to the depth stencil buffer D3D12_DESCRIPTOR_HEAP_DESC dsvHeapDesc = {}; dsvHeapDesc.NumDescriptors = 1; dsvHeapDesc.Type = D3D12_DESCRIPTOR_HEAP_TYPE_DSV; dsvHeapDesc.Flags = D3D12_DESCRIPTOR_HEAP_FLAG_NONE; hr = device->CreateDescriptorHeap(&dsvHeapDesc, IID_PPV_ARGS(&dsDescriptorHeap)); if (FAILED(hr)) { Running = false; return false; } D3D12_DEPTH_STENCIL_VIEW_DESC depthStencilDesc = {}; depthStencilDesc.Format = DXGI_FORMAT_D32_FLOAT; depthStencilDesc.ViewDimension = D3D12_DSV_DIMENSION_TEXTURE2D; depthStencilDesc.Flags = D3D12_DSV_FLAG_NONE; D3D12_CLEAR_VALUE depthOptimizedClearValue = {}; depthOptimizedClearValue.Format = DXGI_FORMAT_D32_FLOAT; depthOptimizedClearValue.DepthStencil.Depth = 1.0f; depthOptimizedClearValue.DepthStencil.Stencil = 0; hr = device->CreateCommittedResource( &CD3DX12_HEAP_PROPERTIES(D3D12_HEAP_TYPE_DEFAULT), D3D12_HEAP_FLAG_NONE, &CD3DX12_RESOURCE_DESC::Tex2D(DXGI_FORMAT_D32_FLOAT, Width, Height, 1, 0, 1, 0, D3D12_RESOURCE_FLAG_ALLOW_DEPTH_STENCIL), D3D12_RESOURCE_STATE_DEPTH_WRITE, &depthOptimizedClearValue, IID_PPV_ARGS(&depthStencilBuffer) ); if (FAILED(hr)) { Running = false; return false; } dsDescriptorHeap->SetName(L"Depth/Stencil Resource Heap"); device->CreateDepthStencilView(depthStencilBuffer, &depthStencilDesc, dsDescriptorHeap->GetCPUDescriptorHandleForHeapStart()); // create the constant buffer resource heap // We will update the constant buffer one or more times per frame, so we will use only an upload heap // unlike previously, where we used an upload heap 
to upload the vertex and index data, and then copied over // to a default heap. If you plan to use a resource for more than a couple frames, it is usually more // efficient to copy to a default heap where it stays on the gpu. In this case, our constant buffer // will be modified and uploaded at least once per frame, so we only use an upload heap // first we will create a resource heap (upload heap) for each frame for the cubes constant buffers // As you can see, we are allocating 64KB for each resource we create. Buffer resource heaps must have // an alignment of 64KB. We are creating 3 resources, one for each frame. Each constant buffer is // only a 4x4 matrix of floats in this tutorial. So with a float being 4 bytes, we have // 16 floats in one constant buffer, and we will store 2 constant buffers in each // heap, one for each cube, that's only 64x2 bytes, or 128 bytes we are using in each // resource, and each resource must be at least 64KB (65536 bytes) for (int i = 0; i < frameBufferCount; ++i) { // create resource for cube 1 hr = device->CreateCommittedResource( &CD3DX12_HEAP_PROPERTIES(D3D12_HEAP_TYPE_UPLOAD), // this heap will be used to upload the constant buffer data D3D12_HEAP_FLAG_NONE, // no flags &CD3DX12_RESOURCE_DESC::Buffer(D3D12_DEFAULT_RESOURCE_PLACEMENT_ALIGNMENT), // size of the resource heap. Must be a multiple of 64KB for single-textures and constant buffers D3D12_RESOURCE_STATE_GENERIC_READ, // will be data that is read from so we keep it in the generic read state nullptr, // we do not use an optimized clear value for constant buffers IID_PPV_ARGS(&constantBufferUploadHeaps[i])); if (FAILED(hr)) { Running = false; return false; } constantBufferUploadHeaps[i]->SetName(L"Constant Buffer Upload Resource Heap"); ZeroMemory(&cbPerObject, sizeof(cbPerObject)); CD3DX12_RANGE readRange(0, 0); // We do not intend to read from this resource on the CPU. 
(so end is less than or equal to begin) // map the resource heap to get a gpu virtual address to the beginning of the heap hr = constantBufferUploadHeaps[i]->Map(0, &readRange, reinterpret_cast<void**>(&cbvGPUAddress[i])); // Because of constant buffer hardware alignment requirements, constant buffer views must be 256 byte aligned. Our buffers are smaller than 256 bytes, // so we need to add spacing between the two buffers, so that the second buffer starts at 256 bytes from the beginning of the resource heap. memcpy(cbvGPUAddress[i], &cbPerObject, sizeof(cbPerObject)); // cube1's constant buffer data memcpy(cbvGPUAddress[i] + ConstantBufferPerObjectAlignedSize, &cbPerObject, sizeof(cbPerObject)); // cube2's constant buffer data } // load the image, create a texture resource and descriptor heap // Load the image from file D3D12_RESOURCE_DESC textureDesc; int imageBytesPerRow; BYTE* imageData; int imageSize = LoadImageDataFromFile(&imageData, textureDesc, L"braynzar.jpg", imageBytesPerRow); // make sure we have data if(imageSize <= 0) { Running = false; return false; } // create a default heap where the upload heap will copy its contents into (contents being the texture) hr = device->CreateCommittedResource( &CD3DX12_HEAP_PROPERTIES(D3D12_HEAP_TYPE_DEFAULT), // a default heap D3D12_HEAP_FLAG_NONE, // no flags &textureDesc, // the description of our texture D3D12_RESOURCE_STATE_COPY_DEST, // We will copy the texture from the upload heap to here, so we start it out in a copy dest state nullptr, // used for render targets and depth/stencil buffers IID_PPV_ARGS(&textureBuffer)); if (FAILED(hr)) { Running = false; return false; } textureBuffer->SetName(L"Texture Buffer Resource Heap"); UINT64 textureUploadBufferSize; // this function gets the size an upload buffer needs to be to upload a texture to the gpu. // each row must be 256 byte aligned except for the last row, which can just be the size in bytes of the row // eg. 
textureUploadBufferSize = ((((width * numBytesPerPixel) + 255) & ~255) * (height - 1)) + (width * numBytesPerPixel); //textureUploadBufferSize = (((imageBytesPerRow + 255) & ~255) * (textureDesc.Height - 1)) + imageBytesPerRow; device->GetCopyableFootprints(&textureDesc, 0, 1, 0, nullptr, nullptr, nullptr, &textureUploadBufferSize); // now we create an upload heap to upload our texture to the GPU hr = device->CreateCommittedResource( &CD3DX12_HEAP_PROPERTIES(D3D12_HEAP_TYPE_UPLOAD), // upload heap D3D12_HEAP_FLAG_NONE, // no flags &CD3DX12_RESOURCE_DESC::Buffer(textureUploadBufferSize), // resource description for a buffer (storing the image data in this heap just to copy to the default heap) D3D12_RESOURCE_STATE_GENERIC_READ, // We will copy the contents from this heap to the default heap above nullptr, IID_PPV_ARGS(&textureBufferUploadHeap)); if (FAILED(hr)) { Running = false; return false; } textureBufferUploadHeap->SetName(L"Texture Buffer Upload Resource Heap"); // store texture data in upload heap D3D12_SUBRESOURCE_DATA textureData = {}; textureData.pData = &imageData[0]; // pointer to our image data textureData.RowPitch = imageBytesPerRow; // number of bytes in each row of the image data textureData.SlicePitch = imageBytesPerRow * textureDesc.Height; // total size of the image data // Now we copy the upload buffer contents to the default heap UpdateSubresources(commandList, textureBuffer, textureBufferUploadHeap, 0, 0, 1, &textureData); // transition the texture default heap to a pixel shader resource (we will be sampling from this heap in the pixel shader to get the color of pixels) commandList->ResourceBarrier(1, &CD3DX12_RESOURCE_BARRIER::Transition(textureBuffer, D3D12_RESOURCE_STATE_COPY_DEST, D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE)); // create the descriptor heap that will store our srv D3D12_DESCRIPTOR_HEAP_DESC heapDesc = {}; heapDesc.NumDescriptors = 1; heapDesc.Flags = D3D12_DESCRIPTOR_HEAP_FLAG_SHADER_VISIBLE; heapDesc.Type = 
D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV; hr = device->CreateDescriptorHeap(&heapDesc, IID_PPV_ARGS(&mainDescriptorHeap)); if (FAILED(hr)) { Running = false; } // now we create a shader resource view (descriptor that points to the texture and describes it) D3D12_SHADER_RESOURCE_VIEW_DESC srvDesc = {}; srvDesc.Shader4ComponentMapping = D3D12_DEFAULT_SHADER_4_COMPONENT_MAPPING; srvDesc.Format = textureDesc.Format; srvDesc.ViewDimension = D3D12_SRV_DIMENSION_TEXTURE2D; srvDesc.Texture2D.MipLevels = 1; device->CreateShaderResourceView(textureBuffer, &srvDesc, mainDescriptorHeap->GetCPUDescriptorHandleForHeapStart()); // Now we execute the command list to upload the initial assets (triangle data) commandList->Close(); ID3D12CommandList* ppCommandLists[] = { commandList }; commandQueue->ExecuteCommandLists(_countof(ppCommandLists), ppCommandLists); // increment the fence value now, otherwise the buffer might not be uploaded by the time we start drawing fenceValue[frameIndex]++; hr = commandQueue->Signal(fence[frameIndex], fenceValue[frameIndex]); if (FAILED(hr)) { Running = false; return false; } // we are done with the image data now that we've uploaded it to the gpu, so free it up (it was allocated with malloc, so it must be released with free) free(imageData); // create a vertex buffer view for the cube. We get the GPU memory address to the vertex pointer using the GetGPUVirtualAddress() method vertexBufferView.BufferLocation = vertexBuffer->GetGPUVirtualAddress(); vertexBufferView.StrideInBytes = sizeof(Vertex); vertexBufferView.SizeInBytes = vBufferSize; // create an index buffer view for the cube. 
We get the GPU memory address of the index buffer using the GetGPUVirtualAddress() method indexBufferView.BufferLocation = indexBuffer->GetGPUVirtualAddress(); indexBufferView.Format = DXGI_FORMAT_R32_UINT; // 32-bit unsigned integer (this is what a dword is, double word, a word is 2 bytes) indexBufferView.SizeInBytes = iBufferSize; // Fill out the Viewport viewport.TopLeftX = 0; viewport.TopLeftY = 0; viewport.Width = Width; viewport.Height = Height; viewport.MinDepth = 0.0f; viewport.MaxDepth = 1.0f; // Fill out a scissor rect scissorRect.left = 0; scissorRect.top = 0; scissorRect.right = Width; scissorRect.bottom = Height; // build projection and view matrix XMMATRIX tmpMat = XMMatrixPerspectiveFovLH(45.0f*(3.14f/180.0f), (float)Width / (float)Height, 0.1f, 1000.0f); XMStoreFloat4x4(&cameraProjMat, tmpMat); // set starting camera state cameraPosition = XMFLOAT4(0.0f, 2.0f, -4.0f, 0.0f); cameraTarget = XMFLOAT4(0.0f, 0.0f, 0.0f, 0.0f); cameraUp = XMFLOAT4(0.0f, 1.0f, 0.0f, 0.0f); // build view matrix XMVECTOR cPos = XMLoadFloat4(&cameraPosition); XMVECTOR cTarg = XMLoadFloat4(&cameraTarget); XMVECTOR cUp = XMLoadFloat4(&cameraUp); tmpMat = XMMatrixLookAtLH(cPos, cTarg, cUp); XMStoreFloat4x4(&cameraViewMat, tmpMat); // set starting cubes position // first cube cube1Position = XMFLOAT4(0.0f, 0.0f, 0.0f, 0.0f); // set cube 1's position XMVECTOR posVec = XMLoadFloat4(&cube1Position); // create xmvector for cube1's position tmpMat = XMMatrixTranslationFromVector(posVec); // create translation matrix from cube1's position vector XMStoreFloat4x4(&cube1RotMat, XMMatrixIdentity()); // initialize cube1's rotation matrix to identity matrix XMStoreFloat4x4(&cube1WorldMat, tmpMat); // store cube1's world matrix // second cube cube2PositionOffset = XMFLOAT4(1.5f, 0.0f, 0.0f, 0.0f); posVec = XMLoadFloat4(&cube2PositionOffset) + XMLoadFloat4(&cube1Position); // create xmvector for cube2's position // we are rotating around cube1 here, so add cube2's position to cube1 tmpMat = 
XMMatrixTranslationFromVector(posVec); // create translation matrix from cube2's position offset vector XMStoreFloat4x4(&cube2RotMat, XMMatrixIdentity()); // initialize cube2's rotation matrix to identity matrix XMStoreFloat4x4(&cube2WorldMat, tmpMat); // store cube2's world matrix return true; } void Update() { // update app logic, such as moving the camera or figuring out what objects are in view // create rotation matrices XMMATRIX rotXMat = XMMatrixRotationX(0.0001f); XMMATRIX rotYMat = XMMatrixRotationY(0.0002f); XMMATRIX rotZMat = XMMatrixRotationZ(0.0003f); // add rotation to cube1's rotation matrix and store it XMMATRIX rotMat = XMLoadFloat4x4(&cube1RotMat) * rotXMat * rotYMat * rotZMat; XMStoreFloat4x4(&cube1RotMat, rotMat); // create translation matrix for cube 1 from cube 1's position vector XMMATRIX translationMat = XMMatrixTranslationFromVector(XMLoadFloat4(&cube1Position)); // create cube1's world matrix by first rotating the cube, then positioning the rotated cube XMMATRIX worldMat = rotMat * translationMat; // store cube1's world matrix XMStoreFloat4x4(&cube1WorldMat, worldMat); // update constant buffer for cube1 // create the wvp matrix and store in constant buffer XMMATRIX viewMat = XMLoadFloat4x4(&cameraViewMat); // load view matrix XMMATRIX projMat = XMLoadFloat4x4(&cameraProjMat); // load projection matrix XMMATRIX wvpMat = XMLoadFloat4x4(&cube1WorldMat) * viewMat * projMat; // create wvp matrix XMMATRIX transposed = XMMatrixTranspose(wvpMat); // must transpose wvp matrix for the gpu XMStoreFloat4x4(&cbPerObject.wvpMat, transposed); // store transposed wvp matrix in constant buffer // copy our ConstantBuffer instance to the mapped constant buffer resource memcpy(cbvGPUAddress[frameIndex], &cbPerObject, sizeof(cbPerObject)); // now do cube2's world matrix // create rotation matrices for cube2 rotXMat = XMMatrixRotationX(0.0003f); rotYMat = XMMatrixRotationY(0.0002f); rotZMat = XMMatrixRotationZ(0.0001f); // add rotation to cube2's rotation 
matrix and store it rotMat = rotZMat * (XMLoadFloat4x4(&cube2RotMat) * (rotXMat * rotYMat)); XMStoreFloat4x4(&cube2RotMat, rotMat); // create translation matrix for cube 2 to offset it from cube 1 (its position relative to cube1 XMMATRIX translationOffsetMat = XMMatrixTranslationFromVector(XMLoadFloat4(&cube2PositionOffset)); // we want cube 2 to be half the size of cube 1, so we scale it by .5 in all dimensions XMMATRIX scaleMat = XMMatrixScaling(0.5f, 0.5f, 0.5f); // reuse worldMat. // first we scale cube2. scaling happens relative to point 0,0,0, so you will almost always want to scale first // then we translate it. // then we rotate it. rotation always rotates around point 0,0,0 // finally we move it to cube 1's position, which will cause it to rotate around cube 1 worldMat = scaleMat * translationOffsetMat * rotMat * translationMat; wvpMat = XMLoadFloat4x4(&cube2WorldMat) * viewMat * projMat; // create wvp matrix transposed = XMMatrixTranspose(wvpMat); // must transpose wvp matrix for the gpu XMStoreFloat4x4(&cbPerObject.wvpMat, transposed); // store transposed wvp matrix in constant buffer // copy our ConstantBuffer instance to the mapped constant buffer resource memcpy(cbvGPUAddress[frameIndex] + ConstantBufferPerObjectAlignedSize, &cbPerObject, sizeof(cbPerObject)); // store cube2's world matrix XMStoreFloat4x4(&cube2WorldMat, worldMat); } void UpdatePipeline() { HRESULT hr; // We have to wait for the gpu to finish with the command allocator before we reset it WaitForPreviousFrame(); // we can only reset an allocator once the gpu is done with it // resetting an allocator frees the memory that the command list was stored in hr = commandAllocator[frameIndex]->Reset(); if (FAILED(hr)) { Running = false; } // reset the command list. by resetting the command list we are putting it into // a recording state so we can start recording commands into the command allocator. 
// the command allocator that we reference here may have multiple command lists // associated with it, but only one can be recording at any time. Make sure // that any other command lists associated to this command allocator are in // the closed state (not recording). // Here you will pass an initial pipeline state object as the second parameter, // but in this tutorial we are only clearing the rtv, and do not actually need // anything but an initial default pipeline, which is what we get by setting // the second parameter to NULL hr = commandList->Reset(commandAllocator[frameIndex], pipelineStateObject); if (FAILED(hr)) { Running = false; } // here we start recording commands into the commandList (which all the commands will be stored in the commandAllocator) // transition the "frameIndex" render target from the present state to the render target state so the command list draws to it starting from here commandList->ResourceBarrier(1, &CD3DX12_RESOURCE_BARRIER::Transition(renderTargets[frameIndex], D3D12_RESOURCE_STATE_PRESENT, D3D12_RESOURCE_STATE_RENDER_TARGET)); // here we again get the handle to our current render target view so we can set it as the render target in the output merger stage of the pipeline CD3DX12_CPU_DESCRIPTOR_HANDLE rtvHandle(rtvDescriptorHeap->GetCPUDescriptorHandleForHeapStart(), frameIndex, rtvDescriptorSize); // get a handle to the depth/stencil buffer CD3DX12_CPU_DESCRIPTOR_HANDLE dsvHandle(dsDescriptorHeap->GetCPUDescriptorHandleForHeapStart()); // set the render target for the output merger stage (the output of the pipeline) commandList->OMSetRenderTargets(1, &rtvHandle, FALSE, &dsvHandle); // Clear the render target by using the ClearRenderTargetView command const float clearColor[] = { 0.0f, 0.2f, 0.4f, 1.0f }; commandList->ClearRenderTargetView(rtvHandle, clearColor, 0, nullptr); // clear the depth/stencil buffer commandList->ClearDepthStencilView(dsDescriptorHeap->GetCPUDescriptorHandleForHeapStart(), D3D12_CLEAR_FLAG_DEPTH, 1.0f, 
0, 0, nullptr); // set root signature commandList->SetGraphicsRootSignature(rootSignature); // set the root signature // set the descriptor heap ID3D12DescriptorHeap* descriptorHeaps[] = { mainDescriptorHeap }; commandList->SetDescriptorHeaps(_countof(descriptorHeaps), descriptorHeaps); // set the descriptor table to the descriptor heap (parameter 1, as constant buffer root descriptor is parameter index 0) commandList->SetGraphicsRootDescriptorTable(1, mainDescriptorHeap->GetGPUDescriptorHandleForHeapStart()); commandList->RSSetViewports(1, &viewport); // set the viewports commandList->RSSetScissorRects(1, &scissorRect); // set the scissor rects commandList->IASetPrimitiveTopology(D3D_PRIMITIVE_TOPOLOGY_TRIANGLELIST); // set the primitive topology commandList->IASetVertexBuffers(0, 1, &vertexBufferView); // set the vertex buffer (using the vertex buffer view) commandList->IASetIndexBuffer(&indexBufferView); // first cube // set cube1's constant buffer commandList->SetGraphicsRootConstantBufferView(0, constantBufferUploadHeaps[frameIndex]->GetGPUVirtualAddress()); // draw first cube commandList->DrawIndexedInstanced(numCubeIndices, 1, 0, 0, 0); // second cube // set cube2's constant buffer. You can see we are adding ConstantBufferPerObjectAlignedSize to the constant buffer // resource heap's address. This is because cube1's constant buffer is stored at the beginning of the resource heap, while // cube2's constant buffer data is stored after (256 bytes from the start of the heap). commandList->SetGraphicsRootConstantBufferView(0, constantBufferUploadHeaps[frameIndex]->GetGPUVirtualAddress() + ConstantBufferPerObjectAlignedSize); // draw second cube commandList->DrawIndexedInstanced(numCubeIndices, 1, 0, 0, 0); // transition the "frameIndex" render target from the render target state to the present state. 
If the debug layer is enabled, you will receive a // warning if present is called on the render target when it's not in the present state commandList->ResourceBarrier(1, &CD3DX12_RESOURCE_BARRIER::Transition(renderTargets[frameIndex], D3D12_RESOURCE_STATE_RENDER_TARGET, D3D12_RESOURCE_STATE_PRESENT)); hr = commandList->Close(); if (FAILED(hr)) { Running = false; } } void Render() { HRESULT hr; UpdatePipeline(); // update the pipeline by sending commands to the commandqueue // create an array of command lists (only one command list here) ID3D12CommandList* ppCommandLists[] = { commandList }; // execute the array of command lists commandQueue->ExecuteCommandLists(_countof(ppCommandLists), ppCommandLists); // this command goes in at the end of our command queue. we will know when our command queue // has finished because the fence value will be set to "fenceValue" from the GPU since the command // queue is being executed on the GPU hr = commandQueue->Signal(fence[frameIndex], fenceValue[frameIndex]); if (FAILED(hr)) { Running = false; } // present the current backbuffer hr = swapChain->Present(0, 0); if (FAILED(hr)) { Running = false; } } void Cleanup() { // wait for the gpu to finish all frames for (int i = 0; i < frameBufferCount; ++i) { frameIndex = i; WaitForPreviousFrame(); } // get swapchain out of full screen before exiting BOOL fs = false; if (swapChain->GetFullscreenState(&fs, NULL)) swapChain->SetFullscreenState(false, NULL); SAFE_RELEASE(device); SAFE_RELEASE(swapChain); SAFE_RELEASE(commandQueue); SAFE_RELEASE(rtvDescriptorHeap); SAFE_RELEASE(commandList); for (int i = 0; i < frameBufferCount; ++i) { SAFE_RELEASE(renderTargets[i]); SAFE_RELEASE(commandAllocator[i]); SAFE_RELEASE(fence[i]); }; SAFE_RELEASE(pipelineStateObject); SAFE_RELEASE(rootSignature); SAFE_RELEASE(vertexBuffer); SAFE_RELEASE(indexBuffer); SAFE_RELEASE(depthStencilBuffer); SAFE_RELEASE(dsDescriptorHeap); for (int i = 0; i < frameBufferCount; ++i) { 
SAFE_RELEASE(constantBufferUploadHeaps[i]); }; } void WaitForPreviousFrame() { HRESULT hr; // swap the current rtv buffer index so we draw on the correct buffer frameIndex = swapChain->GetCurrentBackBufferIndex(); // if the current fence value is still less than "fenceValue", then we know the GPU has not finished executing // the command queue since it has not reached the "commandQueue->Signal(fence, fenceValue)" command if (fence[frameIndex]->GetCompletedValue() < fenceValue[frameIndex]) { // we have the fence create an event which is signaled once the fence's current value is "fenceValue" hr = fence[frameIndex]->SetEventOnCompletion(fenceValue[frameIndex], fenceEvent); if (FAILED(hr)) { Running = false; } // We will wait until the fence has triggered the event that it's current value has reached "fenceValue". once it's value // has reached "fenceValue", we know the command queue has finished executing WaitForSingleObject(fenceEvent, INFINITE); } // increment fenceValue for next frame fenceValue[frameIndex]++; } // get the dxgi format equivilent of a wic format DXGI_FORMAT GetDXGIFormatFromWICFormat(WICPixelFormatGUID& wicFormatGUID) { if (wicFormatGUID == GUID_WICPixelFormat128bppRGBAFloat) return DXGI_FORMAT_R32G32B32A32_FLOAT; else if (wicFormatGUID == GUID_WICPixelFormat64bppRGBAHalf) return DXGI_FORMAT_R16G16B16A16_FLOAT; else if (wicFormatGUID == GUID_WICPixelFormat64bppRGBA) return DXGI_FORMAT_R16G16B16A16_UNORM; else if (wicFormatGUID == GUID_WICPixelFormat32bppRGBA) return DXGI_FORMAT_R8G8B8A8_UNORM; else if (wicFormatGUID == GUID_WICPixelFormat32bppBGRA) return DXGI_FORMAT_B8G8R8A8_UNORM; else if (wicFormatGUID == GUID_WICPixelFormat32bppBGR) return DXGI_FORMAT_B8G8R8X8_UNORM; else if (wicFormatGUID == GUID_WICPixelFormat32bppRGBA1010102XR) return DXGI_FORMAT_R10G10B10_XR_BIAS_A2_UNORM; else if (wicFormatGUID == GUID_WICPixelFormat32bppRGBA1010102) return DXGI_FORMAT_R10G10B10A2_UNORM; else if (wicFormatGUID == GUID_WICPixelFormat16bppBGRA5551) return 
DXGI_FORMAT_B5G5R5A1_UNORM; else if (wicFormatGUID == GUID_WICPixelFormat16bppBGR565) return DXGI_FORMAT_B5G6R5_UNORM; else if (wicFormatGUID == GUID_WICPixelFormat32bppGrayFloat) return DXGI_FORMAT_R32_FLOAT; else if (wicFormatGUID == GUID_WICPixelFormat16bppGrayHalf) return DXGI_FORMAT_R16_FLOAT; else if (wicFormatGUID == GUID_WICPixelFormat16bppGray) return DXGI_FORMAT_R16_UNORM; else if (wicFormatGUID == GUID_WICPixelFormat8bppGray) return DXGI_FORMAT_R8_UNORM; else if (wicFormatGUID == GUID_WICPixelFormat8bppAlpha) return DXGI_FORMAT_A8_UNORM; else return DXGI_FORMAT_UNKNOWN; } // get a dxgi compatible wic format from another wic format WICPixelFormatGUID GetConvertToWICFormat(WICPixelFormatGUID& wicFormatGUID) { if (wicFormatGUID == GUID_WICPixelFormatBlackWhite) return GUID_WICPixelFormat8bppGray; else if (wicFormatGUID == GUID_WICPixelFormat1bppIndexed) return GUID_WICPixelFormat32bppRGBA; else if (wicFormatGUID == GUID_WICPixelFormat2bppIndexed) return GUID_WICPixelFormat32bppRGBA; else if (wicFormatGUID == GUID_WICPixelFormat4bppIndexed) return GUID_WICPixelFormat32bppRGBA; else if (wicFormatGUID == GUID_WICPixelFormat8bppIndexed) return GUID_WICPixelFormat32bppRGBA; else if (wicFormatGUID == GUID_WICPixelFormat2bppGray) return GUID_WICPixelFormat8bppGray; else if (wicFormatGUID == GUID_WICPixelFormat4bppGray) return GUID_WICPixelFormat8bppGray; else if (wicFormatGUID == GUID_WICPixelFormat16bppGrayFixedPoint) return GUID_WICPixelFormat16bppGrayHalf; else if (wicFormatGUID == GUID_WICPixelFormat32bppGrayFixedPoint) return GUID_WICPixelFormat32bppGrayFloat; else if (wicFormatGUID == GUID_WICPixelFormat16bppBGR555) return GUID_WICPixelFormat16bppBGRA5551; else if (wicFormatGUID == GUID_WICPixelFormat32bppBGR101010) return GUID_WICPixelFormat32bppRGBA1010102; else if (wicFormatGUID == GUID_WICPixelFormat24bppBGR) return GUID_WICPixelFormat32bppRGBA; else if (wicFormatGUID == GUID_WICPixelFormat24bppRGB) return GUID_WICPixelFormat32bppRGBA; else if 
(wicFormatGUID == GUID_WICPixelFormat32bppPBGRA) return GUID_WICPixelFormat32bppRGBA; else if (wicFormatGUID == GUID_WICPixelFormat32bppPRGBA) return GUID_WICPixelFormat32bppRGBA; else if (wicFormatGUID == GUID_WICPixelFormat48bppRGB) return GUID_WICPixelFormat64bppRGBA; else if (wicFormatGUID == GUID_WICPixelFormat48bppBGR) return GUID_WICPixelFormat64bppRGBA; else if (wicFormatGUID == GUID_WICPixelFormat64bppBGRA) return GUID_WICPixelFormat64bppRGBA; else if (wicFormatGUID == GUID_WICPixelFormat64bppPRGBA) return GUID_WICPixelFormat64bppRGBA; else if (wicFormatGUID == GUID_WICPixelFormat64bppPBGRA) return GUID_WICPixelFormat64bppRGBA; else if (wicFormatGUID == GUID_WICPixelFormat48bppRGBFixedPoint) return GUID_WICPixelFormat64bppRGBAHalf; else if (wicFormatGUID == GUID_WICPixelFormat48bppBGRFixedPoint) return GUID_WICPixelFormat64bppRGBAHalf; else if (wicFormatGUID == GUID_WICPixelFormat64bppRGBAFixedPoint) return GUID_WICPixelFormat64bppRGBAHalf; else if (wicFormatGUID == GUID_WICPixelFormat64bppBGRAFixedPoint) return GUID_WICPixelFormat64bppRGBAHalf; else if (wicFormatGUID == GUID_WICPixelFormat64bppRGBFixedPoint) return GUID_WICPixelFormat64bppRGBAHalf; else if (wicFormatGUID == GUID_WICPixelFormat64bppRGBHalf) return GUID_WICPixelFormat64bppRGBAHalf; else if (wicFormatGUID == GUID_WICPixelFormat48bppRGBHalf) return GUID_WICPixelFormat64bppRGBAHalf; else if (wicFormatGUID == GUID_WICPixelFormat128bppPRGBAFloat) return GUID_WICPixelFormat128bppRGBAFloat; else if (wicFormatGUID == GUID_WICPixelFormat128bppRGBFloat) return GUID_WICPixelFormat128bppRGBAFloat; else if (wicFormatGUID == GUID_WICPixelFormat128bppRGBAFixedPoint) return GUID_WICPixelFormat128bppRGBAFloat; else if (wicFormatGUID == GUID_WICPixelFormat128bppRGBFixedPoint) return GUID_WICPixelFormat128bppRGBAFloat; else if (wicFormatGUID == GUID_WICPixelFormat32bppRGBE) return GUID_WICPixelFormat128bppRGBAFloat; else if (wicFormatGUID == GUID_WICPixelFormat32bppCMYK) return GUID_WICPixelFormat32bppRGBA; 
else if (wicFormatGUID == GUID_WICPixelFormat64bppCMYK) return GUID_WICPixelFormat64bppRGBA; else if (wicFormatGUID == GUID_WICPixelFormat40bppCMYKAlpha) return GUID_WICPixelFormat64bppRGBA; else if (wicFormatGUID == GUID_WICPixelFormat80bppCMYKAlpha) return GUID_WICPixelFormat64bppRGBA; #if (_WIN32_WINNT >= _WIN32_WINNT_WIN8) || defined(_WIN7_PLATFORM_UPDATE) else if (wicFormatGUID == GUID_WICPixelFormat32bppRGB) return GUID_WICPixelFormat32bppRGBA; else if (wicFormatGUID == GUID_WICPixelFormat64bppRGB) return GUID_WICPixelFormat64bppRGBA; else if (wicFormatGUID == GUID_WICPixelFormat64bppPRGBAHalf) return GUID_WICPixelFormat64bppRGBAHalf; #endif else return GUID_WICPixelFormatDontCare; } // get the number of bits per pixel for a dxgi format int GetDXGIFormatBitsPerPixel(DXGI_FORMAT& dxgiFormat) { if (dxgiFormat == DXGI_FORMAT_R32G32B32A32_FLOAT) return 128; else if (dxgiFormat == DXGI_FORMAT_R16G16B16A16_FLOAT) return 64; else if (dxgiFormat == DXGI_FORMAT_R16G16B16A16_UNORM) return 64; else if (dxgiFormat == DXGI_FORMAT_R8G8B8A8_UNORM) return 32; else if (dxgiFormat == DXGI_FORMAT_B8G8R8A8_UNORM) return 32; else if (dxgiFormat == DXGI_FORMAT_B8G8R8X8_UNORM) return 32; else if (dxgiFormat == DXGI_FORMAT_R10G10B10_XR_BIAS_A2_UNORM) return 32; else if (dxgiFormat == DXGI_FORMAT_R10G10B10A2_UNORM) return 32; else if (dxgiFormat == DXGI_FORMAT_B5G5R5A1_UNORM) return 16; else if (dxgiFormat == DXGI_FORMAT_B5G6R5_UNORM) return 16; else if (dxgiFormat == DXGI_FORMAT_R32_FLOAT) return 32; else if (dxgiFormat == DXGI_FORMAT_R16_FLOAT) return 16; else if (dxgiFormat == DXGI_FORMAT_R16_UNORM) return 16; else if (dxgiFormat == DXGI_FORMAT_R8_UNORM) return 8; else if (dxgiFormat == DXGI_FORMAT_A8_UNORM) return 8; else return 0; // unsupported format (the original code was missing this return path) } // load and decode image from file int LoadImageDataFromFile(BYTE** imageData, D3D12_RESOURCE_DESC& resourceDescription, LPCWSTR filename, int &bytesPerRow) { HRESULT hr; // we only need one instance of the imaging factory to create decoders and frames static 
IWICImagingFactory *wicFactory; // reset decoder, frame and converter since these will be different for each image we load IWICBitmapDecoder *wicDecoder = NULL; IWICBitmapFrameDecode *wicFrame = NULL; IWICFormatConverter *wicConverter = NULL; bool imageConverted = false; if (wicFactory == NULL) { // Initialize the COM library CoInitialize(NULL); // create the WIC factory hr = CoCreateInstance( CLSID_WICImagingFactory, NULL, CLSCTX_INPROC_SERVER, IID_PPV_ARGS(&wicFactory) ); if (FAILED(hr)) return 0; } // load a decoder for the image hr = wicFactory->CreateDecoderFromFilename( filename, // Image we want to load in NULL, // This is a vendor ID, we do not prefer a specific one so set to null GENERIC_READ, // We want to read from this file WICDecodeMetadataCacheOnLoad, // We will cache the metadata right away, rather than when needed, which might be unknown &wicDecoder // the wic decoder to be created ); if (FAILED(hr)) return 0; // get image from decoder (this will decode the "frame") hr = wicDecoder->GetFrame(0, &wicFrame); if (FAILED(hr)) return 0; // get wic pixel format of image WICPixelFormatGUID pixelFormat; hr = wicFrame->GetPixelFormat(&pixelFormat); if (FAILED(hr)) return 0; // get size of image UINT textureWidth, textureHeight; hr = wicFrame->GetSize(&textureWidth, &textureHeight); if (FAILED(hr)) return 0; // we are not handling sRGB types in this tutorial, so if you need that support, you'll have to figure // out how to implement the support yourself // convert wic pixel format to dxgi pixel format DXGI_FORMAT dxgiFormat = GetDXGIFormatFromWICFormat(pixelFormat); // if the format of the image is not a supported dxgi format, try to convert it if (dxgiFormat == DXGI_FORMAT_UNKNOWN) { // get a dxgi compatible wic format from the current image format WICPixelFormatGUID convertToPixelFormat = GetConvertToWICFormat(pixelFormat); // return if no dxgi compatible format was found if (convertToPixelFormat == GUID_WICPixelFormatDontCare) return 0; // set the dxgi 
format dxgiFormat = GetDXGIFormatFromWICFormat(convertToPixelFormat); // create the format converter hr = wicFactory->CreateFormatConverter(&wicConverter); if (FAILED(hr)) return 0; // make sure we can convert to the dxgi compatible format BOOL canConvert = FALSE; hr = wicConverter->CanConvert(pixelFormat, convertToPixelFormat, &canConvert); if (FAILED(hr) || !canConvert) return 0; // do the conversion (wicConverter will contain the converted image) hr = wicConverter->Initialize(wicFrame, convertToPixelFormat, WICBitmapDitherTypeErrorDiffusion, 0, 0, WICBitmapPaletteTypeCustom); if (FAILED(hr)) return 0; // this is so we know to get the image data from the wicConverter (otherwise we will get from wicFrame) imageConverted = true; } int bitsPerPixel = GetDXGIFormatBitsPerPixel(dxgiFormat); // number of bits per pixel bytesPerRow = (textureWidth * bitsPerPixel) / 8; // number of bytes in each row of the image data int imageSize = bytesPerRow * textureHeight; // total image size in bytes // allocate enough memory for the raw image data, and set imageData to point to that memory *imageData = (BYTE*)malloc(imageSize); // copy (decoded) raw image data into the newly allocated memory (imageData) if (imageConverted) { // if image format needed to be converted, the wic converter will contain the converted image hr = wicConverter->CopyPixels(0, bytesPerRow, imageSize, *imageData); if (FAILED(hr)) return 0; } else { // no need to convert, just copy data from the wic frame hr = wicFrame->CopyPixels(0, bytesPerRow, imageSize, *imageData); if (FAILED(hr)) return 0; } // now describe the texture with the information we have obtained from the image resourceDescription = {}; resourceDescription.Dimension = D3D12_RESOURCE_DIMENSION_TEXTURE2D; resourceDescription.Alignment = 0; // may be 0, 4KB, 64KB, or 4MB. 
0 will let runtime decide between 64KB and 4MB (4MB for multi-sampled textures) resourceDescription.Width = textureWidth; // width of the texture resourceDescription.Height = textureHeight; // height of the texture resourceDescription.DepthOrArraySize = 1; // if 3d image, depth of 3d image. Otherwise an array of 1D or 2D textures (we only have one image, so we set 1) resourceDescription.MipLevels = 1; // Number of mipmaps. We are not generating mipmaps for this texture, so we have only one level resourceDescription.Format = dxgiFormat; // This is the dxgi format of the image (format of the pixels) resourceDescription.SampleDesc.Count = 1; // This is the number of samples per pixel, we just want 1 sample resourceDescription.SampleDesc.Quality = 0; // The quality level of the samples. Higher is better quality, but worse performance resourceDescription.Layout = D3D12_TEXTURE_LAYOUT_UNKNOWN; // The arrangement of the pixels. Setting to unknown lets the driver choose the most efficient one resourceDescription.Flags = D3D12_RESOURCE_FLAG_NONE; // no flags // return the size of the image. remember to delete the image once your done with it (in this tutorial once its uploaded to the gpu) return imageSize; }
Comments
After updating my GeForce GTX 950M driver to the latest version from Nvidia, I noticed that all the tutorial samples had pronounced tearing of the displayed image, whereas before the update everything was perfect. I tried changing the 'frameBufferCount' in your 'stdafx.h' from 3 to 2, and everything corrected itself. Any comment?
on May 17 `16
AllanF
Hi allanF, that's interesting, I will do some investigating and get back to you if I find something
on May 17 `16
iedoc
Thanks, iedoc... I'm unsure if this has any bearing, but I'm also undertaking the examples from Frank Luna's latest book "Introduction To 3D Game Programming With DirectX 12" and had similar tearing effects. On searching the Net I found that others were experiencing similar issues with DirectX 12, and that this was a problem specific to Nvidia mobile cards in particular (my integrated Intel HD Graphics 4600 has no such issues). A solution was posted that recommended changing the SwapChainBufferCount from 2 to 3. I tried this and it worked. So it's strange that this is the opposite of the solution I've just posted for your samples...?
on May 17 `16
AllanF
That's really strange. So you're saying that with my examples you need to change from 3 back buffers to 2 to fix the problem, but on all the other samples you have to change from 2 to 3?
on May 17 `16
iedoc
Yes. I just did another check to confirm. As I said, your samples were working perfectly, until my latest Nvidia driver update a few days ago.
on May 17 `16
AllanF
I wonder what it could be... when I have time I will try to look into the reason. In the meantime if you find any more information on it please post so we can figure out what needs to change in the code to make it work for everyone
on May 17 `16
iedoc
Just installed the latest Nvidia drivers...all examples no longer show any screen tearing with either 2 or 3 backbuffers.
on Jun 15 `16
AllanF
Hey, thanks for getting back, AllanF
on Jun 15 `16
iedoc
Awesome tutorials! I would like to take this a step further: say I fancied creating two cubes, each with a different texture. Would I create a new TextureBuffer and Description for each texture, add these to the mainDescriptionHeap, and use mainDescriptionHeap[0] for Cube1 and mainDescriptionHeap[1] for Cube2? Thanks in advance!
on Oct 12 `16
MaverickGames
Hi MaverickGames, yes, that is what you would do if you want to upload multiple textures
on Oct 13 `16
iedoc
Awesome. Thank you for the swift reply :D
on Oct 13 `16
MaverickGames
First, thanks a lot for your very useful tutorials. Let's say I have meshes with two textures (one for diffuse, one for specular). 1) Do I have to create a descriptor heap handling the two textures for every mesh? 2) How do I transition the two textures to be accessible by the pixel shader? I see we actually use m_commandList->ResourceBarrier(1, &CD3DX12_RESOURCE_BARRIER::Transition(dst.buffer, D3D12_RESOURCE_STATE_COPY_DEST, D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE));
on May 07 `17
CLoyz
Thank you for the great tutorial. I adapted much of this, and currently I am writing an application with multiple windows. Unfortunately the drawing on the second window does not work properly. It only clears the RTV but does not draw the triangles. For the second window I used completely new assets (new swapChain, commandQueue, etc...). Curiously, the first window sometimes flickers with the color of the second window. This makes me wonder why it behaves like that. Hopefully someone can help. Thank you guys!
on May 18 `18
Radlog
Hey! First of all, great tutorial; it's really helping me get a better understanding of how to render a texture. Also, I think I have found a typo, and I have a quick question... At the descriptor table, you're explaining how to set up the root signature. I suppose the constant buffer is the MVP matrix? As for the typo: ```rootParameters[0].ShaderVisibility = D3D12_SHADER_VISIBILITY_VERTEX; // our pixel shader will be the only shader accessing this parameter for now``` This sets the root parameter to be only visible in the vertex shader, which contradicts what the comment is saying (that the pixel shader is the only one that can access it). I suppose the comment should say the vertex shader :) Once again, great tutorial!
on Nov 20 `18
Meine