Is there any reason why 16-bit shaders are ignored here? In my tests, the 16-bit version of the FSR3 upscaler shading change pyramid shader is slightly faster with no noticeable visual difference.
The function in question, at `FidelityFX-SDK/Kits/FidelityFX/upscalers/fsr3/internal/ffx_fsr3upscaler_shaderblobs.cpp`, line 289 in commit `f4c1da8`:

```cpp
static FfxShaderBlob fsr3UpscalerGetShadingChangePyramidPassPermutationBlobByIndex(uint32_t permutationOptions, bool isWave64, bool)
```

Note that the third `bool` parameter (the 16-bit flag) is left unnamed, so the function never looks at it and always returns the 32-bit blob.
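For illustration, here is a minimal sketch of what honoring that flag could look like. The simplified `FfxShaderBlob` struct and the placeholder blob names below are assumptions made up for this example, not the SDK's actual generated identifiers; the real code would index into the generated permutation tables rather than returning a single blob per variant.

```cpp
#include <cstdint>

// Hypothetical stand-ins for the SDK's generated shader blobs. FfxShaderBlob
// is simplified here, and these placeholder blobs stand in for the generated
// permutation tables; none of these names come from the actual SDK headers.
struct FfxShaderBlob { const uint8_t* data; uint32_t size; };

static const uint8_t       g_dummyDxil[]   = { 0 };
static const FfxShaderBlob g_blobWave32     = { g_dummyDxil, sizeof(g_dummyDxil) };
static const FfxShaderBlob g_blobWave32Fp16 = { g_dummyDxil, sizeof(g_dummyDxil) };
static const FfxShaderBlob g_blobWave64     = { g_dummyDxil, sizeof(g_dummyDxil) };
static const FfxShaderBlob g_blobWave64Fp16 = { g_dummyDxil, sizeof(g_dummyDxil) };

// Sketch of the suggested change: consume the third parameter (is16bit)
// instead of discarding it, and hand back the fp16 blob when it is set.
static FfxShaderBlob fsr3UpscalerGetShadingChangePyramidPassPermutationBlobByIndex(
    uint32_t /*permutationOptions*/, bool isWave64, bool is16bit)
{
    if (isWave64)
        return is16bit ? g_blobWave64Fp16 : g_blobWave64;
    return is16bit ? g_blobWave32Fp16 : g_blobWave32;
}
```

If the 16-bit permutations are intentionally disabled for this pass (e.g. a known precision issue in the pyramid reduction), it would be good to have that documented in a comment.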