Is it possible to sample 16-bit colour values with textureSampler?

RealGPT

New Member
I am writing a filter with OBS-Shaderfilter to remap an image/video. The filter creates its output by sampling a texture image and using the Red/Green values as new coordinates to sample the input. I've seen this called UV mapping or ST mapping.

It works OK with an 8-bit RG texture image, but then I am limited to 256x256 pixel sampling fidelity.
I can't get it to work with a 16-bit RG colour texture image, which would open up some powerful image/video transformations.

Is it possible to sample 16-bit RG values?

--------------------------------------------------------------------

Below are my filter code and texture creation method.
- I want to sample 16-bit RG values from remap_texture...


uniform texture2d remap_texture <string label = "remap_texture";>;

float4 mainImage(VertData v_in) : TARGET
{
    float2 uvmap = remap_texture.Sample(textureSampler, v_in.uv);
    return image.Sample(textureSampler, uvmap);
}


Test texture generation (identity, i.e. a 1-to-1 mapping, so I expect output = input)

Generate an identity test texture 256x256 -> texture_8bit.png
ffmpeg -f lavfi -i nullsrc=size=256x256 -vf format=gray8,geq='X' -frames 1 -y xmap.pgm
ffmpeg -f lavfi -i nullsrc=size=256x256 -vf format=gray8,geq='Y' -frames 1 -y ymap.pgm
magick convert xmap.pgm ymap.pgm -background black -channel RG -combine texture_8bit.png

Generate an identity test texture 1080x1080 -> texture_16bit.png
ffmpeg -f lavfi -i nullsrc=size=1080x1080 -vf format=gray16,geq='X' -frames 1 -y xmap.pgm
ffmpeg -f lavfi -i nullsrc=size=1080x1080 -vf format=gray16,geq='Y' -frames 1 -y ymap.pgm
magick convert xmap.pgm ymap.pgm -background black -channel RG -combine texture_16bit.png
 

AaronD

Active Member
I don't have an answer, but for those who ask, "Why in the world do you need more than 10-bit color?! Or even 8-bit color?!", THAT'S NOT WHAT THIS IS!!!

This is physically rearranging the individual pixels of an image, using an API that already exists. Each pixel in the "texture" is not a color, but an address of where to get the pixel from the source image. Once you think in terms of an address, not a color, it becomes obvious why it needs more bits.
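
To put rough numbers on it: with n bits per channel you can only address 2^n distinct source positions per axis, so 8 bits tops out at a 256x256 grid of addresses. A quick Python sketch, just for illustration:

# Back-of-the-envelope (Python): how many address bits per axis a source image
# of a given width needs, if each map channel is an address rather than a color.
import math

for width in (256, 1920, 3840):
    print(f"{width} columns need {math.ceil(math.log2(width))} bits per axis")
# e.g. 1920 columns need 11 bits per axis, 3840 need 12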
 

AaronD

Active Member
As I was writing that, I did think of one possibility:

Can you use a 12x12-bit address space? Or 11x13-bit? Etc. Use all 3 colors, 8 bits each, to give you 24 bits total, and divide up those 24 bits however you need. You might also need to write a preprocessor to convert a 16-bit map into your standard, and throw an error when it's still too big.

Or, since you're working with PNGs, and that format has 8-bit transparency (alpha), you might actually have 4 "colors" to work with, with 8 bits each. Just stick two pairs of colors together, and you have 16x16 bits! Still need the preprocessor, but at least you know it'll fit!
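
A rough sketch of that kind of preprocessor in Python, assuming numpy and Pillow are available (the identity map and the output filename are just examples), packing each 16-bit coordinate into a high byte and a low byte across the RGBA channels:

# Rough sketch of a packing preprocessor (assumes numpy and Pillow): split each
# 16-bit target coordinate into a high byte and a low byte and store the four
# bytes in the R, G, B and A channels of an 8-bit PNG.
import numpy as np
from PIL import Image

w, h = 4096, 2048                      # identity map, just as an example
if w > 65536 or h > 65536:
    raise ValueError("coordinates would no longer fit in 16 bits")

xs, ys = np.meshgrid(np.arange(w, dtype=np.uint32),
                     np.arange(h, dtype=np.uint32))

rgba = np.zeros((h, w, 4), dtype=np.uint8)
rgba[..., 0] = xs >> 8                 # R: high byte of target x
rgba[..., 1] = xs & 0xFF               # G: low byte of target x
rgba[..., 2] = ys >> 8                 # B: high byte of target y
rgba[..., 3] = ys & 0xFF               # A: low byte of target y

Image.fromarray(rgba, mode="RGBA").save("packed_map.png")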
 

RealGPT

New Member
Thanks AaronD, you got the concept exactly. Yes, agreed, I could use 3 or 4 channels, but I'm new to HLSL and finding it a little tough going! So I was hoping for a cleaner solution if one was possible.
 

khaver

Member
The uv (xy) and color (RGBA) values of textures are normalized to floating point numbers between 0 and 1. Also, retrieving a color sample from a texture returns a float4 (RGBA). To retrieve just the R & G values (float2), you'll need to use:

float2 uvmap = remap_texture.Sample(textureSampler, v_in.uv).rg;
 

RealGPT

New Member
Thanks khaver, understood.
 

RealGPT

New Member
Successfully implemented with RGBA to get 2x 16-bit values. Works OK as a workaround...
uniform texture2d remap_texture <string label = "remap_texture";>;

float4 mainImage(VertData v_in) : TARGET
{
    int texture_width = 4096;
    int texture_height = 2048;
    float4 rgba = remap_texture.Sample(textureSampler, v_in.uv);
    // convert to 16 bit value and downscale to fit
    float x = (256*rgba.r + rgba.g) / (texture_width / 256);
    float y = (256*rgba.b + rgba.a) / (texture_height / 256);
    return image.Sample(textureSampler, float2(x,y));
}

To generate an identity map (4096x2048, 8-bit RGBA):
ffmpeg -f lavfi -i nullsrc=size=4096x2048 -vf format=gray,geq='floor(X/256)' -frames 1 -y r.pgm
ffmpeg -f lavfi -i nullsrc=size=4096x2048 -vf format=gray,geq='mod(X,256)' -frames 1 -y g.pgm
ffmpeg -f lavfi -i nullsrc=size=4096x2048 -vf format=gray,geq='floor(Y/256)' -frames 1 -y b.pgm
ffmpeg -f lavfi -i nullsrc=size=4096x2048 -vf format=gray,geq='mod(Y,256)' -frames 1 -y a.pgm
magick convert r.pgm g.pgm b.pgm a.pgm -channel RGBA -combine texture.png
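
As a sanity check of the decode arithmetic, the packing and the sampler's divide-by-255 normalisation can be replayed on the CPU; this is just an illustrative Python sketch, not part of the filter:

# Replays the shader's decode: channels are packed as floor(X/256) and mod(X,256),
# the sampler normalises each byte by 255, and the shader recombines them with
# (256*r + g) / (texture_width / 256).
texture_width = 4096

worst = 0.0
for X in range(texture_width):
    r = (X // 256) / 255.0                      # high byte as the sampler sees it
    g = (X % 256) / 255.0                       # low byte as the sampler sees it
    decoded = (256 * r + g) / (texture_width / 256)
    worst = max(worst, abs(decoded - X / texture_width))

print(f"worst deviation from an exact identity: {worst:.4f} UV")   # about 0.0039

The small residual comes from the sampler normalising each byte by 255 while the packing works in steps of 256; it is zero at the origin and grows to about 0.4% of the frame at the far edge.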
 

AaronD

Active Member
The conversion to floating point shouldn't be a problem: a 32-bit float (IEEE 754 format) can exactly represent any 24-bit or smaller integer, so an automatic conversion to float and explicitly back to int again only becomes a problem if your index is bigger than 16,777,216.
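
A quick way to see that limit, using numpy's float32 as a stand-in for the GPU's 32-bit float (just an illustration):

import numpy as np

# A 32-bit IEEE 754 float has a 24-bit significand, so every integer up to
# 2**24 = 16,777,216 round-trips exactly; the next integer does not.
print(int(np.float32(16_777_216)))   # 16777216 -> exact
print(int(np.float32(16_777_217)))   # 16777216 -> rounded, one off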
 

moocowsheep

New Member
Just to add to this... I've been working on the same thing with the devs, and we've been trying to get 32-bit RGBA TIFF support into OBS. This seems to work with the modified code from the pull request; I've tested it on Windows.

The shader is as follows:
uniform texture2d remap_texture <string label = "remap_texture";>;

sampler_state uvshaderSampler {
    Filter = Linear;
    AddressU = Border;
    AddressV = Border;
    BorderColor = 00000000;
};

float4 mainImage(VertData v_in) : TARGET
{
    float uvX = remap_texture.Sample(uvshaderSampler, v_in.uv).r;
    float uvY = 1 - remap_texture.Sample(uvshaderSampler, v_in.uv).g;
    float2 uvmap = float2(uvX, uvY);
    return image.Sample(uvshaderSampler, uvmap);
}

The UV map has to be a 32 bit floating point TIFF, pixel format rgbaf32le. I've done it by creating a 32 bit EXR, and then saving it as a 32 bit UNCOMPRESSED tiff with no layers in GIMP (a compressed TIFF will crash OBS).
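
For anyone wanting to generate such a map programmatically, here is a rough Python sketch, assuming numpy and the tifffile package (I haven't verified that the resulting file matches the exact pixel layout the modified loader expects; the size and filename are just examples), that writes an identity map as an uncompressed 32-bit float RGBA TIFF:

# Rough sketch (assumes numpy + tifffile): write an identity ST map as an
# uncompressed 32-bit float RGBA TIFF. Red ramps 0..1 left to right, green
# ramps 0..1 bottom to top (the shader above flips it back with "1 - g"),
# blue is unused and alpha is 1. tifffile writes uncompressed by default,
# which matters here since compressed TIFFs reportedly crash OBS.
import numpy as np
import tifffile

w, h = 1920, 1080
rgba = np.zeros((h, w, 4), dtype=np.float32)
rgba[..., 0] = np.linspace(0.0, 1.0, w, dtype=np.float32)[np.newaxis, :]  # R = target x
rgba[..., 1] = np.linspace(1.0, 0.0, h, dtype=np.float32)[:, np.newaxis]  # G = target y
rgba[..., 3] = 1.0                                                        # opaque alpha

tifffile.imwrite("identity_map.tif", rgba)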

If the modified image loading code gets into the master branch, hopefully it'll be readily available, but until then I've just been compiling my own builds.
 