Realtime Video Lighting for Avatars
[WIP] Fill out details and do proper formatting. For now here's a summarized version of the information:
You can use LTCGI for world-based realtime lighting (or AreaLit, if you prefer that solution).
For avatars that use a shader with native LTCGI support (such as poiyomi 9.1+), LTCGI will just work, though it carries a performance cost.
Alternatively, you can use AreaLit for projector-based lighting, which supports any avatar at a performance cost.
Both have pros and cons. For avatars that don't support LTCGI (and where you don't want to use AreaLit), you need to:
- Set up light probes in your scene.
- Bake your lighting with realtime GI enabled (if using Bakery, make sure you are in the 'experimental' menu mode so you can select the bake with Enlighten option at the bottom).
- Make sure your lighting settings are configured for realtime lighting.
- Double-check the indirect resolution (try 0.2 and 2 to see which looks better for your scene, then adjust from there; you can also try adjusting your lightmap resolution).
- Add the RTGIUpdater component that ProTV provides to the MeshRenderer screen you want to emit onto the light probes.
Be aware that the more light probes you have, the more expensive the RTGIUpdater action is. Don't just spam light probes everywhere; be judicious about it.
The benefit of light probes is that they are a generally more lightweight (albeit less accurate) way of capturing the screen's emission. They are also supported on Quest avatars that use the standard shaders.
For lightprobe-based GI, the video screen material must use the ProTV/VideoScreen shader OR a standard/standard-derived shader (eg: orels standard shader) that contains a META shader pass. This enables outputting the video data as emissive lighting for the light probes.
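For context on what such a META pass involves, here is a rough, hypothetical sketch using Unity's built-in meta-pass helpers from UnityMetaPass.cginc. The _MainTex name is a placeholder for whatever texture your screen material samples; the actual ProTV/VideoScreen and standard shaders ship their own implementations.

```hlsl
// Hypothetical minimal META pass sketch (not the actual ProTV shader).
// The META pass is what the GI baker samples to treat the screen as a light source.
Pass
{
    Name "META"
    Tags { "LightMode" = "Meta" }
    CGPROGRAM
    #pragma vertex vert_meta
    #pragma fragment frag_meta
    #include "UnityCG.cginc"
    #include "UnityMetaPass.cginc"

    sampler2D _MainTex; // placeholder: the material's screen texture
    float4 _MainTex_ST;

    struct v2f { float4 pos : SV_POSITION; float2 uv : TEXCOORD0; };

    v2f vert_meta(appdata_full v)
    {
        v2f o;
        // places the vertex in lightmap / realtime-GI space
        o.pos = UnityMetaVertexPosition(v.vertex, v.texcoord1.xy, v.texcoord2.xy,
                                        unity_LightmapST, unity_DynamicLightmapST);
        o.uv = TRANSFORM_TEX(v.texcoord, _MainTex);
        return o;
    }

    float4 frag_meta(v2f i) : SV_Target
    {
        UnityMetaInput o;
        UNITY_INITIALIZE_OUTPUT(UnityMetaInput, o);
        // report the video pixels as emission so the GI system picks them up
        o.Emission = tex2D(_MainTex, i.uv).rgb;
        return UnityMetaFragment(o);
    }
    ENDCG
}
```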
For integrating anything with VRSL GI, any affected shaders must support the feature set VRSL provides. For anything specific about setting up the third-party systems themselves, please visit the appropriate dev server for those assets.
Video On Avatars
ProTV 3 enables the use of the video screen on Avatars via a Global Video Texture.
The Global Video Texture setting on the TVManager component enables this access through two specific variable names: _Udon_VideoTex and _Udon_VideoData
_Udon_VideoTex
This shader variable is a 2D texture of varying dimensions. It is assigned the RenderTexture that the TV uses for its Blit operation.
You can check whether the texture is 'active' by examining its dimensions. If its width is <= 16, the texture is considered disabled or unavailable and the shader can fall back to something else.
_Udon_VideoTex_ST
Shaders should be sure to implement the float4 _Udon_VideoTex_ST tiling/offset value to allow the world to define what portion of the video texture the shader should use.
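In HLSL terms, honoring the _ST value is the standard tiling/offset transform. A minimal sketch (the helper name ApplyVideoST is illustrative, not part of ProTV):

```hlsl
uniform sampler2D _Udon_VideoTex;
float4 _Udon_VideoTex_ST; // xy = tiling, zw = offset

// equivalent to Unity's TRANSFORM_TEX(uv, _Udon_VideoTex) macro
float2 ApplyVideoST(float2 uv)
{
    return uv * _Udon_VideoTex_ST.xy + _Udon_VideoTex_ST.zw;
}
```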
There are two ways to check whether the global texture is present.
1) Using TexelSize (Recommended)
Shader pass declaration:
uniform sampler2D _Udon_VideoTex;
float4 _Udon_VideoTex_TexelSize;
float4 _Udon_VideoTex_ST;
Fragment snippet for checking width (z of TexelSize is the width of the texture in pixels):
if (_Udon_VideoTex_TexelSize.z <= 16) {
    // no video texture
} else {
    // video texture present, transform UV and sample texture
    // the TRANSFORM_TEX call just returns:
    // uv * _Udon_VideoTex_ST.xy + _Udon_VideoTex_ST.zw;
    float4 tex = tex2D(_Udon_VideoTex, TRANSFORM_TEX(uv, _Udon_VideoTex));
}
2) Using Texture2D
Shader pass declaration:
uniform Texture2D _Udon_VideoTex;
SamplerState sampler_Udon_VideoTex;
float4 _Udon_VideoTex_ST;
Fragment snippet for checking width:
uint videoWidth;
uint videoHeight;
_Udon_VideoTex.GetDimensions(videoWidth, videoHeight);
if (videoWidth <= 16) {
    // no video texture
} else {
    // video texture present, transform UV and sample texture
    // the TRANSFORM_TEX call just returns:
    // uv * _Udon_VideoTex_ST.xy + _Udon_VideoTex_ST.zw;
    float4 tex = _Udon_VideoTex.Sample(sampler_Udon_VideoTex, TRANSFORM_TEX(uv, _Udon_VideoTex));
}
_Udon_VideoData
This shader variable is a float4x4 matrix that contains the important internal metadata of the TV. The structure is as follows:
[ FLAGS (_11 int ) , STATE (_12 int ) , ERROR_STATE (_13 int ) , READY (_14 bool) ]
[ VOLUME (_21 float) , SEEK_PERCENT (_22 float) , PLAYBACK_SPEED (_23 float) , (_24 ) ]
[ (_31 ) , (_32 ) , (_33 ) , (_34 ) ]
[ 3D_MODE (_41 int ) , (_42 ) , (_43 ) , (_44 ) ]
The flags field (_11) is a bit-packed composition of the following options, with their respective value checks (the int() cast is required in the shader):
- LOCKED (int(_11) >> 0 & 1)
- MUTE (int(_11) >> 1 & 1)
- LIVE (int(_11) >> 2 & 1)
- LOADING (int(_11) >> 3 & 1)
- FORCE2D (int(_11) >> 4 & 1)
To check whether a TV is feeding information to the shader variables, examine the _14 field of the matrix. This is a boolean (0 or 1) value which the TV sets once it has passed initialization (meaning it can play media). If the value is 0/false, the shader should assume that the TV is unavailable and can fall back to something else.
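Putting the READY check and flag unpacking together, a minimal sketch of reading these fields in a shader (variable names are illustrative):

```hlsl
uniform float4x4 _Udon_VideoData;

// READY: 0 means the TV is unavailable; fall back to something else
bool tvReady = _Udon_VideoData._14 != 0;

// unpack the bit-packed FLAGS field (the int() cast is required)
int flags = int(_Udon_VideoData._11);
bool isLocked  = (flags >> 0 & 1) != 0;
bool isMuted   = (flags >> 1 & 1) != 0;
bool isLive    = (flags >> 2 & 1) != 0;
bool isLoading = (flags >> 3 & 1) != 0;
bool force2D   = (flags >> 4 & 1) != 0;

// other fields, per the matrix layout above
int   state   = int(_Udon_VideoData._12);
float volume  = _Udon_VideoData._21;
float seekPct = _Udon_VideoData._22;
```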
The int values are enums used by the TV which are as follows:
- STATE
- 0 = No media currently loaded
- 1 = Media is stopped
- 2 = Media is playing
- 3 = Media is paused
- ERROR_STATE
- 0 = no error
- 1 = URL failed and is retrying
- 2 = URL is blocked
- 3 = URL failed to load
- 3D_MODE
- 0 = No 3d mode active
- 1 = Side by Side (SBS)
- 2 = Side by Side but the eyes should be swapped
- 3 = Over Under
- 4 = Over Under but the eyes should be swapped
- -1 through -4 = Any of the respective above values, but treated as if each eye is rendered at full resolution.
This means that instead of assuming the source resolution is the full texture width * height (which stretches the texture horizontally from the source media to fit the expected size), it will assume the source resolution is width/2 * height for SBS (or width * height/2 for Over Under modes), meaning the original texture is pixel-exact per eye. This is also known as SBS-Full (or OverUnder-Full).
Why does the structure look oddly arranged? Because the default value Unity assigns to a 4x4 shader matrix is the identity matrix:
[1,0,0,0]
[0,1,0,0]
[0,0,1,0]
[0,0,0,1]
So to handle situations where Unity supplies the identity fallback, the locations of the data have been carefully selected so that each field has a reasonable default value.