reeddacted
New Member
I see face-tracking plugins that will zoom, pan, etc. on a camera source when they detect a face.
I was curious whether it would be possible to use the NVIDIA AR SDK, or other face-mesh or movement data, to move different sources or fire other triggers.
That would allow something in between a PNGTuber and a VTuber (blink and open-mouth detection, a detected wink triggering a sound effect, X/Y/Z/rotation movement but no physics), or a world of other stuff. Hand-tracking data could be used in a similar way (hand in scene = enable wave.gif, etc.). A rough sketch of the detection side is below.
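Here's a minimal sketch of what the detection half could look like. It assumes MediaPipe's FaceMesh as the tracking backend (the NVIDIA AR SDK is C/C++ only, so no direct Python bindings); the thresholds are placeholder values, and the print() calls stand in for whatever OBS action a real plugin would take, e.g. toggling a source via obs-websocket or an OBS Python script.

```python
# Sketch only: MediaPipe FaceMesh stands in for the NVIDIA AR SDK here,
# and print() stands in for the actual OBS trigger.
import cv2
import mediapipe as mp

def dist(a, b):
    return ((a.x - b.x) ** 2 + (a.y - b.y) ** 2) ** 0.5

def eye_openness(lm, upper, lower, corner_a, corner_b):
    # Ratio of vertical lid gap to eye width; small value = closed eye.
    return dist(lm[upper], lm[lower]) / dist(lm[corner_a], lm[corner_b])

def main():
    face_mesh = mp.solutions.face_mesh.FaceMesh(max_num_faces=1,
                                                refine_landmarks=True)
    cap = cv2.VideoCapture(0)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_face_landmarks:
            lm = results.multi_face_landmarks[0].landmark
            # Canonical FaceMesh indices: 159/145 = left upper/lower lid,
            # 33/133 = left eye corners; 386/374 and 362/263 for the right
            # eye; 13/14 = inner lips, 61/291 = mouth corners.
            left = eye_openness(lm, 159, 145, 33, 133)
            right = eye_openness(lm, 386, 374, 362, 263)
            mouth = dist(lm[13], lm[14]) / dist(lm[61], lm[291])
            CLOSED = 0.15  # placeholder threshold, tune per camera/face
            if left < CLOSED and right < CLOSED:
                print("blink")       # e.g. swap to eyes-closed sprite
            elif (left < CLOSED) != (right < CLOSED):
                print("wink")        # e.g. trigger sound effect
            if mouth > 0.35:         # placeholder threshold
                print("mouth open")  # e.g. swap to talking sprite
        cv2.imshow("preview", frame)
        if cv2.waitKey(1) & 0xFF == 27:  # Esc quits
            break
    cap.release()

if __name__ == "__main__":
    main()
```

The landmark indices are the standard FaceMesh ones for the lids, eye corners, and lips; everything else (thresholds, debouncing, smoothing) would need tuning, and the same pattern would extend to hand landmarks for the hand-in-scene case.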
There's probably a reason this hasn't been done already, but I haven't been able to find anything online, so I thought I'd mention it here.