I've messed around with it a little, and I can kind of get it to do what I want. I'm able to get it to dynamically pick up the mask coming from a live NDI source. I can even manage some edge alpha-blending. I'll have to keep tweaking it, but this has some promise.
What I really want probably isn't possible with any system using only two inputs (a main camera and an overlay with a mask). I'm trying to put text over video in various locations, driven by a second presentation machine (hence the dynamic mask). If I stick with simple chroma-keying and hard block edges, this is easy. But I really want a blurred section of my video under the text, with the edges of the blur blended back into the sharp video. So really, I'm trying to figure out how to build a three- to four-layer composite with only two source layers.
The presentation layer order is effectively (from top to bottom of the stack):
- Presentation Text
- Blurred Camera
- Apply mask/key (from presentation machine) with blended alpha edges
- Camera
But I only have two sources: Camera and the Presentation NDI feed. The Dynamic Mask filter handles the blended alpha edges, but it also keys out part or all of my text. To make it work right, I think I'd need two coordinated layers from the presentation machine (text and mask sent separately). I'll probably end up not worrying about the blended alpha edges and just stick with a chroma-key for my mask. Thanks for the tip about trying the Dynamic Mask, though!
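For what it's worth, here's a rough sketch (in numpy, as pure pixel math, not actual OBS/NDI code) of the composite I'm after. All the names here are my own stand-ins; the box blur is just a naive placeholder for whatever blur filter the real system uses:

```python
import numpy as np

def box_blur(img, radius=1):
    # Naive box blur via shifted sums; stand-in for a real blur filter.
    out = np.zeros_like(img, dtype=float)
    count = 0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            out += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            count += 1
    return out / count

def composite(camera, mask, text, text_alpha):
    # Four-layer stack from two sources:
    # camera -> blurred camera keyed by a soft mask -> text on top.
    # mask and text_alpha are HxW floats in [0, 1]; camera/text are HxWx3.
    blurred = box_blur(camera)
    # Soft (non-binary) mask gives the blended alpha edges between
    # the blurred region and the sharp camera.
    base = mask[..., None] * blurred + (1.0 - mask[..., None]) * camera
    # Text composited over the result.
    return text_alpha[..., None] * text + (1.0 - text_alpha[..., None]) * base
```

The catch is exactly what I described: with only two inputs, `text` and `mask` arrive pre-combined in one NDI feed, so the mask that softens the blur edge also eats into the text.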