compositing plugin questions

chrismarch

New Member
Hello,
I've been digging through the plugin API docs, along with code for background-removal plugins (machine-learning based, rather than green screen). I'm at the point where I need to ask some questions about the plugin API:
  1. I want precise control over the compositing between a video capture device and whatever source renders underneath it. Is there a way to access the frame/pixels that another source has rendered underneath the video capture device source filter plugin I am writing? Is there a plugin setup that makes more sense than a source filter, say one using OBS_SOURCE_COMPOSITE (maybe an infinite transition)? (Also, the source blending modes in the OBS GUI don't give me enough control.)
  2. Can I output a different frame format than is input to an async filter_video callback? (For example, output RGBA.)
  3. If I use the graphics system instead, can a shader read what is already in VRAM (the render target?) before I blend in the texture that holds the filtered video capture? Or do I need to use the stage surface API to read it back to RAM and then use that data to construct a texture in a later frame?
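To make question 1 concrete, the kind of "precise control" I mean is per-pixel blend math like Porter-Duff "source over", rather than picking from the fixed blend-mode list. A minimal CPU-side sketch of that blend (straight alpha, one RGBA pixel; this is generic compositing math, not an OBS API call):

```c
#include <stdint.h>

/* Porter-Duff "source over" for one straight-alpha RGBA pixel.
   src, dst, out are 4-byte RGBA arrays with channels in 0..255. */
static inline void rgba_over(const uint8_t src[4], const uint8_t dst[4],
                             uint8_t out[4])
{
    float sa = src[3] / 255.0f;
    float da = dst[3] / 255.0f;
    float oa = sa + da * (1.0f - sa);          /* resulting alpha */
    for (int i = 0; i < 3; i++) {
        float c = src[i] * sa + dst[i] * da * (1.0f - sa);
        /* un-premultiply back to straight alpha; round to nearest */
        out[i] = (uint8_t)(oa > 0.0f ? c / oa + 0.5f : 0.0f);
    }
    out[3] = (uint8_t)(oa * 255.0f + 0.5f);
}
```

In a real plugin this math would live in a shader reading the underlying source's texture, which is exactly what I'm unsure is reachable from a filter.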
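For question 2, the conversion I'd want the filter to do is something like standard BT.601 limited-range YUV to RGBA. A self-contained single-pixel sketch of that math (generic integer approximation; the actual OBS frame layout, linesizes, and allocation of a new obs_source_frame are elided):

```c
#include <stdint.h>

static inline uint8_t clamp8(int v)
{
    return (uint8_t)(v < 0 ? 0 : v > 255 ? 255 : v);
}

/* BT.601 limited-range YUV -> RGBA for one pixel,
   using the common 8-bit fixed-point coefficients. */
static inline void yuv_to_rgba(uint8_t y, uint8_t u, uint8_t v,
                               uint8_t out[4])
{
    int c = y - 16, d = u - 128, e = v - 128;
    out[0] = clamp8((298 * c + 409 * e + 128) >> 8);           /* R */
    out[1] = clamp8((298 * c - 100 * d - 208 * e + 128) >> 8); /* G */
    out[2] = clamp8((298 * c + 516 * d + 128) >> 8);           /* B */
    out[3] = 255;                                              /* opaque */
}
```

So the question is whether the frame returned from filter_video may legally carry a different format field (e.g. RGBA) than the frame that came in.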
 