UmiHimeYokai
New Member
I am a little bit confused on how Render Delay filters work.
I have a multiPC stream setup. I need to send audio on one computer over to the main streaming computer. Due to reasons I am not going to get into here, there is a 1 to 1.5 second delay, with all the video and sound from the streaming computer being ahead of this one audio source.
Obviously I need to add a render delay. But I've been trying that and it doesn't seem to behave well. What I tried: put everything in a scene except for this audio source, then in a new scene add the first scene as a source along with the incoming audio source, and put a render delay filter on that scene source. I need a fairly substantial render delay (currently 1.4 seconds) which, to reiterate, can't really be reduced. However, I'm running into issues where my encoder is getting overwhelmed. That doesn't make much sense to me, so maybe I'm fundamentally misunderstanding how render delay filters work.
I am running at 30 fps, scaled to 720p, with a 4080 Super and roughly 8 GB of free VRAM for OBS to play with. My assumption was that a render delay filter on a scene works by storing raw frames in VRAM, with the number of frames being OBS's common frame rate times the delay time. A lossless PNG screenshot exported from OBS is about 1.84 MB. Even assuming a raw frame is actually 10 MB, at 30 frames per second with a 1.5-second delay I would expect this to take 450 MB of VRAM. Which, while a lot, is still an order of magnitude less than the VRAM OBS has available to it. Yet I'm having encoding overload issues.
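For what it's worth, here's my back-of-envelope math as a sketch. This assumes (and I don't know this for certain) that OBS buffers raw uncompressed BGRA frames at canvas resolution; the numbers come out even lower than my 10 MB/frame guess:

```python
# Back-of-envelope VRAM estimate for a render delay buffer,
# ASSUMING (unverified) OBS stores raw uncompressed BGRA frames.

WIDTH, HEIGHT = 1280, 720      # 720p output
BYTES_PER_PIXEL = 4            # BGRA, uncompressed
FPS = 30
DELAY_SECONDS = 1.5

frame_bytes = WIDTH * HEIGHT * BYTES_PER_PIXEL   # ~3.7 MB per raw frame
frames_buffered = int(FPS * DELAY_SECONDS)       # 45 frames
total_mb = frame_bytes * frames_buffered / 1e6

print(f"{frame_bytes / 1e6:.1f} MB per frame, "
      f"{frames_buffered} frames, ~{total_mb:.0f} MB total")
# → 3.7 MB per frame, 45 frames, ~166 MB total
```

So even under that assumption, the buffer should be nowhere near 8 GB.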
What makes this an even bigger head-scratcher for me is that stream delay supposedly uses significantly less memory for significantly longer delays than render delay does.
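My understanding (which could be wrong) is that stream delay buffers already-encoded output rather than raw frames, so its footprint scales with bitrate instead of frame size. A rough sketch with an illustrative bitrate and delay:

```python
# Rough memory estimate for stream delay, ASSUMING it buffers
# encoded output rather than raw frames. Numbers are illustrative.

BITRATE_KBPS = 6000          # example encoder bitrate
DELAY_SECONDS = 20           # example stream delay

buffered_mb = BITRATE_KBPS * DELAY_SECONDS / 8 / 1000
print(f"~{buffered_mb:.0f} MB for a {DELAY_SECONDS}s delay")
# → ~15 MB for a 20s delay
```

If that's right, it would explain why a 20-second stream delay is cheap while a 1.5-second render delay apparently isn't.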
How exactly does the render delay filter work on scenes? I'm imagining it storing (common frame rate × delay time) worth of frames of the combined scene in VRAM. But it could also be storing the individual frames of every single element in the scene source separately and reassembling them at render time. The latter would obviously use far more VRAM, and would make a lot more sense of what I'm seeing - but I can't find any information on what exactly is happening. Is this why stream delay can be so much longer?
If that's the case, would it be significantly faster in my case to render what I have immediately, then add the audio and re-render? How would I do that in OBS? Would I essentially have to stream it to myself, or use something like OBS.Ninja?
And to head it off: I am using hardware encoding and the Fast quality preset. My CPU usage is low (and my GPU usage is low, although it has random spikes where the video encoder just wigs out for a few seconds).