desaintmartin
New Member
Hello Guys,
I've been playing with OBS for a few days now, and my goal is to set up a Linux machine that receives 4K RTMP streams as pull input (through some RTMP server like nginx or srs) and outputs them in 4K using NVENC.
So far, the NVENC part works well and I am able to encode my output stream using the video engine from my GTX 1070.
But I get dropped frames even when not streaming; the decoding / filtering / rendering parts alone seem to bring my GPU to its knees. Oddly enough, everything works well on Windows.
When looking at GPU usage (using the "nvidia-smi dmon" command on Linux, GPU-Z on Windows), I see 100% usage on Linux without even streaming, while on Windows it is only around 50%, rising to ~80% when streaming.
As a result, when streaming on Linux I only get ~22 FPS instead of 30.
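In case it helps, here is roughly how I've been logging utilization over time to compare the two machines. This is only a minimal Python sketch: it assumes nvidia-smi is on the PATH and accepts --query-gpu=utilization.gpu,utilization.memory, so treat the field names as an assumption and check "nvidia-smi --help-query-gpu" on your driver.

[CODE]
#!/usr/bin/env python3
"""Log GPU utilization once per second to compare OBS runs on Linux and Windows.

Rough sketch: assumes nvidia-smi is on the PATH and supports
--query-gpu=utilization.gpu,utilization.memory (treat these field names
as an assumption and verify with `nvidia-smi --help-query-gpu`).
"""
import subprocess
import time

SAMPLE_PERIOD_S = 1.0  # one sample per second


def sample():
    """Return (gpu_util_percent, mem_util_percent) for GPU 0."""
    out = subprocess.check_output(
        ["nvidia-smi",
         "--query-gpu=utilization.gpu,utilization.memory",
         "--format=csv,noheader,nounits",
         "--id=0"],
        text=True,
    )
    # Output looks like "45, 12" (percent values, no units).
    gpu_str, mem_str = out.strip().splitlines()[0].split(",")
    return int(gpu_str), int(mem_str)


if __name__ == "__main__":
    gpu_samples = []
    try:
        while True:
            gpu, mem = sample()
            gpu_samples.append(gpu)
            print(f"gpu={gpu:3d}%  mem={mem:3d}%")
            time.sleep(SAMPLE_PERIOD_S)
    except KeyboardInterrupt:
        if gpu_samples:
            avg = sum(gpu_samples) / len(gpu_samples)
            print(f"average GPU utilization over {len(gpu_samples)} samples: {avg:.1f}%")
[/CODE]

Letting it run for a minute with OBS idle and then again while streaming should give numbers comparable to what "nvidia-smi dmon" and GPU-Z show, just easier to average and compare across the two OSes.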
I'm using OBS 0.19.0.3 on both OSes, with NVIDIA drivers 183 on Linux and 184 on Windows. The Linux machine is an up-to-date Ubuntu 16.04.
Are there differences between the OpenGL / Xorg rendering path and the Direct3D one that could explain this huge gap? Is it an OBS-related optimization issue? I've seen a two-year-old post on Stack Overflow where MadCactus describes his early Xorg problems; maybe things have changed since then? I would really like to automate everything, and Linux is much better suited than Windows for server-like purposes...