Don't get me wrong, FFmpeg is great and truly the Swiss Army knife of video libraries and applications. I'm just curious why you decided to go with FFmpeg to handle the encoding in the redux, as opposed to using x264 and the hardware encoders from the original OBS.
If I were starting from scratch, I would probably use FFmpeg as well. However, you already had a working version using x264 alone, with the hardware encoders integrated. x264 is cross-platform, the hardware encoders are cross-platform too, and they probably wouldn't have required many changes.
The only reason I can think of is that FFmpeg supports more codecs out of the box, giving more options. But more encoding options aren't necessarily a good thing when, in my opinion, x264 is the best option for live streaming, even over HEVC. Or did you go with it for its other video-processing capabilities?
So I just want to pick the developers' brains a little and figure out why you decided to go with FFmpeg. Do you plan on supporting hardware-accelerated encoding in OBS Studio in the future?