DrWolfsherz
New Member
The audio interface is a Focusrite Scarlett 2i4.
Okay, I have taken a look at the code. This seems to be fundamental to how ffmpeg works. The downmixing option is not involved; what matters is the resampling that happens in obs-source.c (it also happens in audio-io.c, but I think that is not relevant for the recording, though it may be for other things). In both cases the resampling implemented in audio-resampler-ffmpeg.c is used.
The bottom line is that swr_convert, which is used in audio_resampler_resample(), seems to be the culprit. Given a 1-channel input and a 2-channel output, it spreads the mono signal across both output channels at a reduced level, so each channel of the resulting stereo stream is quieter than the original mono signal.
I'm referring specifically to this part:
Now the question is, is this what we would expect swr_convert to do? Is this what we expect OBS to do?
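For anyone who wants to check this outside of OBS, here is a minimal standalone sketch (my own test program, not the OBS code) that pushes one full-scale mono float sample through libswresample and prints the two output channels. The sample rate, the packed float format and the older swr_alloc_set_opts() API are just arbitrary choices for the demonstration; if I understand the default mixing matrix correctly, both channels should come out at roughly 0.707 (about -3 dB):

```c
#include <stdio.h>
#include <libswresample/swresample.h>
#include <libavutil/channel_layout.h>
#include <libavutil/samplefmt.h>

int main(void)
{
    /* Mono in, stereo out, same rate and sample format, so only the
     * channel rematrixing is exercised, not the actual resampling. */
    SwrContext *swr = swr_alloc_set_opts(NULL,
            AV_CH_LAYOUT_STEREO, AV_SAMPLE_FMT_FLT, 48000,
            AV_CH_LAYOUT_MONO,   AV_SAMPLE_FMT_FLT, 48000,
            0, NULL);
    if (!swr || swr_init(swr) < 0)
        return 1;

    /* One full-scale mono sample. */
    float in_sample = 1.0f;
    float out_samples[2] = {0.0f, 0.0f};

    /* Packed formats only use the first data pointer. */
    const uint8_t *in_data[]  = {(const uint8_t *)&in_sample};
    uint8_t       *out_data[] = {(uint8_t *)out_samples};

    swr_convert(swr, out_data, 1, in_data, 1);

    /* If the default matrix attenuates by ~3 dB per channel, this should
     * print values around 0.707 for both L and R. */
    printf("L = %.3f  R = %.3f\n", out_samples[0], out_samples[1]);

    swr_free(&swr);
    return 0;
}
```

Compiled with something like gcc test.c -lswresample -lavutil, this should show whether the attenuation really comes from swr_convert itself rather than from anything OBS does on top of it.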
EDIT: To be clear, I think the way ffmpeg works makes sense depending on what you expect. If a mono signal were actually output by only one speaker, then distributing the signal over all channels so that the total power output stays the same after the conversion makes sense. However, it might be a counterintuitive result in the case of OBS (at least I think so). I'm used to the fact that a mono signal is usually output on all channels/speakers by common audio recording and editing tools. Of course, the user could increase the recording volume in OBS in this particular case of mono input and stereo output, but I'm not sure whether this could potentially degrade the audio quality by first narrowing and then widening the dynamic range again. I'm also not sure whether this is something the user should need to be taught to do just because the input happens to be mono and the output stereo.
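(For reference: if each output channel carries the mono signal at 1/sqrt(2) ≈ 0.707, each channel is about 3 dB quieter, but the combined power 2 × 0.707² = 1 matches that of the original mono signal, which would explain the behavior I'm seeing.)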