
OBS Studio: High quality recording with multiple audio tracks in Advanced output mode

megpoid0

Member
megpoid0 submitted a new resource:

High quality recording with multiple audio tracks in Advanced output mode - bottom text

Advanced output mode allows creating a file with multiple audio tracks - for example, you could livestream your game's audio and your microphone, and also record your system sounds, game audio, and microphone into 3 separate tracks for video editing. (Capturing per-application audio will be added to OBS Studio in the future; currently, the win-capture-audio plugin can be installed for this.)
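
A quick way to confirm what actually ends up in such a recording is to list its audio streams with ffprobe. Here is a minimal sketch (calling ffprobe from Python; an ffprobe install on the PATH and the file name recording.mkv are assumptions, not something from the resource):

[CODE=python]
# List the audio tracks in an OBS multi-track recording.
# Assumes ffprobe (part of FFmpeg) is on the PATH; "recording.mkv" is a placeholder name.
import json
import subprocess

result = subprocess.run(
    [
        "ffprobe", "-v", "error",
        "-select_streams", "a",  # audio streams only
        "-show_entries", "stream=index,codec_name,channels",
        "-of", "json",
        "recording.mkv",
    ],
    capture_output=True, text=True, check=True,
)

for stream in json.loads(result.stdout)["streams"]:
    print(f"audio stream {stream['index']}: "
          f"{stream['codec_name']}, {stream['channels']} channel(s)")
[/CODE]

Each enabled track should show up as its own audio stream (for example, three streams for system sounds, game audio, and microphone).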


Read more about this resource...
 

Joe-Bio

New Member
The multitrack audio recording is VERY helpful. My problem: I cannot find a way to record the audio signal pre-fader / pre-filter (I hope I did not overlook it in your post). Does this option exist? If yes, cool; if not, I can explain why this feature would be more than a nice-to-have :)
 

AaronD

Active Member
Could you make it clearer what actually ends up in the recorded file?

I always assumed that the Tracks in OBS were more like Mute Groups on a large-format mixing console, or mixing busses that each have an output mute and then mix together to produce a single N-track file, where N is the number of channels that OBS is set to. (mono=1, stereo=2, 5.1=6, etc.) So for that reason, I've never seen a use for them (just mix directly into a single Track, if it all ends up that way anyway), and this Guide just kinda glosses over it.

Thanks!
 

Joe-Bio

New Member
My case: I use OBS for live hybrid lectures, so I have several different audio signals. The most relevant ones in this context come from a lavalier microphone recording my voice and a condenser microphone for the "surroundings", which captures the contributions of the students in the room and reactions like laughter, applause and so on. For live streaming, the signal of the surrounding microphone is ducked via sidechain compression (using the internal OBS filter) whenever I speak. For post-processing, I record all audio on separate tracks and subsequently have every option open in a DAW. My problem is that all tracks are recorded after the filters (post-fader / post-effect). Recording equipment such as a Zoom LiveTrak L-8 only records the mix channel post-fader / post-effect (which in OBS corresponds to the mixed signal sent to the stream); the individual input channels are recorded pre-fader / pre-effect and therefore contain the raw signal.
The usual case is that people "duck" their voice over background music - and for post-processing, you can simply load the original track. In my case, the "ducked" signal is the only recording I have of the surroundings and the students' contributions, so I have no way to rescue anything in post. As an example, if a student speaks and I have to cough, the student will be ducked at that instant and I cannot manually bring the student back in post using, e.g., sidechain automation in the DAW.
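
(For illustration only - a toy numpy sketch, with made-up numbers, of why a ducked recording is hard to "un-duck" in post: the file stores raw * gain plus the noise floor, so dividing the gain back out mostly amplifies noise wherever the ducking was active:)

[CODE=python]
# Toy model: a post-filter recording keeps raw * gain (plus the noise floor),
# so the only way to "un-duck" it is to divide the gain back out, which
# amplifies the noise exactly where the ducking was strongest. Made-up numbers.
import numpy as np

rng = np.random.default_rng(0)
raw = np.sin(np.linspace(0, 40 * np.pi, 48000))       # stand-in for the room mic
gain = np.where(np.arange(48000) < 24000, 1.0, 0.05)  # ducked to about -26 dB in the 2nd half
noise = 1e-3 * rng.standard_normal(48000)              # recorder noise floor

recorded = raw * gain + noise   # what a post-filter recording keeps
restored = recorded / gain      # naive attempt to undo the ducking

print("max error while un-ducked:", np.abs(restored[:24000] - raw[:24000]).max())
print("max error while ducked:   ", np.abs(restored[24000:] - raw[24000:]).max())  # ~20x worse
[/CODE]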

To make a long story short: since the recorded tracks (at least in my perception) only play a role in post-processing, where you have all the tools (and time) in the world, recording the raw signals makes more sense than recording the processed / filtered signals. And maybe there is a way to set this somewhere and I simply did not find it.

Thanks for your help - it's highly appreciated!
 

AaronD

Active Member
In your specific case, since you only have two mono mics, you might see what it would take to record a stereo soundtrack with each one panned hard to one side or the other. Then your editor can use the left and right channels as two monos that you pan back to center. That still doesn't solve the problem of needing a raw recording to post-process, though.

For the raw recording, can you create a copy of each mic source in OBS, so that you now have four sources? Put your filters on one set and none on the other. In the Advanced Audio Properties, set the Track assignments and panning differently, and then in the Settings, stream one track and record another.

Or, to actually use the multiple-tracks idea as I think it's intended, given the original guide, you might keep the duplicated mics but assign each of the unfiltered ones to its own track, and record all of those tracks. So both of the processed copies go to Track 6, for example, and you stream Track 6; meanwhile one of the individual mics goes to Track 1 and the other to Track 2, and you record Tracks 1 & 2.
(kinda like how the audience outputs of a live audio console use the highest-numbered outputs, while the "internal logistics" stuff like floor wedges and recording starts from 1 in the same set)
Then your editor needs to pick Track 1 or Track 2 out of the video file, which is different from left or right in stereo. In fact, if OBS is set for stereo, you would get a total of four channels in the recording - two tracks of two channels each - so since nobody is going to appreciate stereo anyway as you use it, you could set OBS to mono and still have the mics recorded separately.
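
If the editor would rather have separate audio files than dig tracks out of the video, each audio stream can be copied out with ffmpeg without re-encoding. A rough sketch under the routing above (the file names, stream order, and AAC-in-MKV assumption are placeholders, not from the guide):

[CODE=python]
# Copy each raw mic track out of the recording as its own audio file,
# following the Track 1 / Track 2 routing described above.
# Assumes ffmpeg is on the PATH, AAC audio in an MKV/MP4 container,
# and that the file names and stream order below match your setup.
import subprocess

RECORDING = "lecture.mkv"
tracks = {
    0: "lavalier_raw.m4a",   # first audio stream in the file (OBS Track 1 here)
    1: "room_mic_raw.m4a",   # second audio stream (OBS Track 2 here)
}

for stream_index, out_name in tracks.items():
    subprocess.run(
        [
            "ffmpeg", "-y", "-i", RECORDING,
            "-map", f"0:a:{stream_index}",  # pick one audio stream
            "-c", "copy",                   # no re-encode
            out_name,
        ],
        check=True,
    )
[/CODE]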

The original guide has better screenshots than what I would do, so you can just look at those. :-)
 

Joe-Bio

New Member
Thanks for your response - this parallels exactly what I was about to write, as it is my current workaround: I copy the surrounding mic and send the filtered one to the stream and the other, unfiltered one to the recording. The lavalier mic is connected to a RØDE Wireless GO system, which also has internal recording, so there is no problem with that one.

Nevertheless, as I do not see an advantage in including the filters in the recordings, it would be cool if the developers could at least consider the possibility of recording the raw signal.

In any case, thanks a lot for your swift response and help :)
 

AaronD

Active Member
The lavalier mic is connected to a RØDE Wireless GO system, which also has internal recording, so there is no problem with that one.
I can see a problem: clock drift.

Even if you sync the start exactly, it can still be off a bit by the time you get to the end of a 50-minute lecture. For example, one might record at 44100.3 Hz and the other at 44100.4 Hz. Both files say they're 44.1 kHz, and you couldn't tell the difference in a direct A/B listening test, but one of them has more samples for the same original time and therefore takes longer to play.
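
As a back-of-the-envelope check with those two made-up rates, the offset works out to a few milliseconds by the end of the lecture, and it keeps growing for as long as both recorders run:

[CODE=python]
# How far apart two "44.1 kHz" recorders drift over a 50-minute lecture.
# The 44100.3 / 44100.4 Hz figures are the hypothetical ones above, not measurements.
NOMINAL_RATE = 44100.0      # what both files claim to be (Hz)
RATE_A = 44100.3            # actual clock of recorder A (Hz)
RATE_B = 44100.4            # actual clock of recorder B (Hz)
LECTURE_SECONDS = 50 * 60

def playback_length(actual_rate: float, real_seconds: float) -> float:
    """Length of the file when played back at the nominal rate."""
    return actual_rate * real_seconds / NOMINAL_RATE

drift = abs(playback_length(RATE_A, LECTURE_SECONDS)
            - playback_length(RATE_B, LECTURE_SECONDS))
print(f"offset between the two files after 50 minutes: {drift * 1000:.1f} ms")
# ~6.8 ms here; real-world clocks can be off by much more.
[/CODE]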

If you record everything together, in the same device to the same file, it stays in sync the whole way through.

Pro gear that must record different parts of the same thing on multiple devices, or doesn't want the added latency of an asynchronous buffer, also has a master clock device of some kind that gets distributed to everything else in the system. I'm pretty sure you don't have that. :-)
 
OK, I have read the whole guide and it seems to be very good. Now I know that we should use single-pass encoding and that "Tuning" should be set to "High Quality", but what about the "Preset"? Here I am not sure which value should be chosen for recording and streaming. I can choose from "Fastest (Lowest Quality)" to "Slowest (Best Quality)". Personally I use "P5: Slow (Good Quality)" for both recording and streaming. Has anyone tested this and can say which value is preferable?

If my question is not clear, I have attached a PNG showing the setting I am asking about.

All the best to you guys,
 

Attachments

  • obs-preset-settings.png (104.6 KB)

JerichoJericho

New Member
You are correct. Clock drift can indeed cause synchronization issues when recording audio or other time-sensitive data across multiple devices. Even slight variations in the sampling rate accumulate over time, resulting in desynchronization between the recordings.
Thanks.
 