96 kHz Support


Deleted member 496183

The mediainfo says that, but is it really?
Yes it is. I've done some analysis with Audacity and even found out that OBS's default AAC settings are a bit worse than YouTube's default transcoder settings.

So I generated some white noise in Audacity with an amplitude of 0.1 at 96 kHz and 32-bit, and recorded it with OBS at 48 kHz and 96 kHz PCM 32-bit.

These are the results:

OBS 48 kHz PCM 32-bit Custom Output (FFmpeg):
[attached image: frequency spectrum]

As you can see, at 48 kHz the frequency spectrum stops at 24 kHz, which should be fine.

OBS basic.ini workaround, 96 kHz PCM 32-bit Custom Output (FFmpeg):
[attached image: frequency spectrum]

The spectrum stops at 48 kHz here, which indicates that the 96 kHz sample rate setting in basic.ini is working.
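
For anyone who wants to reproduce the workaround: it is just an edit to the profile's basic.ini in the OBS config folder, assuming the sample rate is stored under the [Audio] section the way it is in my profile. Close OBS before editing, otherwise it may write the old value back on exit:

[Audio]
SampleRate=96000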

OBS default AAC encoder at 96 kHz:
[attached image: frequency spectrum]

Recording with the OBS default AAC encoder (I think it is libfdk_aac) shows that the cutoff is around 17 kHz. You can set it higher if you want. Source: https://trac.ffmpeg.org/wiki/Encode/AAC
The default AAC cutoff on YouTube is at 20 kHz. That is probably why some people sound a bit better on YouTube: they record their voices with different audio recording software than they use when streaming or uploading OBS videos directly.
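
If you want to raise the cutoff yourself, the FFmpeg AAC encoders expose a cutoff option (see the wiki page above). As a rough sketch with placeholder file names, a stand-alone command that re-encodes the audio with a 20 kHz cutoff would look something like:

ffmpeg -i input.mkv -c:v copy -c:a aac -b:a 256k -cutoff 20000 output.mkv

In OBS's Custom Output (FFmpeg) mode the same key=value pair (cutoff=20000) can be tried in the audio encoder settings field; I haven't tested it with every encoder, so treat it as a starting point rather than a guaranteed setting.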

YouTube's AAC frequency spectrum from a random video at 96 kHz, for comparison:
[attached image: frequency spectrum]

As you can see, the YouTube AAC encoder cutoff is at 20 kHz.

In conclusion, the basic.ini workaround for higher sample rates is working. Recording or streaming directly from OBS above 44.1 kHz is useless, because the default AAC cutoff is around 17 kHz anyway. It only makes sense to record audio above 44.1 kHz if you use a lossless format like PCM/WAV, or raise the AAC cutoff above 20 kHz in OBS's custom output settings.

Log file: https://obsproject.com/logs/rnf516VMaOdFKO78
 

lizardpeter

New Member
So I generated some white noise in Audacity with an amplitude of 0.1 at 96 kHz and 32-bit, and recorded it with OBS at 48 kHz and 96 kHz PCM 32-bit.

Wow, cool stuff! If it really is this simple, hopefully they can just add an option for it in the menu alongside 44.1 kHz and 48 kHz. I always record at 32 bit float, so the AAC issue shouldn't be a problem.
 

Deleted member 496183

Wow, cool stuff! If it really is this simple, hopefully they can just add an option for it in the menu alongside 44.1 kHz and 48 kHz. I always record at 32 bit float, so the AAC issue shouldn't be a problem.
Yes, adding multiple options for different sample rates is easy.
I've edited two files to get there.

You have to add items in "obs-studio/UI/forms/OBSBasicSettings.ui" and some if statements in "obs-studio/UI/window-basic-settings.cpp" so you don't have to edit the .ini file manually.
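
As a rough illustration only (this is not the actual OBS source, just the shape of the change), the settings code needs a branch for each new combo-box entry that maps the visible label to a numeric rate, roughly like this, plus the matching item in the .ui file:

#include <cstdint>
#include <string>

// Hypothetical sketch -- the real code in window-basic-settings.cpp uses Qt
// types and differs in detail, but the change boils down to mapping each new
// combo-box label to its numeric sample rate.
static uint32_t SampleRateFromLabel(const std::string &label)
{
    if (label == "44.1 kHz")
        return 44100;
    if (label == "96 kHz") // new entry alongside the existing ones
        return 96000;
    return 48000; // current default
}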

The result should look like this:
[attached image: settings screenshot]
 

lizardpeter

New Member
Yes, adding multiple options for different sample rates is easy.
I've edited two files to get there.

You have to add items in "obs-studio/UI/forms/OBSBasicSettings.ui" and some if statements in "obs-studio/UI/window-basic-settings.cpp" so you don't have to edit the .ini file manually.

The result should look like this:
View attachment 107834
Could you submit that as a pull request to the main OBS branch?
 

AaronD

Active Member
Yes it is. I've done some analysis with Audacity and even found out that OBS's default AAC settings are a bit worse than YouTube's default transcoder settings.

So I generated some white noise in Audacity with an amplitude of 0.1 at 96 kHz and 32-bit, and recorded it with OBS at 48 kHz and 96 kHz PCM 32-bit.

These are the results:

OBS 48 kHz PCM 32-bit Custom Output (FFmpeg):
View attachment 107816
As you can see, at 48 kHz the frequency spectrum stops at 24 kHz, which should be fine.

OBS basic.ini workaround, 96 kHz PCM 32-bit Custom Output (FFmpeg):
View attachment 107817
The spectrum stops at 48 kHz here, which indicates that the 96 kHz sample rate setting in basic.ini is working.

OBS default AAC encoder at 96 kHz:
View attachment 107819
Recording with the OBS default AAC encoder (I think it is libfdk_aac) shows that the cutoff is around 17 kHz. You can set it higher if you want. Source: https://trac.ffmpeg.org/wiki/Encode/AAC
The default AAC cutoff on YouTube is at 20 kHz. That is probably why some people sound a bit better on YouTube: they record their voices with different audio recording software than they use when streaming or uploading OBS videos directly.

YouTube's AAC frequency spectrum from a random video at 96 kHz, for comparison:
View attachment 107822
As you can see, the YouTube AAC encoder cutoff is at 20 kHz.

In conclusion, the basic.ini workaround for higher sample rates is working. Recording or streaming directly from OBS above 44.1 kHz is useless, because the default AAC cutoff is around 17 kHz anyway. It only makes sense to record audio above 44.1 kHz if you use a lossless format like PCM/WAV, or raise the AAC cutoff above 20 kHz in OBS's custom output settings.

Log file: https://obsproject.com/logs/rnf516VMaOdFKO78
Slight nitpick: DSP functions have no notion of real time whatsoever. It's entirely based on sample counts. So as they see it, the "sample rate" field is meaningless. The coefficients/settings that go into the processing algorithms have to be tweaked, of course, to do the same thing at a different sample rate, but there's no difference whatsoever between the same thing at twice the rate, or an octave lower at the same rate.
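
To make that concrete, here's a minimal sketch of my own (not OBS code) of a one-pole low-pass filter: the coefficient is derived from cutoff divided by sample rate, so a coefficient computed for 48 kHz but reused at 96 kHz silently shifts the effective cutoff. That is exactly the kind of mismatch to look for.

#include <cmath>
#include <cstdio>

// One-pole low-pass: the smoothing coefficient depends on cutoff/sampleRate,
// so the "same" filter has to be re-derived for every sample rate.
static double onePoleCoeff(double cutoffHz, double sampleRateHz)
{
    const double pi = 3.14159265358979323846;
    return std::exp(-2.0 * pi * cutoffHz / sampleRateHz);
}

int main()
{
    // Same 1 kHz cutoff, different rates -> different coefficients.
    std::printf("48 kHz: %.4f\n", onePoleCoeff(1000.0, 48000.0)); // ~0.8773
    std::printf("96 kHz: %.4f\n", onePoleCoeff(1000.0, 96000.0)); // ~0.9366
    return 0;
}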

So: Is the scale accurate on those frequency plots? It's possible for two settings that are supposed to match, to not match, so that all of the analysis appears to work...but it's still not actually correct.

I think to fully prove this and put it to bed, you'll have to do some actual conversions to and from analog, and zoom in enough to count samples of a known-frequency sine wave. See if that lines up. Be skeptical of every point in the process, and set traps all over the place, like you've already done to an extent, but not quite enough. For another example, what about automatic resampling somewhere, that you might not have thought about?...

An almost slam-dunk proof would be to preserve some ultrasonic content, air-to-air, and show that it really still *is* ultrasonic. (put a mic next to something humanly-inaudible, record it, play it back with the mic next to the speaker this time while recording again, and analyze that second recording...and of course show that the original source was not present for that second recording) That would eliminate transparent resampling to 48k somewhere and back up again, and require the encoding to not "brick-wall" it too, at 17k or whatever. A somewhat gradual roll-off starting at 22k or so is okay, as long as the ultrasonic is not *gone*.

A lot of analog converter chips do that: 96k and 192k use the exact same mid-MHz physical sample rate as 48k, and the exact same cutoff frequency for the digital lowpass inside the chip, that goes between that and picking samples to send out at the desired rate. The only difference is that the mid-MHz lowpass is less aggressive, which has a byproduct of reducing latency even more than just the time between samples. (though if you're feeding a multitasking computer with it, even the longer latency at a slower rate is still nothing compared to the batch-processing buffers that are needed to make general-purpose multitasking work)
That then returns the question of whether a higher sample rate is worth it, if the converters themselves still start rolling off around 20k-22k or so, regardless of sample rate.

And like I said before, make sure the EQ filter still works the same at the officially-unsupported rates. Maybe it's written generally enough that it does, or maybe it only has a specific list to choose from, and so the closest approximation (or wrap-around, or whatever it does) causes it to work on the wrong frequencies. Likewise for the other filters, but the EQ is probably the most sensitive and noticeable.
 

Deleted member 496183

I think to fully prove this and put it to bed, you'll have to do some actual conversions to and from analog, and zoom in enough to count samples of a known-frequency sine wave. See if that lines up. Be skeptical of every point in the process, and set traps all over the place, like you've already done to an extent, but not quite enough. For another example, what about automatic resampling somewhere, that you might not have thought about?...

An almost slam-dunk proof would be to preserve some ultrasonic content, air-to-air, and show that it really still *is* ultrasonic. (put a mic next to something humanly-inaudible, record it, play it back with the mic next to the speaker this time while recording again, and analyze that second recording...and of course show that the original source was not present for that second recording) That would eliminate transparent resampling to 48k somewhere and back up again, and require the encoding to not "brick-wall" it too, at 17k or whatever. A somewhat gradual roll-off starting at 22k or so is okay, as long as the ultrasonic is not *gone*.

A lot of analog converter chips do that: 96k and 192k use the exact same mid-MHz physical sample rate as 48k, and the exact same cutoff frequency for the digital lowpass inside the chip, that goes between that and picking samples to send out at the desired rate. The only difference is that the mid-MHz lowpass is less aggressive, which has a byproduct of reducing latency even more than just the time between samples. (though if you're feeding a multitasking computer with it, even the longer latency at a slower rate is still nothing compared to the batch-processing buffers that are needed to make general-purpose multitasking work)
That then returns the question of whether a higher sample rate is worth it, if the converters themselves still start rolling off around 20k-22k or so, regardless of sample rate.

I think it would be best if you tested it yourself, as my knowledge of audio is very limited and I can't provide 100% solid proof.

The last thing I did was generate a 30 kHz sine tone in Audacity, record it with OBS, and analyze it again with Audacity's built-in Plot Spectrum.

Spectrum of the generated file, 96 kHz 32-bit:
[attached image: frequency spectrum]


Spectrum of the OBS recording at 96 kHz 32-bit:
[attached image: frequency spectrum]

It shows the same 30 kHz frequency as the generated file.

Spectrum of the OBS recording at 48 kHz 32-bit:
[attached image: frequency spectrum]

The 48 kHz recording shows nothing, since 30 kHz is above its 24 kHz Nyquist limit.

Could you submit that as a pull request to the main OBS branch?
I'd love to, but there's not enough evidence that it actually works, as AaronD says. A pull request submitted without proof is bound to be rejected.
 

Deleted member 496183

And like I said before, make sure the EQ filter still works the same at the officially-unsupported rates. Maybe it's written generally enough that it does, or maybe it only has a specific list to choose from, and so the closest approximation (or wrap-around, or whatever it does) causes it to work on the wrong frequencies. Likewise for the other filters, but the EQ is probably the most sensitive and noticeable.
I installed Ardour and used a parametric EQ on both the original soundtrack and the recording made at the unsupported sample rate. Bass and mids sound the same in both files.
 

AaronD

Active Member
I installed Ardour and used a parametric EQ on both the original soundtrack and the recording made at the unsupported sample rate. Bass and mids sound the same in both files.
Ardour's plugins mean nothing for OBS. I'd expect anything in a for-real DAW to support any sample rate. It's *OBS's* filters that I'm unsure about.
 

Deleted member 496183

Ardour's plugins mean nothing for OBS. I'd expect anything in a for-real DAW to support any sample rate. It's *OBS's* filters that I'm unsure about.
You are right: after using *OBS's* 3-band EQ filter you will hear a difference in the mids. They tend to sound different at all sample rates, even the supported ones. The higher you go in sample rate, the lower the mids sound. But for some reason 44.1 kHz has lower mids than 48 kHz, so it is the other way around at the supported rates.
 
I am very curious what the actual use cases for anything higher than a 48 kHz sample rate would be, other than "my interface supports it." For the record, YouTube handles 48 kHz uploads just fine regardless of what they might say they support; I've always uploaded that way. Twitch accepts 48 kHz streams just fine, too. Your voice receives no benefit from recording at a higher rate (objectively, it does not), the games you are playing aren't recorded at a higher rate, and there's no functional benefit to recording music at a higher rate. There are some reasons to use higher sampling rates with certain effects to reduce certain types of "distortions," but by and large, there is no objective benefit to this (as explored by smart audio person Dan Worrall here and here). And if music production is your end goal, while I usually disagree with the "OBS is not a DAW" argument, in this case it really does apply, as music production should be done in a DAW.

I'll summarize the videos for the TL;DW folks, but essentially, sample rates define enough sample points to accurately reconstruct the waveform using math. At 48 kHz, all frequencies at 24,000 Hz (approximately 8,000 Hz above what I can hear) and below can be reconstructed with the exact same mathematical accuracy as if you recorded at 96 kHz or even higher. There is, mathematically, no difference. The only thing that happens is you use more processing power and storage space. So what is the use case for which you believe you need these higher sample rates?
 

lizardpeter

New Member
Not everyone is uploading to Twitch or YouTube. Titans in the audio industry don't release at 192 kHz for no reason. There are thousands of examples from the biggest names. I only record at 120 FPS and above, too, so my personal use case is maintaining as much sound quality as possible if I decide to slow things down for whatever reason. It is, as you mentioned, also a case of making use of the hardware. If my microphone supports 192 kHz 32 bit FP audio, that's what I want to record at. File size and processing power are irrelevant to me. I already have dozens of TB of storage and the fastest CPU. Likewise, the audio size is usually a tiny fraction of the video size. It's an insignificant increase.
 
I don't think you read my entire post. If you are talking about the audio industry, OBS is not the tool you should be using. And actually, yes, they do release at 192 kHz for no reason other than audiophiles who don't know any better asked for it. There is, mathematically, no difference between that 192 kHz mastered file and the same track mastered at 48 kHz, as those videos I linked to will explain. It is useful during the mixing process, as I mentioned, but there is virtually no use beyond that. And finally, the only reason your hardware supports it is because people who didn't know better kept demanding it and it looks better on a spec sheet, not because it's actually useful, so "just because my hardware supports it" is a terrible reason. Please, please do some research on this before making pointless requests. You will, objectively, receive no measurable benefit to using higher sample rates, let alone audible ones.

Maybe in a world where the devs' time is not finite, adding pointless features that yield no benefit is fine. But given they have finite time, I'd much rather they devote it to working on features that will provide actual benefits.
 

lizardpeter

New Member
I don't think you read my entire post. If you are talking about the audio industry, OBS is not the tool you should be using. And actually, yes, they do release at 192 kHz for no reason other than audiophiles who don't know any better asked for it. There is, mathematically, no difference between that 192 kHz mastered file and the same track mastered at 48 kHz, as those videos I linked to will explain. It is useful during the mixing process, as I mentioned, but there is virtually no use beyond that. And finally, the only reason your hardware supports it is because people who didn't know better kept demanding it and it looks better on a spec sheet, not because it's actually useful, so "just because my hardware supports it" is a terrible reason. Please, please do some research on this before making pointless requests. You will, objectively, receive no measurable benefit to using higher sample rates, let alone audible ones.

Maybe in a world where the devs' time is not finite, adding pointless features that yield no benefit is fine. But given they have finite time, I'd much rather they devote it to working on features that will provide actual benefits.

The problem is, no one knows what someone else is using OBS for, so it's impossible to say what is or isn't beneficial. There are a thousand features of OBS I don't personally use. That doesn't mean they weren't excellent additions used by tons of other people. The vast majority of people using OBS to record aren't simply keeping the raw file as-is. They're, as you mentioned, mixing and editing the audio, sometimes heavily, where you conceded there is a tangible benefit. Don't forget, OBS uses off-the-shelf audio encoders. If I want to use one of these encoders on its own at a different sample rate, it's often as simple as changing a line or two in the command.
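
For example (a sketch with placeholder file names, not a command I'm claiming OBS runs internally), asking FFmpeg's AAC encoder for a different rate on its own is just:

ffmpeg -i recording.mkv -c:v copy -c:a aac -ar 96000 output.mkv
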
yield no benefit

You admitted in your post that there is, in fact, a benefit in certain use cases.

It is useful during the mixing process
 
You admitted in your post that there is, in fact, a benefit in certain use cases.
The use case is scientific work that needs to record frequencies above the human hearing range. As mentioned, the other use case is in post-production work (not live work) with certain filters. The audio does not need to be recorded at higher sample rates to achieve these benefits; it just needs to be processed at the higher rate. For the sorts of effects where this can be an issue, they often provide supersampling options in the plugin to account for that. This is generally accepted to be the best way to handle it: processing in post at higher rates as needed, not recording at higher rates where there is no benefit. This is what I mean by "it is useful during the mixing process." OBS has no part in this, as it is a live production environment. And again, if music production is your goal, use a DAW for that. Please do us all a favor and watch those videos before responding again, seriously. Everything you keep saying is covered there, and you might learn something. They're not even particularly long.
 

lizardpeter

New Member
The use case is scientific work that needs to record frequencies above the human hearing range. As mentioned, the other use case is in post-production work (not live work) with certain filters. The audio does not need to be recorded at higher sample rates to achieve these benefits; it just needs to be processed at the higher rate. For the sorts of effects where this can be an issue, they often provide supersampling options in the plugin to account for that. This is generally accepted to be the best way to handle it: processing in post at higher rates as needed, not recording at higher rates where there is no benefit. This is what I mean by "it is useful during the mixing process." OBS has no part in this, as it is a live production environment. And again, if music production is your goal, use a DAW for that. Please do us all a favor and watch those videos before responding again, seriously. Everything you keep saying is covered there, and you might learn something. They're not even particularly long.
My use case is not music production or live work. I know all of these things about audio processing taking place internally at high bit depth and sample rate in most programs. It's still common sense that feeding higher quality audio samples into this process initially is always going to be ideal.
 
This is what I have been trying to say, and why you need to watch the videos to understand it. Digital audio is constructed using formulas. Whether you have 2 samples per cycle (as at 48 kHz) or 4 (as at 96 kHz) or even more, as long as you have at least 2, you will construct identical waveforms. I don't mean theoretically identical, I mean actually identical. You are not feeding "higher quality" audio samples, you are simply feeding more of them. And you don't need more of them to construct the waveform; it will construct exactly the same waveform, because that's how the math pans out. This has been proven time and time again. You quite literally do not need more information, so capturing it wastes space for no actual benefit, theoretical, mathematical, measurable, or otherwise. The only reason it is sometimes beneficial to process effects at higher sample rates is that those inaudible frequencies can reflect back down into the audible ones and cause issues; by processing at a higher sample rate, you keep those reflections above the audible range, then just cut them back out. The only time this comes into play when recording is if you are using poor-quality gear with poorly designed filters for the converters, in which case you'd get better results by upgrading your hardware, not by trying to work around it with higher sample rates.
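
To put a hypothetical number on that reflection effect: if a distortion effect running at 48 kHz generates a harmonic at 30 kHz, that frequency can't be represented and folds back to 48,000 - 30,000 = 18,000 Hz, right in the audible range. The same harmonic generated while processing at 96 kHz simply sits at 30 kHz and can be filtered off afterwards, which is what those supersampling options do, regardless of the rate the source was recorded at.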
 

lizardpeter

New Member
This is what I have been trying to say, and why you need to watch the videos to understand it. Digital audio is constructed using formulas. Whether you have 2 samples per cycle (as at 48 kHz) or 4 (as at 96 kHz) or even more, as long as you have at least 2, you will construct identical waveforms. I don't mean theoretically identical, I mean actually identical. You are not feeding "higher quality" audio samples, you are simply feeding more of them. And you don't need more of them to construct the waveform; it will construct exactly the same waveform, because that's how the math pans out. This has been proven time and time again. You quite literally do not need more information, so capturing it wastes space for no actual benefit, theoretical, mathematical, measurable, or otherwise. The only reason it is sometimes beneficial to process effects at higher sample rates is that those inaudible frequencies can reflect back down into the audible ones and cause issues; by processing at a higher sample rate, you keep those reflections above the audible range, then just cut them back out. The only time this comes into play when recording is if you are using poor-quality gear with poorly designed filters for the converters, in which case you'd get better results by upgrading your hardware, not by trying to work around it with higher sample rates.

So, to be clear, you're saying increased temporal resolution won't positively impact the outcome of time-stretching or pitch-shifting processes? I'm having a hard time believing that.
 

AaronD

Active Member
Everything @Thebigcheese said is correct. You don't construct a better waveform with more samples. You construct THE EXACT SAME waveform. Any difference at all MUST represent a frequency beyond Nyquist, and that's literally impossible.

The idea of timing being locked to discrete samples is also unintuitively wrong. The examples that are often used to argue that, use samples that "just happen" to line up perfectly with the event in question, and that one snapshot is used for the entire argument. But if you draw the *actual* waveform (not the ideal one), that only has content below Nyquist, and then time-shift *that* by less than one sample while adjusting all of the samples to stay on the waveform, you'll see a less-pretty pattern that still captures THE EXACT SAME THING at fractional samples' difference in time. So you don't need fast sample rates "to get the timing right" either.
(this is one of several reasons why "bit-perfection" is completely unnecessary)

It's also unintuitive, but true, that you can design a series of equally-spaced, finite-valued samples to give an arbitrarily high peak, well beyond the highest possible encoded value. This makes "true peak" meters difficult in digital. You can't just take the highest sample and call that the peak, because the waveform peak is actually higher than that. The vast majority of the time, it's not *much* higher, but it usually is by at least a little bit, and there really is no absolute maximum.
Doubling or quadrupling the sample rate to try and "limit those peaks", as if they were some kind of unwanted artifact...will simply add another sample right on the exact same waveform and not change anything at all. If you were close enough to clipping that the new samples are out of range and can't be encoded accurately, then you've ADDED distortion, not reduced it.

As said before, there are a few cases where it's actually useful to keep a high sample rate all the way through the system. Decoding bat calls, for one example, or non-isolating hearing aids for another. However:
  • If you're recording ultrasonics in air (bat calls), then you can't use a normal audio A/D converter, even if it does support high sample rates, for two reasons:
    • The analog anti-aliasing filter is still important, and of course doesn't change at all when you choose a different rate. You can make it different from the manufacturer's recommendations, like to increase both the cutoff frequency and the rolloff rate so that it's still sufficiently quiet at the mid-MHz physical sample rate, but of course that's on you now as the electronics designer. If you buy one that's made for audio, it'll start rolling off just above audible, and you can't change that. That's going to affect your ultrasonic recording.
    • The digital filter inside the chip, also starts rolling off just above audible, and you can't change that either. That's true even at higher sample rates. The difference is *not* the cutoff frequency, but the rolloff rate above that frequency. So that'll affect your ultrasonic recording too.
  • If you need super-low latency (non-isolating hearing aid, without an acoustic comb filter), then *the entire system* needs to shovel samples through fast enough to make that worth the argument. If you use a multitasking operating system (anything that can run OBS), then it's going to fill a buffer while the system does something else, then process that buffer all at once, and play it out at the other end. That completely wipes out any hope of "low latency" on the scale that increasing the sample rate gives you.
    • If you're using a dedicated system that does push one sample all the way through before grabbing the next, and you can only afford a handbreadth or so worth of latency compared to the speed of sound in air, THEN a higher sample rate might be useful. Not so much because of the time between samples, but because the digital filter inside the converter chip adds that much latency all by itself, and it reduces at higher rates even when measured in *samples* and not seconds.
      (relaxing the rolloff rate, does that)
 

lizardpeter

New Member
Everything @Thebigcheese said is correct. You don't construct a better waveform with more samples. You construct THE EXACT SAME waveform. Any difference at all MUST represent a frequency beyond Nyquist, and that's literally impossible.

The idea of timing being locked to discrete samples is also unintuitively wrong. The examples that are often used to argue that, use samples that "just happen" to line up perfectly with the event in question, and that one snapshot is used for the entire argument. But if you draw the *actual* waveform (not the ideal one), that only has content below Nyquist, and then time-shift *that* by less than one sample while adjusting all of the samples to stay on the waveform, you'll see a less-pretty pattern that still captures THE EXACT SAME THING at fractional samples' difference in time. So you don't need fast sample rates "to get the timing right" either.
(this is one of several reasons why "bit-perfection" is completely unnecessary)

It's also unintuitive, but true, that you can design a series of equally-spaced, finite-valued samples to give an arbitrarily high peak, well beyond the highest possible encoded value. This makes "true peak" meters difficult in digital. You can't just take the highest sample and call that the peak, because the waveform peak is actually higher than that. The vast majority of the time, it's not *much* higher, but it usually is by at least a little bit, and there really is no absolute maximum.
Doubling or quadrupling the sample rate to try and "limit those peaks", as if they were some kind of unwanted artifact...will simply add another sample right on the exact same waveform and not change anything at all. If you were close enough to clipping that the new samples are out of range and can't be encoded accurately, then you've ADDED distortion, not reduced it.

As said before, there are a few cases where it's actually useful to keep a high sample rate all the way through the system. Decoding bat calls, for one example, or non-isolating hearing aids for another. However:
  • If you're recording ultrasonics in air (bat calls), then you can't use a normal audio A/D converter, even if it does support high sample rates, for two reasons:
    • The analog anti-aliasing filter is still important, and of course doesn't change at all when you choose a different rate. You can make it different from the manufacturer's recommendations, like to increase both the cutoff frequency and the rolloff rate so that it's still sufficiently quiet at the mid-MHz physical sample rate, but of course that's on you now as the electronics designer. If you buy one that's made for audio, it'll start rolling off just above audible, and you can't change that. That's going to affect your ultrasonic recording.
    • The digital filter inside the chip, also starts rolling off just above audible, and you can't change that either. That's true even at higher sample rates. The difference is *not* the cutoff frequency, but the rolloff rate above that frequency. So that'll affect your ultrasonic recording too.
  • If you need super-low latency (non-isolating hearing aid, without an acoustic comb filter), then *the entire system* needs to shovel samples through fast enough to make that worth the argument. If you use a multitasking operating system (anything that can run OBS), then it's going to fill a buffer while the system does something else, then process that buffer all at once, and play it out at the other end. That completely wipes out any hope of "low latency" on the scale that increasing the sample rate gives you.
    • If you're using a dedicated system that does push one sample all the way through before grabbing the next, and you can only afford a handbreadth or so worth of latency compared to the speed of sound in air, THEN a higher sample rate might be useful. Not so much because of the time between samples, but because the digital filter inside the converter chip adds that much latency all by itself, and it reduces at higher rates even when measured in *samples* and not seconds.
      (relaxing the rolloff rate, does that)

I always use a combination of the lowest usable buffer and the highest possible sample rate to combat latency. Even on an operating system like Windows, it's possible to reduce the latency by many milliseconds.
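
To put rough numbers on that (my own arithmetic, not something measured in OBS): a 512-sample device buffer lasts 512 / 48,000 ≈ 10.7 ms at 48 kHz but only 512 / 96,000 ≈ 5.3 ms at 96 kHz, so doubling the rate halves the time each buffer represents. Whether that difference survives the rest of the processing chain is another question.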
 

AaronD

Active Member
I always use a combination of the lowest usable buffer and the highest possible sample rate to combat latency. Even on an operating system like Windows, it's possible to reduce the latency by many milliseconds.
If you really care that much, have a look at Ubuntu Studio Linux: it has a low-latency kernel and comes with a very good DAW (Ardour) preinstalled, along with a TON of plugins for it, also preinstalled and working. And OBS too, though you'll still want to update it from our official PPA to get all the features.

I've heard of people running live concerts with just that and a many-channel interface.
 