Which sample frequency is better for streaming: 48 or 44.1 kHz?

Even Google can't help. The main reason for using 48 is compatibility with other devices, to avoid audio artifacts. But then why does 44.1 exist in OBS, if it was created only for audio tracks?
 

qhobbes

Active Member
"The higher sample rate technically leads to more measurements per second and a closer recreation of the original audio, so 48 kHz is often used in “professional audio” contexts more than music contexts. For instance, it’s the standard sample rate in audio for video. This sample rate moves the Nyquist frequency to around 24 kHz, giving further buffer room before filtering is needed."
 
Can't you get that people can't hear anything above 20 kHz? Even if it's more exact, people can't perceive the difference.
1. I think if OBS can sync 44.1 kHz sound with video without any lag, then it's always better to use 44.1, because this will make the file smaller.
2. An OBS stream will not be used for further audio editing, so there will be no sound compatibility problems.
 

qhobbes

Active Member
Are you saying that an AAC file at 64 Kbps with a 44.1 kHz sample rate is going to be smaller in size than a 64 Kbps file with a 48 kHz sample rate?
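The point behind that question can be checked with simple arithmetic: for a constant-bitrate codec such as AAC, the encoded size is bitrate × duration, independent of the sample rate fed into the encoder. A minimal sketch (hypothetical helper; container overhead ignored):

```python
def encoded_size_bytes(bitrate_bps: int, duration_s: float) -> float:
    """Approximate size of a constant-bitrate audio stream (container overhead ignored)."""
    return bitrate_bps * duration_s / 8

# One minute of 64 kbps AAC comes out the same size whether the encoder
# input was sampled at 44.1 kHz or 48 kHz -- the bitrate alone sets the size.
size_44_1 = encoded_size_bytes(64_000, 60)  # input at 44.1 kHz
size_48 = encoded_size_bytes(64_000, 60)    # input at 48 kHz
print(size_44_1, size_48)  # 480000.0 480000.0 (~480 kB each)
```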
 

koala

Active Member
So why should I use 48 if nobody can hear anything above 40000 anyway?
That's not the point (or the core) of your original question. The point is standardization and compatibility. Over the whole workflow, that spans from recording and production, to distribution and client consumption (media player and output device), you should try to use the same sampling frequency to avoid quality loss due to resampling.
There may be workflows that have a fixed 44.1 requirement, and there may be workflows that have a fixed 48k requirement. So OBS gives you the choice to support both. Usually, it's the part of the workflow you have the least control over (distribution and output device) that will set this requirement.

If your workflow doesn't have this fixed requirement, look deeper: you probably missed something. If you still have the choice, choose 48k, because it is the de facto standard, and if you reuse older recordings for future projects, the probability that you'll need to resample is lower.
 
1. I'm sure that if OBS has two options, it can resample without problems. For example, I can have different audio sources at the same time: 44.1 from a CD and 48 from video. I have read many articles, and everyone says that modern filters can handle aliasing from resampling.
2. If OBS has two options, that means output devices work with both 44.1 and 48.
3. De facto standard? Half a year ago 44.1 was the default in OBS.
4. As I wrote before, I'm not going to edit my recordings. I have read that 48 syncs nicely with different framerates.
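The framerate-sync point is easy to verify: 48000 divides evenly by the common video framerates, while 44100 often does not, so a 48 kHz track carries a whole number of audio samples per video frame. A quick check:

```python
# Samples of audio per video frame at each rate. A non-integer value means
# frame boundaries and sample boundaries drift relative to each other.
for fps in (24, 25, 30, 60):
    print(fps, 48_000 / fps, 44_100 / fps)
# 24 -> 2000.0 vs 1837.5   (44.1 kHz does not divide evenly at 24 fps)
# 25 -> 1920.0 vs 1764.0
# 30 -> 1600.0 vs 1470.0
# 60 ->  800.0 vs  735.0
```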
 

koala

Active Member
Nobody says you should edit existing recordings. Keep them as they are, because every editing/resampling pass will degrade their quality. Use 48k for new recordings.

As for different sample rates among your sources for streaming/recording: OBS will resample on the fly, or tell the operating system to resample on the fly, so mixed sample rates are never an issue. As for aliasing, you don't need to worry about it. Aliasing is inherent to sampling in general and applies to every analog-to-digital conversion, so every piece of digitized audio has already dealt with it. Properly low-pass filtering the input signal to half the sampling frequency eliminates aliasing. This is so basic that worrying about it is like explicitly checking that a master painter uses proper acrylic or oil paint and not a children's watercolor set.
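The aliasing effect can be demonstrated numerically: a tone above the Nyquist frequency produces exactly the same sample values as a lower-frequency alias, which is why the input must be low-pass filtered before sampling. A minimal sketch with unit-amplitude sines:

```python
import math

FS = 48_000  # sampling rate in Hz; Nyquist is FS / 2 = 24 kHz

def sampled_sine(freq_hz: float, n: int, fs: int = FS) -> list[float]:
    """Sample a unit-amplitude sine of the given frequency."""
    return [math.sin(2 * math.pi * freq_hz * t / fs) for t in range(n)]

# A 30 kHz tone lies above Nyquist. Sampled at 48 kHz, it yields the same
# values as an 18 kHz tone (48 - 30 = 18), just phase-inverted -- the two
# are indistinguishable after sampling, so the 30 kHz content "folds down".
tone_30k = sampled_sine(30_000, 200)
alias_18k = sampled_sine(18_000, 200)
max_diff = max(abs(a + b) for a, b in zip(tone_30k, alias_18k))  # ~0 up to float error
```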

The signal quality loss from resampling isn't due to aliasing; it's due to interpolating integer sample values during resampling and cutting them back to integers in the output file. To minimize this, intermediate formats within audio processing software are normalized to floating point, as far as I know, but as soon as you save to an output file that stores samples as integers, you lose accuracy.
 
It seems you can't understand my question. During conversion I will lose some percentage of quality, but I will also drop about 10% of the samples. That means I could use that 10% of freed-up bitrate to encode 10% more frequencies. If I run 3200 kbps 48 kHz sound on stream and encode it to 320 kbps 48 kHz, I lose 90% of the frequencies. I think if I encode it to 320 kbps 44.1 kHz sound, I will lose only 89%. I want to know how much quality I will lose during resampling, because I'm sure that it's less than 10%.
 

koala

Active Member
This is irrelevant. You will not be able to hear any difference. None of your viewers/listeners will notice any difference, because their (home) equipment isn't up to reproducing that amount of audio detail. If you really care that much: no human ear, except perhaps the rare case of the healthy, audio-trained ear of a 16-year-old with perfect studio equipment, is able to detect the difference between 256 kbps and 320 kbps, and certainly not the difference between 44.1k and 48k at the same bitrate.
 
What about the 3000 kbps sound in movies? Does it exist just for show? And are the engineers who created the 320 kbps AAC setting morons, if nobody can hear it? I don't get it.
 

Suslik V

Active Member
OBS settings always depend on the target (the streaming service, in your case).
For example, for YouTube the current recommendations are:
 

Peerless

New Member
Hi, today I got this information from my streaming host:

- You can send 44.1 kHz or 48 kHz; the Safari browser sometimes has problems with 48 kHz streams.

- So we suggest using 44100 Hz with 128 kbit/s, stereo, to be more compatible with older computers.

Good information, but I remember there could be audio delay problems if I take 48 kHz input from my ATEM mixer and send 44.1 kHz with OBS?

Or does this problem no longer exist?
 

konsolenritter

Active Member
Suslik V nailed the point that ZergShadow was missing: OBS gives us _possibilities_ we should be thankful for; it is neither mathematically bound nor restricted to one rate. It all comes down to the reconstructing player on the viewer's side, as well as the re-encoders along the way. 44.1k was developed for the compact disc (with no thought of video), whereas 48k divides evenly into the usual movie framerates. So players and re-encoders usually (not always!) have an easier time with 48k. The problems with the Safari browser are a specific case; there, you should go with 44.1 as advised. Most players today are web players like the YouTube player, so go with whatever your streaming provider recommends.

With enough testing for their own purposes, every streamer will gain the experience to know what works best for their viewers and what doesn't.
 

RSSCommentary

New Member
I'm a software/computer engineer and audio engineer/YouTuber, and here is my technical opinion from years of experience with OBS. 44.1 kHz is a nightmare to work with; stick with 48 kHz at 160 kbps. Everything in Windows is set to 48 kHz by default, and there is a good reason for it.

Sure, you can't hear the extra high-frequency content that 48 kHz sampling captures, but it's very important that you upsample 44.1 kHz analog audio sources to 48 kHz. If you don't, you will get some aliasing in the signal, which is kind of like the chromatic aberration you get when a lens doesn't have enough sharpness for the sensor's megapixels. The digital-to-analog converter has edge transitions from one sample to the next that will cause glitches in the audio when played back at the same frequency. When you are sampling an analog source for 48 kHz, you want to sample it at 96 kHz; in layman's terms, that gives you two samples that get averaged out. The problem is that 96 kHz will start to tax your CPU because of interrupts, and 48 kHz is good enough to eliminate aliasing.

Another relevant hack I would like to add: with 16-bit audio you get a maximum of 96 dB before the signal gets noisy, which translates to 6 dB per bit. If you want more gain out of your mics, you should use exactly 6 dB of digital gain after the compression stage, because it has the effect of throwing out one of the bits, i.e., bit-shifting the audio sample by one bit. If you're using 24 bits you go down to 23 bits; if you use 16 bits you go down to 15 bits. It really helps with dynamic mics; you can avoid buying a Cloud Lifter. You should be using 24-bit audio everywhere you can.
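The 6 dB figure above comes from each extra bit doubling the representable amplitude: 20·log10(2) ≈ 6.02 dB, so 16 bits give roughly 96 dB of range, and a +6.02 dB digital gain is equivalent to a one-bit left shift. A quick check:

```python
import math

DB_PER_BIT = 20 * math.log10(2)  # amplitude doubles per bit: ~6.02 dB
print(round(DB_PER_BIT, 2))      # 6.02
print(round(16 * DB_PER_BIT, 1)) # 96.3 -- the ~96 dB range of 16-bit audio

# +6.02 dB of gain multiplies the amplitude by 2, i.e. shifts the sample one bit left:
gain = 10 ** (DB_PER_BIT / 20)   # == 2.0
sample = 12345
assert round(sample * gain) == sample << 1
```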
 