ZergShadow
Member
Even Google can't help. The main reason for using 48 kHz is compatibility with other devices, to avoid audio artifacts. But then why does 44.1 kHz exist in OBS at all, if it was created only for audio tracks?
OBS's default is 44.1 kHz, as I remember; 48 kHz is the recommended setting.
Don't you get that people can't hear anything above 20 kHz? Even if it's more exact, people can't perceive the difference.

"The higher sample rate technically leads to more measurements per second and a closer recreation of the original audio, so 48 kHz is often used in "professional audio" contexts more than music contexts. For instance, it's the standard sample rate in audio for video. This sample rate moves the Nyquist frequency to around 24 kHz, giving further buffer room before filtering is needed."
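The Nyquist numbers in that quote are easy to check yourself: the highest frequency a given sample rate can represent is half the rate. A minimal sketch in plain Python (the 20 kHz figure used here is the usual upper limit of human hearing):

```python
# Nyquist frequency = sample_rate / 2: the highest frequency a digital
# signal at that rate can represent without aliasing.
for sample_rate in (44_100, 48_000):
    nyquist = sample_rate / 2
    # Headroom between the ~20 kHz limit of human hearing and Nyquist,
    # i.e. room for the anti-aliasing filter to roll off.
    headroom = nyquist - 20_000
    print(f"{sample_rate} Hz -> Nyquist {nyquist:.0f} Hz, "
          f"filter headroom {headroom:.0f} Hz")
```

So 48 kHz gives the anti-aliasing filter 4,000 Hz of roll-off room versus 2,050 Hz at 44.1 kHz, which is the "further buffer room" the article means.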
Source: Digital Audio Basics: Audio Sample Rate and Bit Depth (www.izotope.com)
That's not the point (or the core) of your original question. The point is standardization and compatibility. Over the whole workflow, which spans from recording and production to distribution and client consumption (media player and output device), you should try to use the same sampling frequency to avoid quality loss due to resampling.

So why should I use 48 kHz, if nobody can hear anything that sampling above 40,000 Hz would capture?
1. I'm sure that if OBS offers two options, it can resample between them without problems. For example, I can have different audio sources at the same time, say a 44.1 kHz CD track and 48 kHz video audio. And from the many articles I've read, everyone says that modern filters can handle the aliasing from resampling.
There may be workflows with a fixed 44.1 kHz requirement, and workflows with a fixed 48 kHz requirement, so OBS gives you the choice to support both. Usually it's the part of the workflow you have the least control over (distribution and the output device) that sets this requirement.
If your workflow doesn't have such a fixed requirement, look deeper: you probably missed something. If you genuinely still have the choice, choose 48 kHz, because it is the de facto standard, and in case you re-use older recordings for future projects, the probability that you'll need to resample is lower.
It will have better quality, because there's no need to spend bits saving another ~10% of frequencies.

Are you saying that an AAC file at 64 kbps with a 44.1 kHz sample rate is going to be smaller than a 64 kbps file with a 48 kHz sample rate?
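To illustrate the replier's point: for a constant-bitrate stream, file size is just bitrate times duration, and the sample rate inside the codec doesn't enter the formula at all. A quick sketch (the 3-minute duration is only an example):

```python
# File size of a constant-bitrate stream depends only on bitrate and
# duration, not on the sample rate the codec operates at.
def file_size_bytes(bitrate_bps, seconds):
    """Size in bytes of `seconds` of audio at `bitrate_bps` bits/s."""
    return bitrate_bps * seconds // 8

duration = 180  # a 3-minute track
size_44 = file_size_bytes(64_000, duration)  # 64 kbps, 44.1 kHz audio
size_48 = file_size_bytes(64_000, duration)  # 64 kbps, 48 kHz audio
assert size_44 == size_48 == 1_440_000       # identical: 1.44 MB
print(f"both files: {size_44} bytes")
```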
It seems you don't understand my question. During conversion I will lose some percentage of quality, but I will also drop ~10% of the samples. That means I could use that freed-up ~10% of the bitrate to encode ~10% more frequencies. If I take a 3200 kbps, 48 kHz source on stream and encode it to 320 kbps at 48 kHz, I lose 90% of the frequencies; I think that if I encode it to 320 kbps at 44.1 kHz instead, I lose only 89%. I want to know how much quality I lose during resampling, because I'm sure it's less than 10%.

Nobody says you should edit existing recordings. Keep them as they are, because every edit/resample degrades their quality. Use 48 kHz for new recordings.
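For what it's worth, the "10%" figure can be checked with simple arithmetic: 44,100 / 48,000 = 0.91875, so 44.1 kHz needs about 8.1% fewer samples per second than 48 kHz, not 10%. And at a fixed bitrate the total bit budget is identical either way; only the notional bits per sample change (lossy codecs like AAC actually allocate bits in the frequency domain, so "bits per sample" is only a rough intuition):

```python
# Check the "10%" figure from the post: how many fewer samples per
# second does 44.1 kHz take compared to 48 kHz?
ratio = 44_100 / 48_000
print(f"44.1k uses {ratio:.5f} of the samples of 48k "
      f"({(1 - ratio) * 100:.1f}% fewer)")  # ~8.1%, not 10%

# At a fixed bitrate the total bit budget is the same; only the
# notional bits available per sample differ.
bitrate = 320_000  # bits per second
for fs in (44_100, 48_000):
    print(f"{fs} Hz at 320 kbps -> {bitrate / fs:.2f} bits/sample")
```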
If it comes to different sample rates within your sources for streaming/recording, OBS will resample on the fly, or tell the operating system to resample on the fly, so mixed sample rates are never an issue.

As for aliasing, you don't need to worry about it. Aliasing happens with sampling in general and applies to every piece of digitized audio content, i.e. to every analog-to-digital conversion. Properly low-pass filtering the input signal to half the sampling frequency eliminates aliasing. This is such a basic thing that worrying about it is like explicitly making sure a master painter is using proper acrylic or oil paint and not a children's watercolor set.
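A small sketch of the aliasing being discussed: a tone above the Nyquist frequency produces exactly the same samples as a lower-frequency "alias", which is why the input has to be low-pass filtered before sampling. Plain Python, with a 30 kHz test tone chosen purely for illustration:

```python
import math

FS = 48_000  # sample rate (Hz); Nyquist is FS / 2 = 24 kHz
N = 64       # number of samples to compare

def sample_cosine(freq_hz, fs, n):
    """Sample cos(2*pi*f*t) at rate fs for n samples."""
    return [math.cos(2 * math.pi * freq_hz * i / fs) for i in range(n)]

# A 30 kHz tone is above Nyquist for a 48 kHz rate...
above_nyquist = sample_cosine(30_000, FS, N)
# ...and yields exactly the same samples as an 18 kHz tone,
# because 48 kHz - 30 kHz = 18 kHz. That folding is aliasing.
alias = sample_cosine(18_000, FS, N)

assert all(abs(a - b) < 1e-9 for a, b in zip(above_nyquist, alias))
print("30 kHz sampled at 48 kHz is indistinguishable from 18 kHz")
```

Once the samples are taken, nothing can tell the two tones apart, so the only fix is removing the >24 kHz content before sampling; that's what the low-pass (anti-aliasing) filter does.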
The signal quality loss from resampling isn't due to aliasing; it comes from interpolating integer sample values during resampling and then truncating back to integers in the output file. To minimize this, intermediate audio formats within audio processing software are normalized to floating point, as far as I know, but as soon as you save to an output file that stores audio samples as integers, you lose accuracy.
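A minimal illustration of that integer rounding loss, assuming 16-bit PCM for the example: each float sample gets snapped to the nearest integer step of full scale, so even a single round trip loses up to half a quantization step.

```python
# A sketch of the integer quantization described above: storing a
# sample as 16-bit PCM rounds it to the nearest 1/32767 of full scale.
FULL_SCALE = 32767  # max value of a signed 16-bit sample

def to_int16(x):
    """Quantize a float sample in [-1.0, 1.0] to a 16-bit integer."""
    return round(x * FULL_SCALE)

def to_float(n):
    """Back to float, the way a player would decode it."""
    return n / FULL_SCALE

sample = 0.3333333333  # an "awkward" value no integer step hits exactly
restored = to_float(to_int16(sample))
error = abs(restored - sample)

# The round trip loses at most half a quantization step.
assert error <= 0.5 / FULL_SCALE
print(f"original {sample}, restored {restored:.10f}, error {error:.2e}")
```

This is why keeping intermediates in floating point matters: the rounding happens once at export instead of after every processing step.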
What about the sound in movies at 3000 kbps? Does it exist just for fun? And are the engineers who created 320 kbps AAC settings morons, if nobody can hear it? I don't get it.

This is irrelevant. You will not be able to hear any difference. None of your viewers/listeners will notice any difference, because their (home) equipment isn't capable of reproducing that amount of audio detail. If you really care that much: no human ear, except perhaps the rare case of the healthy, audio-educated ear of a 16-year-old with perfect studio equipment, can detect a difference between 256 kbps and 320 kbps, and certainly not the difference between 44.1 kHz and 48 kHz at the same bitrate.