192000 Hz 24 bit

unfa

Member
There is absolutely no reason to broadcast anything more than 48 kHz / 16-bit audio.

Higher sampling rates and bit depths make sense only if you intend to process the sound heavily afterwards.
Broadcasting means serving the viewers already processed content.

Why doesn't it make sense?

Because humans can't tell the difference between 48 kHz / 16-bit and 192 kHz / 24-bit, and the extra data is a waste of bandwidth that can even cause playback issues.

Humans can hear at best up to 20 kHz, and most can't hear above 18 kHz. That means a 44.1 kHz sampling rate (a Nyquist frequency of 22.05 kHz, the highest frequency that rate can represent) is sufficient to cover our entire hearing range in perfect fidelity, with some safety margin.

A 16-bit depth provides about 96 dB of dynamic range; below -96 dB everything is just dither noise. Typical recorded material (in my experience) has its own noise floor much higher than that (-60 dB is already a really good ratio), so this too is overkill.
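
Both numbers are easy to check yourself. A minimal sketch (Python; the rates and depths are just the values discussed above):

[CODE]
import math

def nyquist_hz(sample_rate_hz: float) -> float:
    """The highest frequency a given sample rate can represent."""
    return sample_rate_hz / 2

def dynamic_range_db(bit_depth: int) -> float:
    """Theoretical dynamic range of linear PCM: 20 * log10(2^bits)."""
    return 20 * math.log10(2 ** bit_depth)

print(nyquist_hz(44_100))      # 22050.0 Hz -> covers 20 kHz hearing with margin
print(dynamic_range_db(16))    # ~96.3 dB
print(dynamic_range_db(24))    # ~144.5 dB -> more range than any analog chain can use
[/CODE]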

You can do your own measurements with Audacity (an open-source audio editor).


Further reading: https://people.xiph.org/~xiphmont/demo/neil-young.html
 

Noodlebox

New Member
unfa said:
There is absolutely no reason to broadcast anything more than 48 kHz / 16-bit audio.

Higher sampling rates and bit depths make sense only if you intend to process the sound heavily afterwards.
Broadcasting means serving the viewers already processed content.

Why doesn't it make sense?

Because humans can't tell the difference between 48 kHz / 16-bit and 192 kHz / 24-bit, and the extra data is a waste of bandwidth that can even cause playback issues.

Humans can hear at best up to 20 kHz, and most can't hear above 18 kHz. That means a 44.1 kHz sampling rate (a Nyquist frequency of 22.05 kHz, the highest frequency that rate can represent) is sufficient to cover our entire hearing range in perfect fidelity, with some safety margin.

A 16-bit depth provides about 96 dB of dynamic range; below -96 dB everything is just dither noise. Typical recorded material (in my experience) has its own noise floor much higher than that (-60 dB is already a really good ratio), so this too is overkill.

You can do your own measurements with Audacity (an open-source audio editor).

Further reading: https://people.xiph.org/~xiphmont/demo/neil-young.html

I have to disagree with you. The sound hits you on a subconscious level, not just the conscious level of hearing. When I compare 48,000 Hz to 192,000 Hz, there's a huge difference. Don't listen to these false intellects who can't hear or tell the difference between 192 kHz and 48 kHz. It's like fake news, people saying there's no difference between 192,000 Hz and 48,000 Hz when there actually is a huge difference. But 192,000 Hz eats a lot of hard drive space, like 150-250 MB for a single song. It doesn't feel streamable since it sucks up all the bitrate for your stream.
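
For what it's worth, the storage figure is roughly right. A quick calculation (Python; the 4-minute song length is an assumption):

[CODE]
# Uncompressed PCM size = sample rate * bytes per sample * channels * seconds
sample_rate = 192_000   # Hz
bytes_per_sample = 3    # 24-bit
channels = 2            # stereo
seconds = 4 * 60        # assumed 4-minute song

size_mb = sample_rate * bytes_per_sample * channels * seconds / 1_000_000
print(f"{size_mb:.0f} MB")  # ~276 MB uncompressed, in the ballpark of the claim
[/CODE]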
 

S. Reinhardt

New Member
You can definitely hear a difference. 16 bits is different from 24. The difference between 24/96 and 24/192 is not big, but the sound difference between 16/44 and 24/96 is huge. The problem today is that no one is comparing. The sound is fatter, with more depth and space. Try https://neilyoungarchives.com/ or the app. On the app you can switch between MP3, CD, and HD; on the web, between MP3 (320) and 24/192. You can hear the difference.
 

R1CH

Forum Admin
Developer
No, you can't. Please see https://people.xiph.org/~xiphmont/demo/neil-young.html, which explains the science behind your ears. The page you linked is testing MP3 (compressed) audio against lossless (uncompressed) audio, which is an unrelated topic to sampling rate and bit depth.

TL;DR from one of the studies:
This paper presented listeners with a choice between high-rate DVD-A/SACD content, chosen by high-definition audio advocates to show off high-def's superiority, and that same content resampled on the spot down to 16-bit / 44.1kHz Compact Disc rate. The listeners were challenged to identify any difference whatsoever between the two using an ABX methodology. BAS conducted the test using high-end professional equipment in noise-isolated studio listening environments with both amateur and trained professional listeners.

In 554 trials, listeners chose correctly 49.8% of the time. In other words, they were guessing. Not one listener throughout the entire test was able to identify which was 16/44.1 and which was high rate [15], and the 16-bit signal wasn't even dithered!
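
To put the 49.8% figure in context, here's a quick binomial check (Python with SciPy; the number of correct answers is reconstructed from the quoted percentage):

[CODE]
from scipy.stats import binomtest

trials = 554
correct = round(0.498 * trials)  # ~276 correct answers

# Null hypothesis: listeners are guessing (p = 0.5)
result = binomtest(correct, trials, p=0.5)
print(result.pvalue)  # ~0.97 -> fully consistent with pure guessing
[/CODE]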
 

S. Reinhardt

New Member
I love science. I am familiar with the Nyquist theorem, which is just a theory from the 1920s. But this topic has so many elements, and music has so many aspects: e.g., is it recorded from analog masters? Volume level, music style, whether the listeners are familiar with the music. And it is more than just hearing the music; it's also about feeling the music. Of course, if the music is recorded in 16/44, as much studio music is today, there's no benefit to higher sampling. After I changed from vinyl to CD in the 90s, my interest in music faded slowly. Later I was told that LPs deliver 24/192 sound quality. My interest in music is back from listening to high-def music from analog masters. That is not science, I know, but that's my personal opinion. And I respect everyone who disagrees. Here's an interesting talk from producer/mixer Andrew Scheps on some of these subjects: https://www.youtube.com/watch?v=SXbH-yzGNfg
 

Notty

New Member
Dear Developer Team,

As was mentioned here, high sampling rates are important for digital audio processing.

Are VST plugins applied before downsampling?
 

Notty

New Member
Nobody knows? The same question applies to bit depth.

Can VST plugins work with 24-bit / 96 kHz sources? Or does downsampling to 16-bit / 48 kHz happen before they run?
 

SoloMan

New Member
I don't know if there is a way to sample at higher frequencies than 48 kHz, but using the custom output you can encode audio as PCM, and since OBS mixes in fl32 (32-bit float), that would be the optimal bit depth.
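
To illustrate why a 32-bit float mix bus is comfortable, here's a toy comparison of storing an attenuated signal in float32 versus int16 before restoring the gain (Python with NumPy; the -60 dB figure is arbitrary, and this is not how OBS's mixer is actually implemented):

[CODE]
import numpy as np

rng = np.random.default_rng(0)
signal = rng.uniform(-1.0, 1.0, 48_000).astype(np.float32)

gain = 10 ** (60 / 20)   # 60 dB
quiet = signal / gain    # attenuate to -60 dB

# float32 path: restore the gain directly
restored_f32 = quiet * gain

# int16 path: quantize the quiet signal first, then restore
quiet_i16 = np.round(quiet * 32767).astype(np.int16)
restored_i16 = (quiet_i16.astype(np.float32) / 32767) * gain

def peak_error_db(x):
    return 20 * np.log10(np.max(np.abs(x - signal)) + 1e-12)

print(peak_error_db(restored_f32))  # far below audibility
print(peak_error_db(restored_i16))  # around -36 dB: the 16-bit step lost detail
[/CODE]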
 

were491

Member
If you wish to work with sound, you can use Audacity (or similar) to record audio and OBS to record video. However, if you're just streaming or uploading videos, keep in mind they usually get compressed by the video services anyway, so there is no real difference (and even when they don't, again: placebo). Even when editing videos, many video editors will re-encode the sound when exporting.
 

emilois

New Member
I wanted 192 kHz so that I could slow a 240 FPS recording down to 25% speed while still keeping audio quality.
Unfortunately, OBS doesn't have a 192 kHz audio setting.
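
The rate math behind that use case, as a quick sketch (Python; 0.25 means playback at quarter speed):

[CODE]
def bandwidth_after_slowdown(sample_rate_hz: float, speed: float) -> float:
    """Highest audible frequency left after slowing playback to `speed`."""
    return (sample_rate_hz / 2) * speed  # the Nyquist limit scales with speed

print(bandwidth_after_slowdown(48_000, 0.25))   # 6000.0 Hz -> clearly dulled
print(bandwidth_after_slowdown(192_000, 0.25))  # 24000.0 Hz -> full range kept
[/CODE]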
 

lizardpeter

New Member
There are plenty of use cases for it. Don't forget, people whose sole job is to work with audio have been releasing works at 24-bit / 192 kHz for ages now (see John Williams, etc.). To say it's useless is simply untrue.
 

AaronD

Active Member
lizardpeter said:
There are plenty of use cases for it. Don't forget, people whose sole job is to work with audio have been releasing works at 24-bit / 192 kHz for ages now (see John Williams, etc.). To say it's useless is simply untrue.
Unless you're slowing it down like @emilois mentioned, it's still placebo. Stuff is released with stupid specs not because anyone can perceive the difference, but to satisfy a market that doesn't know what it's talking about. I wonder how much is actually processed at 96k or even 48k, and then UPsampled for a stupid-spec audiophool release?

There IS a reason to process with an insane bit depth: to keep the cumulative roundoff error through all the processing below what a 24-bit integer DAC can resolve. OBS uses 32 bits internally, the cheap Behringer digital mixers use 40 bits, and most DAWs, to my knowledge, use 64 bits per sample.
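
To make the roundoff argument concrete, here's a toy chain of gain stages rounded at each step (Python with NumPy; the gains and stage count are arbitrary, not any particular mixer's design):

[CODE]
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-1.0, 1.0, 100_000)

def through_chain(signal, dtype, stages=100):
    """Apply many paired gain stages, rounding to `dtype` after each one."""
    y = signal.astype(dtype)
    up, down = dtype(1.059), dtype(1 / 1.059)  # roughly +/- 0.5 dB
    for _ in range(stages):
        y = (y * up) * down
    return y.astype(np.float64)

for dtype in (np.float32, np.float64):
    err = np.max(np.abs(through_chain(x, dtype) - x))
    # float32 drift can rise above the ~-144 dB a 24-bit DAC resolves;
    # float64 stays far below it.
    print(dtype.__name__, round(20 * np.log10(err), 1), "dB")
[/CODE]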

It's possible, though difficult, to make an analog circuit outperform a 16-bit conversion. (From experience and from looking inside, I'd put most professional live analog mixing consoles at around a 12-14 bit equivalent, not 16.) It's physically impossible for analog to outperform 24 bits, given thermal and other noise sources, crosstalk, etc. Thus, a roundoff error small enough that a 24-bit DAC can't see it becomes "perfect", analog to analog. The bottom few bits will *always* be analog noise.

Of course, you need to be careful how you test that, because of something called a "noise gate". When mixing, a gate makes quiet things (assumed to be pure noise) silent, so that's not a good test. And a lot of DACs turn themselves off when they no longer receive samples, which changes how the analog output works. This can happen when a player pauses, for example, which I've seen directly while trying to make a custom USB sound card, even while the computer and the player app were still running. So that's not a good test either. The proper test is to send the DAC a stream of digital silence and verify that it stays active and doesn't have its own noise gate.
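
For reference, generating that stream of digital silence takes nothing but the standard library (Python; the file name and one-minute length are arbitrary):

[CODE]
import wave

# 60 seconds of true digital silence: every 16-bit sample is exactly zero.
# Loop this through the DAC and verify its analog output stays live.
with wave.open("silence.wav", "wb") as f:
    f.setnchannels(2)       # stereo
    f.setsampwidth(2)       # 16-bit samples
    f.setframerate(48_000)
    f.writeframes(b"\x00" * (48_000 * 60 * 2 * 2))  # rate * sec * ch * bytes
[/CODE]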
 