> 22050 Hz has higher quality than those at the same bitrates.
Only if you don't need the top audible octave. There's a reason why 44.1 and 48 kHz are the de facto standards for full-range audio distribution.
44.1 and 48 get brickwall-lowpassed just above 20 kHz, which is the upper limit of human hearing. Regardless of what the audiophool community says, we're simply oblivious to anything beyond that, so 44.1 kHz is the lowest practical sample rate that's completely indistinguishable from analog or acoustic. (Nyquist: capturing everything up to 20 kHz means sampling at more than twice that, i.e. above 40 kHz.)
Material sampled at 22.05 kHz (half of 44.1) has to be lowpassed just above 10 kHz first, to keep that upper octave from aliasing (or "folding back") into a lower octave. Aliasing creates audible artifacts that can't be removed later, so the top octave must be filtered out (and lost forever) before sampling at that rate.
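A quick way to see the fold-back, as a Python sketch (numpy only; the 15 kHz tone is just a hypothetical bit of top-octave content):

```python
import numpy as np

fs = 22_050      # the sample rate under discussion
f_in = 15_000    # hypothetical tone in the top audible octave

# Sample the tone with no anti-alias lowpass in front of the "ADC".
n = 4096
t = np.arange(n) / fs
x = np.sin(2 * np.pi * f_in * t)

# The FFT peak shows where the tone actually landed.
spectrum = np.abs(np.fft.rfft(x))
f_peak = np.argmax(spectrum) * fs / n
print(f"{f_in} Hz sampled at {fs} Hz shows up at ~{f_peak:.0f} Hz")  # ~7050 Hz
```

The 15 kHz tone reappears at 7050 Hz, folded around the 11.025 kHz Nyquist limit, and after the fact it's indistinguishable from a real 7050 Hz tone - which is why the filtering has to happen before sampling.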
Yes, a lot of the population can't hear that high anymore anyway, and intelligible speech sits entirely below that, but sampling that slowly *does* throw away something that at least some viewers can hear.
> Adding support for 96 kHz & 192 kHz sample rates sure would be nice as well…
Higher sample rates may have some technical merit for other reasons, but they don't *sound* any better: the full range of human hearing is already captured completely at 44.1 kHz, and slightly more comfortably at 48 kHz. It doesn't matter if the sound is awful above that, because we just can't hear it. (And if you look at the datasheet for any decent DAC, you'll see a brickwall lowpass on the response graphs, so outside of what we can hear it's barely doing anything anyway.)
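The "slightly more comfortably" part is just arithmetic on the transition band; a minimal sketch, assuming the passband has to stay flat out to 20 kHz:

```python
# Room between the 20 kHz audibility limit and Nyquist, per sample rate.
for fs in (44_100, 48_000):
    nyquist = fs / 2
    print(f"{fs} Hz: Nyquist {nyquist:.0f} Hz, "
          f"{nyquist - 20_000:.0f} Hz for the brickwall to roll off in")
```

48 kHz gives the filter nearly twice the room (4000 Hz vs 2050 Hz), which buys a shorter, better-behaved filter for the same audible result.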
Two examples of potential merit for a high sample rate:
- Lower latency in the converter chips themselves, from analog to digital and back again. They operate in different modes for different ranges of sample rates, which allows them to use a less aggressive lowpass for the higher rates, with the corner frequency still in the low 20s of kHz. The gentler filter costs fewer samples of latency through the converter, on top of there being less time between those samples, so the overall latency decreases faster than you might expect, with a smaller-than-expected increase in inaudible high frequencies. (There's a rough latency sketch after this list.)
But if you're delaying the soundtrack anyway, to line up with a couple of frame buffers at, say, a generous 60 fps (a 60 Hz sample rate where each sample is a still image - a lot of us are closer to 30), this point is completely moot.
- Higher-quality "stretching" or "tape slowdown", also sketched after this list. This only works if you have an ADC that does keep the steep cutoff at a higher frequency, instead of requiring a shallower cutoff at the same frequency like most do. A random off-the-shelf converter probably allows (officially) only one mode per frequency range, and that mode determines which filter it uses; all of those filters put the corner in the low 20s of kHz and only vary the slope according to how much room they have before significant aliasing. Overclocking a slow mode might seem like a viable way to get a steep slope at a high frequency, since DSP functions (including the converter's filter) have no notion of time, only samples, but overclocking by 100% is pretty extreme.
I rather doubt you're doing this live, though. It's more of a special-effects thing: you produce a file beforehand and then play that file live. Even if you do record something live, run it through that processing, and play the result in the same show, it's still the same concept.
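To put rough numbers on the latency point (first bullet): the tap counts below are invented for illustration, not taken from any datasheet, but the shape of the math holds for any linear-phase FIR anti-alias filter.

```python
# Hypothetical tap counts: the higher-rate modes can use shorter filters
# because the corner stays in the low 20s of kHz while Nyquist moves up.
modes = {48_000: 64, 96_000: 24, 192_000: 12}

for fs, taps in modes.items():
    delay_samples = taps / 2     # a symmetric FIR delays N/2 samples
    print(f"{fs} Hz, {taps}-tap filter: ~{delay_samples / fs * 1_000:.3f} ms")

# Versus a couple of 60 fps frame buffers on the video side:
print(f"two 60 fps frames: {2 / 60 * 1_000:.1f} ms")
```

Each doubling of the rate cuts the converter latency by more than half (fewer taps *and* shorter samples), but even the slowest case here is under a millisecond against ~33 ms of video buffering - the "completely moot" part.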
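And the slowdown point (second bullet) in sketch form, assuming you actually have such an ADC; the filename and the 90 kHz cutoff are hypothetical.

```python
from scipy.io import wavfile

# Hypothetical 192 kHz capture from an ADC that kept a steep cutoff near
# 90 kHz instead of the usual low-20s-of-kHz corner.
rate, data = wavfile.read("capture_192k.wav")

# "Tape slowdown" by 4x: same samples, lower nominal rate, so everything
# drops two octaves. Content captured at 40 kHz lands at an audible 10 kHz;
# with a low-20s corner there'd be nothing up there to bring down.
wavfile.write("slowed_4x.wav", rate // 4, data)
```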
Except for marketing (and the one pedant who both knows how to check and calls you out on said marketing), there's no need for live-streamed audio to be any faster than 48 kHz, apart from a handful of cases that might benefit from the latency point above - and those probably don't include video.