The sound-system/mixing part of music was never really my forte, but I did learn drums, trumpet, piano, and singing growing up. Some better than others :D
I had some music lessons growing up too. Piano and trombone. Then we moved and never found a teacher.
But the musical background really helps when mixing a song, even one full of instruments I have no direct experience with. If different parts of the song need slightly different settings, I can "feel" where they're going and make the adjustment with them. Most of the time I'm right, even if I've never heard the song before.
I hated theory in those traditional music lessons. I never made the connection between the fun pieces that I brute-force memorized and the boring scales with wacky fingerings, or the "beat machine" my piano teacher had (a stereo cassette tape: one track went to mono headphones, the other told the machine when I should have pushed the big button, and it scored me on that). But when I taught myself sound, I spent a lot of time looking up how to do it, how this or that piece of gear worked both externally and internally, how the physics works, etc. It wasn't until years later that I realized that that's theory!
---
I've since made my own analog audio processors, and I'm presently beating my head against a wall with a completely custom digital one. Still a long time before I give up on that. Yes, I could take what someone else has done and run with it, but all of them have various problems for the project I want to use it for:
- Too big a buffer. A lot of processors use a single-instruction-multiple-data (SIMD) design to greatly improve efficiency: record a chunk, process it all at once, and play it back out. But that chunk also creates latency. Most of the time that's not a problem - the "standard latency" through a professional PA is equivalent to about a forearm's length in air, or less - but for physical reasons, I want mine to be less than a handbreadth. Including the converters' delay, that leaves me with practically one sample at a time: in, process, out.
- Not enough channels on USB. Partly for debugging and partly for actual use, I want to send a bunch of intermediate taps back to the PC to record and analyze. But all the free hobbyist libraries seem to be hardcoded for stereo only, so I need to figure out how to modify a liberally-licensed one to increase the channel count. Behringer's XR18 and X32 digital mixers have 18 and 32 channels on USB respectively, with no driver required, so I know it's possible.
- Not enough bit depth. The free hobby stuff is hardcoded for 16-bit integer, which is more than plenty for casual playing around or making a relatively simple musical instrument out of it. It's also indistinguishable from analog if you're not doing any processing with it. But the pro standard is 24-bit conversion and at least 32-bit processing, often 64-bit for the fancier stuff, just to guarantee that the conversion is better than analog could ever hope to be from a purely physical standpoint, and that the roundoff error from all that processing still doesn't affect the 24-bit output. Thus, the entire digital system is perfect as far as the analog endpoints can tell, no matter how much it does in between. Behringer uses 24-bit converters, 40-bit internal processing, and 32-bit integer on USB in addition to the high channel count, so I know that works too.
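To put rough numbers on the latency point above, here's a back-of-envelope sketch. The 48 kHz sample rate, the ~10 cm handbreadth, and the ~45 cm forearm are my own assumed figures, not exact measurements:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly room temperature
SAMPLE_RATE = 48_000    # Hz; a common pro-audio rate (my assumption)

def samples_of_latency(distance_m):
    """Convert an acoustic distance in air to an equivalent sample count."""
    return distance_m / SPEED_OF_SOUND * SAMPLE_RATE

forearm = samples_of_latency(0.45)       # ~45 cm "standard" PA latency
handbreadth = samples_of_latency(0.10)   # ~10 cm target budget

print(f"forearm     ≈ {forearm:.0f} samples")      # ~63 samples
print(f"handbreadth ≈ {handbreadth:.0f} samples")  # ~14 samples

# Sigma-delta ADC and DAC group delay can easily eat most of those ~14
# samples by itself, which is how the budget shrinks to roughly one
# sample of actual processing buffer.
```

So the whole handbreadth budget is only about 14 sample periods, and the converters take most of that before any processing happens.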
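To illustrate the roundoff point above, here's a toy sketch of my own (not from any real DSP library) that quantizes between processing steps at different bit depths. Cutting a signal to a third and boosting it back up by three is lossless in exact math, but the intermediate quantization leaves errors behind, and 16-bit intermediates leave far bigger ones than 32-bit:

```python
import random

def quantize(x, bits):
    """Round a [-1, 1) sample onto a signed fixed-point grid of `bits` depth."""
    scale = 2 ** (bits - 1)
    return round(x * scale) / scale

random.seed(0)
signal = [random.uniform(-0.5, 0.5) for _ in range(1000)]

def roundtrip(sig, bits, passes=20):
    """Repeatedly cut to 1/3 and boost x3, quantizing after every step."""
    out = list(sig)
    for _ in range(passes):
        out = [quantize(s / 3, bits) for s in out]  # cut
        out = [quantize(s * 3, bits) for s in out]  # boost back
    return out

worst = {}
for bits in (16, 32):
    worst[bits] = max(abs(a - b) for a, b in zip(signal, roundtrip(signal, bits)))
    print(f"{bits}-bit intermediates: worst-case error {worst[bits]:.2e}")
```

For scale: one step of a 24-bit output grid is about 1.2e-7. The 16-bit intermediate errors land well above that grid, while the 32-bit ones sit below it - which is the whole point of processing wider than you convert.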
OBS, by the way, uses 32-bit audio processing internally, so no worries about the supposed "problems of digital" in general. But its audio engine is a pile of band-aids on top of a good idea for its original single use, written by someone who didn't really understand digital audio but managed to make it work anyway...most of the time...and the myriad of problems that come from that make it one of the more difficult systems for me to use. The devs know this, and they're in the early stages of figuring out what to replace it with, so that it's much more maintainable than it is now.
In the meantime, I often recommend that people do all of their audio work *outside* of OBS, either in a DAW (Digital Audio Workstation - essentially a complete sound studio in one app, designed specifically to do only that) or in a physical console, and bring the final result into OBS as its one and only audio source, to pass through completely unchanged. That's the pro mindset anyway: audio and video are separate things, processed separately and brought together at the very last moment to send out.
---
And once I get that custom digital audio processor working and into a few rigs, I want to do the same thing for video. One pixel at a time - in, process, out - for minimal latency (and a severe restriction on what can be done, because a lot of things require knowledge of neighboring pixels or even the entire frame, but what I'm thinking of doesn't need that)...and I think I'll also need to learn how to use an FPGA to make that work (Field-Programmable Gate Array, essentially a breadboard-on-a-chip for high-speed digital circuitry; the "program" is the connections between things, not a sequential list of instructions)...