Because I have absolutely no experience with audio processing. The visualizer is entirely based on other existing visualizers, which is also why I don't know whether I can copy over the code from Rainmeter.
You appear to be representing the frequency bands in a linear fashion (this is not related to the above post, which is about amplitude sensitivity), but the accepted industry standard is to represent spectral audio feedback in a logarithmic fashion.
There is an important reason why the industry adheres to this standard: it keeps the display of the coherent frequencies uniform, rather than wasting bands on high frequency content that most people find irrelevant when listening to music, as the bulk of the dynamics falls in the relatively low part of the audio spectrum, between 30 Hz and 6 kHz.
For this to make sense and be easily readable, logarithmic scales are implemented on all pieces of audio equipment and audio software that have frequency band visualisation. Even the archaic graphic equalisers of stacking stereo systems had their sliders placed at logarithmic positions to control each frequency band. The audio content that you want to be representing is the relatively low frequency dynamic coherent content, so it makes sense to use a logarithmic scale for the visual representation in your plugin.
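To make that concrete, here is a minimal sketch of how logarithmically spaced band edges can be computed, so that every band spans the same frequency ratio rather than the same number of hertz. This is not taken from your plugin or from Rainmeter; the function name and the 30 Hz to 16 kHz range are just assumptions for illustration:

```cpp
// Minimal sketch: compute logarithmically spaced band edges between fMin and
// fMax. Band i covers the range [edges[i], edges[i+1]).
#include <cmath>
#include <cstdio>
#include <vector>

std::vector<double> logBandEdges(double fMin, double fMax, int numBands) {
    std::vector<double> edges(numBands + 1);
    const double ratio = std::pow(fMax / fMin, 1.0 / numBands); // constant ratio per band
    double f = fMin;
    for (int i = 0; i <= numBands; ++i) {
        edges[i] = f;
        f *= ratio;
    }
    return edges;
}

int main() {
    // e.g. 16 bands from 30 Hz to 16 kHz; each band spans the same ratio,
    // so the low-frequency content gets as much visual space as the highs.
    for (double e : logBandEdges(30.0, 16000.0, 16))
        std::printf("%.1f Hz\n", e);
}
```

Each edge is the previous one multiplied by a constant ratio, which is exactly how the faders on a hardware graphic equaliser are laid out (roughly 31 Hz, 63 Hz, 125 Hz, 250 Hz and so on).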
Think of using a linear frequency scale as the equivalent of watching a high dynamic range (HDR) movie on a standard dynamic range (SDR) screen. With a linear audio frequency visualisation you can only see a compressed version of the frequencies that can be readily heard and easily isolated in the lower range, while the higher frequency content dominates most of the visualisation. This is akin to viewing the brightest, most contrasty scene of an HDR movie on an SDR screen: it still renders a full picture with all the content, but the brighter parts are compressed, mingling with the mid tones and making the image look relatively flat, washed out and drab. This is exactly what happens to the lower frequencies on your audio visualiser with a linear frequency scale, because the most dynamic parts in the lower frequencies are squashed into a very small space at the beginning of the graph.
Now think of using a logarithmic scale as the equivalent of watching an HDR movie on an HDR screen. With logarithmic audio visualisation one can see all of the expanded lower frequency dynamic content spread across more of the visualisation and the higher frequency content is relegated to the end, compressed up against the last third or so of the visualisation. This makes more room for coherent dynamic content and compresses the higher and mostly incoherent frequencies at the top, wasting less space by not representing what is essentially a noise wall in the higher frequencies.
On a logarithmic scale the frequency doubles for every unit of length along the X axis. That means if you represent 1 kHz at point 0, equispaced point 1 must be 2 kHz, equispaced point 2 must be 4 kHz, and so on, with any frequencies in between placed proportionally on the same logarithmic scale. Otherwise the bulk of your dynamic content will display as shown in my comparison video, with most of the movement happening in the lower quarter of your range and the other three quarters taken up by high frequency noise that represents visually as a thick block of pretty much nothing.
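For the visualiser itself, the other half of the job is folding the linear FFT output into those log-spaced bands. Here is a minimal sketch, assuming a plain magnitude array plus a sample rate and FFT size of your choosing; the function name and the peak-hold choice are mine, not anything from your code or Rainmeter's:

```cpp
// Minimal sketch (assumed names): fold a linear FFT magnitude spectrum into
// logarithmically spaced bands. 'magnitudes' holds fftSize/2 + 1 bins, where
// bin k corresponds to k * sampleRate / fftSize Hz.
#include <algorithm>
#include <cmath>
#include <vector>

std::vector<float> foldToLogBands(const std::vector<float>& magnitudes,
                                  double sampleRate, int fftSize,
                                  double fMin, double fMax, int numBands) {
    std::vector<float> bands(numBands, 0.0f);
    const double logMin = std::log2(fMin);
    const double logSpan = std::log2(fMax) - logMin;
    for (std::size_t k = 1; k < magnitudes.size(); ++k) {
        const double freq = k * sampleRate / fftSize;
        if (freq < fMin || freq > fMax) continue;
        // Position of this bin on the log axis (0..1), scaled to a band index.
        int b = static_cast<int>((std::log2(freq) - logMin) / logSpan * numBands);
        b = std::min(b, numBands - 1);
        // Keep the loudest bin in each band (peak hold); an average works too.
        bands[b] = std::max(bands[b], magnitudes[k]);
    }
    return bands;
}
```

One caveat: with a small FFT the lowest bands may receive only one bin, or none at all, so a larger FFT size or interpolation between neighbouring bins helps fill them in.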
Fix that and I think you'll be halfway there to removing the unsightly block of unnecessary noise from most of the graphic representation, essentially compressing it into the last quarter of your X axis. Much nicer. :-)