I did. But the hard reality is that it's only "thought candy". Unless you're doing slo-mo replays or something like that, and still want *that* to be smooth.
Like I said: Why is it though?
There have been a few studies on that, but you can google as well as I can. :-)

Besides, not even YOU can see and react faster than 60Hz. That's already beyond any human capability. Any notion to the contrary is not reacting, but anticipating.
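Just for scale, here's my own back-of-the-envelope comparison - one frame at 60Hz against the commonly cited ~200ms ballpark for visual reaction time (both numbers are round figures, not measurements of anyone in particular):

```python
frame_ms = 1000 / 60      # duration of one frame at 60 Hz
reaction_ms = 200         # commonly cited ballpark for human visual reaction time
print(f"one 60 Hz frame : {frame_ms:.1f} ms")
print(f"visual reaction : ~{reaction_ms} ms, i.e. roughly {reaction_ms / frame_ms:.0f} frames later")
# Whatever you "react" to happened about a dozen frames ago.  Anything that
# looks faster than that is anticipation, not reaction.
```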
---
As for why these fast monitors and TVs are sold, hyped up, and charged a pretty penny for:
It's the same reason the consumer audio world has its stupidly high specs and useless products. They don't make a scrap of difference either, because again, you can't perceive any better than what the dirt-cheap stuff is already capable of, even if there *is* a measurable difference.
And there isn't always even a measurable difference. Sometimes there is, sometimes there isn't, but the audiophool is so confused by that point that it hardly matters: he sees the marketing and spends his money.
Non-technical gamers often have a very similar mindset, from what I've seen, and are just as easily manipulated into spending money on things that don't matter. Whether there actually is a measurable difference or not (point a high-speed camera at the monitor, for example) is almost beside the point: their entire perception is based on the marketing, and their unchanged raw observations (unchanged because you can't perceive that difference even if it *does* exist) get filtered through that marketing and come back out as "proof"... which is also how placebo medicines work.
---
If you see something marketed as "pro quality" or the like: yes, the professional producers do use higher quality than what is useful to consumers, but it's NOT because that quality is perceptibly better to look at or listen to! Again, it's not.
The reason for producers to use higher quality is so that the boatload of math used to take the raw inputs and calculate the final result can't be accused of being too inaccurate to give a "perfect" result. There will always be roundoff error in digital systems, so using more bits per sample (24-bit raw recording, 64-bit processing, etc.) is simply a way to push that roundoff error down to where the final 16-bit output for sound, or 8-bit output for video, can't see it. Anything more than that for consumers is just a waste of bandwidth and storage space, because you can't actually perceive any better than that.
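To put a rough number on that, here's a minimal sketch (plain NumPy; the test tone, the little FIR filter, and the gain staging are all made-up stand-ins for a real production chain, not anyone's actual pipeline) that does its math in float64, quantizes the result to 16 bits, and measures where the leftover error ends up:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 48000
t = np.arange(fs) / fs

# A 1 kHz test tone at -12 dBFS, "recorded" at 24 bits.
x = 0.25 * np.sin(2 * np.pi * 1000 * t)
x24 = np.round(x * 2**23) / 2**23

# Arbitrary float64 "processing": gain down, a crude FIR smoother, gain back up.
b = np.array([0.25, 0.5, 0.25])
y = np.convolve(x24 * 0.5, b, mode="same") / 0.5

# Dither (TPDF, 1 LSB) and quantize to the final 16-bit consumer format.
dither = (rng.random(len(y)) - rng.random(len(y))) / 2**15
y16 = np.round((y + dither) * 2**15) / 2**15

# Compare against the same chain run at full float64 precision throughout.
ref = np.convolve(x * 0.5, b, mode="same") / 0.5
err_db = 20 * np.log10(np.sqrt(np.mean((y16 - ref) ** 2)))
print(f"residual error: {err_db:.1f} dB relative to full scale")
# Lands somewhere around -95 dB: everything the 24-bit recording and the
# float64 math contributed is buried below the 16-bit output's own
# quantization/dither noise floor.
```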
And as another note: that 24-bit recording is already better than any analog circuit could ever hope for. It's possible, though difficult, to make an analog circuit match or slightly exceed 16-bit equivalent resolution, but there's no chance of reaching 24-bit. The bottom few bits of a 24-bit recording will always be analog circuit noise.
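The usual formula for an ideal N-bit converter's dynamic range is about 6.02*N + 1.76 dB, which makes the comparison easy to tabulate (the ~120dB figure for a very good analog chain below is my own ballpark, not a measurement):

```python
# Ideal dynamic range of an N-bit quantizer: roughly 6.02*N + 1.76 dB
# (full-scale sine versus its own quantization noise).
for bits in (8, 16, 24):
    print(f"{bits}-bit: {6.02 * bits + 1.76:.1f} dB")
# 8-bit  ->  ~50 dB  (consumer video, per channel)
# 16-bit ->  ~98 dB  (CD audio)
# 24-bit -> ~146 dB  (studio recording)
#
# A very good analog input chain manages maybe ~120 dB of dynamic range
# (my ballpark, not a measurement), so the bottom few bits of a 24-bit
# recording really are nothing but analog noise.
```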
And if you look closely at a monitor displaying an 8-bit picture converted to an analog VGA signal, you can see some noise on that too. That's all analog circuit noise, and it's worse than 8 bits' worth because the difficulty of keeping an analog circuit quiet increases with frequency (several MHz for video) - and you can't see that either from a reasonable viewing distance.
Likewise for audio sample rates and video frame rates:
You can't hear any higher than 20kHz (often less), so there's no point at all in using anything higher than the first standard rate above 40kHz (google Nyquist to see why). For CDs and their legacy, that's 44.1k (a number that comes from analog TV video standards); for everything else, it's 48k. Higher rates like 96k or 192k might have some not-directly-audible benefits, like lower latency - about a handbreadth or so at the speed of sound in air, which you can *only* perceive as a slight coloration in the higher frequencies, and even then only if you have two copies of the same sound with one delayed by that amount and the other not - but they make no audible difference whatsoever for a single copy of a given sound. Again, a waste of bandwidth and storage space on the consumer side. And digital lowpasses, even "brickwall" ones, are practically free, so the production material can be fit into a lower sample rate with no coloration at all.
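If you want to see how cheap that conversion is, here's a minimal sketch using SciPy's polyphase resampler (one reasonable tool among several; the 96kHz "production" signal and its 30kHz ultrasonic content are synthetic examples of mine):

```python
import numpy as np
from scipy.signal import resample_poly

fs_hi, fs_lo = 96_000, 48_000
t = np.arange(fs_hi) / fs_hi    # one second of "production" audio at 96 kHz

# Synthetic content: an audible 1 kHz tone plus an inaudible 30 kHz component.
x = 0.5 * np.sin(2 * np.pi * 1_000 * t) + 0.1 * np.sin(2 * np.pi * 30_000 * t)

# Downsample 96 kHz -> 48 kHz.  resample_poly runs its own anti-alias lowpass
# before decimating, so the 30 kHz content gets filtered out instead of
# folding back (aliasing) into the audible band at full strength.
y = resample_poly(x, up=1, down=2)

spec = 2 * np.abs(np.fft.rfft(y)) / len(y)       # rough single-sided amplitude spectrum
freqs = np.fft.rfftfreq(len(y), d=1 / fs_lo)
tone = spec[np.argmin(np.abs(freqs - 1_000))]
junk = spec[np.abs(freqs - 1_000) > 100].max()   # biggest leftover anywhere else

print(f"1 kHz tone : {20 * np.log10(tone):6.1f} dB re full scale (started at -6.0)")
print(f"worst junk : {20 * np.log10(junk):6.1f} dB re full scale")
# The tone comes through at its original level; everything else (filter
# leakage, aliased remnants of the 30 kHz tone) sits tens of dB further down.
```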
And you can't see any faster than 60fps, so there's no point at all in displaying anything faster than that. The same reasoning as above may apply to the math behind it - smaller time slices make a physics calculation more accurate, for example, and that's already done regardless of the displayed framerate - but when it comes time to display it, there's no need for anything more than 60Hz. There *might* be something to be said for the pixel devices themselves not smearing from one frame to the next at 60Hz, but that's pretty much it.
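That split - fine time slices for the math, coarser ones for the display - is just the usual fixed-timestep simulation loop. A toy sketch (offline, no real clock, all the numbers and names are mine) stepping a bouncing ball at 240Hz while only "showing" every fourth state at 60Hz:

```python
# Toy fixed-timestep loop: physics at 240 Hz, display at 60 Hz.
PHYS_HZ, DISPLAY_HZ = 240, 60
DT = 1.0 / PHYS_HZ
STEPS_PER_FRAME = PHYS_HZ // DISPLAY_HZ   # 4 physics steps per displayed frame

GRAVITY = -9.81
pos, vel = 2.0, 0.0                       # a ball dropped from 2 m

def physics_step(pos, vel, dt):
    """One small, accurate time slice: integrate gravity, bounce off the floor."""
    vel += GRAVITY * dt
    pos += vel * dt
    if pos < 0.0:                         # hit the floor: reflect with some energy loss
        pos, vel = -pos, -0.8 * vel
    return pos, vel

for frame in range(60):                   # one second of "display" at 60 Hz
    for _ in range(STEPS_PER_FRAME):      # the fine-grained math nobody ever sees
        pos, vel = physics_step(pos, vel, DT)
    # Only this once-per-frame snapshot would ever reach the screen.
    if frame % 15 == 0:
        print(f"t = {(frame + 1) / DISPLAY_HZ:4.2f} s   height = {pos:5.2f} m")
```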
Video resolution, or pixel count, gets a similar argument again: past a certain point you can't see any difference, so why bother? For most people, that point is 1920x1080 at a reasonable viewing distance. If the picture literally fills your *entire* unrestricted field of view (very roughly 90x180 deg), you might justify 4k, but otherwise not... unless you're zooming in later and still want that to be clear. Thin-line graphics like the border of a webpage, and small text like what you're reading now, will show individual pixels at 1920x1080, but nothing else will. And if anything else did, we'd miss it among the larger features anyway... unless we ignore everything else to pay special attention to that one spot where it is.
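To put a number on "a reasonable viewing distance": the usual rule of thumb is that normal vision resolves roughly one arcminute, i.e. about 60 pixels per degree. A quick back-of-the-envelope check (the 24-inch 16:9 monitor and the distances are just example figures I picked):

```python
import math

# Rule of thumb: ~1 arcminute per pixel (~60 px/degree) is the limit of normal vision.
EYE_LIMIT_PX_PER_DEG = 60.0

def px_per_degree(h_res, screen_width_m, distance_m):
    """Pixels per degree of visual angle at the centre of the screen."""
    px_width = screen_width_m / h_res
    px_angle_deg = math.degrees(2 * math.atan(px_width / (2 * distance_m)))
    return 1.0 / px_angle_deg

width = 0.53                              # a 24" 16:9 monitor is about 0.53 m wide
for dist in (0.5, 0.8, 1.2):
    p1080 = px_per_degree(1920, width, dist)
    p4k = px_per_degree(3840, width, dist)
    marker = "eye limit passed" if p1080 > EYE_LIMIT_PX_PER_DEG else ""
    print(f"{dist:.1f} m:  1080p {p1080:5.1f} px/deg   4K {p4k:5.1f} px/deg   {marker}")

# Once 1080p alone is past ~60 px/deg, the extra pixels of 4K sit below the
# eye's resolution limit and buy nothing you can actually see.
```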