You're actually limiting yourself just by having the 1050 Ti in there, because both cards are now running at PCIe 3.0 x8 instead of x16. Take the 1050 Ti out entirely, unless you have other specific reasons for it (such as 5+ monitors).
The reason is that NVENC is a "free" encoder. It exists as separate silicon on your GPU, and its sole purpose is to encode video. Those two cards have the exact same encoder silicon, so even though your 1050 Ti may not be utilized for gaming, it can't encode any better than the 1060 that is.
On top of that, you're now asking one GPU to send its frame buffer over the PCIe bus to the other GPU, and both cards only have an x8 connection to do it.
Your best bet is to use the NVENC (new) encoder, with only the 1060 installed. This lets the GPU send its composited frame straight from VRAM to the encoder, without going across the PCIe bus to do it. Plus, you get the benefit of a full x16 connection to your CPU for fewer potential bottlenecks (there is a LOT of data being sent back and forth just for normal scene compositing).
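To put numbers on the x8 vs x16 point, here's a rough back-of-the-envelope calculation of theoretical PCIe 3.0 bandwidth per link width. The 8 GT/s rate and 128b/130b encoding are the standard PCIe 3.0 figures; real-world throughput will be lower due to protocol overhead.

```python
# Rough PCIe 3.0 throughput estimate, assuming 8 GT/s per lane
# with 128b/130b line encoding (standard PCIe 3.0 figures).
GT_PER_LANE = 8e9          # transfers per second, per lane
ENCODING = 128 / 130       # 128b/130b line-code efficiency

def pcie3_gbytes_per_s(lanes):
    """Theoretical one-direction bandwidth in GB/s (ignores protocol overhead)."""
    bits_per_s = GT_PER_LANE * ENCODING * lanes
    return bits_per_s / 8 / 1e9

print(f"x8:  {pcie3_gbytes_per_s(8):.2f} GB/s")   # ~7.88 GB/s
print(f"x16: {pcie3_gbytes_per_s(16):.2f} GB/s")  # ~15.75 GB/s
```

So dropping to x8 halves the headroom available for texture uploads, compositing traffic, and capture.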
I'll take the liberty of clarifying carlmmii's post a bit.
You get the best performance if you:
physically remove the GTX 1050 Ti from your PC, because as the sole card the 1060 gets the full x16 PCI Express bus speed
use the "NVENC (new)" encoder of the GTX 1060. This uses the least system resources, because game-captured data is never transmitted over the PCI Express bus; it's encoded directly from the game's frame buffer. If you use NVENC on the 2nd GPU, the (huge) raw data must be transmitted from one GPU to the other, which is a very resource-intensive task. Encoding itself doesn't take away any GPU resources, because NVENC is a dedicated circuit on the GPU chip.
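To give a sense of how "huge" that raw data is: an uncompressed 1080p60 frame buffer in a 4-bytes-per-pixel format (e.g. BGRA) works out to roughly half a gigabyte per second that would have to cross the bus on every capture path hop. These are illustrative numbers, not measured values.

```python
# Ballpark raw capture bandwidth for an uncompressed 1080p60 frame buffer.
# Assumes a 4-bytes-per-pixel format (e.g. BGRA); illustrative, not measured.
width, height = 1920, 1080
bytes_per_pixel = 4        # 8 bits per channel, 4 channels
fps = 60

bytes_per_second = width * height * bytes_per_pixel * fps
print(f"{bytes_per_second / 1e9:.2f} GB/s")  # ~0.50 GB/s per copy
```

Encoding on the same GPU that renders the game avoids that copy entirely, which is why "NVENC (new)" on the single card wins.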