tripletopper
Member
I noticed a few different things when I looked at the results of my playback on various shows.
First let me start by saying that I film in binocular stereo with two cameras representing a left eye and a right eye view from a particular perspective.
I usually have three 3D camera shots on my shows: one on my face, one on my hands to show my real movements with the game in sync with the CRT TV or TN-based monitor, and a third angle I like to call the money shot, plus the game footage.
I'm trying to figure out whether the issue is with my USB pathways, my bandwidth, or my cameras.
The computer I have is a 2023 M2 Mac Mini.
My capture card is a fairly generic HDMI-to-USB capture card, the kind typically found on Amazon.
My audio is routed through a Turtle Beach DSS, which captures the analog surround sound and converts it to headphone sound, then sent back into the computer through a two-track analog stereo input.
My cameras are SQ11s, mainly because they're the only cameras I can find that I can place within 2 inches of each other, pupil to pupil, and guarantee they're on the same plane and pointing in the same direction, using a rig I recently created to do such things more accurately.
I have a total of five 3D angles, but I always unplug whichever two angles I'm not currently using. It seems like macOS has a limit of nine identically named cameras, so it cannot see all 10 possible, and I have to alternate and reset.
When I run my camera feeds through OBS's filters, desaturating each camera and then tinting the whites red and cyan respectively in each eye, I get good, constant, easily 30 frames per second binocular vision where both eyes move.
However, if I try to do a 32:9 side-by-side half, even though I do get accurate 3D through Google Cardboard, one eye from one perspective does not seem to want to move and is frozen still, in color.
By the way, my internet is fiber from the county, which gets 100 Mbps down and 100 Mbps up, with sub-50-millisecond ping times.
I'm trying to find a way to do a full-color broadcast in 3D. I noticed cameras sometimes freeze when I use six different camera angles (three left eyes, three right eyes), try to keep the color info, and use positional placement to put together a proper-ratio 32:9 side-by-side element. It works well in Google Cardboard on YouTube, but not well enough, because one of the eyes is frozen.
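For reference, here is what the 32:9 geometry works out to. This is just a small sketch of the arithmetic, assuming standard 16:9 eye views; the function names are my own, not anything from OBS:

```python
# Side-by-side (SBS) stereo layouts: two eye views placed next to each other.
# "Full SBS": each eye keeps its full width, so two 1920x1080 (16:9) views
# make a 3840x1080 canvas, which is exactly 32:9.
# "SBS half": the canvas stays 16:9 (e.g. 1920x1080) and each eye is
# squeezed into half the width; the player stretches it back out.

def full_sbs(eye_w, eye_h):
    """Canvas size when both eyes keep full resolution."""
    return 2 * eye_w, eye_h

def half_sbs(canvas_w, canvas_h):
    """Per-eye size when both eyes share one 16:9 canvas."""
    return canvas_w // 2, canvas_h

print(full_sbs(1920, 1080))   # full SBS canvas for two 1080p eyes
print(half_sbs(1920, 1080))   # per-eye size in SBS-half on a 1080p canvas
```

So a full-resolution side-by-side of two 1080p eyes needs a 3840x1080 canvas, while SBS-half keeps a normal 1080p canvas at the cost of halving each eye's horizontal resolution.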
Is there a way to reduce the capture size in terms of pixels per frame? For my hand cam, I'd rather have more frames and less detailed pictures of my hands than fewer frames and more detailed pictures of my hands.
A Minoru camera worked before, but the maximum frame rate I could get was 15 frames per second.
Is there a way to turn down a high-definition camera's resolution so that it takes up less USB bandwidth?
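From what I understand, with a UVC webcam the host picks which resolution/frame-rate mode to request, so capturing at a lower mode should be possible if the camera offers one. As a sketch of how to check what modes a camera actually advertises on macOS (this assumes ffmpeg is installed via Homebrew, and the device index "0" is a guess that will vary per machine):

```shell
# List AVFoundation capture devices on macOS to find the camera's index
ffmpeg -f avfoundation -list_devices true -i ""

# Request a lower-resolution 30 fps mode from device 0 and record 5 seconds;
# if the camera doesn't support that mode, ffmpeg's error output lists the
# modes it does support
ffmpeg -f avfoundation -framerate 30 -video_size 640x480 -i "0" -t 5 test.mkv
```

OBS also exposes per-device resolution and frame-rate settings in the Video Capture Device source's properties, which is worth checking before anything else.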
By the way, for my USB ports I have two Thunderbolt 4 cables going to two different Thunderbolt 3 hubs. I noticed most of the hubs lose speed going down from USB-C 3.1 to USB-A 3.0. Are there any devices with multiple USB-C 3.1 ports, which I know are physically identical hookups to Thunderbolt 3 and 4 ports?
Would that help my cameras? I'm just wondering whether the bottleneck is at the Thunderbolt level, the USB level, the broadband level, or the camera level.
Also, could I get numbers on how many megabits per second a camera takes depending on resolution, frame rate, and whether it's color or monochrome? I assume the reason my red and cyan cameras come out well is that I save USB bandwidth by not capturing chroma information.
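The back-of-the-envelope math for uncompressed video is just width x height x fps x bits per pixel. This sketch assumes the camera sends uncompressed YUY2 (a common UVC format, 16 bits/pixel with 4:2:2 chroma); many cheap cameras actually send MJPEG, which compresses heavily and changes these numbers a lot:

```python
# Rough uncompressed video bandwidth in megabits per second.
# YUY2 (4:2:2 color) = 16 bits/pixel; a true monochrome (Y-only)
# stream would be 8 bits/pixel.

def video_mbps(width, height, fps, bits_per_pixel):
    return width * height * fps * bits_per_pixel / 1e6

print(video_mbps(1920, 1080, 30, 16))  # 1080p30 color: ~995 Mbps
print(video_mbps(640, 480, 30, 16))    # 480p30 color: ~147 Mbps
print(video_mbps(640, 480, 30, 8))     # 480p30 monochrome: ~74 Mbps
```

One 1080p30 uncompressed color stream already eats most of a USB 3.0 link's practical throughput, while several 480p streams fit comfortably. One caveat, though: if the red/cyan tinting is done in OBS filters, it happens after USB capture, so the camera is still sending full color over the bus; the bandwidth saving would only be real if the camera itself sent a monochrome mode.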