A few questions about organizing structure.

I noticed there are four different variables you could use, and the number of static scenes multiplied by those four numbers seems to equal all possible scenes.

The four variables I noticed were profiles, scene collections, scenes, and sources.

I assume sources are the rawest data you can get, meaning actual camera footage, actual game footage, etc.

I don't exactly understand the difference between a scene, a scene collection, and a profile.

Probably the best way to explain it is in terms of what gets me closer to my goal.

I currently do static shots of my room with no alterations between them. It's a Swiss-cheese layout, mainly of the game, plus three shots of my body: one at my face so people can see me talking, one at my hands to prove I'm controlling the game, and one I'll just call the money cam for short.

I was able to get them working in 2D fine. I can get them in 3D too, but currently I have to ruin the 2D version to make it 3D, and vice versa.

My current 2D-screen version monochromises all the cameras except the game footage and places the left (red) eye on top of the right (cyan) eye using the RGB Addition blend added in 27.2, making the perfect anaglyph merge.
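To make concrete what I think that blend is doing, here's a rough sketch of the pixel math for a grayscale red/cyan anaglyph (my understanding of the technique, not OBS's actual implementation; function names are mine):

```python
import numpy as np

def luminance(img):
    # Rec. 601 luma weights; img is an HxWx3 float array in [0, 1]
    return img @ np.array([0.299, 0.587, 0.114])

def anaglyph(left, right):
    """Grayscale red/cyan anaglyph: the left eye feeds the red channel,
    the right eye feeds green+blue (cyan); adding them merges the two."""
    out = np.zeros_like(left)
    out[..., 0] = luminance(left)    # red   = left eye
    out[..., 1] = luminance(right)   # green = right eye
    out[..., 2] = luminance(right)   # blue  = right eye
    return out
```

The additive blend works here because the red layer and the cyan layer occupy disjoint channels, so adding them never clips.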

Eventually I want a web program that can take the 32x9 VR footage (except it's static, so it's really just 32x9 full color 3D) and run it directly to my own website, where I apply either the left eye, the right eye, or a 16x9 side-by-side-half squeeze for the 3D TV effect, essentially making a 2D-friendly 3D program.
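The cropping geometry for those three modes is simple; here's a sketch of what I imagine the web side doing (Python with OpenCV just to illustrate the math; the function and mode names are mine):

```python
import numpy as np
import cv2

def view_from_sbs(frame, mode):
    """frame: one 32x9 side-by-side-full frame (e.g. 5120x1440).
    Returns a 16x9 frame for the requested viewing mode."""
    h, w = frame.shape[:2]
    left, right = frame[:, :w // 2], frame[:, w // 2:]
    if mode == "left":        # plain 2D: keep the left eye only
        return left
    if mode == "right":       # plain 2D: keep the right eye only
        return right
    if mode == "sbs_half":    # 3D TV: squeeze each eye to half width
        squeeze = lambda eye: cv2.resize(eye, (w // 4, h))
        return np.hstack([squeeze(left), squeeze(right)])
    raise ValueError(f"unknown mode: {mode}")
```

The "left"/"right" modes are exactly the poke-out-one-eye trick I mention further down.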

I'm trying to find a way to preserve everything so that I don't have to re-edit each time I want to change modes.

The 32x9 raw footage would probably just keep the left eye on the left-hand side and the right eye on the right-hand side, and that's all. That would be the base footage and the ultimate broadcast I would do. The only reason I don't do it now is that I don't have the means to make an easy 2D transition.

Sometimes when I look at different scenes I want to preserve the XY positions of the various elements. However, doing that also marries the elements to their sources, so each slot is always the same source. I want to use the same Swiss-cheese template but have different sources filling the various slots: left eye, right eye, game footage, etc.
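As far as I can tell, OBS stores each scene collection as a JSON file where every scene item carries its own position. A rough sketch of copying a template layout onto another scene by editing that file (the field names are from my reading of an export, so double-check them against your own file, back it up first, and edit only while OBS is closed, since it rewrites the file on exit):

```python
import json

def copy_layout(path, template_scene, target_scene, slot_map):
    """Copy pos/scale/bounds from items in template_scene onto items in
    target_scene. slot_map maps target item name -> template item name,
    e.g. {"right eye cam": "left eye cam"}."""
    with open(path) as f:
        data = json.load(f)

    def items(scene_name):
        # Scenes appear in the "sources" list with id "scene";
        # their layout lives under settings["items"].
        for src in data["sources"]:
            if src.get("id") == "scene" and src["name"] == scene_name:
                return src["settings"]["items"]
        raise KeyError(scene_name)

    template = {it["name"]: it for it in items(template_scene)}
    for it in items(target_scene):
        match = slot_map.get(it["name"])
        if match in template:
            for key in ("pos", "scale", "bounds"):
                if key in template[match]:
                    it[key] = template[match][key]

    with open(path, "w") as f:
        json.dump(data, f, indent=4)
```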

I heard Twitch can do 32x9 VR broadcasts. And isn't 32x9 3D the same as a 32x9 VR broadcast, except the perspective is constant? I know I'm able to do live 32x9 camera footage on the road.

It just takes an hour to set up every time I want to switch from 2D to 3D. By the way, I know the easiest way to make 3D into 2D is to poke out one of the two eyes, meaning show just the left or just the right.

So how exactly does this organization work, so that I could have an anaglyph 3D version, a regular 2D version, and a 32x9 real-ratio 3D version?

What would be the best way to preserve XY positions?
 