alvinhochun
New Member
If you have used the node editor in Blender, you may know what I mean. The idea is that all the sources, scenes, and output targets are made into nodes with sockets, and the nodes are connected to each other.
An output socket can connect to many input sockets, while an input socket can only connect to one output socket.
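To make the socket rules concrete, here is a minimal C++ sketch (the type names are mine, not actual OBS code): an input socket holds a single source pointer, so it can only ever connect to one output socket, while an output socket keeps no list of its consumers and so can feed any number of inputs.

```cpp
#include <vector>

struct Node;

// An output socket only knows which node owns it; it keeps no list of
// consumers, so any number of input sockets may point at it.
struct OutputSocket {
    Node* owner = nullptr;
};

// An input socket holds a single (possibly null) source pointer, which
// enforces the one-output-per-input rule by construction.
struct InputSocket {
    OutputSocket* source = nullptr;
};

// A node is just a bundle of input and output sockets.
struct Node {
    std::vector<InputSocket> inputs;
    std::vector<OutputSocket> outputs;
};
```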
I made a rough mock-up with Paint.NET; it is pretty incomplete:
In the example, "Video Input" has one output socket, while "Live Streaming" has two input sockets. Data is passed from left to right.
Now my idea is that evaluation goes from the end back to the beginning. Take the mock-up as an example: the "Live Streaming" node requires video from the "Video Overlay" node, which in turn requires the game capture from "Video Input". So "Video Input" grabs a frame and returns it to "Video Overlay", which composes the resulting frame image, stores it, and returns it to "Live Streaming". The "Preview" node also requires video from "Video Overlay", but this time "Video Overlay" has already stored the resulting frame, so the stored frame is returned without further processing.
The flow of processing a node is like this (a rough code sketch follows the list):
1. An output is requested from one of the node's output sockets.
2. If the node has already been processed this frame, jump to step 5.
3. For each input socket, request output from the connected output socket.
4. Evaluate the result and store it.
5. Return the stored result.
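Here is a rough, self-contained C++ sketch of that flow (all names are hypothetical, and the sockets are simplified to direct node-to-node connections): requesting a node's output first pulls from its inputs, and a node that was already evaluated this frame just returns its cached result.

```cpp
#include <iostream>
#include <string>
#include <utility>
#include <vector>

struct Frame {
    std::string description;  // stands in for real pixel data
};

class Node {
public:
    explicit Node(std::string name) : name_(std::move(name)) {}

    void connectInput(Node* source) { inputs_.push_back(source); }

    // Step 1: an output is requested from this node.
    const Frame& requestOutput() {
        // Step 2: if already processed this frame, jump to step 5.
        if (!cached_) {
            // Step 3: request output from every connected input.
            std::vector<const Frame*> inputFrames;
            for (Node* input : inputs_)
                inputFrames.push_back(&input->requestOutput());
            // Step 4: evaluate the result and store it.
            cache_ = process(inputFrames);
            cached_ = true;
        }
        // Step 5: return the stored result.
        return cache_;
    }

    // Called at the start of each frame so every node re-evaluates once.
    void invalidate() { cached_ = false; }

private:
    // Toy "processing": just record which inputs were combined.
    Frame process(const std::vector<const Frame*>& inputs) {
        std::cout << "evaluating " << name_ << "\n";
        Frame out{name_};
        for (const Frame* f : inputs)
            out.description += " <- " + f->description;
        return out;
    }

    std::string name_;
    std::vector<Node*> inputs_;
    Frame cache_;
    bool cached_ = false;
};

int main() {
    // Wire up the graph from the mock-up (video path only).
    Node videoInput("Video Input");
    Node videoOverlay("Video Overlay");
    Node liveStreaming("Live Streaming");
    Node preview("Preview");

    videoOverlay.connectInput(&videoInput);
    liveStreaming.connectInput(&videoOverlay);
    preview.connectInput(&videoOverlay);

    // "Live Streaming" pulls first, so "Video Overlay" evaluates once;
    // "Preview" then gets the cached frame without further processing.
    liveStreaming.requestOutput();
    preview.requestOutput();
}
```

Running this evaluates "Video Input", "Video Overlay", and "Live Streaming" once each, and "Preview" then reuses the stored overlay frame, matching the behaviour described above. A real implementation would also need to call invalidate() on every node at the start of each frame so the caches get refreshed.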
I am not sure how this matches the current operating principles of OBS. Perhaps if I get some more information on how OBS works now, I can get a clearer idea of how to do it.
It would not be easy to program, though. I tried to model a toy example system in C#, but things got quite messy. It would be a lot messier in C++. The GUI would also need work.