Complicated workflow, looking for suggestions

GJam

New Member
Thanks in advance for any help you can give me with this. Also, this is my first post in these forums so apologies if it's not in the right place or format.

I run a series of live call-in shows, up to now exclusively on YouTube, with anywhere from 2-4 remote hosts. We do all of our shows remotely except the last show of each month, which we do live in person from our studio, importing audio and video sources directly from our equipment there. I'm interested in branching out to live stream concurrently on other platforms. I can already do this on Twitter and Twitch, but I'd like to extend it further to TikTok and Instagram, which use different aspect ratios than my current streams.
As some background, I'm currently using Wirecast as my primary platform, mostly because of its Rendezvous functionality (inclusion of remote guests) and its built-in multistreaming. However, I'm becoming more and more irritated by its bugs and limitations.

I have a thought as to how to pull this off, but I'm not sure how to put the pieces together:

My initial plan is to run 2 concurrent OBS instances, one at a 16:9 output resolution, and one at a 9:16. The two instances will probably run on the same machine. I've done some testing already and have 2 OBS instances broadcasting to two separate live streams at the same time.
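For anyone trying the same thing, the two instances can be launched from one script using OBS's documented launch parameters (`--multi`, `--profile`, `--collection`). A minimal sketch; the binary path and the "Horizontal"/"Vertical" profile and scene collection names are assumptions (create them in OBS first):

```python
# Sketch only: spawn two OBS Studio instances with separate profiles and
# scene collections. --multi suppresses the "already running" warning so
# two copies can coexist.
import subprocess

OBS_BIN = "obs"  # assumption; e.g. the path to obs64.exe on Windows
                 # (on Windows, OBS also expects to be started from its
                 #  own bin directory, so set cwd accordingly)

def launch_cmd(profile, collection):
    """Build the command line for one instance."""
    return [OBS_BIN, "--multi",
            "--profile", profile,
            "--collection", collection]

if __name__ == "__main__":
    procs = [subprocess.Popen(launch_cmd(p, c))
             for p, c in [("Horizontal", "Horizontal"),
                          ("Vertical", "Vertical")]]
    for p in procs:
        p.wait()
```

Each instance then keeps its own canvas/output resolution (16:9 vs 9:16) in its own profile.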

I am still looking into the best way to bring in remote guests without Rendezvous, but I'm wondering if one of the instances could be the "primary" and provide some sort of video/audio output to the second instance, perhaps via virtual cameras/microphones. I'd still need differently laid-out scenes for the 9:16 ratio, to avoid windowboxing/letterboxing issues.

To complicate things further, I intend to use Bitfocus Companion as a controller for both instances, so that it can synchronize scene changes between them and the video operator doesn't have to replicate every change in two different windows.
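Whether it's Companion or a small script driving things, the fan-out itself is just the same obs-websocket (v5) request sent to two servers. A rough sketch of the message side only; the ports (4455 default, 4456 made up for the second instance), the scene names, and the per-instance suffixes are all assumptions, and the Hello/Identify handshake and actual socket I/O are omitted:

```python
# Sketch: mirror one logical scene change to two obs-websocket v5 servers.
# Each instance has its own layout, so the same logical scene maps to a
# different scene name per instance.
import itertools
import json

_ids = itertools.count(1)

def scene_request(scene_name):
    """Build an obs-websocket v5 'SetCurrentProgramScene' request (opcode 6)."""
    return {"op": 6,
            "d": {"requestType": "SetCurrentProgramScene",
                  "requestId": str(next(_ids)),
                  "requestData": {"sceneName": scene_name}}}

INSTANCES = {
    "ws://127.0.0.1:4455": "16x9",   # horizontal instance (default port)
    "ws://127.0.0.1:4456": "9x16",   # vertical instance (port changed in
                                     # its own WebSocket Server Settings)
}

def mirrored(scene_base):
    """One logical change -> the JSON payload to send down each connection
    (after completing the Hello/Identify handshake on each)."""
    return {url: json.dumps(scene_request(f"{scene_base} {suffix}"))
            for url, suffix in INSTANCES.items()}
```

Companion's OBS module can do this fan-out with two OBS connections and one button that triggers both; the sketch just shows what's going over the wire.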

Mostly what I'm looking for is some guidance, or a refocusing of my plan, to accomplish something like this. I'm willing to put in the time and effort to research things, but since I'm relatively new to OBS I wanted to check whether what I'm proposing is possible, and if so, how best to go about it.

Thanks
 

Lawrence_SoCal

Active Member
A bunch of your questions are outside my areas of expertise, so hopefully others will chime in.
1. multi-destination streaming
Beware both the local system hardware resource and the network bandwidth (upload/upstream) implications of multiple streams. restream.io or similar services can bypass some (many?) of the complications. Or you may have the hardware and bandwidth and prefer local control... it depends.
2. Remote video sources
Numerous options, but because OBS Studio is free and open-source (FOSS), certain options with licensing restrictions (cost or otherwise) usually aren't included, meaning you have to put together and support the connections yourself. NDI has a remote bridge/guest option, but your requirements for remote device support, ease of use/install, etc. come into play. And NDI is not natively supported in OBS [though OBS Studio with an NDI PTZ camera is my setup, using my camera vendor's supplied/supported VirtualUSB driver to take the NDI feed and make it appear to the operating system (and therefore to OBS Studio as well) as a locally connected USB camera].
There are other options, of varying levels of maturity.
3. You could use something like Advanced Scene Switcher and its extensive Macro capability to take websocket command input (from a 'controller') and pass it on to the 2nd OBS Studio instance as you designate (i.e. duplicated as-is, or adjusted as appropriate).

In a similar vein on camera feeds, your choice is whether to have a 'primary' OBS Studio instance receive the video feed, process it there, and then have a 2nd OBS instance receive the Presentation Mode output of the first. But with different aspect ratios and the naturally lossy nature of real-time video re-encoding, I'd be more inclined (if resources and budget allow) to duplicate the incoming video feeds to each OBS instance individually. I've never done this, though, and there may be other considerations/approaches/software options... so just food for thought. Duplicating network video traffic (e.g. NDI camera feeds) should be relatively simple (a listener receives the feed, then distributes it to 2 endpoints; ports, in this case, on the local PC). A USB camera feeding multiple targets seems doable as long as the software/drivers allow non-exclusive access to the video stream.

I've been around tech long enough that my experience is: what can go wrong usually will. So my preference is independent systems, with as few common points of failure as budget and trained staff allow. In general, that means I'd prefer not to have the 2nd OBS Studio instance depend on the 1st... but until I got into the details and determined the risks, I could go either way. If I were going to have OBS instance #2 depend on instance #1, I'd try to make sure instance #1 was as stable and reliable as I could make it: very limited, with only stable and mature plugins, etc.

Anyway, hope that helps. Good luck
 

AaronD

Active Member
I have a hybrid (both local and remote) meeting rig that uses two instances of the same installation of OBS - and thus the same plugins - in a Master/Slave configuration. Not quite the same as what you're doing, but a few similarities.

Like you said, OBS can run just fine in two copies on the same machine, and my command-line switches (actually sent from a script) make them use different profiles and scene collections too. Advanced Scene Switcher does a LOT of automation!
The core automation decodes the naming convention of the Master's scenes, and uses that to send control signals to the DAW for audio, and to the Slave instance of OBS for the local display and recording. So the operator only has to focus on the Master instance, producing the live stream for the remote people, and make sure that the Slave is actually recording. Otherwise, they're focused entirely on the Master.
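For anyone curious what "decode the naming convention" can look like in practice, here's a toy sketch. The `@key=value` tag convention is entirely my own invention for illustration, not AaronD's actual one: the Master scene name carries tags that tell the automation which Slave scene and DAW snapshot to switch to.

```python
# Toy sketch of a scene-naming convention: tags after '@' in the Master's
# scene name tell the automation what to switch downstream. The convention
# itself is made up for illustration.
def decode_scene(master_scene):
    """'Talk @slave=Passthrough @daw=SpeechMix' ->
       {'slave': 'Passthrough', 'daw': 'SpeechMix'}"""
    tags = {}
    for part in master_scene.split("@")[1:]:
        key, _, value = part.strip().partition("=")
        tags[key] = value.strip()
    return tags
```

Advanced Scene Switcher's macros can match on scene-name patterns in much the same spirit, then fire websocket or MIDI actions at the other endpoints.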

Both of my instances are set to the same picture size - 1920x1080, which is 16:9 - so that the Slave can take the Master's output fullscreen as a passthrough for that part of the meeting. That way, we can show a visual aid to everyone at the same time. Then we switch back to the cameras, where the local one is sent to the remote people via the Master instance, and the meeting window is Window-captured by the Slave instance to show to the local display and recording. As I said before, those are different scenes in the Master, which allows Adv. SS to send the control signals to everything else to change accordingly.

It just so happens that my operating system allows two apps to use the same video source simultaneously. So both the meeting and the Slave instance use the Master's virtual cam. Not all OS's, or versions of the OS, can do that. Historically, that access has been exclusive for performance reasons, and some still have that mentality today, regardless of the hardware that they're running on. So if my rig were to be built on one of those, it would need a different way to get the Master output to its two destinations.

---

Generally, you want to draw out the entire signal flow as a controlling document, and THEN build the system to match that document *exactly*. The more detail, the better. If something doesn't work, change the document first, perhaps even as a new revision while keeping a record of the old, and then change the system to match. Treat it like a serious engineering project: there's a reason they do things the way they do.

If you get stuck, post that document as part of the question.
 

GJam

New Member
Thank you both for your responses on this. If I can get all this working I'll definitely post back here, in case it helps someone else.
 

Catahoula

New Member
Hello,

Curious if you were able to work this out and how? I am looking to do something similar...

I am capturing scenes off a live lip-synced animation program and want to create two simultaneous recordings, one in 16:9 format and one in a vertical 9:16 format. Each format frames the scene differently, and I have each set up on its own profile. I can output to 16:9 or I can output to 9:16, but not both simultaneously.

I have watched tons of tutorials, and the closest I found was to open OBS twice and use the appropriate settings in each. In the tutorial it worked, but in my attempts I only get one or the other: when one is visible, the other is not.

Any information on how to make this happen would be greatly appreciated.

Thanks!!
 

AaronD

Active Member
New thread for that topic here:
 