MS Teams Live event and OBS

tbenjamin

New Member
Hi!

I'm doing a hybrid event with both physical and digital attendance in a couple of weeks.
The event will be planned, organized and executed using MS Teams Live events.
I'm using OBS Studio to do intro posters, corner bug, overlays and to control pre-recorded content that is part of the event.
In addition, I will do camera switching using an ATEM Mini that is set up as a source in OBS.

A couple of questions and reflections spring to mind:
  1. How can pre-recorded content be sent both to the Teams event and to the projector in the auditorium we're using? We only want the speakers' presentations and the pre-recorded content "on the wall", not the presenters themselves or the camera production.
  2. I've been testing, and although the external encoder option (using an RTMP stream) in Teams Live events seems to work like a treat, there is no possibility of viewing anything else in Teams but the RTMP stream. I.e. remote speakers aren't available, nor is sharing other local content. Of course this is a flaw in Teams, and the result is that one is effectively forced to produce the event in Teams itself.
  3. I'm guessing that using the virtual camera option in OBS to send the production to Teams is the way to go to ensure remote speaker capability; the only downside is that they will be "outside" of the OBS production. Is there a way to get video/audio out of Teams and into OBS - and then back again?
  4. Audio in a mixed media environment has always been a challenge, at least for me. Is there anything I might do to reduce the possibility of getting into serious trouble with this production?
Hardware running OBS (and Teams, for that matter) is a 2016 MacBook Pro with a 2.6 GHz quad-core Intel Core i7.

Any thoughts or advice?

TIA,

-Tom
 

AaronD

Active Member
I have a rig that does exactly that, on Ubuntu Studio Linux. So some things might be different for you.

I have two simultaneous instances of OBS: one to produce the feed to the remote participants, and one to produce and record the display to the local room. I've called the remote one Master and the local one Slave, because the automation makes it work that way.

Both instances have different Profiles and different Scene Collections so that they don't run over each other, and those are specified on the command-line:
obs --multi --profile "Master" --collection "Master" --startvirtualcam
obs --multi --profile "Slave" --collection "Slave"

Run obs --help on the command line to see all that you can do there. :-)

Actually, those commands are in a script that does all of the setup, waits for me to tell it I'm done, and then tears down to leave the system the same as it was to start with. So now I just "run script, get meeting." The flowchart for that script is attached here.
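If it helps, here's a rough Python sketch of that "run script, get meeting" idea. Only the two OBS launch commands come from above; everything else is a placeholder, since the actual setup and teardown steps live in the attached flowchart and depend on your own audio/video plumbing.

#!/usr/bin/env python3
# Rough launcher sketch: start both OBS instances, wait, then tear down.
import subprocess

# Start the two OBS instances, each with its own Profile and Scene Collection.
master = subprocess.Popen(
    ["obs", "--multi", "--profile", "Master",
     "--collection", "Master", "--startvirtualcam"])
slave = subprocess.Popen(
    ["obs", "--multi", "--profile", "Slave",
     "--collection", "Slave"])

# ...whatever other setup (loopbacks, DAW session, meeting client) goes here...

input("Meeting is up. Press Enter when you're done...")

# Teardown: close both instances and leave the system as it was to start with.
for proc in (master, slave):
    proc.terminate()
    proc.wait()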

Both instances also run the Advanced Scene Switcher plugin, which handles all of the ongoing automation:
Its settings are part of the Scene Collection, so by having different Collections already, each instance also has different "programming" in Adv. SS.

---

Adv. SS in the Master decodes a naming convention for the current scene, and sends a WebSocket message to the Slave. Adv. SS in the Slave receives that WebSocket message, and switches to the corresponding scene.

There are 2 possibilities for that scene-switching:
  • Camera: Window-Captures the meeting window, full-screen. In my case it's a web browser, but it could just as easily be Zoom, Teams, or whatever else.
  • Feature: Takes the Master's VCam, full-screen.
The meeting also takes the Master's VCam as its camera, always.

The local display is simply a full-screen projector from the Slave.
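If you wanted to do that Master-to-Slave sync outside of Adv. SS, the logic would look roughly like this in Python against each instance's obs-websocket server. I'm using the obsws-python library here (double-check the exact method names against its docs); the port, password, and naming convention are just examples, and in my rig none of this is code - it's all configured inside Adv. SS.

import obsws_python as obs

# Each OBS instance runs its own obs-websocket server on its own port.
slave = obs.ReqClient(host="localhost", port=4456, password="slave-pass")

def sync_slave(master_scene_name: str):
    # Decode the Master's (example) naming convention:
    # scenes starting with "Feature" -> Slave shows the Master's VCam full-screen,
    # everything else -> Slave shows the window capture of the meeting.
    target = "Feature" if master_scene_name.startswith("Feature") else "Camera"
    slave.set_current_program_scene(target)

sync_slave("Feature - Pre-recorded video")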

---

Audio is not in OBS at all. The Master sends everything it has to the Monitor, which goes to a loopback and gets picked up by a DAW. (Digital Audio Workstation: a complete sound studio in one app, and light-years better than OBS's pile of band-aids.) Mics also go directly to the DAW, not OBS. The meeting is fed from the DAW through another loopback, the local speakers are fed directly from the DAW, and the Slave is also fed from the DAW through yet another loopback as its only audio source at all. There's no further audio processing in OBS; from there it only goes to the recording.

And as you might guess by now, the meeting audio return goes to yet another loopback and gets picked up by the DAW as its own channel strip.

To automate the DAW, the Master Adv. SS looks at a different part of the scenes' naming convention, and sends the appropriate OSC (Open Sound Control) message to it:
  • Mics on/off
  • Playback on/off
In the DAW, those OSC messages don't control the flow of audio directly, but rather the amount of a sinewave generator that is sent to the side-chain of a gate. Normally, a gate cuts off or mutes something when it gets quiet, but the side-chain makes it respond to a different signal than what it's processing. So by controlling the amount of the sine generator that goes to the side-chain, I effectively make it mute or unmute the signal that it's actually inserted on.

The reason to add that extra step is to convert the hard on/off signal that comes via OSC into a nice fade, according to each gate's timing controls.
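The OSC side of that, if you scripted it yourself instead of letting Adv. SS send the messages, would look something like this with the python-osc library. The port and the OSC addresses are made up - the real ones depend on your DAW and how you've mapped it.

from pythonosc.udp_client import SimpleUDPClient

daw = SimpleUDPClient("127.0.0.1", 9000)  # DAW's OSC listening port (example)

def mics(on: bool):
    # Don't mute the mic channels directly; drive the level of the sine
    # generator that feeds the gate's side-chain, and let the gate's own
    # attack/release turn the hard on/off into a smooth fade.
    daw.send_message("/sidechain/mics/sine_level", 1.0 if on else 0.0)

def playback(on: bool):
    daw.send_message("/sidechain/playback/sine_level", 1.0 if on else 0.0)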

And because the room mics are far enough away from the room speakers that the standard echo canceller can't account for it, I also have the remote audio return connected to the side-chain of an aggressive compressor for the room mic mix. Because of that side-chain, the compressor doesn't care about the room mics that it's processing, but when a remote person starts talking, it practically cuts off the local mics to eliminate that echo.

This means that the remote people MUST all be muted unless they're actually talking at that moment! And the local people can't talk over them. Well, locally they can, but the remote people won't know it.

The audio flowchart is also attached here.

---

Thus, I have a hybrid meeting that "just works". Run script, get meeting. Name the Master's scenes according to that convention, then produce the feed to the remote people as if it were any other live broadcast, and everything else follows automatically.

As a convenience function, I used one of the newer features of Adv. SS to add a docked button in the Master to start and stop the recording. That button sends a WebSocket message to the Slave, which does that and puts a popup message in the system tray, just for some operator feedback that the message got there.

(screenshot)


More screenshots coming, once I move my laptop ("mobile workstation") from my "office" dock to the media dock so that the script and OBS work like they're supposed to...

And here they are:

OBS Master:
(screenshots)

The "End of Media" macro is intended to create a nice transition back to whichever camera was previously active. It can be abused, however, by transitioning from one media scene to another. In that case, it would create a loop of those two scenes. So you need to be a little bit careful, and think through the logic that you're programming and then using.

OBS Slave:
(screenshot)

The above is two views of the same scene. Normally, the Window Capture completely covers the placeholder text, but when the meeting window isn't present or it's configured wrong, that source is transparent and so the text appears.
(screenshot)
 

Attachments

  • Setup-Script.pdf (27.5 KB)
  • Audio-Routing.pdf (79.2 KB)

AaronD

Active Member
And the remaining screenshots of OBS Slave (only 10 files allowed per post):
(screenshot)

My meeting runs in the Chrome web browser, which is why the Slave Initialization/Setup is looking for it.
(screenshot)

(screenshot)

And because I forgot to include it previously, here is the Master's end of the Start Recording button:
(screenshot)
 

AaronD

Active Member
And, here's the self-reported About page for this 2015 Dell Precision M6800, running Ubuntu Studio as I mentioned up top:
(screenshot)

Butter-smooth, once I found, by trial and error, the one version of the GPU driver that both supports this old card and what modern versions of OBS want to do with it. I had one GPU crash after that, which lost that recording, so I switched the encoder to VAAPI instead of NVENC. Not a problem since.

Note: this is a "Mobile Workstation", not a "typical laptop". It actually has decent cooling, and a fair amount of weight. Most laptops prioritize portability over sustained performance, so while they might have comparable specs to the screenshot here, they can only use them for a few seconds at a time. Then they throttle back. The idea there is to load an app, document, or web page quickly, and then do nothing while the user looks at it. Not what you want for live media!
 

tbenjamin

New Member
Wow, that's one heck of a write-up! Thanks a bunch, this is really bloody inspiring, if I may say so.

The sad part is that if I were to attempt something similar, it would probably take more time than I have at hand at the moment.

How much time did you spend setting this up from scratch?

And is it even possible to run multiple instances of an application on macOS?

-Tom
 

AaronD

Active Member
How much time did you spend setting this up from scratch?
A couple of months.

The original rig ran on Windows, and was kinda "hacky" in a few places because Windoze is stoopid. Linux is like the "Erector Set" of operating systems. Or "software Legos". I can do *anything* in there!

The primary reason I don't have experience with Mac is the price. 3x for the same functionality? No thanks!
If the "walled garden" is worth that to you - to have a finite list of predefined things "just work" - then that's fine. But beyond that, it's just an expensive way to get to the same place.
 
1. How can pre-recorded content be sent both to the Teams event and to the projector in the auditorium we're using? We only want the speakers' presentations and the pre-recorded content "on the wall", not the presenters themselves or the camera production.
A simpler solution is to add an IMAG scene in OBS, then add the presenter's laptop source and the media source to it. Then do a full-screen projection of that scene to the wall. You will also need to install the "source-toggler" script by Exeldro, so that only one of those sources can be visible at any one time. You can use hotkeys or a Stream Deck to control what you want to hide or make visible. The program output is not affected, and you can control it separately on the same Stream Deck.
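If you'd rather script it than install the extra plugin, roughly the same toggling can be done over obs-websocket, for example with the obsws-python library. Double-check the exact method names against its docs; the scene and source names below are placeholders.

import obsws_python as obs

cl = obs.ReqClient(host="localhost", port=4455, password="your-password")

def show_only(scene: str, visible: str, hidden: str):
    # Enable one source in the IMAG scene and disable the other,
    # so only one of them is ever on the wall at a time.
    for name, enabled in ((visible, True), (hidden, False)):
        item_id = cl.get_scene_item_id(scene, name).scene_item_id
        cl.set_scene_item_enabled(scene, item_id, enabled)

show_only("IMAG", visible="Presenter Laptop", hidden="Pre-recorded Media")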
 
2. I've been testing, and although the external encoder option (using an RTMP stream) in Teams Live events seems to work like a treat, there is no possibility of viewing anything else in Teams but the RTMP stream. I.e. remote speakers aren't available, nor is sharing other local content. Of course this is a flaw in Teams, and the result is that one is effectively forced to produce the event in Teams itself.
You can bring remote speakers into OBS using another solution, then send the OBS program to Teams.
 
3. I'm guessing that using the virtual camera option in OBS to send the production to Teams is the way to go to ensure remote speaker capability; the only downside is that they will be "outside" of the OBS production. Is there a way to get video/audio out of Teams and into OBS - and then back again?
Open Teams on a separate screen, then capture that screen in OBS as a source. You might have to run two instances of Teams: one with the remote speakers, and the other for delivering your program to Teams. Not sure what the latency would be, though.
 
4. Audio in a mixed media environment has always been a challenge, at least for me. Is there anything I might do to reduce the possibility of getting into serious trouble with this production?
You can buy or download royalty-free sounds, or create your own in music production software. And please look into getting at least an M1 (what I use) or an M2; I find they can handle more.
 