HTML Interface

Anaerin

New Member
This can/will require some heavy modifications to OBS, but I think it will be a great concept in the long run.

Essentially, the "Screen" captured is a pure HTML window. Every source added is defined as a video stream accessible by <video> elements. The interface can/will (by default) just add elements to the HTML, but that can be overridden so you can provide your own HTML. This would then enable you to programmatically create your own animations, drop in elements (iframes and the like), hide, show and move around elements.

You could use things like WebSockets and JavaScript events to fire changes. At the simplest end, it could work just the same as the current interface does, but if you want to, you could make things a lot more complex, right down to editing the CSS and HTML that defines the final output.
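As a minimal sketch (the "ws://localhost:4444/control" endpoint and the message shape are placeholders I made up, not any real protocol), the page driving the output could listen for scene changes like so:
Code:
// Sketch only: the endpoint and message format below are hypothetical.
var control = new WebSocket("ws://localhost:4444/control");
control.onmessage = function (event) {
    var msg = JSON.parse(event.data);
    if (msg.type === "scene-switch") {
        // Re-broadcast as a DOM event so any script on the page can react.
        document.dispatchEvent(new CustomEvent("scenechange", { detail: msg.scene }));
    }
};
document.addEventListener("scenechange", function (e) {
    document.body.className = "scene-" + e.detail; // swap CSS-driven layouts
});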
 
This is already possible, to some extent, with OBS + OBSRemote (websocket interface) and CLR Browser Source in the older, non-MP version of OpenBroadcaster.

The only thing missing from OBS Multiplatform, for the time being, is OBSRemote and/or an integrated WebSocket interface.

While a whole overhaul of OBS Multiplatform to add the features you suggest may not happen, due to the heavy modifications you mention, including a WebSocket interface that lets external applications and browser sources gather state information from OBS would go a long way in that regard.

The problem with the overhaul you suggest is that, in general, HTML is not rendered via the GPU. It can be forced onto the GPU, using the same tricks that are used on mobile platforms, but that has its own drawbacks.
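For reference, the usual "trick" is just promoting an element to its own compositing layer, which you can do from script as well as from a stylesheet (generic web technique, nothing OBS-specific):
Code:
// Hint the renderer to composite this element on the GPU
// (assumes an element with id "game" exists on the page).
var el = document.getElementById("game");
el.style.willChange = "transform";      // explicit compositing hint
el.style.transform = "translateZ(0)";   // classic 3D-transform layer-promotion hack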
 

Anaerin

New Member
Kinda, but what I'm suggesting goes deeper than that. As everything is in HTML, you can, for instance, use CSS animations to move your main capture area to the bottom left of the screen, making transitions (other than crossfading) relatively simple to do. It could be done by adding events to the document or window object (window.onSceneSwitch = function()...) or pushing an event to a websocket.
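Roughly this kind of thing (the onSceneSwitch hook is hypothetical - only the CSS transition part works in any browser today):
Code:
// Hypothetical hook: assumes OBS would call window.onSceneSwitch(sceneName) on a change.
window.onSceneSwitch = function (sceneName) {
    var game = document.getElementById("game");   // main capture element, by convention
    game.style.transition = "all 0.5s ease-in-out";
    if (sceneName === "BRB") {
        // Animate the capture down into the bottom-left corner instead of a hard cut.
        game.style.transformOrigin = "bottom left";
        game.style.transform = "scale(0.25)";
    } else {
        game.style.transform = "none";
    }
};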

Currently, I have an overlay system built using the CLR Browser. It has a transparent area (for the current game capture to show through), a <video> element to show the webcam, and an iframe for chat, all controlled via WebSockets from a second browser window. This gives me the ability to cover the screen for AFK, or to bring the webcam over the game capture for "Booth", but the game capture has to stay fixed in place (it sits underneath and shows through the transparent area, so it can't be moved or scaled from the page). It also uses WebSockets to talk to Streamtip and an IRC bot for statistics.
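The control side is nothing fancy, something along these lines (simplified sketch; the port and command names are placeholders, not my actual setup):
Code:
// Sketch of the overlay's control listener; "ws://localhost:8080" and the
// command names ("afk", "booth", "normal") are placeholders for illustration.
var socket = new WebSocket("ws://localhost:8080");
socket.onmessage = function (event) {
    var command = event.data;
    var cover = document.getElementById("afk-cover");
    var cam = document.getElementById("webcam");
    if (command === "afk") {
        cover.style.display = "block";   // hide the game behind a full-screen panel
    } else if (command === "booth") {
        cam.className = "fullscreen";    // bring the webcam over the game capture
    } else if (command === "normal") {
        cover.style.display = "none";
        cam.className = "corner";
    }
};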
 
If I might ask, what version of OBS and Browser Source are you using? How exactly are you getting the webcam loaded into the web page, and is it at all possible to have multiple camera sources? If not in a single browser source, what about two browser sources (one accessing the webcam, the other accessing the video capture card)?
 

Anaerin

New Member
I'm running OBS 0.657b (this is Windows OBS, not OBS-MP) and the latest CLR Browser version. To enable webcam support, you have to add the flag "--enable-media-stream" to the CommandLineArguments for the runtime. To get the webcam loaded, I'm using a little JavaScript on load:
Code:
function DisplayWebcam(webcamObject) {
    // webcamObject is the <video> element the camera stream gets attached to.
    this.webcamObject = webcamObject;
    this.webcam = null; // will hold the result of the getUserMedia call
    this.webcamConstraints = { audio: false, video: { width: { min: 320, ideal: 1280 }, height: { min: 240, ideal: 720 } } };
    var my = this;
    this.webcamCallback = function (mediastream) {
        console.log("Got media stream");
        // Attach the stream to the video element (newer browsers would use srcObject instead).
        my.webcamObject.src = window.URL.createObjectURL(mediastream);
        my.webcamObject.onloadedmetadata = function (e) {
            this.play();
        };
    };
    var navigator = window.navigator;
    // Fall back through the various vendor-prefixed getUserMedia implementations.
    navigator.getMedia = (navigator.getUserMedia ||
                          navigator.webkitGetUserMedia ||
                          navigator.mozGetUserMedia ||
                          navigator.msGetUserMedia);
    if (navigator.mediaDevices) {
        // Modern promise-based API.
        console.log("Got mediaDevices, attempting to open with promise");
        this.webcam = navigator.mediaDevices.getUserMedia(this.webcamConstraints).then(this.webcamCallback, function (err) {
            console.log("Permissions Error", err);
        });
    } else if (navigator.getMedia) {
        // Legacy callback-based API.
        console.log("Attempting to open webcam...");
        this.webcam = navigator.getMedia(this.webcamConstraints, this.webcamCallback, function (err) {
            console.log("Legacy permissions error", err);
        });
    } else {
        console.log("Unable to get webcam - no getUserMedia function");
    }
}
You can add more than one cam, though finding the right source might be more difficult; as written, this grabs the first available webcam and places it into the passed-in video element.
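If you do want a specific camera rather than the default, enumerating devices and passing a deviceId in the constraints should work (untested sketch on my part, and it assumes a Chromium build new enough to have mediaDevices.enumerateDevices):
Code:
// Untested sketch: pick a camera by (partial) label and open only that one.
function openNamedWebcam(videoElement, labelFragment) {
    navigator.mediaDevices.enumerateDevices().then(function (devices) {
        // Note: device labels may be blank until the page already has camera permission.
        var cam = devices.filter(function (d) {
            return d.kind === "videoinput" && d.label.indexOf(labelFragment) !== -1;
        })[0];
        if (!cam) { console.log("No matching camera found"); return; }
        var constraints = { audio: false, video: { deviceId: { exact: cam.deviceId } } };
        return navigator.mediaDevices.getUserMedia(constraints);
    }).then(function (stream) {
        if (!stream) { return; }
        videoElement.srcObject = stream;  // newer equivalent of createObjectURL(stream)
        videoElement.onloadedmetadata = function () { videoElement.play(); };
    }, function (err) {
        console.log("Camera error", err);
    });
}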
 
Ah, I'm using one of the older versions of CLR Browser Source, before the update that changed SingleProcess and a few other things. I found the latest version spammed the OBS log file too much whenever it made an AJAX/XHR request, and it simply doesn't have the performance level I'm comfortable with. If I were to update CLR right now, all of the browser sources I use would immediately take a 10-15 fps drop.
 

Anaerin

New Member
Interestingly, it looked like it could be done with DXTory, but it seems that Chrome/CLR Browser doesn't want to work with DXTory's DirectShow interface for some reason.
 