# [OBS Multiplatform] Extending win-dshow device support



## jpk (Feb 14, 2016)

Greetings,

I'm looking into hacking on the win-dshow plugin (and maybe other stuff) to get a device working.  According to the advice in the docs, I'm posting here to explain what I want to accomplish before I start, and get any direction you may have.

Here's some context: I recently got a camera that I'd like to use as a source in an OBS scene, but it doesn't work out of the box.  For the curious, it's a Celestron NexImage Burst M, which is the Celestron-branded version of this.  It's essentially a monochrome webcam without a lens, it has a barrel that fits into where a telescope eyepiece would go, and is used as a capture device in astrophotography.

Clearly the device is a little out of the ordinary as far as OBS is concerned, so I wasn't really surprised it didn't work right away.  However, the device is designed to be captured with DirectShow, so there's hope. OBS enumerates the camera in the devices dropdown of the 'Video Capture Device' source (which is win-dshow under the hood, right?).  I can add the source to the scene, but it doesn't produce any output and I just get '{name}: Video configuration failed' in the log (where {name} is whatever I named the source).  Poking around in the plugin's source, there seems to be more than one thing that can cause that log message, but I haven't debugged it any further yet.  I also tried it in AMCap (which is Microsoft's DirectShow capture example application that comes in the Windows SDK), as well as in VLC.  VLC chokes, but AMCap works fine.

So the net of all that is: the device is definitely capture-able via DirectShow, it's just that OBS might need some love to do it.  Before I dive into that, I want to ask some questions and solicit any advice developers might have.  So here goes:

1. I'm under the assumption that AMCap and OBS are both using essentially the same API to interact with capture devices, and therefore OBS should theoretically be able to do everything AMCap does.  However, I currently don't know jack about DirectShow, so I may not know what I'm talking about.  If that's the case, feel free to set me straight.
2. It looks like some of the low-level DirectShow stuff is in libdshowcapture, which is used by win-dshow.  I suspect I'll have to fiddle around in there as well.  Am I right to assume the contribution guidelines are the same for that code as they are for OBS proper?
3. For these types of imagers, it's desirable to be able to tune things like sensor gain, exposure time per frame, and much more (all of which are available in AMCap).  I suspect that if I were to add a bunch of settings like that, it would inflate the properties dialog for the Video Capture Device source quite a bit.  For my purposes that's fine, but it might be a bunch of noise that an average OBS user wouldn't care about.  Do you have any opinions on how to approach that?  Maybe it's just a matter of organizing the properties dialog well enough, or maybe it's worth making it a separate plugin -- "Advanced Video Capture Device" or something.
4. Maybe I'll figure this out as I set up the dev environment, but is there anything special I need to do to run my dev builds of OBS side by side with the current production version?  Do they share settings?  Can they run at the same time?  Etc.
5. If I can manage it, I plan to get OBS to support my camera without any hacks that cause regressions elsewhere.  Assuming I can do that (and meet style guidelines, etc.), would support for a device like this be considered merge-able if I put in a pull request?
I'm a software developer at my day job, but spend most of my time working in Linux and OSX.  Not only would this be my first stab at OBS dev, it'd also be my first time hacking on a Windows application (which, by extension, also means my first time messing with DirectShow). So any pointers you may have that'll save me some grief are very much welcome. :)
Anyway, thanks for reading and I appreciate the help!


----------



## jpk (Feb 18, 2016)

Cool, so I got my camera working tonight.

The issue was that the camera's color format is Y800 (luma plane only, 8 bits per pixel -- like NV12 or something, just without the chroma part), which OBS didn't handle.  It was mostly a matter of adding Y800 values to a few enums and adding the corresponding case statements throughout the video pipeline.  I didn't add anything with regard to tuning sensor gain, exposure time, etc. yet, though.  Mostly because (1) it seems orthogonal to the color format stuff and would be better as a separate change, and (2) the color balance effect filter might be enough to get by for my purposes, so I'll do some testing before deciding whether it's necessary.
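To make the Y800 part concrete: since the format is a single 8-bit luma plane, expanding a pixel to RGBA just means replicating the luma value into R, G, and B.  A rough illustration of the per-pixel arithmetic (the names are mine, not the actual patch -- OBS does its format conversion elsewhere in the pipeline):

```c
#include <stdint.h>
#include <stddef.h>

/* Expand a Y800 (grayscale, 8bpp) frame to packed RGBA.
 * linesize is the stride of the source in bytes, which may be
 * larger than the width due to row padding. */
static void y800_to_rgba(const uint8_t *src, size_t linesize,
                         uint8_t *dst, uint32_t width, uint32_t height)
{
	for (uint32_t y = 0; y < height; y++) {
		const uint8_t *row = src + y * linesize;
		for (uint32_t x = 0; x < width; x++) {
			uint8_t luma = row[x];
			*dst++ = luma; /* R */
			*dst++ = luma; /* G */
			*dst++ = luma; /* B */
			*dst++ = 255;  /* A */
		}
	}
}
```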

Couple questions:

1. This did end up requiring a small change in libdshowcapture, which is in a git submodule.  I've never dealt with pull-requesting a change that involves a submodule.  How should I go about that?  I'm guessing I need to fork libdshowcapture, commit my changes to the fork, and pull-request it.  Then also pull-request the changes to obs-studio, which would include the submodule version bump.  But would the PR to obs-studio have to wait until the PR to libdshowcapture was merged?  Or no?
2. I'd like to test my dev builds on a second machine.  What's the best way to do that?  Can I build the installer or something, or should I just copy the build directory over and try to run the exe?
I need to clean up a little before pull-requesting the Y800 support, which I'll do Soon™.  I also want to test it in the telescope but I'm waiting on a part and a clear day before I can do that.  Once that's lined up, I'll get an initial PR in for review.
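For reference, the submodule-bump workflow from the first question can be sketched with throwaway local repos -- "lib" standing in for a libdshowcapture fork and "app" for obs-studio; all names and paths are placeholders:

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"

# 1. The library repo gets the fix committed (in reality: fork, commit, PR).
git init -q lib
git -C lib -c user.name=me -c user.email=me@example.com \
    commit -q --allow-empty -m "Add Y800 support"

# 2. The application tracks the library as a submodule.
git init -q app
cd app
git -c protocol.file.allow=always submodule add -q "$tmp/lib" deps/lib
git -c user.name=me -c user.email=me@example.com \
    commit -q -m "Add lib submodule"

# 3. Upstream library moves forward (the library PR gets merged)...
git -C "$tmp/lib" -c user.name=me -c user.email=me@example.com \
    commit -q --allow-empty -m "Another upstream commit"

# 4. ...then the app fetches in the submodule, checks out the new
#    commit, and commits the updated submodule pointer.
git -C deps/lib -c protocol.file.allow=always fetch -q origin
git -C deps/lib checkout -q FETCH_HEAD
git add deps/lib
git -c user.name=me -c user.email=me@example.com \
    commit -q -m "Update lib submodule"

# The app's history now records the new submodule commit.
git submodule status
```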


----------



## dodgepong (Feb 18, 2016)

Thanks for looking into this. You can make a PR to libdshowcapture as well as a PR to obs-studio with the updated submodule reference; that should work fine. If you want to find people to help you test, you can try asking in the #obs-dev IRC channel on Quakenet.


----------

