Basic Suggestions List

Saucy

New Member
- Automatic Updater
- Customizable Hotkeys for Scenes
- Customizable transitions for scenes
- Automatic server picker for Twitch (not sure if Twitch already does this with the main cluster)
- Installer
- Being able to rotate/skew sources
- Able to add a video source
- Able to animate sources (a bit of a stretch, not needed)
- More control over video encoding, e.g. exposing encoder settings beyond just "quality" and bitrate
 

Lain

Forum Admin
Forum Moderator
Developer
Nice, all good suggestions, and most of them were already on the list. A few things I'd like to ask about, though.

Customizable scene transitions? Could you be more specific about what exactly you mean? Like adding the option for an instant transition instead of a quick fade, for example?

Rotate sources. I mean, you can already do it internally, but I'm sort of curious why you'd generally want to rotate a source, as it's essentially a 3D renderer and you can already stretch sources. The main issue with formally adding rotation is that it would be a serious pain, and though I could add it at some point, it'd really have to be very low priority unless there's a really good reason for adding it.

Able to add a video source? You mean play a video file? Definitely a good suggestion. Thanks, I'll add that to the list.
 

Saucy

New Member
Customizable scene transitions: Yes, being able to make it instant instead of a quick fade, making the fade longer, or having some kind of wipe. Mostly (for me personally) being able to make it instant or change the fade to be shorter or longer.

Rotate Sources: not a huge priority. I personally would have no use for it and can't really see a use for it. Low priority sounds right.

New Idea
I thought of another idea which I may have use for: having a mask or some kind of "pixel/color replacer" for video or bitmaps. This would be an extremely advanced feature, but it would be able to replace pixels it recognizes. Here are some examples, since I don't think I'm explaining this very clearly.

Say you have a bright pink image or video in your scene and you want to replace ONLY the pink with an image or a video.
and/or
Say in StarCraft II it recognizes the pixels in the menu and knows that there needs to be a video/image in certain spots.
and/or
In StarCraft II (again), when you are in-game it sees the UI below and knows there needs to be a webcam in the bottom right corner.

Kind of confusing, but it would be extremely useful for streamers. Kind of advanced, though.

P.S. add a donation page. :)
 

Lain

Forum Admin
Forum Moderator
Developer
Wow, that really is an amazingly cool idea. Unfortunately there are a ton of issues associated with it that make it less than feasible. Replacing solid colors is doable (like what you see on itmejp's stream, where he has his webcam background transparent because it's a solid color that he's replacing), but entire image comparisons are probably not very doable.
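
To illustrate the solid-color replacement idea, here is a minimal CPU-side sketch (not OBS's actual implementation, which would more naturally run as a pixel shader on the GPU); the key color and tolerance are example parameters:

```cpp
#include <cstdint>
#include <cstdlib>
#include <vector>

// Illustrative color-key pass: any pixel close enough to the chosen key color
// is made fully transparent, so whatever sits underneath it shows through.
struct PixelBGRA { uint8_t b, g, r, a; };

void applyColorKey(std::vector<PixelBGRA>& frame,
                   uint8_t keyR, uint8_t keyG, uint8_t keyB,
                   int tolerance)
{
    for (PixelBGRA& p : frame) {
        // Manhattan distance in RGB space; a real keyer would work in a
        // perceptual or YUV space and soften the edges.
        int dist = std::abs(p.r - keyR) + std::abs(p.g - keyG) + std::abs(p.b - keyB);
        if (dist <= tolerance)
            p.a = 0;
    }
}
```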
 

Lain

Forum Admin
Forum Moderator
Developer
I'm okay on money, and I would prefer not to ask for donations unless I really need it for some specific equipment to get some feature working, but thank you very much for the offer. You are totally awesome.
 

zerocul

Member
Hello :) Can I add some suggestions to this list?
First, many thanks! It is really great work! I like OBS so much that I think I'm just going to drop FMLE, FFSplit and the others :D
And my suggestions are:
1. The possibility of moving images inside the capture, e.g. logos, overlays, etc.
2. Adding some debug info for streamers, e.g. current FPS and frame drops, because the current bitrate alone is not enough for monitoring stream status. It would also be good to see the viewer count like in XSplit.
3. I don't fully understand it, but I think OBS can't maintain aspect ratio? For example, if I try to capture a "square" window at 800x600 and stream at 1280x720, the output video is so ugly :( Streaming at the same aspect ratio as the source is a workaround, but some viewers have problems with black bars in their players, because almost all players are designed for widescreen video.
4*. Local video recording, but I saw a topic with this suggestion already :)
Once more, thanks! :)
 

Lain

Forum Admin
Forum Moderator
Developer
Hey!

1.) You can move your overlays and images. Just turn on edit mode while streaming, click them in the view (or the list box) to select them, and you should be able to simply drag them around.

2.) I did add frame drop info in 0.34a, which is the color indicator. Yellow typically means it's starting to drop the disposable frames (not very noticeable on stream), and red means it's starting to drop important frames, which is where you'll get visible spikes. Generally what it means is you should turn down your bitrate/buffer size until it is fairly consistently green (a rough sketch of the indicator logic follows after this list). I'll be putting more info on it in the help file later. As for viewers, I'll add it to the list of things to do, as you're right, there is a way to get the viewer count, at least for Twitch.

3.) I'm not entirely sure I understand what you are trying to do. Is your image stretched? Is that what you mean by ugly? There's a bug where images do get stretched after changing settings, but you can fix it by selecting the source, and pressing Ctrl-R to reset its size. I'm not sure why you'd want to capture 800x600 if you're streaming at 1280x720, but you should still be able to do it without problems and it should look fine.

4.) Yep! As soon as I work through some more bugs and other stuff I'm going to get to work on it ASAP :)
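
Purely to illustrate the indicator logic described in point 2 above, here is a rough sketch; the struct, field names, and thresholds are invented for the example and are not OBS's actual code:

```cpp
#include <cstdint>

// Hypothetical status-color logic for a frame-drop indicator, loosely
// mirroring the behaviour described above: yellow once disposable frames
// start being skipped, red once important frames are dropped.
enum class StreamStatus { Green, Yellow, Red };

struct DropStats {
    uint32_t disposableDropped;   // frames skipped to relieve congestion
    uint32_t importantDropped;    // drops that cause visible stutter
};

StreamStatus indicatorColor(const DropStats& s)
{
    if (s.importantDropped > 0)
        return StreamStatus::Red;
    if (s.disposableDropped > 0)
        return StreamStatus::Yellow;
    return StreamStatus::Green;
}
```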

And no, thank you! ^_^
 

zerocul

Member
Oh, really. I just didn't think that everything was solved so simply :) Thanks for the explanation.
I did add frame drop info in 0.34a, which is the color indicator
About the color status indicator - great idea! I'm just used to seeing numbers :) I thought it was a bandwidth meter or something :lol:
I'm not sure why you'd want to capture 800x600 if you're streaming at 1280x720
For example - old games, which can't run at higher resolutions :)
 

sneaky4oe

Member
IMO no installer needed.
Better add these features:
- cross-service streaming (to Twitch and Own3D, for example)
- saving services in the broadcast settings so you don't have to enter them each time (I prefer streaming multiplayer to Own3D and singleplayer to Twitch)
- reworking the menu for adding video sources

PS: I totally love your window capture implementation. Now I can add chats to my streams and keep them in the background. Chat on stream makes it more interesting for viewers to write there. Thanks so much for that, don't change it.
PPS: how2donate?
 

Lain

Forum Admin
Forum Moderator
Developer
Hey! Don't worry, I'm not about to force anyone to use an installer. Some people prefer it, though.

As for your suggestions -
- For your first one, cross-service streaming? Forgive me, but I don't quite understand. You'll have to be a bit more specific.
- For your second one, in the latest version, you can save multiple setting profiles and switch between them in the general settings section. So you can save your own3d profile and your twitch profile and switch between them there when you wish.
- For your third one, you'll have to explain in more detail again, as I don't quite understand.
 

kutu

New Member
- local recording for testing purposes
- remember microphone "mute" setting
- Twitter account and/or RSS feed, for quick news and update notifications
 

Lain

Forum Admin
Forum Moderator
Developer
Hey kutu ^_^

- Local recording and saving to file is what I'm working on in the next update. You can see what is currently on the list at http://www.openbroadcastersoftware.com/forum/viewtopic.php?f=7&t=20
- Mute not saving? Ah, woops. Fixed in the next version. Thank you for pointing it out.
- @OBSProject is the Twitter account for project updates. It should actually be on the upper right-hand side of this website.
 

kutu

New Member
Jim said:
- Local recording and saving to file is what I'm working on in the next update. You can see what is currently on the list at http://www.openbroadcastersoftware.com/forum/viewtopic.php?f=7&t=20
sorry, my eyes skipped it for some reason, my fault
Jim said:
- @OBSProject is the Twitter account for project updates. It should actually be on the upper right-hand side of this website.
I, like others, have some browser plugins which remove social widgets because they slow down page loading. A simple link in the footer of the "home" page would be good for "unsocial" people.

PS: I'm glad I'm the first who starred your project on GitHub.
 

Warchamp7

Forum Admin
I'll put a basic Twitter link in the div containing the external Twitter button for anyone that might be blocking it like you :)

Edit: Done, you might see two Twitter buttons until your cache of the website updates if you don't have the normal one hidden with a plugin.
 

Muf

Forum Moderator
I have a small but spicy suggestions list (these will probably be hard to implement):

- Automatic bandwidth scaling.
Adobe Flash Media Live Encoder implements this, and it basically does away with ALL server-side frame-drop issues if you have a decent connection. You can tell it to scale down to some insanely low value like 100kbps before it starts dropping frames. You can also safely let it scale up close to your theoretical maximum upload speed without having to worry about the occasional hiccup. If this is too much work, an intermediate solution could be to provide a virtual capture device to enable uncompressed output to FMLE.

- Video filters
Deinterlace, denoise, crop, delay, etc. Your big advantage is being open source-- you should not have to reinvent the wheel here. I suggest having a look at ffdshow (SourceForge project). An intermediate solution could be injecting the ffdshow raw video filter in front of the OBS frame grabber filter when constructing the DirectShow graph. I recently reconstructed corrupted video capture from a capture card (there was some sync/stride problem going on resulting in offset stretched interlaced video) using ffdshow's Avisynth scripting capabilities (split fields, crop, resize, stack). This was a lifesaver as I was running out of time and could not fix the problem on the signal side.

- Audio filters (VSTs)
I use VSTs for filtering background noise out of my headset mic and generally making things sound more professional. Doing this processing inside OBS would mean the video could be automatically delayed based on the amount of audio buffer used by the VST filter(s). Matching this all up manually including delays induced by Virtual Audio Cable is a serious pain, not to mention the instability of using so many different tools that have to work together. VSTHost is an open source tool that already contains all the hard work, and as far as I know even includes an object oriented API for integration into OBS.
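
As a sketch of that delay-compensation idea (automatically delaying the video by however much latency the audio filter chain introduces), something like the following could work; the AudioFilter interface here is a hypothetical stand-in, not the actual VST or VSTHost API:

```cpp
#include <cstddef>

// Hypothetical filter interface -- a real integration would query the plugin
// (e.g. a VST) for its processing latency instead of this stub.
struct AudioFilter {
    virtual ~AudioFilter() = default;
    virtual std::size_t latencySamples() const = 0;   // latency the filter adds
    virtual void process(float* samples, std::size_t count) = 0;
};

// Convert the filter chain's total latency into a video delay (milliseconds),
// so that picture and filtered sound stay in sync automatically.
double videoDelayMs(const AudioFilter* const* chain, std::size_t numFilters,
                    double sampleRate)
{
    std::size_t total = 0;
    for (std::size_t i = 0; i < numFilters; ++i)
        total += chain[i]->latencySamples();
    return 1000.0 * static_cast<double>(total) / sampleRate;
}
```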

- Alpha masks for video sources
Say you want to fit your webcam feed into a round-shaped object like the ones you find in Diablo III's in-game UI. No amount of cropping will help you; you need a mask. Simply giving the option to load a greyscale bitmap from disk and using it as a mask would be ideal! Additionally, the choice to scale the mask relative to the video source (the mask covers 100% of the source, and if you move/scale the source, you also move/scale the mask with it), or relative to the canvas (the mask covers 100% of the canvas, and scaling/moving the video source does not affect the mask) would be even better.
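
A minimal CPU-side sketch of the greyscale-mask idea, where the mask value simply scales the source pixel's alpha (in practice this would more likely be a shader, and the types here are invented for the example):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

struct PixelBGRA { uint8_t b, g, r, a; };

// Apply an 8-bit greyscale mask to a BGRA source of the same dimensions:
// white in the mask keeps the pixel, black makes it transparent, grey blends.
// Scaling the mask relative to the source vs. the canvas would only change
// how the mask is sampled; here they are assumed to match 1:1 for simplicity.
void applyAlphaMask(std::vector<PixelBGRA>& source,
                    const std::vector<uint8_t>& mask)
{
    const std::size_t count =
        source.size() < mask.size() ? source.size() : mask.size();
    for (std::size_t i = 0; i < count; ++i)
        source[i].a = static_cast<uint8_t>((source[i].a * mask[i]) / 255);
}
```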

Just wanted to mention these regardless of the fact that I probably won't see them in OBS for a long time if ever :)

Also, if you need certain things tested that require specific hardware, I have a beefy system, two 1080p webcams, three HDMI inputs, and a copy of Visual Studio. Let me know if I can help testing anything.
 

Lain

Forum Admin
Forum Moderator
Developer
- Bandwidth scaling - I'll look into it. My current setup should be compatible with dynamic setting changes, so I'll see what I can do.

- Video filters - I've thought about this as well. It should be fairly easy to implement at some point, but I want to keep it on the GPU if possible, so my idea is to make it so that you can run an image source through extra post-process pixel shaders if desired. I'd prefer to keep image processing off the CPU if possible, and shaders would be easier anyway; image processing on the CPU is just super CPU/memory consuming. Also, what was going on with your video capture?

- Audio filters - Yes! This is definitely something I was going to look into; someone else mentioned wanting to get rid of his audio hum, and I immediately thought of VST plugins (being that I also use them myself in apps like Ableton Live) -- I'll see what I can do here as well. CPU is pretty tight, but the audio thread should have room for a little bit of extra optional processing.

- Alpha masks for video - Good idea. I hadn't thought of masks. I could do that through the post-processing mentioned above. Another feature I've seen that I'd also like is converting specific color ranges to alpha on video devices - such as you may see on JP McDaniel's stream, or the EG Masters cup.

I can't promise any of these things right away, but all of them should be fairly doable.
 

Muf

Forum Moderator
Jim said:
- Bandwidth scaling - I'll look into it. My current setup should be compatible with dynamic setting changes, so I'll see what I can do.
I suspect that what FMLE does is maintain a dynamically sized send buffer; when the buffer fills up, it starts reducing the bitrate until the buffer is emptied, at which point it increases the bitrate again. Or there might be some RTMP magic behind it, I don't know.
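
As a toy sketch of that buffer-driven behaviour, something like the following could work; the fill thresholds, step sizes, and fields are assumptions for illustration, not how FMLE (or OBS) actually does it:

```cpp
#include <algorithm>
#include <cstddef>

// Toy adaptive-bitrate controller: when the outgoing send buffer backs up,
// step the encoder bitrate down; when it drains, step back up toward the
// configured maximum. All numbers here are arbitrary illustration values.
struct BitrateController {
    int currentKbps;
    int minKbps;   // e.g. the "insanely low" floor before dropping frames
    int maxKbps;   // configured target bitrate

    void update(std::size_t bufferedBytes, std::size_t bufferCapacity) {
        const double fill =
            static_cast<double>(bufferedBytes) / static_cast<double>(bufferCapacity);
        if (fill > 0.75)
            currentKbps = std::max(minKbps, currentKbps - 100);  // congested: back off
        else if (fill < 0.25)
            currentKbps = std::min(maxKbps, currentKbps + 50);   // draining: creep back up
    }
};
```

Calling update() once per sent packet (or on a short timer) would give the gradual ramp-down/ramp-up behaviour described above.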

Jim said:
- Video filters - I've thought about this as well. It should be fairly easy to implement at some point, but I want to keep it on the GPU if possible, so my idea is that I would like to make it so that you can run an image source through extra pixel post-process pixel shaders if desired. I'd prefer to keep things image processing off the CPU if possible, and shaders would be easier anyway. Image processing on the CPU is just super CPU/memory consuming.
Personally, I have much more CPU to burn than GPU, since StarCraft II only uses two threads, and I have 8 cores with HyperThreading, but only a single AMD HD5870 GPU. I realise that that's a fairly uncommon setup, but I think you'll have to concede at some point that certain more complex filters (like spatiotemporal denoisers) simply can't be implemented in shaders.

Jim said:
Also, what was going on with your video capture?
I'm not entirely sure what the underlying cause was, as I had another identical game board that synced just fine. The video source was 320x240 CGA, and the capture card interpreted it as 640x480 VGA.

The image was horizontally stretched, starting at 4x stretched and gradually tapering off to actual pixels, with the image spilling out from even lines into odd lines, so one half of the image was on odd lines with the other half on even lines. To make matters worse, scanlines "started" in the middle of the frame, meaning the left of the image was actually the far right of the image.

To fix it, I wrote an Avisynth script that first separated the even and odd lines of the image, then horizontally cropped out three segments of the frame, the part that was 4x stretched (and squeezed it back into its original size), the middle part that was 2x stretched, and the last part that rapidly tapered off to actual pixels (which was too complex to correct). I then stitched the three parts together and resized the result to 640x480.

Because of the offset (image starting in the middle of the frame) there were parts of the image missing on the left and right in the reconstructed frame, but thankfully I could set the game to single player mode so that all the in-game action would occur between the missing parts of the image.

I wish I would've made a screenshot, because I don't have the culprit video source with me any more (it was borrowed from a friend). I do have the Twitch recording of the live stream though, if you're interested. The video feed at the top right is the one I've stitched together.

The shaking in the bottom left feed is because it was captured using an AVerMedia capture card, and I needed to run the 320x240 signal through a scan doubler to make it 640x480, because the AVerMedia card doesn't support capturing resolutions lower than 640x480. The output of the scan doubler isn't exactly stable, but at least it works (and I wouldn't be able to capture 3 CGA sources otherwise).

I'm not using OBS, obviously; I'm using VHMultiCam (the free predecessor to XSplit) fed into FMLE, with ffdshow injected into VHMultiCam's DirectShow graph using DirectShowSpy and GraphEdit (you have to stop the graph, insert the filter, connect it between the capture source and VHMultiCam's frame grabber filter, and then restart the graph and hope you haven't crashed VHMultiCam in the process).

Jim said:
- Audio filters - Yes! This is definitely something I was going to look into, someone else mentioned wanting to get rid of his audio hum, and I immediately thought of VST plugins (being that I also use them myself in apps like ableton live) -- I'll see what I can do here as well. CPU is pretty tight, but the audio thread should have room for a little bit of extra optional processing.
That's so awesome to hear! This is actually the one feature I was least expecting you to take seriously, and so far I've heard of no other streamers using VSTs. I'm sure people will start messing with it as soon as it stops being such a faff.

Jim said:
I can't promise any of these things right away, but all of them should be fairly doable.
Great! I can't wait to be able to start using OBS for serious streaming. Keep up the good work :)
 

Lain

Forum Admin
Forum Moderator
Developer
Muf said:
Personally, I have much more CPU to burn than GPU, since StarCraft II only uses two threads, and I have 8 cores with HyperThreading, but only a single AMD HD5870 GPU. I realise that that's a fairly uncommon setup, but I think you'll have to concede at some point that certain more complex filters (like spatiotemporal denoisers) simply can't be implemented in shaders.

I don't know specifically what you want to do, so in order to try to prevent any miscommunication, I'll try to address the individual possibilities that come to mind:
1.) You want the ability to manipulate any/every individual image source via the CPU with a filter - This is something I would firmly object to, as it would require redesigning the graphics pipeline. To me, this would be the most terribly nightmarish scenario, and one that I -hope- you don't mean.
2.) You want to just manipulate the output image - that's far more doable. I'm not the most comfortable with it, but it's far more doable as long as it's kept in separate threads. It's also something I do intend to allow at some point, as I'd like to allow for the option of software downscaling and image space conversion.
3.) You want to manipulate a DirectShow image source - Much easier, that would be no problem really.
4.) You just want to do CPU image manipulation on a specific image source - In this case, you'd want to write a plugin or custom code that does your image manipulation before it's uploaded to the GPU.

Basically, what I'm saying is that once things get on the GPU, they're not going back to the CPU again until it's time to send them to the video encoder for streaming. That's really all I mean. And the main capture thread is also critical; it's really the sole reason I express concern about CPU usage. The capture thread -has- to stay under 15ms in order to maintain a smooth 60fps (the total frame budget at 60fps is only about 16.7ms). The app is essentially a 3D graphics engine that outputs to a stream, so doing things through the GPU is always easier, and often preferable, as the GPU is designed for image manipulation.

The CPU and GPU are in an interesting balance with the way I have it set up at the moment; I really like it, and so far it's been working really well for everyone, so I'm just wary about some sort of major design change.
 

Muf

Forum Moderator
Jim said:
3.) You want to manipulate a DirectShow image source - Much easier, that would be no problem really.
That one. Use case scenarios: patching up that corrupted video problem I described, denoising a noisy webcam, deinterlacing an interlaced video source (like a 1080i HDMI camera or games console), delaying video by an arbitrary amount (although it would also be nice to have a global A/V delay setting, to tweak audio sync).
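
For the "delaying video by an arbitrary amount" case, a minimal sketch of the idea is a frame queue that only starts releasing frames once it holds the desired delay; the frame type and the fixed frame-count delay are simplifications for illustration:

```cpp
#include <cstddef>
#include <cstdint>
#include <deque>
#include <utility>
#include <vector>

// Sketch of delaying a video source by a fixed number of frames: incoming
// frames are queued, and nothing is emitted until the queue is full, after
// which the oldest frame is released for each new one. A real implementation
// would delay by time rather than frame count.
class FrameDelay {
public:
    explicit FrameDelay(std::size_t delayFrames) : delay_(delayFrames) {}

    // Returns true and fills `out` once the delay line is primed.
    bool push(std::vector<uint8_t> frame, std::vector<uint8_t>& out) {
        queue_.push_back(std::move(frame));
        if (queue_.size() <= delay_)
            return false;                 // still priming the delay line
        out = std::move(queue_.front());
        queue_.pop_front();
        return true;
    }

private:
    std::size_t delay_;
    std::deque<std::vector<uint8_t>> queue_;
};
```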

I played around a bit with OBS 0.381a today, and noticed a few things:

- When your canvas is less than 694 pixels wide, there is no way to view the preview at 1:1 pixel scale. With my slow ADSL, I have to stream at 640x400 to prevent a complete blockfest at my bitrate (don't worry, I'm upgrading to VDSL soon). I would suggest adding a few preset scale options (25%, 50%, 100%, 200%) to the right-click context menu for the preview (which currently only contains "Enable view"). If the canvas is too small for the rest of the OBS GUI, just centre the smaller preview inside the larger OBS GUI.

- When switching scenes, capture sources are deinitialised/reinitialised. This wouldn't be so bad if it weren't for the fact that OBS doesn't wait for them to finish initialising before switching scenes (causing you to see a "build-up" of sources that pop up over each other). Also, two scenes with the same capture source will still reinitialise the capture source, causing it to disappear momentarily and then reappear.

- I would move the name entry for a source to the same window where you choose which device to use. That way, you can autofill the name with the device name, and the user can choose to add something to that or rename it entirely. While testing, I often end up with sources called "fgsfds" because I keep having to type a name.

- OBS doesn't have an audio mixer. I realise almost everyone does their audio mixing with Virtual Audio Cable (as do I), but it would be far more elegant to be able to mix multiple audio sources within OBS. Also, I'm fairly sure Windows 7's WASAPI allows you to intercept audio from any given application (so long as it isn't in the PMP) regardless of what output device it's using, allowing a similar feature to the "Window capture", but for audio. Which leads me to suggest:

- VU meters! It's always nice to know if your microphone is possibly distorting or if you're too quiet. I would place them vertically, left of the video preview.
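
As a small sketch of what would feed such a meter: take the peak absolute sample of each audio block and convert it to dBFS, which the UI could then map onto a bar (the -96 dB silence floor here is an arbitrary choice):

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>

// Compute a peak level in dBFS for one block of float audio samples
// (range -1.0 .. 1.0). A meter widget would then map, say, -60..0 dB onto
// its bar height and flag anything near 0 dB as potential clipping.
double peakDbfs(const float* samples, std::size_t count)
{
    float peak = 0.0f;
    for (std::size_t i = 0; i < count; ++i)
        peak = std::max(peak, std::fabs(samples[i]));
    if (peak <= 0.0f)
        return -96.0;                    // treat digital silence as -96 dBFS
    return 20.0 * std::log10(static_cast<double>(peak));
}
```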
 

Lain

Forum Admin
Forum Moderator
Developer
Hey again. Yep, DirectShow works on a dynamic texture, so you can basically manipulate it all you want before uploading. I'll definitely take a look now that we've clarified that ^_^

Currently there appears to be a little bit of an audio/video sync issue with DirectShow, depending on what filters the filter graph decides to throw in. I'm going to try to do more colorspace conversion myself in a future update to reduce CPU usage and image delays.

- Canvas thing - Ah, yes, you are correct. I'll adjust that.

- The initialization/destruction of capture sources thing is only really an issue when using devices. This is actually the reason why I made the option for "global" sources. Global sources do not get destroyed across scene changes, and persist through scenes.

- The naming thing is indeed annoying. I'm actually going to get rid of the pop-up window and make the name editable in the list box, and have it start with a default name or something.

- I'm not entirely sure what you mean when you say the application doesn't have an audio mixer. It does have an internal audio mixer. As for selecting specific applications, that's a bit problematic. We've been discussing it in another thread -- viewtopic.php?f=7&t=33

- Meters are another good idea, I hadn't thought of that. I'll see what I can do. Thank you very much. ^_^
 