OBS problems on Ubuntu

OK AaronD, thanks for the advice. You're the expert, but I have to start from simple things, so I'll go step by step: first I'll do some tests with the 57 hanging from the ceiling, and only if I don't get an acceptable result in terms of volume will I opt for a microphone suited to being at a distance.
I knew Ardour by name but I've never used it.
 
I forgot one thing: is there a cable to connect the output of the Behringer to the PC, so I can hear the voice or the instrument? Is this connection necessary to hear audio through headphones?
From what I see, it should be a cable going from the jack output over USB to the PC, to use the PC speakers or external speakers connected to the PC?
 

AaronD

Active Member
I knew Ardour by name but I've never used it.
It can be intimidating at first, because of all it does. You might think of it as the ultimate digital mixing console, that combines the vast digital toybox of processors with analog freeform patching between them. But once you get started with something simple, it's fairly easy to expand on that and grow into it.

I forgot one thing: is there a cable to connect the output of the Behringer to the PC, so I can hear the voice or the instrument? Is this connection necessary to hear audio through headphones?
From what I see, it should be a cable going from the jack output over USB to the PC, to use the PC speakers or external speakers connected to the PC?
Its only power supply is USB, and it does need power. BUT, it *only* needs power to do its internal functions. My demo video actually had it plugged into a phone charger.

To hear the inputs directly in the headphones and line-level outputs, there's a switch for that. I almost always have that switch off, so that the important PC app can send back what it actually gets without competing with the local loopback, but it does work to turn it on and listen directly, even when powered from a phone charger.
 
OK, yes, I saw that there is the dedicated switch and now I can hear the audio in the headphones. Excellent, I'm really happy with the Behringer, thanks for the advice!
I also have Ardour installed. So I don't need any other connections to hear audio using Ardour? In practice, should I hear whatever enters the Behringer through the PC speakers? I have two speakers connected to the Linux PC via USB, while the Windows PC has its own internal speaker.
 

AaronD

Active Member
Ardour doesn't do anything by default. It's such a general thing that any default will be unusably wrong for most people. So you have to build up what you want it to do. Once you figure out how to add a track, your capabilities will explode from there. Routing is important too, so make sure you keep track of that!

It does have an example project of a generic semi-modern band, but the best way to understand what's really going on is to start with the blank one and build it all yourself.

Another big boost is to switch to Ubuntu Studio, because it comes with not only OBS and Ardour preinstalled, but a ton of effects processors that are already available and working. I've had that problem before: I do everything right to install some processors that I like, and they still don't show up. UStudio already happens to have my favorites by default, ready and working. (great minds think alike???)
If you add OBS's official PPA (see the sticky thread in this forum), then the normal update process gets you the current version automatically.
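As a sketch of what that setup usually looks like on Ubuntu (the PPA name here is the OBS project's usual one; check the sticky thread for the current instructions):

```shell
# Add the official OBS Studio PPA and install from it.
sudo add-apt-repository ppa:obsproject/obs-studio
sudo apt update
sudo apt install obs-studio

# From then on, the normal "sudo apt upgrade" keeps OBS current
# along with everything else on the system.
```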
 
Thank you for the always-precious guidance, which I will take into account, but first I need to do some simple tests to gain some confidence!
 
Hi AaronD, I'm looking at the features of Ardour, and at least as far as simple things are concerned, it doesn't seem very difficult: to start, you add tracks and then work on them. But I have a problem. As you know, I'm using the Behringer, so I would expect that when I open Ardour, add a track, and talk into the mic plugged into the Behringer, I should see a level meter on the Ardour panel showing that audio is coming in, but I don't see anything. What could this be about? Is there anything in particular to set?
I understand that we are talking about OBS here, but if you could help me, thanks!
 

AaronD

Active Member
Ardour has a forum over here:

The short answer is that that's part of the Routing that I was talking about. Every track, bus, and other similar thing has its own input routing and output routing. You need to tell it where to get its raw signal from, and where to send its finished signal to.

Sometimes one of those is determined for you, like if you send several tracks to a bus for example, then you only have to do the track outputs, and the bus input is already determined from that. But you do need to know that both exist, and if something's not right, check all of them.
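If Ardour is running on JACK, that routing can also be inspected and patched from the command line. A minimal sketch, assuming the JACK command-line tools are installed; the Ardour port name below is only an illustration (real names vary per session), so list the ports first and copy yours:

```shell
# Show every JACK port currently available, with its exact name.
jack_lsp

# Connect the interface's first hardware input to a track input.
# "system:capture_1" is JACK's usual name for hardware input 1;
# the Ardour side is a made-up example - use a name from jack_lsp.
jack_connect system:capture_1 "ardour:Audio 1/audio_in 1"
```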

Then if you want to have auxiliary sends and side-chained effects, that's even more routing.......
 
Hi, I understood a little of how it works and I was able to record something. I didn't see the level meter because it appears only when recording is active. However, I have a problem on Linux: the recording is jerky, and then it stops with a message like "recording stopped due to an ALSA in/out problem". Is this a driver problem?
 

AaronD

Active Member
It might be a bad driver, or a weird configuration, or any number of things. You're probably better off getting support from its own community though. I've had different problems with Ardour, that were solved in that forum, but I haven't had yours. So I don't think I can help you with it. Sorry.
 
Hi AaronD, with OBS I can record both audio and video at the same time. If instead I wanted to use only the video rendered through OBS, with the audio taken from another source, for example audio played by Ardour or other software such as Audacity (I also tried Audacity and the jerky-recording problem does not occur), is it possible to tell OBS to use the audio played by Ardour or Audacity as its audio source?
I've seen around the net that many people record video and audio separately and then synchronize them. Perhaps they do this to use software more suitable for audio recording, but then it has to be synchronized with the video. How do you do this type of recording? I mean the method. For example: you put on a backing track, play over it while recording the video, then of that recording you use only the video part and synchronize it with an audio file recorded separately with the more audio-suitable software? Is this the method?
 

AaronD

Active Member
If you produce the final result live, then you of course need all of the connections to happen live, and OBS is like the "final assembly" step.

If you want to connect the live audio processor to OBS, you need a loopback. Essentially, this is a virtual speaker and a virtual mic, and whatever you send to that "speaker" shows up in that "mic".

For Windows, this is something that you can google and install. Virtual Audio Cable seems to be popular, but there are others too.
For Ubuntu and similar, it already does that natively, as part of both PulseAudio and JACK, and PipeWire too when it becomes mainstream to replace both of those. I believe PW is already "live" in the latest version, but not the latest LTS. At any rate, google is your friend again!
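On PulseAudio, for example, a loopback can be assembled from two standard modules. A minimal sketch (the sink name `loop_sink` is arbitrary, and the app/OBS device names will appear under whatever description you set):

```shell
# Create a virtual output ("speaker"). Anything an app plays into it is
# captured by its monitor source, which acts as the virtual mic.
pactl load-module module-null-sink sink_name=loop_sink \
    sink_properties=device.description=Loopback

# Optional: also feed that audio to the real output so you can hear it.
pactl load-module module-loopback source=loop_sink.monitor

# Then point Ardour's/Audacity's output at "Loopback", and in OBS add an
# Audio Input Capture source using "Monitor of Loopback".
```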



If you record the parts separately and then put them together later, then OBS is not the right tool anymore. The final assembly then, is in a video editor, that has a bunch of tracks like Audacity or Ardour, except that these tracks carry video. Move things around, line them up, make your transitions and effects, etc., all "outside of time" if you will, play it in the editor to see if you like it, tweak if you don't, and then export when you're done.

You might think of the export in a similar way to OBS, except that it's all scripted, and you just spent however long it was to make the script. It plays the script, and sends the result through the same encoder that OBS does. If your computer can't keep up with the script in real-time, that's fine. It's all math at that point, not physics, and the result will (eventually) come out like you told it to...though your edits might have accidentally compensated for the preview not keeping up live, so you might need to fix that and export again.

The concept of "proxy files" is supposed to solve that problem, by converting the original sources into something smaller - less data to shove through the pipeline - so that you build the script with the smaller files and then export with the originals. That's all handled in the editor, if you tell it to do that, so that you don't get confused. You just have to remember that the preview is not full-quality.

My favorite editor is Shotcut, because it's free and runs on everything, it's easy enough to get started with, and has a TON to grow into:
 
As always you are a mine of information. As always I will have to start from something easy, and I was thinking about this:
Use OBS connected to the external camera as the video source. At the same time, import a backing track into Audacity and record the audio that I play over it. This way I would record the video with the camera and OBS, and the audio with Audacity. Then, with a video editor, I would cut the initial part of the video and of the audio track so that they start from the same point, and consequently images and audio are synchronized. It should be fine, since they were recorded at the same time anyway; they are simply out of phase.
Do you think it can work?
Also, OBS sees the smartphone as an audio source; can the smartphone camera therefore be used too?
 

AaronD

Active Member
That might work...though at that point you're probably better off recording in the camera if it'll do that, and not OBS.

As you say, play the backing track(s) while recording the new one - that way the audio stays in sync - and the video file from the camera has its own "scratch track" of audio that is known to be in sync with the video. Mix the good audio and export the final soundtrack, then put each video file on its own track in Shotcut (you might have a lot of tracks; that's fine), and use Shotcut's "Align To Reference Track" feature to automatically line up each video file with the good soundtrack.

Then mute the video tracks, leaving only the "good" sound, and be careful to not mess up the alignment while you edit the visuals.

Ardour is far better than Audacity for this. :-) But they both work.



If you do it that way, it might be worth recording everything twice: first to make a workable "scratch track" that has everything on it, and for the second time that you actually keep, you're always playing to a "minus one" mix of the scratch track. Might give you a better idea of the context that way, as you record the real one.
 
OK, out of curiosity: in OBS, is it possible to add a smartphone as a video capture source, to take advantage of its camera?
 

AaronD

Active Member
There are lots of ways to do that! Pretty much all of them have an app on the phone that streams the camera, and either instructions or a plugin to catch it in OBS.

The common theme though, is latency. Some are unusably delayed for studio use, because they're designed to be reliable across the public internet. Others are fast enough to be a second studio cam, but will probably fall flat if you go beyond the local network. Unfortunately, none of them say what they're optimized for, only that they "send video".

For a studio cam, I like IP Webcam on Android. No experience with iOS.
 
I notice a little delay in the video captured with OBS. To give an example: if I move an object, the movement is shown with a delay of about half a second to one second. I tried decreasing the FPS value but it doesn't change much, and even changing the resolution I don't get an improvement. Can I change some other parameter?
 

AaronD

Active Member
Live video does that. If you think in terms of discrete-time sampled streams, and each video frame as one sample, then you have quite a low sample rate. Some things absolutely require a full-frame buffer to work, which necessarily adds at least one sample of latency at that point in the chain. Because of the low sample rate, it doesn't take very many of those to become noticeable.

Compare that to audio, which typically has sample rates around 48kHz (reasonable) to 192kHz (ridiculous). Audio can afford a few hundred samples' worth of buffer without anyone noticing, whereas video can barely afford just a handful.
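To put rough numbers on that (a back-of-the-envelope sketch; the 30 fps and 256-sample figures are my assumed examples, not from the thread):

```shell
# One video "sample" (frame) of buffering at 30 fps, in milliseconds:
awk 'BEGIN { printf "one 30fps frame buffer: %.1f ms\n", 1000/30 }'

# A typical 256-sample audio buffer at 48 kHz:
awk 'BEGIN { printf "one 256-sample audio buffer: %.1f ms\n", 256/48000*1000 }'

# A few full-frame buffers in a video chain quickly reach ~100 ms of latency,
# while hundreds of samples of audio buffering stay well under 10 ms.
```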

Most of the time, even for pro rigs, we just live with a slightly-noticeable amount of latency. More for remote displays where you don't have the original to compare to. Then we delay the audio, more than is needed by itself, to make it line up with the video again. That's what the Sync Offset is for, in OBS's Advanced Audio Properties.

Every once in a while, we might delay the video too, because we're working with the speed of sound in air across a large audience and we have to line up with *that*, but that's relatively rare.
 