ASIO OBS and other options to capture audio

marchvet

New Member
I have spent the last 3 weeks trying to resolve audio crackling in OBS. Here is my setup along with the most recent log file.

Windows 11
NVIDIA GeForce RTX 3070 Ti (driver: 560.81)
Intel i7-13700k
32 GB RAM
OBS 30.2.2
ASIO OBS
Podmic going into a Roland VT-4 and out to computer via USB from VT-4
Ableton Live 12
Logitech Brio 4k camera
Focusrite Scarlett 4i4

Problem: I add the Ableton source as an "ASIO Input Capture" and select Focusrite USB ASIO as the device, format: stereo, OBS channel 1 = Loop 1 and OBS channel 2 = Loop 2. The mic is added as an "Audio Input Capture" with device: WET (2- VT4). However, if I adjust the volume on the ASIO Input Capture, the mic's volume is also affected.

Since these are set up as two different sources, shouldn't they be separate, with one not affecting the other?
 

Attachments

  • 2024-08-11 12-36-15.txt
    28.7 KB · Views: 100

MrGhost

Member
Don't use ASIO (that plugin you mentioned). Use OBS's own Audio Input Capture sources, then pan to one side (if using a mono input such as a guitar or mic) and set it to mono in OBS's Advanced Audio Properties. Use VBAN (Voicemeeter) if you are trying to do some sort of loopback virtual cable using your CPU's resources.

ASIO has been bad in my experience: it crackles, and your computer should be able to handle that load, so it doesn't seem like your computer is what's crackling. I use a 12/24-core computer and it crackled once before when I used a Roland device as an input via Direct Audio, ASIO, WDM, or MME. Fortunately I found the Kernel Streaming option and tried it in the Voicemeeter program. At least VBAN will probably let you choose KS for your Roland source. But it sounds like you are using your Roland Vocal Transformer as a direct input to OBS through that ASIO plugin. Just remove the ASIO plugin from your setup.

And your 4i4 is your Ableton engine? If you want to get a good signal into OBS you need some sort of loopback, or you could go with an actual physical output patched to a physical input. You are not using very many ins or outs, right? A 4i4 has 4 ins and 4 outs, so if you are not using all of them, you could run 2 outs to 2 ins (1 stereo pair) and get your audio out of Ableton that way. I do that and recommend it (of course I don't have the same interfaces).

What I don't recommend is using the ASIO plugin for OBS. ASIO just isn't a good thing to use; I won't use any ASIO anything. KS is good, and Voicemeeter Banana (donationware) has it as an option.

As soon as you use that ASIO plugin you are going to get bad results. Nothing against that plugin specifically; it's ASIO in general.

I switched from other types of audio sources, and KS worked for me. I'm not saying it's a fix-all, but you should try it.
 
Last edited:

marchvet

New Member
I ended up reinstalling OBS and trying restream again with no issues. Same settings, same setup, but it appears to work. I am going to make a note of your suggestions for the future in case restream starts acting funny again. Thank you.
 

AaronD

Active Member
Since you have a DAW already, I'd highly recommend that you do EVERYTHING in there. Receive all the sources, do all the processing, drive all the destinations, etc. OBS then, becomes silent except for what it actually needs to put the stream together, and that's a dumb, straight-wire passthrough.

If you have an audio source that originates in OBS, like a video soundtrack or browser source, (ab)use the Monitor to get it into the DAW as another dumb passthrough. Monitor Only, so it doesn't go *directly* to the stream; and the DAW return has Monitor Off, so it doesn't create a feedback loop.

Then you won't need ASIO in OBS, or the stereo-to-dual-mono trick that @MrGhost mentioned, because the DAW does everything, and it's actually designed for it. Just two loopbacks - DAW to global source in OBS, and maybe OBS Monitor to DAW - both as dumb, straight-wire passthroughs in OBS, and that's all that OBS does with audio.

If you have headphones or studio monitors, that connects to the DAW, not OBS.
If you have an online meeting, its audio in both directions connects to the DAW, not OBS.
Etc.
Everything to do with audio connects to the DAW, not OBS.

If you need to turn audio sources on or off based on what you're doing in OBS - as if it were a local source in a specific scene, for example - the Advanced Scene Switcher plugin can send Websocket and Open Sound Control (OSC) messages. Read the documentation to see what your DAW wants to receive, and set up Adv. SS to send it.
One useful trick, if the message itself can only create a step-change, hard cut-on/off, is to use that message indirectly, to control a full-scale 20kHz sinewave generator that only goes to the side-chain input of a gate. The timing controls of the gate, then, create a fade. My rig has the first channel strip dedicated to that sinewave generator, with several pre-fade aux sends that are controlled separately, and each goes to a different place.
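(If it helps to picture the OSC side: here is a rough, hypothetical sketch of the kind of message Adv. SS ends up sending to the DAW, written as a small Python script with python-osc. The /strip/send/gain address, the strip and send numbers, and the port are Ardour-style assumptions on my part; check your own DAW's OSC documentation for the real addresses.)

    # Hypothetical sketch only: the sort of OSC messages Adv. SS would send.
    # Addresses and argument order are Ardour-style guesses, not a definitive API.
    from pythonosc.udp_client import SimpleUDPClient

    client = SimpleUDPClient("127.0.0.1", 3819)  # 3819 is Ardour's default OSC port

    # Open the sinewave's aux send on channel strip 1 (0 dB), which opens the gates...
    client.send_message("/strip/send/gain", [1, 1, 0.0])

    # ...and close it again (-200 dB, which the DAW treats as -infinity) so they fade out.
    client.send_message("/strip/send/gain", [1, 1, -200.0])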
 

MrGhost

Member
Sounds fascinating. What exactly do you do with the sine wave? I couldn't quite understand its purpose. Is it so the A.S.S. can effect some control, such as creating a fade? I didn't get what the fade was for.

It's my preference to use OBS and also use a DAW. I use 2 interfaces and an ADAT connection, and I also use VBAN. Every hardware synth that goes to the DAW also goes to OBS as a stereo input (or mono, panned), so I have about 10 OBS audio inputs. Only one of my OBS inputs carries the DAW sound, but it makes things considerably louder when it is active. It comes from a stereo pair patched into one interface, so it's a direct input to OBS.

I don't have latency problems. The two copies of each instrument don't have any audible timing difference, because of the patch out from the DAW. This requires a fast computer. I like to put it all in there. My MIC doesn't go to my DAW at all; I only do instrumental music there. It's generally music that I run this way, but I also do gaming (without any DAW) with the MIC and 7 or 8 filters on its OBS input. For the DAW music, I have a stereo pair patched from the DAW engine's interface. With gaming, I run the game audio through the interface patch so I can pick it up separately from the MIC in OBS. That way I can still toggle the MIC with the Companion button on the streaming (2nd) computer and get the audio without ever involving the desktop audio, which is usually my monitor for everything.

I use Bitfocus Companion and a Stream Deck on a 2nd computer to toggle the MIC when necessary.

I find that I am able to control the MIC best this way. I could record it to the DAW but I'm really not a singer.
 

MrGhost

Member
If you need to turn audio sources on or off based on what you're doing in OBS - as if it were a local source in a specific scene, for example - the Advanced Scene Switcher plugin can send Websocket and Open Sound Control (OSC) messages. Read the documentation to see what your DAW wants to receive, and set up Adv. SS to send it.
One useful trick, if the message itself can only create a step-change, hard cut-on/off, is to use that message indirectly, to control a full-scale 20kHz sinewave generator that only goes to the side-chain input of a gate. The timing controls of the gate, then, create a fade. My rig has the first channel strip dedicated to that sinewave generator, with several pre-fade aux sends that are controlled separately, and each goes to a different place.


But OBS has a gate. With the Noise Reduction you get that too. And it is easy to hotkey a MIC button.

At any rate, if you are going to use WebSocket you can use Bitfocus Companion and do it all even if you have only one computer, using a hotkey for anything, in a poor man's do-it-all-for-free setup. Bitfocus Companion doesn't require a Stream Deck; you can hotkey straight to applications. Don't leave anything important up to the Advanced Scene Switcher, which acts pretty experimental at best, listening for messages and missing quite a lot of scene switches.
 

AaronD

Active Member
What exactly do you do with the sine wave? I couldn't quite understand its purpose.
1723559964143.png

Different DAW (Ardour), different OS (Ubuntu Studio), same idea.

The far left channel strip (black at the top) has the sinewave generator on it. (C* Sin) It's technically an effects plugin, just like an EQ, compressor, reverb, or whatever else, but it ignores its input and simply outputs a pure sinewave. Following that on the strip are two aux sends (Ctrl_Mics and Ctrl_Play), and then the Fader. Those aux sends are what the OSC messages control, as a hard-cut on or off. (0dB or -200dB, which Ardour interprets as -oo)

The final output of that channel strip goes nowhere. (big button/label below the fader) So the only output is the aux sends, and they only go to the sidechains of several gates.

The Room Mic Mix bus (rightmost red, and slightly darker) has, after its bus processing, another aux send that goes to a loopback for something else to grab the always-on room sound. (I use it to drive a "convenience meter" in OBS, but OBS doesn't send it anywhere) Then the fader, and then an A/V sync delay. After that is a gate, which takes its sidechain from the Adv. SS controlled aux send that has the sinewave on it.
1723560768281.png

500 ms Attack and 500 ms Release cause the mic mix to fade in and out, respectively, at that rate when the sidechain signal comes and goes. As shown here, the sinewave is present (SideChain and Envelope meters), so the gate is open (no gain reduction), but nothing is actually going through it at the moment (the In and Out meters are empty).

Following that is another aux send (post-fade this time, as you can see from its position relative to the fader, which also controls its color in this DAW), to give the option of recording the controlled but not ducked signal. And finally the ducker, which is a compressor set to be super aggressive (as you can see from its graph) and takes its sidechain from the "Rm Mics SC" aux send on the Meeting Rtrn channel (rightmost yellow).

Finally, the output of the Room Mic Mix is routed to the Meeting Send bus (hence the ducker/"squasher", to avoid a debilitating echo to the remote people), which is itself supposed to go to a loopback that the meeting picks up, but the loopbacks are not active for these screenshots.

The Meeting Rtrn channel (rightmost yellow) has its remote-control gate also side-chained to the same Ctrl_Mics signal, since I'm trying to consider the local and remote people to be as equal as I practically can.

The Playback channel (green) is fed from a loopback that becomes the OS's default audio device when I run the script that sets all of this up for real. It has a post-fade aux that goes to the Local Spkr bus, and then another gate that is side-chained to the *other* Adv. SS controlled aux send. (Ctrl_Play) Its final output is routed to Meeting Send (yellow) and Record (purple)

The Local Spkr bus has a speaker-correction EQ ready to use, and is supposed to connect to the physical device which is also not present for these screenshots.

The Record bus goes to another loopback, which is the only one that OBS actually does something with. As you can see, there's a lot of complex processing to come up with that signal, and by the time it gets here, it's completely finished. OBS is a dumb straight-wire passthrough.

But OBS has a gate. With the Noise Reduction you get that too. And it is easy to hotkey a MIC button.
Yes, sorta, and yes.

  • OBS has a gate, but not like this!
  • Noise Reduction is NOT a gate. I have one as a plugin on each of the raw mic inputs individually (called "Noise repellent" here, as one of several different options), as well as the Meeting Rtrn.
    • Noise Reduction tries to figure out what speech is, specifically, and remove everything else. No threshold of input level. Lots of people have just slapped them on without understanding that, and wondered what happened to their music or the more repetitive sound effects.
      • A gate compares its side-chain input (which is normally connected to its signal input unless specified otherwise) to some threshold. No more logic than that. If loud, open the gate (allow the entire signal through), according to the Attack time. If quiet, close the gate (stop everything), according to the Release time. It's not actually as useful as people tend to think it is, because an open gate allows noise through too. It can't "unmix". (See the sketch after this list.)
      • For low-level noise that is easily drowned out, it can be useful as a subtle clean-up, but louder noise is often more distracting when it comes and goes than if it simply stays there.
      • It can also work for drums, to help a trailing "ring" to fade out and to make the hit more "punchy", in which case it's used to create a sound rather than fix a problem.
  • Once you understand Adv. SS, and your DAW, it's not all that much harder to reassign that same hotkey to trigger a macro that sends a message to the DAW. Yes, there are more steps involved, but the amount of control and flexibility that you get from that is lightyears beyond what OBS could even dream of!
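To make that gate behavior concrete, here's a minimal sketch of the threshold-plus-attack/release logic described in the list above. It's my own illustration, not OBS's or any plugin's actual code, and all the names and numbers are made up:

    # Minimal gate sketch: compare the side-chain level to a threshold, then
    # move the gain toward open or closed at the attack/release rate.
    def gate(signal, sidechain, threshold=0.05, attack_s=0.5, release_s=0.5, fs=48000):
        out = []
        gain = 0.0
        attack_step = 1.0 / (attack_s * fs)    # per-sample gain rise while opening
        release_step = 1.0 / (release_s * fs)  # per-sample gain fall while closing
        for x, sc in zip(signal, sidechain):
            if abs(sc) > threshold:
                gain = min(1.0, gain + attack_step)   # fade in over the attack time
            else:
                gain = max(0.0, gain - release_step)  # fade out over the release time
            out.append(x * gain)  # it scales the whole mix; it can't "unmix" noise
        return out

With a full-scale 20 kHz sinewave on the side-chain, that threshold test becomes a remote on/off switch, and the attack/release times become the fade.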

...and do it all even if you have only one computer...
My rig is all on one computer:
  • Two simultaneous instances of OBS, producing different content at the same time, with Adv. SS on both, and using WebSockets to coordinate them.
  • Adv. SS also sends OSC messages to the DAW on the same machine, to coordinate that.
  • The VCam from one OBS gets picked up by the meeting app. (actually an external instance of the Chrome web browser) Its local audio, like a video soundtrack, goes through the Monitor to the DAW, and nowhere else. As detailed above, the Meeting Send gets its audio from the DAW.
  • The other OBS window-captures the meeting and gets the Record signal from the DAW.

...do it all for free...
Mine is all free too, *including the operating system*!

Ubuntu Studio even comes with (almost) all of the parts pre-installed and already working. You just have to install Adv. SS, which is not hard at all, and put everything together.

Don't leave anything important up to the Advanced Scene Switcher, which acts pretty experimental at best, listening for messages and missing quite a lot of scene switches.
Works for me. (quick composite of two screenshots, as the content is taller than my screen)
1723564088095.png

This matches any scene that starts with "Feat - ", following my naming convention, and tells my DAW to turn off the Mics and turn on the Playback. (channel strip 1, aux sends 1 and 2, -200dB and 0dB)
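(For what it's worth, that kind of pattern match boils down to a string/regex test against the scene name. A throwaway Python equivalent, with made-up scene names, looks like this:)

    import re

    # Hypothetical stand-in for the Adv. SS scene-name match:
    # anything following the "Feat - " naming convention triggers the macro.
    for name in ["Feat - Piano", "Feat - Guest", "Announcements"]:
        if re.match(r"^Feat - ", name):
            print(name, "-> Mics off, Playback on")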

Another part of my naming convention is that a macro that ends in (*) also has a docked button in OBS's main window:
1723564857165.png

A different rig has maybe 30 macros at a guess, about half of which are actually looking for something and the other half are a library of frequently used subroutines. All of that works too. I've attached the all-at-a-glance documentation that I keep for it.

Advanced Scene Switcher is a programming language, full stop. It has to be, to allow as much flexibility as it does. So you need to think like that to understand what's going on. Once you do, and get its instructions right, it's rock-solid.
 

Attachments

  • Macros.pdf
    40.1 KB · Views: 8
Last edited:

MrGhost

Member
I have about 10 A.S.S. macros for video scene switching. The rest are dedicated to my MIDI-signals-to-text output, which is mostly operational. That's over 100 MIDI-to-Text macros. The MIDI text timers are pretty hard to set up, but they work pretty well.

I am thinking you are recording a room full of people saying things like AMEN and possibly singing gospel songs. I only record myself. The teams thing is another thing I have never tried to manage. I imagine the latency would cause a lot of gaffes. Guess you could get suspended in gaffa from it all!

Pretty cool though, man. I will try and catch a screenshot of my A.S.S. macros. They are really long. Really long A.S.S. macros. What I find with switching based on different video feeds, which catch different parts of my movements depending on which synth I am reaching for in a room with about 5 synths and 5 MIDI controllers (or which side of my longer synth), is that they are only so accurate. I also switch based on scene movement for screen recordings. These are not particularly perfect. Imperfect, but still pretty strong at switching. As far as I know no one has ever watched much of any recording I have made, aside from myself. But I am a connoisseur of my own A.S.S.-driven recordings.

If you want to watch my A.S.S. in action, I have a link to some videos someplace. The most recent video had streaming issues because the phone was overheating; I stupidly put it in a sunny window (which works in winter to keep it cool; I didn't know the sun would reach that window at that time, but it did!). But the video from a month or so ago made pretty good use of my A.S.S., with the plethora of stuff I have lodged in its macros. The Prophet synthesizer is the central focus of my A.S.S. macros; it is the MIDI source I use.

I should direct you to an earlier video in which I play the drums. There are more cameras active in a video where I play the drums, and there is a chance I will play the MIDI device I have mounted on the wall near my hi-hat, which also sends MIDI to the Prophet, across the same network MIDI that I use for the synth's own panel. So you can see me playing the drums, and when I reach over and turn a knob or two, the CCs change settings on the Prophet but also trigger the same text source, on timers, that the Prophet's own knobs trigger. I also made a 2nd text line, in the lower right of the screen of my recordings, to capture the program I dial up on the Prophet. This required a repeating group of digit-assignment macros which could determine, based on the particular MIDI messages received across the network channels, which program (111-558) was being hit. That's 200 programs: the first digit is 1-5, the 2nd digit is 1-5, and the 3rd digit is 1-8. It was strictly mathematical to get the programming. Then I had to include an F at the end for any program from the Factory programs (another 200, generated via the same macros). It only took one extra macro each for the F.

Going back a few months, I found a pretty good one where I actually discussed my A.S.S. right from the beginning of the video:

 

AaronD

Active Member
I am thinking you are recording a room full of people saying things like AMEN and possibly singing gospel songs. I only record myself. The teams thing is another thing I have never tried to manage. I imagine the latency would cause a lot of gaffes. Guess you could get suspended in gaffa from it all!
The one I got screenshots of, and most of the description, is for a small group meeting that has had some people move away, and they still want to join in. It's also been handy for when someone is sick enough to not want to infect everyone, but feels okay enough to join online. No singing in that group, just discussion.

It's really hard to get low enough latency without buffering, across the open internet, to make live music work from multiple sources at once! There have been some attempts, but I'm not convinced of their success yet.

My other rig, the one with 30 or so macros and a bunch of subroutines, is for the main service. No interaction there, except for YouTube's chat. Instead of a DAW, that one sends OSC messages across the network to a physical digital console. Otherwise it's a similar idea, except that I do fade out OBS's own volume when I don't want any external sound at all. That eliminates analog noise from the physical cord between the console and a USB stereo line input. Other than that, everything is done in the console, and OBS is just a passthrough.

Don't leave anything important up to the Advanced Scene Switcher, which acts pretty experimental at best, listening for messages and missing quite a lot of scene switches.
What I find with switching based on different video feeds, which catch different parts of my movements depending on which synth I am reaching for in a room with about 5 synths and 5 MIDI controllers (or which side of my longer synth), is that they are only so accurate. I also switch based on scene movement for screen recordings. These are not particularly perfect.
I think I see where you got the low reliability estimate. You're using a video feed to see when you're reaching for something. That's Machine Vision (search term), which has always been hard, even for career-long professionals. It does okay for kids'-show level shape and color recognition, and for VERY specific industrial quality control, but the lane-assist on my dad's truck still has trouble with some of the lane markers that are obvious to me.

I have about 10 A.S.S. macros for video scene switching. The rest are dedicated to my MIDI-signals-to-text output, which is mostly operational. That's over 100 MIDI-to-Text macros. The MIDI text timers are pretty hard to set up, but they work pretty well.
...200 programs: the first digit is 1-5, the 2nd digit is 1-5, and the 3rd digit is 1-8. It was strictly mathematical to get the programming. Then I had to include an F at the end for any program from the Factory programs (another 200, generated via the same macros). It only took one extra macro each for the F.
I wonder if you could consolidate some of that. Similar to my pattern-matched scene switch instead of a bunch of specific scenes. Yes, it works to have them all spelled out explicitly, but it's hard to maintain that much! The 30 or so macros that I use for the main church service already need a separate document to keep track of them, and you have several times that!
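(Purely as an illustration of the consolidation idea, here's a hypothetical Python sketch that derives the displayed program from a single 0-199 number instead of one macro per program. The 0-199 numbering and the bank ordering are my assumptions, not the Prophet's documented mapping:)

    # Hypothetical: one small function instead of 200 macros. Assumes programs
    # are numbered 0-199 and ordered bank-major; adjust to the real mapping.
    def program_display(n, factory=False):
        bank, rest = divmod(n, 40)      # 5 banks of 40 programs
        group, slot = divmod(rest, 8)   # 5 groups of 8 within each bank
        return f"{bank + 1}{group + 1}{slot + 1}" + ("F" if factory else "")

    print(program_display(0))           # "111"
    print(program_display(199, True))   # "558F"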

With the pattern match on the meeting rig, I can add and remove scenes as needed, and as long as they follow the naming convention that I documented right there in the first scene name (see that screenshot), it works with no more effort than that. Don't even open the Adv. SS window at all.

Going back a few months, I found a pretty good one where I actually discussed my A.S.S. right from the beginning of the video:

Sounds like you need to write it all out, because it's more than what can fit in your head at one time and work with easily. That could be a set of blocks like I did, or it can be a truth table like is commonly used for boolean/digital logic, or it can be a sequence diagram, or whatever. Good software isn't "just written", it's designed using those tools and more.

For the music itself, some people love that style, but I tend to get lost. Sorry! Looks like you know what you're doing though, so good there!
I think the closest I get is something like this, which happens to be pure analog:
 

MrGhost

Member
I think I see where you got the low reliability estimate. You're using a video feed to see when you're reaching for something. That's Machine Vision (search term), which has always been hard, even for career-long professionals. It does okay for kids'-show level shape and color recognition, and for VERY specific industrial quality control, but the lane-assist on my dad's truck still has trouble with some of the lane markers that are obvious to me.



I wonder if you could consolidate some of that. Similar to my pattern-matched scene switch instead of a bunch of specific scenes.

I have found that the %-changed condition works better for scene switching than matching against a pattern (which requires a screenshot).


Yes, it works to have them all spelled out explicitly, but it's hard to maintain that much! The 30 or so macros that I use for the main church service already need a separate document to keep track of them, and you have several times that!

With the pattern match on the meeting rig, I can add and remove scenes as needed, and as long as they follow the naming convention that I documented right there in the first scene name (see that screenshot), it works with no more effort than that. Don't even open the Adv. SS window at all.


Sounds like you need to write it all out, because it's more than what can fit in your head at one time and work with easily.
Generally it works. There are issues I face with that text source not appearing, or staying up too long. I think it may be largely due to using Studio Mode in OBS; the scene switcher seems to only change one and not the other (preview vs. program). I suppose I should try turning off Studio Mode in OBS. I am not sure I even use it anymore, and as far as I can tell it screws up when the A.S.S. is used with it.

I used to use it because I used to watch the preview's fullscreen projector a few years ago, but I can't even remember why I thought the preview was the one to watch then. It had something to do with the network-connected computer recording its own screen, so I needed the preview to monitor on the other screen. I think I recorded the program and looked at the preview, and both were the same; that way I could operate other things in the foreground without recording them. Now I don't really need to do that; I can just record the program. I think back then I never even realized there was a fullscreen program projector. That was all before the days of using the A.S.S. anyway.
That could be a set of blocks like I did, or it can be a truth table like is commonly used for boolean/digital logic, or it can be a sequence diagram, or whatever. Good software isn't "just written", it's designed using those tools and more.

For me it has been pretty hard to understand the timer functions in the A.S.S.; they never seem to run the way I thought they would. It operates in a weird way. I have mostly done it without guidance; I did ask some questions back last year when I first started using it. I haven't changed it much. It seems to work pretty much the way I have it set up, and I am pretty satisfied with how few imperfections it still has.
For the music itself, some people love that style, but I tend to get lost. Sorry! Looks like you know what you're doing though, so good there!
I think the closest I get is something like this, which happens to be pure analog:

Yeah, Jarre is sort of rich though. I have a video of him playing a Schmidt, which is very high end; no one has one of those. It costs more than any car I ever owned.

I am off to do some recordings of my microphone setup, which I do a few times a year. I will try to include some about the A.S.S. in there, because I know it's pretty long and I don't think I could screenshot much of it. Of course it's all been done before, but not for a year or so. I will get it up here in a few days, or maybe later today, and link it here.
 

AaronD

Active Member
I think it may be largely due to using Studio Mode in OBS; the scene switcher seems to only change one and not the other (preview vs. program). I suppose I should try turning off Studio Mode in OBS.
I always use Studio Mode. Works fine for me.

There *is* a difference, but it has to do with how OBS works, not Adv. SS. When you change the visibility of a source in Studio Mode, it changes everywhere *except* the Program. Then you're supposed to switch to it when you're ready. If you're not in Studio Mode, then the Program does change immediately. Again, that's an OBS thing, not Adv. SS, but it might be the difference that you need.

For me it has been pretty hard to understand the timer functions in the A.S.S.; they never seem to run the way I thought they would. It operates in a weird way.
You mean the Condition Timer?:
1723650741744.png

That runs as long as the base condition remains true. The entire condition is not true (and the actions don't run) until the timer runs out. If the base condition becomes false before the timer runs out, the timer resets (and the actions don't run).

Once the timer runs out, the actions run repeatedly unless the "only on change" checkbox is checked. Then the actions only run once when the timer runs out.
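If it helps, here's a rough model of that behavior in Python. It's my own paraphrase of the logic above, not the actual Adv. SS code:

    # Rough model of the Condition Timer described above (illustration only).
    class ConditionTimer:
        def __init__(self, duration_s, only_on_change=False):
            self.duration = duration_s
            self.only_on_change = only_on_change
            self.elapsed = 0.0
            self.fired = False

        def update(self, base_condition, dt):
            """Call once per check interval; returns True when the actions should run."""
            if not base_condition:
                self.elapsed = 0.0   # condition went false: reset, actions don't run
                self.fired = False
                return False
            self.elapsed += dt
            if self.elapsed < self.duration:
                return False         # still counting: the entire condition isn't true yet
            if self.only_on_change and self.fired:
                return False         # "only on change": actions already ran once
            self.fired = True
            return True              # timer ran out: run the actions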

Think of an industrial timer relay, if you've been in that line of work...
 

MrGhost

Member
There *is* a difference, but it has to do with how OBS works, not Adv. SS. When you change the visibility of a source in Studio Mode, it changes everywhere *except* the Program. Then you're supposed to switch to it when you're ready. If you're not in Studio Mode, then the Program does change immediately. Again, that's an OBS thing, not Adv. SS, but it might be the difference that you need.
I made a video with plenty of looking at my A.S.S. It is on Rumble now: Rumble link. The A.S.S. part is mostly in the second hour. It's a MIC settings vid, with other audio setup things I do (2nd half hour) and advanced scene switcher things (third half hour), actually starting at around 50 or 51 minutes I think.

Think of an industrial timer relay, if you've been in that line of work...
No, I haven't. But I do have a pretty good idea what to do in the A.S.S.
 

AaronD

Active Member
I made a video with plenty of looking at my A.S.S. It is on Rumble now: Rumble link. The A.S.S. part is mostly in the second hour. It's a MIC settings vid, with other audio setup things I do (2nd half hour) and advanced scene switcher things (third half hour), actually starting at around 50 or 51 minutes I think.
It's a little bit hard to follow that. Several times, it looks like you're about to show what's in something, and then you get distracted, so it effectively ends up with you just reading the names that I can already see. *You* might know what they mean, but no one else does.

When you finally do show one and explain what it's actually doing, it almost looks to me like self-modifying code. That is notoriously hard to debug! It can be tempting for inexperienced programmers to do that - it seems at the time like "the one thing you need to make it work" - but there are very good reasons for seasoned programmers to be scared of it!

You have a Timer set to 5 seconds to start with, and then you have a different macro that sets it to 11 seconds and runs it, and a different macro sets it to 10 seconds, and that's only what you happened to show. Who knows at any given time, what that timer is?!

That's an excellent way to create chaotic behavior, in the mathematical sense of tiny changes having huge unpredictable effects. I'm pretty sure that's not what you want.

See if you can make a sequence diagram (search term), or a gantt chart, or something similar of the behavior that you actually want, and use that to redesign it with fixed timers. You can chain some together, but never modify them. See if you can make that work. Some might be converted from a Timer condition to the Condition Timer that I have in an earlier screenshot here.
 

MrGhost

Member
When you finally do show one and explain what it's actually doing, it almost looks to me like self-modifying code.
You noticed that, huh? I didn't think that was very clear from the few I showed. I could go into more detail; maybe I'll take another look at it. I was only able to set up the timer this way; I'll try to explain why in a further video. I just didn't find that the timer did a very good job of dealing with my particular situation, and I couldn't quite understand the way the A.S.S. was programmed as far as the timers went.

You have a Timer set to 5 seconds to start with, and then you have a different macro that sets it to 11 seconds and runs it, and a different macro sets it to 10 seconds, and that's only what you happened to show. Who knows at any given time, what that timer is?!

IIRC, the timer I created had to be started, given an amount of time, and then continued. It couldn't be allowed to switch anything unless it really reached the bottom, so I made it just longer than my source was. So then it would never reach the bottom unless a particular criterion was met, which could only happen in one case. That way the text source stays up as long as MIDI CC messages keep coming in, but if no CC messages come in, it is allowed to become hidden.

I did this thing you mention, setting up a timer that reached the bottom only in one circumstance. Other than that it just kept resetting to 11 seconds and counting down 10 seconds at a time. Yes, there may have been a starting timer of 5 seconds that it could start from, but never return to, as long as the 11 seconds never reached the bottom because 10 seconds kept being added. This way the text window would stay open as long as MIDI CCs kept coming across.

You noticed that; good for you. I didn't think I even showed all the scenes.

I will try to go into the timer I placed for the Prophet synthesizer's one button which changes to 4 things, due to MIDI CCs. I activate the timer after the first CC (91 0) to turn on the Velocity - filter OFF chatlog text source to be sent, and if within that timer the 2nd Velocity - filter OFF chatlog is activated (including, during the 2nd instance, the Velocity - Amp OFF which was ON during the first time the CC 91 0 was sent for Velocity - filter OFF), then the chatlog-to-text-source macro for "vel - amp off filter off" is activated. This timer is only activated after the first time the particular sequence is sent. So then the text fields are sent in order, so they come up correctly.
 
Last edited:

MrGhost

Member
I added some quotes to clarify that but didn't manage to get them in during the rather short edit window of the post.
Clarification using quotations:

I will try to go into the timer I placed for the Prophet synthesizer's one button which changes to 4 things, due to MIDI CCs. I activate the timer after the first CC (91 0) to turn on the 1st "Velocity - filter OFF" chatlog text source to be sent, and if within that timer the 2nd "Velocity - filter OFF" chatlog is activated (including, during the 2nd instance, the "Velocity - Amp OFF" situation being present, which was ON during the first time the CC 91 0 was sent for "Velocity - filter OFF" and that chatlog was sent), then the chatlog-to-text-source macro for "vel - amp off filter off" is activated. This timer is only activated after the first time the particular sequence is sent. So then the text fields are sent in order, so they come up correctly.
 

MrGhost

Member
You are probably wondering why this is all so useful, or useful at all? I found that, after extensive programming of my M-Audio Oxygen 8 v2's 8 dials and 4 of its buttons, I can use my 2nd monitor (an old TV that is no longer used otherwise), which shows the fullscreen program projector of my OBS recording/streaming/sometimes-NDI-source laptop (the 2nd computer in the series), to see the Prophet settings that I am changing.

This 2nd monitor, which resides above my 1st (main audio computer) monitor on the same wall mount, sitting above my crash cymbal and snare drum while drumming, tells me which Prophet synthesizer controls I am changing with the MIDI controller. I am able to fiddle with the dials on my M-Audio Oxygen 8 v2 MIDI controller with my left hand while drumming the hat or whatever with my feet and right hand, so that this A.S.S., which I have programmed to output the relevant CC setting of the Prophet, can be viewed in large form while drumming. I have the Oxygen 8 v2 MIDI controller hardwired into my template to send CCs across the same network MIDI channel the Prophet uses to communicate with the A.S.S.

I have found that all this A.S.S.-ing works slightly faster when I run the 2nd output from the 2nd computer in the series over NDI to a 3rd computer across the living space, about 20-25 m of Cat 6 cable away, and that one streams to the internet, than if it records to disk right there on the 2nd computer.
 

MrGhost

Member
Yeah, I went back into the Advanced Scene Switcher a couple of times to look into it today, found a couple of things, and finally remembered something I discovered when I first set this up and got it working, which was this:

The timer or countdown in the A.S.S. would repeatedly go back to the full time and continue by itself. I would always have to be concerned whenever the timer was in motion, because it would repeat endlessly (for instance from its default of 5 seconds) and carry out an action whenever it reached 0, which it would do every 5 seconds after it was started. This seemed like a huge design flaw to me (A.S.S. 1.24.0, I think, was the version, early-to-mid 2023).

So, for example, if the text source gets triggered on, and the triggering MACRO also starts a timer, the timer would run over and over again and never go back to just waiting for the text source to start it again with its full value. I found that I had to go into the MACRO which was triggered by the TIMER and set a condition at the end which would reset the timer to a top amount (such as 10 seconds) and then pause the timer.

In my initial instance (when the timer was turned on by the MACRO which also showed the text source in OBS) I liked a time of 11 seconds, so I set the timer to 11 seconds. It's only one timer, dealt with by 3 separate MACROs, which we could call an ON macro, a TIMER macro, and an OFF macro.

This was the best way I found of dealing with the timers: an ON macro, which started the timer (and showed the OBS source); then a TIMER macro which counted down and at the end triggered an OFF macro, which would hide the source in OBS and then reset the timer to 10 seconds and pause it, so it could wait until it was started again by the ON MACRO.

So to summarize:

For the timing in the timers, I found that the only way was to add a timer operation inside the MACRO that the timer's runout triggered, at the end of that MACRO, in order to reset the timer and pause it.
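In case it helps anyone else, here is roughly what that ON / TIMER / OFF pattern amounts to, sketched in plain Python rather than A.S.S. macros. The 11-second and 10-second values are the ones from my setup; everything else is just illustration:

    # Illustration of the ON / TIMER / OFF macro pattern, not actual A.S.S. code.
    import time

    text_visible = False
    timer_running = False
    deadline = 0.0

    def on_macro():                        # first CC arrives: show the text, start the timer
        global text_visible, timer_running, deadline
        text_visible = True
        timer_running = True
        deadline = time.monotonic() + 11   # initial countdown of 11 seconds

    def cc_received():                     # every further CC tops the timer back up
        global deadline
        if timer_running:
            deadline = time.monotonic() + 10

    def timer_macro():                     # poll this regularly
        global text_visible, timer_running
        if timer_running and time.monotonic() >= deadline:
            text_visible = False           # OFF: hide the source...
            timer_running = False          # ...then reset/pause until the next ON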
 
Last edited:

AaronD

Active Member
The timer or countdown in the A.S.S. would repeatedly go back to the full time and continue by itself. I would always have to be concerned whenever the timer was in motion, because it would repeat endlessly (for instance from its default of 5 seconds) and carry out an action whenever it reached 0, which it would do every 5 seconds after it was started. This seemed like a huge design flaw to me (A.S.S. 1.24.0, I think, was the version, early-to-mid 2023).
That's actually a feature for the use case that asked for it. They wanted something to repeat indefinitely at a given time interval, and that's how you do it.

Everything's a feature for some use or another, even if it originally came about by accident, and so the "fix" might cause its own problems.
 

MrGhost

Member
That's actually a feature for the use case that asked for it. They wanted something to repeat indefinitely at a given time interval, and that's how you do it.

Everything's a feature for some use or another, even if it originally came about by accident, and so the "fix" might cause its own problems.

That wasn't my idea; I wanted a timer to run once, then act on something, then go back to waiting. So that's what I developed. Works well. The program text usually shows for 8 seconds rather than 11 if you count it up (because of delays in the action of the system, I figure), but 8 seconds is enough. I think it probably varies slightly also.
 

AaronD

Active Member
@marchvet Sorry to hijack your thread. It was meant to "show, not tell" that Adv. SS is indeed reliable if you set it up right, against the accusation that it wasn't. I think that's done now. You okay?
 