I need some insight on running two instances of OBS (or a better option)

dethwombat

New Member
Hey all, fairly new to this, but I've been running two instances of OBS in order to record my face in one and the gameplay in the other, since I like being able to animate just my facial frame when I do edits. However, running current games (like The Callisto Protocol), I overload OBS *very* easily, even when I have the game running at 1080p and medium graphics settings. I know the big-time YouTubers are able to record high-end graphics with these frames (game and face cam) separate - am I missing something major? Usually when they post their setups, they aren't running two computers, which I've seen mentioned before here on Reddit, since it's a lot of work to process that information during recording.

I'm at a total loss here, as up until now I've been able to run a resolution of 2560x1440 in games like RE8, Subnautica, etc. - nothing as current as Callisto, so I'm aware there would be a difference. I also know some people run a single OBS instance but set the resolution in OBS to 2x the size of their recording, fit both the gameplay and face cam into that one frame, and split them in Adobe Premiere (I use Hitfilm because I can't afford anything fancy right now), so I'm more limited.
I've also lowered OBS settings significantly: turning off Psycho Visual Tuning, using Good Quality instead of Better/Max, and Single Pass for the Multipass Mode. Rate Control is CQP set at 17. The encoder is set to NVENC, as suggested.

If anyone has any insight on what can or should be done I would greatly appreciate the assistance. I feel like my computer is pretty powerful - not top of the line by any means, but I didn't think I'd be running into this kind of trouble considering people do the whole 4k/max game settings and I'm far from that. I just don't know what exactly other people are doing to achieve what I'm after.

I currently run:
AMD Ryzen 5 5600X 6-Core Processor 3.70 GHz
GeForce RTX 3070 Ti
32 Gigs of RAM

Thank you!
 

AaronD

Active Member
The specs for a live media PC are pretty much the same as for a gaming one, and a lot of games are heavily optimized to squeeze the most out of the (reasonably) available hardware at the time of their release. That leads me to believe that a modest PC can do one or the other, but you need a monster PC to do both.

If you stay well *ahead* of what a "typical gamer" (whatever that is) has, then you might be able to do both on the same machine, especially if you keep a couple of years or so between your hardware's release and the games that you play on it. Otherwise, I'd recommend running the game on one PC and putting its physical video signal through a capture device into a second one. That way, all of the time-critical math required to calculate the game's display isn't competing with another bunch of time-critical math required to compress live video.



If you're shopping for editors, you might have a look at Shotcut: https://shotcut.org/
It's free, open-source, runs everywhere, it's easy enough to get started with, and it has a TON of toys to grow into. It's been my go-to for several years now. I'm pretty sure that the zoom and pan filter will take an oversized frame and crop the desired part in full-res to the project frame. A second copy of the same clip with the same filter and different settings can be the start of an overlay, etc.
 

koala

Active Member
Although you'd rather not double up the video, I recommend you double the video size vertically, so you get an output resolution of 2560x2880. In the upper half you put your game, and in the lower half you put your webcam (at 1920x1080 or lower, leaving the unused area black).

To split, you don't need any fancy Adobe Premiere; you can use a simple tool like ffmpeg to create one 2560x1440 video from the upper half and another one from the lower half, so you can postprocess them separately. However, it involves a second encoding step and takes additional time.

Suggested ffmpeg command line to split again:
Code:
ffmpeg -i "2022-11-25 11-30-52.mkv" -filter_complex "split=2[in1][in2];[in1]crop=2560:1440:0:0[upper];[in2]crop=2560:1440:0:1440[lower]" -map 0:a -c:a copy -map "[upper]" -c:v h264_nvenc -preset p7 -profile:v high -rc constqp -qp 17 upper.mp4 -map "[lower]" -map 0:a -c:a copy -c:v h264_nvenc -preset p7 -profile:v high -rc constqp -qp 17 lower.mp4

(Double the vertical resolution and stack the videos above/below each other, because if you double the horizontal resolution you get 5120x1440, and 5120 is bigger than 4096, so you cannot use nvenc to encode that video.)
 

Suslik V

Active Member
There is a plugin named Source Record (it applies as a regular filter to a video source in OBS and has its own encoding and filename settings). It can record a source that is not even listed in the current scene (as long as it's not deactivated or disconnected, of course), and it records it into a separate file. So OBS can output two files at once.
 

dethwombat

New Member
koala said:
Although you don't like to double the video, I recommend you double the video size vertically, so you get an output resolution of 2560x2880. ...
Fantastic information, thank you. It makes more sense to do top/bottom, yes. Seems obvious now that you said it.

This FFMPEG might be a key to this - so the program will take my 2560x2880 video and separate the two out - but how does the audio work? Maybe you can choose which audio track it pulls into each section of the video? (I'm curious where you got your hands on it. I went to the FFMPEG site and downloaded the master build to get an executable, but when I run it, it only flashes briefly and disappears.) I feel like there's a reason you gave me a command line, so I'm missing something. My apologies for the hand-holding.

My other question: I read somewhere that it takes the same computing power to have two OBS instances recording two 2560x1440 videos as it does for one to record 2560x2880, since it's the same number of pixels being processed. That makes sense, but I imagine it still has to be less, because you're not running an entirely separate program.

I'm not super savvy when it comes to this side of computers, so I really appreciate that there are people who review these plights.

Thank you.
 

AaronD

Active Member
dethwombat said:
This FFMPEG might be a key to this - so the program will take my 2560x2880 video and separate the two out - but how does the audio work? ...
FFMPEG is command-line *only*. You can use it directly like that, or through any number of other programs that advertise cool features but are really just GUI wrappers for that same utility.

OBS itself doesn't really take all that much to run. It does just fine on a Raspberry Pi...until you try to run some video through it. (been there, done that) The vast majority of the processing power is for the actual content.

As for two separate videos or one double-size video, there might be some advantage to putting it all into one because the compression algorithm can see everything and take advantage of some similarities. Plus it eliminates the problem of two timebases drifting apart from each other. You'd think that being on the same computer would inherently use the same clock, but if one drops more frames than the other...



The downside, as you mentioned, is the soundtrack. A single video file has a single soundtrack, just like it has a single picture. You're just putting two different sources in different places on that one picture, and encoding that as a single video. You can set up OBS to record 5.1 audio, which would technically give you space for 3 stereo pairs if you wanted to use it that way, but you'd have to trick OBS into doing it like that.

My experience with a many-channel USB interface says that OBS absolutely insists that every device has the same channel mapping that OBS itself is set for, repeated if necessary to fill the device and mixed down from there before you can do anything with it.
My 18-channel USB thing, for example, is forced into a mono mix of everything if OBS were set to mono, and THEN made available for processing. Or all of the odd/even pairs mixed into a single stereo signal and THEN made available as stereo. Or 3 sets of 5.1 all mixed together the same way.

So I would imagine that multiple stereo sources must always map to the front corners of a 5.1 soundtrack with the remaining 4 channels probably silent, with no other options in OBS. It's also possible for a filter to take a 5.1 input (as created by OBS - stereo up front and the remaining 4 silent) and give a 5.1 output with its channels rearranged, so you might look and see if someone has written one. If you can do that, then you can put one stereo source in the front corners, another in the back corners, and maybe abuse the center and sub channels for yet another stereo pair...or two mono's, or just silent.
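If you did end up with a 5.1 recording that hides two stereo pairs like that, the same ffmpeg tool suggested earlier in the thread can pull them back apart in post with its pan audio filter. A rough sketch (the filename and the which-pair-is-which layout are assumptions; echo prints the command for inspection instead of running it):

```shell
# Extract two stereo pairs from a 5.1 soundtrack with ffmpeg's pan filter.
# Assumed layout: game audio in the front corners, mic in the back corners.
FRONT="pan=stereo|c0=FL|c1=FR"   # front-left/front-right -> stereo pair 1
BACK="pan=stereo|c0=BL|c1=BR"    # back-left/back-right   -> stereo pair 2
echo ffmpeg -i "recording.mkv" \
  -map 0:a -af "$FRONT" game_audio.m4a \
  -map 0:a -af "$BACK" mic_audio.m4a
```

Each -map 0:a / -af pair applies to the output file that follows it, so one pass writes both audio files. Remove the echo to actually run it.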

Shotcut has an audio filter to rearrange channel assignments, but by the time you're able to use that one, it's already too late.

Linux has JACK, which is a completely free-form live-audio connection tool which makes that job trivial, and there's technically a Windows version of it too, but its actual use on Windows is almost unheard of.

Voicemeeter is made for Windows, and I believe it does 5.1, but it's meant to be paid. https://voicemeeter.com/ You can use it forever for free if you don't mind it bugging you constantly with popups to pay for it, and holding the audio for ransom until you answer the popup one way or another. Maybe you can get it to map each stereo input to a different pair of the same 5.1 output? Then you'd send each app to a different virtual sound card that feeds into Voicemeeter, and have OBS record Voicemeeter's output.

If it weren't for a game that probably must run on Windows, I would take a serious look at running the video production side of things on Linux. SOOO much more flexibility over there!
 

dethwombat

New Member
Suslik V said:
There is a plugin named Source Record... it records it into a separate file. ...

AaronD said:
FFMPEG is command-line *only*. You can use it directly like that, or through any number of other programs that are really just GUI wrappers for that same utility. ...
Okay, now that I'm working with a super-genius, everything I say is going to be childlike and hand-holding, so I won't hold back!

As for the command lines for FFMPEG, where would I interface with this? Just the CMD prompt once I have the FFMPEG files extracted to my computer, or is there some other way? I'm used to an interface that lets me tell it "hey, do this to this video with this command," so my apologies for my extreme lack of experience with this. That command line you gave me, for instance: how do I incorporate that command with my video of choice?

I really appreciate the help.
 

AaronD

Active Member
dethwombat said:
As for the command lines for FFMPEG, where would I interface with this? Just the CMD prompt, or is there some other way? ...
Well, you could google "man ffmpeg" (without quotes), and the first couple of results will likely be the same as if you ran that as a command in a Linux terminal. (That's another thing I like about Linux. Pretty much EVERYTHING has a manual, and you can almost always get to it like that.)

But that's probably not all that useful to you right now. FFMPEG is like a swiss army knife for media. The reason it's used everywhere is because it does practically everything you can imagine and about 100x more! With that much capability, you also have to tell it in sometimes surprising detail what *exactly* you want it to do. But yes, you would open a CMD prompt, and then type: ffmpeg [options]...

Instead, if you can get the soundtrack figured out, I'd recommend a single instance of OBS, recording a single file that has both sources in it, not overlapping visually or audibly. Then use a video editor to re-work that file into what you actually want, using multiple copies of that one OBS recording.

Or, if your live-production skills are good enough (or you can have someone else do that job on a second PC while you play the game on the first), then you might do everything live in OBS as if you were streaming, and just record that. "Live to tape" has the advantage of *everything* being done and over with when the event itself is done, but it takes a bit more setup to have all the tools ready to go beforehand that you're going to need, and a bit more work in the moment to make all the decisions live and to know what's coming.
 

koala

Active Member
dethwombat said:
This FFMPEG might be a key to this - so the program will take my 2560x2880 video and separate the two out - but how does the audio work? Maybe you can choose which audio track it pulls to each section of the video? (I'm curious where you got your hands on it. I went to the FFMPEG site and downloaded the master build to have an executable, but when I run it it only flashes on briefly and disappears).
ffmpeg is a command-line tool. You need to open a command prompt, then enter the command. Usually, one writes a complex command like mine into a batch file (*.cmd) and calls the batch file, so you have all the arguments in the batch file, where you can change them without losing anything.
If you're not familiar with the command prompt and batch files, it might be difficult to get into. I will not explain how to use the command line and batch files; that's completely out of scope for the OBS forum. But the command line and batch files are the key to automation.
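As a sketch of that batch-file idea (shown here as a POSIX shell script so it stays readable; a Windows .cmd file would use %1 instead of "$1"), with the recording's filename passed in as the first argument:

```shell
#!/bin/sh
# Wrapper that keeps the long ffmpeg argument list in one editable place.
# Pass the recording's filename as the first argument; a sample name from
# earlier in the thread is used as the fallback.
INPUT="${1:-2022-11-25 11-30-52.mkv}"
set -- -i "$INPUT" \
  -filter_complex "split=2[in1][in2];[in1]crop=2560:1440:0:0[upper];[in2]crop=2560:1440:0:1440[lower]" \
  -map "[upper]" -map 0:a -c:a copy -c:v h264_nvenc -preset p7 -rc constqp -qp 17 upper.mp4 \
  -map "[lower]" -map 0:a -c:a copy -c:v h264_nvenc -preset p7 -rc constqp -qp 17 lower.mp4
# Print the full command for inspection; change "echo ffmpeg" to "ffmpeg"
# to actually perform the split.
echo ffmpeg "$@"
```

That way, the next recording only needs its filename changed, not the whole argument list.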

I made the command so that it copies all existing audio tracks into both videos. That's the "-map 0:a -c:a copy" arguments for each output file. You can change that to add only certain audio tracks to one output file, or to all output files, or to copy no audio at all.
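For example (a hypothetical variation, assuming you recorded two separate audio tracks in OBS; stream indices start at 0, so 0:a:0 is the input's first audio track), you could send the game track to the upper file and the webcam track to the lower file:

```shell
# Sketch: give each output file only one of the source's audio tracks.
GAME_AUDIO="0:a:0"   # first audio track of input 0 -> game (upper) file
CAM_AUDIO="0:a:1"    # second audio track of input 0 -> webcam (lower) file
echo ffmpeg -i "recording.mkv" \
  -filter_complex "split=2[in1][in2];[in1]crop=2560:1440:0:0[upper];[in2]crop=2560:1440:0:1440[lower]" \
  -map "[upper]" -map "$GAME_AUDIO" -c:a copy -c:v h264_nvenc -rc constqp -qp 17 upper.mp4 \
  -map "[lower]" -map "$CAM_AUDIO" -c:a copy -c:v h264_nvenc -rc constqp -qp 17 lower.mp4
# (echo prints the command so it can be checked first; drop it to run ffmpeg)
```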

Ffmpeg has an overwhelming number of command-line arguments for all kinds of tasks. It's the swiss army knife of video processing. Usually, it's best to google for the task at hand, pick appropriate answers from Stack Overflow with example arguments, and go from there.

If you're unable to get into the command line, I recommend Suslik V's post: try the completely different and more direct approach with the Source Record plugin. I've read this plugin isn't completely stable and might lead to crashes now and then - whether it works or crashes for you is unknown until you try it.
 

dethwombat

New Member
koala said:
ffmpeg is a commandline tool. You need to open a command prompt, then enter the command. ... If you're unable to get into the command line, I recommend Suslik V's post and try the completely different and more direct approach with the Source Record plugin. ...
Very good information. I don't expect you to run a Command Prompt 101 for me, but you've given me enough insight to work from, and I appreciate that.

As for the Source Record plugin, it creates other waves. I got it and had everything set up, and ended up getting a solid green screen instead of my face cam. The fix I found was adding an OpenGL tag to the EXE's shortcut target and changing the Renderer to OpenGL, which fixed the green screen. However, when trying to record The Callisto Protocol, the Game Capture window wouldn't load the game and was just blank. This was due to me changing the Renderer to OpenGL. I haven't found a fix yet to harmonize both. On top of that, my voice recording from the Source Record plugin now has a bunch of crackling, so it did not seem like a sound solution to push into.

For now, I have just really dumbed down The Callisto Protocol's settings (since it's horribly optimized currently) to be able to keep consistency with content. I will continue to attack methods to improve more intensive games like that as I go along. I just wish there were a one-stop shop that maps out best practices (not always that simple, obviously). I know many people do exactly what I do and achieve exactly what I want to achieve at a much better, higher resolution than mine, so I ask myself, "What am I doing wrong?" My PC isn't anything to blow minds, but it's nothing to scoff at.

Thanks for all the help, koala!
 

dethwombat

New Member
AaronD said:
Well, you could google "man ffmpeg" (without quotes)... But yes, you would open a CMD prompt, and then type: ffmpeg [options]...
The FFMPEG command lines are very intriguing and seem like they could easily bear fruit. I'm just so slammed for time as is, between a full-time job, family, kids, household, and animals; it's really hard to even squeeze in the YouTube/filming time. I want to offer quality stuff though, so I have to learn. I just usually try to find quicker solutions so I can keep the ball rolling, and I hate to waste time aimlessly looking for solutions because I just don't have much to spare. This is why I retreated to the forums in the first place, in hopes of finding a bit more of a linear path to success with this recording process.

One final question: I went and downloaded FFMPEG, but it appears to just be a zip file with a bunch of files in it, nothing that gets "installed." I have been keeping most installs off my C: drive to keep it clean for the OS, so I currently have a folder under Programs on my D: drive called FFMPEG with those items extracted into it. I guess my question is: how does the CMD prompt now interface with those goodies in order to run all the video-editing command lines?

I appreciate your time, AaronD.
 

Suslik V

Active Member
dethwombat said:
...As for the Source Record Plugin, it creates other waves. I got it and had everything setup and ended up getting a solid green screen instead of my Face Cam.
Previously it was reported that hardware encoders (NVENC etc.) can cause this. It would be wise to try the software x264 encoder instead. But who knows - better to ask in the plugin's thread.
 

AaronD

Active Member
dethwombat said:
One final question: I went and downloaded FFMPEG, but it appears to just be a zip file with a bunch of files in it, nothing that gets "installed." ... how does the CMD prompt now interface with those goodies?
Most programs rely on some sort of automated installer to put all the files in the right place and add or change the right system settings to make it all work. That's true for pretty much any system. If you get a bunch of files instead, then you have to know, in some form or another, what to do with them yourself, and that can vary widely.

On Windows, you normally don't install FFMPEG directly. Each app that uses it includes its own independent copy as part of its own install. Not to say that you *can't* have a standalone copy as well, but that's normally how it works.

On Linux, it's pretty much always standalone, but might be downloaded and installed automatically as a dependency of something else. So there's only that one copy on the entire system, everything that needs it uses that same one, and you can too.

I can't tell for sure without looking at what you actually have, but since it's open-source, there's a strong possibility that a zip file with a bunch of stuff in it is actually the human-readable source code that must be compiled/built before it can run. It's what the developers work with, or the "controlling document" if you will. The computer itself has no idea what to do with any of that. The advantage of having the source code and building it from there, is that you can tell your own personal build tools to do things slightly differently if you need that (or even change the program itself), but you do have to both have the tools and know how to use them.

Most projects that I've seen as source, include a README file and either a LICENSE file or a COPYING file at the root or top level. Those are plain text files with no extensions, so Windows will never know what to do with them. But you can still tell it every time to open them in your favorite text editor, like Notepad for example.
If you have those files, then you've probably got the source code. There could still be a pre-built executable in there somewhere as a nice gesture, though it's not guaranteed to work on *your* system, and most projects don't include one.



On the command line, each line that you type (or paste in) is split at the spaces into a set of arguments, tokens, whatever you want to call them. If you need to include spaces in a single argument, put double quotes around that argument. "A:\path to\a program.exe" arg1 arg2 would be a good example of that. (backslashes are a Windows thing, Mac and Linux use forward slashes)

The first in that list, after splitting, is a file to run (program, script, whatever), and everything after that gets passed, in order, to that program to be interpreted however the program does. Actually, the program receives everything, including the name that was used to call it, which can be useful sometimes too, for the person writing the program or script. So in the above example, a program.exe would start to run, and it would have:
0. A:\path to\a program.exe
1. arg1
2. arg2

available to use however it does.
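The same numbered list can be reproduced with a few lines of shell, just to make the splitting concrete (slot 0, the program's own name, is "$0" in a real script; the arguments start at 1 here):

```shell
# Print each received argument with its position, like the list above.
# Double quotes keep "another path with spaces" together as one argument.
print_args() {
  i=0
  for a in "$@"; do
    i=$((i + 1))
    echo "$i. $a"
  done
}
print_args arg1 "another path with spaces" arg2
# prints:
# 1. arg1
# 2. another path with spaces
# 3. arg2
```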

So far, that's an absolute path to the program. You can also use a relative path, simply by leaving off the first bit. That starts a (limited, fast) search for what you did specify: first the working directory, then each item in the system path, which is usually buried somewhere in the system settings but an automated installer will often add itself to it.
To change the working directory, use the cd command: cd ~/new/working/directory to use the Linux version this time. It's also cd in Windows, but of course the path needs to use the Windows formatting instead. ~ on Linux is shorthand for the current user's home directory, wherever that might be.
If you need to jump up a level, that works like a subfolder named "..". So ../../something goes up two levels and then looks for something. Likewise on Windows, but with backslashes instead.
To put most of that together, you might do:
cd "A:\path to\a different folder"
"..\a program.exe" abcde 1 "another path with spaces" 2 3

and the same program would run with these arguments instead:
0. ..\a program.exe
1. abcde
2. 1
3. another path with spaces
4. 2
5. 3


I've heard that Bill Gates liked what Unix was doing with its command line and just copied it, but changed it just enough to claim that he didn't. Hence the similarities and, I think, the function-killing differences. Linux inherited the original, and I like it a LOT better! Of course it's forked by now too, into several different versions, but the most common one is bash, for "Bourne Again SHell", a pun on the original Bourne shell and its author, Stephen Bourne.
 

AaronD

Active Member
I'm just so slammed for time as is, running a full-time job, family, kids, household and animals... it's really hard to even squeeze in the YouTube/filming time. I want to work with and offer quality stuff though, so I have to learn. I just usually try to find quicker solutions so I can keep the ball rolling and I hate to waste time aimlessly looking for solutions because I just don't have much to spare. This is why I retreated to the forums in the first place in hopes to find a bit more of a linear path to success in how to go about this recording process.
I don't see much connection between fancy editing or production work, and quality content. You can do a pretty good job with a single camera and mic, and maybe some physical props that you show to the camera. That should get you going for a while.

Once you figure out how things work, you might think about adding maybe a second camera, or a media source to show some pre-recorded video, or something like that, still produced live.

Or a different way to start might be to have a screen capture of a game or something else, plus a mic. Don't show your face yet because you're not using a camera yet. Get used to that, maybe tweak the audio settings so that the computer sound and the mic blend nicely, live, and then look at adding a camera.



I'm pushing live production here because, while it does create more setup and more work in the moment, it's completely DONE when your session is done. You have a finished product already, whatever it is, and the practical impossibility to change it prevents you from obsessing in an editor. It's just DONE! It is what it is, move on.

And once you get a good workflow, you can use the same setup over and over again, so that effort goes away. And you get good enough at the live production part that it becomes second nature as well. So the end result is not much time at all beyond the actual sit-down-and-do-something.
 