Tips and tricks for Lua scripts

upgradeQ

Member
Really interesting, but I can't get it to work. I get:

[testClick.lua] Error loading file: [string "C:/Program Files/obs-studio/data/obs-plugins/..."]:31: unfinished long string near '<eof>'

Does it need any dependencies (should I install LuaJIT ffi somehow)?
Try deleting all comments, and put the script into a neutral directory like C:/Users/Owner/Documents. This looks like a syntax error. LuaJIT is built into OBS on Windows, so no extra dependencies are needed.
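For reference, a minimal illustration (not taken from the script in question) of what triggers "unfinished long string near '<eof>'": Lua's long strings and long comments use matching bracket levels, and an opener without its closer aborts loading of the whole file, which is why deleting comments can make the error go away.

```lua
-- Long strings use matching bracket "levels": [[..]], [=[..]=], [==[..]==], etc.
local s1 = [[a long string]]              -- level 0
local s2 = [==[ contains ]] inside ]==]   -- level 2, so a plain ]] does not close it
--[[ long comments use the same brackets and must be closed too ]]
print(s1, s2)
-- If a script contains `local s = [[oops` with no closing `]]`, loading the
-- file fails with: unfinished long string near '<eof>'
```

So in a case like the one above, check the reported line for a `[[` or `--[[` without a matching closer.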
 

upgradeQ

Member


Rendering 3D models in OBS Studio

Although the newest version 30.1.0+ of the program now supports capturing textures with the alpha channel, it may
still be useful to work with geometry inside the program, e.g. creating a 3D logo, or rendering sources to a texture on a model.


So the first step is to set up the vertex buffer data. It is a relatively slow operation, so it is done at the initialization stage.
Lua:
  S.gs_render_start(true) -- enter the render context
  -- inside that context, emit vertices with:
  -- S.gs_vertex3f(x, y, z), S.gs_texcoord(u, v, 0), S.gs_normal3f(nx, ny, nz)

  data.my_buf = S.gs_render_save() -- the buffer is saved

  S.gs_load_vertexbuffer(data.my_buf) -- load the geometry so we can draw it

  S.gs_draw(S.GS_TRIS, 0, vertex_num) -- this issues the draw call
It could end here if we wanted some simple 2D graphics, but since that is not the case, several other
steps must be executed: enable depth buffer testing, set the camera's Z range correctly, and clear the color and depth buffers gracefully.
Sounds like a lot, but it is actually just 4 lines of code.
Lua:
  S.gs_clear(bit.bor(S.GS_CLEAR_COLOR , S.GS_CLEAR_DEPTH), clear_color, 0, 0)
  S.gs_ortho(0.0, data.width, 0.0, data.height, -2000.0, 2000.0)
  S.gs_enable_depth_test(true)
  S.gs_depth_function(S.GS_GREATER)
Yes, but no: we skipped the fact that a depth buffer is not set up by default for source rendering, at least inside Lua. Fortunately, there is gs_texrender_begin,
which can handle that if we pass the GS_Z16 format. So we do the drawing inside this texrender context, and finally we pass the resulting texture to the output.
Here is the link to the script which can load and show .obj files.
A test model is also included. It utilises an undocumented method to obtain vertex IDs (declare "uint id : VERTEXID" among the vertex shader's input arguments), and allows setting one texture for the model.
 

alcarkse

New Member
Hi!

Not sure if this is the right place to post this, but I figured this might interest some of you.

I recently started writing Lua scripts for OBS, but I could not find anything related to obslua support in the Lua language server. I think that having this sort of support would make the overall development experience quite a bit more fluid.

I made a post for this on the obs ideas & suggestions website (https://ideas.obsproject.com/posts/2557/support-for-lua-language-server). If you guys also think this sort of thing would be nice, feel free to upvote the post :)
 

xin61414

New Member
Hello.

Is there a way to perform HTTP communication in obslua?
I'm considering creating a script to retrieve statistics from OBS and send them via webhook.
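One common approach, since obslua has no built-in HTTP client, is shelling out to curl. A sketch under the assumption that curl is installed and on PATH; the URL and payload here are placeholders:

```lua
-- Build a curl command for a JSON webhook POST. string.format's %q adds
-- quotes and escapes around the payload string.
local function webhook_command(url, payload)
  return string.format(
    'curl -s -X POST -H "Content-Type: application/json" -d %q "%s"',
    payload, url)
end

local cmd = webhook_command("https://example.com/hook", '{"fps":60}')
print(cmd)
-- os.execute(cmd) -- run it from inside OBS; note that this blocks, so call
-- it from a timer callback rather than from the render path
```

LuaJIT ffi against a system HTTP library is the other route, but shelling out is far simpler for occasional stats reporting.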
 

upgradeQ

Member
DirectX C API access via ffi

Tested on OBS Studio version 30.1.0, but should work on earlier versions too.
OBS Studio Direct3D feature level is 11.0, and shader model version is 4.0.
Now, before we start: why bother writing raw DirectX at all instead of using the libobs graphics wrappers?
While libobs has an extensive list of supported features, it is mostly tailored to composition, filtering and capturing.
So if you want effects that are outside of that paradigm, you may not be able to build them.
For example, you can't use Vertex Texture Fetch inside a vertex shader: libobs does not bind its resources to that stage of the rendering pipeline.
But I will show you how I did it. The end result, not counting the important ffi bindings, is only 4 lines of code in the rendering context.
Lua:
  while S.gs_effect_loop(data.effect, "Custom1") do
    S.gs_load_texture(data.tex1, 0) -- sets pixel shader resource view
    data.PSGetShaderResources(data.pContext2, 0, 1, data.pRes)
    data.VSSetShaderResources(data.pContext2, 0, 1, data.pRes)
    S.gs_draw_sprite(nil, 0, 2560, 1440)
  end
The tricky part is how to define these API calls. It is actually very straightforward once we know the underlying model that DirectX uses.
COM, the Component Object Model, is the main driving force behind it. I will not go deep into it, but here is a good 10/10 overview.
The other important link relates to modding old games; the solution there is done with LuaJIT ffi and DirectX version 9.
Similarly, libobs provides an almost identical function to get the device pointer.
It is technically possible to load d3d11 and create a device yourself; the same approach would work in a standalone interpreter, but I digress.
I'll add correctly indexed, full vtables later; for now, let's focus on the glue code.
Lua:
  data.device = obsffi.gs_get_device_obj()
  data.pDevice = ffi.cast("struct d3ddevice*", data.device)
  data.GetImmediateContext = ffi.cast("long(__stdcall*)(void*, void**)", data.pDevice.lpVtbl[40])
  data._arg1 = ffi.new("unsigned long[1]")
  data.pContext = ffi.cast("void**", data._arg1)
  data.GetImmediateContext(data.pDevice, data.pContext)
  data.pContext2 = ffi.cast("struct d3ddevicecontext*", data.pContext[0])
  data.Release_pContext = ffi.cast("unsigned long(__stdcall*)(void*)", data.pContext2.lpVtbl[2])
  data.Release_pDevice = ffi.cast("unsigned long(__stdcall*)(void*)", data.pDevice.lpVtbl[2])
  data.VSSetShaderResources = ffi.cast("long(__stdcall*)(void*, unsigned int, unsigned int, void**)", data.pContext2.lpVtbl[25])
  data.PSGetShaderResources = ffi.cast("long(__stdcall*)(void*, unsigned int, unsigned int, void**)", data.pContext2.lpVtbl[73])
  data.VSSetSamplers = ffi.cast("long(__stdcall*)(void*, unsigned int, unsigned int, void**)", data.pContext2.lpVtbl[26])
  data._arg2 = ffi.new("unsigned long[1]")
  data.pRes = ffi.cast("void**", data._arg2)
As you can see, it looks very similar to the DirectX 9 approach from the link above; experimentally, it turns out we must index the vtable by position instead of by name.
Now the routine is: find an interesting function, and write its binding following this template:

data.VSSetShaderResources = ffi.cast("long(__stdcall*)(void*, unsigned int, unsigned int, void**)", data.pContext2.lpVtbl[25])

data.VSSetShaderResources - storing the cast in the data table gives it a permanent location that survives the garbage collector

"long(__stdcall*)(void*, unsigned int, unsigned int, void**)" - the return type, the calling convention, and the argument types; the first void* is the object (this) pointer, followed by the method's own arguments

ffi.cast("...", data.pContext2.lpVtbl[25]) - 25 is the function's index in the vtable

GetImmediateContext - MSDN says this increments the reference count, so we must release the context when we are done with it to avoid a memory leak; this is done in the destroy callback.

data.Release_pContext(data.pContext2)
--data.Release_pDevice(data.pDevice)
I commented out Release_pDevice because we did not create this resource, so I don't think it is our responsibility to release it.
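As a side note, the vtable-by-index dispatch used above can be mimicked in plain Lua, which may make the ffi code easier to follow. A toy model only: real COM vtables are C arrays indexed from 0, whereas the Lua table here starts at 1.

```lua
-- Toy COM object: lpVtbl is an array of functions, and methods always take
-- the object itself as their first argument (the C `this` pointer).
local vtbl = {
  function(self) return "QueryInterface stub" end,                -- slot 1
  function(self) self.refs = self.refs + 1 return self.refs end,  -- AddRef
  function(self) self.refs = self.refs - 1 return self.refs end,  -- Release
}
local obj = { lpVtbl = vtbl, refs = 1 }

-- "Casting" a slot by index, much like ffi.cast(..., lpVtbl[2]) does for real:
local AddRef, Release = obj.lpVtbl[2], obj.lpVtbl[3]
print(AddRef(obj))  -- refs goes 1 -> 2
print(Release(obj)) -- refs goes 2 -> 1
```

This is exactly why every binding above starts its signature with void*: each call passes the object pointer explicitly.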

So this is basically it; the lost-device case is left as an exercise for the reader.
From my tests, this final code does not crash during execution or at exit, nor does it leak memory. The gist features reading a texture in the vertex shader, passing the result to the fragment shader, and a simple color cycle.
Looking forward to what you guys can do with all of this. Some ideas :)
- Thread-safe texture reading from GPU to RAM
- Stateful particle system
- D3D11 clear screen, triangle

Vtables. I have found the Odin programming language's vendor sources to be a reliable and readable reference for DirectX vtable information.
Although you can find enum lists and indexes on the net, I think it is fair to include a full description here.
Device

C:
struct d3ddeviceVTBL {
  void *QueryInterface;
  void *AddRef;
  void *Release;
  void *CreateBuffer;
  void *CreateTexture1D;
  void *CreateTexture2D;
  void *CreateTexture3D;
  void *CreateShaderResourceView;
  void *CreateUnorderedAccessView;
  void *CreateRenderTargetView;
  void *CreateDepthStencilView;
  void *CreateInputLayout;
  void *CreateVertexShader;
  void *CreateGeometryShader;
  void *CreateGeometryShaderWithStreamOutput;
  void *CreatePixelShader;
  void *CreateHullShader;
  void *CreateDomainShader;
  void *CreateComputeShader;
  void *CreateClassLinkage;
  void *CreateBlendState;
  void *CreateDepthStencilState;
  void *CreateRasterizerState;
  void *CreateSamplerState;
  void *CreateQuery;
  void *CreatePredicate;
  void *CreateCounter;
  void *CreateDeferredContext;
  void *OpenSharedResource;
  void *CheckFormatSupport;
  void *CheckMultisampleQualityLevels;
  void *CheckCounterInfo;
  void *CheckCounter;
  void *CheckFeatureSupport;
  void *GetPrivateData;
  void *SetPrivateData;
  void *SetPrivateDataInterface;
  void *GetFeatureLevel;
  void *GetCreationFlags;
  void *GetDeviceRemovedReason;
  void *GetImmediateContext;
  void *SetExceptionMode;
  void *GetExceptionMode;
};
struct d3ddevice {
  struct d3ddeviceVTBL** lpVtbl;
};
Device context

C:
struct d3ddevicecontextVTBL {
  void *QueryInterface;
  void *AddRef;
  void *Release;
  void *GetDevice;
  void *GetPrivateData;
  void *SetPrivateData;
  void *SetPrivateDataInterface;
  void *VSSetConstantBuffers;
  void *PSSetShaderResources;
  void *PSSetShader;
  void *PSSetSamplers;
  void *VSSetShader;
  void *DrawIndexed;
  void *Draw;
  void *Map;
  void *Unmap;
  void *PSSetConstantBuffers;
  void *IASetInputLayout;
  void *IASetVertexBuffers;
  void *IASetIndexBuffer;
  void *DrawIndexedInstanced;
  void *DrawInstanced;
  void *GSSetConstantBuffers;
  void *GSSetShader;
  void *IASetPrimitiveTopology;
  void *VSSetShaderResources;
  void *VSSetSamplers;
  void *Begin;
  void *End;
  void *GetData;
  void *SetPredication;
  void *GSSetShaderResources;
  void *GSSetSamplers;
  void *OMSetRenderTargets;
  void *OMSetRenderTargetsAndUnorderedAccessViews;
  void *OMSetBlendState;
  void *OMSetDepthStencilState;
  void *SOSetTargets;
  void *DrawAuto;
  void *DrawIndexedInstancedIndirect;
  void *DrawInstancedIndirect;
  void *Dispatch;
  void *DispatchIndirect;
  void *RSSetState;
  void *RSSetViewports;
  void *RSSetScissorRects;
  void *CopySubresourceRegion;
  void *CopyResource;
  void *UpdateSubresource;
  void *CopyStructureCount;
  void *ClearRenderTargetView;
  void *ClearUnorderedAccessViewUint;
  void *ClearUnorderedAccessViewFloat;
  void *ClearDepthStencilView;
  void *GenerateMips;
  void *SetResourceMinLOD;
  void *GetResourceMinLOD;
  void *ResolveSubresource;
  void *ExecuteCommandList;
  void *HSSetShaderResources;
  void *HSSetShader;
  void *HSSetSamplers;
  void *HSSetConstantBuffers;
  void *DSSetShaderResources;
  void *DSSetShader;
  void *DSSetSamplers;
  void *DSSetConstantBuffers;
  void *CSSetShaderResources;
  void *CSSetUnorderedAccessViews;
  void *CSSetShader;
  void *CSSetSamplers;
  void *CSSetConstantBuffers;
  void *VSGetConstantBuffers;
  void *PSGetShaderResources;
  void *PSGetShader;
  void *PSGetSamplers;
  void *VSGetShader;
  void *PSGetConstantBuffers;
  void *IAGetInputLayout;
  void *IAGetVertexBuffers;
  void *IAGetIndexBuffer;
  void *GSGetConstantBuffers;
  void *GSGetShader;
  void *IAGetPrimitiveTopology;
  void *VSGetShaderResources;
  void *VSGetSamplers;
  void *GetPredication;
  void *GSGetShaderResources;
  void *GSGetSamplers;
  void *OMGetRenderTargets;
  void *OMGetRenderTargetsAndUnorderedAccessViews;
  void *OMGetBlendState;
  void *OMGetDepthStencilState;
  void *SOGetTargets;
  void *RSGetState;
  void *RSGetViewports;
  void *RSGetScissorRects;
  void *HSGetShaderResources;
  void *HSGetShader;
  void *HSGetSamplers;
  void *HSGetConstantBuffers;
  void *DSGetShaderResources;
  void *DSGetShader;
  void *DSGetSamplers;
  void *DSGetConstantBuffers;
  void *CSGetShaderResources;
  void *CSGetUnorderedAccessViews;
  void *CSGetShader;
  void *CSGetSamplers;
  void *CSGetConstantBuffers;
  void *ClearState;
  void *Flush;
  void *GetType;
  void *GetContextFlags;
  void *FinishCommandList;
};

struct d3ddevicecontext {
  struct d3ddevicecontextVTBL** lpVtbl;
};
 

npatch

New Member
For the sake of completing the documentation this thread provides, and relating to my previous post here: I can confirm that creating sources is indeed possible in Python, as well as adding them to the OBS GUI. It's just that doing these two very fundamental actions from a script is a bit counter-intuitive, and the API documentation is not very clear about this (to the extent I've studied it so far). So for anyone who was just as lost as me, I will show here how to create sources and add them to the OBS GUI so that you can further manipulate them.

@bfxdev , maybe you'd like to edit the messages in this thread which say that sources can't be added with Python, to increase the readability of the thread and not mislead possible new readers.

The example code will be in Python but is perfectly doable in Lua as well.

ADDING A SOURCE

On paper, it's fairly simple; you just have to use the obs_source_create function to create it. And that's right: this alone is enough to create the source, as shown by obs_enum_sources, which, when iterated over, also lists the newly added source. The thing is, in order for it to actually show up as a source that can be added to a scene, you must correctly define the id parameter. But how?

When you choose to add a source in the OBS GUI, this menu is shown (yeah, it's in Spanish, sorry lol, but you get the idea from the icons :P):

View attachment 63070

This shows all the types of sources OBS has. The id parameter in obs_source_create indicates which of these types your new source will be, which means that every type of source in OBS has a distinct id.

For example, let's say you want to create a new multimedia source that plays a .mp3 file or whatever. The code to create this new source would be as follows.

Python:
settings = obs.obs_data_create()
obs.obs_data_set_string(settings, "local_file", song_path)
source = obs.obs_source_create("ffmpeg_source", file, settings, None)

In the specific case of this source, since we assume it's being played from a local file (song_path), we have to also add to the source's settings an obs_data_t object with a string called "local_file" as key and the song path as value.

As you can see, in the specific case of multimedia files, the id you must enter into the id parameter is called "ffmpeg_source". This right here is the key in order for it to show up in OBS. If you don't put it (or make it up), the source WILL be created, but it won't show up anywhere in the GUI, and hence won't be able to be accessed by the user, which makes manipulating it later basically hell. If you do put the id properly to fit the id of the type of source you want to add, the source will be added and will be shown among the other existing sources. Yay!

Of course, if you now want to add it to a scene, you have to make use of the obs_scene_t and obs_sceneitem_t structures and functions, but that is a whole other can of worms (I can explain it if somebody wants me to though).

The id thing is very poorly explained in the documentation. In fact, the only way I've found to learn the id for each type of source is to create a source of each type from the GUI, list them with obs_enum_sources, and THEN get their id with obs_source_get_id and print it. It's an essential part of adding sources, and I haven't seen it explained anywhere else, even though it's a simple thing that should be obvious. I dunno, perhaps I am stupid and that's why it took me so long to figure this out :P
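To save others that enumeration round-trip: here are a few ids commonly seen on Windows builds, collected the same way. Treat this as a starting point and verify with obs_source_get_id on your own build, since ids differ per platform and OBS version. Shown in Lua since that's this thread's language; the id strings are identical from Python.

```lua
-- GUI display name -> internal source id (Windows; verify on your build)
local common_ids = {
  ["Game Capture"]    = "game_capture",
  ["Display Capture"] = "monitor_capture",
  ["Window Capture"]  = "window_capture",
  ["Media Source"]    = "ffmpeg_source",
  ["Image"]           = "image_source",
  ["Color Source"]    = "color_source",
  ["Browser"]         = "browser_source",
  ["Text (GDI+)"]     = "text_gdiplus",
}
for name, id in pairs(common_ids) do print(name, "->", id) end
```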

The only caveat to this method is that the sources don't seem to persist between OBS executions. Sometimes. Sometimes they do persist. I don't know, it's very weird and I haven't found a clear pattern for when this happens, so if anyone can shed some light into this, it would be very appreciated!
First off, this is years apart, but thank you so much, because the docs are not as thorough. I've come across the ID before and always wondered whether it had to be a known literal (as @Mega64 suggests) or could be anything, so long as it's unique.

This post is mostly for anyone else stumbling into this issue of creating a scene and a source (based on known types); even though I wrote it in Python, my observations should be transferable.

My use case: I'm voluntarily QAing games for a small indie company, and the tester pool doesn't always use OBS or screen capture, so I wanted to make something to propose as standard practice. I wanted a script to automate the setup of everything: running the game, starting and stopping recording depending on game runtime, and, in case of a game crash, stopping the recording and assembling an archive from selected saved games plus the video clip of the session, which could then be emailed to the developer. I've figured out how to run the game and check for a crash. So next I wanted to create a Scene and a Game Capture source specifically for the game, set up with the game's window name so the game window is insta-recognized when the source is active.

Creating a scene was simple:
Python:
def find_or_create_scene(scene_name):
    scene_ref = find_scene(scene_name)
    if scene_ref is None:
        print("Intended scene does not exist. Will create.")
        scene_ref = obs.obs_scene_create(scene_name)
    return scene_ref

So the next step was to set up the source. At this point I have to say that this whole "a Scene is a source, but a Scene is also comprised of scene items, which are also sources" had me confused and still does in some cases, and I think this is mostly due to the Python wrapper interface. At the end of the day, I make heads or tails of it by looking at the C API, what every function returns, and whatever the error tracebacks tell me.

My first attempt to figure out how to create a source (because my first attempt at using obs_source_create just crashed OBS) was to create the source myself in the GUI and then try to access all the relevant info and print it, so I could replicate the setup.

I first found the scene context. Then I enumerated the scene items from that context and for each source, enumerate its properties and their values in the settings.

But then I hit a wall: Python does not have an implementation for obs_property_next(**p). I found this:
Still trying to get something useful working with FFI (for now I experience just many crashes).
UpgradeQ pointed out an interesting use of FFI to enumerate properties in another forum thread by MacTartan.

...

I tried it, but it didn't yield results (or I just did it wrong), and honestly I didn't think it was worth the effort and the time it would set me back if there was a simpler way to move on.

The next step was to print everything for a property, provided I could figure out the property name by trial and error. That showed some promise and some results, but without knowing all the property names, I couldn't be sure I had everything necessary for a working setup. At this point, I understood how properties and settings work together, as I was trying to figure out how to draw the selected value out of a property.

This is what I had at the time. Sure looks enough, but...
Python:
def create_source(props, property):
    scene_ref = find_scene(game_scene_name) #obs_scene_t*
    settings = obs.obs_data_create()
    obs.obs_data_set_string(settings, "capture_mode", "window")
    obs.obs_data_set_string(settings, "window", game_capture_window_name)

    new_source = obs.obs_source_create("<game_name>_capture", intended_scene_source_name, settings, None)
    obs.obs_scene_add(scene_ref, new_source)

    obs.obs_save_sources()

It didn't crash, mind you, but it also didn't look right: the new source was created, but I couldn't access its properties.

This corroborates what bfxdev was saying here, since if you notice my id above is "<gamename>_capture":
Thanks for sharing @Mega64. Indeed it sounds like a possible way to create sources in OBS in Python, applicable as well to Lua. Now I'm not sure that this is the intended use of the obs_source_create function. I wrote myself a script that registers a filter (one of the types of "sources") and did not use this function at all. It shows up in the list of filters, and is persistent across OBS executions.

The documentation is not very explicit about what the function does: Creates a source of the specified type with the specified settings. The “source” context is used for anything related to presenting or modifying video/audio. Use obs_source_release to release it. My understanding is that you use the function to create an instance of an existing source type (no idea how to get the registered id values!), and then you add it to a scene. Now if the "id" is not registered, the function will not fail and will keep an own id field. I can imagine that this id is registered somehow in the list of source types later on, that is why it appears in the list of source types.

that the registration of the ID won't fail even if it's not one of the known ids. Which happened to me: I could create the source, but it did not have a game capture icon next to it, likely because it was not properly recognized as one of the known source types. I still thought I just wasn't setting it up fully and was missing some property/setting.

I figured that since Settings keep the actual selected values of the properties, I didn't need to enumerate the properties but the settings instead, which led me to obs_data_get_json_pretty_with_defaults.

JSON:
{
    "window": "<game name>.exe",
    "capture_mode": "window",
    "priority": 2,
    "sli_compatibility": false,
    "capture_cursor": true,
    "allow_transparency": false,
    "premultiplied_alpha": false,
    "limit_framerate": false,
    "capture_overlays": false,
    "anti_cheat_hook": true,
    "hook_rate": 1,
    "rgb10a2_space": "srgb"
}

Most of those are what you see when you click on the Properties button. But even though that was a huge improvement, it still didn't help with having the GUI recognize the source as a game capture type.

So after reading Mega's post, I changed the ID from what I had to "game_capture", the source got the game capture icon, the Properties button was both visible and enabled and pressing it showed me the properties window and all of the properties had the right values.

I did think about why there are seemingly two ways to set up a new source: Mega's and my obs_source_create approach, and @bfxdev's source-info approach. I think the source-info approach is for extending the source types to add new ones, mainly because it makes you reimplement a few functions that you shouldn't have to just to add a source of a known type. Which makes sense, as that was the go-to way to add a *new* type of filter to OBS. Otherwise, the docs have an API for applying a filter to a source (obs_source_filter_add).

Also, @Mega64, in your quoted post at the beginning, you said the only caveat is that new scenes don't persist between OBS executions: that's because you have to explicitly call obs_save_sources. The only issue so far for me is that sometimes a newly created scene or source won't get the name you want, because you tried the same piece of code before. E.g. if I run OBS, delete my source and then trigger the Python script to recreate the source, it will create <source-name>2.
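That naming collision can be sidestepped by picking a free name up front. A sketch in plain Lua (this thread's language; the same logic works in Python) against a plain list of names; in a real script you would test availability with obs_get_source_by_name instead, and note that the exact suffix format OBS uses may differ from the one assumed here.

```lua
-- Return `wanted` if it is free, otherwise append the first free numeric
-- suffix: "cap", "cap 2", "cap 3", ...
local function unique_name(wanted, existing)
  local taken = {}
  for _, n in ipairs(existing) do taken[n] = true end
  if not taken[wanted] then return wanted end
  local i = 2
  while taken[wanted .. " " .. i] do i = i + 1 end
  return wanted .. " " .. i
end

print(unique_name("game capture", {}))                -- game capture
print(unique_name("game capture", {"game capture"}))  -- game capture 2
```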
 

upgradeQ

Member
Hello @npatch

Regarding an implementation for obs_property_next(**p): I've checked your GitHub project, where you've used cffi for this.
Personally, I recommend using the first-party, industry-standard ctypes. I have had problems with cffi in the past: simple snippets would not compile on a virtual machine, while the same things in ctypes just work.
I am using the SWIG trick I shared earlier. Here is a conversion from the LuaJIT ffi to the ctypes interface, and a link to the full source code.


Python:
def xyz():
    source = obs_get_source_by_name("tmp")
    if source:
        fSource = obs_source_get_filter_by_name(source, "color")
        if fSource:
            props = obs_source_properties(fSource)
            if props:
                prop = obs_properties_first(props)
                name = obs_property_name(prop)
                if name:
                    _p = cast(c_void_p(int(prop)), POINTER(Property))
                    foundProp = G.obs_property_next(byref(_p))
                    prop = cast(_p, POINTER(Property))
                    while foundProp:
                        print(G.obs_property_name(prop))
                        _p = cast(prop, POINTER(Property))
                        foundProp = G.obs_property_next(byref(_p))
                        prop = cast(_p, POINTER(Property))
            obs_properties_destroy(props)
        obs_source_release(fSource)
    obs_source_release(source)

Also, why not use Lua for automating OBS Studio?
 

npatch

New Member
(quoting upgradeQ's reply above)
Thanks for that; I've also looked through your GitHub. It's been a good resource for things, as has this thread.
I ended up dropping cffi and obs_property_next (which was only really meant for printing, to be honest). I just keep the import in case I need to dabble in it again against my will. My use case is pretty simple: I only need recording automation with process tracking and some special handling if the app crashed.

As for Lua vs Python: I don't know Lua; I know Python from Blender, so that was my choice. I didn't want to restart the whole thing, since I just sidestepped the issue of property traversal. Again, I need specific stuff. But I'm thinking of redoing this as a plugin instead, once I have something working in Python. I figure a plugin might have better coverage of functionality than scripting. Currently I have a different issue; check my other thread, if possible.
 