Help with Encoder Plugin Creation

Xaymar

Active Member
I've started working on an AMF (AMD's Advanced Media Framework) based VCE implementation for OBS Studio, but I'm having a hard time with the OBS Studio plugin interface for encoders. There are some mismatches between what I need and what OBS delivers.
  1. 'encode' receives a 'struct encoder_frame *', but I don't see a pointer to contiguous memory in it. Do I have to assemble that myself?
  2. AMF supports DX11 and OpenGL source surfaces for encoding. How would I go about getting those from OBS? Is that even possible?
  3. How do I tell whether the incoming frame is the last one to be encoded? AMF lets me hint to the hardware encoder that a frame is the final one.
  4. AMF may produce multiple packets for a single frame, yet I don't see a way to return more than one. Eventually the output queue will fill up and the plugin will have to set received_packet to false. Does that drop frames, or will OBS buffer them until the plugin can handle them?
  5. AMF also supports many more color formats than just NV12. How can I tell which format encoder_frame will arrive in? (Formats supported without conversion: NV12, YV12, BGRA, ARGB, RGBA, GRAY8, YUV420P, U8V8, YUY2)
And a question not exactly related to plugin coding: why do the generated doxygen files take ages to load a page? They're on an SSD now and it still takes at least 5 seconds before a page even starts loading. Do I need to disable an option in doxygen to make it usable?
 

Lain

Forum Admin
Forum Moderator
Developer
1. I'm confused; there are pointers to the raw memory: encoder_frame::data[] contains a pointer to each plane's data, and encoder_frame::linesize[] holds the byte size of each row. The structure is defined in libobs/obs-encoder.h.
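To illustrate how data[] and linesize[] fit together, here is a minimal sketch that packs an NV12 frame into one contiguous buffer, dropping per-row padding. The struct below is a simplified stand-in mirroring the relevant fields of libobs' struct encoder_frame (the real definition lives in libobs/obs-encoder.h); pack_nv12 is a hypothetical helper, not part of the API.

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define MAX_AV_PLANES 8

/* Simplified stand-in mirroring libobs' struct encoder_frame. */
struct encoder_frame {
	uint8_t *data[MAX_AV_PLANES];     /* one pointer per plane */
	uint32_t linesize[MAX_AV_PLANES]; /* bytes per row, per plane */
};

/* Pack an NV12 frame (plane 0: Y, plane 1: interleaved UV) into a
 * single contiguous buffer, stripping any row padding. Returns
 * malloc'd memory of size width*height*3/2; caller frees. */
static uint8_t *pack_nv12(const struct encoder_frame *frame,
			  uint32_t width, uint32_t height)
{
	uint8_t *out = malloc((size_t)width * height * 3 / 2);
	uint8_t *dst = out;

	for (uint32_t y = 0; y < height; y++) { /* Y plane */
		memcpy(dst, frame->data[0] + (size_t)y * frame->linesize[0],
		       width);
		dst += width;
	}
	for (uint32_t y = 0; y < height / 2; y++) { /* interleaved UV plane */
		memcpy(dst, frame->data[1] + (size_t)y * frame->linesize[1],
		       width);
		dst += width;
	}
	return out;
}
```

Note that linesize[] may be larger than the visible width because of alignment padding, which is why the copy goes row by row rather than in one memcpy per plane.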
2. Not possible currently; it may be in the future. I haven't had time to implement it, and the benefit isn't large enough to make it a high priority, unfortunately.
3. You can't.
4. If it returns multiple full packets per frame, simply store the excess packets and return them in the order they were received. You could, for example, keep the data in a circular buffer. If you're new to H.264, keep in mind that a single NAL unit does not necessarily constitute a full packet; a full packet can contain multiple NALs.
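The circular-buffer idea above can be sketched as a small fixed-capacity FIFO. This is an illustrative stand-in, not libobs code: queued_packet, packet_queue, and the queue_push/queue_pop helpers are hypothetical names; a real plugin would store whatever it needs to later fill libobs' encoder_packet.

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Minimal stand-in for a finished output packet. */
struct queued_packet {
	uint8_t *data;
	size_t size;
};

#define QUEUE_CAP 16 /* arbitrary capacity for this sketch */

/* Fixed-capacity ring buffer: push every packet the encoder emits for
 * a frame, then hand one back per encode() call in FIFO order. */
struct packet_queue {
	struct queued_packet items[QUEUE_CAP];
	size_t head;  /* index of the oldest packet */
	size_t count; /* number of queued packets */
};

static int queue_push(struct packet_queue *q, const uint8_t *data,
		      size_t size)
{
	if (q->count == QUEUE_CAP)
		return 0; /* full: caller must drain before pushing more */

	struct queued_packet *slot =
		&q->items[(q->head + q->count) % QUEUE_CAP];
	slot->data = malloc(size);
	memcpy(slot->data, data, size);
	slot->size = size;
	q->count++;
	return 1;
}

/* On success fills *out (caller frees out->data) and returns 1. */
static int queue_pop(struct packet_queue *q, struct queued_packet *out)
{
	if (q->count == 0)
		return 0;

	*out = q->items[q->head];
	q->head = (q->head + 1) % QUEUE_CAP;
	q->count--;
	return 1;
}
```

Each encode() call would first pop a queued packet if one exists; only when the queue is empty does the plugin need to report received_packet = false.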

5. Honestly, the generated doxygen output hasn't really been tested or fully set up; it's kind of annoying. I'm planning to write proper documentation at some point, when I'm not too bogged down with things to do. (That's also frustrating, because the project really does need more thorough documentation, but I never seem to get enough of a break to write it; everyone always needs something ASAP: a new capture method, a device implemented, a bug fixed, a super-high-priority feature.)
 

Xaymar

Active Member
1. Oh, my bad. I had just looked at the mf-h264-encoder, and the way it rebuilds the data into another buffer confused me a bit.
2. Alright. I'll leave that as a ToDo for now.
3. Good to know, I'll disable the hinting feature then.
4. I don't know why I didn't have that idea... Thanks.
 

Lain

Forum Admin
Forum Moderator
Developer
It's probably best not to use the Media Foundation encoders as an example. If you want something with similar hardware constraints, look at obs-qsv11: it's a natively implemented hardware encoder that also handles things like color format and resolution limitations. obs-x264 is another good example.
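For reference, the encoder examples mentioned above all boil down to filling a struct obs_encoder_info and registering it. Here is a hedged sketch of that skeleton; the typedefs below are stand-ins so it compiles on its own (the real types and obs_register_encoder() come from obs-module.h), the info struct is trimmed to the essential fields, and every "amf_*" name and the "amf_h264" id are hypothetical.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* --- Stand-ins so the sketch compiles standalone; in a real plugin
 * these come from obs-module.h / libobs. --- */
typedef struct obs_data obs_data_t;
typedef struct obs_encoder obs_encoder_t;
struct encoder_frame;
struct encoder_packet;
enum obs_encoder_type { OBS_ENCODER_AUDIO, OBS_ENCODER_VIDEO };

struct obs_encoder_info {
	const char *id;
	enum obs_encoder_type type;
	const char *codec;
	const char *(*get_name)(void *type_data);
	void *(*create)(obs_data_t *settings, obs_encoder_t *encoder);
	void (*destroy)(void *data);
	bool (*encode)(void *data, struct encoder_frame *frame,
		       struct encoder_packet *packet, bool *received_packet);
};
/* --- end stand-ins --- */

static const char *amf_get_name(void *type_data)
{
	(void)type_data;
	return "AMD AMF H.264 (hypothetical name)";
}

static void *amf_create(obs_data_t *settings, obs_encoder_t *encoder)
{
	(void)settings;
	(void)encoder;
	return NULL; /* would allocate and initialize the AMF context here */
}

static void amf_destroy(void *data)
{
	(void)data; /* would tear down the AMF context here */
}

static bool amf_encode(void *data, struct encoder_frame *frame,
		       struct encoder_packet *packet, bool *received_packet)
{
	(void)data;
	(void)frame;
	(void)packet;
	*received_packet = false; /* would submit the frame and query output */
	return true;
}

/* In a real plugin, obs_module_load() would call
 * obs_register_encoder(&amf_encoder_info). */
struct obs_encoder_info amf_encoder_info = {
	.id = "amf_h264", /* hypothetical encoder id */
	.type = OBS_ENCODER_VIDEO,
	.codec = "h264",
	.get_name = amf_get_name,
	.create = amf_create,
	.destroy = amf_destroy,
	.encode = amf_encode,
};
```

The obs-qsv11 and obs-x264 sources show how the remaining callbacks (defaults, properties, extra data) are wired into this same structure.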
 

Xaymar

Active Member
Thanks to you, I finally managed to get it to load and encode. Now all that's left is to make the static configuration dynamic.
[Attached screenshot: obs64_2016-07-25_19-51-47.png]
 