# How does the 'rtmp_output' plugin get video frame data from the textures?



## choes (Sep 9, 2016)

Hello guys,

I'm reading the source code of _obs-studio_, and I'm confused about the data source of the _rtmp_output_ plugin.

The _av_capture_input_ plugin puts the captured frame data into _async_frames_ by calling _obs_source_output_video()_; we can get the frame in _obs_source_update_async_video()_ by calling _obs_source_get_frame()_, then copy the frame data to the _async_texture_.

However, I notice that the _obs_video_thread_ uses _convert_textures_ as the source when calling _gs_stage_texture()_. I'm a novice at OpenGL, and confused about the relationship between the _async_texture_ and the _convert_textures_.
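(For reference, the staging-texture readback pattern that _gs_stage_texture()_ belongs to looks roughly like this. This is a simplified sketch of the libobs graphics API, not the actual _obs_video_thread_ code; the function name `read_back_texture` and the surrounding render-loop context are hypothetical.)

```c
#include <obs.h>

/* Sketch of the GPU-readback pattern: gs_stage_texture() copies a GPU
 * texture into a CPU-accessible staging surface, and gs_stagesurface_map()
 * then exposes the raw pixel bytes to normal C code. */
static void read_back_texture(gs_texture_t *tex, gs_stagesurf_t *stagesurf)
{
	uint8_t *data;
	uint32_t linesize;

	gs_stage_texture(stagesurf, tex); /* GPU texture -> staging surface */

	if (gs_stagesurface_map(stagesurf, &data, &linesize)) {
		/* 'data' now points at the raw frame bytes on the CPU side;
		 * this is where pixel data becomes available off the GPU */
		gs_stagesurface_unmap(stagesurf);
	}
}
```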

So, how does the _rtmp_output_ plugin get video frame data from these textures?

Maybe I misunderstand the source code; please correct me if so.

Thanks,
Choes


----------



## Jim (Sep 9, 2016)

rtmp_output gets encoded data, not raw video data.  If you look at its capability flags, you'll notice it's an output that only takes encoded data -- it has to be linked to audio/video encoders before it can be used.  The raw video/audio data goes to the encoders, then the encoders pass the encoded packets to rtmp_output automatically.  rtmp_output then muxes the packets into FLV and sends it out to the stream.
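(In application code, the linking Jim describes looks roughly like this. This is a hedged sketch using the libobs output/encoder API; settings, the streaming service setup, and error checking are omitted, and the function name `start_stream_sketch` is hypothetical.)

```c
#include <obs.h>

/* Sketch: raw audio/video goes to the encoders, which are linked to the
 * rtmp output; the output only ever sees encoded packets. */
static void start_stream_sketch(void)
{
	obs_encoder_t *venc = obs_video_encoder_create("obs_x264", "h264",
						       NULL, NULL);
	obs_encoder_t *aenc = obs_audio_encoder_create("ffmpeg_aac", "aac",
						       NULL, 0, NULL);
	obs_output_t *out = obs_output_create("rtmp_output", "stream",
					      NULL, NULL);

	/* raw frames/samples flow into the encoders... */
	obs_encoder_set_video(venc, obs_get_video());
	obs_encoder_set_audio(aenc, obs_get_audio());

	/* ...and the encoders' packets flow into the output */
	obs_output_set_video_encoder(out, venc);
	obs_output_set_audio_encoder(out, aenc, 0);

	obs_output_start(out);
}
```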


----------



## choes (Sep 9, 2016)

Jim said:


> rtmp_output gets encoded data, not raw video data.  If you look at its capability flags, you'll notice it's an output that only takes encoded data -- it has to be linked to audio/video encoders before it can be used.  The raw video/audio data goes to the encoders, then the encoders pass the encoded packets to rtmp_output automatically.  rtmp_output then muxes the packets into FLV and sends it out to the stream.


Hello Jim,
Thanks for your reply, but I'm still confused about the process by which the encoders get raw video data from the textures.
How does the encoder (e.g. obs_x264) get its raw video data?


----------



## Jim (Sep 9, 2016)

They get it through their "encode" callback.  Defined in libobs/obs-encoder.h:162, which obs_x264 defines in obs-x264.c:755, which goes to the obs_x264_encode function in obs-x264.c:674.
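(The callback Jim points at has roughly this shape. This is a trimmed sketch mirroring the _obs_encoder_info_ registration pattern; the names `my_encode` and `my_x264_like_encoder` are hypothetical, and most of the struct's fields are omitted.)

```c
#include <obs-module.h>

/* Sketch of an encoder plugin's registration: libobs hands .encode a raw
 * frame and expects an encoded packet back when one is available. */
static bool my_encode(void *data, struct encoder_frame *frame,
		      struct encoder_packet *packet, bool *received_packet)
{
	/* a real encoder feeds frame->data[] planes to the codec and
	 * fills 'packet' with the encoded result */
	*received_packet = false; /* this stub produces no packet */
	return true;
}

struct obs_encoder_info my_encoder = {
	.id = "my_x264_like_encoder",
	.type = OBS_ENCODER_VIDEO,
	.codec = "h264",
	.encode = my_encode,
	/* .get_name, .create, .destroy, etc. omitted */
};
```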


----------



## choes (Sep 9, 2016)

Jim said:


> They get it through their "encode" callback.  Defined in libobs/obs-encoder.h:162, which obs_x264 defines in obs-x264.c:755, which goes to the obs_x264_encode function in obs-x264.c:674.


Hello Jim,
Maybe I didn't express my intention clearly. I'm not good at English; thanks for your patience.
I'd like to figure out the data flow from the source plugin (e.g. _av_capture_input_) to the encoder plugin.

I noticed that _av_capture_input_ puts the captured data into _source->async_frames_ by calling _obs_source_output_video()_, and gets the closest frame by calling _obs_source_get_frame()_ from _obs_source_update_async_video()_, then stores the data in _source->async_texture_ by calling _gs_texture_set_image()_.
Does the encoder plugin get the data from the source plugin's _async_texture_? If so, how does it get data from this texture?
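(For context, the source-side half of the path described above looks roughly like this. A minimal sketch, assuming a BGRA capture; the function name `push_captured_frame` is hypothetical, and real plugins fill in all planes and proper timestamps.)

```c
#include <obs.h>
#include <util/platform.h>

/* Sketch: how an async video source like av_capture_input hands a
 * captured frame to libobs. obs_source_output_video() queues it on
 * source->async_frames; the video thread later uploads it to
 * async_texture via gs_texture_set_image(). */
static void push_captured_frame(obs_source_t *source, uint8_t *pixels,
				uint32_t width, uint32_t height)
{
	struct obs_source_frame frame = {
		.data = {pixels},
		.linesize = {width * 4}, /* 4 bytes per BGRA pixel */
		.width = width,
		.height = height,
		.format = VIDEO_FORMAT_BGRA,
		.timestamp = os_gettime_ns(),
	};

	obs_source_output_video(source, &frame);
}
```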


----------

