No one has attempted to port it yet to my knowledge, though some have expressed the desire. It can't currently compile on Android, but yes, it's possible to port the core libraries (though not the UI), and I personally don't mind as long as it doesn't negatively impact our current work or our development workflow. I can't guarantee that I'll have time to personally help with the code, so you'd need to be able to hold your own to keep it from impacting my own workflow. I'm always around to talk, however.
To port to a different operating system/architecture, there's a fair amount you'll have to do:
1.) Currently it's somewhat dependent on the x86 architecture. There are architecture-specific intrinsics in the code that need to be #ifdef'ed for this sort of thing. Most of the 3D math in libobs/graphics uses x86 SSE intrinsics, for example, and there's some image conversion code in libobs/media-io that uses SSE as well. You'll have to #ifdef those with Android/ARM equivalents or general architecture-independent C code (a sketch of this follows the list).
2.) For graphics, we use a custom abstraction, and it currently requires the OpenGL 3.2 level of shader support. If you can't access desktop OpenGL on Android, you'll probably have to write an OpenGL ES graphics library, which isn't too big a deal as long as it's compatible with our graphics subsystem design (which it should be). OpenGL ES is fairly similar to regular OpenGL, and it uses a similar shading language, so you could probably reuse the same shader conversion code that the regular OpenGL backend uses (see the sketches after this list). I haven't used OpenGL ES myself, so I don't know much about it beyond what little I've skimmed through. We have to use a custom graphics subsystem to maximize capture performance, so it's a bit of an annoyance to have to deal with it.
3.) There's some platform-specific code (primarily in libobs/util) that you'd have to adapt for Android.
4.) You'll have to write capture modules for the Android screen, camera, and microphone (see the last sketch after this list for the general shape of a capture source).
5.) The UI probably isn't something you can port; I'd assume you'd have to write an Android-specific UI instead.
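
For item 1, here's a minimal sketch of the kind of guard I mean. The function itself is a made-up example rather than actual libobs code; the point is the compile-time check with a plain-C fallback that ARM/Android builds would take:

```c
/* Hypothetical example of guarding an SSE path with a plain-C fallback.
 * The function is illustrative only, not code from libobs. */
#if defined(__SSE2__) || defined(_M_X64)
#include <emmintrin.h>

static inline void add_vec4(float *dst, const float *a, const float *b)
{
	/* x86 path: one SSE add for all four components */
	_mm_storeu_ps(dst, _mm_add_ps(_mm_loadu_ps(a), _mm_loadu_ps(b)));
}
#else
/* Architecture-independent fallback; ARM/Android builds would take this
 * path (or use NEON intrinsics behind a similar check). */
static inline void add_vec4(float *dst, const float *a, const float *b)
{
	for (int i = 0; i < 4; i++)
		dst[i] = a[i] + b[i];
}
#endif
```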
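For item 2, the main shader-level difference is the version line and default precision: desktop GL 3.2 core shaders start with "#version 150" while GLES 3.0 shaders start with "#version 300 es" and want a default float precision. A hypothetical helper (the function name is made up) just to illustrate the idea:

```c
#include <stdbool.h>

/* Hypothetical helper for a GLES backend: pick the GLSL version prefix
 * when emitting a converted shader.  Desktop GL 3.2 core = GLSL 1.50,
 * GLES 3.0 = GLSL ES 3.00; the rest of the shader text is largely shared. */
static const char *gl_shader_version_prefix(bool use_gles)
{
	return use_gles ? "#version 300 es\nprecision highp float;\n"
			: "#version 150\n";
}
```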
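For item 4, capture sources are plugins registered through libobs; the rough shape is below. Treat it as a sketch only - the "android_screen_capture" id and the empty callbacks are placeholders, and exact struct fields and callback signatures can differ between libobs versions:

```c
#include <obs-module.h>

OBS_DECLARE_MODULE()

/* Placeholder state for a hypothetical Android screen capture source */
struct android_screen_data {
	obs_source_t *source;
};

static const char *android_screen_get_name(void *unused)
{
	UNUSED_PARAMETER(unused);
	return "Android Screen Capture";
}

static void *android_screen_create(obs_data_t *settings, obs_source_t *source)
{
	UNUSED_PARAMETER(settings);
	struct android_screen_data *data = bzalloc(sizeof(*data));
	data->source = source;
	return data;
}

static void android_screen_destroy(void *data)
{
	bfree(data);
}

static struct obs_source_info android_screen_capture = {
	.id           = "android_screen_capture",
	.type         = OBS_SOURCE_TYPE_INPUT,
	.output_flags = OBS_SOURCE_ASYNC_VIDEO,
	.get_name     = android_screen_get_name,
	.create       = android_screen_create,
	.destroy      = android_screen_destroy,
	/* captured frames would be pushed with obs_source_output_video() */
};

bool obs_module_load(void)
{
	obs_register_source(&android_screen_capture);
	return true;
}
```

Camera and microphone sources would follow the same pattern, with audio sources outputting via obs_source_output_audio() instead.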
API documentation is currently very lacking, though I'm planning on writing thorough documentation (most likely with Sphinx) as soon as I have time. Before doing that, I've been going over some API design quirks I'd like to resolve first, which keep leading me on side-adventures into other features that need to be finished (such as transitions and the audio subsystem overhaul currently being worked on). I'm always around if you have questions - the fastest way to get in touch is IRC.
The core by itself is probably not too big a deal to port, and though the graphics backend might be a little annoying, OpenGL ES isn't too different from regular OpenGL, so I'd expect that to be manageable as well. Ideally you'll also want audio/video capture, and to actually use it you'll need some sort of front-end, so no matter what you still have quite a bit to do.