I'm making a video streaming app that adapts the video bitrate to the available uplink bandwidth, and I'd like it to change the video resolution dynamically so that there aren't as many compression artifacts at lower bitrates. I've got this working by releasing the MediaCodec, calling stopRepeating() on the CameraCaptureSession, and then configuring everything for the new resolution, but this causes a very noticeable interruption in the stream - at least half a second in my tests.
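For reference, my current switching logic looks roughly like this (simplified; method names like switchResolution and createConfiguredEncoder are my own):

```java
// Simplified sketch of the current approach. This is where the ~500 ms
// gap comes from: the whole pipeline is torn down and rebuilt on every
// resolution change.
private void switchResolution(Size newSize) throws CameraAccessException {
    captureSession.stopRepeating();   // halt the repeating capture request
    captureSession.close();           // the old surface set is now stale
    encoder.stop();
    encoder.release();                // the encoder owns the old input surface

    // Configure a new MediaCodec for the new size (configured but not
    // yet started, so createInputSurface() is still legal to call).
    encoder = createConfiguredEncoder(newSize);
    Surface encoderSurface = encoder.createInputSurface();

    // createCaptureSession() is asynchronous: frames only start flowing
    // again once StateCallback.onConfigured() fires, hence the visible gap.
    cameraDevice.createCaptureSession(
            Arrays.asList(previewSurface, encoderSurface),
            sessionStateCallback, backgroundHandler);
}
```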
I use OpenGL to scale the image for cases when the camera doesn't support the required resolution natively, similar to this. I initialize the capture session with two surfaces - one for the preview shown to the user (a TextureView) and one for the encoder, which is either MediaCodec's input surface directly or my OpenGL texture surface.
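The two-surface setup is roughly this (simplified; scalerSurface and needsScaling are my own names for the OpenGL path):

```java
// Sketch of the session setup. The encoder target is either the
// MediaCodec input surface directly, or the surface backed by my
// OpenGL scaler when the camera can't produce the resolution natively.
Surface previewSurface = new Surface(textureView.getSurfaceTexture());
Surface encoderTarget = needsScaling ? scalerSurface
                                     : mediaCodec.createInputSurface();
cameraDevice.createCaptureSession(
        Arrays.asList(previewSurface, encoderTarget),
        new CameraCaptureSession.StateCallback() {
            @Override
            public void onConfigured(CameraCaptureSession session) {
                // Build a repeating request targeting both surfaces
                // and call session.setRepeatingRequest(...) here.
            }
            @Override
            public void onConfigureFailed(CameraCaptureSession session) { }
        }, backgroundHandler);
```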
This could potentially be solved with MediaCodec.createPersistentInputSurface(): I'd be able to reuse the same scaler instance across resolution changes and wouldn't have to touch the capture session at all, because as far as the camera is concerned no surface change occurs. But it's only available since API 23, and I need this implementation to support API 21 as well.
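On API 23+ the codec swap would look something like this (a sketch of the direct-encoding case; newFormat is assumed to be a MediaFormat for the new resolution):

```java
// The camera (or the scaler) keeps rendering into the same persistent
// surface while the codec behind it is replaced, so the capture session
// never needs to be reconfigured.
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.M) {
    Surface persistent = MediaCodec.createPersistentInputSurface();
    // ... pass "persistent" into the capture session once, up front ...

    // On a resolution change, only the codec is replaced:
    encoder.stop();
    encoder.release();
    encoder = MediaCodec.createEncoderByType("video/avc");
    encoder.configure(newFormat, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
    encoder.setInputSurface(persistent);  // reuse the same surface
    encoder.start();
}
```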
Then there's also the issue of surfaces getting invalidated and recreated. For example, when the user presses the back button, the activity, and the TextureView it contains, are destroyed, making the preview surface invalid. When the user navigates to that activity again, a new TextureView is created, and I need to start showing the preview in it without introducing any lag into the stream seen by the scaler/encoder.
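This is how I currently reattach the preview when the activity comes back (simplified; recreateCaptureSession is my own method), and it's exactly the step that stalls the encoder feed, because attaching the new preview surface means building a new session:

```java
textureView.setSurfaceTextureListener(new TextureView.SurfaceTextureListener() {
    @Override
    public void onSurfaceTextureAvailable(SurfaceTexture texture, int w, int h) {
        previewSurface = new Surface(texture);
        // Rebuilds the session with the new preview surface plus the
        // (unchanged) encoder surface - this is what interrupts the stream.
        recreateCaptureSession();
    }
    @Override
    public void onSurfaceTextureSizeChanged(SurfaceTexture texture, int w, int h) { }
    @Override
    public boolean onSurfaceTextureDestroyed(SurfaceTexture texture) { return true; }
    @Override
    public void onSurfaceTextureUpdated(SurfaceTexture texture) { }
});
```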
So, my question in general: how do I change the set of output surfaces in a CameraCaptureSession, or recreate the CameraCaptureSession, while introducing as little lag into the video stream as possible?