
Understanding Camera2 API from callbacks (Part 1)

Photo by Markus Spiske temporausch.com from Pexels

Android development is becoming easier to jump into day by day. There are so many convenient development tools, such as Jetpack. However, when it comes to Camera2, there is still a big hurdle for newcomers to understand the whole picture of how it works. I should also note that even well-known Android books and the official documentation do not explain much about the Camera2 API, especially how we should implement it.

One reason why Camera2 confuses developers is that there are so many callbacks we have to manage just to take one picture.
In this series of articles, I will try to explain this complex system and flow by looking into how each callback interacts with the Camera2 API while we capture an image from a device. In this first post, I will describe the callbacks and how they are used when we launch a camera and render a preview.

Before we go on, here is a sample project I made for this article. It's forked from https://github.com/googlesamples/android-Camera2Basic. I pulled almost all of the camera-related functionality out of Camera2BasicFragment.kt and placed it in Camera.kt.

5 callbacks you need to know

Let me first explain the important callbacks of the Camera2 API. There are actually five of them. Yeah, five. There are more if you want, but I chose the ones you really should understand.

TextureView.SurfaceTextureListener

This callback has actually been around since the Camera1 API. TextureView is the view that renders the captured camera image data. TextureView is prepared at view creation, and this callback notifies us when it is ready, so we can start initializing the camera device.
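As a minimal sketch (the openCamera() and configureTransform() calls are illustrative placeholders, not the sample project's exact API), registering the listener looks roughly like this:

```kotlin
// Sketch: listen for TextureView readiness before touching the camera device.
textureView.surfaceTextureListener = object : TextureView.SurfaceTextureListener {
    override fun onSurfaceTextureAvailable(texture: SurfaceTexture, width: Int, height: Int) {
        openCamera(width, height) // placeholder: start the camera setup flow
    }

    override fun onSurfaceTextureSizeChanged(texture: SurfaceTexture, width: Int, height: Int) {
        configureTransform(width, height) // placeholder: re-apply the preview transform
    }

    override fun onSurfaceTextureDestroyed(texture: SurfaceTexture): Boolean = true

    override fun onSurfaceTextureUpdated(texture: SurfaceTexture) = Unit
}
```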

CameraDevice.StateCallback()

This is used to track the camera device's state. It is required to open a camera.
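A bare-bones sketch of the three states we have to handle:

```kotlin
// Sketch: the camera device becomes usable only after onOpened() fires.
val deviceStateCallback = object : CameraDevice.StateCallback() {
    override fun onOpened(camera: CameraDevice) {
        // Keep the CameraDevice reference and continue with session setup.
    }

    override fun onDisconnected(camera: CameraDevice) {
        camera.close()
    }

    override fun onError(camera: CameraDevice, error: Int) {
        camera.close()
    }
}
```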

CameraCaptureSession.StateCallback()

A callback for the configuration of a capture session from a camera. It is needed to check whether the session is configured and ready to show a preview.
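In skeleton form there are only two states to care about:

```kotlin
// Sketch: the session is only usable once onConfigured() is called.
val sessionStateCallback = object : CameraCaptureSession.StateCallback() {
    override fun onConfigured(session: CameraCaptureSession) {
        // Safe to start the preview with setRepeatingRequest() from here.
    }

    override fun onConfigureFailed(session: CameraCaptureSession) {
        // The surfaces or sizes we requested could not be configured.
    }
}
```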

CameraCaptureSession.CaptureCallback()

This callback tracks the progress of capture requests. Whenever the user requests a focus or a still picture, CameraCaptureSession reports the progress and results through this callback.
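A minimal sketch of what this callback typically overrides:

```kotlin
// Sketch: CaptureCallback reports the progress and result of each capture request,
// e.g. the auto-focus state while locking focus.
val captureCallback = object : CameraCaptureSession.CaptureCallback() {
    override fun onCaptureProgressed(
        session: CameraCaptureSession,
        request: CaptureRequest,
        partialResult: CaptureResult
    ) {
        // Inspect partial results, e.g. partialResult.get(CaptureResult.CONTROL_AF_STATE).
    }

    override fun onCaptureCompleted(
        session: CameraCaptureSession,
        request: CaptureRequest,
        result: TotalCaptureResult
    ) {
        // Final result for this request; decide whether to capture the still picture.
    }
}
```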

ImageReader.OnImageAvailableListener

This callback returns an image when CameraCaptureSession completes a capture. You need to set it on the ImageReader before capturing a still picture.
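A small sketch of wiring it up (width, height and backgroundHandler are placeholders for values set up elsewhere):

```kotlin
// Sketch: an ImageReader sized for still captures; the listener fires when a frame is ready.
val imageReader = ImageReader.newInstance(width, height, ImageFormat.JPEG, 2)
imageReader.setOnImageAvailableListener({ reader ->
    val image = reader.acquireNextImage()
    try {
        // Copy the JPEG bytes out of image.planes[0].buffer and save them to a file.
    } finally {
        image.close() // always close the Image, or the reader runs out of buffers
    }
}, backgroundHandler)
```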

Open a camera and render a preview

Now let’s see how we open a camera and render a camera preview.
The figure below shows the order of the tasks that are processed when we launch a camera.

First, we need to wait for TextureView to get ready, meaning that its width and height are calculated and its SurfaceTexture is instantiated. We can do this by setting up TextureView.SurfaceTextureListener.

Once TextureView is ready we launch the camera.
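Here is a rough sketch of what that launch method can look like. The helper names (Camera.initInstance(), setUpCameraOutputs(), configureTransform()) are illustrative placeholders rather than the sample project's exact API; the comments match the numbered steps below.

```kotlin
// Sketch of a camera-launch method; helper names are placeholders.
private fun launchCamera(width: Int, height: Int) {
    val manager =
        activity.getSystemService(Context.CAMERA_SERVICE) as CameraManager // (1) CameraManager
    camera = Camera.initInstance(manager)                                  // (2) pick a cameraId
    val previewSize = camera.setUpCameraOutputs(width, height)             // (3) preview size & orientation
    configureTransform(width, height, previewSize)                         // (4) transform matrix for TextureView
    camera.open()                                                          // (5)(6) background thread + openCamera
}
```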

There are a couple of things happening in the method above. Let me go through them one by one.

(1) Create Camera Manager

CameraManager is a class that controls camera operations. It exposes numerous device settings, which we can access through getCameraCharacteristics.
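A minimal sketch, assuming we already know the cameraId we are interested in:

```kotlin
// Sketch: obtain CameraManager and read one device's characteristics.
val manager = context.getSystemService(Context.CAMERA_SERVICE) as CameraManager
val characteristics = manager.getCameraCharacteristics(cameraId)
val sensorOrientation = characteristics.get(CameraCharacteristics.SENSOR_ORIENTATION)
```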

(2) Setup cameraId

We instantiate the Camera class. Inside, we set up the camera device id. A device usually has multiple cameras, such as front and back, so we need to select which one to use.
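A sketch of picking the back-facing camera from the list the manager exposes:

```kotlin
// Sketch: select the first back-facing camera id.
val cameraId = manager.cameraIdList.first { id ->
    manager.getCameraCharacteristics(id)
        .get(CameraCharacteristics.LENS_FACING) == CameraCharacteristics.LENS_FACING_BACK
}
```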

(3) Setup CameraOutput

Based on the given width and height of TextureView and the sensor orientation, we calculate the preview size and the aspect ratio of TextureView.
Each camera has a set of supported sizes, and we usually take the largest one we can get.
The rotation value we get from the camera sensor (CameraCharacteristics.SENSOR_ORIENTATION) and the actual rotation of the window display (activity.windowManager.defaultDisplay.rotation) can differ, so we also need to check whether we should swap the width and height of the preview, as shown in the sketch below.
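A sketch of that swap check, following the same logic as the Camera2Basic sample:

```kotlin
// Sketch: decide whether the preview width/height need to be swapped,
// based on the display rotation vs. the camera sensor's orientation.
val displayRotation = activity.windowManager.defaultDisplay.rotation
val sensorOrientation = characteristics.get(CameraCharacteristics.SENSOR_ORIENTATION) ?: 0
val swappedDimensions = when (displayRotation) {
    Surface.ROTATION_0, Surface.ROTATION_180 -> sensorOrientation == 90 || sensorOrientation == 270
    Surface.ROTATION_90, Surface.ROTATION_270 -> sensorOrientation == 0 || sensorOrientation == 180
    else -> false
}
val rotatedWidth = if (swappedDimensions) height else width
val rotatedHeight = if (swappedDimensions) width else height
```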

(4) Set the transformation matrix to TextureView

Once the preview size is fixed, we need to configure a transform matrix so that the preview will be properly rendered on TextureView.
There's a bunch of calculation happening at (3) and (4), so I won't write down the code here, but you can refer to the sample project.

(5) Start Background thread

(5) and (6) happen inside camera.open().

Once TextureView is ready, we will start the background thread that handles all heavy capturing tasks.
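A typical sketch of starting that thread with a HandlerThread:

```kotlin
// Sketch: a HandlerThread keeps camera callbacks off the main thread.
private var backgroundThread: HandlerThread? = null
private var backgroundHandler: Handler? = null

private fun startBackgroundThread() {
    backgroundThread = HandlerThread("CameraBackground").also { it.start() }
    backgroundHandler = Handler(backgroundThread!!.looper)
}
```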

(6) Open a connection to the camera with the given id

We need a Semaphore that locks the thread until the camera is properly opened.

As I explained previously, CameraDevice.StateCallback() notifies us once the CameraDevice is ready. At that point we release the lock and keep the CameraDevice reference.
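Putting (6) together as a sketch (it assumes the CAMERA permission has already been granted, backgroundHandler comes from step (5), and cameraDevice is a property held by the class):

```kotlin
// Sketch: acquire the lock, open the camera, and release the lock in the state callback.
private val cameraOpenCloseLock = Semaphore(1)

private fun openCameraDevice(manager: CameraManager, cameraId: String) {
    if (!cameraOpenCloseLock.tryAcquire(2500, TimeUnit.MILLISECONDS)) {
        throw RuntimeException("Time out waiting to lock camera opening.")
    }
    manager.openCamera(cameraId, object : CameraDevice.StateCallback() {
        override fun onOpened(device: CameraDevice) {
            cameraOpenCloseLock.release()
            cameraDevice = device // keep the reference for session creation
        }

        override fun onDisconnected(device: CameraDevice) {
            cameraOpenCloseLock.release()
            device.close()
        }

        override fun onError(device: CameraDevice, error: Int) {
            cameraOpenCloseLock.release()
            device.close()
        }
    }, backgroundHandler)
}
```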

(7) Setup the image buffer size to TextureView

TextureView requires the image buffer size in order to render the image input from the camera. That size is defined by the preview size we calculated earlier.
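A short sketch, with previewSize coming from step (3):

```kotlin
// Sketch: tell the SurfaceTexture how big the incoming camera frames will be,
// then wrap it in a Surface so it can be used as a capture output.
val texture = textureView.surfaceTexture!!
texture.setDefaultBufferSize(previewSize.width, previewSize.height)
val previewSurface = Surface(texture)
```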

(8) Start Camera

The next step is to start the camera. This is done by calling cameraDevice?.createCaptureSession.
We also need to pass the list of Surfaces that will receive the captured image data.

CameraCaptureSession.StateCallback() will return a CameraCaptureSession, which allows us to start the preview.
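As a sketch, using the preview Surface from step (7) and the ImageReader's Surface for still captures (captureSession is a class property, and startPreview() is a placeholder for step (9)):

```kotlin
// Sketch: create the capture session with the surfaces that will receive output.
cameraDevice?.createCaptureSession(
    listOf(previewSurface, imageReader.surface),
    object : CameraCaptureSession.StateCallback() {
        override fun onConfigured(session: CameraCaptureSession) {
            captureSession = session // the session is ready
            startPreview()           // placeholder: step (9)
        }

        override fun onConfigureFailed(session: CameraCaptureSession) {
            // Handle the failure, e.g. show an error to the user.
        }
    },
    backgroundHandler
)
```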

(9) (Finally) Start the camera preview

We can now render the preview by calling captureSession.setRepeatingRequest. Again, we need to hold the lock until the request is properly set on the CaptureSession.
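A sketch of the repeating preview request (captureCallback is the CameraCaptureSession.CaptureCallback described earlier):

```kotlin
// Sketch: build a preview request targeting the TextureView surface and repeat it.
val previewRequestBuilder =
    cameraDevice!!.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW).apply {
        addTarget(previewSurface)
        set(CaptureRequest.CONTROL_AF_MODE, CaptureRequest.CONTROL_AF_MODE_CONTINUOUS_PICTURE)
    }
captureSession?.setRepeatingRequest(
    previewRequestBuilder.build(),
    captureCallback, // reports focus/exposure state for each preview frame
    backgroundHandler
)
```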

Wrap up

Three callbacks, TextureView.SurfaceTextureListener, CameraDevice.StateCallback(), and CameraCaptureSession.StateCallback(), are used to initialize TextureView, CameraDevice, and CameraCaptureSession. Rather than just knowing each class's functionality, understanding the callbacks helps us observe what is happening while a camera is launching.
But this is just the beginning. In the next blog post I'll write about how to capture an image from a camera device and write it to a file. Stay tuned!

