A Detailed Explanation of the Camera2 Custom Camera Development Process

Preface

Today I will walk you through the Camera2 camera development process in detail.

Android 5.0 completely redesigned the camera API and introduced the new Camera2 API. These APIs not only greatly improve the camera capabilities of the Android system, but also add support for RAW output and allow applications to control the camera's focus mode, exposure mode, shutter, and more.

1. Camera2 Architecture Introduction

Camera2 uses the concept of a pipeline to connect an Android device to its camera. The system sends a CaptureRequest to the camera, and the camera returns CameraMetadata; all of this happens inside a session called a CameraCaptureSession.

The core roles in the architecture are CameraManager, CameraDevice, CameraCharacteristics, CaptureRequest and CaptureRequest.Builder, CameraCaptureSession, and CaptureResult.

1. CameraManager

Located at android.hardware.camera2.CameraManager, it was added in API level 21 (Android 5.0). Like other system services, it is obtained through Context.getSystemService(CameraManager.class) or Context.getSystemService(Context.CAMERA_SERVICE). It is mainly used to manage the system cameras:

  • Get the list of cameras on the device through the getCameraIdList() method;
  • Get a camera's detailed parameters and supported features through getCameraCharacteristics(String cameraId);
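
As a quick illustration of these two methods, here is a minimal sketch (dumpCameras is a hypothetical helper, not part of the Camera2Helper class built later in this article):

```kotlin
import android.content.Context
import android.hardware.camera2.CameraCharacteristics
import android.hardware.camera2.CameraManager
import android.util.Log

// Sketch: list every camera on the device and log its facing and hardware support level
fun dumpCameras(context: Context) {
    val manager = context.getSystemService(Context.CAMERA_SERVICE) as CameraManager
    for (id in manager.cameraIdList) {
        val characteristics = manager.getCameraCharacteristics(id)
        val facing = characteristics.get(CameraCharacteristics.LENS_FACING)
        val level = characteristics.get(CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL)
        Log.d("CameraInfo", "camera $id, facing=$facing, hardwareLevel=$level")
    }
}
```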

2. CameraDevice

  • CameraDevice is Camera2's abstraction of a camera, corresponding directly to a hardware camera on the device;
  • A camera's capabilities are divided into hardware support levels (LEGACY, LIMITED, and FULL). At the LEGACY level, Camera2 offers roughly the same functionality as the older Camera API. Beyond that, in the Camera2 architecture CameraDevice also takes on several important tasks:
  • Monitor the camera state through CameraDevice.StateCallback (mainly onOpened, onClosed, onDisconnected, and onError);
  • Manage CameraCaptureSession: create sessions through the createCaptureSession() and createReprocessableCaptureSession() methods. Usually the preview session is created inside CameraDevice.StateCallback.onOpened();
  • Manage CaptureRequest: create capture requests through createCaptureRequest(int templateType). Previewing, taking a picture, and resuming the preview each require creating a request;

3. CameraCaptureSession

  • The system sends a CaptureRequest to the camera and the camera returns CameraMetadata; all of this takes place in the CameraCaptureSession created from the corresponding CameraDevice. Whenever the program needs to preview, take a picture, or resume the preview, it must go through the session;
  • CameraCaptureSession has several important callbacks and methods:
  • CameraCaptureSession.StateCallback, a state callback used to receive updates about the session's state;
  • CameraCaptureSession.CaptureCallback, a capture callback used to receive the progress and results of submitted capture requests; its methods are triggered when a capture starts, when it completes, and when an error occurs while capturing an image;
  • Calling the capture() method submits a single-image capture request, i.e. takes a photo. The request defines all parameters for capturing one image, including the sensor, lens, flash, and post-processing settings. Each request produces a CaptureResult and can generate new frames for one or more Surfaces, which are attached to the request through the addTarget(Surface) method of CaptureRequest.Builder;
  • Calling the setRepeatingRequest() method requests repeated image capture, i.e. starts the preview;
  • Calling stopRepeating() stops the repeated capture, i.e. stops the preview.

4. CameraCharacteristics

An object describing the properties of a camera device; it is obtained from CameraManager via getCameraCharacteristics(String cameraId).

5. CaptureRequest and CaptureRequest.Builder

CaptureRequest represents a single capture request, and CaptureRequest.Builder describes the settings for capturing an image, including the capture hardware (sensor, lens, flash), the focus and exposure modes, the processing pipeline, control algorithms, and the output buffer configuration; the built request is then submitted to the corresponding session. CaptureRequest.Builder is responsible for producing the CaptureRequest object: when the program calls setRepeatingRequest() for preview or capture() to take a photo, a CaptureRequest must be passed in. A builder is obtained by calling createCaptureRequest() on the CameraDevice, and its build() method produces the CaptureRequest.

6. CaptureResult

A CaptureResult is an object describing a subset of the results of capturing a single image from the image sensor; it is produced by the CameraDevice after a CaptureRequest has been processed.

7. How the Main Camera2 Roles Connect

CameraManager sits at the top of the hierarchy: it detects and enumerates all cameras and their characteristics, and opens a specified camera by passing in a CameraDevice.StateCallback. CameraDevice is the abstraction of the opened camera: it reports its state through CameraDevice.StateCallback and is responsible for creating the CameraCaptureSession and the CaptureRequest.
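
To make these relationships concrete, here is a condensed sketch of the whole chain, from CameraManager to a repeating preview request. It is a simplification of the code developed step by step in the next chapter; startPreview and its parameters are illustrative only:

```kotlin
import android.annotation.SuppressLint
import android.content.Context
import android.hardware.camera2.CameraCaptureSession
import android.hardware.camera2.CameraDevice
import android.hardware.camera2.CameraManager
import android.os.Handler
import android.view.Surface

// Condensed sketch of the pipeline: open a camera, build a request, create a session, start the preview.
// Assumes the CAMERA permission has already been granted and that previewSurface/handler are set up by the caller.
@SuppressLint("MissingPermission")
fun startPreview(context: Context, cameraId: String, previewSurface: Surface, handler: Handler) {
    val manager = context.getSystemService(Context.CAMERA_SERVICE) as CameraManager
    manager.openCamera(cameraId, object : CameraDevice.StateCallback() {
        override fun onOpened(device: CameraDevice) {
            val builder = device.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW)
            builder.addTarget(previewSurface)
            device.createCaptureSession(listOf(previewSurface), object : CameraCaptureSession.StateCallback() {
                override fun onConfigured(session: CameraCaptureSession) {
                    session.setRepeatingRequest(builder.build(), null, handler)  // repeating request == preview
                }
                override fun onConfigureFailed(session: CameraCaptureSession) { /* surface configuration failed */ }
            }, handler)
        }
        override fun onDisconnected(device: CameraDevice) = device.close()
        override fun onError(device: CameraDevice, error: Int) = device.close()
    }, handler)
}
```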

2. The Camera2 Camera Development Process in Detail

1. Declare the required permissions

```xml
<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
```
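
Declaring the permissions in the manifest is not enough on Android 6.0 (API 23) and above, where CAMERA and WRITE_EXTERNAL_STORAGE are dangerous permissions that must also be granted at runtime. A minimal sketch of the runtime request (the request code 100 is arbitrary):

```kotlin
import android.Manifest
import android.app.Activity
import android.content.pm.PackageManager
import androidx.core.app.ActivityCompat
import androidx.core.content.ContextCompat

// Sketch: request the dangerous permissions at runtime before opening the camera
fun requestCameraPermissionsIfNeeded(activity: Activity) {
    if (ContextCompat.checkSelfPermission(activity, Manifest.permission.CAMERA)
        != PackageManager.PERMISSION_GRANTED) {
        ActivityCompat.requestPermissions(
            activity,
            arrayOf(Manifest.permission.CAMERA, Manifest.permission.WRITE_EXTERNAL_STORAGE),
            100
        )
    }
}
```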

2. Define a TextureView in the xml layout file

```xml
<TextureView
    android:id="@+id/textureView"
    android:layout_width="match_parent"
    android:layout_height="match_parent" />
```
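
One thing to note: initCameraInfo() below calls mTextureView.setAspectRatio(), which the framework TextureView does not provide. The helper therefore assumes a custom subclass along the lines of AutoFitTextureView from Google's camera2 samples; if you use one, reference it in the layout by its fully qualified class name instead of TextureView. A minimal sketch:

```kotlin
import android.content.Context
import android.util.AttributeSet
import android.view.TextureView
import android.view.View

// Minimal TextureView subclass that keeps a fixed aspect ratio so the preview is not stretched
class AutoFitTextureView @JvmOverloads constructor(
    context: Context, attrs: AttributeSet? = null, defStyle: Int = 0
) : TextureView(context, attrs, defStyle) {

    private var ratioWidth = 0
    private var ratioHeight = 0

    fun setAspectRatio(width: Int, height: Int) {
        require(width > 0 && height > 0) { "Size cannot be negative." }
        ratioWidth = width
        ratioHeight = height
        requestLayout()
    }

    override fun onMeasure(widthMeasureSpec: Int, heightMeasureSpec: Int) {
        super.onMeasure(widthMeasureSpec, heightMeasureSpec)
        val width = View.MeasureSpec.getSize(widthMeasureSpec)
        val height = View.MeasureSpec.getSize(heightMeasureSpec)
        if (ratioWidth == 0 || ratioHeight == 0) {
            setMeasuredDimension(width, height)
        } else if (width < height * ratioWidth / ratioHeight) {
            setMeasuredDimension(width, width * ratioHeight / ratioWidth)
        } else {
            setMeasuredDimension(height * ratioWidth / ratioHeight, height)
        }
    }
}
```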

3. Create a Camera2Helper class and set a SurfaceTextureListener on the TextureView

```kotlin
class Camera2Helper(val mActivity: Activity, private val mTextureView: TextureView) {

    companion object {
        const val PREVIEW_WIDTH = 720    // preview width
        const val PREVIEW_HEIGHT = 1280  // preview height
        const val SAVE_WIDTH = 720       // saved image width
        const val SAVE_HEIGHT = 1280     // saved image height
    }

    private lateinit var mCameraManager: CameraManager
    private var mImageReader: ImageReader? = null
    private var mCameraDevice: CameraDevice? = null
    private var mCameraCaptureSession: CameraCaptureSession? = null

    private var mCameraId = "0"
    private lateinit var mCameraCharacteristics: CameraCharacteristics

    private var mCameraSensorOrientation = 0                                        // camera sensor orientation
    private var mCameraFacing = CameraCharacteristics.LENS_FACING_BACK             // use the rear camera by default
    private val mDisplayRotation = mActivity.windowManager.defaultDisplay.rotation // display rotation of the phone

    private var canTakePic = true          // whether a photo can currently be taken
    private var canExchangeCamera = false  // whether the camera can currently be switched

    private var mCameraHandler: Handler
    private val handlerThread = HandlerThread("CameraThread")

    private var mPreviewSize = Size(PREVIEW_WIDTH, PREVIEW_HEIGHT)  // preview size
    private var mSavePicSize = Size(SAVE_WIDTH, SAVE_HEIGHT)        // saved picture size

    init {
        handlerThread.start()
        mCameraHandler = Handler(handlerThread.looper)

        mTextureView.surfaceTextureListener = object : TextureView.SurfaceTextureListener {

            override fun onSurfaceTextureSizeChanged(surface: SurfaceTexture, width: Int, height: Int) {}

            override fun onSurfaceTextureUpdated(surface: SurfaceTexture) {}

            override fun onSurfaceTextureDestroyed(surface: SurfaceTexture): Boolean {
                releaseCamera()
                return true
            }

            override fun onSurfaceTextureAvailable(surface: SurfaceTexture, width: Int, height: Int) {
                initCameraInfo()
            }
        }
    }
}
```
  • Opening the camera and creating a session are time-consuming operations, so a HandlerThread is started and they are run on that background thread;
  • There are two size variables: the preview size (what is displayed on screen) and the saved-image size (the size of the image written to storage);
  • There are also two orientations: the display orientation (0 for a portrait-locked app) and the camera sensor orientation (typically 270 for the front camera and 90 for the rear camera);
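
The toast() and log() calls used throughout the helper are not framework APIs; the article assumes project helpers roughly like the following sketch (names chosen to match the usage above):

```kotlin
import android.content.Context
import android.util.Log
import android.widget.Toast

// Possible implementations of the helpers used throughout Camera2Helper (assumed, not shown in the original)
fun Context.toast(msg: String) = Toast.makeText(this, msg, Toast.LENGTH_SHORT).show()

fun log(msg: String) {
    Log.d("Camera2Helper", msg)
}
```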

4. Initialize related parameters

```kotlin
/**
 * Initialization
 */
private fun initCameraInfo() {
    mCameraManager = mActivity.getSystemService(Context.CAMERA_SERVICE) as CameraManager
    val cameraIdList = mCameraManager.cameraIdList
    if (cameraIdList.isEmpty()) {
        mActivity.toast("No camera available")
        return
    }

    for (id in cameraIdList) {
        val cameraCharacteristics = mCameraManager.getCameraCharacteristics(id)
        val facing = cameraCharacteristics.get(CameraCharacteristics.LENS_FACING)
        if (facing == mCameraFacing) {
            mCameraId = id
            mCameraCharacteristics = cameraCharacteristics
        }
        log("camera $id in device")
    }

    val supportLevel = mCameraCharacteristics.get(CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL)
    if (supportLevel == CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL_LEGACY) {
        mActivity.toast("Camera hardware does not support new features")
    }

    // Get the camera sensor orientation
    mCameraSensorOrientation = mCameraCharacteristics.get(CameraCharacteristics.SENSOR_ORIENTATION)!!
    // Get the StreamConfigurationMap, which manages all output formats and sizes supported by the camera
    val configurationMap = mCameraCharacteristics.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP)!!

    val savePicSize = configurationMap.getOutputSizes(ImageFormat.JPEG)            // supported sizes for saved photos
    val previewSize = configurationMap.getOutputSizes(SurfaceTexture::class.java)  // supported preview sizes

    val exchange = exchangeWidthAndHeight(mDisplayRotation, mCameraSensorOrientation)

    mSavePicSize = getBestSize(
        if (exchange) mSavePicSize.height else mSavePicSize.width,
        if (exchange) mSavePicSize.width else mSavePicSize.height,
        if (exchange) mSavePicSize.height else mSavePicSize.width,
        if (exchange) mSavePicSize.width else mSavePicSize.height,
        savePicSize.toList())

    mPreviewSize = getBestSize(
        if (exchange) mPreviewSize.height else mPreviewSize.width,
        if (exchange) mPreviewSize.width else mPreviewSize.height,
        if (exchange) mTextureView.height else mTextureView.width,
        if (exchange) mTextureView.width else mTextureView.height,
        previewSize.toList())

    mTextureView.surfaceTexture?.setDefaultBufferSize(mPreviewSize.width, mPreviewSize.height)

    log("Optimal preview size: ${mPreviewSize.width} * ${mPreviewSize.height}, ratio ${mPreviewSize.width.toFloat() / mPreviewSize.height}")
    log("Optimal size for saving pictures: ${mSavePicSize.width} * ${mSavePicSize.height}, ratio ${mSavePicSize.width.toFloat() / mSavePicSize.height}")

    // Adjust the size of the TextureView according to the preview size so the image is not stretched
    val orientation = mActivity.resources.configuration.orientation
    if (orientation == Configuration.ORIENTATION_LANDSCAPE)
        mTextureView.setAspectRatio(mPreviewSize.width, mPreviewSize.height)
    else
        mTextureView.setAspectRatio(mPreviewSize.height, mPreviewSize.width)

    mImageReader = ImageReader.newInstance(mPreviewSize.width, mPreviewSize.height, ImageFormat.JPEG, 1)
    mImageReader?.setOnImageAvailableListener(onImageAvailableListener, mCameraHandler)

    if (openFaceDetect)      // optional face detection supported by the full helper class (not covered in this article)
        initFaceDetect()

    openCamera()
}

/**
 * Returns whether width and height need to be swapped, based on the display rotation [displayRotation]
 * and the camera sensor orientation [sensorOrientation]
 */
private fun exchangeWidthAndHeight(displayRotation: Int, sensorOrientation: Int): Boolean {
    var exchange = false
    when (displayRotation) {
        Surface.ROTATION_0, Surface.ROTATION_180 ->
            if (sensorOrientation == 90 || sensorOrientation == 270) {
                exchange = true
            }
        Surface.ROTATION_90, Surface.ROTATION_270 ->
            if (sensorOrientation == 0 || sensorOrientation == 180) {
                exchange = true
            }
        else -> log("Display rotation is invalid: $displayRotation")
    }

    log("Display rotation: $displayRotation")
    log("Camera sensor orientation: $sensorOrientation")
    return exchange
}

/**
 * Returns the size that equals, or is closest to, the specified width and height
 *
 * @param targetWidth  target width
 * @param targetHeight target height
 * @param maxWidth     maximum width (i.e. the width of the TextureView)
 * @param maxHeight    maximum height (i.e. the height of the TextureView)
 * @param sizeList     list of supported sizes
 *
 * @return the size equal to, or closest to, the specified width and height
 */
private fun getBestSize(targetWidth: Int, targetHeight: Int, maxWidth: Int, maxHeight: Int, sizeList: List<Size>): Size {
    val bigEnough = ArrayList<Size>()     // sizes at least as large as the target width and height
    val notBigEnough = ArrayList<Size>()  // sizes smaller than the target width and height

    for (size in sizeList) {
        // width <= max width && height <= max height && aspect ratio == target aspect ratio
        if (size.width <= maxWidth && size.height <= maxHeight
            && size.width == size.height * targetWidth / targetHeight) {
            if (size.width >= targetWidth && size.height >= targetHeight)
                bigEnough.add(size)
            else
                notBigEnough.add(size)
        }
        log("System supported size: ${size.width} * ${size.height}, ratio: ${size.width.toFloat() / size.height}")
    }

    log("Maximum size: $maxWidth * $maxHeight, ratio: ${targetWidth.toFloat() / targetHeight}")
    log("Target size: $targetWidth * $targetHeight, ratio: ${targetWidth.toFloat() / targetHeight}")

    // Choose the smallest of bigEnough, otherwise the largest of notBigEnough
    return when {
        bigEnough.size > 0 -> Collections.min(bigEnough, CompareSizesByArea())
        notBigEnough.size > 0 -> Collections.max(notBigEnough, CompareSizesByArea())
        else -> sizeList[0]
    }
}
```
  • First, get the CameraManager instance through mActivity.getSystemService(Context.CAMERA_SERVICE) as CameraManager;
  • By looping through the available cameras in the device, various information about the camera can be obtained through mCameraManager.getCameraCharacteristics(id);
  • mCameraCharacteristics.get(CameraCharacteristics.SENSOR_ORIENTATION) gets the direction of the camera sensor;
  • Get the preview size and saved image size supported by the camera through configurationMap.getOutputSizes(ImageFormat.JPEG) and configurationMap.getOutputSizes(SurfaceTexture::class.java);
  • The function of the exchangeWidthAndHeight(displayRotation: Int, sensorOrientation: Int) method is to determine whether the width and height need to be exchanged based on the screen orientation and camera orientation;
  • The getBestSize() method picks the optimal size: given the target width and height, the maximum width and height (i.e. the TextureView size), and the list of sizes supported by the camera, it returns the best match from that list;
  • Use the mTextureView.surfaceTexture.setDefaultBufferSize() method to set the preview size of TextureView;
  • Create an ImageReader object and set the callback function;
  • Process the obtained image data in onImageAvailableListener;
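
Note that getBestSize() relies on a CompareSizesByArea comparator that is not listed in the article; it presumably compares sizes by pixel area, along the lines of the comparator used in Google's camera2 samples:

```kotlin
import android.util.Size

// Compares two Sizes by their pixel area, used to pick the smallest/largest candidate size
class CompareSizesByArea : Comparator<Size> {
    override fun compare(lhs: Size, rhs: Size): Int =
        // Multiply as Long to avoid overflow for large widths and heights
        (lhs.width.toLong() * lhs.height).compareTo(rhs.width.toLong() * rhs.height)
}
```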

5. Open the camera

```kotlin
/**
 * Open the camera
 */
private fun openCamera() {
    if (ContextCompat.checkSelfPermission(mActivity, Manifest.permission.CAMERA) != PackageManager.PERMISSION_GRANTED) {
        mActivity.toast("No camera permission!")
        return
    }

    mCameraManager.openCamera(mCameraId, object : CameraDevice.StateCallback() {

        override fun onOpened(camera: CameraDevice) {
            log("onOpened")
            mCameraDevice = camera
            createCaptureSession(camera)
        }

        override fun onDisconnected(camera: CameraDevice) {
            log("onDisconnected")
        }

        override fun onError(camera: CameraDevice, error: Int) {
            log("onError $error")
            mActivity.toast("Failed to open camera! $error")
        }
    }, mCameraHandler)
}
```

6. Create a preview session

```kotlin
/**
 * Create a preview session
 */
private fun createCaptureSession(cameraDevice: CameraDevice) {

    val captureRequestBuilder = cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW)

    val surface = Surface(mTextureView.surfaceTexture)
    captureRequestBuilder.addTarget(surface)  // attach the Surface to the CaptureRequest builder
    captureRequestBuilder.set(CaptureRequest.CONTROL_AE_MODE, CaptureRequest.CONTROL_AE_MODE_ON_AUTO_FLASH)      // flash
    captureRequestBuilder.set(CaptureRequest.CONTROL_AF_MODE, CaptureRequest.CONTROL_AF_MODE_CONTINUOUS_PICTURE) // auto focus

    // Create a CameraCaptureSession for the camera preview
    cameraDevice.createCaptureSession(arrayListOf(surface, mImageReader?.surface), object : CameraCaptureSession.StateCallback() {

        override fun onConfigureFailed(session: CameraCaptureSession) {
            mActivity.toast("Failed to open preview session!")
        }

        override fun onConfigured(session: CameraCaptureSession) {
            mCameraCaptureSession = session
            session.setRepeatingRequest(captureRequestBuilder.build(), mCaptureCallBack, mCameraHandler)
        }
    }, mCameraHandler)
}

private val mCaptureCallBack = object : CameraCaptureSession.CaptureCallback() {

    override fun onCaptureCompleted(session: CameraCaptureSession, request: CaptureRequest, result: TotalCaptureResult) {
        super.onCaptureCompleted(session, request, result)
        canExchangeCamera = true
        canTakePic = true
    }

    override fun onCaptureFailed(session: CameraCaptureSession, request: CaptureRequest, failure: CaptureFailure) {
        super.onCaptureFailed(session, request, failure)
        log("onCaptureFailed")
        mActivity.toast("Failed to open preview!")
    }
}
```
  • Create a Builder object for preview through cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
  • Add a Surface object to the Builder object and set various related parameters;
  • Create a session through cameraDevice.createCaptureSession, and pass a surface and mImageReader?.surface in the first parameter. This indicates that the image data of this session is output to these two objects;
  • When the session is successfully created, a preview request is initiated through session.setRepeatingRequest(captureRequestBuilder.build(), mCaptureCallBack, mCameraHandler);
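
One caveat: the createCaptureSession(List<Surface>, StateCallback, Handler) overload used above was deprecated in API 28. On newer devices the same session can be created through a SessionConfiguration; a rough sketch for API 28+ (using an Executor instead of a Handler, with names chosen here for illustration):

```kotlin
import android.hardware.camera2.CameraCaptureSession
import android.hardware.camera2.CameraDevice
import android.hardware.camera2.params.OutputConfiguration
import android.hardware.camera2.params.SessionConfiguration
import android.os.Build
import android.view.Surface
import java.util.concurrent.Executor

// Sketch: creating the preview session via SessionConfiguration on API 28+
fun createSession(
    cameraDevice: CameraDevice,
    surfaces: List<Surface>,
    executor: Executor,
    callback: CameraCaptureSession.StateCallback
) {
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.P) {
        val outputs = surfaces.map { OutputConfiguration(it) }
        val config = SessionConfiguration(SessionConfiguration.SESSION_REGULAR, outputs, executor, callback)
        cameraDevice.createCaptureSession(config)
    }
}
```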

7. Take photos and save them

```kotlin
/**
 * Take a photo
 */
fun takePic() {
    if (mCameraDevice == null || !mTextureView.isAvailable || !canTakePic) return

    mCameraDevice?.apply {

        val captureRequestBuilder = createCaptureRequest(CameraDevice.TEMPLATE_STILL_CAPTURE)
        // mImageReader was created in initCameraInfo() before the camera was opened
        captureRequestBuilder.addTarget(mImageReader!!.surface)
        captureRequestBuilder.set(CaptureRequest.CONTROL_AF_MODE, CaptureRequest.CONTROL_AF_MODE_CONTINUOUS_PICTURE) // auto focus
        captureRequestBuilder.set(CaptureRequest.CONTROL_AE_MODE, CaptureRequest.CONTROL_AE_MODE_ON_AUTO_FLASH)      // flash
        // Rotate the saved photo according to the sensor orientation so it ends up in the "natural" orientation
        captureRequestBuilder.set(CaptureRequest.JPEG_ORIENTATION, mCameraSensorOrientation)

        mCameraCaptureSession?.capture(captureRequestBuilder.build(), null, mCameraHandler)
            ?: mActivity.toast("Failed to take photo!")
    }
}

private val onImageAvailableListener = OnImageAvailableListener {
    val image = it.acquireNextImage()
    val byteBuffer = image.planes[0].buffer
    val byteArray = ByteArray(byteBuffer.remaining())
    byteBuffer.get(byteArray)
    image.close()  // release the Image so the ImageReader can deliver the next frame

    BitmapUtils.savePic(byteArray, mCameraSensorOrientation == 270, { savedPath, time ->
        mActivity.runOnUiThread {
            mActivity.toast("Image saved successfully! Path: $savedPath  Time taken: $time")
        }
    }, { msg ->
        mActivity.runOnUiThread {
            mActivity.toast("Failed to save image! $msg")
        }
    })
}
```
  • Create a Builder object for a photo request through createCaptureRequest(CameraDevice.TEMPLATE_STILL_CAPTURE);
  • Then set various parameters. Note that captureRequestBuilder.set(CaptureRequest.JPEG_ORIENTATION, mCameraSensorOrientation) is used to set the rotation direction of the saved photo. If it is not set, the saved photo will not be in the "natural direction";
  • The result of taking a photo is obtained in the OnImageAvailableListener object;
  • Get an Image object through acquireNextImage() method, then get byteBuffer through image.planes[0].buffer, and convert this byteBuffer into byteArray;
  • This byteArray is the image data obtained by taking a photo. Then you can save this byteArray as a picture in the phone storage;
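
BitmapUtils.savePic() is a project helper that the article never lists. A possible minimal implementation, assuming the success callback receives the saved path and the elapsed time, the failure callback receives an error message, and the Camera2Demo directory name is arbitrary:

```kotlin
import android.graphics.Bitmap
import android.graphics.BitmapFactory
import android.graphics.Matrix
import android.os.Environment
import java.io.File

// A possible implementation of the BitmapUtils.savePic() helper used above (assumed, not from the original article)
object BitmapUtils {

    fun savePic(
        data: ByteArray,
        isMirror: Boolean,
        onSuccess: (savedPath: String, time: String) -> Unit,
        onFail: (msg: String) -> Unit
    ) {
        try {
            val start = System.currentTimeMillis()
            val dir = File(Environment.getExternalStorageDirectory(), "Camera2Demo").apply { mkdirs() }
            val file = File(dir, "IMG_$start.jpg")

            var bitmap = BitmapFactory.decodeByteArray(data, 0, data.size)
            if (isMirror) {
                // Flip horizontally so front-camera shots are not mirrored
                val matrix = Matrix().apply { postScale(-1f, 1f) }
                bitmap = Bitmap.createBitmap(bitmap, 0, 0, bitmap.width, bitmap.height, matrix, true)
            }
            file.outputStream().use { bitmap.compress(Bitmap.CompressFormat.JPEG, 100, it) }

            onSuccess(file.absolutePath, "${System.currentTimeMillis() - start} ms")
        } catch (e: Exception) {
            onFail(e.message ?: "unknown error")
        }
    }
}
```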

8. Release the camera and threads

```kotlin
fun releaseCamera() {
    mCameraCaptureSession?.close()
    mCameraCaptureSession = null

    mCameraDevice?.close()
    mCameraDevice = null

    mImageReader?.close()
    mImageReader = null

    canExchangeCamera = false
}

fun releaseThread() {
    handlerThread.quitSafely()
}
```
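
Finally, the helper's owner is responsible for calling these release methods at the right time. A sketch of how an Activity might wire everything together (MainActivity, the activity_main layout, and the btnTakePic button are assumptions, not code from the article):

```kotlin
import android.os.Bundle
import android.view.TextureView
import android.widget.Button
import androidx.appcompat.app.AppCompatActivity

// Sketch: owning Camera2Helper from an Activity and releasing it with the lifecycle
class MainActivity : AppCompatActivity() {

    private lateinit var camera2Helper: Camera2Helper

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)

        camera2Helper = Camera2Helper(this, findViewById<TextureView>(R.id.textureView))
        findViewById<Button>(R.id.btnTakePic).setOnClickListener { camera2Helper.takePic() }  // hypothetical button
    }

    override fun onPause() {
        super.onPause()
        camera2Helper.releaseCamera()   // stop the preview and close the camera when leaving the screen
    }

    override fun onDestroy() {
        super.onDestroy()
        camera2Helper.releaseThread()   // quit the background HandlerThread
    }
}
```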

Summary

The Camera2 API is very different from the old Camera API and may take some getting used to, but it has many advantages and exposes far more parameters for us to control.

Let's keep learning together, friends.
