
ML Kit: Face Detection Development Procedure

By XDARoni, XDA Community Manager on 19th June 2020, 03:33 PM
Before developing with the API, make the necessary development preparations: ensure that the Maven repository address of the HMS Core SDK has been configured in your project, and that the SDK of this service has been integrated.

Static image detection
1. Create a face analyzer. You can create the analyzer using the MLFaceAnalyzerSetting class.
Code:
// Method 1: Use customized parameter settings.
// If the Full SDK mode is used for integration, set parameters based on the integrated model package.
MLFaceAnalyzerSetting setting = new MLFaceAnalyzerSetting.Factory()
    // Set whether to detect key face points.
    .setKeyPointType(MLFaceAnalyzerSetting.TYPE_KEYPOINTS)
    // Set whether to detect facial features.
    .setFeatureType(MLFaceAnalyzerSetting.TYPE_FEATURES)
    // Set whether to detect face contour points.
    .setShapeType(MLFaceAnalyzerSetting.TYPE_SHAPES)
    // Set whether to enable face tracking.
    .setTracingAllowed(true)
    // Set the speed and precision of the detector.
    .setPerformanceType(MLFaceAnalyzerSetting.TYPE_SPEED)
    .create();
MLFaceAnalyzer analyzer = MLAnalyzerFactory.getInstance().getFaceAnalyzer(setting);
// Method 2: Use the default parameter settings. This method can be used when the Lite SDK is used for integration.
// By default, key point, face contour, and facial feature detection are enabled in precision mode; face tracking is disabled.
MLFaceAnalyzer analyzer = MLAnalyzerFactory.getInstance().getFaceAnalyzer();
2. Create an MLFrame object from an android.graphics.Bitmap for the analyzer to detect. JPG, JPEG, and PNG images are supported; it is recommended that the image size be within 320 x 320 px to 1920 x 1920 px.
Code:
// Create an MLFrame by using the bitmap. 
MLFrame frame = MLFrame.fromBitmap(bitmap);
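For example, the bitmap can be decoded from an app resource before building the frame. The sketch below uses the standard BitmapFactory API; the resource name face_sample is a placeholder.
Code:
// A minimal sketch: decode a placeholder drawable into a Bitmap and wrap it in an MLFrame.
Bitmap bitmap = BitmapFactory.decodeResource(getResources(), R.drawable.face_sample);
MLFrame frame = MLFrame.fromBitmap(bitmap);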
3. Call the asyncAnalyseFrame method to perform face detection.
Code:
Task<List<MLFace>> task = analyzer.asyncAnalyseFrame(frame);
task.addOnSuccessListener(new OnSuccessListener<List<MLFace>>() {
     @Override
     public void onSuccess(List<MLFace> faces) {
       // Detection success.
     }
 }).addOnFailureListener(new OnFailureListener() {
     @Override
     public void onFailure(Exception e) {
         // Detection failure.
         try {
             MLException mlException = (MLException) e;
             // Obtain the result code. You can process result codes and customize the messages displayed to users. For details about the result codes, please refer to MLException.
             int errorCode = mlException.getErrCode();
             // Obtain the error message. You can quickly locate the fault based on the result code.
             String errorMessage = mlException.getMessage();
         } catch (Exception error) {
             // Handle the cast failure.
         }
     }
 });
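The onSuccess callback receives the list of detected faces for further processing. As a minimal sketch, assuming MLFace exposes a getBorder() accessor returning the face's bounding Rect (verify against your SDK version), the faces could be iterated as follows:
Code:
// Sketch: iterate the detected faces and log each bounding box.
// MLFace#getBorder() is assumed to return the face's bounding Rect.
for (MLFace face : faces) {
    Rect border = face.getBorder();
    Log.d("FaceDetection", "Detected face at " + border.toShortString());
}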
4. After the detection is complete, stop the analyzer to release detection resources.
Code:
try {
    if (analyzer != null) {
        analyzer.stop();
    }
} catch (IOException e) {
    // Exception handling.
}
The preceding sample code uses the asynchronous call mode. Face detection also supports synchronous calls: the analyseFrame method returns the detection result directly.
Code:
SparseArray<MLFace> faces = analyzer.analyseFrame(frame);
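The returned SparseArray can be traversed with the standard index-based accessors, as in this minimal sketch:
Code:
// Sketch: iterate the SparseArray returned by the synchronous API.
for (int i = 0; i < faces.size(); i++) {
    MLFace face = faces.valueAt(i);
    // Process each detected face here.
}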
Camera stream detection
You can process camera streams yourself, convert each video frame into an MLFrame object, and detect faces using the static image detection method described above. If the synchronous detection API is called, you can also use the LensEngine class built into the SDK to detect faces in camera streams locally. The sample code is as follows:
1. Create a face analyzer.
Code:
MLFaceAnalyzer analyzer = MLAnalyzerFactory.getInstance().getFaceAnalyzer();
2. Create the FaceAnalyzerTransactor class for processing detection results. This class implements the MLAnalyzer.MLTransactor<T> interface and uses its transactResult method to obtain the detection results and implement specific services.
Code:
public class FaceAnalyzerTransactor implements MLAnalyzer.MLTransactor<MLFace> { 
    @Override 
    public void transactResult(MLAnalyzer.Result<MLFace> results) { 
        SparseArray<MLFace> items = results.getAnalyseList(); 
      // Determine detection result processing as required. Note that only the detection results are processed. 
        // Other detection-related APIs provided by HUAWEI ML Kit cannot be called. 
    } 
    @Override 
    public void destroy() { 
        // Callback method used to release resources when the detection ends. 
    } 
}
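As a sketch, a concrete transactResult implementation could iterate the SparseArray and pass each face to your own logic; the keys are assumed to be analyzer-assigned indices.
Code:
// Sketch: a filled-in transactResult that walks the analyse list.
@Override
public void transactResult(MLAnalyzer.Result<MLFace> results) {
    SparseArray<MLFace> items = results.getAnalyseList();
    for (int i = 0; i < items.size(); i++) {
        MLFace face = items.valueAt(i);
        // Hand each detected face to your rendering or business logic.
    }
}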
3. Set the detection result processor to bind the analyzer to the result processor.
Code:
analyzer.setTransactor(new FaceAnalyzerTransactor());
4. Create an instance of the LensEngine class provided by the HMS Core ML SDK to capture dynamic camera streams and pass the streams to the analyzer. It is recommended that the camera display size be set to a value ranging from 320 x 320 px to 1920 x 1920 px.
Code:
LensEngine lensEngine = new LensEngine.Creator(getApplicationContext(), analyzer)
    // Use the rear-facing camera.
    .setLensType(LensEngine.BACK_LENS)
    // Set the camera display size, within 320 x 320 px to 1920 x 1920 px.
    .applyDisplayDimension(1440, 1080)
    // Set the frame rate.
    .applyFps(30.0f)
    // Enable autofocus.
    .enableAutomaticFocus(true)
    .create();
5. Call the run method to start the camera and read camera streams for recognition.
Code:
// Implement other logic of the SurfaceView control by yourself. 
SurfaceView mSurfaceView = findViewById(R.id.surface_view); 
try { 
    lensEngine.run(mSurfaceView.getHolder()); 
} catch (IOException e) { 
    // Exception handling logic. 
}
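In practice, the surface may not yet be available when the activity starts. A common pattern, sketched below with the standard SurfaceHolder.Callback API, is to start the LensEngine once the surface is created and release it when the surface is destroyed.
Code:
// Sketch: tie the LensEngine to the surface lifecycle.
mSurfaceView.getHolder().addCallback(new SurfaceHolder.Callback() {
    @Override
    public void surfaceCreated(SurfaceHolder holder) {
        try {
            lensEngine.run(holder);
        } catch (IOException e) {
            // Exception handling logic.
        }
    }
    @Override
    public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {
        // No-op.
    }
    @Override
    public void surfaceDestroyed(SurfaceHolder holder) {
        lensEngine.release();
    }
});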
6. After the detection is complete, stop the analyzer to release detection resources.
Code:
if (analyzer != null) {   
    try { 
        analyzer.stop();     
    } catch (IOException e) {    
        // Exception handling. 
    } 
} 
if (lensEngine != null) { 
    lensEngine.release();     
}
In camera stream detection, if your app implements MLAnalyzer.MLTransactor<T> to process detection results and needs to stop detection after a specific result is detected and resume it after the result is processed, please refer to Development for Multi Detections in Camera Stream Detection Mode.