
Beginner: Integration of Landmark recognition by Huawei ML Kit in apps (Kotlin)


muraliameakula

Senior Member
Dec 29, 2020
Introduction
In this article, we will learn how to integrate the landmark recognition feature into apps using Huawei Machine Learning (ML) Kit. Landmark recognition is useful in tourism scenarios: for example, if you visit a place and do not know the name of a monument or natural landmark, ML Kit lets you capture an image with the camera or upload one from the gallery, and the landmark recognizer analyzes it and returns the landmark name, its longitude and latitude, and a confidence value for the input image. A higher confidence indicates that the landmark in the input image is more likely to be recognized correctly. Currently, more than 17,000 global landmarks can be recognized. In landmark recognition, the device calls an on-cloud API for detection, and the detection algorithm model runs on the cloud, so make sure the device has Internet access during commissioning and usage.
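To make the returned fields concrete, here is a small, purely illustrative Kotlin model of a recognition result (landmark name, coordinates, and confidence). The real SDK returns `MLRemoteLandmark` objects; the `LandmarkResult` class and `describe` function below are hypothetical names used only for illustration:

```kotlin
// Hypothetical model of a recognition result; the real SDK returns
// MLRemoteLandmark objects. This only illustrates the fields involved.
data class LandmarkResult(
    val name: String,       // e.g. "Forbidden City"
    val latitude: Double,
    val longitude: Double,
    val confidence: Float   // higher means a more likely match
)

// Format a result roughly the way the demo app later displays it.
fun describe(r: LandmarkResult): String =
    "Landmark: ${r.name}\nLatitude: ${r.latitude}\nLongitude: ${r.longitude}\nConfidence: ${r.confidence}"
```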

Requirements
1. Any operating system (macOS, Linux, or Windows).
2. A Huawei phone with HMS Core 4.0.0.300 or later.
3. A laptop or desktop with Android Studio, JDK 1.8, SDK Platform 26, and Gradle 4.6 installed.
4. Minimum API level 21.
5. A device running EMUI 9.0.0 or later.

Integration Process
1. First, register as a Huawei developer and complete identity verification on the Huawei Developers website; refer to Register a Huawei ID.
2. Create a project in Android Studio; refer to Creating an Android Studio Project.
3. Generate a SHA-256 certificate fingerprint.
4. To generate the SHA-256 certificate fingerprint, click Gradle in the upper-right corner of the Android Studio window, choose Project Name > Tasks > android, and then click signingReport, as follows.
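Alternatively, the same fingerprint can be read from the command line with the JDK's keytool. The command below is a sketch that assumes the default Android debug keystore path and credentials; adjust the keystore path, alias, and passwords for a release keystore.

```shell
# Print certificate fingerprints (including SHA-256) from the debug keystore.
# Assumes the default debug keystore location and credentials.
keytool -list -v \
  -keystore ~/.android/debug.keystore \
  -alias androiddebugkey \
  -storepass android -keypass android
```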

Note: Project Name is the name you gave the project when creating it.
5. Create an app in AppGallery Connect.
6. Download the agconnect-services.json file from App information, then copy and paste it into the app directory of your Android project, as follows.

7. Enter the SHA-256 certificate fingerprint and click the tick icon, as follows.
Note: Steps 1 to 7 above are common for all Huawei kits.

8. Click Manage APIs tab and enable ML Kit.


9. Add the below Maven URL in the build.gradle (project-level) file under the repositories of both buildscript and allprojects, and the classpath under buildscript dependencies; refer to Add Configuration.
Code:
maven { url 'https://developer.huawei.com/repo/' }
classpath 'com.huawei.agconnect:agcp:1.4.1.300'

10. Add the below plugin and dependencies in the build.gradle (module-level) file.
Code:
apply plugin: 'com.huawei.agconnect'
// Huawei AGC
implementation 'com.huawei.agconnect:agconnect-core:1.5.0.300'
// Import the landmark recognition SDK.
implementation 'com.huawei.hms:ml-computer-vision-cloud:2.0.5.304'

11. Now sync the Gradle files.
12. Add the required permissions to the AndroidManifest.xml file. Note that CAMERA, storage, and microphone access are dangerous permissions, so on Android 6.0 (API level 23) and later they must also be requested at runtime.
XML:
<uses-permission android:name="android.permission.CAMERA"/>
<uses-permission android:name="android.permission.INTERNET"/>
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"/>
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE"/>
<uses-permission android:name="android.permission.RECORD_AUDIO"/>
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE"/>
<uses-permission android:name="android.permission.ACCESS_WIFI_STATE"/>

Let us move to development
I have created a project in Android Studio with an empty activity; let us start coding.
In MainActivity.kt we can create the business logic.
Kotlin:
class MainActivity : AppCompatActivity(), View.OnClickListener {

    private val images = arrayOf(R.drawable.forbiddencity_image, R.drawable.maropeng_image,
                                 R.drawable.natural_landmarks, R.drawable.niagarafalls_image,
                                 R.drawable.road_image, R.drawable.stupa_thimphu,
                                 R.drawable.statue_image)
    private var curImageIdx = 0
    private var analyzer: MLRemoteLandmarkAnalyzer? = null
    // You can find the API key in the agconnect-services.json file.
    val apiKey = "Enter your API Key"

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)

        this.btn_ok.setOnClickListener(this)
        //Images to change in background with buttons
        landmark_images.setBackgroundResource(images[curImageIdx])
        btn_next.setOnClickListener{
           curImageIdx = (curImageIdx + 1) % images.size
           nextImage()
        }
        btn_back.setOnClickListener {
           // Add images.size before taking the remainder so the index
           // never goes negative when stepping back from 0.
           curImageIdx = (curImageIdx - 1 + images.size) % images.size
           prevImage()
        }
    }

    private fun nextImage(){
        landmark_images.setBackgroundResource(images[curImageIdx])
    }

    private fun prevImage(){
        landmark_images.setBackgroundResource(images[curImageIdx])
    }

    private fun analyzer(i: Int) {
        val settings = MLRemoteLandmarkAnalyzerSetting.Factory()
                       .setLargestNumOfReturns(1)
                       .setPatternType(MLRemoteLandmarkAnalyzerSetting.STEADY_PATTERN)
                       .create()
        analyzer = MLAnalyzerFactory.getInstance().getRemoteLandmarkAnalyzer(settings)

        // Create an MLFrame from an Android bitmap. The recommended image size is larger than 640 x 640 pixels.
        val bitmap = BitmapFactory.decodeResource(this.resources, images[curImageIdx])
        val mlFrame = MLFrame.Creator().setBitmap(bitmap).create()

        //set API key
        MLApplication.getInstance().apiKey = this.apiKey

        val task = analyzer!!.asyncAnalyseFrame(mlFrame)
        task.addOnSuccessListener { landmarkResults ->
            this@MainActivity.displaySuccess(landmarkResults[0])
        }.addOnFailureListener { e ->
            this@MainActivity.displayFailure(e)
        }
    }

    private fun displayFailure(exception: Exception){
        var error = "Failure: "
           error += try {
              val mlException = exception as MLException
               """
               error code: ${mlException.errCode}   
               error message: ${mlException.message}
               error reason: ${mlException.cause}
               """.trimIndent()
        } catch(e: Exception) {
               e.message
        }
        landmark_result!!.text = error
    }

    private fun displaySuccess(landmark: MLRemoteLandmark){
         var result = ""
         if(landmark.landmark != null){
            result = "Landmark: " + landmark.landmark
        }
        result += "\nPositions: "

        if(landmark.positionInfos != null){
            for(coordinate in landmark.positionInfos){
                result += """
                Latitude: ${coordinate.lat} 
                """.trimIndent()

                result += """
                Longitude: ${coordinate.lng}
                """.trimIndent()
            }
        }
        landmark_result.text = result
    }

    override fun onClick(v: View?) {
        analyzer(images[curImageIdx])
    }

    override fun onDestroy() {
        super.onDestroy()
        if (analyzer == null) {
            return
        }
        try {
            analyzer!!.stop()
        } catch (e: IOException) {
            Toast.makeText(this, "Stop failed: " + e.message, Toast.LENGTH_LONG).show()
        }
    }

}
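A note on the Next and Back button logic above: the images cycle via modular arithmetic, and in Kotlin the `%` operator can return a negative value when the left operand is negative, so a backward step should add the array size before taking the remainder. A standalone sketch of safe wrap-around helpers (plain Kotlin, independent of the Android code above):

```kotlin
// Wrap-around index helpers for cycling through a fixed-size array.
fun nextIndex(current: Int, size: Int): Int = (current + 1) % size

// Adding size before the remainder keeps the result non-negative
// even when current is 0, where a plain (current - 1) % size would be -1.
fun prevIndex(current: Int, size: Int): Int = (current - 1 + size) % size
```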

In the activity_main.xml we can create the UI screen.
XML:
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context=".MainActivity">

    <ImageView
        android:id="@+id/landmark_images"
        android:layout_width="match_parent"
        android:layout_height="470dp"
        android:layout_centerHorizontal="true"
        android:background="@drawable/forbiddencity_image"/>
    <TextView
        android:id="@+id/landmark_result"
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:layout_below="@+id/landmark_images"
        android:layout_marginLeft="15dp"
        android:layout_marginTop="15dp"
        android:layout_marginBottom="10dp"
        android:textSize="17sp"
        android:textColor="@color/design_default_color_error"/>
    <Button
        android:id="@+id/btn_back"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_alignParentBottom="true"
        android:layout_alignParentLeft="true"
        android:layout_marginLeft="5dp"
        android:textAllCaps="false"
        android:text="Back" />
    <Button
        android:id="@+id/btn_ok"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_alignParentBottom="true"
        android:layout_centerHorizontal="true"
        android:text="OK" />
    <Button
        android:id="@+id/btn_next"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_alignParentBottom="true"
        android:layout_alignParentRight="true"
        android:layout_marginRight="5dp"
        android:textAllCaps="false"
        android:text="Next" />

</RelativeLayout>

Demo


Tips and Tricks
1. Make sure you are already registered as a Huawei developer.
2. Set minSdkVersion to 21 or later.
3. Make sure you have added the agconnect-services.json file to the app folder.
4. Make sure you have added the SHA-256 fingerprint without fail.
5. Make sure all the dependencies are added properly.
6. The recommended image size is larger than 640 x 640 pixels.
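Regarding tip 6, if a source image is smaller than the recommended size, it can be upscaled before analysis. A small sketch (plain Kotlin, assuming a 640-pixel minimum on the shorter side; `upscaleFactor` is a hypothetical helper name) that computes the required scale factor:

```kotlin
// Compute the factor needed to bring the shorter side of an image
// up to a minimum size (640 px per the recommendation above).
// Returns 1.0 when the image is already large enough.
fun upscaleFactor(width: Int, height: Int, minSide: Int = 640): Double {
    require(width > 0 && height > 0) { "dimensions must be positive" }
    return maxOf(1.0, minSide.toDouble() / minOf(width, height))
}
```

The resulting factor can then be applied with Android's `Bitmap.createScaledBitmap` before building the MLFrame.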

Conclusion
In this article, we have learned how to integrate the landmark recognition feature into apps using Huawei Machine Learning (ML) Kit. Landmark recognition is mainly used in tourism apps to identify the monuments or natural landmarks a user has visited. The user captures an image, then the landmark recognizer analyzes it and provides the landmark name, longitude and latitude, and a confidence value for the input image. In landmark recognition, the device calls an on-cloud API for detection, and the detection algorithm model runs on the cloud.

I hope you have found this article helpful. If so, please provide likes and comments.

Reference
ML Kit - Landmark Recognition

Original Source