Note Keeper with ML — A Pet Project Idea

Ivan Shafran
Published in ProAndroidDev · 5 min read · May 19, 2020

Photo by Kyle Glenn on Unsplash

The article presents a pet project idea for Android developers. It includes design in Figma, useful links, and other tips.

Initially, I created these materials for an Android course for beginners. Most students were able to implement the project from scratch, so I believe this pet project is a good starting point in Android development. Experienced developers may use it as a code sample for job applications.

An empty project

A good start is essential. Let’s create an empty project in Android Studio. Almost all of the default settings are fine. Check the instructions below:

  • Set the minimum API level to 21. It simplifies API usage a lot.
  • Add the .idea folder to the root .gitignore. The generated .gitignore covers only some .idea files by default, but there is usually no need to keep any of them.
  • If you are familiar with Kotlin, enable it in the empty project wizard. Kotlin is the preferred language for Android development.
  • Put the project under Git, for example on GitHub. Try to give every commit a meaningful message. Moreover, commit code piece by piece. It is a positive sign for those who will discover your repository.

Links: install Android Studio, run an app on an emulator or on a device.

First screen

The first screen contains an image and text. Content is static for now. The task is to create this in an Activity.

Please sign in to Figma to get full information about the design. Pay attention to the distance between elements on the screen.

It looks easy to implement, but try to view it in landscape mode. Also, try different image shapes and text that doesn’t fit on the screen.

Landscape

Okay, let’s lie down for a second and rest after the hard work. Oh no, our app looks strange in landscape mode. The image is small, and it is hard to read the text. Therefore, we need to study resource directory qualifiers.

I propose to implement a landscape layout as displayed below. The screen is divided into two parts horizontally.

List

Now let’s move on to more complicated exercises. Take a look at the list design above. In Android, such a list is usually implemented with RecyclerView. An element of the list has a rounded border, and the easiest way to achieve it is a CardView.
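
Below is a minimal adapter sketch for such a list. The Note model, the item_note.xml layout (whose root is assumed to be a CardView), and the view ids are illustrative assumptions, not code from the article.

import android.view.LayoutInflater
import android.view.View
import android.view.ViewGroup
import android.widget.TextView
import androidx.recyclerview.widget.RecyclerView

// Illustrative model; the real project will likely store more fields.
data class Note(val title: String, val text: String)

class NotesAdapter(private val notes: List<Note>) :
    RecyclerView.Adapter<NotesAdapter.NoteViewHolder>() {

    class NoteViewHolder(view: View) : RecyclerView.ViewHolder(view) {
        val title: TextView = view.findViewById(R.id.noteTitle)
        val text: TextView = view.findViewById(R.id.noteText)
    }

    override fun onCreateViewHolder(parent: ViewGroup, viewType: Int): NoteViewHolder {
        // item_note.xml wraps the row in a CardView to get the rounded border
        val view = LayoutInflater.from(parent.context)
            .inflate(R.layout.item_note, parent, false)
        return NoteViewHolder(view)
    }

    override fun onBindViewHolder(holder: NoteViewHolder, position: Int) {
        val note = notes[position]
        holder.title.text = note.title
        holder.text.text = note.text
    }

    override fun getItemCount(): Int = notes.size
}

You can attach it with recyclerView.adapter = NotesAdapter(stubNotes) and a LinearLayoutManager until the database is ready.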

At this point, you could still use stub data. We will connect to a database later.

Pad

Sorry, but I’ll blow your mind with one more device configuration: tablets. As you can see, we should show a two-row list in portrait mode and a one-row list in landscape mode. Moreover, the detail screen should be placed alongside the list screen.

If you can’t beat that task, take a look at this sample. There are several ways to implement it, so it’s okay if your solution differs from the sample.
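
For reference, one common approach is sketched below: the tablet layout variant (for example in res/layout-sw600dp) contains an extra detail container that the phone layout doesn’t have, and the activity branches on its presence. All names here, including NoteDetailFragment and NoteDetailActivity, are assumptions.

import android.os.Bundle
import android.view.View
import androidx.appcompat.app.AppCompatActivity

class NotesActivity : AppCompatActivity() {

    private var twoPane = false

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_notes)

        // detail_container exists only in the tablet layout variant
        twoPane = findViewById<View?>(R.id.detail_container) != null
    }

    private fun openNote(noteId: Long) {
        if (twoPane) {
            // Show the detail fragment next to the list
            supportFragmentManager.beginTransaction()
                .replace(R.id.detail_container, NoteDetailFragment.newInstance(noteId))
                .commit()
        } else {
            // On phones, navigate to a separate detail screen
            startActivity(NoteDetailActivity.createIntent(this, noteId))
        }
    }
}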

Pad testing

Without a tablet, you can use an Android emulator to test different configurations. A less obvious way is to change the “smallest width” value in the developer settings of a real device.

Camera

Our camera screen should be able to take and save a photo.

Implementing a camera from scratch on Android has been painful for many years. Because of this, several camera libraries exist to simplify our code.

You could use one of them if you’d like. However, I’d suggest checking out the new CameraX library from the Android team. At the time of writing, it is still in beta. Despite this, I hope CameraX is the future, so I prepared a sample in case you struggle with the implementation details.
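
If it helps, here is a minimal sketch of the two CameraX use cases we need, written against the current androidx.camera artifacts with PreviewView rather than the beta-era CameraView; previewView, photoFile, and the callback names are assumptions.

import androidx.appcompat.app.AppCompatActivity
import androidx.camera.core.CameraSelector
import androidx.camera.core.ImageCapture
import androidx.camera.core.ImageCaptureException
import androidx.camera.core.Preview
import androidx.camera.lifecycle.ProcessCameraProvider
import androidx.camera.view.PreviewView
import androidx.core.content.ContextCompat
import java.io.File

// Bind a preview and an image-capture use case to the activity lifecycle.
fun AppCompatActivity.bindCamera(previewView: PreviewView, imageCapture: ImageCapture) {
    val cameraProviderFuture = ProcessCameraProvider.getInstance(this)
    cameraProviderFuture.addListener({
        val cameraProvider = cameraProviderFuture.get()
        val preview = Preview.Builder().build().also {
            // Render the camera feed into the PreviewView
            it.setSurfaceProvider(previewView.surfaceProvider)
        }
        cameraProvider.unbindAll()
        cameraProvider.bindToLifecycle(
            this, CameraSelector.DEFAULT_BACK_CAMERA, preview, imageCapture
        )
    }, ContextCompat.getMainExecutor(this))
}

// Save a photo to the given file, e.g. on a button click.
fun AppCompatActivity.takePhoto(imageCapture: ImageCapture, photoFile: File, onSaved: (File) -> Unit) {
    val options = ImageCapture.OutputFileOptions.Builder(photoFile).build()
    imageCapture.takePicture(
        options,
        ContextCompat.getMainExecutor(this),
        object : ImageCapture.OnImageSavedCallback {
            override fun onImageSaved(output: ImageCapture.OutputFileResults) = onSaved(photoFile)
            override fun onError(exception: ImageCaptureException) = exception.printStackTrace()
        }
    )
}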

I left the easiest implementation behind the scenes: you could just ask Android to take a photo for you. But that is boring, isn’t it?
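
For completeness, a sketch of that easy path using the Activity Result API and the system camera app; photoUri must be a content Uri your app can write to (for example via FileProvider), and the class and function names are assumptions.

import android.net.Uri
import androidx.activity.result.contract.ActivityResultContracts
import androidx.appcompat.app.AppCompatActivity

class EasyCameraActivity : AppCompatActivity() {

    // The system camera app writes the full-size photo to the Uri passed to launch()
    private val takePicture =
        registerForActivityResult(ActivityResultContracts.TakePicture()) { saved ->
            if (saved) onPhotoTaken()
        }

    fun requestPhoto(photoUri: Uri) {
        takePicture.launch(photoUri)
    }

    private fun onPhotoTaken() {
        // Run text recognition on the saved photo here
    }
}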

One last tip about the camera: you should learn about runtime permissions and request the camera permission before starting the preview.
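
A minimal sketch of that check with the Activity Result API; it assumes the CAMERA permission is declared in the manifest, and startPreview() stands for whatever starts your CameraX preview.

import android.Manifest
import android.content.pm.PackageManager
import androidx.activity.result.contract.ActivityResultContracts
import androidx.appcompat.app.AppCompatActivity
import androidx.core.content.ContextCompat

class CameraActivity : AppCompatActivity() {

    private val requestCameraPermission =
        registerForActivityResult(ActivityResultContracts.RequestPermission()) { granted ->
            if (granted) startPreview() else finish()
        }

    override fun onCreate(savedInstanceState: android.os.Bundle?) {
        super.onCreate(savedInstanceState)
        // setContentView(...) omitted in this sketch

        val granted = ContextCompat.checkSelfPermission(this, Manifest.permission.CAMERA) ==
            PackageManager.PERMISSION_GRANTED
        if (granted) startPreview() else requestCameraPermission.launch(Manifest.permission.CAMERA)
    }

    private fun startPreview() {
        // Bind the CameraX preview here
    }
}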

CameraX troubleshooting

If you get Caused by: java.lang.BootstrapMethodError: Exception from call site #0 bootstrap method or a similar error while inflating CameraView, enable Java 1.8 compatibility:

compileOptions {
    sourceCompatibility JavaVersion.VERSION_1_8
    targetCompatibility JavaVersion.VERSION_1_8
}

kotlinOptions {
    jvmTarget = "1.8"
}

ML Kit

The most magical part of our application will be done by ML Kit from Google. It has free offline support for Latin-based languages. Just follow the steps in this tutorial. There are several ways to recognize text in an image using the ML Kit API. I assume that in the previous part we saved the photo to disk, so we will use code along these lines:
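
A minimal sketch with the on-device text recognizer from the Firebase ML Kit dependency used at the time of writing; photoUri and saveNoteText are placeholders.

import android.content.Context
import android.net.Uri
import android.util.Log
import com.google.firebase.ml.vision.FirebaseVision
import com.google.firebase.ml.vision.common.FirebaseVisionImage

// Recognize text from a photo that was saved to disk in the camera step.
fun recognizeText(context: Context, photoUri: Uri, saveNoteText: (String) -> Unit) {
    val image = FirebaseVisionImage.fromFilePath(context, photoUri)
    val recognizer = FirebaseVision.getInstance().onDeviceTextRecognizer

    recognizer.processImage(image)
        .addOnSuccessListener { visionText -> saveNoteText(visionText.text) }
        .addOnFailureListener { error -> Log.e("TextRecognition", "Recognition failed", error) }
}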

Database

Now is a good time to implement a database. You could use this tutorial to implement it. Additionally, I implemented a sample. But then we notice this warning:

Caution: Although these APIs are powerful, they are fairly low-level and require a great deal of time and effort to use.

This article describes the Room library well enough, so check it out. Generally, the implementation is easy; the only unobvious thing is that all database operations must run on a background thread. Even DAO creation takes a noticeable amount of time, which may cause UI lags.
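
A minimal Room sketch under those constraints (entity, DAO, and database); the names are assumptions, and the suspend DAO functions rely on the room-ktx artifact so that queries run off the main thread.

import android.content.Context
import androidx.room.Dao
import androidx.room.Database
import androidx.room.Entity
import androidx.room.Insert
import androidx.room.PrimaryKey
import androidx.room.Query
import androidx.room.Room
import androidx.room.RoomDatabase

@Entity(tableName = "notes")
data class NoteEntity(
    @PrimaryKey(autoGenerate = true) val id: Long = 0,
    val text: String,
    val createdAt: Long
)

@Dao
interface NoteDao {
    // Room runs suspend queries on its own background executor
    @Insert
    suspend fun insert(note: NoteEntity): Long

    @Query("SELECT * FROM notes ORDER BY createdAt DESC")
    suspend fun getAll(): List<NoteEntity>
}

@Database(entities = [NoteEntity::class], version = 1)
abstract class NotesDatabase : RoomDatabase() {
    abstract fun noteDao(): NoteDao

    companion object {
        // Building the database is also noticeably slow, so call this off the main thread too
        fun create(context: Context): NotesDatabase =
            Room.databaseBuilder(context, NotesDatabase::class.java, "notes.db").build()
    }
}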

Test

In my opinion, testing an Android application is hard, but it is worth it. As you know, there are unit tests, functional tests, and so on. However, on Android I would divide tests into three main categories:

  • Local tests, which run on the JVM.
  • Instrumented tests, which run on a device or an emulator and check classes that depend on the Android framework.
  • UI tests, which run on a device or an emulator and check the application UI.

You should write as many local tests as you can because they are fast and easy to run. Unfortunately, they are hard to write, so I wrote an article about them.
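
As an example of the first category, here is a local JUnit test for a hypothetical pure class that turns recognized text into a note title; it runs on the JVM with no Android dependencies.

import org.junit.Assert.assertEquals
import org.junit.Test

// A hypothetical class extracted from the app so it can be tested without a device.
class NoteTitleFormatter {
    // Use the first line of the recognized text as the note title
    fun titleFrom(recognizedText: String): String =
        recognizedText.lineSequence().firstOrNull()?.trim().orEmpty()
}

class NoteTitleFormatterTest {

    private val formatter = NoteTitleFormatter()

    @Test
    fun `title is the first line of the recognized text`() {
        assertEquals("Shopping list", formatter.titleFrom("Shopping list\nmilk\neggs"))
    }

    @Test
    fun `empty text produces an empty title`() {
        assertEquals("", formatter.titleFrom(""))
    }
}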

The end

I understand that following this tutorial may be hard if you are new to Android development. You can check my version of the implementation below:

Note that the recognition will not work without the google-services.json file. I can’t publish it for security reasons. You can generate this file in Firebase and use it.

Let’s become friends on Twitter, GitHub, and Facebook!

Clap, share and follow me if you like it🐱‍💻
