Autotests on Android. The entire picture

Hi folks!
This article will also serve as a basic road map to the Avokado Project. We believe that, thanks in part to our own efforts, deploying autotests will soon become a much less time-consuming process.
Why automated tests?
Some people think UI tests are pointless if you’ve got a sufficient number of unit and integration tests. But the following metaphor refuses to go away. Imagine you’re building a bicycle: you’ve got two well-tested wheels, a tested frame with a seat, and a tested pedal system with a chain and handlebars. In other words, you’ve got a decent set of integration and unit tests. But will the bike be ridable? To check this, you hire manual testers whose job is to make sure these flawless parts function together properly before each release, so the bike rides smoothly for anyone who hops on.
The same story goes for any mobile software. Unfortunately, in the mobile world, we can’t roll back unsuccessful changes quickly because all updates pass through the Google Play Store and App Store, which from the very start slap devs with restrictions: long rollouts compared to web and backend updates, mandatory version compatibility, and dependence on the user’s decision to update at all. Therefore, it’s critical that we always make sure prior to release that the main app use cases work exactly as expected.
When your release cycle is several months long, manual testing and a decent level of code coverage with unit and integration tests are sufficient. However, when the release cycle is shortened to one or two weeks (sometimes less), manual testers alone often don’t cut it, which forces you to either sacrifice test quality, or hire more specialists.
All of this naturally culminates in the need to automate the verification of use cases, i.e. to write end-to-end tests (autotests). Avito has a video in Russian (2019) on how autotests help and how much they cost. However, this approach is so complicated and expensive to implement that it scares off most teams. This brings us back to why we wrote this article and, more generally, to one of the goals of the Avokado Project: to standardize autotesting on Android and significantly reduce its cost.
The full picture
Here’s that complete picture we promised.
If you see something you don’t understand, don’t worry. We’ll explain everything in detail point by point.
Writing tests
As a first step, try to write some tests on your machine and run them locally.
Selecting tools to write autotests
It’s best to decide immediately on the technology stack you’ll be using to write tests.
The first fork in the road is choosing between a cross-platform solution (Appium, etc.) and a native solution (Espresso, UI Automator). People tend to be fierce in defending their decision.
We’re all about native solutions. In our experience, they are:
- more stable
- faster
- easier to integrate into an IDE
- free of extra layers that introduce instability and demand highly specialized expertise
Plus, Google supports Espresso and UI Automator.
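To give a sense of the native stack, here’s what a plain Espresso test might look like. This is only a sketch: `LoginActivity`, the `R.id` constants, and the input string are hypothetical placeholders, not code from any real app.

```kotlin
import androidx.test.espresso.Espresso.onView
import androidx.test.espresso.action.ViewActions.click
import androidx.test.espresso.action.ViewActions.typeText
import androidx.test.espresso.assertion.ViewAssertions.matches
import androidx.test.espresso.matcher.ViewMatchers.isDisplayed
import androidx.test.espresso.matcher.ViewMatchers.withId
import androidx.test.ext.junit.rules.ActivityScenarioRule
import androidx.test.ext.junit.runners.AndroidJUnit4
import org.junit.Rule
import org.junit.Test
import org.junit.runner.RunWith

// LoginActivity and the view ids below are hypothetical; substitute your own.
@RunWith(AndroidJUnit4::class)
class LoginTest {

    @get:Rule
    val activityRule = ActivityScenarioRule(LoginActivity::class.java)

    @Test
    fun loginLeadsToHomeScreen() {
        // Type an email, tap the login button, and verify the home screen appears.
        onView(withId(R.id.email_input)).perform(typeText("user@example.com"))
        onView(withId(R.id.login_button)).perform(click())
        onView(withId(R.id.home_title)).check(matches(isDisplayed()))
    }
}
```

Tests like this run on a device or emulator via an instrumentation runner, which is exactly where the rest of this article is heading.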
Here are some articles offering more on this comparison:
- Appium vs Espresso: Key Differences.
- Appium vs. Espresso: Which Framework to Use for Automated Android Testing.
- Appium vs Espresso: The Most Popular Automation Testing Framework in 2019.
Today, hardly anyone writes in Espresso and UI Automator alone. Developers have come up with all kinds of candy wrappers to make their lives easier. We’re currently putting together an article about these tools with classifications and comparisons. In short, we back the Kaspresso framework.
Kaspresso
Kaspresso is a framework that:
- provides a DSL, making it much easier to write autotests
- has built-in multilevel protection against flaky tests
- speeds up work in UI Automator
- provides complete logs about what’s happening in the test
- makes it possible to run any ADB commands inside tests
- provides an interceptor mechanism to intercept all actions and checks. This mechanism is the foundation for logging and protection against unstable tests
- describes the best practices for writing autotests (based on our experience)
You can read more about Kaspresso on GitHub and Medium.
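The “multilevel protection against flaky tests” mentioned above boils down to retrying an action for a bounded time. Here is a minimal, framework-free sketch of that idea in plain Kotlin; `flakySafely` is the name Kaspresso uses, but this simplified implementation is ours, not the library’s (the real one adds interceptors, configurable intervals, and exception filtering):

```kotlin
// Simplified sketch of "flaky safety": retry an action until it succeeds
// or a timeout expires, rethrowing the last failure on timeout.
fun <T> flakySafely(
    timeoutMs: Long = 10_000,
    intervalMs: Long = 500,
    action: () -> T
): T {
    val deadline = System.currentTimeMillis() + timeoutMs
    var lastError: Throwable? = null
    while (System.currentTimeMillis() < deadline) {
        try {
            return action()
        } catch (e: Throwable) {
            lastError = e              // remember the failure and retry
            Thread.sleep(intervalMs)
        }
    }
    throw AssertionError("Action kept failing for $timeoutMs ms", lastError)
}
```

In a UI test you would wrap an assertion in it, e.g. `flakySafely { checkSomethingOnScreen() }`, so a view that appears a moment late doesn’t fail the whole run.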
Test runner
So you wrote a few tests, and now you need to run them. This is done using a test runner.
Google offers the AndroidJUnitRunner utility with its special Orchestrator mode. AndroidJUnitRunner does exactly what’s required of it: runs tests, including in parallel. Orchestrator ensures tests continue to run even if some of them fail, and resets the general state to the same place before each new test. Thus, each test is executed in isolation.
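Enabling the Orchestrator is mostly a matter of build configuration. A sketch in the Gradle Kotlin DSL follows; the orchestrator version number is only an example, so check for the current release:

```kotlin
// app/build.gradle.kts (sketch)
android {
    defaultConfig {
        testInstrumentationRunner = "androidx.test.runner.AndroidJUnitRunner"
        // Ask the runner to clear app data between tests, enabling isolation:
        testInstrumentationRunnerArguments["clearPackageData"] = "true"
    }
    testOptions {
        // Route test execution through the Orchestrator:
        execution = "ANDROIDX_TEST_ORCHESTRATOR"
    }
}

dependencies {
    androidTestUtil("androidx.test:orchestrator:1.4.2")
}
```

With this in place, `./gradlew connectedAndroidTest` runs each test in its own instrumentation invocation.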
Over time, you’ll start needing more from the runner, such as the ability to:
- run separate test groups
- run tests only on specific devices
- restart failed tests (a second layer of protection against flaky tests, after Kaspresso)
- effectively distribute tests across devices, taking into account the history of runs and success of previous launches
- create run reports in different formats
- display run results (preferably Allure based)
- maintain run histories for further analysis
- simply integrate with your infrastructure
There are several third-party runners available on the market. Among them, we want to single out Marathon as the most promising: it’s easy to configure and covers a portion of the tasks above. For example, it supports parallelizing tests and preparing run results in a format that Allure can display.
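As an illustration, a minimal Marathon configuration (a `Marathonfile`) might look roughly like this. The APK paths are examples from a typical Gradle layout, and the field names follow Marathon’s documentation at the time of writing, so verify them against the version you use:

```yaml
name: "app-ui-tests"
outputDir: "build/marathon"
vendorConfiguration:
  type: "Android"
  applicationApk: "app/build/outputs/apk/debug/app-debug.apk"
  testApplicationApk: "app/build/outputs/apk/androidTest/debug/app-debug-androidTest.apk"
# Retry failed tests within a bounded quota — another line of flakiness defense:
retryStrategy:
  type: "fixed-quota"
  totalAllowedRetryQuota: 100
  retryPerTestQuota: 3
```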
However, Marathon also lacks certain properties that we consider vital. For example, it doesn’t have:
- Simple, native integration with the infrastructure that provides emulators, or better yet, the ability to run tests right in the cloud. To be fair, this isn’t Marathon’s problem alone: not a single runner we know of takes responsibility for acquiring emulators itself. That job always falls to the developers.
- Stronger integration with frameworks like Kaspresso to get the fullest, most precise and accurate information about tests.
We also believe that a runner should be platform-specific, i.e. built either for Android or for iOS. Each OS has its own peculiarities, and making changes to a runner is inevitably complex.
Therefore, we’re currently working on our Avito Runner to include all the top proven solutions and ideas. Stay tuned for more updates!
Where to run tests
Along with the issue of which runner to choose for tests, we’re immediately faced with another question: where is the best place to run tests? There are three options.
An actual device
Pros
Shows issues specific to certain devices and firmware. Many manufacturers tweak Android for their own needs, both the UI and OS logic. So it’s easy to see why it can be useful to check that your app works properly in such an environment.
Cons
You’ll need to get a device farm somewhere and set up a special room for it with climate control, no direct sunlight, etc. Also, keep in mind that batteries tend to swell and fail. Tests are also known to change the state of devices, and unlike with emulators, there’s no way to restore a physical device from a stable backup.
A clean emulator
By “clean,” we mean that you run the emulator on your own or somewhere on a machine with AVD Manager installed.
Pros
This is quicker, more convenient and more stable than an actual device. Creating a new emulator takes hardly any time at all, and you won’t run into problems maintaining special facilities, replacing batteries, etc.
Cons
You lose the device-specific behavior mentioned earlier. However, the number of test scenarios that depend on the specifics of a particular device is negligible and rarely high priority. The bigger drawback is poor scalability: the otherwise simple task of rolling out a new emulator version to all hosts turns into a nightmare.
Android emulator Docker image
Docker resolves the issues with clean emulators.
Pros
Docker combined with wrappers for preparing and rolling out the emulator image is a fully scalable solution that helps quickly and efficiently prepare emulators to roll them out on all hosts, ensuring they’re sufficiently isolated.
Cons
A higher entry threshold.
Summary
We back Docker.
There are various Docker images of Android emulators available online, so we recommend starting from an existing one rather than building from scratch.
As we already mentioned, preparing an image requires a certain level of skill. You’ll also often want to pre-configure the emulator: disable animations, log into a Google account, disable Google Play Protect and much more. All of this is difficult to get right. Therefore, one of our priorities is to provide everyone with detailed documentation on how to prepare and use such images.
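To make the workflow concrete, here is a sketch of what running a containerized emulator can look like. The image name, tag and container name are placeholders, not a specific recommendation; the one genuinely essential piece is passing through `/dev/kvm` for hardware acceleration:

```shell
# Image name is a placeholder; substitute the emulator image you build or pick.
docker run -d \
  --device /dev/kvm \
  -p 5555:5555 \
  --name emulator-api-29 \
  your-registry/android-emulator:api-29

# Point the host's adb at the containerized emulator:
adb connect localhost:5555
adb devices
```

Once the container registers with ADB, a runner can treat it exactly like any other connected device.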
Infrastructure
You’ve written hundreds of UI tests. You want to run some of them as part of a PR, which means the entire group needs to finish as soon as possible, for example, in 20 minutes. This is where the real scaling comes in.
However, this is a blind spot for many Android developers and automation engineers, which is unsurprising: infrastructure requires knowledge of hardware, server configuration, DevOps practices and more. So make sure you get people on your side who understand all this, within or outside your company, as they’ll be immensely helpful in saving you from mistakes.
What’s in store for you:
- Choosing between a cloud solution, a local solution built from scratch, and a local solution based on whatever infrastructure your company already has for running tests on other platforms.
- The hardest part is deploying internal infrastructure from scratch. In this case, you need to select hardware that autotests will use to its fullest extent. You’ll have to measure the load on CPU/GPU/memory/disk, and also try out various numbers of simultaneously running emulators to track test stability. This is a broad topic we want to explore from all angles and share our recommendations on. The further launch of the necessary software, integration and so on is all up to DevOps engineers.
- The output should be some kind of service, a single point that gives you emulators. This could be Kubernetes, a cloud service like Google Cloud Engine, or a custom solution. It will once again be the DevOps engineers who help set it up.
- Linking the service with the test runner. One of AvitoRunner’s goals is to make this link as simple and transparent as possible, as well as provide extension points to make implementing your custom service easy.
In the near future, we plan to release AvitoRunner and other articles to help you configure your infrastructure.
Miscellaneous
Don’t forget about these other important points as well:
- the output of test run reports (Allure)
- implementation/synchronization with TMS
- integration with CI/CD
- developer and tester training
- processes (who, when, how many and which autotests should be written)
We’ll focus more on these issues later.
Conclusion
We covered the main facets of setting up automated Android testing. Now we hope that this puzzle comes together in your minds so you can see the whole picture.
Watch us grow on our site.