Test By Layer

In the world of Java and Android, Mockito is an industry standard. Many a unit test begins with the developer adding an @Mock annotation for each and every dependency the class under test requires. This works, for the most part, because many developers do not follow the simple open/closed principle, which states, in part, that a class not meant for extension should be made final. Doing so introduces some awkwardness when it comes to testing, as we now need to do some extra special things to get Mockito to work. This is exacerbated in Kotlin, where both classes and methods are closed by default. Your initial reaction might be to open everything up, to use interfaces anywhere and everywhere, or to use the “back door” that Mockito 2 gives you to get around this. But wait! What if there’s a better way?
Mocking your Boundaries
Let’s imagine an app which lets you transfer money between customer accounts. We have entities like Customer, Account, and Transaction. We also have some use-cases, such as CreateCustomerUseCase, CreateTransactionUseCase, and so on. You can see where I’m going with this. (Or, you can check out the sample repository at the bottom of this post!)
Our use-cases interact with the database which holds persisted entities via repository interfaces. Testing these use-cases is pretty simple:
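As a sketch of the shape involved (all names here are hypothetical; the real snippets live in the sample repository), a use-case might look like this, with its repositories defined as interfaces at the boundary:

```java
// Hypothetical repository boundaries. The post's tests provide these as
// Mockito mocks; they are interfaces, so mocking them is straightforward.
interface AccountRepository {
    long balanceOf(String accountId);
    void setBalance(String accountId, long balance);
}

interface TransactionRepository {
    void record(String fromId, String toId, long amount);
}

// A final use-case class: the logic under test, never mocked.
final class CreateTransactionUseCase {
    private final AccountRepository accounts;
    private final TransactionRepository transactions;

    CreateTransactionUseCase(AccountRepository accounts,
                             TransactionRepository transactions) {
        this.accounts = accounts;
        this.transactions = transactions;
    }

    // Moves amount between two accounts; fails on a non-positive amount
    // or insufficient funds.
    boolean execute(String fromId, String toId, long amount) {
        long fromBalance = accounts.balanceOf(fromId);
        if (amount <= 0 || fromBalance < amount) return false;
        accounts.setBalance(fromId, fromBalance - amount);
        accounts.setBalance(toId, accounts.balanceOf(toId) + amount);
        transactions.record(fromId, toId, amount);
        return true;
    }
}
```

Each expected behaviour (a successful transfer, insufficient funds, a non-positive amount) then gets its own test, with both repositories mocked and their responses stubbed per case.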
For each different behaviour we expect, we write an appropriate test. Note the use of mocks: since we interact with two repositories, which are interfaces, we can comfortably mock them. They might eventually be implemented as services or gateways, but in either case they would involve reading and writing, and should be mocked out.
Note as well that we use a real Customer object. This should come as no surprise. Customer objects are small, easy to make, don’t require any kind of verification, and assure us that we’re using valid input data, given that a Customer’s information is validated at object construction time:
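A sketch of such an entity (field names here are hypothetical), failing fast on bad input at construction:

```java
// Hypothetical Customer entity: invalid data can never exist, because
// construction itself enforces the rules.
final class Customer {
    private final String id;
    private final String name;

    Customer(String id, String name) {
        if (id == null || id.isEmpty()) {
            throw new IllegalArgumentException("id must not be empty");
        }
        if (name == null || name.trim().isEmpty()) {
            throw new IllegalArgumentException("name must not be empty");
        }
        this.id = id;
        this.name = name.trim();
    }

    String getId() { return id; }
    String getName() { return name; }
}
```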
This has its own set of unit tests, to verify proper enforcement of valid data. Similar tests exist as well for other entities.
Now, let’s look at a View Model’s unit test:
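The pieces involved might be sketched like this (all names hypothetical): only the repository interface at the boundary would be mocked, while the use-case and view model are the real final classes.

```java
// Hypothetical boundary interface: the only thing mocked in the test.
interface AccountRepository {
    void create(String ownerId, long openingBalance);
}

// Real final use-case, used as-is in the view model's test.
final class CreateAccountUseCase {
    private final AccountRepository accounts;

    CreateAccountUseCase(AccountRepository accounts) {
        this.accounts = accounts;
    }

    boolean execute(String ownerId, long openingBalance) {
        if (ownerId.isEmpty() || openingBalance < 0) return false;
        accounts.create(ownerId, openingBalance);
        return true;
    }
}

// The view model under test depends on the real use-case, not a mock of it,
// so a behaviour change in the use-case surfaces here immediately.
final class CreateAccountViewModel {
    private final CreateAccountUseCase createAccount;
    String lastMessage = "";

    CreateAccountViewModel(CreateAccountUseCase createAccount) {
        this.createAccount = createAccount;
    }

    void onSubmit(String ownerId, long openingBalance) {
        lastMessage = createAccount.execute(ownerId, openingBalance)
                ? "Account created"
                : "Invalid input";
    }
}
```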
Notice we are still mocking out our repositories, as they exist at the boundary of our system, but we do not mock out the use-cases or the entities. We use the real CreateAccountUseCase and verify that our behaviour is as expected. This is good, and gives us deeper insight into our code. Furthermore, our tests respect the fact that CreateAccountUseCase is a final class and is not extensible.
Gaining Insight
So what does this approach give us? Insight. Let’s imagine that we’ve written a large application, and tested it in this layered manner. Think back to what we learned about clean coding. Our layered architecture is exactly what is prescribed by “Clean” methodology:
- Entities in the center
- Then domain objects
- Then presentation objects
The View, Services, and Data pieces are not present in our sample application. Our testing follows our clean methodology:
- Entities are tested in isolation, as they only know about themselves and each other.
- Domain objects are tested using real entities, with inverted dependencies (repositories) mocked out.
- Presentation objects are tested using real use-cases and real entities.
What happens now is that when we make a breaking change in one layer, components in other layers are also affected. This sounds like strong coupling, but it has large benefits.
- You can instantly see, through test failures, which pieces of your code are affected by behavioural changes elsewhere in the system.
- Regressions decrease because of the increased level of test penetration: real code, not stubbed behaviour, is exercised across layers.
That last point is very important. Testing this way means that your outer layers no longer ignore the logic of your inner layers on the assumption that it adheres to the contract set in the inner layers’ own unit tests; they exercise that logic directly. If that contract is broken, we see the breakage immediately. Imagine if, in our last code snippet, we had simply mocked the responses of our use-case instead of using the real one. Now, if the use-case’s behaviour changes, this test won’t show it. Perhaps we just update the use-case’s test and move on. That lets a breaking change slip through, and it may only get picked up in regression testing, leading to a prolonged release cycle as you scramble to solve the issue!
Conclusion
I firmly believe that we as developers love new tools. We like to latch on to tech, patterns, and so on, and never let go. In the case of mocks, we need to make a decision each and every time we use them, and it’s a decision a lot of us skip. “Why is this a mock?” is the question we need to answer, and with that answer make the decision of whether or not to create one. If we can’t come up with a better reason than “because it’s what I’ve done before” or “because it’s a dependency,” then maybe it’s time to take a good hard look at our tests and decide whether they’re really as robust as they can be.
Good times to mock include places where you’ve inverted a dependency, or where you know IO or other blocking or asynchronous actions could be taking place. Another is when you actually need to verify actions, such as feeding your class under test a callback and verifying that its methods are invoked in accordance with expected behaviour.
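As an illustration of that last case (names here are hypothetical; with Mockito you would write something like verify(callback).onSuccess(), while this framework-free sketch records calls by hand):

```java
// A callback handed to the class under test. In a Mockito test this would
// be a mock whose invocations are checked with verify(); here the test
// uses a recording fake to assert the same interactions.
interface TransferCallback {
    void onSuccess();
    void onFailure(String reason);
}

// Class under test: its observable behaviour is which callback method fires.
final class TransferPresenter {
    void transfer(long balance, long amount, TransferCallback callback) {
        if (amount > 0 && amount <= balance) {
            callback.onSuccess();
        } else {
            callback.onFailure("insufficient funds");
        }
    }
}
```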
Every day, we get opportunities to make our software better and safer, and I think this is an easy step toward both.
Here is the sample repository: https://github.com/exallium/TestingLayers/