Diving deeper into context-oriented programming in Kotlin

Alexander Nozik
Published in ProAndroidDev · May 12, 2019


This article is a follow-up to my previous article on context-oriented programming (I was quite delighted to find that some people in the Kotlin community have already started to use the abbreviation COP for it). This time I want to go deeper and show some more complicated, real-life examples of the same approach.

Context-scoped functions (mapping)

Here is an example that appeared in a discussion about custom mappings. One quite frequently wants some kind of behavior to be added to an existing class. That is what extensions are for:

fun Int.map(): String = toString()

But sometimes one wants this behavior to be different in different places. For that, we usually define an interface and a few instances like this:

interface IntMapper {
    operator fun get(index: Int): String
}

object DefaultMapper : IntMapper {
    override fun get(index: Int) = "NONE"
}

object MyMapper : IntMapper {
    override fun get(index: Int) = index.toString()
}

And then use it wherever we like:

val i = 10
val str = DefaultMapper[i]

It solves the problem in most cases, but it does not, in fact, add behavior to a class: one needs to explicitly call the mapper object each time, and that is not what we need. So, since we need some functionality to exist in a context, let us make it context-bound:

interface IntMapper {
    fun Int.map(): String
}

object MyMapper : IntMapper {
    override fun Int.map(): String = toString()
}

and then:

MyMapper.run {
    val i = 10
    val str = i.map()
}

In Kotlin, the context is not necessarily even local: one can pass it to some external function by declaring it as a receiver:

fun IntMapper.doSomethingWithMap() {
    val i = 10
    val str = i.map()
}

MyMapper.run {
    doSomethingWithMap()
}

In fact, we can do even better and avoid interfaces completely:

object MapperScope

fun doSomething() {
    MapperScope.run {
        fun Int.map(): String = toString()

        val i = 10
        val str = i.map()
    }
}

In this case, the mapping function will only exist inside one specific scope and will never leave it.
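
If the mapping needs to be reused from several functions but still must not leak into the global namespace, the extension can also be declared as a member of the scope object itself. Below is a minimal sketch of that variant; ReusableMapperScope, describe, and demo are hypothetical names introduced only for this illustration:

object ReusableMapperScope {
    // member extension: visible only where ReusableMapperScope is a receiver
    fun Int.map(): String = toString()
}

// the scope can be passed around as a receiver, just like IntMapper above
fun ReusableMapperScope.describe(value: Int): String = "mapped: ${value.map()}"

fun demo() {
    val str = ReusableMapperScope.run { 42.map() } // "42"
    println(ReusableMapperScope.describe(7))       // prints "mapped: 7"
    // 42.map() would not compile here: the extension never leaves the scope
}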

A stateful context (2D Array example)

One of the things people complain about a lot in Java (and, by association, in Kotlin) is the design of 2D arrays. In Java, a two-dimensional array is represented by the type double[][]. It can be accessed by a double index like arr[1][1], which gives the illusion of a square matrix, familiar from numpy and other matrix-friendly languages. But in Java a two-dimensional array is in fact not a contiguous memory block with two dimensions; it is an array of references to arrays, meaning that different rows could have different lengths. Kotlin uses the more verbose definition Array<DoubleArray>, which better represents the inner structure of the object but gives math code developers a lot of frustration. It still can be accessed like arr[1][1], but the creation of such an array could look rather ugly:

val arr = Array(5) { i ->
    DoubleArray(5) { j ->
        (i + j).toDouble() // or whatever else with i and j
    }
}

Further frustration comes from the fact that the resulting array is not a contiguous memory block, and each access by index involves an additional heap dereference.

Can we have convenient zero-overhead two-dimensional arrays? Yes, of course; there are several ways to do it. Let me show you how to do it in a context-oriented way (some safety checks are omitted):

class ArrayAccessor(val rows: Int, val columns: Int) {
    operator fun <T> Array<T>.get(i: Int, j: Int): T =
        get(i * columns + j)

    operator fun <T> Array<T>.set(i: Int, j: Int, value: T) {
        set(i * columns + j, value)
    }

    inline fun <reified T> create(init: (i: Int, j: Int) -> T): Array<T> =
        Array(rows * columns) { offset ->
            init(offset / columns, offset % columns)
        }
}

fun <R> double2D(
    rows: Int,
    columns: Int,
    block: ArrayAccessor.() -> R
): R {
    return ArrayAccessor(rows, columns).run(block)
}
fun doSomethingWithArray() {
    val res = double2D(5, 5) {
        val arr = create { i, j -> (i + j).toDouble() }
        arr[2, 3] = 0.0
        arr[1, 1]
    }
    println(res)
}

Let’s see what happens here. We create an ArrayAccessor class that has only two fields, the number of rows and the number of columns; it does not actually store any data. What is important about this class is that when we use a regular Array in its context, it becomes interpreted as a two-dimensional array without any data copying and with zero-overhead access syntax (the get and set operations could be inlined manually if the VM does not guarantee automatic method inlining). Now, all we need is to create a lexical scope from this class (the double2D function) and use regular arrays for 2D operations. If for some reason we want to transport our 2D array somewhere, all we need is to move the data alongside the ArrayAccessor by creating a Pair or a specialized class. What is important is that, in any case, you do not need to create a copy of the array or to have a special class for data representation.
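
For illustration, a minimal sketch of such a specialized class could look like the snippet below; Structure2D and read are hypothetical names introduced only for this example and are not part of the code above:

// bundles the raw storage with its accessor, still without copying the data
class Structure2D<T>(val accessor: ArrayAccessor, val data: Array<T>)

// at the destination, entering the accessor context restores the 2D syntax
fun <T> Structure2D<T>.read(i: Int, j: Int): T = accessor.run { data[i, j] }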

Now, a bonus. We can actually do the same with an array of any primitive type, without boxing and without writing additional typed accessor classes. Look here:

fun ArrayAccessor.get(arr: IntArray, i: Int, j: Int): Int =
    arr[i * columns + j]

We have the same non-boxing behavior for Int without creating a new class. Sadly, we can’t use operator overloading in this case since it would require multiple receivers, but it could be fixed with KEEP-176. In fact, we can reduce our ArrayAccessor to a simple two-field data class and implement all operations as extensions, allowing unrivaled compile-time polymorphism options.
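As a rough sketch of what that reduction could look like (ArrayAccessor2 is a hypothetical name used here only to avoid clashing with the class defined above; the layout is the same row-major one):

// a plain data holder: no behavior of its own
data class ArrayAccessor2(val rows: Int, val columns: Int)

// one extension per primitive array type, resolved at compile time, no boxing
fun ArrayAccessor2.get(arr: DoubleArray, i: Int, j: Int): Double = arr[i * columns + j]

fun ArrayAccessor2.set(arr: DoubleArray, i: Int, j: Int, value: Double) {
    arr[i * columns + j] = value
}

fun ArrayAccessor2.get(arr: IntArray, i: Int, j: Int): Int = arr[i * columns + j]

Since the accessor carries no data, supporting another array type is just one more extension function, and the right overload is picked statically from the argument type.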

This method is currently being introduced in the kmath library to avoid boxing in generic matrix operations.

A scope with mutable state (additional property)

The final example for today is inspired by a piece of code donated by Roman Elizarov. The code itself is an algorithm doing automatic first-order differentiation (it is available here). The basic idea is that in order to perform automatic differentiation of an expression, one needs not only to compose numeric operations but also to remember the composition of expressions that were used to get the result. It could be done by storing those transformations in the result itself, but in this case Roman uses the context to accumulate changes. Of course, such techniques must be used with care: for example, a context with a mutable state must be created and disposed of in a controllable way and never reused. But I wanted to point out a different feature (not present in Roman’s initial variant). In order for the algorithm to work properly, a variable must have a mutable property d that can be accessed only inside the derivation procedure and not outside it. It is quite easy to add a read-only property via extensions, but what about writable properties? In theory, we can’t do it, because we can’t introduce new fields into a class. But who says that those fields must be stored in the class? Look here (this is simplified code; for the full code see the original):

private class AutoDiffContext {
    // the context, not the Variable class, owns the storage for the d values
    internal val derivatives = HashMap<Variable, Double>()

    var Variable.d: Double
        get() = derivatives[this] ?: 0.0
        set(value) {
            derivatives[this] = value
        }
}

Here the context stores the field values for all objects and effectively adds a mutable property to the class for as long as the context exists. Of course, such an approach has numerous limitations. For example, the hash map lookup has an impact on performance. Also, such a simple construct does not handle nested value scopes well, and it keeps Variable instances from being garbage-collected while the scope exists (one needs to use a weak hash map for that). And still, it is a very powerful technique when used with care.
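
To illustrate the point about controlled creation and disposal, here is a minimal usage sketch under stated assumptions: the Variable class is reduced to a single value, and the deriv and demo functions are hypothetical helpers placed in the same file as the AutoDiffContext snippet above (they are not part of the original code):

// a minimal stand-in for the real Variable class
class Variable(val value: Double)

// creates a fresh context for every call and never exposes it outside
private fun <R> deriv(block: AutoDiffContext.() -> R): R =
    AutoDiffContext().run(block)

fun demo() {
    val x = Variable(2.0)
    val result = deriv {
        x.d = 1.0     // the writable property exists only in this scope
        x.d * x.value
    }
    println(result)   // prints 2.0
    // x.d is not accessible here: the property lives in the context, not in Variable
}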

What else?

I planned to add two more examples to this article:

  • How to add static typing to a dynamically typed structure via a decorator context in Kotlin.
  • How to use nested contexts to organize complicated data flows.

But both cases deserve a detailed explanation, and it is better to write a separate article for each of them.

The context-oriented approach in Kotlin allows solving many problems in a concise and easily maintainable way. Of course, in most cases it does not allow us to solve problems that could not be solved by more traditional object-oriented or function-oriented approaches, but in some cases it allows us to do things faster and in a more concise way. One also needs to remember that, like most design principles, it requires a change in the way you write and interpret code, so you need to spend some time adjusting to the style to get the most out of it.

--


Senior research scientist at MIPT, (ex) team lead at JetBrains Research.