Conscious Compose optimization 2: Tackling composition

Andrey Bogomolov
Published in ProAndroidDev · 14 min read · May 2, 2024

Image generated by Kandinsky

Jetpack Compose is constantly evolving, opening new horizons for developers to optimize. Since our last review, we have made significant progress, reducing scrolling lag from 5–7% to zero. In this article, we will share fresh discoveries and advanced practices in optimizing Compose. For a deeper dive into the topic, we recommend reading the first part.

Article series:

1. Conscious Compose optimization (original)
2. Conscious Compose optimization 2: Tackling composition (current) (original)

Composition — a dethroned god

The initial composition problem

In the first part, we described one of the issues with lazy lists: the use of SubcomposeLayout under the hood. According to the developers, another problem is the speed of initial composition. During this costly step, the element tree is constructed for the first time. For lazy lists, this moment is especially critical, as initial composition occurs with the creation of each element.

In the podcast, the developers emphasize that Compose was designed around the idea that it is easier to think in terms of simple layouts (Box, Column, Row) rather than complex ones like ConstraintLayout. Unlike the View system, Compose does not suffer from exponentially multiplying measure passes, because measurement is limited to a single pass. Function skipping also mitigates the nesting problem. However, none of this helps with initial composition, which still requires processing thousands of layouts. The developers are working to reduce this burden: upgrading from Compose 1.4 to 1.6 improves list performance by about 40%, which is a significant argument in favor of updating. For already optimized lists, though, the gain may be less noticeable.

Composition in Compose is costly, but this is the price for the convenience of the declarative approach. In this article, we will propose ways to minimize the costs of composition and reduce their impact during recompositions.

First, let’s look at how the time of composition relates to the other phases of Compose:

Average time of each phase. Data for a weak device. Source

From this graph, it becomes clear why deferring work from composition to a later phase is so effective. However, we should not overlook the cost of creating lambdas and the potential equality problems that arise when closures capture external variables. Hence, it is advisable to use lambda-based APIs to defer values that change frequently, rather than values that change only occasionally.
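As an illustration of this deferral, reading a frequently changing value inside a lambda-based modifier moves the read from composition to the layout phase. A sketch, where the scrollOffset provider is an illustrative assumption:

```kotlin
import androidx.compose.foundation.layout.Box
import androidx.compose.foundation.layout.offset
import androidx.compose.runtime.Composable
import androidx.compose.ui.Modifier
import androidx.compose.ui.unit.IntOffset

@Composable
fun Header(scrollOffset: () -> Int) {
    Box(
        // Reading the value inside the lambda defers the read to the
        // layout phase, so frequent changes skip recomposition entirely.
        modifier = Modifier.offset { IntOffset(x = 0, y = scrollOffset()) }
    ) { /* content */ }
}
```

Had the offset been passed as a plain parameter, every change would restart composition of this function.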

Modifier.Node

Modifier.composed has long been the primary tool for creating custom modifiers. However, despite its flexibility, this method has drawbacks. Modifier.composed requires the inclusion of a composable lambda that generates a restartable group and several other constructs. During composition, these composable lambdas are called internally to produce the final modifier. This process, known as materialization, adds to the complexity of the initial composition process. Additionally, the use of lambdas complicates the comparison of modifiers since each new lambda differs from the previous, resulting in distinct modifier chains that limit the number of skips Compose can apply.

Now, Modifier.Node offers a new, more efficient method for creating custom modifiers. This approach accomplishes all the tasks previously handled by Modifier.composed, but without the excessive overhead associated with composable lambdas and composition. The primary advantage is that classes created with Modifier.Node can be compared and reused, significantly enhancing performance.

Google has recently updated its documentation to include numerous examples and guidelines for using Modifier.Node. This update provides an excellent opportunity to review and replace existing modifier implementations in your project with the more optimized Modifier.Node.
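To make the difference concrete, here is a minimal sketch of a custom modifier built on Modifier.Node rather than Modifier.composed. The names CircleNode, CircleElement, and circleBackground are illustrative assumptions, not from any library:

```kotlin
import androidx.compose.ui.Modifier
import androidx.compose.ui.graphics.Color
import androidx.compose.ui.graphics.drawscope.ContentDrawScope
import androidx.compose.ui.node.DrawModifierNode
import androidx.compose.ui.node.ModifierNodeElement

// The node holds mutable state and does the actual work (here: drawing).
private class CircleNode(var color: Color) : Modifier.Node(), DrawModifierNode {
    override fun ContentDrawScope.draw() {
        drawCircle(color)
        drawContent()
    }
}

// The element is a comparable description; being a data class gives it
// equals/hashCode, so unchanged chains can be skipped and nodes reused.
private data class CircleElement(val color: Color) : ModifierNodeElement<CircleNode>() {
    override fun create() = CircleNode(color)
    override fun update(node: CircleNode) {
        node.color = color
    }
}

fun Modifier.circleBackground(color: Color): Modifier = this then CircleElement(color)
```

Because the element is a plain comparable class, no composable lambda is materialized during composition, which is exactly the overhead Modifier.composed carried.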

Problems with DerivedState and remember

As we have already discovered, composition itself is costly, and DerivedState is advised as a way to reduce the number of recompositions. However, it is often misused, as detailed in the first part, so it is worth explaining why this deserves such close attention.

Let’s examine the time it takes to read specific data to grasp the problem’s extent (source):

  • local variable read — 1 ns
  • field read — 5 ns
  • method call — 10 ns
  • synchronized read — 50 ns
  • map.get — 150 ns
  • state.value — 2,500 ns
  • derivedState.value — 10,000 ns

Considering that to maintain a screen refresh rate of 120 fps, the maximum rendering time per frame must not exceed 8,333,333 ns, it is clear that the misuse of DerivedState can significantly decelerate your application. If this method does not lead to a noticeable reduction in recompositions, it may slow down the code more than directly accessing the original State<T>.
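A justified use is one where an expensive-to-read derived value changes far less often than its source. A sketch of the classic documented pattern:

```kotlin
import androidx.compose.foundation.lazy.LazyListState
import androidx.compose.runtime.Composable
import androidx.compose.runtime.derivedStateOf
import androidx.compose.runtime.getValue
import androidx.compose.runtime.remember

@Composable
fun ScrollToTopButton(listState: LazyListState) {
    // firstVisibleItemIndex changes on nearly every scroll frame, but the
    // derived Boolean flips rarely, so recompositions are collapsed.
    val showButton by remember {
        derivedStateOf { listState.firstVisibleItemIndex > 0 }
    }
    if (showButton) { /* Button(...) */ }
}
```

If the derived value changed as often as the source, the 10,000 ns read would only add overhead on top of the plain state read.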

The same principle applies to the remember function. Careless memorization of simple calculations can unjustifiably slow down the application’s performance. For instance, to memorize a simple expression, it is necessary first to compare keys with their previous values, which alone is more costly, not to mention the additional overhead of reading and writing to the Slot Table.

// Counterexample: remembering a trivial expression costs more
// (key comparison plus Slot Table reads/writes) than recomputing it
val expr = remember(a, b, c) { a + b * c }

Thus, it is beneficial to have at least a basic understanding of how Compose operates internally to avoid detrimentally leveraging its features.

Balancing Compose with traditional Kotlin techniques

To work effectively with Jetpack Compose, it is crucial to find a balance between utilizing Compose features and standard Kotlin code. As Compose developer Andrey Shikov noted, “While learning how to Compose, we forgot how to Kotlin”. Let’s take a look at his example demonstrating this approach. The original version of the code utilized many separate states and side-effects:

@Composable
fun MyComponent(modifier: Modifier, config: Config) {
    val interactionSource = remember { MutableInteractionSource() }
    val activeInteractions = remember { mutableStateListOf<Interaction>() }
    val config by rememberUpdatedState(config)

    LaunchedEffect(interactionSource) {
        interactionSource.interactions.collect {
            // update the interaction list
        }
    }

    val animatableColor = remember {
        Animatable(config.defaultColor, Color.VectorConverter)
    }

    LaunchedEffect(config) {
        // update state.animatable to new target
        // update state.config
    }

    LaunchedEffect(activeInteractions) {
        snapshotFlow { activeInteractions.lastOrNull() }.collect {
            // update animatableColor based on interactions and configuration
        }
    }

    // use animatableColor in drawing
}

This code was optimized by combining multiple states and reducing the number of side-effects by moving the logic into a single coroutine:

@Composable
fun MyComponent(modifier: Modifier, config: Config) {
    val state = remember { MyComponentState(config) }

    LaunchedEffect(state) {
        state.collectUpdates()
    }

    SideEffect {
        state.config.value = config
    }

    // use state.animatableColor in draw
}

Such refactoring resulted in a 9% improvement in code performance.
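The state holder itself is not shown in the talk's excerpt; a minimal sketch under these assumptions (Config and its defaultColor come from the example above, everything else is illustrative):

```kotlin
import androidx.compose.animation.VectorConverter
import androidx.compose.animation.core.Animatable
import androidx.compose.runtime.mutableStateOf
import androidx.compose.runtime.snapshotFlow
import androidx.compose.ui.graphics.Color

// Hypothetical holder combining the previously separate states.
class MyComponentState(initialConfig: Config) {
    val config = mutableStateOf(initialConfig)
    val animatableColor = Animatable(initialConfig.defaultColor, Color.VectorConverter)

    // One coroutine replaces several LaunchedEffects: it observes config
    // changes through the snapshot system and drives the color animation.
    suspend fun collectUpdates() {
        snapshotFlow { config.value }.collect { newConfig ->
            animatableColor.animateTo(newConfig.defaultColor)
        }
    }
}
```

The key point is that ordinary Kotlin classes and coroutines carry the logic, leaving composition with almost nothing to do.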

Pre-rendering

When you first open an application screen, user wait time consists of two primary phases: data loading and UI rendering. To minimize the wait for content to appear, you can employ a pre-rendering technique.

The method involves rendering the screen in parallel with data loading, using mock data. It is crucial that the UI composition structure remains relatively unchanged after real data replaces the mock data. Given the high cost of the composition process, this approach allows the composition to “warm up” in advance, significantly reducing the time it takes for the actual content to appear on the screen.

One way to implement pre-rendering is to use a loading indicator drawn on top of the entire screen:

@Composable
fun ProductDetailScreen(state: ProductUiState) {
    // Initially, the state contains empty data for pre-rendering content
    ProductDetailContent(state)

    // Display the loading indicator over the content, not in place of it
    if (state.isLoading) {
        FullscreenLoader()
    }
}

You can also add a shimmer effect to mock data using the placeholder modifier from the Accompanist library. This method does not alter the composition structure after real data is loaded and ensures a smooth visual transition. However, it requires additional adjustments of existing elements to accurately render the shimmer effect over the content.

In the context of using lazy lists, it’s important to ensure uniformity among elements with the same contentType. If the list items do not vary significantly in composition structure, it reduces the need for additional work in rebuilding the composition tree when reusing them, which also helps speed up scrolling.
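A sketch of how uniform contentType is declared; Product, its id, and ProductItem are illustrative assumptions:

```kotlin
import androidx.compose.foundation.lazy.LazyColumn
import androidx.compose.foundation.lazy.items
import androidx.compose.runtime.Composable

@Composable
fun ProductList(products: List<Product>) {
    LazyColumn {
        items(
            items = products,
            key = { it.id },
            // One contentType for structurally identical items lets the
            // list reuse their compositions as they scroll off-screen.
            contentType = { "product" }
        ) { product ->
            ProductItem(product)
        }
    }
}
```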

Deferred Composition

Lazy layouts use SubcomposeLayout to determine which elements should be displayed on the screen. This is done internally during the layout phase. The entire process occurs within a single frame, which can be problematic with complex screens.

In such scenarios, the technique of deferred composition can be beneficial. It allows the element composition process to be spread across several frames, thus enhancing frame rendering speed. More details on this technique can be found in the linked article.

Below is an example demonstrating how to defer the composition of part of the screen using the withFrameNanos method. Like delay(), this method suspends the coroutine, but it resumes it right at the start of the next frame.

@Composable
fun ProductDetail(productInfo: ProductInfo) {
    var blockState by remember { mutableStateOf(0) }

    LaunchedEffect(Unit) {
        while (blockState < 3) {
            // Defer the composition of each block by one frame
            withFrameNanos { }
            blockState += 1
        }
    }

    ProductBlock0(productInfo)

    if (blockState >= 1) {
        ProductBlock1(productInfo)
    }
    if (blockState >= 2) {
        ProductBlock2(productInfo)
    }
    if (blockState >= 3) {
        ProductBlock3(productInfo)
    }
}

Painter

XML icons vs Compose icons

When working with icons in Compose, there are two primary approaches: using XML format icons and creating icons directly with code. Let’s explore the processes and efficiencies of each method.

// XML icon
Image(
    painter = painterResource(R.drawable.my_icon),
    contentDescription = null
)

// Compose icon
Image(
    painter = rememberVectorPainter(Icons.Filled.Home),
    contentDescription = null
)

For XML icons, the loading process involves a call to painterResource(R.drawable.my_icon) and includes these steps:

  1. Reading the resource from a file or retrieving it from the cache.
  2. Transforming XML into an ImageVector.
  3. Creating a Painter object with rememberVectorPainter().

For icons directly created in Compose, the process is streamlined as follows:

  1. Calling Icons.Filled.Home immediately initiates the creation of an ImageVector object.
  2. Creating a Painter object with rememberVectorPainter().

Tests have shown that icons made with Compose load 5% to 18% faster than XML format icons. The speed of loading depends on the complexity of the icon’s structure and the size of the source file. It’s important to note that the overall impact of icons on app performance can significantly vary depending on the number of icons displayed and their use in lazy lists.

To create your own Compose icons, you can use the SVG to Compose tool, which supports converting both SVG and XML files. It’s also important to mention that these icons reduce problems with resources when using Compose Multiplatform.

Painter instability

This section was intended to discuss how Painter instability affects image and icon rendering performance, and when it might be appropriate to use a wrapper class for Painter. However, with the new ability to declare external types as stable, this concern has become outdated. This development is further explored in the innovations chapter.

Extracting the Painter

Extracting the Painter creation from the list significantly improves rendering speed. This is crucial in lists where each item requires its own Painter initialization. Despite the presence of a cache for XML resources, each call still imposes additional load. Creating a single Painter for the entire list markedly reduces this load. However, similar to removing modifiers, this may impact the readability of the code.

Example before optimization:

@Composable
fun MyList(products: ImmutableList<ProductUiState>) {
    LazyColumn {
        items(products) { product ->
            // Painter inside a list item
            MyProductItem(product, painterResource(R.drawable.ic_menu))
        }
    }
}

Example after optimization:

@Composable
fun MyList(products: ImmutableList<ProductUiState>) {
    // Extracting Painter for the common icon from the list
    val menuPainter = painterResource(R.drawable.ic_menu)

    LazyColumn {
        items(products) { product ->
            MyProductItem(product, menuPainter)
        }
    }
}

This technique is not only effective for icons but also for other resources, although the performance gains might be less noticeable for the latter. The approach is also applicable in scenarios with frequent recomposition due to animations, where creating costly objects is generally best avoided.

Design system

Creating a design system is a crucial step in implementing Jetpack Compose in projects. Developers at this stage may face challenges due to limited experience with the new technology, which can lead to inefficient code in vital parts of the design system.

Color scheme

Traditionally, when creating a custom color palette, developers copy the approach used in MaterialTheme. However, recent developer commits highlight the flaws of this implementation.

In the traditional implementation, a separate State<T> was created for each color, so every read of a color subscribed to state changes. This could negatively affect performance, especially when the design system incorporates many colors.

Example before optimization:

@Stable
class ColorScheme(
    primary: Color,
    onPrimary: Color,
) {
    // State<T> for each color
    var primary by mutableStateOf(primary)
        internal set
    var onPrimary by mutableStateOf(onPrimary)
        internal set
}

Originally, this implementation offered flexibility: each color could be changed independently without significant performance cost. In practice, however, the color scheme in most applications changes only when switching between light and dark themes, not color by color.

Switching to a plain data class for the color scheme eliminates the state subscriptions and improves performance.

Example after optimization:

@Immutable
data class ColorScheme(
    val primary: Color,
    val onPrimary: Color,
)
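With an immutable scheme, the theme switch swaps one whole object through a CompositionLocal instead of mutating individual states. A sketch, where lightColors and darkColors are illustrative ColorScheme instances:

```kotlin
import androidx.compose.runtime.Composable
import androidx.compose.runtime.CompositionLocalProvider
import androidx.compose.runtime.staticCompositionLocalOf

val LocalColorScheme = staticCompositionLocalOf<ColorScheme> {
    error("ColorScheme is not provided")
}

@Composable
fun AppTheme(darkTheme: Boolean, content: @Composable () -> Unit) {
    // A static local is fine here: the scheme changes only on the rare
    // light/dark switch, and then the whole themed content recomposes.
    val scheme = if (darkTheme) darkColors else lightColors
    CompositionLocalProvider(LocalColorScheme provides scheme, content = content)
}
```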

Mistakes of the Past

The code we’re discussing was written long ago and has remained unchanged since. However, it was extensively used across all our screens and significantly impacted frame rendering times due to its inefficient handling of typography in AppTheme.

AppTheme.typography was programmed to create new typography with each invocation, loading fonts and colors from resources for each text style individually. This approach led to repeated resource calls and necessitated reading AppTheme.colors.textPrimary 16 times for each TextStyle, which was highly inefficient.

object AppTheme {
    val typography: AppTypography
        @Composable
        @ReadonlyComposable
        get() = DefaultAppTypography
}

val DefaultAppTypography: AppTypography
    @Composable
    @ReadonlyComposable
    get() = AppTypography(
        headXXL = TextStyle(
            color = AppTheme.colors.textPrimary,
            ...
        ),
        // Creating 15 other text styles
    )

This implementation notably slowed down rendering on screens with a lot of text and varied styles, as each text element triggered a call to, for example, AppTheme.typography.headXXL. We identified this issue after thoroughly analyzing several screens with composition tracing.

The solution involved changing how we create and utilize typography. Now, AppTheme typography is initialized just once and can be accessed via LocalAppTypography.current, significantly reducing resource calls and improving typography performance across the entire application:

object AppTheme {
    val typography: AppTypography
        @Composable
        @ReadonlyComposable
        get() = LocalAppTypography.current
}
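A sketch of the backing CompositionLocal and its single initialization; createAppTypography() is an illustrative stand-in for however the 16 text styles are actually constructed:

```kotlin
import androidx.compose.runtime.Composable
import androidx.compose.runtime.CompositionLocalProvider
import androidx.compose.runtime.remember
import androidx.compose.runtime.staticCompositionLocalOf

val LocalAppTypography = staticCompositionLocalOf<AppTypography> {
    error("AppTypography is not provided")
}

@Composable
fun AppTheme(content: @Composable () -> Unit) {
    // Built once per theme instance instead of on every typography read
    val typography = remember { createAppTypography() }
    CompositionLocalProvider(LocalAppTypography provides typography, content = content)
}
```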

Formatters

It’s crucial to correctly place the logic for formatting numbers and currencies. While it might seem convenient to incorporate formatting directly into Compose code, this can lead to unexpected performance issues. It is advisable to integrate the creation and use of formatters into your application’s business logic. This method not only lightens the load on the main thread but also prevents the redundant duplication of formatter objects, facilitating their reuse across screens. Although this recommendation may appear straightforward, it’s frequently overlooked in practice, particularly when formatting is obscured by utility functions.

Example of what not to do in Compose code:

Text(
    text = productItem.price.toMoneyFormat()
)
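Instead, the formatter can live in the mapping from domain data to UI state. A minimal sketch, assuming illustrative names such as MoneyFormatter and ProductUiState (not from the original code):

```kotlin
import java.math.BigDecimal
import java.text.NumberFormat
import java.util.Locale

// One shared, reusable formatter instead of a new one per composition.
class MoneyFormatter(locale: Locale = Locale.US) {
    private val format = NumberFormat.getCurrencyInstance(locale)
    fun format(amount: BigDecimal): String = format.format(amount)
}

data class ProductUiState(val name: String, val formattedPrice: String)

// Formatting happens while mapping domain data to UI state, off the UI path.
fun toUiState(name: String, price: BigDecimal, formatter: MoneyFormatter) =
    ProductUiState(name, formatter.format(price))
```

The composable then renders the already formatted string and stays free of formatting work.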

However, bear in mind that shifting formatting to the business logic might not always yield the anticipated improvements. When processing lists in business logic, formatting is applied to the entire list at once. In contrast, within Compose, especially in the context of lazy lists, formatting is only applied to the elements visible to the user. Consequently, relocating formatting logic could potentially delay the display of content on the screen. It is important to remember that not all optimizations enhance performance, and it is crucial to thoroughly analyze any changes before implementation.

Micro-optimizations

These recommendations will prove useful when developing utility functions, essential application components, or creating a design system. They become particularly important when working with animations and graphics. When writing code that runs in the background and isn’t used universally, striving to use structures more efficient than the standard ones may be unnecessary. It’s important to remember that optimizing algorithmic complexity will yield a more noticeable performance boost than micro-optimizations.

Autoboxing

Autoboxing of primitive data types can subtly slow down code execution, especially when used extensively. In Jetpack Compose, developers actively seek to minimize such overhead by implementing various methods:

  • Using special MutableState for primitive types (e.g., mutableIntStateOf()) helps avoid boxing.
  • Introducing special values (like Unspecified) for certain classes instead of null prevents auto-boxing by leveraging inline value classes. For instance, Unspecified was recently added for properties like TextAlign and TextDirection to avoid null.
  • Replacing Pair<Int, Int> with a value class having a Long field significantly reduces data storage costs by utilizing the stack instead of the heap. Long can store twice as many bits as Int, which enables it to hold two Int values and access them using bitwise operations.
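The last point can be sketched as a value class over Long; the IntPair name and layout are illustrative assumptions, not a library type:

```kotlin
// Two Ints packed into one Long: no heap allocation for the pair object.
@JvmInline
value class IntPair(val packed: Long) {
    val first: Int get() = (packed shr 32).toInt()   // high 32 bits
    val second: Int get() = packed.toInt()           // low 32 bits
}

// Factory mirroring the Pair(first, second) constructor shape.
fun IntPair(first: Int, second: Int): IntPair =
    IntPair((first.toLong() shl 32) or (second.toLong() and 0xFFFFFFFFL))
```

The arithmetic shift on unpacking preserves the sign of the high half, and masking the low half before packing keeps negative second values from clobbering the high bits.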

Fast methods

Not all standard Kotlin methods are ideally suited for specific tasks. Jetpack Compose offers alternatives such as fastForEach or fastFirst. More about these methods can be found on Romain Guy’s blog.
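The core idea behind these methods is simple: a plain indexed loop over a random-access list avoids allocating an Iterator on every call. A simplified sketch of the pattern (the real implementations live in androidx.compose.ui.util and differ in detail):

```kotlin
// Index-based iteration: no Iterator allocation, no boxing of the index.
inline fun <T> List<T>.fastForEachSketch(action: (T) -> Unit) {
    for (i in indices) {
        action(get(i))
    }
}
```

This only pays off for RandomAccess lists; on a linked structure, get(i) would turn the loop quadratic.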

Effective structures

Standard data structures in Kotlin might not be the best fit for specialized tasks.

  • For example, mutableListOf uses ArrayList, which could be excessive if there's no need to dynamically resize the collection or use generics. In critical code sections, it is preferable to use specialized arrays, like IntArray.
  • The mutableMapOf method defaults to creating a LinkedHashMap, which may be less efficient than other data types. For instance, Jetpack Compose uses ScatterMap.

AndroidX Collections contain numerous optimized structures that are advisable to use in critical code.
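As a small example of the first point, a hot path with a known size can use a primitive IntArray directly:

```kotlin
// IntArray: a single allocation, no boxing, no ArrayList resizing.
fun sumSquares(n: Int): Int {
    val squares = IntArray(n) { it * it }
    var total = 0
    for (value in squares) total += value
    return total
}
```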

Nested “if” statements

When working with conditional constructs, it’s important to consider the performance impact of nesting. Each conditional branch generates additional calls to startReplaceableGroup and endReplaceableGroup to facilitate fast code replacement. Therefore, unless necessary, prefer a flat structure over nested conditional blocks. This approach simplifies the code and enhances performance by reducing the number of operations.

Example of code generation for flat “if” statements (also applies to when):

if (condition1) {
    $composer.startReplaceableGroup()
    Content1()
    $composer.endReplaceableGroup()
} else if (condition2) {
    $composer.startReplaceableGroup()
    Content2()
    $composer.endReplaceableGroup()
} else {
    $composer.startReplaceableGroup()
    Content3()
    $composer.endReplaceableGroup()
}

Example of code generation for nested “if” statements:

if (condition1) {
    $composer.startReplaceableGroup()
    Content1()
    $composer.endReplaceableGroup()
} else {
    $composer.startReplaceableGroup()
    if (condition2) {
        $composer.startReplaceableGroup()
        Content2()
        $composer.endReplaceableGroup()
    } else {
        $composer.startReplaceableGroup()
        Content3()
        $composer.endReplaceableGroup()
    }
    $composer.endReplaceableGroup()
}

Innovations

Specifying the stability of external types

With the release of the Compose Compiler version 1.5.5, you can now explicitly specify the stability of external types. This innovation eliminates the need for additional wrappers to ensure stability. We recommend adding frequently used classes such as Kotlin’s standard collections and Painter to your list of stable types. It’s essential to specify this in every module where Compose is used. This is particularly important for those who choose not to use immutable collections, either because they are still in alpha or due to the extensive code rewriting that would be required.

// All collections from Kotlin
kotlin.collections.*

// Painter
androidx.compose.ui.graphics.painter.Painter
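The file is then wired up in each module's build script. A sketch for a Compose compiler 1.5.x setup; the file name compose_stability.conf is an assumption, adjust it to your project:

```kotlin
// build.gradle.kts (sketch)
kotlinOptions {
    freeCompilerArgs += listOf(
        "-P",
        "plugin:androidx.compose.compiler.plugins.kotlin:stabilityConfigurationPath=" +
            "${project.rootDir}/compose_stability.conf"
    )
}
```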

Strong skipping mode

Currently, Strong Skipping Mode is experimental and can be activated through a special flag. It may become the default setting in the future. Here’s what it offers:

  • All restartable functions become skippable. Unstable parameters are compared by instance (===), while stable ones are compared with equals().
  • Lambdas that capture unstable variables are also wrapped in remember.
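For the pre-2.0 Compose compiler, the mode is enabled with a compiler flag; a build-script sketch (flag names have changed across versions, so verify against your compiler release):

```kotlin
// build.gradle.kts (sketch)
kotlinOptions {
    freeCompilerArgs += listOf(
        "-P",
        "plugin:androidx.compose.compiler.plugins.kotlin:experimentalStrongSkipping=true"
    )
}
```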

Toolkit

Viewing Compose source code

Whenever you are unsure about how certain code will function after being compiled by Compose, it’s advisable to examine the final Java code. Even if you are well-versed in Compose code generation from Jetpack Compose Internals or various articles, this does not protect you from potentially outdated information. For this purpose, there is a convenient Gradle plugin called decomposer.

Vkompose plugin

Among the useful tools are VK's plugins for the Kotlin compiler and the IDE that:

  • Highlight unstable parameters and functions that cannot be skipped directly in the IDE.
  • Visually indicate ongoing recompositions in the UI with colored borders.
  • Log the reasons for recompositions.

Detekt

Detekt not only helps monitor code style but also guards against poor practices that lead to performance degradation, including:

  • compose-rules — a large list of diverse rules.
  • vkompose — offers a rule in addition to the plugin that checks the skippability of functions.

Summary

In summary, we've learned how to tackle the cost of initial composition in Compose and why it's crucial not to burden it with unnecessary logic: convenience and simplicity do come at a price. Thanks to the relentless efforts of the developers, Compose's performance has improved significantly, allowing us to divert our attention to other aspects of development. Screens with a huge DAU have required optimization beyond standard methods even when built on the traditional View system. For Compose, the limits are far from reached: future optimizations will make it an even more powerful tool, well equipped for any task.
