<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[ProAndroidDev - Medium]]></title>
        <description><![CDATA[The latest posts from Android Professionals and Google Developer Experts. - Medium]]></description>
        <link>https://proandroiddev.com?source=rss----c72404660798---4</link>
        <image>
            <url>https://cdn-images-1.medium.com/proxy/1*TGH72Nnw24QL3iV9IOm4VA.png</url>
            <title>ProAndroidDev - Medium</title>
            <link>https://proandroiddev.com?source=rss----c72404660798---4</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Mon, 13 Apr 2026 03:56:28 GMT</lastBuildDate>
        <atom:link href="https://proandroiddev.com/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[Why we used STOMP with WebSocket?]]></title>
            <link>https://proandroiddev.com/why-we-used-stomp-with-websocket-8343d4feeb0d?source=rss----c72404660798---4</link>
            <guid isPermaLink="false">https://medium.com/p/8343d4feeb0d</guid>
            <category><![CDATA[stomp]]></category>
            <category><![CDATA[kotlin-multiplatform]]></category>
            <category><![CDATA[kotlin]]></category>
            <category><![CDATA[websocket]]></category>
            <dc:creator><![CDATA[Behzod Halil]]></dc:creator>
            <pubDate>Mon, 13 Apr 2026 03:40:00 GMT</pubDate>
            <atom:updated>2026-04-13T03:39:58.836Z</atom:updated>
            <content:encoded><![CDATA[<p>webSocketSession.send(&quot;call someone&quot;) sends raw bytes over a persistent connection. That&#39;s it. No routing, no subscriptions, no message types. When you&#39;re building a voice call signaling system that handles incoming calls, WebRTC negotiation, and call state events simultaneously — raw WebSocket becomes a routing nightmare you have to solve yourself.</p><p>This is where STOMP comes in, and understanding why matters if you’re designing any real-time feature beyond a simple chat.</p><h3>The Problem with Raw WebSocket</h3><p>WebSocket gives you a bidirectional pipe. You open a connection, you send text or binary frames, and the other side receives them. That’s the entire contract.</p><pre>webSocket(&quot;ws://server/signaling&quot;) {<br>    send(&quot;some json string&quot;)<br>    for (frame in incoming) {<br>        // What kind of message is this?<br>        // Who is it for?<br>        // Which handler should process it?<br>    }<br>}</pre><p>Now imagine building a voice call system. You need to handle three completely different message streams over a single connection:</p><ul><li><strong>Incoming calls</strong> — someone is calling you</li><li><strong>Call events</strong> — the call ended, was rejected, timed out</li><li><strong>WebRTC signaling</strong> — SDP offers, ICE candidates, answers</li></ul><p>With raw WebSocket, every incoming frame arrives in the same receive loop. You&#39;d have to inspect each message, parse its type, and route it yourself. You&#39;d also need to build your own subscription mechanism, your own heartbeat, and your own reconnection logic.</p><p>We’ve seen teams do this. 
It works until it doesn’t — usually at 2 AM when messages arrive out of order and nobody can reproduce it.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*fwpVmTx6BIgkuo0vnv0IVQ.png" /></figure><h3>What STOMP Actually Is</h3><p>STOMP (Simple Text Oriented Messaging Protocol) rides on top of WebSocket the same way HTTP rides on top of TCP. It adds structure to the raw pipe:</p><pre>SEND<br>destination:/topic/signal.create<br>content-type:application/json<br><br>{&quot;callId&quot;:&quot;abc-123&quot;}<br>^@</pre><p>That’s a real STOMP frame. It has a command (SEND), headers (destination, content-type), and a body. Compare this to raw WebSocket where you&#39;d send {&quot;callId&quot;:&quot;abc-123&quot;} with no metadata about where it should go.</p><p>STOMP gives you three things that raw WebSocket doesn’t:</p><p><strong>Destinations</strong> — messages have addresses. Instead of sending into the void, you send to /topic/signal.create or subscribe to /topic/public. The server knows exactly where to route each message.</p><p><strong>Subscriptions</strong> — you explicitly declare what you want to receive. The server only sends you messages for topics you subscribed to, not everything happening on the connection.</p><p><strong>Frame structure</strong> — every message has a command, headers, and body. No ambiguity about what a message means or how to parse it.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*IhAwPSkA0LOYQz_WKKzofQ.png" /></figure><h3>How Our Signaling System Uses It</h3><p>Here’s how the connection layer works. 
We wrap a STOMP client around Ktor’s WebSocket transport, injecting authentication at the connection level:</p><pre>class SignalingStompClient(private val httpClient: HttpClient) {<br><br>    suspend fun connect(url: String, token: String): SignalingSession {<br>        val transport = WebSocketDelegate(httpClient, token)<br>        <br>        val stomp = StompClient(transport) {<br>            heartBeat = HeartBeat(<br>                minSendPeriod = 150.seconds,<br>                expectedServerPeriod = 150.seconds<br>            )<br>            connectionTimeout = 30.seconds<br>        }<br>        <br>        val raw = stomp.connect(url)<br>        <br>         return SignalingSession(raw)<br>    }<br>}</pre><p>Notice the heartbeat configuration — 150 seconds. This is how STOMP keeps the connection alive. Every 150 seconds, client and server exchange heartbeat frames. If one side stops responding, the other knows the connection is dead. With raw WebSocket, you’d implement your own ping/pong loop.</p><p>Once connected, the real power shows. We subscribe to three separate topics over one connection:</p><pre>class SignalingConnection(private val client: SignalingStompClient) {<br>    fun observe(url: String, token: String): Flow&lt;String&gt; = flow {<br>          val session = client.connect(url, token)<br>          val streams = listOf(<br>              session.subscribe(&quot;/topic/public&quot;),<br>              session.subscribe(&quot;/user/queue/call-events&quot;),<br>              session.subscribe(&quot;/user/queue/webrtc&quot;),<br>          )<br>          <br>          emitAll(streams.merge())<br>    }<br>}</pre><p>Three subscriptions. Three independent message streams. One WebSocket connection. 
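On the wire, each of those subscribe calls becomes a STOMP SUBSCRIBE frame. Here is a sketch following the STOMP 1.2 specification (the id value is an arbitrary client-chosen identifier, normally generated for you by the client library):

```
SUBSCRIBE
id:sub-0
destination:/topic/public

^@
```

The server then stamps every matching MESSAGE frame with a subscription header carrying that id, which is how the client library demultiplexes the three streams without ever inspecting payloads.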
Each subscription gets only the messages it cares about — incoming calls don’t mix with ICE candidates, call events don’t arrive in the WebRTC handler.</p><p>With raw WebSocket, you’d receive all messages in one stream and write a routing switch:</p><pre>// What you&#39;d have to build without STOMP<br>for (frame in incoming) {<br>    val json = Json.parseToJsonElement((frame as Frame.Text).readText()).jsonObject<br><br>    when {<br>        &quot;callerId&quot; in json -&gt; handleIncomingCall(json)<br>        &quot;status&quot; in json   -&gt; handleCallEvent(json)<br>        &quot;sdp&quot; in json      -&gt; handleWebRtc(json)<br>        else               -&gt; log(&quot;Unknown: $json&quot;)<br>    }<br>}</pre><p>This works for three message types. It breaks down when you add ten. And debugging becomes detective work — “why did this message end up in the wrong handler?”</p><h3>The Send Side — Destination-Based Routing</h3><p>Sending messages shows the same advantage. Every action has a clear destination:</p><pre>class SignalingSession(private val stomp: StompSession) {<br><br>    suspend fun createCall(callId: String) =<br>        send(&quot;/topic/signal.create&quot;, CallAction(callId))<br><br>    suspend fun acceptCall(callId: String) =<br>        send(&quot;/topic/signal.accept&quot;, CallAction(callId))<br><br>    suspend fun sendAnswer(callId: String, sdp: String) =<br>        send(&quot;/topic/signal.answer&quot;, WebRtcAnswer(callId, sdp))<br><br>    suspend fun sendIceCandidate(callId: String, candidate: IceCandidate) =<br>        send(&quot;/topic/signal.ice&quot;, candidate.toDto(callId))<br><br>    private suspend inline fun &lt;reified T&gt; send(destination: String, body: T) {<br>        stomp.send(<br>            headers = StompSendHeaders(destination),<br>            body = FrameBody.Text(Json.encodeToString(body))<br>        )<br>    }<br>}</pre><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*9WuqbzbApxbxXFpMRJ0a5A.png" /></figure><p>The server maps these 
destinations to specific handlers. /topic/signal.accept goes to the call acceptance handler. /topic/signal.ice goes to the WebRTC relay. No ambiguity, no routing logic on the client side.</p><p>The destination is a header, not part of the payload. This means your message body stays clean — just the data, no routing metadata mixed in.</p><h3>The Parsing Layer — Clean Separation</h3><p>Because STOMP handles routing, our parsing layer only deals with content. Messages arrive already separated by topic, so the repository can focus on one thing — transforming JSON into domain objects:</p><pre>class DefaultSignalingRepository(<br>    private val connection: SignalingConnection,<br>    private val json: Json,<br>) : SignalingRepository {<br><br>    override fun observe(): Flow&lt;SignalingMessage&gt; =<br>        connection.observe(url, token).mapNotNull { raw -&gt; raw.toMessage() }<br><br>    private fun String.toMessage(): SignalingMessage? =<br>        asCallEvent() ?: asIncomingCall() ?: asWebRtcSignal()<br><br>    private fun String.asCallEvent(): SignalingMessage? =<br>        tryDecode&lt;CallEventDto&gt;()<br>            ?.takeIf { it.status == &quot;ENDED&quot; }<br>            ?.let { SignalingMessage.CallEnded(it.callId, it.reason) }<br><br>    private fun String.asIncomingCall(): SignalingMessage? =<br>        tryDecode&lt;IncomingCallDto&gt;()<br>            ?.let { SignalingMessage.IncomingCall(it.callId, it.callerName, it.type) }<br><br>    private fun String.asWebRtcSignal(): SignalingMessage? =<br>        tryDecode&lt;WebRtcMessageDto&gt;()?.let { dto -&gt;<br>            when (dto.type) {<br>                &quot;offer&quot;         -&gt; SignalingMessage.Offer(dto.sdp, dto.callId)<br>                &quot;ice-candidate&quot; -&gt; SignalingMessage.Ice(dto.candidate, dto.callId)<br>                else            -&gt; null<br>            }<br>        }<br><br>    private inline fun &lt;reified T&gt; String.tryDecode(): T? 
=<br>        runCatching { json.decodeFromString&lt;T&gt;(this) }.getOrNull()<br>}</pre><p>Notice there’s no routing logic here — no “which topic did this come from?” checks. The repository just parses content. STOMP already handled delivery to the right subscription.</p><h3>Why Not Just Use a REST API?</h3><p>Fair question. You could poll a REST endpoint for incoming calls and post WebRTC messages via HTTP. But consider the timing:</p><p>WebRTC negotiation requires exchanging SDP offers, answers, and ICE candidates in rapid succession — often within milliseconds. A polling interval of even 500ms would make call setup noticeably slow. And HTTP’s request-response model means the server can’t push an incoming call notification to you — you have to ask “any calls?” repeatedly.</p><p>WebSocket gives you the push capability. STOMP gives you the structure on top of it.</p><h3>The Bottom Line</h3><p>Raw WebSocket is a pipe. STOMP turns that pipe into a messaging system with routing, subscriptions, and structured frames. For a voice call feature juggling incoming calls, call events, and WebRTC signaling over a single connection, that structure isn’t optional — it’s what keeps the system debuggable and maintainable.</p><p>You could build all of this yourself on top of raw WebSocket. We’ve seen the code that results from that approach. 
It starts simple, grows custom routing, adds ad-hoc heartbeats, and eventually becomes the framework you should have used from the start.</p><p><strong>Reference</strong></p><ul><li><a href="https://stomp.github.io/stomp-specification-1.2.html"><strong>STOMP Protocol Specification</strong></a></li><li><a href="https://github.com/joffrey-bion/krossbow"><strong>Krossbow — Kotlin Multiplatform STOMP</strong></a></li><li><a href="https://ktor.io/docs/client-websockets.html"><strong>Ktor WebSocket Client</strong></a></li></ul><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=8343d4feeb0d" width="1" height="1" alt=""><hr><p><a href="https://proandroiddev.com/why-we-used-stomp-with-websocket-8343d4feeb0d">Why we used STOMP with WebSocket?</a> was originally published in <a href="https://proandroiddev.com">ProAndroidDev</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Biometric Auth in Compose Made Easy: The New Library You Need]]></title>
            <link>https://proandroiddev.com/biometric-auth-in-compose-made-easy-the-new-library-you-need-29814270506d?source=rss----c72404660798---4</link>
            <guid isPermaLink="false">https://medium.com/p/29814270506d</guid>
            <category><![CDATA[androiddev]]></category>
            <category><![CDATA[android-app-development]]></category>
            <category><![CDATA[biometric-authentication]]></category>
            <category><![CDATA[android]]></category>
            <category><![CDATA[jetpack-compose]]></category>
            <dc:creator><![CDATA[Nav Singh]]></dc:creator>
            <pubDate>Mon, 13 Apr 2026 03:34:50 GMT</pubDate>
            <atom:updated>2026-04-13T03:34:49.722Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*fjCNJ5kbgLAOUZb00Vzl8w.png" /><figcaption>Image generated using Perplexity</figcaption></figure><p>In this article, we will learn how to integrate biometric-compose, a new library for biometric authentication, into Android applications.</p><p>The new <strong>biometric-compose library</strong> simplifies the integration of biometrics into Compose-based applications.</p><h3>Implementation</h3><h4><strong>Dependencies</strong></h4><pre>    implementation(&quot;androidx.biometric:biometric:1.4.0-alpha06&quot;)<br>    implementation(&quot;androidx.biometric:biometric-compose:1.4.0-alpha06&quot;)</pre><h4><strong>rememberAuthenticationLauncher</strong></h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Ty1Rl1W1Rwj62J8rk6sa3w.png" /><figcaption>rememberAuthenticationLauncher</figcaption></figure><ul><li>A composable function that streamlines biometric authentication requests and callbacks directly in composables.</li><li>It returns an AuthenticationResultLauncher that we can use to initiate the authentication process.</li><li>We need to pass a <strong>callback</strong> of type AuthenticationResultCallback. 
It will be called when an <strong>AuthenticationResult</strong> is <strong>available</strong>.</li></ul><p>A <strong>successful</strong> or <strong>error</strong> result will be delivered to AuthenticationResultCallback.onAuthResult, and <strong>failures</strong> will be delivered to AuthenticationResultCallback.onAuthAttemptFailed.</p><blockquote><em>The 👆callback will be executed on the </em><strong><em>main thread</em></strong><em>.</em></blockquote><pre>fun AuthenticationResult.processAuthResult() {<br>    when (this) {<br>        is AuthenticationResult.Success -&gt;<br>            Log.d(TAG, &quot;AuthenticationResult Success, \nAuth type: $authType, \nCrypto object: $crypto&quot;)<br><br>        is AuthenticationResult.Error -&gt;<br>            Log.d(TAG, &quot;AuthenticationResult Error, \nError code: $errorCode, \nErr string: $errString&quot;)<br><br>        is AuthenticationResult.CustomFallbackSelected -&gt; {<br>            Log.d(TAG, &quot;AuthenticationResult CustomFallbackSelected ${fallback.text}&quot;)<br>        }<br>    }<br>}</pre><h3>AuthenticationResultType</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*qW6XlbW5EL2GQ7_t8CxOZw.png" /><figcaption>AuthenticationResultType</figcaption></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/968/1*56oTXUj64eXd9ALooewXBQ.jpeg" /><figcaption>Demo AuthenticationResultType</figcaption></figure><h3>AuthenticationError (Error code)</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*TAXbDjkys3VpGaoF6DB5KQ.png" /><figcaption>AuthenticationError (Error code)</figcaption></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/968/1*xHs-ajeuITiXgAAqOnl8zA.jpeg" /><figcaption>Demo AuthenticationError (Error code)</figcaption></figure><h3>🧑‍💻 Code: rememberAuthenticationLauncher</h3><pre>val launcher =<br>    rememberAuthenticationLauncher(<br>        resultCallback =<br>            object : 
AuthenticationResultCallback {<br>                override fun onAuthResult(result: AuthenticationResult) {<br>                    Log.d(TAG, &quot;onAuthResult: $result&quot;)<br>                    result.processAuthResult()<br>                }<br>                override fun onAuthAttemptFailed() {<br>                    super.onAuthAttemptFailed()<br>                    Log.d(TAG, &quot;onAuthAttemptFailed: &quot;)<br>                }<br>            }<br>    )</pre><h3><strong>🚀 Initiate the auth process</strong></h3><ul><li>Here we need to pass the <strong>AuthenticationRequest</strong> — the configuration of the request: <strong>title</strong>, <strong>fallbacks</strong>, <strong>subtitle</strong>, <strong>icon</strong>, etc.</li><li>In the following code, we are using the static <strong>function</strong> <strong>biometricRequest</strong> from the <strong>AuthenticationRequest</strong> class to create an <strong>AuthenticationRequest</strong>.</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*eFg3gKYCtiJYy3uFA-3iLA.png" /><figcaption><strong>biometricRequest fun</strong></figcaption></figure><pre>Button(<br>    onClick = {<br>        launcher.launch(<br>            biometricRequest(<br>                title = &quot;Biometric auth..&quot;,<br>                authFallbacks = arrayOf(AuthenticationRequest.Biometric.Fallback.DeviceCredential),<br>            ) {<br>                // Optionally set the other configurations. 
<br>                // setSubtitle(), setContent(), etc.<br>              }<br>        )<br>    }<br>) {<br>    Text(text = &quot;Start Authentication&quot;)<br>}<br></pre><h4><strong>👇Prompt without any customization based on the above 👆 implementation</strong></h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/558/1*on8B-dSIEa4PF4j-_xsBVA.png" /><figcaption>Default prompt</figcaption></figure><h3>Customization of the prompt</h3><h3><strong>Subtitle</strong></h3><pre>Button(<br>    onClick = {<br>        launcher.launch(<br>            biometricRequest(<br>                title = &quot;Biometric auth..&quot;,<br>                authFallbacks = arrayOf(AuthenticationRequest.Biometric.Fallback.DeviceCredential),<br>            ) {               <br>                setSubtitle(&quot;Subtitle goes here...&quot;)<br>              }<br>        )<br>    }<br>) {<br>    Text(text = &quot;Start Authentication&quot;)<br>}</pre><h3>Logo</h3><ul><li>This requires <strong>SET_BIOMETRIC_DIALOG_ADVANCED</strong> permission.</li><li>This permission is <strong>granted</strong> <strong>only</strong> to<strong> system apps</strong>, so it would be better to let the library show the <strong>application’s logo</strong> in the prompt.</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Zi2wgVvzjswyIQ_--3x2Ng.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*wJdp6B0JkIZGTKfrKXogwA.png" /></figure><pre>Button(<br>    onClick = {<br>        launcher.launch(<br>            biometricRequest(<br>                title = &quot;Biometric auth..&quot;,<br>                authFallbacks = arrayOf(AuthenticationRequest.Biometric.Fallback.DeviceCredential),<br>            ) {               <br>                setLogoRes(R.drawable.food_bank_24px)<br>                 //....<br>              }<br>        )<br>    }<br>) {<br>    Text(text = &quot;Start 
Authentication&quot;)<br>}</pre><h3>Description</h3><ul><li><strong>setContent(content: BodyContent?) : </strong>A single parameter of type <strong>BodyContent</strong> will be used to display the <strong>description</strong>.</li><li><strong>BodyContent:</strong> It&#39;s an abstract class, and we have the following<strong> 3 </strong>implementations that we can use <br><strong>PlainText</strong> , <strong>VerticalList</strong>, &amp; <strong>ContentViewWithMoreOptionsButton</strong></li></ul><blockquote><strong>ContentViewWithMoreOptionsButton</strong> : Requires <strong>SET_BIOMETRIC_DIALOG_ADVANCED</strong></blockquote><h4><strong>BodyContent class</strong></h4><pre>public abstract class BodyContent private constructor() {<br>    <br>    public class PlainText public constructor(public val description: String) : BodyContent()<br><br>    public class VerticalList<br>    @JvmOverloads<br>    public constructor(<br>        public val description: String? = null,<br>        public val items: List&lt;PromptContentItem&gt; = listOf(),<br>    ) : BodyContent()<br><br>   <br>    public class ContentViewWithMoreOptionsButton<br>    @RequiresPermission(SET_BIOMETRIC_DIALOG_ADVANCED)<br>    @JvmOverloads<br>    public constructor(public val description: String? 
= null) : BodyContent()<br>}</pre><pre>Button(<br>    onClick = {<br>        launcher.launch(<br>            biometricRequest(<br>                title = &quot;Biometric auth..&quot;,<br>                authFallbacks = arrayOf(AuthenticationRequest.Biometric.Fallback.DeviceCredential),<br>            ) {<br>                setContent(AuthenticationRequest.BodyContent.PlainText(&quot;Description&quot;))<br>                // Note: calling setContent again replaces the earlier content;<br>                // only the VerticalList below is used<br>                setContent(<br>                    AuthenticationRequest.BodyContent.VerticalList(<br>                        &quot;Description&quot;,<br>                        listOf(<br>                            PromptContentItemPlainText(&quot;Item1...&quot;),<br>                            PromptContentItemBulletedText(&quot;Bulleted item...&quot;)<br>                        )<br>                    )<br>                )<br>            }<br>        )<br>    }<br>) {<br>    Text(text = &quot;Start Authentication&quot;)<br>}</pre><h4><strong>👇Prompt with custom subtitle and description based on the above implementation </strong>👆</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*HmfeTUzId7w_jOkUf1QWhg.png" /><figcaption>Prompt Screenshot</figcaption></figure><h3>Auth fallbacks</h3><ul><li>If we don’t provide <strong>any auth fallbacks</strong> when creating the <strong>AuthenticationRequest</strong>, the <strong>default cancel button</strong> will be shown in the prompt.</li></ul><pre>Button(<br>    onClick = {<br>        launcher.launch(<br>            biometricRequest(<br>                title = &quot;Biometric auth..&quot;<br>            ) {<br>                // Configuration goes here...<br>            }<br>        )<br>    }<br>) {<br>    Text(text = &quot;Start Authentication&quot;)<br>}</pre><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*QDz2ZZri2oIfH_818VmUsQ.png" /><figcaption>Prompt with default cancel button</figcaption></figure><h3>References</h3><p><a 
href="https://developer.android.com/jetpack/androidx/releases/biometric#1.4.0-alpha05">Biometric | Jetpack | Android Developers</a></p><h3>Stay in touch</h3><p><a href="https://www.linkedin.com/in/navczydev/">https://www.linkedin.com/in/navczydev/</a></p><ul><li><a href="https://x.com/navczydev">navczydev (@navczydev) on X</a></li><li><a href="https://github.com/navczydev">navczydev - Overview</a></li><li><a href="https://bsky.app/profile/navczydev.bsky.social">navczydev.bsky.social</a></li></ul><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=29814270506d" width="1" height="1" alt=""><hr><p><a href="https://proandroiddev.com/biometric-auth-in-compose-made-easy-the-new-library-you-need-29814270506d">Biometric Auth in Compose Made Easy: The New Library You Need</a> was originally published in <a href="https://proandroiddev.com">ProAndroidDev</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[How to Implement Shaders in Compose Multiplatform (Android, iOS, Desktop & Web)]]></title>
            <link>https://proandroiddev.com/how-to-implement-shaders-in-compose-multiplatform-android-ios-desktop-web-c86a36dd9666?source=rss----c72404660798---4</link>
            <guid isPermaLink="false">https://medium.com/p/c86a36dd9666</guid>
            <category><![CDATA[compose-multiplatform]]></category>
            <category><![CDATA[shaders]]></category>
            <category><![CDATA[jetpack-compose]]></category>
            <category><![CDATA[kotlin-multiplatform]]></category>
            <category><![CDATA[kotlin]]></category>
            <dc:creator><![CDATA[Meet]]></dc:creator>
            <pubDate>Mon, 13 Apr 2026 03:27:20 GMT</pubDate>
            <atom:updated>2026-04-13T03:27:18.662Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*zt6f9pwKzMesRCAJcknqlg.png" /></figure><p>Shaders are one of the most powerful ways to create modern, high-performance UI effects — ranging from animated backgrounds to complex visual distortions. However, implementing shaders across multiple platforms has traditionally been difficult due to differences in graphics APIs and shader languages.</p><p>In this article, you’ll learn how to build a clean and reusable shader abstraction using Compose Multiplatform that works seamlessly across <strong>Android, iOS, Desktop (Windows/macOS/Linux), and Web (WASM/JS)</strong> from a shared <strong>commonMain</strong> codebase.</p><p>We will use <strong>AGSL (Android Graphics Shading Language)</strong> on Android and <strong>SkSL (Skia Shading Language)</strong> on all other platforms, while writing shader code in a unified way so it runs everywhere with minimal changes.</p><p>By the end of this guide, you’ll have:</p><ul><li>A platform-agnostic shader architecture using <strong>expect/actual</strong></li><li>A consistent way to pass uniforms like time, resolution, and colors</li><li>Support for both background rendering (<strong>drawBehind</strong>) and post-processing effects (<strong>graphicsLayer</strong>)</li><li>Safe fallbacks for unsupported Android versions (API &lt; 33)</li></ul><p>This approach allows you to write shader-driven UI once and run it across all Compose Multiplatform targets efficiently and cleanly.</p><blockquote>Step 1: Project Setup</blockquote><p>If you haven’t already created a Compose Multiplatform project, head over to the <a href="https://kmp.jetbrains.com/">Kotlin Multiplatform Wizard website</a>.</p><ul><li><strong>Select</strong> the platforms: <strong>Android</strong>, <strong>iOS</strong>, <strong>Desktop and Web.</strong></li><li>Make sure that the <strong>Share UI</strong> option is selected for both <strong>iOS</strong> and 
<strong>Web</strong>.</li></ul><p>(There will be a platform configuration step later — whatever you select here, some related code will be generated there. I’ll explain that in the next steps.)</p><ul><li><strong>Project Name:</strong> You can set this to <strong>Shader-Animation-CMP</strong> (or any name you like)</li><li><strong>Project ID:</strong> You can use <strong>com.meet.shader.animation.cmp </strong>(or customize as needed)</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/690/1*pG3-OhBgMspSLG_pkcMMBQ.png" /></figure><p>After configuring your options, <strong>download the generated project template</strong>.</p><p>Once downloaded, open the project in <strong>Android Studio</strong> or <strong>IntelliJ IDEA</strong>.</p><blockquote>Step 2: Configure <strong>build.gradle.kts</strong> (composeApp module)</blockquote><p>Open the <strong>build.gradle.kts</strong> file inside the <strong>composeApp</strong> module.</p><p>Add the following configuration:</p><pre>// build.gradle.kts (composeApp module)<br>import org.jetbrains.kotlin.gradle.ExperimentalKotlinGradlePluginApi<br><br>plugins {<br>    ....<br>}<br><br>kotlin {<br>    @OptIn(ExperimentalKotlinGradlePluginApi::class)<br>    applyDefaultHierarchyTemplate {<br>        common {<br>            group(&quot;skikoCommon&quot;) {<br>                withJvm() // Enable this if Desktop platform is selected, otherwise comment it<br>                withApple() // Enable this if iOS platform is selected, otherwise comment it<br>                // Enable these if Web platform is selected, otherwise comment them<br>                withJs()<br>                withWasmJs()<br>            }<br>        }<br>    }<br>    ....<br>}</pre><p>This setup ensures that shared code can be properly organized across different platforms based on your selection.</p><p>Why <strong>skikoCommon</strong>?<br>iOS, Desktop, and all Web targets in Compose Multiplatform use Skia as their 2D rendering engine. 
By grouping them under <strong>skikoCommon</strong>, code in <strong>skikoCommonMain</strong> compiles for all four targets. Android has its own <strong>androidMain</strong> because it uses Android’s proprietary <strong>RuntimeShader</strong> API (AGSL), which is unavailable on other platforms.<br>This gives us exactly two actual implementation sites for the entire shader layer:</p><ul><li><strong>androidMain</strong> — Android AGSL</li><li><strong>skikoCommonMain</strong> — everything else (Skia/SkSL)</li></ul><p>Reference:</p><p><a href="https://kotlinlang.org/docs/multiplatform/multiplatform-hierarchy.html#see-the-full-hierarchy-template">Hierarchical project structure | Kotlin Multiplatform</a></p><blockquote>Step 3: Create <strong>expect_shader</strong> Package and Files</blockquote><p>Inside <strong>commonMain/kotlin/&lt;your/package&gt;/</strong>, create a sub-package named <strong>expect_shader</strong>. Then create two empty Kotlin files inside it:</p><ul><li><strong>ShaderProvider.kt</strong></li><li><strong>ShaderUtils.kt</strong></li></ul><p>These files live in <strong>commonMain</strong> and are visible to all platforms. 
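The expect/actual mechanism these files rely on can be illustrated in miniature. This is a generic sketch (platformName is a made-up example, not part of the shader API), and the three snippets live in separate source-set files, so they do not compile as a single file:

```kotlin
// commonMain — the contract, visible to every target
expect fun platformName(): String

// androidMain — Android's implementation (a separate file in a real project)
actual fun platformName(): String = "Android"

// skikoCommonMain — shared by iOS, Desktop, and Web (a separate file)
actual fun platformName(): String = "Skiko"
```

The compiler enforces that every target provides exactly one actual for each expect declaration.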
The <strong>actual</strong> implementations will be placed in <strong>androidMain</strong> and <strong>skikoCommonMain</strong>.</p><pre>composeApp/src/<br>├── commonMain/.../expect_shader/<br>│   ├── ShaderProvider.kt           ← interface (all platforms)<br>│   └── ShaderUtils.kt              ← expect declarations + composable helpers<br>│<br>├── androidMain/.../expect_shader/<br>│   ├── ShaderProviderImpl.kt       ← Android 13+ (RuntimeShader)<br>│   └── ShaderUtils.android.kt      ← Android actual implementations<br>│<br>└── skikoCommonMain/.../expect_shader/<br>    ├── ShaderProviderImpl.kt       ← iOS / Desktop / Web (Skia)<br>    └── ShaderUtils.skikoCommon.kt  ← iOS / Desktop / Web actual implementations</pre><blockquote>Step 4: <strong>ShaderProvider.kt</strong> — The Uniform Interface</blockquote><p>Open <strong>ShaderProvider.kt</strong> and add the following interface. It is shared across all platforms and defines how you pass data (time, resolution, colors, etc.) from Kotlin into the shader.</p><pre>// composeApp/src/commonMain/kotlin/&lt;your/package&gt;/expect_shader/ShaderProvider.kt<br><br>package com.meet.shader.animation.cmp.expect_shader<br><br>import androidx.compose.ui.graphics.Color<br><br>interface ShaderProvider {<br>    fun uniformInt(name: String, value: Int)<br>    fun uniformInt(name: String, value1: Int, value2: Int)<br>    fun uniformInt(name: String, value1: Int, value2: Int, value3: Int)<br>    fun uniformInt(name: String, value1: Int, value2: Int, value3: Int, value4: Int)<br><br>    fun uniformFloat(name: String, value: Float)<br>    fun uniformFloat(name: String, value1: Float, value2: Float)<br>    fun uniformFloat(name: String, value1: Float, value2: Float, value3: Float)<br>    fun uniformFloat(name: String, value1: Float, value2: Float, value3: Float, value4: Float)<br>    fun uniformFloat(name: String, values: List&lt;Float&gt;)<br><br>    fun uniformColor(name: String, r: Float, g: Float, b: Float, a: Float)<br>    fun 
uniformColor(name: String, color: Color)<br><br>    fun update(block: ShaderProvider.() -&gt; Unit) { this.block() }<br>}</pre><p>Each method maps directly to a GLSL uniform declaration in your shader string:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/560/1*fLCek6FRzfXgaF2mgq7alA.png" /></figure><blockquote>Always match the Kotlin method call with the GLSL <strong>uniform</strong> type declared in your shader. Mismatched types can lead to silent failures or runtime crashes.</blockquote><blockquote>Step 5: <strong>ShaderUtils.kt</strong> — Core <strong>expect</strong> Functions</blockquote><p>Open <strong>ShaderUtils.kt</strong> and add the following <strong>expect</strong> declarations. These are platform-neutral contracts that must be implemented on each platform.</p><pre>// composeApp/src/commonMain/kotlin/&lt;your/package&gt;/expect_shader/ShaderUtils.kt<br><br>package com.meet.shader.animation.cmp.expect_shader<br><br>import androidx.compose.ui.graphics.RenderEffect<br>import androidx.compose.ui.graphics.Shader<br><br>// Returns false on Android &lt; 13, true everywhere else.<br>// Always guard shader logic with this check.<br>expect fun isShaderAvailable(): Boolean<br><br>// The platform-specific shader object.<br>//   Android  → android.graphics.RuntimeShader<br>//   Skiko    → org.jetbrains.skia.RuntimeShaderBuilder<br>@Suppress(&quot;EXPECT_ACTUAL_CLASSIFIERS_ARE_IN_BETA_WARNING&quot;)<br>expect class AppRuntimeShader<br><br>// Compiles an AGSL/SkSL string into a shader object.<br>// This is a heavy operation — always call inside remember { }.<br>expect fun createAppRuntimeShader(shaderCode: String): AppRuntimeShader<br><br>// Converts a shader into a RenderEffect for use with graphicsLayer { }.<br>//<br>// Use this only when your shader declares: uniform shader inputShader;<br>// The shaderName argument must exactly match that uniform&#39;s name.<br>//<br>// The shader receives the composable&#39;s own rendered output as 
inputShader,<br>// allowing post-processing (distortion, blur, color grading, etc.).<br>expect fun createAppRuntimeShaderRenderEffect(<br>    appRuntimeShader: AppRuntimeShader,<br>    shaderName: String<br>): RenderEffect<br><br>// Returns a ShaderProvider for setting uniform values<br>// (time, resolution, touch position, colors, etc.)<br>expect fun createShaderProvider(appRuntimeShader: AppRuntimeShader): ShaderProvider<br><br>// Converts AppRuntimeShader → Shader for use with ShaderBrush and drawBehind { }.<br>// Use this for background shaders and bounded inner composables.<br>expect fun createShader(appRuntimeShader: AppRuntimeShader): Shader</pre><p>Two rendering paths:</p><pre>Background shader  (drawBehind)<br>────────────────────────────────────────────────────────────<br>createAppRuntimeShader(code)<br>  └─ createShaderProvider(shader)         ← set uniforms<br>  └─ createShader(shader)                 ← convert to Shader<br>  └─ ShaderBrush(createShader(shader))<br>  └─ drawRect(brush) inside drawBehind { }<br><br><br>Filter / post-processing  (graphicsLayer)<br>────────────────────────────────────────────────────────────<br>createAppRuntimeShader(code)              ← must declare uniform shader inputShader;<br>  └─ createShaderProvider(shader)         ← set uniforms<br>  └─ createAppRuntimeShaderRenderEffect(shader, &quot;inputShader&quot;)<br>  └─ graphicsLayer { renderEffect = ... 
}</pre><blockquote>Step 6: Platform <strong>actual</strong> Implementations</blockquote><h4>6a — Android (androidMain)</h4><p>Create <strong>ShaderUtils.android.kt</strong> inside <strong>androidMain/.../expect_shader/</strong>:</p><pre>// composeApp/src/androidMain/kotlin/&lt;your/package&gt;/expect_shader/ShaderUtils.android.kt<br><br>package com.meet.shader.animation.cmp.expect_shader<br><br>import android.graphics.RuntimeShader<br>import android.os.Build<br>import androidx.annotation.RequiresApi<br>import androidx.compose.ui.graphics.RenderEffect<br>import androidx.compose.ui.graphics.Shader<br>import androidx.compose.ui.graphics.asComposeRenderEffect<br><br>actual fun isShaderAvailable(): Boolean =<br>    Build.VERSION.SDK_INT &gt;= Build.VERSION_CODES.TIRAMISU  // API 33<br><br>actual typealias AppRuntimeShader = RuntimeShader<br><br>@RequiresApi(Build.VERSION_CODES.TIRAMISU)<br>actual fun createAppRuntimeShader(shaderCode: String): AppRuntimeShader =<br>    RuntimeShader(shaderCode)<br><br>@RequiresApi(Build.VERSION_CODES.TIRAMISU)<br>actual fun createAppRuntimeShaderRenderEffect(<br>    appRuntimeShader: AppRuntimeShader,<br>    shaderName: String<br>): RenderEffect =<br>    android.graphics.RenderEffect<br>        .createRuntimeShaderEffect(appRuntimeShader, shaderName)<br>        .asComposeRenderEffect()<br><br>@RequiresApi(Build.VERSION_CODES.TIRAMISU)<br>actual fun createShaderProvider(appRuntimeShader: AppRuntimeShader): ShaderProvider =<br>    ShaderProviderImpl(appRuntimeShader)<br><br>@RequiresApi(Build.VERSION_CODES.TIRAMISU)<br>actual fun createShader(appRuntimeShader: AppRuntimeShader): Shader =<br>    appRuntimeShader  // RuntimeShader already extends android.graphics.Shader</pre><p><strong>Key points:</strong></p><ul><li><strong>AppRuntimeShader</strong> is a <strong>typealias</strong> for <strong>android.graphics.RuntimeShader</strong>, which already extends <strong>android.graphics.Shader</strong>, so <strong>createShader</strong> simply 
returns it.</li><li>Every <strong>actual</strong> function requires <strong>@RequiresApi(Build.VERSION_CODES.TIRAMISU)</strong>. Android Lint enforces this and will warn if missing at call sites.</li><li><strong>isShaderAvailable()</strong> performs a runtime API check, allowing <strong>minSdk = 24</strong> while enabling shaders on Android 13+ devices.</li></ul><p>Create <strong>ShaderProviderImpl.kt</strong> inside <strong>androidMain/.../expect_shader/</strong>:</p><pre>// composeApp/src/androidMain/kotlin/&lt;your/package&gt;/expect_shader/ShaderProviderImpl.kt<br><br>package com.meet.shader.animation.cmp.expect_shader<br><br>import android.graphics.RuntimeShader<br>import android.os.Build<br>import androidx.annotation.RequiresApi<br>import androidx.compose.ui.graphics.Color<br><br>@RequiresApi(Build.VERSION_CODES.TIRAMISU)<br>class ShaderProviderImpl(<br>    private val runtimeShader: RuntimeShader,<br>) : ShaderProvider {<br><br>    override fun uniformInt(name: String, value: Int) =<br>        runtimeShader.setIntUniform(name, value)<br><br>    override fun uniformInt(name: String, value1: Int, value2: Int) =<br>        runtimeShader.setIntUniform(name, value1, value2)<br><br>    override fun uniformInt(name: String, value1: Int, value2: Int, value3: Int) =<br>        runtimeShader.setIntUniform(name, value1, value2, value3)<br><br>    override fun uniformInt(name: String, value1: Int, value2: Int, value3: Int, value4: Int) =<br>        runtimeShader.setIntUniform(name, value1, value2, value3, value4)<br><br>    override fun uniformFloat(name: String, value: Float) =<br>        runtimeShader.setFloatUniform(name, value)<br><br>    override fun uniformFloat(name: String, value1: Float, value2: Float) =<br>        runtimeShader.setFloatUniform(name, value1, value2)<br><br>    override fun uniformFloat(name: String, value1: Float, value2: Float, value3: Float) =<br>        runtimeShader.setFloatUniform(name, value1, value2, value3)<br><br>    override fun 
uniformFloat(name: String, value1: Float, value2: Float, value3: Float, value4: Float) =<br>        runtimeShader.setFloatUniform(name, value1, value2, value3, value4)<br><br>    override fun uniformFloat(name: String, values: List&lt;Float&gt;) =<br>        runtimeShader.setFloatUniform(name, values.toFloatArray())<br><br>    override fun uniformColor(name: String, r: Float, g: Float, b: Float, a: Float) =<br>        runtimeShader.setFloatUniform(name, r, g, b, a)<br><br>    override fun uniformColor(name: String, color: Color) =<br>        uniformColor(name, color.red, color.green, color.blue, color.alpha)<br>}</pre><h4>6b — iOS / Desktop / Web (skikoCommonMain)</h4><p>Create <strong>ShaderUtils.skikoCommon.kt</strong> inside <strong>skikoCommonMain/.../expect_shader/</strong>:</p><pre>// composeApp/src/skikoCommonMain/kotlin/&lt;your/package&gt;/expect_shader/ShaderUtils.skikoCommon.kt<br><br>package com.meet.shader.animation.cmp.expect_shader<br><br>import androidx.compose.ui.graphics.RenderEffect<br>import androidx.compose.ui.graphics.Shader<br>import androidx.compose.ui.graphics.asComposeRenderEffect<br>import org.jetbrains.skia.ImageFilter<br>import org.jetbrains.skia.RuntimeEffect<br>import org.jetbrains.skia.RuntimeShaderBuilder<br><br>actual fun isShaderAvailable(): Boolean = true  // Skia always supports shaders<br><br>actual typealias AppRuntimeShader = RuntimeShaderBuilder<br><br>actual fun createAppRuntimeShader(shaderCode: String): AppRuntimeShader {<br>    val effect = RuntimeEffect.makeForShader(shaderCode)<br>    return RuntimeShaderBuilder(effect)<br>}<br><br>actual fun createAppRuntimeShaderRenderEffect(<br>    appRuntimeShader: AppRuntimeShader,<br>    shaderName: String<br>): RenderEffect =<br>    ImageFilter.makeRuntimeShader(appRuntimeShader, shaderName, null)<br>        .asComposeRenderEffect()<br><br>actual fun createShaderProvider(appRuntimeShader: AppRuntimeShader): ShaderProvider =<br>    
ShaderProviderImpl(appRuntimeShader)<br><br>actual fun createShader(appRuntimeShader: AppRuntimeShader): Shader =<br>    appRuntimeShader.makeShader()</pre><p><strong>Key points:</strong></p><ul><li><strong>AppRuntimeShader</strong> is a <strong>typealias</strong> for <strong>org.jetbrains.skia.RuntimeShaderBuilder</strong>.</li><li><strong>RuntimeEffect.makeForShader(code)</strong> compiles the SkSL shader string.</li><li><strong>RuntimeShaderBuilder</strong> wraps the compiled effect and allows setting uniforms via <strong>.uniform(...)</strong>.</li><li><strong>makeShader()</strong> returns a <strong>org.jetbrains.skia.Shader</strong> used with <strong>ShaderBrush</strong>.</li><li><strong>ImageFilter.makeRuntimeShader</strong> wraps it as a Skia image filter and converts it to a Compose <strong>RenderEffect</strong>.</li></ul><p>Create <strong>ShaderProviderImpl.kt</strong> inside <strong>skikoCommonMain/.../expect_shader/</strong>:</p><pre>// composeApp/src/skikoCommonMain/kotlin/&lt;your/package&gt;/expect_shader/ShaderProviderImpl.kt<br><br>package com.meet.shader.animation.cmp.expect_shader<br><br>import androidx.compose.ui.graphics.Color<br>import org.jetbrains.skia.RuntimeShaderBuilder<br><br>class ShaderProviderImpl(<br>    private val runtimeShaderBuilder: RuntimeShaderBuilder,<br>) : ShaderProvider {<br><br>    override fun uniformInt(name: String, value: Int) =<br>        runtimeShaderBuilder.uniform(name, value)<br><br>    override fun uniformInt(name: String, value1: Int, value2: Int) =<br>        runtimeShaderBuilder.uniform(name, value1, value2)<br><br>    override fun uniformInt(name: String, value1: Int, value2: Int, value3: Int) =<br>        runtimeShaderBuilder.uniform(name, value1, value2, value3)<br><br>    override fun uniformInt(name: String, value1: Int, value2: Int, value3: Int, value4: Int) =<br>        runtimeShaderBuilder.uniform(name, value1, value2, value3, value4)<br><br>    override fun uniformFloat(name: String, value: Float) 
=<br>        runtimeShaderBuilder.uniform(name, value)<br><br>    override fun uniformFloat(name: String, value1: Float, value2: Float) =<br>        runtimeShaderBuilder.uniform(name, value1, value2)<br><br>    override fun uniformFloat(name: String, value1: Float, value2: Float, value3: Float) =<br>        runtimeShaderBuilder.uniform(name, value1, value2, value3)<br><br>    override fun uniformFloat(name: String, value1: Float, value2: Float, value3: Float, value4: Float) =<br>        runtimeShaderBuilder.uniform(name, value1, value2, value3, value4)<br><br>    override fun uniformFloat(name: String, values: List&lt;Float&gt;) =<br>        runtimeShaderBuilder.uniform(name, values.toFloatArray())<br><br>    override fun uniformColor(name: String, r: Float, g: Float, b: Float, a: Float) =<br>        runtimeShaderBuilder.uniform(name, r, g, b, a)<br><br>    override fun uniformColor(name: String, color: Color) =<br>        uniformColor(name, color.red, color.green, color.blue, color.alpha)<br>}</pre><p>AGSL vs SkSL — Key Differences</p><p>Both are very close to GLSL ES 1.0. In practice, shaders written for AGSL will run on Skiko platforms without changes, except for one limitation.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/526/1*pXgNW_dmQaUpjvvHXP9hmQ.png" /></figure><blockquote>Write shaders targeting AGSL for best compatibility across all platforms. Avoid dynamic array indexing to ensure they work everywhere.</blockquote><blockquote>Step 7: <strong>ShaderUtils.kt</strong> — Add Composable Helpers</blockquote><p>Add these three composable functions to the bottom of <strong>ShaderUtils.kt</strong>. 
These are part of <strong>commonMain</strong>, so no <strong>expect/</strong><strong>actual</strong> is required.</p><pre>// composeApp/src/commonMain/kotlin/&lt;your/package&gt;/expect_shader/ShaderUtils.kt<br><br>package com.meet.shader.animation.cmp.expect_shader<br><br>import androidx.compose.animation.core.withInfiniteAnimationFrameMillis<br>import androidx.compose.runtime.Composable<br>import androidx.compose.runtime.produceState<br>import androidx.compose.runtime.remember<br>import kotlin.time.Clock<br><br>// Use when shader availability is already confirmed.<br>// Will crash on Android &lt; 13 if called without a guard.<br>@Composable<br>fun rememberShaderInstance(<br>    shaderCode: String,<br>): Pair&lt;AppRuntimeShader, ShaderProvider&gt; {<br>    val shader = remember { createAppRuntimeShader(shaderCode) }<br>    val provider = remember(shader) { createShaderProvider(shader) }<br>    return Pair(shader, provider)<br>}<br><br>// Safe default. Returns a null pair on Android &lt; 13.<br>// Always check: if (isShaderAvailable() &amp;&amp; shader != null &amp;&amp; provider != null)<br>@Composable<br>fun rememberShaderInstanceOrNull(<br>    shaderCode: String,<br>): Pair&lt;AppRuntimeShader?, ShaderProvider?&gt; {<br>    val shader = remember {<br>        if (isShaderAvailable()) createAppRuntimeShader(shaderCode) else null<br>    }<br>    val provider = remember(shader) {<br>        if (shader != null) createShaderProvider(shader) else null<br>    }<br>    return Pair(shader, provider)<br>}<br><br>// Returns State&lt;Float&gt; — elapsed seconds since first composition.<br>// Read .value inside drawBehind { } or graphicsLayer { } for deferred reads.<br>@Composable<br>fun rememberShaderTime() = produceState(0f) {<br>    val start = Clock.System.now().toEpochMilliseconds()<br>    while (true) {<br>        withInfiniteAnimationFrameMillis {<br>            val now = Clock.System.now().toEpochMilliseconds()<br>            value = (now - start) / 1000f<br>        
}<br>    }<br>}</pre><p><strong>rememberShaderInstance</strong><br> Returns a non-null <strong>Pair&lt;AppRuntimeShader, ShaderProvider&gt;</strong>. Use only when <strong>isShaderAvailable()</strong> is already confirmed — otherwise it will crash on Android &lt; 13.</p><p><strong>rememberShaderInstanceOrNull</strong><br> Safe default. Returns <strong>null</strong> on unsupported platforms. Always check before using the shader.</p><p><strong>rememberShaderTime</strong><br> Returns <strong>State&lt;Float&gt;</strong> representing elapsed time in seconds. Using <strong>.value</strong> inside <strong>drawBehind { }</strong> or <strong>graphicsLayer { }</strong> avoids full recomposition and improves performance.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/588/1*Q3S_brjGJcxCy7zoGvgwIw.png" /></figure><blockquote>Step 8: Usage Examples</blockquote><p><strong>Example 1 — Full-Screen Background Shader</strong></p><pre>private const val PLASMA_SHADER = &quot;&quot;&quot;<br>uniform float2 resolution;<br>uniform float time;<br><br>half4 main(float2 fragCoord) {<br>    float2 uv = fragCoord / resolution * 2.0 - 1.0;<br>    float v = sin(uv.x * 4.0 + time)<br>            + sin(uv.y * 4.0 + time)<br>            + sin((uv.x + uv.y) * 3.0 + time * 1.3);<br>    float r = 0.5 + 0.5 * sin(v * 3.14159);<br>    float g = 0.5 + 0.5 * sin(v * 3.14159 + 2.09);<br>    float b = 0.5 + 0.5 * sin(v * 3.14159 + 4.19);<br>    return half4(r, g, b, 1.0);<br>}<br>&quot;&quot;&quot;<br><br>@Composable<br>fun PlasmaBackground(modifier: Modifier = Modifier) {<br>    val time by rememberShaderTime()<br>    val (shader, provider) = rememberShaderInstanceOrNull(PLASMA_SHADER)<br><br>    Box(<br>        modifier = modifier<br>            .fillMaxSize()<br>            .drawBehind {<br>                if (isShaderAvailable() &amp;&amp; shader != null &amp;&amp; provider != null) {<br>                    provider.uniformFloat(&quot;resolution&quot;, size.width, size.height)<br>        
            provider.uniformFloat(&quot;time&quot;, time)<br>                    drawRect(ShaderBrush(createShader(shader)))<br>                }<br>            }<br>    )<br>}</pre><figure><img alt="" src="https://cdn-images-1.medium.com/max/912/1*im47cV2HCf22lB-NAP61Tw.png" /></figure><ul><li><strong>drawBehind</strong> runs in the draw phase. Reading <strong>time</strong> here defers the state read, avoiding full recomposition every frame.</li><li><strong>ShaderBrush(createShader(shader))</strong> wraps the platform-specific <strong>Shader</strong> as a <strong>Brush</strong>.</li><li><strong>isShaderAvailable()</strong> inside <strong>drawBehind</strong> is safe — it always returns <strong>true</strong> on Skiko platforms.</li></ul><p><strong>Example 2 — Bounded Inner Shader Card</strong></p><pre>@Composable<br>fun ShaderCard(modifier: Modifier = Modifier) {<br>    val time by rememberShaderTime()<br>    val (shader, provider) = rememberShaderInstanceOrNull(PLASMA_SHADER)<br><br>    Box(<br>        modifier = modifier<br>            .size(200.dp)<br>            .clipToBounds()   // optional<br>            .drawBehind {<br>                if (isShaderAvailable() &amp;&amp; shader != null &amp;&amp; provider != null) {<br>                    provider.uniformFloat(&quot;resolution&quot;, size.width, size.height)<br>                    provider.uniformFloat(&quot;time&quot;, time)<br>                    drawRect(ShaderBrush(createShader(shader)))<br>                }<br>            }<br>    )<br>}</pre><figure><img alt="" src="https://cdn-images-1.medium.com/max/912/1*C32zNxjMyH0UehPifBE5eg.png" /></figure><ul><li><strong>clipToBounds()</strong> is optional. 
In this example, the shader renders fully within the composable bounds, so clipping is not required.</li><li>Only use <strong>clipToBounds()</strong> when your shader may draw outside its bounds (e.g., blur, distortion, or offset-based effects).</li></ul><p><strong>Example 3 — Filter / Post-Processing with </strong><strong>graphicsLayer</strong></p><pre>private const val RIPPLE_SHADER = &quot;&quot;&quot;<br>uniform shader inputShader;<br>uniform float2 resolution;<br>uniform float time;<br><br>half4 main(float2 fragCoord) {<br>    float2 uv = fragCoord / resolution;<br>    float2 offset = float2(<br>        sin(uv.y * 20.0 + time * 3.0) * 0.008,<br>        cos(uv.x * 20.0 + time * 3.0) * 0.008<br>    );<br>    return inputShader.eval(fragCoord + offset * resolution);<br>}<br>&quot;&quot;&quot;<br><br>@Composable<br>fun RippleFilterScreen() {<br>    val time by rememberShaderTime()<br>    val (shader, provider) = rememberShaderInstanceOrNull(RIPPLE_SHADER)<br><br>    LazyVerticalStaggeredGrid(<br>        columns = StaggeredGridCells.Fixed(3),<br>        modifier = Modifier<br>            .fillMaxSize()<br>            .graphicsLayer {<br>                if (isShaderAvailable() &amp;&amp; shader != null &amp;&amp; provider != null) {<br>                    provider.uniformFloat(&quot;resolution&quot;, size.width, size.height)<br>                    provider.uniformFloat(&quot;time&quot;, time)<br>                    renderEffect = createAppRuntimeShaderRenderEffect(shader, &quot;inputShader&quot;)<br>                }<br>            }<br>    ) {<br>        items(50) { index -&gt;<br>            Text(&quot;Row $index&quot;, modifier = Modifier.padding(16.dp))<br>        }<br>    }<br>}</pre><figure><img alt="" src="https://cdn-images-1.medium.com/max/912/1*i-T7_kAcAH_JYvmD8jTF4A.png" /></figure><ul><li>Apply <strong>graphicsLayer</strong> directly to the composable whose content you want to filter.</li><li><strong>&quot;inputShader&quot;</strong> must exactly match 
the shader’s <strong>uniform shader inputShader;</strong>.</li><li>If <strong>isShaderAvailable()</strong> is false (Android &lt; 13), no <strong>renderEffect</strong> is applied and the UI renders normally (graceful fallback).</li></ul><blockquote>Best Practices</blockquote><p><strong>1. </strong><strong>isShaderAvailable() everywhere</strong><br> Always guard every shader code path with this check. On Skiko platforms (iOS, Desktop, Web) it always returns <strong>true</strong> at zero cost. On Android, it prevents crashes on API &lt; 33.</p><p><strong>2. </strong><strong>@RequiresApi(TIRAMISU) on every Android actual</strong><br> Android Lint enforces this. Any call site without a version check will produce a warning. The <strong>isShaderAvailable()</strong> guard at the call site satisfies Lint.</p><p><strong>3. Shader compilation is expensive</strong><br> <strong>createAppRuntimeShader(code)</strong> compiles the shader string into GPU bytecode. Never call it outside <strong>remember { }</strong>. Compile once per composable lifecycle and reuse it.</p><p><strong>4. Deferred state reads for performance</strong><br> Read <strong>time</strong> inside <strong>drawBehind { }</strong> or <strong>graphicsLayer { }</strong>, not in the composable body. This defers the state read to the draw phase. The composable does not recompose every frame—only the draw layer is invalidated. At 60 fps, this avoids 60 full recompositions per second per shader.</p><p><strong>5. </strong><strong>drawBehind vs </strong><strong>graphicsLayer — the rule</strong></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/552/1*ofrNlyzHv3e28xpKAYBRJQ.png" /></figure><p><strong>6. Keep </strong><strong>minSdk = 24</strong><br> Do not increase <strong>minSdk</strong> to 33 just for shaders. 
Use <strong>isShaderAvailable()</strong> at runtime and provide a fallback for older Android versions.</p><blockquote>Summary</blockquote><p>You now have a clean, reusable shader abstraction that:</p><ul><li>Works on Android 13+, iOS, Desktop (Windows/macOS/Linux), and Web (WASM/JS) from shared <strong>commonMain</strong> code</li><li>Uses AGSL on Android and SkSL on other platforms with the same shader code</li><li>Provides null-safe composable helpers with graceful fallback behavior</li><li>Supports both rendering paths: <strong>drawBehind</strong> (background) and <strong>graphicsLayer</strong> (post-processing)</li></ul><blockquote>Demo &amp; Resources</blockquote><p>Explore the full working implementation with <strong>20+ animated shader screens</strong> across all platforms:</p><p><strong>Web demo:</strong></p><p><a href="https://coding-meet.github.io/Shader-Animation-CMP/">Shader Animation CMP</a></p><p><strong>Demo Video:</strong><br><a href="https://github.com/user-attachments/assets/bdee06b1-c11c-41f4-bd18-36069f7d26b1">https://github.com/user-attachments/assets/bdee06b1-c11c-41f4-bd18-36069f7d26b1</a></p><p><strong>Full project:</strong></p><p><a href="https://github.com/Coding-Meet/Shader-Animation-CMP">GitHub - Coding-Meet/Shader-Animation-CMP</a></p><p>This project demonstrates real-world shader usage in Compose Multiplatform, including background effects, interactive animations, and post-processing filters — all built using a shared architecture.</p><p>If you’re interested in learning more about <strong>Kotlin Multiplatform</strong> and <strong>Compose Multiplatform</strong>, check out my playlist on YouTube Channel:<br><a href="https://youtube.com/playlist?list=PLlSuJy9SfzvEiYH59pDDNvFJjHoYLV0MM&amp;si=VR1irW3wUJchQ7iz">Kotlin Multiplatform &amp; Compose Multiplatform</a></p><p>Thank you for reading! 
🙌🙏✌ I hope you found this guide useful.</p><p>Don’t forget to clap 👏 to support me and follow for more insightful articles about Android Development, Kotlin, and KMP. If you need any help related to Android, Kotlin, and KMP, I’m always happy to assist.</p><h3>Explore More Projects</h3><p>If you’re interested in seeing full applications built with Kotlin Multiplatform and Jetpack Compose, check out these open-source projects:</p><ul><li><strong>DevAnalyzer </strong>(Supports Windows, macOS, Linux):<br>DevAnalyzer helps developers analyze, understand, and optimize their entire development setup — from project structure to SDK and IDE storage — all in one unified tool.<br>GitHub Repository: <a href="https://github.com/Coding-Meet/DevAnalyzer">DevAnalyzer</a></li><li><strong>Pokemon App — MVI Compose Multiplatform Template</strong> (Supports Android, iOS, Windows, macOS, Linux):<br>A beautiful, modern Pokemon application built with Compose Multiplatform featuring MVI architecture, type-safe navigation, and dynamic theming. Explore Pokemon, manage favorites, and enjoy a seamless experience across Android, Desktop, and iOS platforms.<br>GitHub Repository: <a href="https://github.com/Coding-Meet/CMP-MVI-Template">CMP-MVI-Template</a></li><li><strong>News Kotlin Multiplatform App</strong> (Supports Android, iOS, Windows, macOS, Linux):<br>News KMP App is a Kotlin Compose Multiplatform (KMP) project that aims to provide a consistent news reading experience across multiple platforms, including Android, iOS, Windows, macOS, and Linux. 
This project leverages Kotlin’s multiplatform capabilities to share code and logic while using Compose for UI, ensuring a seamless and native experience on each platform.<br>GitHub Repository: <a href="https://github.com/Coding-Meet/News-KMP-App">News-KMP-App</a></li><li><strong>Gemini AI Kotlin Multiplatform App</strong> (Supports Android, iOS, Windows, macOS, Linux, and Web):<br>Gemini AI KMP App is a Kotlin Compose Multiplatform project designed by Gemini AI where you can retrieve information from text and images in a conversational format. Additionally, it allows storing chats group-wise using SQLDelight and KStore, and facilitates changing the Gemini API key.<br>GitHub Repository: <a href="https://github.com/Coding-Meet/Gemini-AI-KMP-App">Gemini-AI-KMP-App</a></li></ul><h3>Follow me on</h3><p><a href="https://www.codingmeet.com/">My Portfolio Website</a> , <a href="https://youtube.com/@codingmeet26">YouTube</a> , <a href="https://github.com/Coding-Meet">GitHub</a> , <a href="https://www.instagram.com/codingmeet26/">Instagram</a> , <a href="https://www.linkedin.com/in/coding-meet">LinkedIn</a> , <a href="https://www.buymeacoffee.com/CodingMeet">Buy Me a Coffee</a> , <a href="https://twitter.com/CodingMeet">Twitter</a> , <a href="https://telegram.me/Meetb26">DM Me For Freelancing Project</a></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=c86a36dd9666" width="1" height="1" alt=""><hr><p><a href="https://proandroiddev.com/how-to-implement-shaders-in-compose-multiplatform-android-ios-desktop-web-c86a36dd9666">How to Implement Shaders in Compose Multiplatform (Android, iOS, Desktop &amp; Web)</a> was originally published in <a href="https://proandroiddev.com">ProAndroidDev</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Android 16’s Edge-to-Edge Mandate: Why Your “Simple Fix” Will Break at Scale]]></title>
            <link>https://proandroiddev.com/android-16s-edge-to-edge-mandate-why-your-simple-fix-will-break-at-scale-7c1e7bcdb22b?source=rss----c72404660798---4</link>
            <guid isPermaLink="false">https://medium.com/p/7c1e7bcdb22b</guid>
            <category><![CDATA[android]]></category>
            <category><![CDATA[androiddev]]></category>
            <category><![CDATA[mobile-app-development]]></category>
            <category><![CDATA[jetpack-compose]]></category>
            <category><![CDATA[android-development]]></category>
            <dc:creator><![CDATA[Sarveshwar Maheshwari]]></dc:creator>
            <pubDate>Sun, 05 Apr 2026 15:10:40 GMT</pubDate>
            <atom:updated>2026-04-05T15:10:38.724Z</atom:updated>
            <content:encoded><![CDATA[<h3>The Hook: The “Safe Zone” is Dead, and Documentation Won’t Save You</h3><p>For years, Android developers treated the system bars like a DMZ — a “no-man’s land” where we didn’t have to worry about our UI. We leaned on fitsSystemWindows=&quot;true&quot; like a crutch, letting the OS handle the heavy lifting.</p><p>With Android 16, Google has issued an ultimatum: Edge-to-Edge (E2E) is no longer a choice; it’s the enforced reality for apps targeting SDK 35.</p><p>If you think adding enableEdgeToEdge() to your onCreate and sprinkling a few Modifier.systemBarsPadding() calls is the end of the story, you’re in for a painful release cycle. In a massive, multi-module codebase where legacy XML fragments live alongside modern Compose screens, this mandate is an architectural breaking change, not just a UI tweak.</p><blockquote><strong>💡 The TL’s Take: Platform Mandates are Architectural Debt</strong></blockquote><blockquote>When a platform change is “mandatory,” most teams treat it as a ticket to be closed. At scale, this is a trap. If you don’t build a centralized infrastructure to handle insets now, you are effectively scattering “UI-fix debt” across every feature module. In six months, when the next foldable or “island” cutout comes along, you’ll be hunting down 200 individual padding modifiers instead of updating one internal library.</blockquote><h3>The Real-World Scenario: The “Double-Padding” Disaster</h3><p>While prepping a core module for the Android 16 rollout at scale, we hit a classic production nightmare. We enabled the E2E flag, and suddenly, our primary navigation hub — a complex hybrid of a Compose Scaffold hosting a legacy View-based Fragment — went haywire.</p><p>The result was The Z-Index Collision. In a pre-Android 16 world, these layers were isolated. 
Now, they occupy the same coordinate space:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*FBvZ2tUVj6Hg_NCwXpkeXQ.png" /></figure><p>Our TopAppBar was fine, but our primary CTA button was floating 100dp from the bottom, creating a massive, amateurish gap — the “Double-Padding” monster.</p><blockquote>🚩 <strong>My Opinionated Take</strong>: <strong>Stop Guessing, Start Measuring</strong>.</blockquote><blockquote>I see too many devs “eye-balling” padding values to make the UI “look right” on their Pixel 8. This is how you break the app for the guy using a 3-button nav on an entry-level Samsung or a landscape foldable. If you find yourself hardcoding Padding(bottom = 48.dp) to &quot;fix&quot; an E2E overlap, you&#39;ve already lost. Use the Inset API or don&#39;t touch it at all.</blockquote><h3>The Deep Dive: Architecting Inset Consumption</h3><p>The official documentation provides a “Hello World” solution. We’ve shifted to a “Manual Inset Consumption” pattern. Think of Inset Consumption as a Financial Ledger. Each level of the UI tree “spends” some of the available padding. 
If you don’t track the balance, you end up with “Double-Padding” debt.</p><p>Here is the ProductionGradeScaffold we built to automate this ledger:</p><pre>/**<br> * We use staticCompositionLocalOf because inset architecture <br> * rarely changes mid-composition.<br> */<br><br>val LocalConsumedInsets = staticCompositionLocalOf { PaddingValues(0.dp) }<br><br>@Composable<br>fun ProductionGradeScaffold(<br>    modifier: Modifier = Modifier,<br>    content: @Composable (PaddingValues) -&gt; Unit<br>) {<br>    val systemBarInsets = WindowInsets.systemBars<br>    Scaffold(<br>        modifier = modifier,<br>        contentWindowInsets = systemBarInsets <br>    ) { innerPadding -&gt;<br>        // Provide the &#39;consumed&#39; padding to the tree<br>        CompositionLocalProvider(LocalConsumedInsets provides innerPadding) {<br>            content(innerPadding)<br>        }<br>    }<br>}</pre><blockquote><strong>🛠 The Infrastructure Take: Composables Should Be “Inset-Blind”</strong></blockquote><blockquote>Your Feature Composables should not know that a Status Bar exists. They should only know about the PaddingValues passed to them. By using a CompositionLocal to track what has already been &quot;consumed,&quot; we ensure that a nested Legacy Fragment doesn&#39;t try to double-dip into the padding budget. 
Centralize the math, decentralize the rendering.</blockquote><h4>[Visual Architecture Map]</h4><pre>┌──────────────────────────────────────────┐  &lt;-- Screen Edge (y=0)<br>│ [System Status Bar] (24dp)               │  <br>├──────────────────────────────────────────┤  <br>│ [ProductionGradeScaffold]                │  &lt;-- Consumes Top/Bottom<br>│  │                                       │<br>│  ├─ [TopAppBar] (Uses innerPadding.top)  │<br>│  │                                       │<br>│  └─ [LegacyFragmentContainer]            │  &lt;-- Reads LocalConsumedInsets<br>│     └─ [ConstraintLayout] (0dp Padding)  │  &lt;-- Subtraction Logic: <br>│                                          │      (Required: 24 - Consumed: 24 = 0)<br>├──────────────────────────────────────────┤<br>│ [System Gesture Pill] (48dp)             │<br>└──────────────────────────────────────────┘  &lt;-- Screen Edge (y=max)</pre><h3>The “Gotchas”: What We Learned the Hard Way</h3><ol><li><strong>fitsSystemWindows is a Zombie:</strong> If you have legacy XML, android:fitsSystemWindows=&quot;true&quot; is now a recipe for unpredictable behavior.</li><li><strong>Horizontal Insets on Foldables:</strong> Almost no one tests for the “hinge” or “side cutout” in landscape. Use displayCutout insets explicitly.</li><li><strong>Transparent Bars, Dead Touches: </strong>E2E often makes the navigation bar transparent, but it still consumes touch events!</li></ol><blockquote><strong>⚠️ The UX Reality Check: Your “Transparent” Nav Bar is a Lie</strong></blockquote><blockquote>Designers love the look of content bleeding behind the navigation pill. But as engineers, we know the “Gesture Zone” is a touch-event black hole. If your “Buy Now” button is in that bottom 48dp, it’s a dead button. 
You must advocate for “UI Safety” even when Design wants “Clean Visuals.” E2E is about drawing in the bars, not interacting in them.</blockquote><h3>My Controversial Stance: Inset Architecture is a Platform Concern</h3><p>In the pursuit of “Clean Code,” we often map our ViewModels to “UI States” that include padding values.<strong> Stop doing this.</strong> If your ViewModel is calculating padding based on insets, you’ve introduced a logic round-trip for a UI-layer concern.</p><p>Insets should be handled by your <strong>Platform Infrastructure</strong> (Scaffolds and Modifiers) so that your Feature Developers don’t even have to think about them. Performance is a feature; architectural purity that causes jank is a failure.</p><h3>Call to Action</h3><p>Is your team prepared for the Android 16 rollout, or are you planning to just “slap a padding on it” and hope for the best? If you’ve battled the “double-padding” monster in a hybrid app, I want to hear how you killed it. Let’s argue in the comments.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=7c1e7bcdb22b" width="1" height="1" alt=""><hr><p><a href="https://proandroiddev.com/android-16s-edge-to-edge-mandate-why-your-simple-fix-will-break-at-scale-7c1e7bcdb22b">Android 16’s Edge-to-Edge Mandate: Why Your “Simple Fix” Will Break at Scale</a> was originally published in <a href="https://proandroiddev.com">ProAndroidDev</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Build Your Own Landscapist Image Plugin in Jetpack Compose]]></title>
            <link>https://proandroiddev.com/build-your-own-landscapist-image-plugin-in-jetpack-compose-660aecf26236?source=rss----c72404660798---4</link>
            <guid isPermaLink="false">https://medium.com/p/660aecf26236</guid>
            <category><![CDATA[android]]></category>
            <category><![CDATA[kotlin-multiplatform]]></category>
            <category><![CDATA[compose-multiplatform]]></category>
            <category><![CDATA[kotlin]]></category>
            <category><![CDATA[jetpack-compose]]></category>
            <dc:creator><![CDATA[Jaewoong Eum]]></dc:creator>
            <pubDate>Sat, 04 Apr 2026 17:45:39 GMT</pubDate>
            <atom:updated>2026-04-04T17:45:37.934Z</atom:updated>
<content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*gX5bZbcoV4hnr_nZ" /><figcaption>Unsplash@susan_wilkinson</figcaption></figure><p><a href="https://github.com/skydoves/landscapist">Landscapist</a> provides a composable image loading library for Jetpack Compose and Kotlin Multiplatform. Among its image composables, LandscapistImage stands out as the recommended choice: it uses Landscapist&#39;s own standalone loading engine built from scratch for Jetpack Compose and Kotlin Multiplatform, with no dependency on platform-specific loaders like Glide or Coil.</p><p>It handles fetching, caching, decoding, and display internally, and it works identically across Android, iOS, Desktop, and Web. On top of that, LandscapistImage exposes a plugin system through the ImagePlugin sealed interface, giving you five distinct hook points into the image loading lifecycle where you can inject custom behavior without modifying the loader itself.</p><p>In this article, you’ll explore the ImagePlugin architecture, examining each of the five plugin types and why they exist, how ImagePluginComponent collects and dispatches plugins through a DSL, and how built in plugins like <a href="https://skydoves.github.io/landscapist/placeholder/#placeholderplugin">PlaceholderPlugin</a>, <a href="https://skydoves.github.io/landscapist/placeholder/#shimmerplugin">ShimmerPlugin</a>, <a href="https://skydoves.github.io/landscapist/animation/#circular-reveal-animation">CircularRevealPlugin</a>, <a href="https://skydoves.github.io/landscapist/palette/">PalettePlugin</a>, and <a href="https://skydoves.github.io/landscapist/zoomable/#zoomableplugin">ZoomablePlugin</a> implement these interfaces in practice.</p><h3>Why LandscapistImage for plugins</h3><p>Before diving into the plugin system, it is worth understanding why LandscapistImage is the best foundation for plugin based image loading.</p><p>LandscapistImage uses its own standalone engine 
(landscapist-core) rather than delegating to Glide, Coil, or Fresco. This means every stage of the image loading pipeline, from network fetching through memory caching to bitmap decoding, is controlled by a single Kotlin Multiplatform implementation. The benefit for plugins is direct: when LandscapistImage transitions from loading to success, it knows the exact moment the bitmap becomes available. It passes that bitmap directly to PainterPlugin and SuccessStatePlugin without any adapter layer or platform specific conversion. The plugin receives a real ImageBitmap, not a wrapped platform object.</p><p>This also means LandscapistImage works on every Compose Multiplatform target. A ShimmerPlugin you write for Android runs identically on iOS and Desktop. There is no &quot;this plugin only works with Glide&quot; problem, because there is no Glide in the pipeline.</p><p>If you look at the LandscapistImage composable signature, you can see where plugins fit in:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/f838cdb1b2e09159daff35cadf12d4ef/href">https://medium.com/media/f838cdb1b2e09159daff35cadf12d4ef/href</a></iframe><p>The component parameter is the entry point for the plugin system. When you pass a rememberImageComponent { ... } block, every plugin you add inside that block gets dispatched at the correct lifecycle stage automatically. You can still use loading, success, and failure lambdas for one off customization, but plugins are the reusable, composable alternative.</p><h3>The fundamental problem: Extending image loading without modifying it</h3><p>Consider a common scenario: you are loading images with LandscapistImage and want to show a shimmer effect during loading, apply a circular reveal animation on success, and extract the dominant color from the loaded image. 
Without a plugin system, you would need to manage all of this manually:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/eb3b1bc8d136d6de5300f3fa4b934119/href">https://medium.com/media/eb3b1bc8d136d6de5300f3fa4b934119/href</a></iframe><p>This approach has two problems. First, each behavior is tightly coupled to the call site. If you want the same shimmer in ten different screens, you duplicate the logic ten times. If you want to add a new behavior, you touch every call site. Second, behaviors that operate at different conceptual layers (UI rendering, painter transformation, side effects) are mixed together in the same lambda, making it harder to reason about what happens when.</p><p>The plugin system solves both problems. It turns each behavior into a self contained, reusable component that declares which lifecycle stage it belongs to. You compose them together declaratively, and the dispatch mechanism handles the rest.</p><h3>Introducing ImagePlugin</h3><p>The ImagePlugin system is designed around one principle: adding or removing image loading behavior should be as simple as adding or removing a single line of code. In practice, that means attaching a plugin with the + operator and detaching it by deleting that line:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/3634a29351f3051c1f1fbd18d86f7268/href">https://medium.com/media/3634a29351f3051c1f1fbd18d86f7268/href</a></iframe><p>If the shimmer is no longer needed, you remove the +ShimmerPlugin() line. The rest of the plugins continue to work without any changes. There is no cleanup code, no conditional branches to update, and no shared state to untangle. Each plugin is self contained: it carries its own configuration, renders its own UI or performs its own side effect, and has no dependency on other plugins in the list.</p><p>This makes the plugin system useful beyond pre built options. 
When your team needs custom behavior tied to the image loading lifecycle, whether that is logging analytics when an image loads, showing a domain specific error screen on failure, or applying a brand specific visual transformation, anyone on the team can implement a custom ImagePlugin, attach it with +, and remove it later if the requirement changes. The plugin boundary keeps custom behavior isolated from the image loading pipeline, so adding or removing a plugin never risks breaking the loader itself.</p><p>The plugin system supports five types of plugins, each targeting a different moment in the image loading lifecycle. Let’s examine the interface that defines them.</p><h3>The ImagePlugin sealed interface</h3><p>The foundation of the plugin system is a sealed interface with five subtypes. Each subtype corresponds to a specific moment in the image loading lifecycle and receives parameters appropriate for that moment. If you examine the ImagePlugin interface:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/8522931add9fd6ac4b6fa35d67c77ebe/href">https://medium.com/media/8522931add9fd6ac4b6fa35d67c77ebe/href</a></iframe><p>The remaining three types follow the same pattern:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/26b1b58c9d56c47968c077d3293f9c50/href">https://medium.com/media/26b1b58c9d56c47968c077d3293f9c50/href</a></iframe><p>The reason for five separate types rather than a single onStateChanged callback is that each lifecycle stage has fundamentally different needs.</p><p><strong>PainterPlugin</strong> receives the loaded ImageBitmap and the current Painter, and returns a new Painter. This is where you transform the visual output itself. The return type is important: because it takes a Painter in and returns a Painter out, multiple painter plugins can be chained. 
A circular reveal wraps the original painter, then a blur wraps the reveal, forming a decoration chain. This works because LandscapistImage provides the decoded ImageBitmap directly from its internal cache, so painter plugins always have access to the raw bitmap data for their transformations.</p><p><strong>LoadingStatePlugin</strong> receives the Modifier, ImageOptions, and an executor callback. The executor is a composable lambda that LandscapistImage provides for rendering a low resolution thumbnail while the full image loads. Plugins like ThumbnailPlugin use this executor to request a smaller version of the same image from the loading pipeline. Other loading plugins like ShimmerPlugin ignore the executor entirely and render their own UI instead.</p><p><strong>SuccessStatePlugin</strong> receives the loaded imageBitmap along with the original imageModel. This runs after a successful load, and its primary purpose is side effects rather than UI rendering. The imageModel parameter is included because some side effects, like palette caching, need to associate their results with the original image URL.</p><p><strong>FailureStatePlugin</strong> receives the Throwable reason for the failure. A custom failure plugin could use this to display different error images based on the failure type, for example showing a network error icon for IOException and a format error icon for IllegalArgumentException.</p><p><strong>ComposablePlugin</strong> receives the entire image content as a composable lambda. Unlike the other four types, this plugin does not operate on a specific lifecycle state. Instead, it wraps the rendered image content, regardless of which state produced it. This makes it the right choice for behaviors that apply to the image as a whole, like gesture handling, overlays, or accessibility wrappers.</p><p>The key observation: the sealed interface ensures every plugin must declare which lifecycle stage it belongs to at compile time. 
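To make that compile-time contract concrete, here is a minimal sketch of a hypothetical custom plugin hooked into the success stage. The compose signature below is paraphrased from the descriptions above and may not match the exact ImagePlugin API of your Landscapist version; AnalyticsPlugin and its onLoaded callback are invented names for this illustration.

```kotlin
import androidx.compose.runtime.Composable
import androidx.compose.runtime.LaunchedEffect
import androidx.compose.ui.Modifier
import androidx.compose.ui.graphics.ImageBitmap
import com.skydoves.landscapist.ImageOptions
import com.skydoves.landscapist.plugins.ImagePlugin

// Hypothetical plugin: reports successful image loads to an analytics callback.
// The parameter list is a sketch; check ImagePlugin.SuccessStatePlugin in your
// Landscapist version for the exact compose signature.
class AnalyticsPlugin(
  private val onLoaded: (imageModel: Any?) -> Unit, // invented callback for illustration
) : ImagePlugin.SuccessStatePlugin {

  @Composable
  override fun compose(
    modifier: Modifier,
    imageModel: Any?,
    imageOptions: ImageOptions?,
    imageBitmap: ImageBitmap?,
  ): ImagePlugin = apply {
    // Side effect only, no UI: fire once per loaded model.
    LaunchedEffect(imageModel) { onLoaded(imageModel) }
  }
}
```

Attached with +AnalyticsPlugin(onLoaded = { ... }) inside rememberImageComponent, it runs only after a successful load.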
There is no ambiguity about when a plugin runs, and the compiler enforces that you implement the correct compose signature for your chosen stage.</p><h3>How plugins are collected: ImagePluginComponent</h3><p>Plugins need a container that collects them and makes them available to the image composable. The ImagePluginComponent class provides a DSL for collecting plugins into an ordered list:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/7a19edd7497b4de452959af7ecc23808/href">https://medium.com/media/7a19edd7497b4de452959af7ecc23808/href</a></iframe><p>The unaryPlus operator is what enables the +Plugin() syntax. When you write +ShimmerPlugin(), Kotlin invokes the unaryPlus extension function defined inside ImagePluginComponent, which calls add() internally and appends the plugin to the mutable list. Because this is an operator defined in the scope of the component, it only works inside the ImagePluginComponent receiver block, preventing accidental use outside of plugin configuration.</p><p>The rememberImageComponent function creates and remembers this component across recompositions:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/7681ce3eaa92a84bc8b6925e185f0efa/href">https://medium.com/media/7681ce3eaa92a84bc8b6925e185f0efa/href</a></iframe><p>The remember call ensures the plugin list is built once and cached. This matters for performance: without it, every recomposition would recreate the plugin list and all plugin instances. You use it like this:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/d3f0b71a6d8c12c7fa8a1165f68256a1/href">https://medium.com/media/d3f0b71a6d8c12c7fa8a1165f68256a1/href</a></iframe><p>Three plugins, three different lifecycle stages, all composed into a single component with a clean DSL. 
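For feed readers that do not render the embedded snippet, the DSL block described above looks roughly like the following. It is a hedged sketch (imports omitted): the plugin constructors are simplified, and the image URL is a placeholder.

```kotlin
@Composable
fun ProfileImage() {
  // Collect plugins once; rememberImageComponent caches the list across recompositions.
  val component = rememberImageComponent {
    +ShimmerPlugin()                        // LoadingStatePlugin: animated placeholder
    +CircularRevealPlugin()                 // PainterPlugin: reveal animation on success
    +PalettePlugin(paletteLoadedListener = { palette ->
      // SuccessStatePlugin: side effect with the extracted colors
    })
  }

  LandscapistImage(
    imageModel = { "https://example.com/profile.jpg" }, // placeholder URL
    component = component,
  )
}
```

Removing any one behavior is a matter of deleting its + line; the remaining plugins are unaffected.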
The order in the block determines the order of execution within each plugin type.</p><h3>Built in plugins: From simple to advanced</h3><p>Now let’s examine how the built in plugins implement these interfaces. Each example demonstrates a different pattern for how plugins interact with LandscapistImage&#39;s loading pipeline.</p><h3><a href="https://skydoves.github.io/landscapist/placeholder/#placeholderplugin">PlaceholderPlugin</a>: The simplest LoadingStatePlugin</h3><p>PlaceholderPlugin is the most straightforward implementation. It displays a static image while loading is in progress, using whatever image source you provide. If you examine the PlaceholderPlugin.Loading class:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/b9055d112c86d5148990704fa82d714f/href">https://medium.com/media/b9055d112c86d5148990704fa82d714f/href</a></iframe><p>The source parameter accepts an ImageBitmap, ImageVector, or Painter. The compose function renders that source using ImageBySource, passing through all the image options from the parent LandscapistImage. This is an important detail: the plugin inherits contentScale, alignment, and other display options automatically, so the placeholder looks consistent with the final image. The apply return pattern means the plugin returns itself, a convention used across all state plugins.</p><p>The failure variant follows the same structure but implements FailureStatePlugin instead:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/c9294c59b5940d23142e353fa567102f/href">https://medium.com/media/c9294c59b5940d23142e353fa567102f/href</a></iframe><p>Notice how the reason: Throwable? parameter is available but unused here. The built in PlaceholderPlugin.Failure shows the same static image regardless of the error type. 
A custom failure plugin could inspect this parameter to display different images: a network icon for connection errors, a broken image icon for decode failures, and so on. This is where building your own plugin adds value over the built in options.</p><h3><a href="https://skydoves.github.io/landscapist/placeholder/#shimmerplugin">ShimmerPlugin</a>: Adding animated loading state</h3><p>ShimmerPlugin is another LoadingStatePlugin, but instead of a static image, it displays an animated shimmer effect that signals to the user that content is loading:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/9e31c2736d2694d3a372fc9b1c4e8650/href">https://medium.com/media/9e31c2736d2694d3a372fc9b1c4e8650/href</a></iframe><p>The structure is identical to PlaceholderPlugin. The only difference is what gets composed. Instead of ImageBySource, it renders a ShimmerContainer with customizable base and highlight colors. This demonstrates how the plugin interface keeps each implementation focused on a single responsibility: PlaceholderPlugin renders a static image, ShimmerPlugin renders an animation, but both hook into the same loading state lifecycle through the same compose signature.</p><p>Also notice that ShimmerPlugin ignores the executor parameter entirely. The executor is there for plugins that want to render a low resolution thumbnail while loading (like ThumbnailPlugin), but ShimmerPlugin has no use for it. This is a benefit of the interface design: plugins receive all the context they might need, but they are free to use only what is relevant to their behavior.</p><h3><a href="https://skydoves.github.io/landscapist/animation/#circular-reveal-animation">CircularRevealPlugin</a>: Transforming the painter</h3><p>CircularRevealPlugin demonstrates the PainterPlugin type, which operates on a fundamentally different level than state plugins. 
Instead of composing UI at a lifecycle stage, it transforms the Painter that renders the image:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/f533507b7572b34ab5fc880d6fee7946/href">https://medium.com/media/f533507b7572b34ab5fc880d6fee7946/href</a></iframe><p>The compose function receives the current painter and wraps it with a circular reveal animation painter using rememberCircularRevealPainter. The original painter is passed through as the base, so the animation decorates the existing rendering behavior rather than replacing it. When LandscapistImage first transitions to the success state, the circular reveal painter starts its animation, gradually revealing the image from the center outward.</p><p>The BlurTransformationPlugin follows the same pattern:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/d552997e5b403d3ef90b92215ce6e69c/href">https://medium.com/media/d552997e5b403d3ef90b92215ce6e69c/href</a></iframe><p>Both take a painter in and return a new painter out. Because PainterPlugin forms a chain (as shown in the composePainterPlugins dispatch function earlier), you can stack multiple painter transformations. If you add both CircularRevealPlugin and BlurTransformationPlugin, the blur is applied on top of the circular reveal, which wraps the original painter. Reversing the order would produce a different visual result: the reveal would animate over an already blurred image.</p><h3><a href="https://skydoves.github.io/landscapist/palette/">PalettePlugin</a>: Reacting to success with side effects</h3><p>PalettePlugin implements SuccessStatePlugin, running only after the image loads successfully. Unlike the plugins above, PalettePlugin does not render any UI. 
Its purpose is to extract dominant colors from the loaded bitmap and expose them to the rest of your composable tree:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/d2984d381b96729c52a9ada4a3f3e21f/href">https://medium.com/media/d2984d381b96729c52a9ada4a3f3e21f/href</a></iframe><p>The compose function triggers the palette generation:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/d209b6ab6a0cbb5f94e5f93cd544ecf9/href">https://medium.com/media/d209b6ab6a0cbb5f94e5f93cd544ecf9/href</a></iframe><p>This demonstrates that SuccessStatePlugin is not limited to visual rendering. The compose function performs a side effect: generating a color palette from the loaded bitmap and notifying through the paletteLoadedListener. The LocalInspectionMode check skips palette generation during IDE preview, since there is no real bitmap available in preview mode. The useCache flag avoids redundant palette generation when the same image is recomposed without changing.</p><p>Because LandscapistImage provides the imageBitmap directly from its internal decoding pipeline, PalettePlugin does not need to re decode the image or request it from a platform specific API. This is one of the benefits of the standalone engine: the bitmap is already available as a Compose ImageBitmap, ready for analysis.</p><h3><a href="https://skydoves.github.io/landscapist/zoomable/#zoomableplugin">ZoomablePlugin</a>: Wrapping the entire content</h3><p>ZoomablePlugin implements ComposablePlugin, which gives it a different scope than the other four plugin types. 
Instead of operating on a specific loading state or transforming the painter, it wraps the entire rendered image content:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/ab2345450e88d0e0fec57e9552abaee0/href">https://medium.com/media/ab2345450e88d0e0fec57e9552abaee0/href</a></iframe><p>The content lambda contains whatever LandscapistImage has rendered for the current state, including any modifications from painter plugins. ZoomableContent wraps this content with a gesture detector that handles pinch to zoom, pan, and double tap. The user sees the image and can interact with it without any additional code at the call site.</p><p>The optional state parameter is what makes this plugin particularly useful. If you pass in a ZoomableState from outside, you can read the current zoom level, pan offset, and rotation from other parts of your composable tree. You could display a zoom percentage indicator, add a &quot;reset zoom&quot; button, or synchronize zoom across multiple images. If you do not pass a state, the plugin creates one internally, keeping the simple case simple.</p><p>This is the most flexible plugin type because it has full control over how the image content is presented within the composable hierarchy. You could use ComposablePlugin to add overlays, gesture detectors, context menus, or any other composable wrapper around the image.</p><h3>Putting it all together</h3><p>With all five plugin types available, you can compose a complete image loading experience with LandscapistImage in a few lines:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/960659a4128a010cf15af65baffa7b00/href">https://medium.com/media/960659a4128a010cf15af65baffa7b00/href</a></iframe><p>Each plugin handles one concern. The ShimmerPlugin shows an animated placeholder during loading. The CircularRevealPlugin animates the painter when the image arrives. 
The PalettePlugin extracts colors after loading. The PlaceholderPlugin.Failure displays an error image on failure. The ZoomablePlugin wraps the entire content with zoom gestures. Five plugins, five different lifecycle hooks, zero manual state management.</p><p>Because the plugin system operates at the Landscapist layer above any specific image loader, the same component configuration also works with GlideImage, CoilImage, and FrescoImage. But LandscapistImage gives you the added benefit of a standalone, multiplatform engine with no third party loader dependency. Your plugins, your loader, and your UI all run from a single Kotlin Multiplatform codebase.</p><p>In this article, you’ve explored the ImagePlugin sealed interface and its five subtypes, how ImagePluginComponent collects plugins through a DSL, and how built in plugins like <a href="https://skydoves.github.io/landscapist/placeholder/#placeholderplugin">PlaceholderPlugin</a>, <a href="https://skydoves.github.io/landscapist/placeholder/#shimmerplugin">ShimmerPlugin</a>, <a href="https://skydoves.github.io/landscapist/animation/#circular-reveal-animation">CircularRevealPlugin</a>, <a href="https://skydoves.github.io/landscapist/palette/">PalettePlugin</a>, and <a href="https://skydoves.github.io/landscapist/zoomable/#zoomableplugin">ZoomablePlugin</a> implement each plugin type in practice.</p><p>Understanding this plugin architecture opens up a wide range of practical use cases. Because each plugin is self contained and attaches with a single + line, you can freely mix and match behaviors for increasingly complex scenarios (for example, a shimmer placeholder combined with a circular reveal and palette extraction) without the complexity scaling with the number of features. 
When requirements change, you detach a plugin by removing one line and attach a new one in its place.</p><p>As always, happy coding!</p><p>— <a href="https://github.com/skydoves/">Jaewoong (skydoves)</a></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=660aecf26236" width="1" height="1" alt=""><hr><p><a href="https://proandroiddev.com/build-your-own-landscapist-image-plugin-in-jetpack-compose-660aecf26236">Build Your Own Landscapist Image Plugin in Jetpack Compose</a> was originally published in <a href="https://proandroiddev.com">ProAndroidDev</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Jetpack Compose Hot Reload on Real Android Devices: Reduce the Build Time to Seconds]]></title>
            <link>https://proandroiddev.com/jetpack-compose-hot-reload-on-real-android-devices-with-compose-hotswan-2f5dfccd55bf?source=rss----c72404660798---4</link>
            <guid isPermaLink="false">https://medium.com/p/2f5dfccd55bf</guid>
            <category><![CDATA[compose-multiplatform]]></category>
            <category><![CDATA[android]]></category>
            <category><![CDATA[kotlin-multiplatform]]></category>
            <category><![CDATA[hot-reloading]]></category>
            <category><![CDATA[jetpack-compose]]></category>
            <dc:creator><![CDATA[Jaewoong Eum]]></dc:creator>
            <pubDate>Sat, 04 Apr 2026 17:42:55 GMT</pubDate>
            <atom:updated>2026-04-04T17:42:53.920Z</atom:updated>
<content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*0yvyGd2-xT2xeU-8" /><figcaption>Unsplash@lenaborge</figcaption></figure><p>In Jetpack Compose development, you are probably rebuilding your app tens of times per day to adjust subtle design specifications. Each rebuild takes anywhere from one minute to several (up to ~30 minutes, depending on your project scale), followed by navigating back to the screen you were working on and reconstructing the state you just lost so you can test your change fairly.</p><p>Let’s be honest. That adds up to one to two hours of dead time per day, every day, spent waiting instead of iterating. Well, for some, it’s the perfect coffee break. For others, it’s a frustrating blocker that slows everything down. That’s exactly the problem Compose HotSwan is built to solve.</p><p><a href="https://hotswan.dev/">Compose HotSwan</a> is a JetBrains IDE plugin for Android Studio/IntelliJ and a Gradle compiler plugin that performs Jetpack Compose hot reload on real physical Android devices and emulators, applying code changes to a running app in one to three seconds while preserving as much state as possible.</p><p>In this article, you will explore what you can do with Compose HotSwan, the engineering constraints that make hot reload on Android difficult, how Compose HotSwan works within those constraints, and its core capabilities.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*DhGY3fyxOcZRsgmx_q5Vdw.gif" /></figure><h3>The build tax: What slow feedback actually costs you</h3><p>Every Android developer knows the routine. You tweak a modifier, hit run, and wait. The build compiles, dexes, installs, and restarts your app. It launches from the home screen. You tap through multiple screens, scroll to the right position, and recreate the state just to see a one-line change.</p><p>This is not just an inconvenience. 
It changes how you make decisions, and the cost compounds throughout your day.</p><p>Picture this. You are working on a screen buried several layers deep. Everything is set up exactly where you need it. You change fontSize = 16.sp to 18.sp. Gradle starts building. On a small project, it takes a minute. On a large codebase, it takes much longer. The app reinstalls and restarts from the “home screen”.</p><p>All your state is gone. Now you navigate back again. Load the data again. Scroll to the same spot again. And when you finally get there, you cannot remember what the previous state looked like. The context you built up is gone after a single build cycle.</p><p>Now you do it all over again. And by the time you get back, you have already forgotten what you were comparing.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/853/1*sIQFIaFD-9jG4eiA9Qy-7A.gif" /></figure><p>With hot reload, this entire loop collapses. You change a value, save the file, and see the result on your device in ~1 second, with the same screen, the same state, and the same scroll position still in place. The video below shows this in practice: modifying composable code and watching the running app update in real time without a rebuild or restart.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*DJKvhDbLh6ckxatAAepj_Q.gif" /></figure><p>When you save a file, it compiles only the changed code via Gradle’s incremental build, extracts the modified classes from the resulting DEX, transfers them to the device, and swaps them directly in ART’s memory using a native agent. It then triggers a scoped recomposition that invalidates only the affected composable scopes while preserving the slot table structures and all embedded state.</p><p>It sounds like a compelling shift in how you build with Compose, right? But on Android, this is fundamentally harder due to ART’s strict runtime constraints, which make class swapping far more restrictive than on other platforms. 
The engineering behind Compose HotSwan is covered later in this article.</p><h3>Installation and setup</h3><p>Getting Compose HotSwan running takes three steps: install the IDE plugin, apply the Gradle plugin, and sync.</p><h4>Requirements</h4><p>Make sure your environment meets the prerequisites. Your project’s minSdk does not matter for Compose HotSwan. What matters is the API level of the device you are running on. API 30 or higher is recommended because it supports a broader range of reloadable changes, including new functions, enum values, and data class properties.</p><p>For the full requirements and Gradle configuration options, see the <a href="https://hotswan.dev/docs/requirements">Requirements</a> and <a href="https://hotswan.dev/docs/gradle">Gradle Configuration</a> documentation.</p><h4>Step 1: Install the IDE plugin</h4><p>Open Android Studio (Meerkat 2024.3 or later) and navigate to Settings &gt; Plugins &gt; Marketplace. Search for &quot;Compose HotSwan&quot; and install it. Restart the IDE when prompted.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*XiHjUTb6_32tI1tY" /></figure><h4>Step 2: Apply the Gradle plugin</h4><p>Add the HotSwan compiler plugin to your version catalog. In libs.versions.toml:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/cdb6b0e77248d055736c130403b27d91/href">https://medium.com/media/cdb6b0e77248d055736c130403b27d91/href</a></iframe><p>For the latest version, check the official <a href="https://hotswan.dev/releases">release notes</a>.</p><h4>Step 3: Sync and run</h4><p>Sync Gradle, build your app in debug mode, and run it on your physical device or emulator. Once the app is running, hit the “Start HotSwan” button and wait a second. Now, you are ready to hot reload.
Make a change to any Compose file, save it, and watch the update appear on your device in seconds.</p><h4>1 more step: Get the Free License</h4><p>Compose HotSwan is a freemium plugin, and you can start a 2-week free trial with full access to all features. This gives you plenty of time to try it in your own project and experience the impact firsthand.</p><p>You can claim your free trial in under 10 seconds by clicking the button in the Compose HotSwan panel with your JetBrains account. <a href="https://hotswan.dev/pricing">As for pricing</a>, well, it’s not about the price. It’s about your time. Save hours every day writing Compose.</p><h3>What you can hot reload</h3><p>The scope of reloadable changes directly determines how much of your daily workflow benefits from hot reload. If a tool only handles color and text changes, it helps with a narrow slice of UI iteration. Compose HotSwan covers a broader range because the compiler plugin and native client SDK operate at the class level, not the composable level.</p><p><strong>Composable function bodies<br></strong>Any change inside a composable function body reloads instantly. Text, colors, font sizes, layout modifiers, padding, animation parameters, conditional logic, and child composable structure all work.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/79ba91cc108f9f9722a762a911f57c80/href">https://medium.com/media/79ba91cc108f9f9722a762a911f57c80/href</a></iframe><p>Save the file, and the running app updates in place in ~1 second. No rebuild, no reinstall, no navigation.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*JjBaLuN26MD_311IessIJg.gif" /></figure><h4>Adding a new Composable and reordering</h4><p>You can add a new composable function and reorder existing composable calls within a function, and the state follows each composable correctly. 
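</p><p>For instance, picture a pair of stateful children like these (an illustrative snippet of mine, not code from the article’s embeds):</p>

```kotlin
import androidx.compose.foundation.layout.Column
import androidx.compose.material3.Button
import androidx.compose.material3.Text
import androidx.compose.runtime.Composable
import androidx.compose.runtime.getValue
import androidx.compose.runtime.mutableStateOf
import androidx.compose.runtime.remember
import androidx.compose.runtime.setValue

@Composable
fun Counters() {
    Column {
        // Swap these two calls while the app is running:
        Counter(label = "A")
        Counter(label = "B")
    }
}

@Composable
fun Counter(label: String) {
    // Each Counter owns its own remembered count.
    var count by remember { mutableStateOf(0) }
    Button(onClick = { count++ }) {
        Text("$label: $count")
    }
}
```

<p>Reordering the two Counter calls during a reload is exactly the case where each button is expected to keep its own count.</p><p>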
Compose HotSwan’s compiler plugin preserves state continuity across structural changes in your code.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*O5A8y8VAJQZUFu-JO2G9kA.gif" /></figure><h4>Non-composable function changes</h4><p>This is one of the key differences from Live Edit, which only supports changes to composable functions. Because Compose HotSwan swaps classes at the ART level, any method body change is reloadable, not just @Composable ones. Changes to ViewModel methods, repository logic, utility functions, and data mappers all hot reload:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/a0e9e525aa66de697bfa0daa9e10c048/href">https://medium.com/media/a0e9e525aa66de697bfa0daa9e10c048/href</a></iframe><p>Any composable that calls this function displays the updated result on the next recomposition. In practice, this means business logic iteration, not just UI tweaking, benefits from the fast feedback loop.</p><h4>Resource value changes</h4><p>Modify existing values in strings.xml, colors.xml, dimens.xml, and drawable files. HotSwan detects resource file changes and hot reloads all your resource value changes instantly.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/ad67dfd6f5897cfb1c2ba28e56b3a78a/href">https://medium.com/media/ad67dfd6f5897cfb1c2ba28e56b3a78a/href</a></iframe><p>Note that adding entirely new resources (new string keys, new color entries) changes the R class field assignments, requiring a full rebuild. 
Modifying existing values works because the resource IDs remain unchanged.</p><h4>Data class property additions</h4><p>You can add new properties to data classes, and the toString(), copy(), equals(), and hashCode() methods regenerate automatically:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/56e92b9b09a6df7760689941240837cc/href">https://medium.com/media/56e92b9b09a6df7760689941240837cc/href</a></iframe><p>Note that removing properties or changing their types requires a full rebuild. For the complete list of what you can and cannot hot reload, including edge cases around inline functions, lambda count changes, and cross-file references, see the <a href="https://hotswan.dev/docs/supported-changes">Supported Changes</a> documentation.</p><h3>So, how it works: The five-stage pipeline</h3><p>At a high level, Compose HotSwan executes a five-stage pipeline when you save a file: <strong>Save → Compile → Extract → Swap → Recompose</strong>. The IDE detects the changed file, identifies the Gradle module, runs an incremental compilation on just that module, extracts only the modified classes from the resulting DEX, swaps them into the device’s memory via the native ART agent, and triggers a scoped recomposition.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/808/1*kUNomt755nvkMfX5QOW8Tw.gif" /></figure><p>Each of these five stages hides far more complexity than this summary can cover. The compilation stage alone involves custom IR transformations, independent compilation of each file, and parameter reservation strategies. The extraction stage performs differential class analysis across lambda renumbering boundaries.</p><p>The swap stage interacts with the Android runtime under constraints that vary by API level. The recomposition stage integrates with the Compose runtime’s slot table through several invalidation strategies.
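</p><p>As a mental model (purely illustrative, since HotSwan’s real internals are not public), you can think of the loop as each stage handing its artifact to the next:</p>

```kotlin
// Hypothetical names, not HotSwan's actual API: a toy model of the
// Save -> Compile -> Extract -> Swap -> Recompose hand-off.
enum class Stage { SAVE, COMPILE, EXTRACT, SWAP, RECOMPOSE }

// Each stage consumes the previous stage's artifact: the saved file,
// the incrementally compiled classes, the changed DEX entries, the
// swapped methods, and finally the invalidated composable scopes.
fun onFileSaved(changedFile: String, runStage: (Stage, String) -> String): String =
    Stage.values().fold(changedFile) { artifact, stage -> runStage(stage, artifact) }
```

<p>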
For a more detailed (but still summarized) breakdown of each stage, see the <a href="https://hotswan.dev/docs/how-it-works">How It Works</a> documentation.</p><p>Compose HotSwan targets ART, so its architecture is fundamentally different from JVM-based hot reload and had to be built from scratch in a completely different environment. If you are curious about how hot reload works on the JVM (Desktop) side, the JetBrains team published <a href="https://blog.jetbrains.com/kotlin/2026/01/the-journey-to-compose-hot-reload-1-0-0/">The Journey to Compose Hot Reload 1.0.0</a>, which covers their own path to shipping hot reload for Compose Desktop. Even that article, operating in the comparatively forgiving JVM environment, clearly had to compress an enormous amount of internal detail to fit into a single post.</p><h3>Why hot reload on Android is hard</h3><p>Hot reload exists in mature form on both Flutter (Dart VM) and React Native (JavaScript engine). Android is the outlier, and the reason is the runtime itself.</p><h4>This is a research-level problem</h4><p>There is no public API, library, or documented technique for doing this. This is definitely not a toy project. The Kotlin compiler’s internals, ART’s class swapping behavior, the Compose compiler’s code generation patterns, and D8’s DEX output format all intersect in ways that no one has had to reason about jointly before.</p><p>The HotSwan compiler plugin operates at the IR (Intermediate Representation) level of the Kotlin compiler, the same layer where the Compose compiler plugin itself runs. It analyzes function bodies, tracks composable identity, and performs independent compilation so that each function can be swapped in isolation without invalidating unrelated code.</p><p>On the device side, the problem shifts to DEX bytecode. HotSwan performs structural analysis on compiled DEX files, comparing class schemas between the installed APK baseline and the newly compiled output.
But “comparing class schemas” understates the complexity. The Kotlin compiler generates synthetic classes for lambdas, and these classes are assigned sequential numeric identifiers. If you add a single lambda in the middle of a file, every subsequent lambda’s class name shifts. It’s a mess.</p><p>Data classes and ViewModels are another headache. Data class modifications require a binary-level schema transformation to reconcile the old memory layout with the new definition. And the Compose compiler generates its own layer of synthetic code, group keys and slot table entries, that track composable identity for recomposition. All of these interact. A change to a composable body can simultaneously trigger lambda renumbering, slot table key changes, and group restructuring, and Compose HotSwan has to handle all of it atomically.</p><h4>The biggest challenge: ART treats class structure as immutable</h4><p>The Android Runtime (ART) performs aggressive optimizations when it loads your app. It compiles bytecode ahead of time, inlines method calls, and lays out objects in memory based on the class definition at load time. Once a class is loaded, ART treats its structure as mostly immutable. You cannot add or remove fields, change method signatures, or alter the class hierarchy at runtime. Only method bodies can be replaced.</p><p>This is a fundamental constraint. A hot reload system for Android must work entirely within these boundaries: detect which changes are safe to swap, and fall back gracefully for everything else.</p><h4>State preservation is what really matters</h4><p>Hot reload without state preservation is just a faster rebuild tool. You still lose navigation stacks, form data, scroll positions, and all the context you spent time constructing.
The time savings from skipping the build are negated by the time spent reconstructing state.</p><p>The real engineering challenge is making the reload invisible: swapping method bodies in memory while keeping the Compose runtime’s slot table intact, so every remember{} value, every scroll position, every dialog state, and every ViewModel instance survives. This requires a compiler plugin that tracks composable identity across reloads and a multi-tier recomposition strategy that degrades gracefully when targeted invalidation is not possible.</p><h4>Other platforms control their own runtime</h4><p>Flutter runs Dart in the Dart VM. React Native runs JavaScript in Hermes or JSC. Both have full control over class definitions, memory layout, and code loading, which makes hot reload a solvable problem at the VM level. ART is a different kind of runtime. It was designed for production performance: ahead-of-time compilation, aggressive inlining, and fixed memory layouts.</p><p>These are the right trade-offs for shipping apps, but they work against developer iteration. Compose HotSwan does not try to replace ART or work around it. It operates within ART’s constraints, which is what makes the engineering difficult and what makes the result reliable.</p><p>For a deeper look at the full pipeline and the engineering behind each stage, see the <a href="https://hotswan.dev/docs/how-it-works">How It Works</a> documentation.</p><h3>Do the math and save your time</h3><p>To put concrete numbers on this: say you rebuild 30 times per day during active UI work, and each cycle costs you 2 minutes (build + navigate + reconstruct state). That is 60 minutes per day spent waiting. With hot reload completing in 1 to 3 seconds and preserving your full app state, those same 30 iterations take roughly 2 minutes total. That is over an hour reclaimed per day.</p><p>But 2 minutes per build is an optimistic number. It assumes a small to mid-sized project with a warm Gradle daemon.
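</p><p>The arithmetic is easy to sanity-check (a toy calculation mirroring the numbers above):</p>

```kotlin
// Toy sanity check of the numbers quoted in the text.
fun dailyWaitMinutes(iterations: Int, secondsPerCycle: Int): Double =
    iterations * secondsPerCycle / 60.0

fun main() {
    // 30 cycles at 2 minutes each (build + navigate + reconstruct state)
    println(dailyWaitMinutes(30, 120))  // 60.0 minutes of waiting per day
    // the same 30 cycles at roughly 3 seconds of hot reload each
    println(dailyWaitMinutes(30, 3))    // 1.5 minutes, about an hour reclaimed
}
```

<p>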
If you work at a large organization where the project has over 1,000 modules, a single incremental build can take 15 to 20 minutes. At that scale, 30 iterations per day is not realistic; developers self-limit to fewer attempts because each one is so expensive.</p><p>Even if you only rebuild 15 times at 15 minutes each, that is nearly 4 hours per day lost to the build cycle. With hot reload, those 15 iterations take under a minute. That is most of a working half-day returned to actual development, every day, for every engineer on the team. Multiply that across a team of 20, and you are looking at hundreds of engineering hours per month.</p><p>When adopted at the team level, hot reload becomes a productivity multiplier for the entire engineering organization. Fewer build cycles means less context switching, faster code reviews with visual proof of changes, tighter designer-developer iteration loops, and shorter time from prototype to production. It is not just about saving minutes per developer. It is a structural improvement to how a development team ships UI work.</p><h3>Conclusion</h3><p><a href="https://hotswan.dev/">Compose HotSwan</a> isn’t built to showcase the technical complexity of hot reload; it’s designed to meaningfully support the Android developer ecosystem and directly improve developer productivity. Going forward, it will continue to focus on helping developers work more efficiently.</p><p>Although not covered in this post, Compose HotSwan also includes several useful features:</p><ul><li><a href="https://hotswan.dev/docs/preview"><strong>Preview Runner</strong></a> that runs your Preview composable in ~1 second on your real device. Then you can generate full, pretty UI docs by running just a single Gradle task. 
This is a real gem.</li><li><a href="https://hotswan.dev/docs/snapshot"><strong>Screenshot Snapshots</strong></a> that automatically capture screenshots on every hot reload and generate shareable web pages for collaboration with designers</li><li><a href="https://hotswan.dev/docs/mcp-server"><strong>MCP integration</strong></a> that connects with your AI agent to trigger hot reload and supports 20+ MCP tasks.</li></ul><p>Compose HotSwan is available on the <a href="https://plugins.jetbrains.com/plugin/30551-compose-hotswan/">JetBrains Marketplace</a>. The full documentation, including detailed guides for each feature covered in this article, is at <a href="https://hotswan.dev/docs">hotswan.dev/docs</a>. For bug reports and feature requests, file an issue on <a href="https://github.com/skydoves/compose-hotswan-issuetracker">GitHub</a> or join the <a href="https://discord.gg/jaTcyK5XCr">Discord</a>.</p><p>As always, happy coding!</p><p>— <a href="https://github.com/skydoves/">Jaewoong (skydoves)</a></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=2f5dfccd55bf" width="1" height="1" alt=""><hr><p><a href="https://proandroiddev.com/jetpack-compose-hot-reload-on-real-android-devices-with-compose-hotswan-2f5dfccd55bf">Jetpack Compose Hot Reload on Real Android Devices: Reduce the Build Time to Seconds</a> was originally published in <a href="https://proandroiddev.com">ProAndroidDev</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Stop Inflating WebViews in XML: A Performance Fix for Heavy Content]]></title>
            <link>https://proandroiddev.com/stop-inflating-webviews-in-xml-a-performance-fix-for-heavy-content-bc7780787542?source=rss----c72404660798---4</link>
            <guid isPermaLink="false">https://medium.com/p/bc7780787542</guid>
            <category><![CDATA[kotlin]]></category>
            <category><![CDATA[android]]></category>
            <category><![CDATA[android-app-development]]></category>
            <category><![CDATA[androiddev]]></category>
            <category><![CDATA[webview]]></category>
            <dc:creator><![CDATA[Dmytro Petrenko]]></dc:creator>
            <pubDate>Mon, 30 Mar 2026 20:14:12 GMT</pubDate>
            <atom:updated>2026-03-31T17:23:02.445Z</atom:updated>
            <content:encoded><![CDATA[<h4>We had a delay(300) hiding a real initialization problem. Here’s what was actually happening — and the fix that made it disappear.</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*z5cpI1MK1PY6wo47srHqnw.png" /><figcaption>Profiler Before-&gt;After. Source: Author</figcaption></figure><p>Our Android app loads a heavy, interactive SVG inside a WebView — think thousands of path elements, JavaScript interactivity, the works. The problem was that adding the SVG content into the WebView would cause a noticeable lag. Not a crash, not an ANR, just… a stutter. The kind of thing that makes an app feel cheap.</p><p>The workaround someone had added before?</p><pre>delay(300)<br>// then load SVG into WebView<br>webView.loadDataWithBaseURL(null, svgHtml, &quot;text/html&quot;, &quot;UTF-8&quot;, null)<br>webView.visibility = View.VISIBLE</pre><p>A 300ms artificial delay before loading the content, just to give the WebView time to “settle in” after inflation. Classic band-aid.</p><p>After digging in, the root cause was surprising: a single &lt;WebView&gt; tag in the XML layout.</p><h4>What was actually happening</h4><p>Here’s what the layout looked like:</p><pre>&lt;WebView<br>    android:id=&quot;@+id/web_view&quot;<br>    android:layout_width=&quot;match_parent&quot;<br>    android:layout_height=&quot;0dp&quot;<br>    android:visibility=&quot;invisible&quot;<br>    ... /&gt;</pre><p>Looks harmless, right?</p><p>The issue is that when Android inflates this XML, it creates the WebView immediately — before the Fragment even hits <em>onViewCreated()</em>.</p><p>And WebView isn’t just another TextView.</p><p>Under the hood it’s spinning up a Chromium renderer, allocating native memory, setting up IPC channels. 
On an older phone, that’s easily 100–200ms of work.</p><p>If you try to load HTML with heavy SVG content into a WebView that’s still initializing internally, the two operations compete for resources — and the result is a visible lag. That’s why the <em>delay(300)</em> “worked”: it was giving the WebView enough breathing room to finish setting up before throwing a complex HTML document full of SVG at it.</p><p>But a hardcoded delay is fragile. Too short on a slow device — lag is still there. Too long on a fast device — the user is staring at a blank screen for no reason.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*LjCEzUXf-3qhQyHKJSNQVA.png" /><figcaption>Source: Author</figcaption></figure><h4>The fix that actually worked</h4><p>Replace the <em>&lt;WebView&gt;</em> in XML with an empty <em>&lt;FrameLayout&gt;</em>:</p><pre>&lt;FrameLayout<br>    android:id=&quot;@+id/web_view_container&quot;<br>    android:layout_width=&quot;match_parent&quot;<br>    android:layout_height=&quot;0dp&quot;<br>    android:visibility=&quot;gone&quot;<br>    ... /&gt;</pre><p>Then create the WebView programmatically and load the content right away:</p><pre>override fun onViewCreated(view: View, savedInstanceState: Bundle?) {<br>    super.onViewCreated(view, savedInstanceState)<br>    <br>    webView = WebView(requireContext()).apply {<br>        layoutParams = ViewGroup.LayoutParams(MATCH_PARENT, MATCH_PARENT)<br>    }<br>    binding.webViewContainer.addView(webView)<br>    <br>    webView.apply {<br>        settings.javaScriptEnabled = true<br>        webViewClient = MyWebViewClient()<br>        loadUrl(&quot;file:///android_asset/big_svg.html&quot;)<br>    }<br>}</pre><p>The key difference is <em>when</em> initialization happens relative to content loading.</p><p>When WebView lives in XML, Android inflates it eagerly — during layout inflation, before your Fragment even reaches onViewCreated().
If you immediately throw heavy SVG content at it, you&#39;re racing against Chromium&#39;s internal setup: renderer threads, native memory allocation, IPC channels. On a slower device, that race produces visible jank.</p><p>When you create WebView programmatically inside onViewCreated(), you control the timing explicitly. The WebView initializes, gets added to the hierarchy via addView(), and <em>then</em> you load content — sequentially, not in parallel. No artificial delay needed because there&#39;s no race condition to paper over.</p><p>And don’t forget to clean up — remove the WebView from the container before destroying it:</p><pre>override fun onDestroyView() {<br>    binding.webViewContainer.removeView(webView)<br>    webView.destroy()<br>    super.onDestroyView()<br>}</pre><h3>Why this matters more than you’d think</h3><p>While working on this, a few other things became obvious about why the XML approach was problematic.</p><p><strong>The memory leak thing.</strong> When a WebView sits in the XML layout, it’s tied to the Fragment’s context through the view hierarchy. If some JavaScript callback or pending request keeps it alive past onDestroyView(), the whole view tree leaks with it. That removeView → destroy pattern above is specifically to avoid this.</p><p>Also — and this one bit us in a different context — XML inflation uses the <strong>Activity’s context</strong>. So the WebView holds a strong reference to the Activity. We’re building an SDK, so we don’t control the host app’s lifecycle. Using <em>requireContext().applicationContext</em> when creating the WebView programmatically breaks that reference. Probably doesn’t matter if you’re building a regular app, but for SDK work it’s kind of essential.</p><p>And then there’s the obvious one: sometimes you don’t even know if you need the WebView yet. Config might not be loaded, user might not have permissions. 
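</p><p>Putting those last two points together, a sketch of mine (not the article’s code; AppConfig and needsWebView are hypothetical placeholders) could defer creation until it is actually required:</p>

```kotlin
// Sketch only: lazy, application-context-based WebView creation
// inside the same Fragment as the snippets above.
private var webView: WebView? = null

// The application context keeps the WebView from holding a strong
// reference to the host Activity, which mainly matters for SDK code.
private fun ensureWebView(): WebView =
    webView ?: WebView(requireContext().applicationContext).apply {
        layoutParams = ViewGroup.LayoutParams(MATCH_PARENT, MATCH_PARENT)
        binding.webViewContainer.addView(this)
        webView = this
    }

private fun showWebContentIfNeeded(config: AppConfig) {
    if (!config.needsWebView) return  // never pay the Chromium startup cost
    ensureWebView().loadUrl("file:///android_asset/big_svg.html")
}
```

<p>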
With XML you pay the full cost upfront no matter what.</p><h3>So when should you still use XML?</h3><p>Honestly? For simple stuff.</p><p>If the WebView is showing a privacy policy or a help page — something lightweight where 200ms of inflation time doesn’t matter — XML is fine. Less code, easier to read, visible in the layout preview.</p><p>But the moment heavy content, complex layouts, SDK work, or that lag on content load comes into play — switch to programmatic.</p><p>It’s maybe 5 extra lines of Kotlin, and it solves problems you didn’t even know you had.</p><p>A &lt;FrameLayout&gt; replacing a delay(300) isn&#39;t the kind of fix you&#39;d put on a resume. But it&#39;s the kind that makes you actually understand what WebView is doing underneath — and why timing matters more than most Android developers realize.</p><p>If you’re loading anything heavier than a static page, programmatic initialization isn’t a premature optimization. It’s just the right default.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=bc7780787542" width="1" height="1" alt=""><hr><p><a href="https://proandroiddev.com/stop-inflating-webviews-in-xml-a-performance-fix-for-heavy-content-bc7780787542">Stop Inflating WebViews in XML: A Performance Fix for Heavy Content</a> was originally published in <a href="https://proandroiddev.com">ProAndroidDev</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[AI Coding Agents Are Silently Eating Your RAM If You Do Android Development]]></title>
            <link>https://proandroiddev.com/ai-coding-agents-are-silently-eating-your-ram-if-you-do-android-development-09d87ed73e95?source=rss----c72404660798---4</link>
            <guid isPermaLink="false">https://medium.com/p/09d87ed73e95</guid>
            <category><![CDATA[gradle]]></category>
            <category><![CDATA[kotlin]]></category>
            <category><![CDATA[android]]></category>
            <category><![CDATA[android-app-development]]></category>
            <category><![CDATA[androiddev]]></category>
            <dc:creator><![CDATA[Grigoriy Shimichev]]></dc:creator>
            <pubDate>Mon, 30 Mar 2026 15:50:26 GMT</pubDate>
            <atom:updated>2026-03-30T15:50:24.630Z</atom:updated>
            <content:encoded><![CDATA[<p>If you’re an Android developer using AI coding agents like Claude Code or Cursor, <strong>there’s a good chance your machine is slower than it should be right now</strong>.</p><h3>The problem</h3><p>AI agents trigger Gradle builds constantly. If your setup isn’t perfectly consistent across projects — different Gradle wrapper versions, different JDK versions — daemons don’t get reused. They pile up. Kotlin compilation daemons do the same.</p><p>No warnings, nothing crashes. Your machine just gets slower throughout the day. You blame Chrome, you blame Android Studio, you restart, and the cycle repeats.</p><p>Here’s what my Activity Monitor looks like after a few hours of AI-assisted Android development across different projects:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*DAnWTPuycBofdH5jO4HOqg.png" /></figure><p>A bunch of java processes sitting idle, eating over 7 GB of RAM combined. Gradle daemons, Kotlin compilation daemons — just waiting for a build that probably won&#39;t come.</p><h3>What I built</h3><p><strong>Java Daemon Watcher</strong> — a small macOS app that detects idle Gradle and Kotlin daemons and kills them automatically. It uses jps and CPU sampling to find idle daemons and kills them after a configurable timeout. Allowlist-based — never touches your IDE.</p><p>Saves me around <strong>100GB of RAM per week</strong>.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/811/0*l_j61aZYuutIKK3p" /></figure><h3>How it works</h3><p>The app runs a simple three-step pipeline on a configurable interval:</p><p><strong>1. Discovery.</strong> It runs jps -lm (bundled with every JDK) to list all running Java processes with their main class names.</p><p><strong>2.
Filtering.</strong> It matches processes against an allowlist of known daemon classes:</p><ul><li>org.gradle.launcher.daemon.bootstrap.GradleDaemon</li><li>org.jetbrains.kotlin.daemon.KotlinCompileDaemon</li></ul><p>Everything else — your IDE, app servers, any other Java process — is never touched.</p><p><strong>3. Idle detection + cleanup.</strong> For each daemon, the app samples CPU usage. If a daemon stays idle longer than the configured threshold, it gets a SIGTERM — the same graceful shutdown signal gradle --stop uses.</p><h3>App Configuration</h3><p>You can set the idle timeout, scan interval, and enable auto-start at login:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/870/1*ltCUnJ3xcnd01SjQtim3Mg.png" /></figure><h3>Open source</h3><p>MIT licensed: <a href="https://github.com/grishaster80/java-daemon-watcher">github.com/grishaster80/java-daemon-watcher</a></p><p>If you try it, I’d love to hear feedback — issues, feature requests, or just a star if it helped.</p><p>Have you noticed this problem on your machine? If you try the tool — how much RAM did it save you? Would love to hear your numbers!</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=09d87ed73e95" width="1" height="1" alt=""><hr><p><a href="https://proandroiddev.com/ai-coding-agents-are-silently-eating-your-ram-if-you-do-android-development-09d87ed73e95">AI Coding Agents Are Silently Eating Your RAM If You Do Android Development</a> was originally published in <a href="https://proandroiddev.com">ProAndroidDev</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Jetpack Compose: Animated Snake Border for Rectangle Shapes]]></title>
            <link>https://proandroiddev.com/jetpack-compose-animated-snake-border-for-rectangle-shapes-31de5e9ef713?source=rss----c72404660798---4</link>
            <guid isPermaLink="false">https://medium.com/p/31de5e9ef713</guid>
            <category><![CDATA[jetpack-compose]]></category>
            <category><![CDATA[android-app-development]]></category>
            <category><![CDATA[androiddev]]></category>
            <category><![CDATA[kotlin]]></category>
            <category><![CDATA[android]]></category>
            <dc:creator><![CDATA[Yuriy Skul]]></dc:creator>
            <pubDate>Mon, 30 Mar 2026 15:49:12 GMT</pubDate>
            <atom:updated>2026-03-30T15:49:11.231Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*oIvo_V7tlkVq9mewnkfjVA.jpeg" /><figcaption>Glowing running snake border</figcaption></figure><h4>Introduction</h4><p>A glowing snake border may sound like a simple UI effect. On the web, there are many examples, variants, and breakdowns of how to build it. But on native Android, I have not really seen it done properly. Most implementations are either simplified to a circle with plain rotation, or they use the same rotation-based progress idea even for rectangle shapes. As a result, the motion stops being uniform and starts to feel uneven and broken.</p><p>Let’s fix that properly.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/730/1*WYtX9eUXEuCIxTyr7noqxw.gif" /><figcaption>Rectangle snake border button</figcaption></figure><p><strong>Warning</strong>: this solution is primarily a showcase or a learning exercise. <strong>Never ever ship a continuously running animation like this in production unless you fully understand its performance implications and can guarantee it stays within the 16 ms frame budget.</strong></p><p>You will pay for this twice: first in development and maintenance — including the cost of supporting older and lower-end devices — and then, more importantly, in user experience.
Users are extremely sensitive to even minor stutters, freezes, or small FPS drops, especially on budget devices where performance is already constrained.</p><h4>This article is part of a glowing snake border series:</h4><ul><li>Part 1 —<a href="https://medium.com/@yuriyskul/making-an-animated-glowing-circular-button-with-blur-and-gradient-shadows-in-jetpack-compose-72ffe9a3169d"> Animated Snake Border for </a><a href="https://medium.com/@yuriyskul/making-an-animated-glowing-circular-button-with-blur-and-gradient-shadows-in-jetpack-compose-72ffe9a3169d">CircleShape</a>.</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/300/1*YZ2Mlfl6h-kPAPVN0h3rwQ.gif" /><figcaption>Simple rotation solution from Part 1</figcaption></figure><ul><li>Part 2 — Animated Snake Border for Rectangle Shapes . — <em>You are here</em></li><li>Part 3 — Animated Snake Border for any Shape; Path &amp; Sampling trick (coming soon).</li></ul><p><strong>Current article full code: </strong><a href="https://github.com/yuriyskulskiy/android-compose-playground/tree/main/app/src/main/java/com/skul/yuriy/composeplayground/feature/animatedRectButton">link</a> with <a href="https://github.com/yuriyskulskiy/android-compose-playground/tree/main/app/src/main/java/com/skul/yuriy/composeplayground/feature/animatedRectButton/snake">running snake</a>.</p><h4>Problem</h4><p>A common approach is to animate movement by rotating a line and taking its intersection with the path as the current position.<br>This works perfectly for circles, but for shapes where width and height differ significantly, the speed varies along the path.<br>Also, joins between vertical and horizontal segments often create visible seams, breaking the illusion of a continuous shape.<br>Rounded corners are often ignored completely, with no support for inner (inset) or outer (outset) borders that extend beyond the composable bounds.</p><figure><img alt="" 
src="https://cdn-images-1.medium.com/max/450/1*g0kWNzGQAWY04-ghzuzFyA.gif" /><figcaption>My 2020 Java/View solution — no corner support</figcaption></figure><h4>Solution Requirements</h4><ul><li>Smooth motion — the velocity vector changes smoothly while its magnitude remains constant</li><li>Two-layer snake body — a blurred halo and a thin stroke core, fading smoothly from head to tail until fully transparent</li><li>No visible seams — the snake must appear continuous across all segment joins</li><li>Support for circular shapes</li><li>Support for rectangles with sharp corners</li><li>Rounded corner support</li><li>Support for varying corner radii — the ability to define different radii for each corner</li><li>State persistence — the snake should preserve its state when leaving and re-entering composition</li><li>Encapsulation — the entire effect should be implemented within a single modifier, without spreading logic across the composition</li><li>Dynamic geometry updates — runtime changes in size or shape must not cause discontinuities or jumps; motion should remain smooth and continuous during transitions</li></ul><h4>Implementation</h4><p>This article focuses only on the snake movement.<br>Text-related effects and the glowing halo border when pressed are out of scope and are covered in my previous articles.</p><p><em>Also, for anyone looking for some hidden magic here, there really is none. The whole thing is just a perimeter split into 8 segments: 4 quarter-circle arcs, 2 horizontal lines, and 2 vertical lines. From there, it becomes a fairly boring mechanical routine: compute where the snake head and tail are, determine which segments they fall into, and then render the snake piece by piece inside each affected segment separately.</em></p><p><strong>Step 1: Extract the animation STATE</strong></p><p>By state here I mean the variable that describes the snake movement itself. 
If we treat the full perimeter as a closed contour of normalized length 1, then that variable is simply the animation progress. It moves from 0 to 1, then jumps back to 0 and repeats. Once this value is known, everything else can be derived from it: the head position, the tail position, and the currently covered perimeter segments.</p><p>This logic could be moved entirely inside the modifier with Modifier.composed, but I also needed the same progress value outside the modifier to keep it synchronized with the drop shadow angle inside GlowingText. That is why I moved it into a separate helper, rememberRectSnakeState.</p><p>Lifecycle observation is also encapsulated there, so the state is not animated when it does not need to be. Another common mistake is to tie progress directly to frame ticks. In that case, if FPS drops by X times,<br> the snake speed also drops by X times. Here the progress is time-based, so its speed does not depend on frame rate.</p><pre> @Stable<br>  class RectSnakeState internal constructor(<br>      initialProgress: Float = 0f<br>  ) {<br>      var progress by mutableFloatStateOf(initialProgress)<br>          internal set<br><br>      internal var isEnabled by mutableStateOf(true)<br>      internal var isLifecycleRunning by mutableStateOf(true)<br><br>      val isRunning: Boolean<br>          get() = isEnabled &amp;&amp; isLifecycleRunning<br><br>      companion object {<br>          // state saver<br>          ...<br>      }<br>  }</pre><pre> @Composable<br>  fun rememberRectSnakeState(<br>      enabled: Boolean = true<br>  ): RectSnakeState {<br>      val state = rememberSaveable(saver = RectSnakeState.Saver) {<br>          RectSnakeState()<br>      }<br>      val lifecycleOwner = LocalLifecycleOwner.current<br><br>      state.isEnabled = enabled<br><br>      DisposableEffect(lifecycleOwner) {<br>          // lifecycle observation<br>          ...<br>      }<br><br>      LaunchedEffect(state.isRunning) {<br>          if (!state.isRunning) 
return@LaunchedEffect<br><br>          val startProgress = state.progress<br>          val startFrameNanos = withFrameNanos { it }<br><br>          while (true) {<br>              val frameNanos = withFrameNanos { it }<br>              val elapsedMs = (frameNanos - startFrameNanos) / 1_000_000f<br>              val loopProgress = elapsedMs / SnakeLoopAnimationDurationMs<br>              state.progress = normalizeSnakeProgress(startProgress + loopProgress)<br>          }<br>      }<br><br>      return state<br>  }</pre><p><strong>Step 2: Build the perimeter geometry</strong></p><p>The first thing we need is an explicit perimeter model, because the snake should move not along an abstract shape, but along a fully defined geometric track.</p><p>In this solution, the perimeter geometry is a precomputed description of that track built from the current size, shape, stroke placement, and resolved corner radii.</p><p>As part of that, the full border is reduced to 8 ordered segments: 4 straight edges and 4 quarter-circle arcs. 
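</p><p>As a quick standalone sanity check of this 8-segment split (the track size and corner radius below are my own example values, not from the article), the per-segment lengths and accumulated start offsets can be computed like this:</p>

```kotlin
import kotlin.math.PI

// Hypothetical helper mirroring the bookkeeping described in this step.
// 8 segments in traversal order: top, top-right arc, right, bottom-right arc,
// bottom, bottom-left arc, left, top-left arc.
fun segmentLengths(trackWidth: Float, trackHeight: Float, radius: Float): FloatArray {
    val quarterTurn = PI.toFloat() / 2f
    val arc = (quarterTurn * radius).coerceAtLeast(0f)
    val horizontal = (trackWidth - 2 * radius).coerceAtLeast(0f)
    val vertical = (trackHeight - 2 * radius).coerceAtLeast(0f)
    return floatArrayOf(horizontal, arc, vertical, arc, horizontal, arc, vertical, arc)
}

fun main() {
    // Example: a 200 x 100 px track with a uniform 20 px corner radius.
    val lengths = segmentLengths(200f, 100f, 20f)

    // Accumulate where each segment starts along the perimeter.
    val starts = FloatArray(lengths.size)
    var totalLen = 0f
    for (i in lengths.indices) {
        starts[i] = totalLen
        totalLen += lengths[i]
    }

    // Straight edges sum to 440 px; the four arcs together form a full circle
    // of radius 20 (~125.66 px), so the total perimeter is ~565.66 px.
    println(totalLen)
}
```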
For each segment, we compute its geometry, its length, and the offset at which it starts along the full perimeter.</p><p>The final geometry object contains the track bounds (left, top, right, bottom), the radius of each corner (topLeftRadius, topRightRadius, bottomRightRadius, bottomLeftRadius), the centers of the four corner arcs (topRightCenter, bottomRightCenter, bottomLeftCenter, topLeftCenter), the length of every segment, the accumulated start offset of every segment, and the total perimeter length.</p><p>Based on this geometry, at any moment of the animation we can determine which segments the snake currently spans and the exact positions of its head and tail on the border.</p><p>— quarterTurn is the length coefficient for a quarter-circle arc.<br> — topLen, rightLen, bottomLen, and leftLen are the lengths of the straight perimeter segments.<br> — topRightArcLen, bottomRightArcLen, bottomLeftArcLen, and topLeftArcLen are the lengths of the four corner arcs.<br> — segmentLengths stores the real length of each of the 8 perimeter segments.<br> — segmentStarts stores the accumulated offset (in pixels) at which each segment begins along the full perimeter.<br> — totalLen is the total length of the entire perimeter track.<br> — topRightCenter, bottomRightCenter, bottomLeftCenter, and topLeftCenter are the arc centers used later for rendering and point lookup.</p><pre> private fun buildRectSnakeTrackGeometry(...): RectSnakeTrackGeometry? 
{<br>      // track bounds, resolved corner radii, and validation<br>      ...<br><br>      val quarterTurn = PI.toFloat() / 2f<br><br>      val topLen = (trackWidth - radii.topLeft - radii.topRight).coerceAtLeast(0f)<br>      val rightLen = (trackHeight - radii.topRight - radii.bottomRight).coerceAtLeast(0f)<br>      val bottomLen = (trackWidth - radii.bottomRight - radii.bottomLeft).coerceAtLeast(0f)<br>      val leftLen = (trackHeight - radii.bottomLeft - radii.topLeft).coerceAtLeast(0f)<br><br>      val topRightArcLen = (quarterTurn * radii.topRight).coerceAtLeast(0f)<br>      val bottomRightArcLen = (quarterTurn * radii.bottomRight).coerceAtLeast(0f)<br>      val bottomLeftArcLen = (quarterTurn * radii.bottomLeft).coerceAtLeast(0f)<br>      val topLeftArcLen = (quarterTurn * radii.topLeft).coerceAtLeast(0f)<br><br>      val segmentLengths = floatArrayOf(<br>          topLen,<br>          topRightArcLen,<br>          rightLen,<br>          bottomRightArcLen,<br>          bottomLen,<br>          bottomLeftArcLen,<br>          leftLen,<br>          topLeftArcLen<br>      )<br><br>      val segmentStarts = FloatArray(segmentLengths.size)<br>      var totalLen = 0f<br>      for (index in segmentLengths.indices) {<br>          segmentStarts[index] = totalLen<br>          totalLen += segmentLengths[index]<br>      }<br><br>      return RectSnakeTrackGeometry(<br>          left = left,<br>          top = top,<br>          right = right,<br>          bottom = bottom,<br>          topLeftRadius = radii.topLeft,<br>          topRightRadius = radii.topRight,<br>          bottomRightRadius = radii.bottomRight,<br>          bottomLeftRadius = radii.bottomLeft,<br>          segmentLengths = segmentLengths,<br>          segmentStarts = segmentStarts,<br>          totalLen = totalLen,<br>          // arc centers<br>          topRightCenter = Offset(...),<br>          bottomRightCenter = Offset(...),<br>          bottomLeftCenter = Offset(...),<br>          topLeftCenter = 
Offset(...),<br>          // rendering-related geometry fields<br>          ...<br>      )<br>  }</pre><p><strong>Step 3: Convert animation progress into snake head and tail positions</strong></p><p>At this step, we stop working with the static perimeter model and derive the current snake span from the animation progress. This is an internal derived state used only for rendering, not the animation state object<br> from Step 1. The progress value is first converted into headDistance, which represents the current position of the snake head along the full perimeter length. The actual snake length is then computed from<br> snakeLengthFraction * totalLen, and tailDistance is obtained by moving backward from the head along the same perimeter. Once both positions are known, they define the full visible snake span that will later be<br> rendered from tail to head across the affected segments.</p><p>— progress is the normalized animation progress.<br> — wrappedProgress keeps the progress inside the looping 0..1 range.<br> — headDistance is the current head position measured along the full perimeter.<br> — snakeLengthFraction defines how much of the perimeter the snake should occupy.<br> — snakeLength is the real snake length on the current track.<br> — tailDistance is the tail position measured backward from the head.<br> — alphaAtDistance(distance) gives the fade value for any point inside the current snake span.</p><pre> private fun buildRectSnakeProgressState(<br>      progress: Float,<br>      snakeLengthFraction: Float,<br>      totalLen: Float<br>  ): RectSnakeProgressState {<br>      // Keep progress inside the stable [0, 1) loop even if the input overflows.<br>      val wrappedProgress = ((progress % 1f) + 1f) % 1f<br>      val headDistance = wrappedProgress * totalLen<br>      val snakeLength = (snakeLengthFraction.coerceIn(0f, 1f) * totalLen)<br>          .coerceIn(0f, totalLen)<br>      val tailDistance = headDistance - 
snakeLength<br><br>      fun alphaAtDistance(distance: Float): Float {<br>          val relative = (distance - tailDistance) / snakeLength<br>          return relative.coerceIn(0f, 1f)<br>      }<br><br>      return RectSnakeProgressState(<br>          headDistance = headDistance,<br>          snakeLength = snakeLength,<br>          tailDistance = tailDistance,<br>          alphaAtDistance = ::alphaAtDistance //(Float) -&gt; Float<br>      )<br>  }</pre><p><strong>Step 4: Split the snake span across the affected segments<br></strong> If the snake is short enough, it may fit entirely inside a single segment.<br>In other cases, it spans multiple segments at once.<br>For each affected segment, we determine which part should be drawn. <br>This gives us the exact intervals that will be rendered on straight edges and corner arcs.</p><p>— startDistance and endDistance define the current snake span on the perimeter: startDistance is the current tail position, and endDistance is the current head position, both measured along the full perimeter path.<br> — wrapped keeps the current distance inside the 0..totalLen perimeter range, so we can correctly determine which segment it belongs to.<br> — findSegmentIndex(...) 
tells us which segment the current part of the snake belongs to.<br> — segmentStart and segmentEnd define the boundaries of that segment.<br> — intervalEnd gives the end of the current piece that can be drawn inside this segment before moving to the next one.</p><pre>private fun drawDistanceIntervalNative(<br>      canvas: android.graphics.Canvas,<br>      geometry: RectSnakeTrackGeometry,<br>      startDistance: Float,<br>      endDistance: Float,<br>      colorFrom: Color,<br>      colorTo: Color,<br>      alphaAtDistance: (Float) -&gt; Float,<br>      paint: Paint,<br>  ) {<br>      val totalLen = geometry.totalLen<br>      var current = startDistance<br><br>      while (current &lt; endDistance) {<br>          val wrapped = ((current % totalLen) + totalLen) % totalLen<br>          val segmentIndex = findSegmentIndex(geometry.segmentStarts, geometry.segmentLengths, wrapped)<br>          val segmentStart = geometry.segmentStarts[segmentIndex]<br>          val segmentEnd = segmentStart + geometry.segmentLengths[segmentIndex]<br>          val intervalEnd = min(endDistance, current + (segmentEnd - wrapped))<br><br>          drawSegmentPartNative(<br>              canvas = canvas,<br>              geometry = geometry,<br>              segmentIndex = segmentIndex,<br>              startDistance = wrapped,<br>              endDistance = wrapped + (intervalEnd - current),<br>              colorFrom = colorFrom,<br>              colorTo = colorTo,<br>              alphaAtDistance = alphaAtDistance,<br>              paint = paint<br>          )<br><br>          current = intervalEnd<br>      }<br>  }</pre><p><strong>Step 5. Render the snake inside drawWithCache</strong></p><p>At this point, the geometry and the snake intervals are already known, so only rendering remains. I do it inside drawWithCache to prepare geometry and paint objects once and reuse them during drawing. 
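</p><p><em>Aside:</em> the findSegmentIndex(...) helper used in Step 4 is not listed in the article. A straightforward version (my sketch, not necessarily the author’s exact code; the real helper also receives segmentLengths, e.g. to handle zero-length segments) is a reverse linear scan over the precomputed start offsets, assuming the distance is already wrapped into [0, totalLen):</p>

```kotlin
// Hypothetical sketch of findSegmentIndex: given the accumulated start offsets
// from Step 2, return the index of the segment containing the wrapped distance.
fun findSegmentIndex(segmentStarts: FloatArray, distance: Float): Int {
    // Scan from the last segment backwards: the first segment whose start
    // offset is <= distance is the one the point falls into.
    for (i in segmentStarts.indices.reversed()) {
        if (distance >= segmentStarts[i]) return i
    }
    return 0
}

fun main() {
    // Start offsets for a 200 x 100 track with 20 px corners:
    // top edge is 160 px long, each arc ~31.4 px, vertical edges 60 px.
    val starts = floatArrayOf(0f, 160f, 191.4f, 251.4f, 282.8f, 442.8f, 474.2f, 534.2f)

    println(findSegmentIndex(starts, 0f))   // 0 -> top edge
    println(findSegmentIndex(starts, 170f)) // 1 -> top-right arc
    println(findSegmentIndex(starts, 540f)) // 7 -> top-left arc
}
```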
This keeps the<br>whole effect inside a single modifier.</p><p>From there, the logic is simple: draw every segment currently covered by the snake. Depending on the current head and tail positions, that may be one segment or several at once. With a large snake fraction such as<br> 0.75f, the snake can span most of the perimeter and sometimes touch all 8 segments. Each affected segment is rendered only for the interval that belongs to the current snake span.</p><p>Straight segments and corner arcs are rendered differently, but both follow the same interval logic. Segment joints have to use cut ends rather than round caps. If a rounded head or tail is needed, it should be<br>drawn separately as a circle or semicircle.</p><pre>fun Modifier.rectSnakeBorder(<br>      state: RectSnakeState,<br>      ...<br>  ): Modifier = this.drawWithCache {<br>      val geometry = buildRectSnakeTrackGeometry(...)<br>      if (geometry == null) {<br>          return@drawWithCache onDrawBehind {}<br>      }<br><br>      val snakeState = buildRectSnakeProgressState(<br>          progress = state.progress,<br>          snakeLengthFraction = snakeLengthFraction,<br>          totalLen = geometry.totalLen<br>      )<br><br>      val bodyStrokePaint = createBodyStrokePaint(...)<br>      val glowStrokePaint = createGlowStrokePaint(...)<br>      val glowHeadPaint = createGlowHeadPaint(...)<br><br>      val head = pointAtDistance(geometry, snakeState.headDistance)<br><br>      onDrawBehind {<br>          if (snakeState.snakeLength &gt; 0f) {<br>              drawIntoCanvas { canvas -&gt;<br>                  val nativeCanvas = canvas.nativeCanvas<br><br>                  // Draw the large glowing outer snake body.<br>                  drawSnakeLayerNative(<br>                      canvas = nativeCanvas,<br>                      geometry = geometry,<br>                      snakeState = snakeState,<br>                      colorFrom = glowColorFrom,<br>                      colorTo = 
glowColorTo,<br>                      paint = glowStrokePaint<br>                  )<br><br>                  // Draw the glowing outer snake head.<br>                  nativeCanvas.drawCircle(<br>                      head.x,<br>                      head.y,<br>                      glowingStrokeWidthPx / 2f,<br>                      glowHeadPaint<br>                  )<br><br>                  // Draw the thin inner snake body on top of the glow.<br>                  drawSnakeLayerNative(<br>                      canvas = nativeCanvas,<br>                      geometry = geometry,<br>                      snakeState = snakeState,<br>                      colorFrom = bodyColorFrom,<br>                      colorTo = bodyColorTo,<br>                      paint = bodyStrokePaint<br>                  )<br>              }<br><br>              // Draw the sharp inner head point.<br>              drawCircle(<br>                  color = bodyColorTo,<br>                  radius = bodyStrokeWidthPx / 2f,<br>                  center = head<br>              )<br>          }<br>      }<br>  }</pre><p>The nativeCanvas is needed here for the outer halo body and the halo head, because BlurMaskFilter is only available through the native Android paint pipeline. The inner snake body is not blurred, but it is still<br> rendered through the same native path simply to reuse the same drawSnakeLayerNative(...) logic.</p><h4>Blur mask seams problem</h4><p>There are still some small visual seams between blurred line segments and blurred arcs, especially when the halo body width becomes large. This current approach does not fully eliminate them. I will cover that separately in the next article using a sampling-based approach.</p><h4>Conclusion</h4><p>This solution supports CircleShape, RectangleShape, and RoundedCornerShape with varying corner radii, which in practice covers most regular UI elements. 
For any other generic Shape, I fail explicitly with: error("rectSnakeBorder supports only RoundedCornerShape, RectangleShape, and CircleShape")</p><p>But for custom shapes, is there a more universal solution that works for any closed shape? Check out my next article.</p><p><strong>Github link</strong>: <a href="https://github.com/yuriyskulskiy/android-compose-playground/tree/main/app/src/main/java/com/skul/yuriy/composeplayground/feature/animatedRectButton">whole article code</a> and <a href="https://github.com/yuriyskulskiy/android-compose-playground/tree/main/app/src/main/java/com/skul/yuriy/composeplayground/feature/animatedRectButton/snake">only animated snake package</a>.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=31de5e9ef713" width="1" height="1" alt=""><hr><p><a href="https://proandroiddev.com/jetpack-compose-animated-snake-border-for-rectangle-shapes-31de5e9ef713">Jetpack Compose: Animated Snake Border for Rectangle Shapes</a> was originally published in <a href="https://proandroiddev.com">ProAndroidDev</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Finally find the Android Service of your dreams after 13 years]]></title>
            <link>https://proandroiddev.com/finally-find-the-android-service-of-your-dreams-after-13-years-3aa499526248?source=rss----c72404660798---4</link>
            <guid isPermaLink="false">https://medium.com/p/3aa499526248</guid>
            <category><![CDATA[android]]></category>
            <category><![CDATA[android-app-development]]></category>
            <category><![CDATA[androiddev]]></category>
            <category><![CDATA[android-service]]></category>
            <category><![CDATA[daydream]]></category>
            <dc:creator><![CDATA[Pavel Nestsiarenka]]></dc:creator>
            <pubDate>Mon, 30 Mar 2026 01:55:53 GMT</pubDate>
            <atom:updated>2026-03-30T01:55:52.091Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*q7uVLUhbPfeKxWb7MDNZIw.png" /></figure><p>In this article I am going to show you a cool Android feature introduced in 2012, and I will try to write a Compose UI for it.</p><p>While exploring the depths of the Android system, I stumbled upon a feature that stole my attention. The class I discovered not only intrigued me with its name, but also surprised me yet again with the interesting features hidden within Android.</p><p>Back in 2012, the <a href="https://android-developers.googleblog.com/2012/12/daydream-interactive-screen-savers.html">Android System UI team introduced</a> a new feature in Android 4.2 called Daydream, which lets you show content while the phone is sleeping. You can simply draw any UI or animation for the user while their phone is charging. UI elements may also have buttons, so in general it is possible to create a production-ready feature screen. Unfortunately, this feature can’t be enabled automatically: users must enable it in the Display Settings.</p><p>The architecture of this feature is similar to adding a regular Service. 
First, you need to declare the new component in the Manifest:</p><pre>&lt;service<br> android:name=&quot;.MyDreamService&quot;<br> android:exported=&quot;true&quot;<br> android:description=&quot;@string/dream_service_description&quot;<br> android:icon=&quot;@drawable/dream_service_icon&quot;<br> android:label=&quot;@string/dream_service_label&quot;<br> android:permission=&quot;android.permission.BIND_DREAM_SERVICE&quot;&gt;<br><br> &lt;intent-filter&gt;<br>    &lt;action android:name=&quot;android.service.dreams.DreamService&quot; /&gt;<br>    &lt;category android:name=&quot;android.intent.category.DEFAULT&quot; /&gt;<br> &lt;/intent-filter&gt;<br>&lt;/service&gt;</pre><p><a href="https://developer.android.com/reference/android/service/dreams/DreamService">The official documentation shows a simple example</a> of implementing the UI for a DreamService:</p><pre>class MyDreamService : DreamService() {<br><br>    override fun onAttachedToWindow() {<br>        super.onAttachedToWindow()<br><br>        isInteractive = false<br>        isFullscreen = true<br>        setContentView(R.layout.dream_service_layout)<br>    }<br>}</pre><p>After that, you can see your daydreamer in action:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*hTA0SLIXx3lqFhfrMBIJDw.png" /></figure><p>The question that came to mind: can I create the UI in Compose? Let’s try to implement it.</p><p>Since setContentView requires only a View, we have to make a ComposeView:</p><pre>val composeView = ComposeView(this).apply {<br>    setContent {<br>        DreamServiceTheme {<br>            DreamScreen(<br>                onOpenSettings = { openSettingsAndExitDream() }<br>            )<br>        }<br>    }<br>}<br><br>setContentView(composeView)</pre><p>Here’s the tricky part: an Android Service doesn’t really work with Views, and for this reason the Service class misses some APIs that are important for us. 
DreamService doesn’t implement:</p><ul><li>ViewTreeLifecycleOwner</li><li>SavedStateRegistryOwner</li></ul><p>To solve this issue, we can extend our DreamService with SavedStateRegistryOwner:</p><pre>class MyDreamService : DreamService(), SavedStateRegistryOwner</pre><p>That requires us to implement lifecycle and savedStateRegistry:</p><pre>class MyDreamService : DreamService(), SavedStateRegistryOwner {<br><br>    private val lifecycleRegistry = LifecycleRegistry(this)<br>    private val savedStateRegistryController = SavedStateRegistryController.create(this)<br><br>    override val lifecycle: Lifecycle<br>        get() = lifecycleRegistry<br>    override val savedStateRegistry: SavedStateRegistry<br>        get() = savedStateRegistryController.savedStateRegistry<br>        <br>    // ...  <br>}</pre><p>You can drive these two properties from each DreamService callback so that the lifecycle stays in sync with the ComposeView. Then we can simply set the owner and registry on your ComposeView:</p><pre>val composeView = ComposeView(this).apply {<br>    setViewTreeLifecycleOwner(this@MyDreamService)<br>    setViewTreeSavedStateRegistryOwner(this@MyDreamService)<br>    // ... <br>}</pre><p>Hopefully this feature enriches your answers during technical interviews, since you now know that an Activity is not the only component that can present UI.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=3aa499526248" width="1" height="1" alt=""><hr><p><a href="https://proandroiddev.com/finally-find-the-android-service-of-your-dreams-after-13-years-3aa499526248">Finally find the Android Service of your dreams after 13 years</a> was originally published in <a href="https://proandroiddev.com">ProAndroidDev</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
    </channel>
</rss>