Compose started out as a smooth way of writing Android applications in Kotlin. Now that JetBrains has ported Compose to the desktop, it is easier than ever to prototype a UI in real time.

Of course, you can write a web-based UI or a single-page application, but sometimes, especially in the IoT world with a small cluster of Raspberry Pis, the easiest option is actually a small desktop UI.

A few months back, for a fast-paced PoC, I had to set up and monitor a lot of AWS EC2 instances, Lambdas, and Docker containers; I ended up writing a quick desktop UI in Compose just for that. And. it. works. great.

Today’s article is about plugging Compose into my de facto imaging/AI library, Origami, the only proper OpenCV wrapper for the JVM.

This article focuses on the ease of use of Compose and leaves the more advanced Origami features aside. We will write a UI with a drag-and-drop area that accepts an image; once the image is shown, two sliders will supply the values for threshold 1 and threshold 2 of the OpenCV Canny function.

The end result looks like this:

Preview of End Result

And if you have IntelliJ installed, the goal is to get you there in less than 15 minutes.

Project Setup

Without much ado, let’s start by creating a new Compose/Desktop project in IntelliJ.

Project Setup

Any recent stable JVM will do, but let’s stick with the stable JVM version 11.

Settings for the new project are straightforward: we keep the proposed settings as is and click Finish.

Setting settings

After you create the project and open the main.kt file, the setup should look like the screenshot below:

main.kt File

You can straight away kick-start the program by running the main function.

Kick-starting Program With Run MainKt

The default program simply displays a “Hello, World!” button that reacts to an onClick event.

Original Default Program Result

A bare-minimum setup is done, and you can play around with Compose widgets straight out of the box. What we want to do now is display an image loaded by Origami.

Loading an Image With Origami

To use Origami in your project, edit the build.gradle.kts file and add the new repositories:

Adding Origami Repo

In text, that gives a Gradle repositories section like the one below.

repositories {
    // repository URLs as shown in the screenshot above
    maven { url = uri("") }
    maven {
        url = uri("")
    }
    maven {
        url = uri("")
    }
}

And the dependencies:

Origami Dependencies

Origami core and the filters are separate artifacts, so you will need to add both to the project. Again, in Gradle text, that gives:

dependencies {
    // the two Origami artifacts, core and filters, with the
    // coordinates shown in the screenshot above
}

You’ll be asked to reload the Gradle project settings; this can be done by clicking the icon below:

Refresh Gradle Project Button

You can now import the Origami library and call its init function.

Import Origami Library
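In code, the initialization is a one-liner before any OpenCV object is created. This is a sketch, assuming the wrapper’s origami.Origami entry class, which is how the library exposes its init function:

```kotlin
import origami.Origami

fun main() {
    // Load the native OpenCV bindings once, before creating any Mat.
    Origami.init()
}
```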

And your main.kt file should show no errors and look like this:

main.kt Post-Origami Import

An image is quite easy to add in Kotlin/Compose with the Image composable.

Adding Image

Image expects a bitmap, while Origami, just like OpenCV, works with an object called Mat. So we need a small Kotlin function that converts a Mat to the expected bitmap directly via bytes.

fun asImageAsset(image: Mat): ImageBitmap {
    val bytes = MatOfByte()
    Imgcodecs.imencode(".jpg", image, bytes)
    val byteArray = ByteArray((image.total() * image.channels()).toInt())
    bytes.get(0, 0, byteArray)
    return org.jetbrains.skija.Image.makeFromEncoded(byteArray).asImageBitmap()
}
We encode the OpenCV Mat object into bytes representing the JPG version of the image, then load those bytes into an ImageBitmap using makeFromEncoded.

Then, we can just read the image using the usual OpenCV imread and convert it to a bitmap.
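As a sketch (assuming the asImageAsset helper defined above and the standard Compose imports; the AndyImage name is just for illustration), the loading step can look like this:

```kotlin
import androidx.compose.foundation.Image
import androidx.compose.foundation.layout.fillMaxSize
import androidx.compose.runtime.Composable
import androidx.compose.ui.Modifier
import org.opencv.imgcodecs.Imgcodecs

// Read the image file with OpenCV's imread, convert the resulting Mat
// with the asImageAsset helper, and hand it to the Image composable.
@Composable
fun AndyImage() {
    Image(
        bitmap = asImageAsset(Imgcodecs.imread("andy.jpg")),
        contentDescription = "Andy",
        modifier = Modifier.fillMaxSize()
    )
}
```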


Your main.kt file should now look like this:

main.kt Post-Image Import

And if you run the Kotlin code with andy.jpg in your project folder, the window will look like the frame below:

andy.jpg Screenshot

You are now done loading images using Origami; let’s improve on this and apply the Canny filter.

Canny Effect

A Canny filter is used to quickly and easily detect contours in an image.

As you noticed in the previous exercise, loading a picture into the Compose window via the asImageAsset function puts you in Origami land, so you can apply any filter you want.

Here, let’s try the Canny filter by replacing the bitmap parameter of the Image with:

bitmap = asImageAsset(Canny().apply(Imgcodecs.imread("andy.jpg"))),

This will nicely give you:

Canny Effect Applied to Andy

Note that the source image is loaded statically, with a hardcoded file name. Let’s let the user load an image quickly with drag and drop.

Drag and Drop

Drag and drop is not yet natively supported by Kotlin/Compose, but we can make it work with a bit of glue.

Here we plug into the underlying, and terrifying, Java AWT framework. Once the window receives a file via a DropTarget, we change the value of a mutable name variable.

    val name = remember { mutableStateOf("") }
    val target = object : DropTarget() {
        override fun drop(evt: DropTargetDropEvent) {
            // accept the drop before reading the transferable
            evt.acceptDrop(DnDConstants.ACTION_COPY)
            val droppedFiles = evt.transferable.getTransferData(DataFlavor.javaFileListFlavor) as List<*>
            droppedFiles.first()?.let {
                name.value = (it as File).absolutePath
            }
        }
    }
    // the target is then attached to the underlying AWT window; the exact
    // receiver depends on your Compose for Desktop version, typically the
    // window's contentPane
    window.contentPane.dropTarget = target

After that, our application shows a text prompt if no image has been dropped yet, and the image otherwise. This is done with a quick check on the value of name.

There is no error handling here, so the dropped file had better be an image!
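If you do want a minimal guard, a quick extension check before calling imread is enough for a demo. looksLikeImage below is a hypothetical helper, not part of the article’s code:

```kotlin
// Hypothetical helper: accept only file names with a common image extension
// before handing the path to imread.
fun looksLikeImage(path: String): Boolean =
    path.substringAfterLast('.', "").lowercase() in setOf("jpg", "jpeg", "png", "bmp")

fun main() {
    println(looksLikeImage("andy.jpg"))  // true
    println(looksLikeImage("notes.txt")) // false
}
```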

MaterialTheme {
    if (name.value == "") {
        Text("Drop a file . . .")
    } else {
        Image(
            bitmap = asImageAsset(Canny().apply(Imgcodecs.imread(name.value))),
            contentDescription = "Icon",
            modifier = Modifier.fillMaxSize()
        )
    }
}

Running the application will give:

Drag and Drop UI

And once we have dropped the image file on the window, Andy and his banana are back for more bananas.

Andy Dropped In

We’re close! Now we would like to feed the parameters of the Canny function with values coming from graphical sliders, and redraw the image in real time.

Complete With Sliders

Let’s wrap up the rest of the code with the Compose sliders by creating our own custom component, MyCustomOrigamiComponent.

Here we simply take the values from the two sliders and use them as threshold1 and threshold2 for the Canny filter.

This component reuses the MutableState value from the drag-and-drop setup.

@Composable
fun MyCustomOrigamiComponent(name: MutableState<String>) {

    if (name.value == "") {
        Text("Drop a file . . .")
    } else {

        val value = remember { mutableStateOf(10.0F) }
        val value2 = remember { mutableStateOf(10.0F) }
        val filter = Canny()
        filter.threshold1 = value.value.toInt()
        filter.threshold2 = value2.value.toInt()

        Column {
            Slider(steps = 100, valueRange = 1f..250f, value = value.value, onValueChange = {
                value.value = it
            })
            Slider(steps = 100, valueRange = 1f..250f, value = value2.value, onValueChange = {
                value2.value = it
            })
            Image(
                bitmap = asImageAsset(filter.apply(Imgcodecs.imread(name.value))),
                contentDescription = "Icon",
                modifier = Modifier.fillMaxSize()
            )
        }
    }
}
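The slider-to-threshold conversion above is plain Kotlin; isolated as a standalone sketch (sliderToThreshold is a hypothetical name), it is just a clamp and a cast:

```kotlin
// Map a Compose Slider position (a Float in the 1..250 range) to an
// integer Canny threshold, clamping stray values into the valid range.
fun sliderToThreshold(position: Float): Int =
    position.coerceIn(1f, 250f).toInt()

fun main() {
    println(sliderToThreshold(10.0f)) // 10
    println(sliderToThreshold(300f))  // 250
}
```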

And now the core application code is just adding that CustomComponent directly in the top window.

MaterialTheme {
    MyCustomOrigamiComponent(name)
}

Now, by playing with the two sliders, you can see the image update instantly.

Slider Addition

Et voilà!
