How to Implement Declarative Deletion

At meshcloud we've implemented a declarative API and the biggest challenge has been the declarative deletion of objects.

That's why in this blog post I want to answer the question:

How do I implement deletion for a declarative API?

In this post I cover the challenges we ran into, how other systems solve this and which solution we applied at meshcloud.

Shoot through to my blog post about implementing a declarative API if you want to start at the beginning of this two-part endeavour.

Let's start by having a look at the advantages of handling deletion declaratively:

Why handle deletion declaratively?

If your use-case fits a declarative API you should also think about the deletion of objects.

If an external system syncs objects into your system and also ensures that objects are deleted when they are no longer present in the primary system, a declarative API simplifies client code a lot.

If deletion can only be executed imperatively, the client has to find out which objects actually have to be deleted in the target system and call the delete endpoint for each of those objects.

Let's have a look at the group synchronization process:

sync of groups from the IdP to your system

As you can see, the client somehow needs to build a diff between the desired state and the actual state in the target system to determine which objects have to be deleted. Moving this complex logic to the server side means the implementation effort is spent only once instead of n times for n different clients.
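
To make that effort a bit more tangible, here is a minimal sketch of such a diff in Kotlin (the Group type and the function are made up for illustration):

// Minimal sketch (made-up types): the diff every client would otherwise have to
// compute before it could call an imperative delete endpoint.
data class Group(val id: String, val displayName: String, val members: List<String>)

fun findGroupsToDelete(desired: List<Group>, actual: List<Group>): List<Group> {
    val desiredIds = desired.map { it.id }.toSet()
    // everything that exists in the target system but is missing from the desired state
    return actual.filter { it.id !in desiredIds }
}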

Additionally, it can provide a big performance improvement, as quite a lot of data might be needed to do the diff. If that data only needs to be handled by the backend, you get rid of the network latency and bandwidth limitations of transferring it to the client.

When handling it on the client, you may also struggle with outdated data, as fetching and processing the current state takes time during which the state may already have changed in the target system.

How to implement deletion for a declarative API

The central conceptual question for deletion is

"How can you identify which objects need to be deleted?".

As more than one client will be using your API, a central concept is grouping objects into a Declarative Set. If an item in this set is missing from a request, it will be deleted.

Items from other sets remain untouched. To understand how to solve declarative deletion, let's have a look at actual implementations that are productively in use.

Another implementation challenge is an efficient algorithm for comparing the actual state with the desired state. This topic isn't covered in this blog post, but aspects like reducing the number of DB queries, doing bulk queries for the objects that are part of the Declarative Set, as well as bulk deletions should be considered.

How do existing declarative systems handle declarative deletion?

Terraform

In Terraform, you define a tf file that contains the desired state. When this state is applied via terraform, it writes a tfstate file locally or to a shared remote location. That way Terraform knows which state is actually expected in the target systems. It can create a diff between the recorded state (tfstate) and what is expected to be the new state (tf). This requires an up-to-date state file. Terraform deletes all resources that have been present in tfstate but are no longer present in tf.

The tf file is basically what I described as a Declarative Set before. So even with a shared remote location that keeps track of the state, the state is always related to the tf file, and Terraform will only ever delete resources that are in the related tfstate.

Terraform also provides a terraform plan command, which will show you the actual changes Terraform would apply. That way you can verify whether the intended changes will be applied.

Additionally, Terraform provides an imperative CLI command to delete all resources in a tf file or, with the optional -target flag, only specific resources:

terraform destroy -target RESOURCE_TYPE.NAME -target RESOURCE_TYPE2.NAME

Kubernetes

The recommended approach in Kubernetes is using the imperative kubectl delete command to delete individual objects. It is recommended because it is explicit about which objects will be deleted (either a specific object or a reference to a yaml file).

But Kubernetes also supports the declarative approach. You have to kubectl apply a complete configuration folder. Additionally, you have to use the alpha option --prune, which allows deleting all objects that are no longer present in the yaml files inside that folder (see the Kubernetes docs). The different ways of object management in Kubernetes are described very well here.

The kubectl apply command also provides an option for a dry-run to verify whether the intended changes will be applied.

How Kubernetes actually handles declarative deletion is explained in great detail in their documentation.

Here's a brief summary:

Kubernetes saves the complete state on the server side. When doing a kubectl apply --prune you have to either provide a label with -l <label> or refer to all resources in the namespaces used in the folder's yaml files via --all. Both ways match what I described as a Declarative Set before. They make it possible for Kubernetes to know which objects actually belong to the set of resources that are part of the intended desired state. So when you apply again, e.g. with the same label, it will simply delete all objects with this label if they no longer exist in the folder. Using the label is the safer approach, as simply deleting every no-longer-present resource in all namespaces referenced in the folder's yaml files is rather dangerous.
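
For illustration, such a prune-based apply could look like this (the folder and label are just examples):

kubectl apply -f ./manifests/ --prune -l app=team-blue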

Kubernetes also uses an interesting approach to determine what needs to be updated in an object. It stores the configuration that was present in the yaml file in a last-applied-configuration annotation on the object. That way it will, for example, only delete attributes that were present in a previous apply but are not anymore. It does not overwrite other attributes that have only been set via imperative commands. So it does an actual patch, based on what was applied before and what is currently present in the yaml file.

The determination of what changes need to be applied has traditionally been done in the kubectl CLI tool. But recently a server-side apply implementation has been released as GA.

Sadly, they haven't touched multi-object apply yet; it remains in the kubectl CLI. So actual declarative object deletion is not part of the server-side implementation.

AWS CloudFormation

AWS CloudFormation uses so-called Stacks to group created resources. The Stack is what I called a Declarative Set before. You can update an existing Stack by applying a CloudFormation template to it. You can modify the template and re-apply it. This modification can also remove resources; they will then be deleted in AWS when the template is applied to the Stack again.

How does meshStack handle it?

In meshStack we provide an API that takes a list of meshObjects and creates and updates them. If you want to apply declarative deletion you have to provide a meshObjectCollection. This is how we decided to implement the Declarative Set I mentioned before, similar to adding a label to kubectl apply or using a Stack in AWS. meshObjects no longer present in the request will be deleted if they had been applied before with the same meshObjectCollection parameter. The check in our backend is rather simple, as we just added a meshObjectCollection field to our entities and can query by it. That way we can do a diff between what is applied in the request and what already existed before.
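
To sketch the idea (with hypothetical, heavily simplified names, not our actual code):

// Hypothetical, simplified sketch of the server-side diff by meshObjectCollection.
data class MeshObject(val id: String, val collection: String)

interface MeshObjectRepository {
    fun findAllByCollection(collection: String): List<MeshObject>
    fun deleteAll(objects: List<MeshObject>)
    fun createOrUpdate(obj: MeshObject)
}

fun apply(repo: MeshObjectRepository, collection: String, desired: List<MeshObject>) {
    val existing = repo.findAllByCollection(collection)
    val desiredIds = desired.map { it.id }.toSet()

    // objects applied earlier with this collection but missing from the request are deleted
    repo.deleteAll(existing.filter { it.id !in desiredIds })

    // everything else is a create-or-update per object
    desired.forEach { repo.createOrUpdate(it) }
}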

When actually executing the import of meshObjects, the result contains information about whether meshStack was able to successfully reconcile the desiredState or which error occurred for a specific meshObject. A failed import of one meshObject won't break the complete process; the response simply contains details about the import result of every single meshObject.

Currently we only support access to our API for a few external systems that integrate with meshStack. Once we provide a CLI and every user can use the API that way, meshObjectCollections will be assigned to projects. That way a clear separation of access to meshObjectCollections can be guaranteed.

Scalability of this approach is definitely a topic we will have to address in the future. With the current approach, all objects have to be applied in a single request. If a huge number of objects is processed in one request, timeouts or other performance issues could arise. A possible solution we are thinking about is doing it similarly to Kubernetes: you define your intended objects in several yaml files inside a folder, upload those to meshStack via a CLI or a corresponding REST call, and processing is done asynchronously once all files are uploaded.

When to use declarative deletion

Deletion support in a declarative API can be a really nice comfort feature for your clients, as a lot of complexity will be removed for them.

declarative synchronization

Removing objects that are no longer present in the request will be handled by the declarative system.

The downside of declarative deletion is that it is implicit and can easily result in removing objects that were not intended to be removed, just because they were not part of the request anymore. As long as you are managing stateless objects, it might be fine to take that risk, as you could simply recreate them with the next request. If objects are stateful (e.g. databases or volumes), it might be a bad idea to remove the resource and recreate it again, because all data will be gone. But even in stateful use-cases you can reduce the risk by:

  • making declarative deletion explicit via an additional parameter that needs to be provided, so the client is actually aware that declarative deletion will be applied (see the sketch after this list).
  • excluding certain objects from deletion (e.g. volumes and databases). Those can only be deleted in an imperative way. It could also be an option to let the end-user decide which objects should be excluded from declarative deletion by flagging them accordingly.
  • implementing a dry-run functionality like terraform plan that shows the client which state will be applied in the end. This is a good option when providing access to the declarative API via a CLI, for example. In the case of an automated integration between two systems, it is not helpful, as there is no-one to check the result of the dry-run. Still, some automation might be possible, but that would again require complex logic on the client side, which we wanted to avoid in the first place.
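
A rough sketch of what such an explicit opt-in and dry-run could look like on the API surface (the parameter names are made up):

// Hypothetical request shape: declarative deletion and dry-run are explicit opt-ins.
data class ApplyRequest(
    val objects: List<MeshObject>,            // the desired state (MeshObject as sketched above)
    val meshObjectCollection: String? = null, // the Declarative Set; required if prune is true
    val prune: Boolean = false,               // opt-in: delete objects missing from this request
    val dryRun: Boolean = false               // only report the resulting changes, don't apply them
)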

In general it makes sense to additionally provide imperative deletion, as declarative deletion should always be an opt-in the client explicitly has to choose. The client needs to be aware of the consequences declarative deletion implies and might need to be extra careful to always provide a complete list of resources.


Should I provide a declarative API? (You probably should)

Declarative APIs are becoming more and more popular, especially in the context of Infrastructure as Code.

At meshcloud we've implemented a declarative API. In this post I want to provide insights into the process and answer these questions:

  • Does it make sense to provide declarative APIs for all systems?
  • Which use-cases benefit from it and which don't?

But first things first, let's start with a look at what a declarative API actually is all about:

What is a declarative API?

At first, let's have a look at the classical way of implementing an API: imperatively. With imperative APIs you have to provide dedicated commands the system has to execute: create this VM, update the memory settings of this VM, remove a certain network from this VM, etc.

A declarative API is a desired state system. You provide a certain state you want the system to create. You don't care about all the steps needed to achieve that state. You just tell the system: "Please make sure that the state I provide will be there."

This approach is best known from Infrastructure as Code tools like Kubernetes or Terraform. You tell them that you want a certain set of resources with some given configuration. They take care of creating and updating the resources.

Why provide a declarative API?

With a declarative API you move complexity from the consumer of the system to the system itself. Creating, updating and even deleting objects is no longer a client concern.

That means that, for some use-cases, a declarative API lets you offer a much simpler API to your consumers. This results in fewer errors due to misunderstandings between client and API provider.

It is for sure not the ideal solution for all use-cases, but more about that later.

Let's have a look at an example that shows how a declarative API can simplify the consumer's interaction with your system.

Example: Synchronizing Groups

Imagine you have a user group synchronized between a central Identity Provider (IdP) and your system (target). A group would have these properties:

Group:
    id = "123-456"
    displayName = "My Group"
    members = ["uid1", "uid2"]

Your system provides integrations for multiple IdPs, and multiple clients of your API exist. These clients should focus on getting information from the IdP - and then on getting it into your target system.

Solving it with an Imperative API

Now imagine that you have an imperative API in the target system: What do you have to do to always keep all groups in sync?

Let's at first have a look at the operations that are available:

  • createGroup: Creates a group with the given attributes. If a group already exists, an error is returned.
  • updateGroup: Updates an existing group with the given attributes. If the group does not exist yet, it returns an error.
  • deleteGroup: Deletes a group by its id.
  • getAllGroups: Returns all groups that are available in the target system. This endpoint could provide some filter options, but this is not relevant for this blog post.

What you need to do for a full sync of groups from the IdP to your system:

sync of groups from the IdP to your system

As you can see, creating such a synchronization process is a rather complex task, especially in the given example, where you want a lightweight solution to implement multiple clients for the different IdPs.

This complexity requires a lot of effort every time you integrate a new IdP at a customer.
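
To get a feeling for that effort, here is a rough Kotlin sketch of such a client-side sync against the four operations listed above (the types mirror this post's example, everything else is made up):

data class Group(val id: String, val displayName: String, val members: List<String>)

// the four imperative operations listed above
interface TargetSystemApi {
    fun createGroup(group: Group)
    fun updateGroup(group: Group)
    fun deleteGroup(id: String)
    fun getAllGroups(): List<Group>
}

// rough sketch of the full sync every client has to implement on its own
fun syncGroups(idpGroups: List<Group>, target: TargetSystemApi) {
    val existing = target.getAllGroups().associateBy { it.id }
    val desiredIds = idpGroups.map { it.id }.toSet()

    for (group in idpGroups) {
        if (group.id in existing) target.updateGroup(group)  // already there: update it
        else target.createGroup(group)                       // new in the IdP: create it
    }

    // everything that exists in the target but is gone from the IdP must be deleted
    existing.values
        .filter { it.id !in desiredIds }
        .forEach { target.deleteGroup(it.id) }
}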

Solving it with a declarative API

In cases like the group sync, moving all the complex update logic to the target system and providing a declarative API simplifies the client a lot.

What would a declarative API look like?

  • applyGroups: You only need this one operation, to which you provide a list of groups like this:
[
    {
        id: "123-456",
        groupName: "developers",
        members: ["dev1", "dev2"]
    },
    {
        id: "456-789",
        groupName: "managers",
        members: ["manager1"]
    },
    {
        id: "789-132",
        groupName: "operators",
        members: ["op1"]
    }
]

On the next synchronization dev2 became a manager and has been moved to the managers group. In addition, the operators group has been removed, as the company decided to go with a DevOps team. So op1 has been moved to the developers group and the developers group has been renamed to devops. All you have to do is run the exact same process as before, which is:

declarative synchronization

That means the second call will be looking like this:

[
    {
        id: "123-456",
        groupName: "devops",
        members: ["dev1", "op1"]
    },
    {
        id: "456-789",
        groupName: "managers",
        members: ["manager1", "dev2"]
    }
]

The target system will take care of removing dev2 from the developers group and adding it to the managers group. It will rename the developers group to devops and add member op1 to that group. It will also take care of removing the operators group.

For sure, all the logic mentioned in the imperative approach now needs to be implemented in your target system. BUT you will only have to implement it once.

You may even come up with an architecture of your system that simplifies implementation of that declarative approach further.

This example is limited to a holistic synchronization of all groups all the time. If you have multiple sources you get your input from, you need some kind of bucket for the groups coming from one system. You could, e.g., simply add another input to the applyGroups function that allows submitting an additional bucket parameter. That allows your system to only take the groups related to the given bucket into consideration when consolidating which groups to create, update or delete.
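
A hedged sketch of what that could look like (bucket is just an illustrative name):

interface GroupApi {
    // the bucket scopes which groups are considered for create, update and delete,
    // so groups coming from different source systems don't interfere with each other
    fun applyGroups(bucket: String, groups: List<Group>)
}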

As this is especially relevant for deleting groups, more details about this will be part of my upcoming blog post on "How to implement deletion for a declarative API?".

The as-code advantage

Another great advantage of the declarative approach is that you can store what you applied in a version control system (VCS). That gives you several benefits:

  • You have a full history of all changes
  • You have a nice overview of the expected state in a system.
  • You can work cooperatively on the desired state with a whole team.

Kubernetes or Terraform are good, real-world examples of storing the desired state in a VCS. The Kubernetes YAML files and the Terraform files are usually stored in a VCS.

If, in contrast, you only apply imperative commands to the target system, you always have to ask the target system about the current state. You may also overwrite or reapply changes someone else made before.

Declarative vs Imperative API, opponents or a nice team?

You may ask yourself: Should I build a completely declarative system without providing any imperative commands?

In some rare cases that might be the right thing to do. In most cases it makes much more sense to combine an imperative and a declarative API. Actually that is what the big Infrastructure as Code tools do. In the Kubernetes documentation you can find an imperative as well as a declarative way of managing your Kubernetes objects.

Besides use-cases like the group synchronization or Infrastructure as Code that profit a lot from a declarative system, there are also use-cases that benefit from an imperative API.

So you should decide, depending on the different use-cases in your system, which API fits which case better.

Example for less effort with Imperative API

Let's extend our previous example with the user groups and say you need these groups to assign them to projects in your system. These projects cannot simply be created in your system, but a central workflow tool with an approval process must be used to create new projects. After the workflow has completed, this tool creates the corresponding project in your system via your API. Afterwards the project is only maintained in your system; no updates come from the workflow tool.

project creation workflow

An imperative approach with an operation createProject is just the right thing for that use-case. Sure, it would also work with a declarative approach: you'd just call the apply operation once to create the project. But as only a single create operation is done, there is no need for the complex handling of updating or deleting existing projects. So you can save quite a lot of effort by not implementing the declarative handling in your system if it won't be used.

Example for simpler client implementation with Imperative API

Another example is updating only a certain attribute of an object, e.g. the project. Let's say the project has some tags that can be set on it. Tags are simple key/value pairs. If you want to update them in a declarative API, you have to provide the complete project like this:

Project:
    id: "123-456",
    displayName: "My Project",
    assignedGroups: [
        "456-789"
    ],
    tags: [
        environment: "dev"
    ]

But if a system - that only knows about the project id and the tags - provides the tags, how does it get the other information it needs to update the project? It would have to call a get endpoint on the target system first to fetch all data of the project. Then it could set the new tags and update the project.

declarative attribute update

For this use-case an imperative API makes the client's life much easier. It could just call an imperative operation like addTag or setTags.

target addTag "environment" to "prod"
target setTags ["environment" to "prod", "confidentiality" to "internal"]
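
Sketched as an interface (following the operation names above, everything else is hypothetical), this could look like:

interface ProjectApi {
    // partial, imperative update: the caller only needs the project id and the tags
    fun addTag(projectId: String, key: String, value: String)
    fun setTags(projectId: String, tags: Map<String, String>)
}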

When should you provide a declarative API?

In the end you have to decide per use-case which kind of API design fits best. The following comparison lists some advantages of the two approaches that can help you make that decision.

Imperative API

  • finer control on client side
  • fits perfectly fine when only creating new objects and not keeping these objects in sync after creation
  • easier to use when only updating partial data (especially when attributes of an object are sourced from multiple different systems)

Declarative API

  • way easier client code for data synchronization scenarios as the client only needs to provide the desired state and the rest is done by the target system
  • version control option for desired state
  • single point of implementation at server side
  • hide complexity of creating a certain state in the backend. Clients don't have to care about that complexity

In general, I think the following rule of thumb can also help with that decision:

  • Use a declarative API if you want to keep objects in sync with a central source. This is especially true if you have multiple clients of that API, as the complex handling of updates and deletion only have to be implemented once centrally in your system.
  • Use an imperative API for one-time operations, where you just want to create, update or delete one specific thing and afterwards don't care about the object's lifecycle anymore.

But now I'd like to hear from you: Have you implemented a declarative API? What challenges did you come across? Did I miss anything in my blog post? Let me know!

I recommend visiting our blog, where you will find many interesting posts on engineering topics, e.g. our guides to TLS/SSL certificates or the guide to testing IaC.


Kotlin vs. Java Implementation of a Spring Boot application

A good approach for showing the differences between Java and Kotlin is creating an example application. So I created a simple Spring Boot application and converted it to 100% Kotlin. The project can be found on GitHub: the java branch contains the initial Java application and the master branch contains the converted Kotlin application. Comparing the implementations shows the advantages of Kotlin over Java pretty well. I already described some of the benefits you can gain when switching from Java to Kotlin in one of my last blog posts. A list of all advantages that are also reflected in the example project can be found on the slides I created alongside the example project.

Some statistics regarding the conversion are already pretty interesting, as the following table shows.

           LoC   LoC without data classes
Java       460   232
Kotlin     252   185
Reduction  46%   21%

Of course, data classes lead to the largest reduction of LoC. But even the 21% code reduction without data classes shows that there is much less boilerplate code you have to write and that can distract you.

One of my favorite advantages shown in the example project is the usage of a when expression in Kotlin instead of a switch-case statement in Java.

Java:

private double getPlatformCost(Resource resource) {
    double cost = 0;
    switch (resource.getType()) {
        case OPENSTACK:
            cost = 0.03;
            break;
        case CLOUDFOUNDRY:
            cost = 0.02;
            break;
        case KUBERNETES:
            cost = 0.01;
            break;
        default:
            break;
    }
    return resource.durationInHours() * cost;
}

Kotlin:

private fun getPlatformCost(resource: Resource): Double {
    val cost: Double = when (resource.type) {
        ResourceType.OPENSTACK -> 0.03
        ResourceType.CLOUDFOUNDRY -> 0.02
        ResourceType.KUBERNETES -> 0.01
    }
    return resource.durationInHours() * cost
}

By simply checking out both branches of the example project, you can easily compare class by class how much cleaner and more concise the Kotlin code is.

The example can also be used as a reference implementation for a Spring Boot 1.x application with Kotlin. Spring Boot 2.x already supports Kotlin and you can generate a basic application using Kotlin on start.spring.io. When creating a Spring Boot 1.x application with Kotlin, some specialties have to be considered. This example project can be used to see how this all fits together in an actual Spring Boot project.


Tips for using Kotlin with Spring Boot 1.x

Even though full Kotlin support is only available starting with Spring 5, writing Spring 4 applications in Kotlin already works pretty well. You just have to know some of the specialties of this combination. At meshcloud, we have been working with Kotlin and Spring Boot 1.5, which is based on Spring 4, successfully for several months. In this post, we want to share some guidance about small stumbling blocks we encountered and how we got around them.

1. Spring components have to be open

All classes defined in Kotlin are final by default. That means you cannot override their methods in child classes. As Spring uses a proxy approach to provide its components, all public methods of Spring components must be overridable in child classes. The explicit solution to this is pretty simple: you mark your class and all public methods with the keyword open.

open class PersonController {
  @GetMapping
  open fun findAll() {
    …
  }
}

A less obvious, but definitely much more comfortable way is using the Kotlin-Spring compiler plugin, which makes all Spring components open by default. As exceptions caused by missing open declarations can be very confusing and don't point to the actual problem, we definitely recommend using the plugin; it will protect you from frustrating error analysis. IntelliJ detects the presence of this plugin in your Gradle or Maven configuration and will automatically highlight any redundant open modifiers and missing final declarations, for example for methods called from constructors.
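
For example, with the Gradle Kotlin DSL, enabling the plugin is a one-liner (the version shown is illustrative, use the one matching your Kotlin compiler):

plugins {
    // kotlin-spring opens all classes annotated with Spring annotations by default
    kotlin("plugin.spring") version "1.2.71" // illustrative version
}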

2. Handling Autowired properties

If you are using property injection, auto conversion to Kotlin by IntelliJ results in converting these properties to nullable values.

open class PersonController {
  @Autowired
  private var personRepository: PersonRepository? = null
}

In order to take advantage of Kotlin's support for explicit nullability, there are two solutions for refactoring this code.

1. Use lateinit vars

private lateinit var personRepository: PersonRepository

This allows making the autowired property non-nullable. It tells the compiler that the property will be set before it is actually accessed. The Kotlin compiler will emit code that checks that the variable is initialized by the time it gets used and throws an exception if it isn't.

2. Use constructor injection

open class PersonController(
  private val personRepository: PersonRepository
){ ... }

This is our preferred solution, because it allows us to define the property as immutable, and we don't need the @Autowired annotation anymore. If the injection fails, an exception occurs during object creation and not somewhere later in the code.

Even fields annotated with @Value for configuration data can be injected in the constructor. One thing to consider here is that Kotlin uses the $ sign for string templates. Therefore the $ in the @Value annotation needs to be escaped explicitly.

open class PersonController(
  @Value("\${mail.host}")
  private val host: String
){ ... }

3. HATEOAS ControllerLinkBuilder.methodOn and null parameters

At least with Spring 4, using the ControllerLinkBuilder has some downsides. When the controller method has parameters like a principal or an @RequestBody, you usually submit null in the methodOn call. Let's say you first convert the Controller to Kotlin:

open class PersonController {
  @GetMapping
  open fun findAllOwnedByCurrentUser(principal: Principal) {
    …
  }
}

In the Controller, we want our parameters to be non-nullable. As long as the class using the ControllerLinkBuilder is written in Java, everything works fine and null can be submitted, because the method is not actually called. So keeping those calling classes in Java would be one possible solution.

As we currently want to convert all our code to Kotlin, we use the following solution for now. We actually define the parameters as nullable in the controller and enforce their existence at the beginning of the method (thanks to Kotlin's smart casts!).

open class PersonController {
  @GetMapping
  open fun findAllOwnedByCurrentUser(principal: Principal?) {
    principal!!
    ...
  }
}

This solution allows us to use the parameter like a non-nullable parameter within the function. As soon as a better solution is available, we can easily remove this check.

As Spring 5 has native Kotlin support, we expect this issue to be solved when switching to it.

4. Default Constructor for JPA entities

Even though this is a JPA-related topic, it is worth mentioning in the context of Spring Boot, as many of those applications use JPA. JPA always requires a default constructor on entities. A @PersistenceConstructor annotation exists that should generate this constructor for you.

@Entity
data class Person @PersistenceConstructor constructor (
  val firstName: String,
  val lastName: String
)

Sadly, this didn't work for us in all cases. We switched to the solution of defining default values for all properties of an entity. That way a default constructor is available for JPA:

@Entity
open class Person (
  val firstName: String = "",
  val lastName: String = ""
)

By now there is also a Kotlin-JPA compiler plugin available that makes the default constructor available for JPA automatically. With this approach, simple classes and data classes can be used for entities.
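
Enabling it works analogously to the Kotlin-Spring plugin above (again Gradle Kotlin DSL, version illustrative):

plugins {
    // kotlin-jpa generates a default (no-arg) constructor for classes annotated with @Entity
    kotlin("plugin.jpa") version "1.2.71" // illustrative version
}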


Our Top 5 instant benefits of the Java to Kotlin transition

I started working at meshcloud about one month ago and from the first day on, I was writing Kotlin code without ever having seen a line of Kotlin before. The most amazing thing about Kotlin is how quickly and seamlessly you can shift from Java to Kotlin. There was not a single day on which I felt lost with the new language. Of course, I currently only touch a small part of what Kotlin can do. But these few percent already bring a tremendous gain in productivity and come with almost no learning curve at all.

What is it that makes me talk about Kotlin so overwhelmingly? I want to explain it with our Top 5 advantages you can directly benefit from when switching from Java to Kotlin.

1. Null Safety

The biggest advantage of Kotlin is Null Safety in my opinion. It allows you to get rid of all those ugly null checks you have to write in Java. The time of NullPointerExceptions (NPEs) belongs to the past with Kotlin. Variables are not nullable by default in Kotlin. If you need a nullable variable, you can define it with an additional “?”:

val myText: String?

If you now try to access this variable without a null check, you get a compile error. That way NPEs cannot occur during runtime. Kotlin also provides further support in using null checks. You can write something like the following:

val length = myText?.length

If myText is null, length will also be null. To find out more details on Null Safety, please have a look at the reference documentation.

2. Implicit Getters and Setters

You will never have to write default getters and setters in Kotlin again. They are available by default for all class properties. You can even access those properties directly and don't have to use explicit getter and setter calls:

class Person {
  var firstName: String = ""
  var lastName: String = ""
}

// access class Variables
val person = Person()
val firstName = person.firstName

Even though you access those properties directly, custom getters and setters are still applied:

// extend the Person class with a custom setter
var firstName: String = ""
  set(value) { field = "Kotlin" }

// set firstName
person.firstName = "Brian"
println(person.firstName) // prints: Kotlin

3. Data Classes

Data classes take the default getters and setters approach and extend it further. You also get implementations of the equals, hashCode, copy and toString methods based on the class properties.

data class Person(val firstName: String, val lastName: String)

That's all it takes to define a class with all the functionality you usually generated with your IDE in Java. As it was generated anyway, you can save those tens or hundreds of lines of code when using Kotlin. And of course, you can still override the default implementations.

4. String Templates

Kotlin provides a simple syntax to build strings. In Java you usually use string concatenation, string builders or String.format. All of these options have their drawbacks, especially regarding readability. With string templates in Kotlin, you simply write text with references to variables like this:

val text = "Hello ${person.firstName} ${person.lastName}!"

You can directly see what this string will look like. In Java, you would most likely use String.format for that, which is harder to read:

String text = String.format("Hello %s %s!", person.getFirstName(), person.getLastName());

5. When Expressions

This is something you perhaps don’t need every day, but I always felt a bit awkward when writing switch-case statements or longer if-then-else cascades. Of course, you should avoid them in the first place but sometimes, e.g. for different handling of enum values, you need them. Kotlin provides a really nice and clean solution for this:

when {
  "Wayne" in lastNames -> println("Hello Batman!")
  "Kent" in lastNames -> println("Hi Superman!")
  lastNames.isEmpty() -> println("Nobody's here")
  else -> println("Hello!")
}

when expressions are also a lot more powerful than switch-case statements. The example above checks against a list to show this. You can even check completely different objects in the individual when conditions.

How to profit from Kotlin yourself

Integrating Kotlin into your existing Java application is just as easy as it is for a Java developer to read basic Kotlin code. For example, when using Maven, you only have to add the Kotlin Maven Plugin.
Now you can start writing Kotlin files which can also be used from within Java classes and vice versa.

At meshcloud we started with a plain Java Spring Boot application and are now converting it to Kotlin step by step. We write new classes in Kotlin and convert existing classes as we touch them. The best thing about converting existing classes is that IntelliJ IDEA provides an automatic conversion from Java to Kotlin. That way you don't even have to care about the basic conversion of existing Java files. Even when you copy a Java snippet into your Kotlin class, the IDE will ask you whether you want to convert this snippet to Kotlin.

Conclusion

Kotlin is often described as the better Java, and that is exactly what we experienced here at meshcloud. You get rid of lots of the boilerplate code you have to write in Java. We recommend that every Java developer have a look at Kotlin. There is almost no risk in trying it out due to the very low learning curve and the seamless integration into your existing Java application. We would be glad to hear about your experiences in switching to Kotlin in the comments section. We hope you enjoy it just as much as we do.