Why we're sponsoring the Dhall Language Project
We're very happy to announce that meshcloud is the first corporate sponsor of the Dhall Language Project via Open Collective. In this post I want to explain how we came to use Dhall at meshcloud, what challenges it solves for us, and why we hope it will play a role in enabling software projects to more easily adapt to multi-cloud environments.
Enabling DevOps at scale
At the beginning of this year, we realized we had a challenge scaling configuration and operations of our software for customers. meshcloud helps enterprises become cloud-native organizations by enabling "DevOps at scale". Our tool helps hundreds or thousands of DevOps teams in an enterprise to provision and manage cloud environments like AWS Accounts or Azure Subscriptions for their projects while ensuring they are secured and monitored to the organization's standards.
Enabling DevOps teams with the shortest "time to cloud" possible involves the whole organization. Our product serves DevOps teams, IT Governance, Controlling and IT Management in large enterprises. That means meshcloud is an integration solution for a lot of things, so we need to be highly configurable.
Because we also manage private clouds (OpenStack, Cloud Foundry, OpenShift etc.) we often run on-premises and operate our software as a managed service. This presents unique challenges for our SRE team. Not only do we need to maintain and evolve configuration for our growing number of customers, but we also need to support deploying our own software on different infrastructures like OpenStack, AWS or Azure[1].
At the end of the day, this boils down to having good and scalable configuration management. After going through various stages of slinging around YAML with ever more advanced tricks, we realized we needed a more fundamental solution to really crack this challenge.
Configuration management at scale - powered by Dhall
The Dhall configuration language solves exactly this problem. It's a programmable configuration language that was built to express configuration - and just that. Dhall is decidedly not Turing complete. Its functional nature makes configuration easy to compose from a set of well-defined operations and ensures that configuration stays consistent.
Using Dhall allows us to compile and type check[2] all our configuration for all our customers before rolling things out. We use Dhall to compile everything we need to configure and deploy our software for a customer: Terraform, Ansible, Kubernetes templates, Spring Boot Config. We even use Dhall to automatically generate Concourse CI pipelines for continuous delivery of our product to customers.
Since adopting Dhall earlier this year, we measurably reduced our deployment defect rate. We feel more confident about configuration changes and can safely express configuration that affects multiple services in our software.
Empowering a Multi-Cloud Ecosystem
We believe that open-source software and open-source cloud platforms are crucial for enabling organizations to avoid vendor lock-in. Now that mature tools like Kubernetes exist and can do the heavy lifting, enabling portability between clouds has become a configuration management challenge.
What we found especially interesting about Dhall is that it's not just an "incremental" innovation atop existing configuration languages, like template generators, but instead looks at the problem from a new angle and tries to solve it at a more fundamental level. This is something we can relate to very well, as we're trying to solve multi-cloud management using an organization-as-code (like infrastructure-as-code) approach.
That's why we're happy to see Dhall innovating in this space and reached out to the Dhall community to explore ways we can support the project. We hope that providing a steady financial contribution will allow the community to further evolve the language, its tooling and its ecosystem.
Footnotes:
[1]: In this way, meshcloud is not only a multi-cloud management software but also a multi-cloud enabled software itself.
[2]: Dhall purists will want to point out that expressions are not compiled; instead, they're normalized.
Migrating a Galera Cluster with FlywayDB
Many developers rely on and love the easy database migrations FlywayDB provides. Some of you might also use a Galera Cluster on top of MySQL or MariaDB to provide better redundancy and high availability for your database backend. When using Cloud Foundry as the application runtime, this can cause undesired side effects:
When multiple Cloud Foundry app instances are deployed in parallel, we have observed sporadic deployment failures that looked like the DB migration was the culprit. Searching the logs showed several race conditions occurring while updating the schema when two or more app instances start simultaneously. This was unfortunate, since Flyway is supposed to support migrations during the start of multiple app instances, as stated in the Flyway FAQ:
Can multiple nodes migrate in parallel?
Yes! Flyway uses the locking technology of your database to coordinate multiple nodes. This ensures that even if multiple instances of your application attempt to migrate the database at the same time, it still works. Cluster configurations are fully supported.
It turns out that these locking mechanisms are not fully supported by Galera:
Table Locking
Galera Cluster does not support table locking, as they conflict with multi-master replication. As such, the LOCK TABLES and UNLOCK TABLES queries are not supported. This also applies to lock functions, such as GET_LOCK() and RELEASE_LOCK()... for the same reason.
(https://mariadb.com/kb/en/mariadb-galera-cluster-known-limitations/#limitations-from-codershipcom)
That leads to the observed problems during migration. This blog post shows an easy fix to the problem.
Since a fix in Galera seemed unlikely, we needed a reliable way to safely perform database migrations. A single-node deployment worked flawlessly, so we came up with the following solution: Spring allows us to provide a custom FlywayMigrationStrategy. Combined with the CF_INSTANCE_INDEX environment variable set by Cloud Foundry, we can detect in which app instance we are currently running and perform the migration only if we are app instance zero (or if no index is set at all, to allow migration on local developer machines).
This led us to the following code snippet fixing the problem:
import mu.KotlinLogging
import org.flywaydb.core.Flyway
import org.springframework.boot.autoconfigure.flyway.FlywayMigrationStrategy
import java.util.concurrent.TimeoutException

// Assumption: the log syntax below suggests the kotlin-logging library.
private val log = KotlinLogging.logger {}

// Register this class as a Spring bean (e.g. via @Component) so that
// Spring Boot picks it up as the FlywayMigrationStrategy.
class FlywayCustomMigrationConfig : FlywayMigrationStrategy {

    val maxRetries = 60
    val waitBeforeRetryMs = 1000L

    override fun migrate(flyway: Flyway) {
        // CF_INSTANCE_INDEX is only set on Cloud Foundry, so guard against null
        // for local developer machines.
        val applicationIndex = System.getenv("CF_INSTANCE_INDEX")?.toIntOrNull()
        val shouldMigrate = when (applicationIndex) {
            0, null -> true
            else -> false
        }
        log.debug { "CF_INSTANCE_INDEX found: $applicationIndex." }

        if (shouldMigrate) {
            log.info { "DB migration preconditions match. Performing migration." }
            flyway.migrate()
        } else {
            log.info { "Application index is $applicationIndex. Waiting for primary app instance with index 0 to perform migration." }
            waitForMigrationsOnPrimaryAppInstance(flyway)
        }
    }

    private fun waitForMigrationsOnPrimaryAppInstance(flyway: Flyway) {
        for (i in 1..maxRetries) {
            val pending = flyway.info().pending()
            if (pending.isEmpty()) {
                log.info { "Migrations completed on primary app instance, will start." }
                return
            }

            log.info { "Waiting for ${pending.size} migrations to complete on primary app instance (retried $i times)." }
            Thread.sleep(waitBeforeRetryMs)
        }

        throw TimeoutException("Exceeded $maxRetries retries waiting for migrations to complete on primary app instance.")
    }
}
Let's talk about Kotlin!
Yeah! We are part of the Talking Kotlin podcast! In mid-December we had a nice little recording session, where two of our meshPeople talked about their experiences during our Kotlin adoption in the meshcloud backend.
While we started with a plain Java Spring Boot application, we have reached about 70% Kotlin code by now. Stefan already highlighted the greatest Kotlin benefits in one of his previous blog posts.
In the podcast, Johannes and Stefan talk in detail about their experiences and challenges during the adoption of Kotlin. They also talk about the advantages and requirements of Spring Boot, using the implementation of a simple Spring Boot application in Java vs. Kotlin as an example.
You can listen to it here: http://talkingkotlin.com/kotlin-on-the-backend-at-meshcloud/
We hope you enjoy it just as much as we did. Do you have any questions or comments? Feel free to leave a comment.
Running Cron Jobs on Cloud Foundry
Many cloud applications need to run some sort of scheduled tasks for chores like processing statistics or doing internal housekeeping. There are two different strategies to implement scheduled tasks for cloud applications running on Cloud Foundry: you can either build scheduling and tasks into the application itself, or schedule and run the task in separate containers.
Some application frameworks like Spring include built-in scheduling support. However, this scheduling support does not include a distributed coordination mechanism. This means that when an application is horizontally scaled to multiple instances, each instance will run the scheduled task individually. Depending on the nature of the task, this may cause observable side effects, like emails to your customers being sent repeatedly.
It's thus preferable to have a central entity for scheduling. You could of course use e.g. a Java Spring app that needs approximately 1 GB of RAM to do that for you, but that would be very wasteful. Instead, we can build a simple cron scheduler that runs on 16 MB of RAM to get reliable task scheduling for just a few cents per month.
The task scheduler can then execute arbitrary scripts or code, for example to:
- invoke an https endpoint on your application to perform the task
- queue a message on RabbitMQ for processing by a worker
- trigger execution of the job in a separate Cloud Foundry Task container
meshcloud's cf-cron scheduler
Our sample repository demonstrates how to run scheduled tasks on Cloud Foundry with a very small footprint (8 to 16 MB RAM) using a traditional crontab. Traditional cron daemons need to run as root and have opinionated defaults for logging and error notifications. This makes them unsuitable for running in a containerized environment like Cloud Foundry. Instead of a system cron daemon, we're thus using supercronic to run our crontab.
How it works
This application is built using the binary buildpack and executes supercronic on the crontab file. The crontab file specifies all your cron jobs. To add additional jobs, simply add a new line specifying a schedule and command to the crontab.
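For illustration, a minimal crontab that would produce the log output shown below might look like this (the two entries are reconstructed from the logs; supercronic also accepts extended schedules with seconds and year fields, as the second entry shows):

# schedule followed by the command to run
*/1 * * * * cf --version
*/2 * * * * * * echo "hello world, every 2 seconds"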
Note: By default, supercronic will log all output to stderr, so we redirect that to stdout in our cf manifest.
You can also include additional scripts and binaries to execute more complex actions. This example allows you to install apt and debian packages to use in your cronjobs. You can specify these packages in apt.yml and they will be installed during staging by the apt-buildpack, courtesy of the magic multi-buildpack.
After cf pushing this sample app to Cloud Foundry, you can see that it happily executes the jobs from the crontab in the log output:
2018-03-05T10:59:00.00+0100 [APP/PROC/WEB/0] OUT time="2018-03-05T09:59:00Z" level=info msg=starting iteration=237 job.command="echo "hello world, every 2 seconds"" job.position=1 job.schedule="*/2 * * * * * *"
2018-03-05T10:59:00.00+0100 [APP/PROC/WEB/0] OUT time="2018-03-05T09:59:00Z" level=info msg="hello world, every 2 seconds" channel=stdout iteration=237 job.command="echo "hello world, every 2 seconds"" job.position=1 job.schedule="*/2 * * * * * *"
2018-03-05T10:59:00.00+0100 [APP/PROC/WEB/0] OUT time="2018-03-05T09:59:00Z" level=info msg="job succeeded" iteration=237 job.command="echo "hello world, every 2 seconds"" job.position=1 job.schedule="*/2 * * * * * *"
2018-03-05T10:59:00.05+0100 [APP/PROC/WEB/0] OUT time="2018-03-05T09:59:00Z" level=info msg="cf version 6.34.1+bbdf81482.2018-01-17" channel=stdout iteration=7 job.command="cf --version" job.position=0 job.schedule="*/1 * * * *"
2018-03-05T10:59:00.05+0100 [APP/PROC/WEB/0] OUT time="2018-03-05T09:59:00Z" level=info msg="job succeeded" iteration=7 job.command="cf --version" job.position=0 job.schedule="*/1 * * * *"
Scheduling Cloud Foundry Tasks
While the cron container here is designed to be small and lightweight, you may want to use it to trigger more resource-intensive tasks and processes. When a simple curl to an http endpoint is not enough to kick off such a task on your existing app, Cloud Foundry Tasks are a great solution to run these processes.
This sample repository thus includes instructions to install the cf cli tool, which you can use to trigger such a task using a meshcloud Service User.
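For example, a cron job could trigger such a Cloud Foundry Task via the cf cli. A hedged sketch, where the API endpoint, credential variables, app name and task command are illustrative placeholders:

# log in with a service user, then kick off a one-off task on an existing app
cf api https://api.your-cf-endpoint.example.com
cf auth "$CF_USERNAME" "$CF_PASSWORD"
cf run-task my-app "node housekeeping.js" --name nightly-housekeeping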
Deploying Meteor Apps on Cloud Foundry
This repository demonstrates different ways to deploy Meteor applications on Cloud Foundry. There are at least three options:
- using a specialized meteor-js buildpack
- using the general node-js buildpack
- using your own docker container image
While a specialized meteor buildpack like meteor-buildpack-horse or cf-meteor-buildpack is a useful option, recent versions of meteor support simplified deployment on servers that provide just a node.js runtime environment. For this demonstration repository, we're focusing on this approach because the node.js buildpack is more widely used and better maintained.
The application we're going to deploy is located in the ./try-meteor folder of this repository. We also assume you have provisioned a MongoDB service on your space. In Cloud Foundry you can create a dedicated service instance suitable for use with meteor like this:
cf create-service MongoDB M meteor-mongo
Using the node.js Buildpack
Build a meteor app bundle
On your local machine with the meteor cli installed, build a distributable package into the deploy-buildpack folder:
cd try-meteor && meteor build ../deploy-buildpack/. --server-only --architecture os.linux.x86_64
Building generates a try-meteor.tar.gz file with the meteor application bundled as a plain node.js application, with some helpful instructions in its README.md file:
This is a Meteor application bundle. It has only one external dependency:
Node.js v8.9.3. To run the application:
$ (cd programs/server && npm install)
$ export MONGO_URL='mongodb://user:password@host:port/databasename'
$ export ROOT_URL='http://example.com'
$ export MAIL_URL='smtp://user:password@mailhost:port/'
$ node main.js
Use the PORT environment variable to set the port where the
application will listen. The default is 80, but that will require
root on most systems.
node.js buildpack wrapper
To deploy this tar.gz file on Cloud Foundry with the node.js buildpack we need to:
- upload and unpack the tar.gz bundle
- run npm install on the extracted bundle
- set the correct environment variables using a launcher js script
We can easily achieve that through a custom package.json that uses npm's postinstall and start scripts to execute these actions. You can find the package.json and all required files for the deployment in the ./deploy-buildpack folder.
Note: at the time of writing, the bundles generated by meteor 1.6.0.1 lack the meteor-deque dependency, so we just explicitly add that by hand.
{
  "name": "try-meteor",
  "private": true,
  "scripts": {
    "start": "node launcher.js",
    "postinstall": "tar -xf try-meteor.tar.gz && (cd bundle/programs/server && npm install)"
  },
  "engines": {
    "node": "8.9.3"
  },
  "dependencies": {
    "meteor-deque": "~2.1.0",
    "cfenv": "1.0.4"
  }
}
Have a look at the launcher.js file if you want to change service names etc. The final bit that we need is a Cloud Foundry manifest file to describe our application:
---
applications:
- name: try-meteor-app
  memory: 512M
  instances: 1
  buildpack: https://github.com/cloudfoundry/nodejs-buildpack
  services:
  - meteor-mongo
We're all set now: a simple cf push and your app should be up and running on the cloud.
Using a Docker Container
The next option is a docker-based deployment. This requires that we build our own docker image of the application and publish it to a docker registry.
You can find the code for the docker-based deployment of our sample application in the ./deploy-docker folder. The docker image used in this example is based on the node:8-alpine base image. However, before we can build our container we need to build our meteor application and extract it:
cd try-meteor && meteor build ../deploy-docker/. --server-only --architecture os.linux.x86_64
cd ../deploy-docker && tar -xf try-meteor.tar.gz && rm try-meteor.tar.gz
The docker deployment demonstrated in this repository also uses the same launcher.js script introduced above to automatically initialize meteor environment variables from their Cloud Foundry counterparts. With that out of the way, let's build and push the docker image:
docker build -t meshcloud/meteor-cf-example .
docker push meshcloud/meteor-cf-example:latest
With the container available in a docker registry, we can push it to Cloud Foundry by specifying the docker image in the manifest:
---
applications:
- name: try-meteor-app-docker
  memory: 512M
  instances: 1
  docker:
    image: meshcloud/meteor-cf-example
  services:
  - meteor-mongo
Now all that's left to do is a simple cf push and your app should be up and running on Cloud Foundry in no time.
iDevBook01:deploy-docker jr (master *) $ cf push
Pushing from manifest to org meshcloud-demo / space aproject as c9f7d64c-404d-4b29-b719-b2359f6c8157...
Using manifest file /Users/jr/dev/demo/meteor/deploy-docker/manifest.yml
Getting app info...
Updating app with these attributes...
name: try-meteor-app-docker
docker image: meshcloud/meteor-cf-example
command: node launcher.js
disk quota: 1G
health check type: port
instances: 1
memory: 512M
stack: cflinuxfs2
services:
meteor-mongo
routes:
try-meteor-app-docker.cf.eu-de-darz.msh.host
Updating app try-meteor-app-docker...
Mapping routes...
Stopping app...
Waiting for app to start...
name: try-meteor-app-docker
requested state: started
instances: 1/1
usage: 512M x 1 instances
routes: try-meteor-app-docker.cf.eu-de-darz.msh.host
last uploaded: Mon 12 Feb 10:25:29 CET 2018
stack: cflinuxfs2
docker image: meshcloud/meteor-cf-example
start command: node launcher.js
state since cpu memory disk details
#0 running 2018-02-12T10:12:23Z 0.1% 38.2M of 512M 1.3M of 1G
iDevBook01:deploy-docker jr (master *) $
Kotlin vs. Java Implementation of a Spring Boot application
A good approach for showing the differences between Java and Kotlin is creating an example application. Therefore, I created a simple Spring Boot application and converted it to 100% Kotlin. The project can be found on GitHub. The java branch contains the initial Java application, and the converted Kotlin application can be found in the master branch. Comparing the implementations shows the advantages of Kotlin over Java pretty well. I already described some of the benefits you can gain when switching from Java to Kotlin in one of my last blog posts. A list of all the advantages that are also reflected in the example project can be found in the slides I created to accompany the example project.
Some statistics regarding the conversion are already pretty interesting, as the following table shows.
| | LoC | LoC without data classes |
| --- | --- | --- |
| Java | 460 | 232 |
| Kotlin | 252 | 185 |
| Reduction | 46% | 21% |
Of course, data classes lead to the largest reduction in LoC. But still, a 21% code reduction without data classes shows that there is much less boilerplate code you have to write and that can distract you.
One of my favorite advantages shown in the example project is the usage of a when expression in Kotlin instead of a switch-case statement in Java.
Java:
private double getPlatformCost(Resource resource) {
    double cost = 0;

    switch (resource.getType()) {
        case OPENSTACK:
            cost = 0.03;
            break;
        case CLOUDFOUNDRY:
            cost = 0.02;
            break;
        case KUBERNETES:
            cost = 0.01;
            break;
        default:
            break;
    }

    return resource.durationInHours() * cost;
}
Kotlin:
private fun getPlatformCost(resource: Resource): Double {
    val cost: Double = when (resource.type) {
        ResourceType.OPENSTACK -> 0.03
        ResourceType.CLOUDFOUNDRY -> 0.02
        ResourceType.KUBERNETES -> 0.01
    }
    return resource.durationInHours() * cost
}
By simply checking out both branches of the example project, you can easily compare class by class how much cleaner and more concise the Kotlin code is.
The example can also be used as a reference implementation for a Spring Boot 1.x application with Kotlin. Spring Boot 2.x already supports Kotlin and you can generate a basic application using Kotlin on start.spring.io. When creating a Spring Boot 1.x application with Kotlin, some specialties have to be considered. This example project can be used to see how this all fits together in an actual Spring Boot project.
Explicitly exposing APIs in Spring Data Rest
In our production system we use Spring Data REST (SDR). We found that it was too easy to leave a repository method "exported" by default. We consider this a security risk, because it can be difficult to keep track of all repositories.
Therefore, we developed a new strategy that sets the exported value of SDR methods to false to ensure security by default. The currently available detection strategies in SDR only allow restricting REST repositories at the class level. So when a repository is exported, all of its methods are exported, too. Only by using @RestResource(exported = false) can you prevent SDR from exporting a given method.
We identified a certain security risk in our project in that case: developers are not always aware of all the methods that are automatically exported via REST by the application. By simply adding a new repository because you want a findAll() method to be publicly available, even save and delete methods are exported by default. As most applications want to apply security especially to the write methods, an additional "pessimistic" strategy can be useful in Spring. That way you can still profit from all the benefits SDR provides, but you can be sure that only methods you explicitly added and annotated with @RestResource are exported.
The following example shows how exporting works with the new strategy:
@RepositoryRestResource
interface PersonRepository extends Repository<Person, Long> {

    @RestResource
    Iterable<Person> findAll();

    Iterable<Person> findByFirstname(@Param("firstname") String firstname);
}
In that case, only the findAll() method is exported via REST. The findByFirstname and all CRUD methods like save or delete are not exported by default. They have to be added explicitly and annotated with @RestResource if they are to be exported via REST.
We sent our solution as a pull request to Spring Data REST. We got feedback, and they decided to integrate our new feature (https://jira.spring.io/browse/DATAREST-1176).
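For readers on a Spring Data REST version that includes this change, a minimal configuration sketch might look like the following. We assume the exposeRepositoryMethodsByDefault flag that eventually shipped from this work; the exact RepositoryRestConfigurer callback signature varies between SDR versions:

import org.springframework.context.annotation.Configuration
import org.springframework.data.rest.core.config.RepositoryRestConfiguration
import org.springframework.data.rest.webmvc.config.RepositoryRestConfigurer

@Configuration
class PessimisticExportConfiguration : RepositoryRestConfigurer {

    override fun configureRepositoryRestConfiguration(config: RepositoryRestConfiguration) {
        // Assumption: with this flag disabled, only repository methods explicitly
        // annotated with @RestResource are exported via REST.
        config.setExposeRepositoryMethodsByDefault(false)
    }
}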
Tips for using Kotlin with Spring Boot 1.x
Even though full Kotlin support is only available starting with Spring 5, writing Spring 4 applications in Kotlin already works pretty well. You just have to know some of the specialties of this combination. At meshcloud, we have been working with Kotlin and Spring Boot 1.5, which is based on Spring 4, successfully for several months. In this post, we want to share some guidance about small stumbling blocks we encountered and how we got around them.
1. Spring components have to be open
All classes defined in Kotlin are final by default. That means you cannot override their methods in child classes. As Spring uses a proxy approach to provide its components, all public methods of Spring components must be overridable in child classes. The explicit solution to this is pretty simple: you have to mark your class and all public methods with the keyword open.
open class PersonController {

    @GetMapping
    open fun findAll() {
        …
    }
}
A less obvious, but definitely much more comfortable way is using the Kotlin-Spring compiler plugin. It makes all Spring components open by default. As exceptions caused by missing open declarations can be very confusing and don't indicate the actual problem, we definitely recommend using the plugin, as it will protect you from frustrating error analysis. IntelliJ detects the presence of this plugin in your Gradle or Maven configuration and will automatically highlight any redundant open modifiers and missing final declarations, for example for methods called from constructors.
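For reference, a minimal sketch of enabling the plugin with the Gradle Kotlin DSL; the version number is a placeholder for whatever Kotlin version you use:

plugins {
    kotlin("jvm") version "1.2.71"
    // opens all classes annotated with Spring component annotations by default
    kotlin("plugin.spring") version "1.2.71"
}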
2. Handling Autowired properties
If you are using property injection, auto-conversion to Kotlin by IntelliJ converts these properties to nullable values.
open class PersonController {

    @Autowired
    private var personRepository: PersonRepository? = null
}
In order to take advantage of Kotlin's support for explicit nullability, there are two solutions for refactoring this code.
1. Use lateinit vars
private lateinit var personRepository: PersonRepository
This allows making the autowired property not nullable. It tells the compiler that the property will be set before actually accessing it. The Kotlin compiler will emit code that checks that the variable is initialized by the time it gets used and, if it isn't, throws an exception accordingly.
2. Use constructor injection
open class PersonController(
    private val personRepository: PersonRepository
) { ... }
This is our preferred solution, because it allows us to define the property as immutable, and we don't need the @Autowired annotation anymore. If the injection fails, an exception occurs during object creation and not somewhere later in the code.
Even fields annotated with @Value for configuration data can be injected in the constructor. One thing to consider here is that Kotlin uses the $ sign for string templates. Therefore, the $ in the @Value annotation needs to be escaped explicitly.
open class PersonController(
    @Value("\${mail.host}")
    private val host: String
) { ... }
3. HATEOAS ControllerLinkBuilder.methodOn and null parameters
At least with Spring 4, using the ControllerLinkBuilder has some downsides. When the controller method has parameters like a principal or an @RequestBody, you usually submit null in the methodOn call. Let's say you first convert the controller to Kotlin:
open class PersonController {

    @GetMapping
    open fun findAllOwnedByCurrentUser(principal: Principal) {
        …
    }
}
In the controller, we want our parameters to be non-nullable. As long as the class using the ControllerLinkBuilder is written in Java, everything works fine and null can be submitted, because the method is not actually called. So keeping that caller in Java would be a possible solution.
As we currently want to convert all our code to Kotlin, we use the following solution for now. We actually define the parameters as nullable in the controller and enforce their existence at the beginning of the method (thanks to Kotlin's smart casts!).
open class PersonController {

    @GetMapping
    open fun findAllOwnedByCurrentUser(principal: Principal?) {
        principal!!
        ...
    }
}
This solution allows us to use the parameter like a non-nullable parameter within the function. As soon as a better solution is available, we can easily remove this check.
As Spring 5 has native Kotlin support, we expect this issue to be solved when switching to it.
4. Default Constructor for JPA entities
Eventhough this is a JPA related topic, it is something worth mentioning in the context of Spring Boot, as many of those applications use JPA. JPA always requires a default constructor on entities. A @PersistenceConstructor
exists, that should generate this constructor for you.
@Entity
data class Person @PersistenceConstructor constructor(
    val firstName: String,
    val lastName: String
)
Sadly, this didn't work for us in all cases. We switched to the solution of defining default values for all properties of an entity. That way a default constructor is available for JPA:
@Entity
open class Person(
    val firstName: String = "",
    val lastName: String = ""
)
By now, there is also a Kotlin-JPA compiler plugin available which makes the default constructor available for JPA automatically. With this approach, simple classes and data classes can be used for entities.
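A minimal sketch of enabling it with the Gradle Kotlin DSL (version number again a placeholder):

plugins {
    // generates a synthetic default constructor for classes annotated with @Entity
    kotlin("plugin.jpa") version "1.2.71"
}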
JSR-363 Units of Measurement API in Practice - JSON Serialization
In the last post we looked at how to persist JSR-363 types like Quantity<Q> with Spring Data, using a pair of converters to serialize a Quantity to string and back. In this post we will look at how to serialize Quantity to JSON using a similar trick. This allows us to use Quantities and Units in a REST API.
The JacksonQuantityModule
Jackson offers the module SPI to allow consumers to register various hooks for JSON serialization and deserialization. For example, the Jackson Kotlin module uses this mechanism to help Jackson better serialize Kotlin specific types like Pair<T, U>.
So here's our JacksonQuantityModule:
import com.fasterxml.jackson.core.JsonGenerator
import com.fasterxml.jackson.core.JsonParser
import com.fasterxml.jackson.databind.DeserializationContext
import com.fasterxml.jackson.databind.JsonDeserializer
import com.fasterxml.jackson.databind.JsonSerializer
import com.fasterxml.jackson.databind.SerializerProvider
import com.fasterxml.jackson.databind.cfg.PackageVersion
import com.fasterxml.jackson.databind.module.SimpleModule
import javax.measure.Quantity

@Suppress("unused")
class JacksonQuantityModule : SimpleModule(PackageVersion.VERSION) {

    companion object {
        private val serialVersionUID = 1L
    }

    override fun setupModule(context: SetupContext) {
        addSerializer(Quantity::class.java, QuantitySerializer)
        addDeserializer(Quantity::class.java, QuantityDeserializer)
        super.setupModule(context)
    }

    object QuantitySerializer : JsonSerializer<Quantity<*>>() {
        override fun handledType(): Class<Quantity<*>> {
            return Quantity::class.java
        }

        override fun serialize(value: Quantity<*>, gen: JsonGenerator, serializers: SerializerProvider?) {
            val formatted = QuantityFormatting.QuantityToStringConverter.convert(value)
            gen.writeString(formatted)
        }
    }

    object QuantityDeserializer : JsonDeserializer<Quantity<*>>() {
        override fun handledType(): Class<Quantity<*>> {
            return Quantity::class.java
        }

        override fun deserialize(p: JsonParser, ctxt: DeserializationContext?): Quantity<*>? {
            val source = p.valueAsString
            return QuantityFormatting.StringToQuantityConverter.convert(source)
        }
    }
}
If you want to auto-register this module with Jackson through the SPI mechanism, just add a text file as a resource at META-INF/services/com.fasterxml.jackson.databind.Module with the fully qualified class name of our module as its content:
my.package.JacksonQuantityModule
Unit Tests for JacksonQuantityModule
Let's add some quick unit tests for this module to verify SPI auto-registration and the module implementation work as expected:
import com.fasterxml.jackson.databind.ObjectMapper
import com.fasterxml.jackson.module.kotlin.readValue
import org.junit.Assert
import org.junit.Test
import javax.measure.Quantity

// Assumed shape of the test fixture, implied by the expected JSON below.
data class ObjectWithQuantity(val q: Quantity<*>?)

class JacksonQuantityModuleTest {

    val sut = ObjectMapper().apply {
        findAndRegisterModules()
    }

    @Test
    fun handlesQuantities() {
        verify(ObjectWithQuantity(1.mega.byte), """{"q":"1 MBy"}""")
    }

    @Test
    fun handlesNullQuantities() {
        verify(ObjectWithQuantity(null), """{"q":null}""")
    }

    private inline fun <reified T : Any> verify(obj: T, expectedJson: String) {
        val json = sut.writeValueAsString(obj)
        Assert.assertEquals(expectedJson, json)

        val result = sut.readValue<T>(json)
        Assert.assertEquals(obj, result)
    }
}
Both tests pass, mission accomplished.
JSR-363 Units of Measurement API in Practice - Binary Prefixes
When working with units of measurement, it's often useful to apply a prefix to capture the order of magnitude. The SI unit system has a standard set of prefixes based on powers of 10, e.g. kilo: 10³ or mega: 10⁶. For quantities of information like bits and bytes, it's however often useful to have prefixes based on powers of 2, like mebi: 2²⁰. These are also called binary prefixes.
The tec.uom.se library introduced earlier in this series of blog posts has a class BinaryPrefix, but it only offers methods for unit conversion. Let's build a short helper class and a nice fluent API in Kotlin, so that we can write e.g. 10.mebi.byte for 10 MiB.
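The helper class and the fluent API itself did not survive into this excerpt, so here is a minimal sketch of what they might look like. The enum's shape (symbol, converter, converterFormat) is inferred from how the formatter below uses it; MultiplyConverter, the converterFormat rendering and the of(...) helper are our assumptions, not necessarily the original implementation:

import java.math.BigDecimal
import javax.measure.Quantity
import javax.measure.Unit
import javax.measure.UnitConverter
import tec.uom.se.function.MultiplyConverter
import tec.uom.se.quantity.Quantities

// Each binary prefix carries its symbol, a converter for the corresponding power
// of two, and the string form we expect UCUMFormat to emit for that converter
// (the formatter below matches on it to detect prefixed units).
enum class BinaryPrefix(val symbol: String, exponent: Int) {
    KIBI("Ki", 10), MEBI("Mi", 20), GIBI("Gi", 30), TEBI("Ti", 40),
    PEBI("Pi", 50), EXBI("Ei", 60), ZEBI("Zi", 70), YOBI("Yi", 80);

    val converter: UnitConverter = MultiplyConverter(Math.pow(2.0, exponent.toDouble()))

    // Assumption: UCUMFormat renders the factor as a plain integer, e.g. "1048576" for MEBI.
    val converterFormat: String = BigDecimal(2).pow(exponent).toPlainString()
}

// Fluent entry points: 10.mebi yields a number tagged with a binary prefix...
data class PrefixedNumber(val value: Number, val prefix: BinaryPrefix)

val Number.kibi get() = PrefixedNumber(this, BinaryPrefix.KIBI)
val Number.mebi get() = PrefixedNumber(this, BinaryPrefix.MEBI)
val Number.gibi get() = PrefixedNumber(this, BinaryPrefix.GIBI)

// ...which can be applied to any unit:
fun <Q : Quantity<Q>> PrefixedNumber.of(unit: Unit<Q>): Quantity<Q> =
    Quantities.getQuantity(value, unit.transform(prefix.converter))

With a Unit<Information> named BYTE from your unit system in scope, the exact 10.mebi.byte syntax is then a one-line extension property: val PrefixedNumber.byte get() = of(BYTE).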
Another missing feature is support for string serialization of binary-prefixed units. Unfortunately, the UCUMFormat of the UOM library is not extensible, so we need to add a simple pre/post-processing step to our formatters from the last episode.
object QuantityFormatting {

    private val unitFormatter = UcumFormatWithBinaryPrefixSupport()
    private val numberFormatter = NumberFormat.getInstance(Locale.ROOT)
    private val separator = " "

    object QuantityToStringConverter : Converter<Quantity<*>, String> {
        override fun convert(source: Quantity<*>): String {
            val s = source.toWildcard()

            val formattedUnit = unitFormatter.format(s.unit)
            val formattedValue = numberFormatter.format(s.value)

            return "$formattedValue$separator$formattedUnit"
        }
    }

    object StringToQuantityConverter : Converter<String, Quantity<*>> {
        override fun convert(source: String): Quantity<*> {
            val (formattedValue, formattedUnit) = source.split(separator, limit = 2)

            val parsedUnit: Unit<*> = unitFormatter.parse(formattedUnit).toWildcard()
            val parsedValue = numberFormatter.parse(formattedValue)

            return Quantities.getQuantity(parsedValue, parsedUnit)
        }
    }

    class UcumFormatWithBinaryPrefixSupport(
        val base: UCUMFormat = UCUMFormat.getInstance(UCUMFormat.Variant.CASE_SENSITIVE)
    ) : UnitFormat by base {

        override fun format(unit: Unit<*>): String {
            val baseResult = base.format(unit)

            val split = baseResult.split(".")
            if (split.size == 1) {
                return baseResult
            }

            val (symbol, converter) = split
            return when (converter) {
                BinaryPrefix.YOBI.converterFormat -> "${BinaryPrefix.YOBI.symbol}$symbol"
                BinaryPrefix.ZEBI.converterFormat -> "${BinaryPrefix.ZEBI.symbol}$symbol"
                BinaryPrefix.EXBI.converterFormat -> "${BinaryPrefix.EXBI.symbol}$symbol"
                BinaryPrefix.PEBI.converterFormat -> "${BinaryPrefix.PEBI.symbol}$symbol"
                BinaryPrefix.TEBI.converterFormat -> "${BinaryPrefix.TEBI.symbol}$symbol"
                BinaryPrefix.GIBI.converterFormat -> "${BinaryPrefix.GIBI.symbol}$symbol"
                BinaryPrefix.MEBI.converterFormat -> "${BinaryPrefix.MEBI.symbol}$symbol"
                BinaryPrefix.KIBI.converterFormat -> "${BinaryPrefix.KIBI.symbol}$symbol"
                else -> baseResult
            }
        }

        override fun format(unit: Unit<*>, appendable: Appendable): Appendable {
            return appendable.append(format(unit))
        }

        override fun parse(csq: CharSequence): Unit<*> {
            // note: all binary prefixes have two chars
            val prefixLength = 2

            val (prefix, originalUnit) = when {
                csq.length >= prefixLength -> Pair(csq.substring(0, prefixLength), csq.substring(prefixLength))
                else -> Pair(null, csq)
            }

            return when (prefix) {
                BinaryPrefix.YOBI.symbol -> base.parse(originalUnit).transform(BinaryPrefix.YOBI.converter)
                BinaryPrefix.ZEBI.symbol -> base.parse(originalUnit).transform(BinaryPrefix.ZEBI.converter)
                BinaryPrefix.EXBI.symbol -> base.parse(originalUnit).transform(BinaryPrefix.EXBI.converter)
                BinaryPrefix.PEBI.symbol -> base.parse(originalUnit).transform(BinaryPrefix.PEBI.converter)
                BinaryPrefix.TEBI.symbol -> base.parse(originalUnit).transform(BinaryPrefix.TEBI.converter)
                BinaryPrefix.GIBI.symbol -> base.parse(originalUnit).transform(BinaryPrefix.GIBI.converter)
                BinaryPrefix.MEBI.symbol -> base.parse(originalUnit).transform(BinaryPrefix.MEBI.converter)
                BinaryPrefix.KIBI.symbol -> base.parse(originalUnit).transform(BinaryPrefix.KIBI.converter)
                else -> base.parse(csq)
            }
        }
    }
}
With that, we can make the following test pass:
@Test
fun serializingBinaryPrefixedUnits() {
    val sut = 1.mebi.byte

    val serialized = QuantityFormatting.QuantityToStringConverter.convert(sut)
    Assert.assertEquals("1 MiBy", serialized)

    val deserialized = QuantityFormatting.StringToQuantityConverter.convert(serialized)
    Assert.assertEquals(sut, deserialized)
}