
The meshcloud Way of Work

At meshcloud we empower humans to build a better future. When you hear this vision for the first time, you may think it sounds incredibly ambitious and completely intangible at the same time. Yet, we mean and live this in everything that we do. In this post I want to describe how our way of work connects this ambitious vision to our everyday routines. You will learn why we work on cloud foundations, how we go about this and how our work empowers our meshis and our customers to build a better future every day.

Why

We believe that computing is an essential means of production in the 21st century and that cloud computing is the best way to deliver it. meshcloud builds a Cloud Foundation Platform that enables organizations to build on the endless possibilities of the cloud while protecting their digital freedom. Cloud Foundations help organizations deliver cloud environments of any provider and any technology to thousands of teams while staying in control of their infrastructure and reducing complexity. Our platform provides essential cloud foundation capabilities: cloud-native technology access, with cross-platform Identity and Access Management, Tenant Management, Compliance & Security and Cost Management covered.

The challenges that Cloud Foundations solve become most apparent in organizations that are fully committed to cloud transformation and embrace multi-cloud at scale. Consequently, our customers are among the largest organizations in the world and are recognized as cloud transformation leaders in their industries.

Building Cloud Foundations is an exciting new challenge. Customer needs are rapidly evolving as organizations discover the challenges of leveraging multi-cloud at scale. In fact, we believe that almost every larger organization will have to discover and solve these challenges within the next 5 years as multi-cloud adoption becomes ubiquitous. And when that happens, we want them to think of meshcloud. This insight informs how we work at meshcloud: we embrace every customer relationship as an opportunity to learn something new about the best way to build solid Cloud Foundations.

Customer Empathy

If there's one super-power that we look for in every meshi working in our team, it's customer empathy.

Customer empathy is understanding the underlying needs and feelings of customers. It goes beyond recognizing and addressing their tactical requirements and puts things into further context by viewing things from their perspective. [...] Customer empathy sees users as real people and not just individuals trying to do something. It rounds out customers into whole people, provides a larger context for how products and solutions fit into the much broader ecosystem of their lives, their jobs, and their environment. (https://www.productplan.com/glossary/customer-empathy/)

For a startup looking to solve an entirely new category of problems, empathy for current and prospective customers is a real super-power. Customer empathy empowers every meshi to

  • iterate faster by collaborating and communicating on customer challenges with more context and deeper understanding
  • drive initiatives more successfully and with higher quality by making better decisions that anticipate customers’ perspectives

This is why we have ingrained customer empathy into our own implementation of an agile way of working.

The meshcloud model: Empowering meshis to build a better future

We embrace people and interactions over tools and processes (Agile Manifesto). That's why our way of working is thoroughly agile, yet does not fit any popular method like Scrum. Our way of work rests on three pillars:

  • Customer Results: descriptions of a challenge and the desired outcomes we want to achieve for our customers, which we use to prioritize work
  • Customer Result Teams: cross-functional teams empowered to make all decisions necessary to achieve those outcomes
  • Flocks: teams that own a functional area of responsibility and its routines, championing cohesion and professional excellence

Customer Results

Working cross-functionally by default, in teams centered on delivering customer outcomes, empowers every meshi to build a close connection to and a deep understanding of our customers. Of course, building customer empathy requires communication. Whereas purely functional organizations attempt to funnel customer touchpoints through specialized functions like support and customer success, our way of work enables high-bandwidth communication for all functions. Championing customer empathy in this way helps us understand what building a better future means to our customers – and deliver on that mission.

We'll be sharing more on our way of work in the future and we would love to see you here again.



Why we're sponsoring the Dhall Language Project

We're very happy to announce that meshcloud is the first corporate sponsor of the Dhall Language Project via Open Collective. In this post I want to explain how we came to use Dhall at meshcloud, what challenges it solves for us and why we hope it will play a role in enabling software projects to more easily adapt to multi-cloud environments.

Enabling DevOps at scale


At the beginning of this year, we realized we had a challenge scaling configuration and operations of our software for customers. meshcloud helps enterprises become cloud-native organizations by enabling "DevOps at scale". Our tool helps hundreds or thousands of DevOps teams in an enterprise to provision and manage cloud environments like AWS Accounts or Azure Subscriptions for their projects while ensuring they are secured and monitored to the organization's standards.

Enabling DevOps teams with the shortest "time to cloud" possible involves the whole organization. Our product serves DevOps teams, IT Governance, Controlling and IT Management in large enterprises. That means meshcloud is an integration solution for a lot of things, so we need to be highly configurable.

Because we also manage private clouds (OpenStack, Cloud Foundry, OpenShift etc.) we often run on-premises and operate our software as a managed service. This presents unique challenges for our SRE team. Not only do we need to maintain and evolve configuration for our growing number of customers, but we also need to support deploying our own software on different infrastructures like OpenStack, AWS or Azure[1].

At the end of the day, this boils down to having good and scalable configuration management. After going through various stages of slinging around YAML with ever more advanced tricks, we realized we needed a more fundamental solution to really crack this challenge.

Configuration management at scale - powered by Dhall


The Dhall configuration language solves exactly this problem. It's a programmable configuration language that was built to express configuration - and just that. Dhall is decidedly not Turing complete. Its functional nature makes configuration easy to compose from a set of well-defined operations and ensures that configuration stays consistent.

Using Dhall allows us to compile and type check[2] all our configuration for all our customers before rolling things out. We use Dhall to compile everything we need to configure and deploy our software for a customer: Terraform, Ansible, Kubernetes templates, Spring Boot Config. We even use Dhall to automatically generate Concourse CI pipelines for continuous delivery of our product to customers.
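As a small illustration (a sketch, not taken from our actual configuration), a single typed record can feed several of these targets at once:

-- a customer record type: the type checker rejects any config missing a field
let Customer = { name : Text, replicas : Natural, domain : Text }

let acme
    : Customer
    = { name = "acme", replicas = 3, domain = "acme.example.com" }

-- derive settings for different deployment targets from the same record
in  { kubernetes = { replicas = acme.replicas }
    , ingress = { host = "portal." ++ acme.domain }
    }

Normalizing this file yields a plain record that tools from the Dhall ecosystem, like dhall-to-json, can render for the target system.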

Since adopting Dhall earlier this year, we measurably reduced our deployment defect rate. We feel more confident about configuration changes and can safely express configuration that affects multiple services in our software.

Empowering a Multi-Cloud Ecosystem


We believe that open-source software and open-source cloud platforms are crucial for enabling organizations to avoid vendor lock-in. Now that mature tools like Kubernetes exist and can do the heavy lifting, enabling portability between clouds has become a configuration management challenge.

What we found especially interesting about Dhall is that it's not just an "incremental" innovation atop existing configuration languages, like template generators, but instead looks at the problem from a new angle and tries to solve it at a more fundamental level. This is something we can relate to very well, as we're trying to solve multi-cloud management using an organization-as-code (like infrastructure-as-code) approach.

That's why we're happy to see Dhall innovating in this space and reached out to the Dhall community to explore ways we can support the project. We hope that providing a steady financial contribution will allow the community to further evolve the language, tooling and its ecosystem.

Footnotes:

[1]: In this way, meshcloud is not only multi-cloud management software but also multi-cloud enabled software itself.

[2]: Dhall purists will want to point out that expressions are not compiled, instead they're normalized.


How to apply for a technical role at meshcloud

On this page, we want to give you an overview of our values and our interview process for hiring technical full-time positions in our team. We hope this guide helps you navigate the process successfully and answers your questions. Should you have any more questions, please don't hesitate to reach out at jobs@meshcloud.io.

We believe that hiring is as much about us getting to know you as it is about you getting to know us. Our application and interview process is thus designed to give both of us a chance to evaluate your fit for a position at meshcloud.

Overview and TL;DR

  • Application with CV and Portfolio (GitHub, Stack Overflow, etc.)
  • Phone Interview
  • On-Site Interview at our Office & Design Exercise
  • On-Site "MVP Test" with your future colleagues
  • Feedback and offer

Stage 0: Your Application

Present yourself and your skills in the best possible light. Let us know why you're interested in working for meshcloud and why you consider yourself a good fit for our team. Tell us about your values, achievements and contributions you have made in prior roles. If you're a recent graduate, tell us about a project you've worked on that you're proud of. Even more than a concise resume, we like seeing a sample of your work and abilities. Send us a link to your projects or your Stack Overflow or GitHub profile.

Please do not include information on your resume that we don't need to evaluate your application. All that matters to us are your qualifications and personality. We specifically ask you not to include a photo or details about gender, marital status or religious orientation.

When we list the technologies we work with in our job profiles, we always distinguish between "must have" skills and "nice to have" skills. We believe that every technical skill is valuable. So while we may not use [insert obscure language] right now, there's a good chance you have learned something valuable and transferable using it. So, please do include it on your CV! We're open-minded when it comes to integrating new tech & tools into our stack. Our most recent addition is Dhall.

1st Stage: Phone Interview

You'll meet one-on-one with the future manager of your position for a structured 30-minute phone interview. We expect you to tell us briefly about yourself and your experience. We'll discuss the role and answer any questions you may have about the position. The second half of the interview is a set of technical questions that helps us get an indication of your skill level in competence areas relevant to the job. We're not looking for textbook answers and you should not prepare specifically for this part.

2nd Stage: On-Site Interview

The on-site interview typically lasts for 2-3 hours. You'll get to visit our office and meet members of the team you may be working with in the future! You'll also meet members of other teams at meshcloud.

We'll discuss your prior experience in depth and walk through a technical design exercise appropriate for the role together. We use this exercise to see your problem-solving process and how you leverage your experience, skills and knowledge to solve the task at hand. This may also involve some whiteboarding or scribbling on paper, but we'll not ask you to come up with syntactically correct code on paper. The challenges are hands-on, real things we're working on, so they will let you discover what we work on and what our tech stack looks like.

3rd Stage: MVP Build

We don't believe take-home "coding exercises" or "coding tests" provide you with a good idea of how we work at meshcloud. Instead, we want to give you a chance to experience being part of the team and see how we work first-hand.

So we'll develop a small and focused "minimum viable product" (MVP) related to your role together. We typically start in the morning and walk through the task at hand. The goal is to produce a working prototype in 3 hours.

When building the prototype, we fully expect you'll have to cut some corners. Of course you'll discuss requirements and implementation questions with your colleagues. Since we start in the morning, we'll invite you out for lunch with the team and review your results together after we return. In the review you'll present your approach and results, tell us about the corners you cut and what would be left to finish the work to your own quality standards.

If you're a remote candidate or considering relocation to work with us in Frankfurt am Main, we will try to schedule the On-Site Interview and MVP Test for the same day.

Final Stage: Feedback and Offer

You'll typically hear from us within a week of your interview whether we want to offer you the position. We know that you'd rather hear sooner rather than later whether your interview with us was a success. However, we want to make sure we give every application the attention it deserves. After your interview, we collect feedback from all colleagues who got a chance to get to know you. We also give our colleagues some time to make up their minds and offer additional feedback after the experience has settled for a bit.

Want to learn more about us?

Please check out our open positions and our company page on Stack Overflow.



GPU Acceleration for Chromium and VSCode

At meshcloud we believe in using the best tools money can buy. Most developers in our team use Linux Workstations with 2x27" 4K displays as their daily drivers. Working with text all day is certainly less straining on a 4K (or hi-dpi) display: the characters are sharper and more easily readable.

Unfortunately, Chromium and consequently all Electron-based apps like VSCode disable GPU acceleration on Linux, claiming Linux GPU drivers are too buggy to support. If you're running a traditional Full-HD screen, falling back on (non-accelerated) software rendering is not a big deal. But for a 4K display your CPU has to push four times the number of pixels, which can quickly lead to unpleasant input lag when working on code, or a sluggish-feeling browser. And all of that on powerful machines with tons of CPU cores, RAM and blazingly fast SSDs. Certainly not how a developer workstation should feel in 2019.

With a clever combination of flags, you can force Chromium and VSCode to use GPU acceleration instead. You may experience a couple of graphic glitches here and there, but that's a small price to pay for a much more responsive browser and text editor. The settings below worked on my machine running Fedora 30 and made Chromium and VSCode much more enjoyable to use.

Chromium

For Chromium, use these flags (depending on your distro, you can also write them to a launcher or config file; see the sketch after the status listing below):

chromium-browser --ignore-gpu-blacklist --enable-gpu-rasterization --enable-native-gpu-memory-buffers

This should result in the following acceleration status on chrome://gpu:

Canvas: Hardware accelerated
Flash: Hardware accelerated
Flash Stage3D: Hardware accelerated
Flash Stage3D Baseline profile: Hardware accelerated
Compositing: Hardware accelerated
Multiple Raster Threads: Enabled
Native GpuMemoryBuffers: Hardware accelerated
Out-of-process Rasterization: Disabled
Hardware Protected Video Decode: Hardware accelerated
Rasterization: Hardware accelerated
Skia Renderer: Disabled
Surface Control: Disabled
Surface Synchronization: Enabled
Video Decode: Hardware accelerated
Viz Service Display Compositor: Enabled
Viz Hit-test Surface Layer: Disabled
WebGL: Hardware accelerated
WebGL2: Hardware accelerated
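
If you launch Chromium from a desktop environment rather than a terminal, the same flags can go into a launcher. A sketch of a desktop entry excerpt (the file path and name are assumptions that depend on your distro):

# ~/.local/share/applications/chromium-browser.desktop (excerpt)
# pass the GPU flags on every launch from the application menu
Exec=chromium-browser --ignore-gpu-blacklist --enable-gpu-rasterization --enable-native-gpu-memory-buffers %U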

VSCode

You can use the same flags with VSCode (tested on v1.36) to get GPU acceleration.

code --ignore-gpu-blacklist --enable-gpu-rasterization --enable-native-gpu-memory-buffers

You can check acceleration status using code --status while you have another instance of the editor already running. This should result in:

Version:          Code 1.36.1 (2213894ea0415ee8c85c5eea0d0ff81ecc191529, 2019-07-08T22:55:08.091Z)
OS Version:       Linux x64 5.1.17-300.fc30.x86_64
CPUs:             Intel(R) Core(TM) i7-8700 CPU @ 3.20GHz (12 x 800)
Memory (System):  31.29GB (22.39GB free)
Load (avg):       1, 1, 1
VM:               0%
Screen Reader:    no
Process Argv:     --enable-gpu-rasterization --enable-native-gpu-memory-buffers
GPU Status:       2d_canvas:                     enabled
                  flash_3d:                      enabled
                  flash_stage3d:                 enabled
                  flash_stage3d_baseline:        enabled
                  gpu_compositing:               enabled
                  multiple_raster_threads:       enabled_on
                  native_gpu_memory_buffers:     enabled
                  oop_rasterization:             disabled_off
                  protected_video_decode:        unavailable_off
                  rasterization:                 enabled
                  skia_deferred_display_list:    disabled_off
                  skia_renderer:                 disabled_off
                  surface_synchronization:       enabled_on
                  video_decode:                  unavailable_off
                  viz_display_compositor:        disabled_off
                  webgl:                         enabled
                  webgl2:                        enabled

Note that I haven't bothered with out-of-process rasterization yet. You can enable this using a flag too, but it appears not to be available on e.g. macOS either, so I don't expect it will make a big difference performance-wise.



MySQL Timestamp Initialization

During testing of a new migration, we discovered that timestamps in an audit table were suddenly reset to the same timestamp (close to now). That touched quite a few nerves. After some investigation, it turned out that MySQL and MariaDB may have dangerous default behavior when working with columns of type timestamp. Suppose you declare an audit event table like this:

CREATE TABLE `Event` (
  `id` varchar(128) NOT NULL,
  `createdOn` timestamp NOT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET utf8mb4 COLLATE utf8mb4_unicode_ci;

When reading back the table definition (e.g. via MySQL Workbench), you will find it's actually:

CREATE TABLE `Event` (
  `id` varchar(128) COLLATE utf8mb4_unicode_ci NOT NULL,
  `createdOn` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;

Note the automatic addition of DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP for the createdOn column. This creates two problems:

  • the timestamp values set by your code may not match the timestamps stored in the database
  • the timestamps on the table become mutable, i.e. if a migration runs an UPDATE on the table, all existing timestamps will be overwritten.

The source of this behavior is Automatic Initialization and Updating for TIMESTAMP and DATETIME, which is controlled by the explicit_defaults_for_timestamp configuration variable and also depends on the active SQL Mode.

To see your active configuration, run this SQL on your connection:

SHOW Variables WHERE Variable_name = "explicit_defaults_for_timestamp";
SELECT @@GLOBAL.sql_mode;
SELECT @@SESSION.sql_mode;

In our case, explicit_defaults_for_timestamp was off, which specifically results in the observed behavior:

The first TIMESTAMP column in a table, if not explicitly declared with the NULL attribute or an explicit DEFAULT or ON UPDATE attribute, is automatically declared with the DEFAULT CURRENT_TIMESTAMP and ON UPDATE CURRENT_TIMESTAMP attributes.

Another caveat: Your database team or provider may not have explicitly configured explicit_defaults_for_timestamp. To add insult to injury, the default value for this variable depends on your MySQL Version.

Database version       explicit_defaults_for_timestamp default
MySQL >= 8.0.2         ON
MySQL <= 8.0.1         OFF
MariaDB >= 10.1.8      OFF

Explicitly Controlling Timestamp Initialization

It's bad news when the behavior of your application depends on a database configuration variable outside of your team's direct control. We thus have to fix up our tables right after creating them, as suggested in this Stack Overflow answer and adapted for MariaDB:

ALTER TABLE `Event` MODIFY COLUMN `createdOn` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP;
ALTER TABLE `Event` ALTER COLUMN `createdOn` DROP DEFAULT;

Note: These statements are idempotent, i.e. we can safely run them even if explicit_defaults_for_timestamp is ON, and our table will end up in the desired state: a timestamp column with no DEFAULT and no ON UPDATE clause.

We also added a test to our migration test suite that verifies all timestamp columns are created as intended and no hidden behavior messes with our column definitions.
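Such a test can be as simple as asserting that a query against the information schema returns no rows. A sketch (the filter is an assumption; adjust it to your schema and conventions):

-- list all timestamp columns that picked up an implicit DEFAULT or ON UPDATE clause
SELECT TABLE_NAME, COLUMN_NAME, COLUMN_DEFAULT, EXTRA
FROM information_schema.COLUMNS
WHERE TABLE_SCHEMA = DATABASE()
  AND DATA_TYPE = 'timestamp'
  AND (COLUMN_DEFAULT IS NOT NULL OR EXTRA LIKE '%on update%');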


Running Cron Jobs on Cloud Foundry

Many cloud applications need to run some sort of scheduled tasks for chores like processing statistics or doing internal housekeeping. There are two different strategies for implementing scheduled tasks in cloud applications running on Cloud Foundry: you can either build scheduling and tasks into the application itself or schedule and run the task in separate containers.

Some application frameworks like Spring include built-in scheduling support. However, this scheduling support does not include a distributed coordination mechanism, which means that each instance of an application horizontally scaled to multiple instances will run the scheduled task individually. Depending on the nature of the task, this may cause observable side effects like emails to your customers being sent repeatedly.

It's thus preferable to have a central entity for scheduling. You could of course use, say, a Java Spring app that needs approximately 1 GB of RAM to do that for you, but that would be very wasteful. Instead, we can build a simple cron scheduler that runs on 16 MB of RAM to get reliable task scheduling for just a few cents per month.

The task scheduler can then execute arbitrary scripts or code, for example to:

  • invoke an https endpoint on your application to perform the task
  • queue a message on RabbitMQ for processing by a worker
  • trigger execution of the job in a separate Cloud Foundry Task Container

meshcloud's cf-cron scheduler

Our sample repository demonstrates how to run scheduled tasks on Cloud Foundry with a very small footprint (8 to 16 MB RAM) using a traditional crontab. Traditional cron daemons need to run as root and have opinionated defaults for logging and error notifications, which makes them unsuitable for running in a containerized environment like Cloud Foundry. Instead of a system cron daemon, we're thus using supercronic to run our crontab.

How it works

This application is built using the binary buildpack and executes supercronic on the crontab file. The crontab file specifies all your cron jobs. To add additional jobs, simply add a new line specifying a schedule and a command to the crontab.
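For illustration, the two jobs that produce the log output shown further below could be declared like this (schedules and commands are taken from that output; supercronic supports an optional seconds field):

# runs every 2 seconds (7-field schedule with seconds and years)
*/2 * * * * * * echo "hello world, every 2 seconds"
# runs every minute
*/1 * * * * cf --version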

Note: By default, supercronic will log all output to stderr so we redirect that to stdout in our cf manifest.
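A sketch of what the relevant part of such a manifest could look like (the name, paths and exact command are assumptions; see the sample repository for the real one):

applications:
- name: cf-cron
  memory: 16M
  buildpack: binary_buildpack
  no-route: true
  health-check-type: process
  # supercronic logs to stderr by default, so redirect it to stdout
  command: ./supercronic ./crontab 2>&1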

You can also include additional scripts and binaries to execute more complex actions. This example allows you to install apt and Debian packages to use in your cron jobs. You can specify these packages in apt.yml and they will be installed during staging by the apt-buildpack, courtesy of the magic multi-buildpack.

After cf pushing this sample app to Cloud Foundry, you can see that it happily executes the jobs from the crontab in the log output:

2018-03-05T10:59:00.00+0100 [APP/PROC/WEB/0] OUT time="2018-03-05T09:59:00Z" level=info msg=starting iteration=237 job.command="echo "hello world, every 2 seconds"" job.position=1 job.schedule="*/2 * * * * * *"
2018-03-05T10:59:00.00+0100 [APP/PROC/WEB/0] OUT time="2018-03-05T09:59:00Z" level=info msg="hello world, every 2 seconds" channel=stdout iteration=237 job.command="echo "hello world, every 2 seconds"" job.position=1 job.schedule="*/2 * * * * * *"
2018-03-05T10:59:00.00+0100 [APP/PROC/WEB/0] OUT time="2018-03-05T09:59:00Z" level=info msg="job succeeded" iteration=237 job.command="echo "hello world, every 2 seconds"" job.position=1 job.schedule="*/2 * * * * * *"
2018-03-05T10:59:00.05+0100 [APP/PROC/WEB/0] OUT time="2018-03-05T09:59:00Z" level=info msg="cf version 6.34.1+bbdf81482.2018-01-17" channel=stdout iteration=7 job.command="cf --version" job.position=0 job.schedule="*/1 * * * *"
2018-03-05T10:59:00.05+0100 [APP/PROC/WEB/0] OUT time="2018-03-05T09:59:00Z" level=info msg="job succeeded" iteration=7 job.command="cf --version" job.position=0 job.schedule="*/1 * * * *"

Scheduling Cloud Foundry Tasks

While the cron container here is designed to be small and lightweight, you may want to use it to trigger more resource-intensive tasks and processes. When a simple curl to an http endpoint is not enough to kick off such a task on your existing app, Cloud Foundry Tasks are a great solution for running these processes.

This sample repository thus includes instructions to install the cf cli tool which you can use to trigger such a task using a meshcloud Service User.
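For illustration, triggering such a task from a cron job could look like this (the app name, command and memory limit are hypothetical):

cf run-task my-app "bin/process-statistics" --name process-stats -m 512M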



Securely connecting to Service Instances on Cloud Foundry

To connect to a managed service instance on your Cloud Foundry space, most developers use service keys. A service key is a set of authentication credentials that allows you to connect to your database instance via a public IP address and port. While this is quick and easy to do, meshcloud does not recommend keeping service keys open for extended periods of time. Instead, you should delete them as soon as possible and create a new service key anytime you need access again.

A more secure approach that does not involve exposing a connection to your database on a public IP is to spin up a shell container on Cloud Foundry and connect to it via cf ssh. This approach is also more suitable for long running or high performance operations that require close proximity between the database and the shell.

Here's how to do it, showcased for MongoDB, but a similar approach also works for our other managed services like MySQL or PostgreSQL.

1) Create an app named mongocli based on a docker [container image](https://hub.docker.com/_/mongo/) containing the mongo cli. Tip: you can also pin a specific version using the appropriate container image tag; the example below uses :latest. Note that we tell Cloud Foundry that we need only very little RAM (128 MB), don't want a health check on the app and that it doesn't need an HTTP route to be reachable from the outside. After all, we just want to ssh into this app.

cf push -o mongo:latest mongocli --no-route --no-start -u none -m 128M

2) Create a binding of the service instance to your new app. This makes a connection string available to the mongocli app that it can use to connect to the database instance on a private network, just like your production app does.

cf bind-service mongocli my-mongodb

3) Start the container, letting it just run a bash shell:

cf push -o mongo:latest mongocli --no-route -u none -m 128M -c bash

That's it, now we can easily ssh into the container using cf ssh mongocli and run env to find our connection string in the VCAP_SERVICES variable. The connection string looks approximately like this:

VCAP_SERVICES={"MongoDB":[{
  "credentials": {
    "password": "abc",
    "database": "db",
    "uri": "mongodb://user:pw@ip1:27017,ip2:27017,ip3:27017/db",
    "username": "xxx"
  },
  "syslog_drain_url": null,
  "volume_mounts": [],
  "label": "MongoDB",
  "provider": null,
  "plan": "S",
  "name": "my-mongodb",
  "tags": []
}]}

Now you can simply run mongo mongodb://user:pw@ip1:27017,ip2:27017,ip3:27017/db and you're securely connected to your managed database instance - on a docker container running mongo shell on Cloud Foundry - connected via ssh.


Deploying Meteor Apps on Cloud Foundry

This repository demonstrates different ways to deploy Meteor applications on Cloud Foundry.
There are at least three different ways to deploy Meteor JS applications on Cloud Foundry:

  • using a specialized meteor-js buildpack
  • using the general node-js buildpack
  • using your own docker container image

While a specialized meteor buildpack like meteor-buildpack-horse or cf-meteor-buildpack is a useful option, recent versions of meteor support simplified deployment on servers that provide just a node.js runtime environment. For this demonstration repository, we're focusing on this approach because the node.js buildpack is more widely used and better maintained.

The application we're going to deploy is located in the ./try-meteor folder of this repository. We also assume you have provisioned a MongoDB service in your space. At meshcloud's Cloud Foundry, you can create a dedicated service instance suitable for use with meteor like this:

cf create-service MongoDB M meteor-mongo

Using the node.js Buildpack

Build a meteor app bundle

On your local machine with the meteor cli installed, build a distributable package into the deploy-buildpack folder:

cd try-meteor && meteor build ../deploy-buildpack/. --server-only --architecture os.linux.x86_64

Building generates a try-meteor.tar.gz file with the meteor application bundled as a plain node.js application, with some helpful instructions in its README.md file:

This is a Meteor application bundle. It has only one external dependency:
Node.js v8.9.3. To run the application:

$ (cd programs/server && npm install)
$ export MONGO_URL='mongodb://user:password@host:port/databasename'
$ export ROOT_URL='http://example.com'
$ export MAIL_URL='smtp://user:password@mailhost:port/'
$ node main.js

Use the PORT environment variable to set the port where the
application will listen. The default is 80, but that will require
root on most systems.

node.js buildpack wrapper

To deploy this tar.gz file on Cloud Foundry with the node.js buildpack we need to:

  1. upload and unpack the tar.gz bundle
  2. run npm install on the extracted bundle
  3. set the correct environment variables using a launcher js script

We can easily achieve that through a custom package.json that uses npm's postinstall and start script to execute these actions. You can find the package.json and all required files for the deployment in the ./deploy-buildpack folder.

Note: at the time of writing, the bundles generated by meteor 1.6.0.1 lack the meteor-deque dependency, so we just explicitly add it by hand.

{
  "name": "try-meteor",
  "private": true,
  "scripts": {
    "start": "node launcher.js",
    "postinstall": "tar -xf try-meteor.tar.gz && (cd bundle/programs/server && npm install)"
  },
  "engines": {
    "node": "8.9.3"
  },
  "dependencies": {
    "meteor-deque": "~2.1.0",
    "cfenv": "1.0.4"
  }
}

Have a look at the launcher.js file if you want to change service names etc.
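As a rough sketch, a launcher script along these lines maps the Cloud Foundry environment to the variables the meteor bundle expects, using the cfenv package from the dependencies above (the real launcher.js is in the repository and may differ):

// map the Cloud Foundry environment to the variables meteor expects
const cfenv = require("cfenv");
const appEnv = cfenv.getAppEnv();

// assumes the bound MongoDB service is named "meteor-mongo"
const mongoCreds = appEnv.getServiceCreds("meteor-mongo");
process.env.MONGO_URL = mongoCreds.uri;
process.env.ROOT_URL = appEnv.url;
process.env.PORT = appEnv.port;

// hand over to the extracted meteor bundle
require("./bundle/main.js");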
The final bit that we need is a Cloud Foundry Manifest file to describe our application:

---
applications:
- name: try-meteor-app
  memory: 512M
  instances: 1
  buildpack: https://github.com/cloudfoundry/nodejs-buildpack
  services:
  - meteor-mongo

We're all set now: a simple cf push and your app should be up and running in the cloud.

Using a Docker Container

The next option is to use a docker-based deployment of the application. This requires that we build our own docker image of the application and publish it to a docker registry.

You can find the code for the docker-based deployment of our sample application in the ./deploy-docker folder. The docker image used in this example is based on the [node:8-alpine](https://hub.docker.com/_/node/) base image. However, before we can build our container we need to build our meteor application and extract it:

cd try-meteor && meteor build ../deploy-docker/. --server-only --architecture os.linux.x86_64
cd ../deploy-docker && tar -xf try-meteor.tar.gz && rm try-meteor.tar.gz

The docker deployment demonstrated in this repository also uses the same launcher.js script introduced above to automatically initialize meteor environment variables from their Cloud Foundry counterparts. With that out of the way, let's build and push the docker image:

docker build -t meshcloud/meteor-cf-example .
docker push meshcloud/meteor-cf-example:latest

With the container available in a docker registry, we can push it to Cloud Foundry by specifying the docker image in the manifest:

---
applications:
- name: try-meteor-app-docker
  memory: 512M
  instances: 1
  docker:
    image: meshcloud/meteor-cf-example
  services:
  - meteor-mongo

Now all that's left to do is a simple cf push and your app should be up and running on Cloud Foundry in no time.

iDevBook01:deploy-docker jr (master *) $ cf push
Pushing from manifest to org meshcloud-demo / space aproject as c9f7d64c-404d-4b29-b719-b2359f6c8157...
Using manifest file /Users/jr/dev/demo/meteor/deploy-docker/manifest.yml
Getting app info...
Updating app with these attributes...
name: try-meteor-app-docker
docker image: meshcloud/meteor-cf-example
command: node launcher.js
disk quota: 1G
health check type: port
instances: 1
memory: 512M
stack: cflinuxfs2
services:
meteor-mongo
routes:
try-meteor-app-docker.cf.eu-de-darz.msh.host

Updating app try-meteor-app-docker...
Mapping routes...

Stopping app...

Waiting for app to start...

name: try-meteor-app-docker
requested state: started
instances: 1/1
usage: 512M x 1 instances
routes: try-meteor-app-docker.cf.eu-de-darz.msh.host
last uploaded: Mon 12 Feb 10:25:29 CET 2018
stack: cflinuxfs2
docker image: meshcloud/meteor-cf-example
start command: node launcher.js

state since cpu memory disk details
#0 running 2018-02-12T10:12:23Z 0.1% 38.2M of 512M 1.3M of 1G

iDevBook01:deploy-docker jr (master *) $

Cloud native deployment for Single Page Applications

Single Page Applications (SPAs) are a popular way to deliver modern web apps. With SPAs, users download executable Javascript code that dynamically renders content in their browser, rather than retrieving pre-rendered pages from a server application. The server application is freed from rendering HTML and instead is only responsible for providing a re-usable API to its data. In this post, we're going to look at how we can leverage Cloud Foundry to do cloud native deployments for SPAs.

Delivering SPAs on Cloud Foundry

To deliver an SPA to its users, a web server only needs to serve the static assets making up the SPA. This also means that the web server can leverage HTTP caching to optimize delivery. On Cloud Foundry, the best way to serve an SPA is by using the Staticfile buildpack. A cf push using this buildpack will bundle your artifacts with an nginx server. To get started, let's assume you have all files of your web-root in your current working directory. We now create a short manifest.yml for Cloud Foundry:

---
applications:
- name: my-spa
  instances: 1
  memory: 64M
  buildpack: https://github.com/cloudfoundry/staticfile-buildpack.git

All we need to do now is to execute cf push and your SPA will be up and running in the cloud under a randomly-assigned URL shortly.

Scaling your SPA

The nginx container built for serving your files does not need a lot of memory, which makes it very cost-efficient to run on Cloud Foundry. To achieve high-availability and scale the service under load, we can easily use Cloud Foundry to scale it to two or more instances using cf scale my-spa -i $X.

Injecting Configuration

Cloud native applications should adhere to the 12-factor principles. One of the twelve factors is configuration through the environment. This allows us to deploy the same artifacts to different environments (e.g. dev and production) while keeping all configuration data separate. Cloud Foundry optimally supports workloads that implement this principle by providing configuration through environment variables.

At runtime, we need to inject this configuration data into the application. Because we want to keep serving our application as static SPA assets through nginx for optimum performance, there's no server side code execution like with PHP or a different backend language. However, we can still achieve dynamic injection of configuration data through the use of environment variables and server side includes.

The staticfile buildpack has a lot of useful configuration options. The most powerful of course is that it allows us to provide our own template for nginx.conf, which is evaluated during staging. In this template, we can access environment variables configured for the app on Cloud Foundry. For reference, here's the default config used by the buildpack.

For our custom template, we'll start with this portion of an nginx.conf file:

server {
  listen <%= ENV["PORT"] %>;
  server_name localhost;

  <% if ENV["INJECT_ENVIRONMENT"] %>
  location /inject-environment {
    default_type application/json;
    return 200 '<%= ENV["INJECT_ENVIRONMENT"] %>';
  }
  <% end %>

  location / {
    root <%= ENV["APP_ROOT"] %>/public;
    index index.html index.htm Default.htm;
    ssi on;
  }
}

This creates a location mapping in nginx for the path /inject-environment that will respond with HTTP status code 200 OK and a static string read from the INJECT_ENVIRONMENT environment variable. This is useful if we can build our SPA to retrieve this configuration data at runtime. But what if we want to load configuration data before the JavaScript in our application executes?

We can leverage a web server feature called server-side includes (SSI) for this, which we enabled using the ssi on instruction in the nginx.conf above. In the index.html document that loads the JavaScript code of our SPA, we add an SSI instruction to include the string returned by the /inject-environment endpoint:

<!--#include virtual="/inject-environment" -->

Because a server-side-include does just plain text concatenation, we need to define our environment variable to be a "smart" string. So let's make it a <script> tag that executes some javascript that will put a JSON object into the global window. The easiest way to define this variable is to edit our app's Cloud Foundry manifest file:

---
applications:
- name: my-spa
  instances: 1
  memory: 64M
  buildpack: https://github.com/cloudfoundry/staticfile-buildpack.git
  env:
    INJECT_ENVIRONMENT: |
      <script type="text/javascript">window["INJECTED_ENVIRONMENT"] = {
        production: false,
        baseUrls: {
          api: "https://api.example.com"
        }
      };</script>

To access configuration data, your SPA can simply pick it up from the global window["INJECTED_ENVIRONMENT"] variable.
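In application code, picking up the configuration then boils down to something like this (hypothetical usage):

// read the configuration injected via SSI before the app boots
const env = window["INJECTED_ENVIRONMENT"];
if (!env.production) {
  console.log("API base URL:", env.baseUrls.api);
}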

Pro tip: To dynamically reconfigure your application on Cloud Foundry at runtime, change your environment variables and restage the application. To stage the new version of the application while keeping the current one available, you can use the bg-restage cli plugin.

Monitoring

Cloud Foundry automatically collects your application's requests as they pass through the Cloud Foundry router. Entries like this are tagged with [RTR/$] in the logs. Additionally, Cloud Foundry will also collect all logs emitted by nginx, indicated by the [APP/$] tag in the logs. Here's how that looks in practice:

Retrieving logs for my-spa panel in org meshstack / space production as demo@meshcloud.io...

2018-01-28T15:46:59.99+0100 [APP/PROC/WEB/0] OUT 95.222.25.157, 172.16.105.69 - https - - - [28/Jan/2018:14:46:59 +0000] "GET / HTTP/1.1" 200 1044
2018-01-28T15:46:59.99+0100 [RTR/0] OUT panel.meshcloud.io - [28/01/2018:14:46:59.994 +0000] "GET / HTTP/1.1" 200 0 1032 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Safari/537.36" 172.16.105.69:38656 x_forwarded_for:"95.222.25.157, 172.16.105.69" x_forwarded_proto:"https" vcap_request_id:908529e7-3f5a-4a5c-40c2-d79419a3e5ae response_time:0.003193066 app_id:bc1e736b-6b8c-48b0-a390-8c8ac1aeae0a app_index:0

You can customize this logging in your nginx.conf file using the log_format configuration option.
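A sketch of such a customization inside the http block of the nginx.conf template (the format string is just an example):

# define a custom access log format and write it to stdout
log_format timed '$remote_addr - [$time_local] "$request" '
                 '$status $body_bytes_sent $request_time';
access_log /dev/stdout timed;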



Using Bitbucket Auth for Concourse Teams

All pipelines and resources in Concourse belong to a "team". Concourse supports different authentication mechanisms for allowing users to log in to these teams and manipulate their resources. When we just started out with Concourse, we configured a simple "dev" team using basic auth and shared the password using pass.

As our engineering team at meshcloud grows (we're hiring!), we wanted to switch to external authentication that integrates with the rest of our infrastructure. Since we were already using the Google Suite for office work like slides and docs, using Google OAuth and Concourse's generic OAuth provider was a natural first choice, but it ultimately lacked support for restricting permissions to users from our domain only.

Bitbucket Auth in Concourse 3.7

When Concourse 3.7 rolled around and offered a new authentication provider for Bitbucket Cloud, we were really excited. We're already using Bitbucket for hosting some of our repos and developers are logged into their Bitbucket accounts all day anyway, so this would make for a seamless experience.

Unfortunately, we couldn't find any info on how to set it up in the Concourse teams documentation. It turns out this wasn't a big deal; here's what the fly set-team command tells us to do:

Bitbucket Cloud Authentication:
--bitbucket-cloud-auth-client-id= Application client ID for enabling Bitbucket OAuth
--bitbucket-cloud-auth-client-secret= Application client secret for enabling Bitbucket OAuth
--bitbucket-cloud-auth-user=USER Bitbucket users that are allowed to log in
--bitbucket-cloud-auth-team=TEAM[:ROLE] Bitbucket teams which members are allowed to log in
--bitbucket-cloud-auth-repository=OWNER/REPO Bitbucket repositories whose members are allowed to log in
--bitbucket-cloud-auth-auth-url= Override default endpoint AuthURL for Bitbucket Cloud
--bitbucket-cloud-auth-token-url= Override default endpoint TokenURL for Bitbucket Cloud
--bitbucket-cloud-auth-api-url= Override default API endpoint URL for Bitbucket Cloud

Setting up an OAuth Consumer for Concourse

The first thing we need to do is set up an OAuth Consumer for Concourse in Bitbucket from the Bitbucket Team Settings page. This consumer needs permission to read Account, Team Membership and Repository information. You may hit a slight road-bump figuring out the correct redirect URL to allow, but since that's passed from Concourse on the first (unsuccessful) redirect, we can figure out that it is https://myconcourse.domain/auth/bitbucket-cloud/callback.

Bitbucket OAuth Consumer for Concourse CI

After creating the consumer, you should also see its client id and client secret in the Bitbucket settings. The next thing we wanted to do was grant all users in a specific group on our Bitbucket team access to Concourse using --bitbucket-cloud-auth-team=meshcloud:developers. However, it turns out that "role" and "group" are different concepts, and this attempt leads to an error:

error: invalid argument for flag `--bitbucket-cloud-auth-team' (expected []cloud.TeamConfig): unknown role in Bitbucket team specification: 'developers'

A quick look at the source code reveals that the only roles possible are member, contributor and admin.

No problem, we can also provision the group on a repository and have Concourse check whether the user has access to this repository before granting access to the Concourse team. Here's the full command to enable Bitbucket auth on the dev team for all users that have access to the myorg/myrepo repository:

fly -t ci set-team -n dev --bitbucket-cloud-auth-client-id=xxx --bitbucket-cloud-auth-client-secret=xxx --bitbucket-cloud-auth-repository=myorg/myrepo

That's it! Now we can log in to Concourse using Bitbucket.
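For example, logging in to the dev team via the fly cli now hands authentication off to the Bitbucket OAuth flow in the browser:

fly -t ci login -n dev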