Busy beehive as an example of continuous work

Corona Update: meshcloud operates at full capacity

How meshcloud ensures it operates at full capacity and accompanies our customers through these times to continue their digital journey.

Read more

Multi-Cloud Maturity Title

The path to proper multi-cloud management – a maturity model

Cloud means Multi-Cloud

The cloud is the foundation of efficient IT operations in the digital age. More often than not, one cloud platform is not sufficient, for at least two reasons: 1) Different cloud platforms serve different purposes, and each has specific strengths and weaknesses; embracing the plurality of platforms in a best-of-breed manner is therefore more productive than trying to fight it. And 2) avoiding a technology lock-in on one vendor or platform and building a diverse skill set preserves technical sovereignty.
Therefore, building a proper multi-cloud management capability is key to a successful and efficient cloud strategy. To give some guidance about the technical approaches to build such a capability, we introduce the meshcloud multi-cloud management maturity model.

Multi-Cloud Maturity

The meshcloud multi-cloud management maturity model

Stage 0: No Multi-Cloud Management

The first stage is easy to reach – it is the natural state when you do not take any action, perhaps because you do not realize you have a multi-cloud challenge to solve. The effects arise nevertheless.
Because users lack orientation, individual per-platform or ad-hoc processes get established, and you end up with a silo-structured organization loaded with unnecessary complexity due to different access routines for different platforms.
But that is not necessarily the greatest worry: people tend to shortcut complexity and lengthy processes, and public cloud vendors make it way too easy to access cloud resources within minutes with the simple (yet virtual) swipe of a company credit card. The nightmare has a name: shadow IT! You end up with individual, unconnected, unregistered public cloud accounts the company pays for but has no efficient way to manage. Everything comes down to individual initiative and discipline, which might go well, but it is anarchy: there is no way to ensure rules and regulations are followed and compliance is upheld. This has severe implications for security and regulatory duties – e.g. you cannot give a full report about data processing activities as required by the GDPR. Accounts might be shared among colleagues. When people leave the company, no one guarantees that credentials are rotated. And so on.
Therefore, this stage is like purgatory – you suffer from the sins of the past and want to get out as soon as possible. To receive remission, action is necessary.

Stage 1: Manual / Semi-automated Multi-Cloud Management

So we realize it is time to bring order to the chaos. We try to standardize, define common processes, and keep track of the tenants and workloads out there in the clouds or running within our own private cloud.
In Stage 1, we define a process to provision and configure tenants on the clouds we use. Someone is made responsible for each platform, to keep security and governance up and to provide users with what they need to get their cloud projects up and running.
Regarding implementation, we move quickly and take the tools we have and are familiar with: a ticketing system to order cloud tenants from the IT department, where someone creates them manually or with some scripting of her or his own. What about controlling? We keep track of cloud tenants in an Excel sheet or export a list of projects per platform to see who is doing what and how much of it.
It is tedious and everyone hates it – those who want to use the cloud, because there are still different contacts for different clouds and because they have to wait so long to receive their credentials; and the operators, because they are doing dull work, and lots of it, instead of improving processes and evaluating more advanced technologies.
The cloud is all about automation, and the multi-cloud is, too! So this stage does not fulfill the promise of self-service, on-demand, scaling capacities with a few keystrokes, but falls behind the spirit that comes with cloud technologies: automate ALL THINGS.

Stage 2: Imperative / Workflow-based Multi-Cloud Management

With further investment, we reach Stage 2: integrated, automated processes to provision tenants and user accounts on all relevant cloud platforms. We might even have established a self-service portal where users can set up cloud projects 24/7 at their own pace and according to their needs. They order cloud projects or resources there, and after a while these are ready to run.
So all good now? At first, it might be. But over time, workflows and tenant configurations evolve, and after some months or years it becomes hard to tell which tenants were created and configured by which version of the process. Also, do people reliably remove resources when they are no longer needed? How are resources in different clouds connected, and how do they depend on each other? Hopefully, the documentation is available and up to date – otherwise, we still lose orientation between platforms and the projects consuming and possibly spanning them.
But what now?

Stage 3: Declarative Multi-Cloud Management

Meet declarative multi-cloud management! You already know the concept from the deployment of your workloads, where it goes by the name infrastructure as code. You describe the desired target state and the system works it out for you. Declarative multi-cloud management means the same, but on an organizational, cross-platform level – Organization as Code, as we framed it.
So you define which projects should exist on which platform and which users should have what kind of access. You save the new configuration in a versioned repository, and after a short while the whole multi-cloud federation converges to the new state you described. Version control enables you to audit every single change that happened to your multi-cloud landscape. This has several advantages:

  • Robustness: Distributed systems might suffer from weak network links that cause process flows to stall and fail. With the target state clearly defined, however, the system can retry until it finally reaches the desired state.
  • Prevention of local changes: You know the “colleague is on vacation, but I need urgent access” situation, which might lead to a quick-and-dirty permission grant on cloud projects by an administrator. More often than not, those changes stick around longer than intended – human laziness being my witness. A declarative multi-cloud management system, however, rolls back those changes at the next sync.
  • Decoupling of semantics and implementation logic: With a declarative configuration you describe what you want, not how to do it. Flexibility therefore remains with the system on how to reach the desired state, which means more freedom to improve the multi-cloud system over time.
  • Documentation and Auditing: If every configuration change is tracked in the manifest and version controlled, it is easy to report, audit and track who had access to which system or data, for how long, why, and granted by whom.
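To make Organization as Code concrete, here is a minimal sketch. The manifest format, file name and roles below are invented for illustration – they are not the actual meshcloud configuration:

```shell
# the desired multi-cloud state lives as a file in a versioned
# repository; manifest format and field names are hypothetical
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "demo@example.org"
git config user.name "Demo"

# describe the desired target state: projects, platforms, access
cat > tenants.yml <<'EOF'
projects:
  - name: webshop-prod
    platforms: [aws, openstack]
    users:
      - name: alice
        role: admin
EOF

# every change to the multi-cloud landscape becomes an auditable commit
git add tenants.yml
git commit -q -m "grant alice admin access to webshop-prod"
git log --oneline
```

A sync component then compares this manifest with the actual tenants in each cloud and converges them toward the described state; rolling back a change is just another commit.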

You see that declarative multi-cloud management has many advantages – it is for good reason that this paradigm became so successful in configuration management. That is why meshcloud put the declarative paradigm at the heart of its vision of agile, yet fully controlled multi-cloud management. If you want to learn more, we are happy to start the conversation with you.
Reach out to us via mail, social media, phone or at any events we attend. We are looking forward to hearing your thoughts!

Platform services model classification – be aware of what you need

Platform services play an increasingly important role in cloud infrastructures. They enable application operators to quickly wire up the dependencies they need to run their applications. For example, when you deploy your application and need a database, you just request a service instance for the database of your choice and connect it to your application. Done.

Specifications like the Open Service Broker API provide a standard interface for offering backing services such as databases or analytics to applications. However, the nature of a service offered by someone else to your application is not always clearly defined, so you should pay attention to the exact contract the service offers.
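To give an idea of what this standard interface looks like: provisioning a service instance is a single HTTP PUT against the broker. The broker URL, credentials and all IDs below are placeholders, and the request is only printed here, not sent – a sketch, not a runnable integration:

```shell
# Sketch of an Open Service Broker API provisioning request.
# Broker URL, credentials and all GUIDs are placeholders.
broker="https://broker.example.com"
instance_id="4f6e2f3b-demo-instance"
body='{"service_id":"mysql-service-guid","plan_id":"small-plan-guid","organization_guid":"org-guid","space_guid":"space-guid"}'

# the echo prints the request instead of sending it;
# remove it to call a real broker
cmd="curl -s -X PUT $broker/v2/service_instances/$instance_id \
  -H 'X-Broker-API-Version: 2.13' \
  -H 'Content-Type: application/json' \
  -u broker-user:broker-password \
  -d '$body'"
echo "$cmd"
```

The broker answers with a JSON document describing the provisioned instance; a later binding call then delivers the credentials your application uses to connect.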

At meshcloud we work with customers to integrate Open Service Broker API compatible services into a private multi-cloud marketplace. Based on that experience, we provide a compact check matrix for clearly communicating and understanding what type of service you receive when requesting a service instance. The following scheme is not necessarily complete (please comment if you have anything to add), but it gives a first idea of which questions to ask to ensure there is no misunderstanding between service owner and service consumer.

Service Model classification matrix

Whitebox means you see details of the service deployment, e.g. the service instance is deployed into an IaaS space you have access to. Blackbox means you only have a defined interface to control your service instance (e.g. the Open Service Broker API), but no insight into where or how the service is deployed or operated.

  • Unmanaged Whitebox: The service provisions an instance in your space. You have full access to its internal wiring and customize it from there for your purposes. Example: IaC template for a MySQL Galera cluster.
  • Operated Whitebox: A tricky model, as the service vendor offers operations support, but service consumers are also able to access the service internals – e.g. the service vendor offers joint internal monitoring and provides automatic recovery, but if you break anything, you need to recreate the service instance on your own. Example: AWS RDS.
  • Managed Whitebox: Does not work as such, because the service vendor is responsible for smooth operations and the internals are therefore protected from outside access. The exception is supported operations, where service operators support you in running your systems. Examples: monitoring & incident response.
  • Unmanaged Blackbox: Does not work, as no one would control the service instance. Example: an appliance as a service without an SLA.
  • Operated Blackbox: The service vendor has automatic tooling in place to check service health and recover the instance. Service consumers cannot access the internals, so the vendor has exclusive control and should hence be able to provide a consistent service experience. Examples: DBaaS, MLaaS.
  • Managed Blackbox: The most common perception of a platform service. You request a service instance and use it for your application without any further effort. Example: AWS DynamoDB.
Unmanaged: The service vendor does not undertake any measures to ensure the service runs smoothly. You get a service instance provisioned and from there you are on your own with operations, but you do not need to start from scratch. You usually only pay for the bare resources consumed by the service.

Operated: The service vendor monitors execution of the service itself and recovers the service when it fails. However, how this affects your application or your data is out of scope for the service vendor; as a service consumer you need to add your own routines on top of the service instance.

Managed: The service vendor takes over responsibility for availability and, often, for data backup and recovery. Furthermore, service updates are conducted by the service owner without your involvement as a service consumer. This is the most comfortable, but also the most expensive way of consuming a service.

The gold standard and typical understanding of a platform service is certainly the Managed Blackbox. However, there are cases where the other service models make sense, e.g. for highly customizable systems – let's say a Kubernetes cluster. Providing a K8s cluster as an unmanaged whitebox service means you get a fully provisioned K8s cluster and take over further configuration and maintenance from there. You still save the time to set up and provision a cluster on your own, but do not have to bear the costs of a fully managed K8s cluster.

In any case, there should be no misunderstanding between service vendor and consumer as to what the level of support really is. Especially when procuring services becomes fast and easy and happens with a few clicks, simply assuming the vendor will take care of everything can create unpleasant surprises. Make sure you are aware of the exact service conditions, which are hopefully communicated transparently and easy to access.

Which Cloud Platform is the right choice for my company?

You’ve probably been through this: you know you must transform your operations to improve agility and adopt new technologies that speed up your delivery capabilities. You collect information about the candidates – Kubernetes, OpenShift, Cloud Foundry, and others – and try to evaluate their suitability for your business. But it seems too hard to match all requirements with one platform – especially when it is still unclear where your own business is heading.

So how to find the right platform?

There is an easy answer to this problem: You don’t.
Every platform has its strengths and weaknesses and – sometimes even more importantly – its supporters and communities. Apart from technical performance, it is sometimes more important to get the people and skills you need to create your solutions – and if you are flexible with your infrastructure, you have a greater selection of expertise on the market. Given the diverse – and sometimes conflicting – requirements of applications, it is easy to see that there is hardly one platform that serves all of them best.

Therefore it makes sense to choose different tools for different purposes. Welcome to the world of multi-cloud! A best-of-breed approach to cloud platforms enables you to stay flexible, use the right tool for each purpose and find sufficient skills on the market to implement your solutions. Vendor lock-in is no longer a threat, and you can always use the most efficient technology for the job.

Multi-Cloud – the freedom or burden of choice?

Sounds great, right? But there is a danger threatening the shiny dream: complexity. Every platform adds its own procedures for managing projects, users, permissions and so on. Your organization needs to build up all those skills and wire all those tedious administrative systems together. You create overhead; people are busy with dull routine tasks instead of delivering functionality. Everything slows down.

People start to use two different platforms for the same project, and now they have to keep the configuration in sync and manage an inventory of dependencies. Hopefully it is up to date (you know it probably won’t be for long). And how do people access shared services like databases, and where do they find them? So much implicit knowledge is dangerous for reliable operations. Oh, and did we already talk about cost distribution? How do you split platform costs across the responsible projects? Another task not related to delivering features for your product.

Choice without loss of control

We at meshcloud are dedicated to making multi-cloud a joyful experience. Our software takes over the tedious wiring and provides you with a unified interface for your platform orchestration. Your users can quickly access all the cloud resources they need, while the IT department stays in control of who is doing what and where. We currently support the management of OpenStack, Cloud Foundry, Kubernetes, OpenShift and AWS platforms and continue to expand to further technologies.

Do you want to learn more? Please send us a note and we will get in touch with you.

Featured image: extracting a key

Extract your public key from your private key

If you ever lose your public key but still have your private key, you can always compute the public key from the private key.

The following command prints your public key to stdout (with id_rsa being your private key file, e.g. under ~/.ssh):

ssh-keygen -y -f id_rsa

Or just pipe directly to a file with:

ssh-keygen -y -f id_rsa > id_rsa.pub
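As a quick sanity check, you can confirm that the recomputed key matches an existing public key. The snippet below demonstrates this with a throwaway key pair in a scratch directory:

```shell
# generate a throwaway key pair, recompute the public key from the
# private key, and confirm both match
tmp=$(mktemp -d)
ssh-keygen -q -t rsa -N "" -f "$tmp/id_rsa"

ssh-keygen -y -f "$tmp/id_rsa" > "$tmp/recomputed.pub"

# compare only key type and key material; the third field is a
# free-form comment and may differ between the two files
awk '{print $1, $2}' "$tmp/id_rsa.pub" > "$tmp/a"
awk '{print $1, $2}' "$tmp/recomputed.pub" > "$tmp/b"
if cmp -s "$tmp/a" "$tmp/b"; then result="match"; else result="mismatch"; fi
echo "$result"   # → match
```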

Now you can redistribute your public key wherever you need it.

Any questions? Please comment.

Federated Authentication with the OpenStack CLI

Multi-cloud applications are a core business for us at meshcloud. Therefore, we have also put some effort into integrating several OpenStack locations using federation. After all, meshcloud is a federation of public clouds.

Recently, we re-assessed federated authentication with the OpenStack CLI client, using v3.12.0 against a Mitaka OpenStack environment. On the identity provider (IdP) side, we use Keycloak's OIDC capabilities.

Password vs. Token Auth

We look at two options for federated authentication at the command line:


Auth Method: User/Password credentials of Identity Provider

Pros:

  • Static RC file, no refresh of tokens necessary
  • The RC file itself does not provide any access without the password

Cons:

  • Password necessary at the CLI
  • Manual Keystone token issuing
  • Risk of exposing the password at the CLI


Auth Method: OIDC JSON Web Token (JWT) issued by Identity Provider

Pros:

  • Direct bearer usage, no password necessary (=> automation)
  • Keystone token automatically issued

Cons:

  • Access not secured if the RC file is leaked

So, depending on your use case, you might prefer one over the other. In either case, you prepare an appropriate RC file (= a shell script setting the environment variables that configure the OpenStack CLI).



If you want to use the OpenStack command line using your Identity Provider's user/password credentials, create an RC file like this:

# clear unnecessary settings
# set needed settings
export OS_INTERFACE="public"
# insert here Keystone auth url for the specific cloud
export OS_AUTH_URL="https://keystone.example.com:5000/v3"
export OS_IDENTITY_PROVIDER="keycloak-idp"
export OS_PROTOCOL="oidc"
export OS_CLIENT_ID="meshfed-oidc"
# use any value for client secret if we use a public oidc client
export OS_CLIENT_SECRET="ac2aa84b7685-5ffb-9f1d"
export OS_DISCOVERY_ENDPOINT="https://idp.example.org/auth/realms/meshfed/.well-known/openid-configuration"
export OS_AUTH_TYPE="v3oidcpassword"
# insert here the user name to authenticate
export OS_USERNAME="username"
# this is the local openstack project id
export OS_PROJECT_ID="27a7e59d391d55c6cf4ead12227da57e"
# set password by querying the user
echo "Please enter your OpenStack Password: "
read -sr OS_PASSWORD_INPUT
export OS_PASSWORD=$OS_PASSWORD_INPUT

OS_AUTH_URL refers to the Keystone auth endpoint of the cloud you want to access. OS_IDENTITY_PROVIDER is the label you created with openstack identity provider create when registering the identity provider within OpenStack. OS_CLIENT_ID is the name of the OIDC client configured at the IdP. It is important that you set OS_CLIENT_SECRET even if you do not have a confidential client configured at the IdP, because the OpenStack CLI expects it to be present. If you use a "public" OIDC client configuration, just put in a dummy value. OS_DISCOVERY_ENDPOINT sets the metadata endpoint of the IdP server; it saves you from setting a lot of other config options (it's already more than enough, isn't it?).

Of course, we set OS_AUTH_TYPE to v3oidcpassword. The username of your IdP account goes into OS_USERNAME, and OS_PROJECT_ID needs the local project id of the OpenStack project you want to access. See below if you do not know the project id. The remainder of the script asks you for your password on the command line, but you can also set OS_PASSWORD directly.

Command line usage

Now get to your prompt and source the RC file:

➜ source v3oidcpassword.sh
Please enter your OpenStack Password:

After you have entered your password, issue a Keystone token and you can start running commands:

➜ cli-test openstack token issue
| Field      | Value                            |
| expires    | 2017-08-23T13:37:33+0000         |
| id         | d6d222ccfba24bfa9c85d5baa039f110 |
| project_id | 5edf2f36ac334618a614731a146a60ec |
| user_id    | 7ca9d31f46da4e52b44ac263745e4a77 |
(openstack) router create test
(openstack) router list
| ID           | Name | Status | State | Distributed | HA    | Project                          |
| d1907a5b-... | test | ACTIVE | UP    | False       | False | 5edf2f36ac334618a614731a146a60ec |

If you would like to see the projects you have access to, e.g. to get the right project id for the RC file, issue:

openstack federation project list

And you will see a list of projects you can access with your federated account. Pretty cool, huh? :-)


If you don't want to deal with passwords at the command line, v3oidcaccesstoken is an alternative to authenticate with OpenStack using the JWT issued by the IdP. A JWT is an encoded JSON data set containing identity and authorization information. You can decode them using jwt.io.
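Besides jwt.io, you can also decode a JWT payload locally at the shell. The token below is a tiny synthetic example whose payload is just {"sub":"alice"}; real access tokens carry many more claims:

```shell
# decode the payload (second dot-separated segment) of a JWT;
# synthetic example token, payload is {"sub":"alice"}
jwt="eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJhbGljZSJ9.c2lnbmF0dXJl"

payload=$(printf '%s' "$jwt" | cut -d. -f2)
# JWTs use the base64url alphabet without padding: map it back to
# standard base64 and re-add padding before decoding
payload=$(printf '%s' "$payload" | tr '_-' '/+')
case $(( ${#payload} % 4 )) in
  2) payload="$payload==" ;;
  3) payload="$payload=" ;;
esac
decoded=$(printf '%s' "$payload" | base64 -d)
echo "$decoded"   # → {"sub":"alice"}
```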


export OS_INTERFACE="public"
export OS_AUTH_TYPE="v3oidcaccesstoken"
export OS_AUTH_URL="https://keystone.example.com:5000/v3"
export OS_IDENTITY_PROVIDER="keycloak-idp"
export OS_PROTOCOL="oidc"
export OS_ACCESS_TOKEN="eyJhbGciOiJSUzI1NiIsIn..........9uFum6TWK_69OAbM3RjFbjiDvg"
export OS_PROJECT_ID="27a7e59d391d55c6cf4ead12227da57e"

You see the configuration is much simpler, as lots of information is contained in the JWT access token (it is usually a very long string; we cut it here for display purposes). Sourcing this file enables you to issue a Keystone token (and hence work in the project) for as long as the OIDC JWT is valid.

Command line usage

Source the file, then start working:

➜ source v3oidcaccesstoken.sh
➜ openstack server list
| ID                                   | Name             | Status | Networks                                    | Image                                    | Flavor     |
| 6447040b-cc8c-46f3-91a8-949aa1744981 | flinktest        | ACTIVE | test=,         | ubuntu-xenial-16.04_softwareconfig_0.0.2 | gp1.medium |

You see it is much shorter in application and does not require setting a password.

Using CLI access with meshcloud

To make multi-cloud access easier, meshcloud already prepares the necessary RC files for your project. Just go to the Panel and select the project and location you want to access. Choose "CLI Access" and you find the RC files for download. Source them into your shell and you're ready to go.


The v3oidcpassword and v3oidcaccesstoken methods are very helpful for CLI access to federated OpenStack clouds. However, for automated access (e.g. scripts) neither of them is perfect – either you have to store a password or you only have a time-limited access token. The OIDC standard has a solution called Offline Tokens. Those are refresh tokens (i.e. tokens used to issue a new access token without credential re-authentication) that never expire. They are intended for automated procedures that need access to protected resources. Upon access, the bearer requests a fresh (short-lived) access token using the Offline Token. The major advantage is that Offline Tokens can be revoked at the IdP side. A revoked Offline Token cannot be used to acquire new access tokens, and hence access is disabled. So you still have a central veto while providing decentralized access without passwords.
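With Keycloak as the IdP, exchanging an offline token for a fresh access token is a plain OIDC refresh-token request. The IdP URL, realm, client and token value below are placeholders matching the RC files above, and the request is only printed, not sent – a sketch under those assumptions:

```shell
# exchange an offline (refresh) token for a fresh access token;
# IdP URL, realm, client and token value are placeholders
idp="https://idp.example.org"
realm="meshfed"
client_id="meshfed-oidc"
offline_token="<your-offline-token>"

token_endpoint="$idp/auth/realms/$realm/protocol/openid-connect/token"

# the echo prints the request instead of sending it;
# remove it to run against a real IdP
cmd="curl -s -X POST $token_endpoint \
  -d grant_type=refresh_token \
  -d client_id=$client_id \
  -d refresh_token=$offline_token"
echo "$cmd"
```

The access_token field of the JSON response then goes into OS_ACCESS_TOKEN for v3oidcaccesstoken authentication. To obtain the offline token in the first place, request the scope offline_access during the initial login.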

Soon, meshcloud will provide an integrated CLI client that makes accessing multiple clouds and projects more convenient and helps you speed up your open-source multi-cloud experience. Stay tuned :).


Any questions left? How are you using federated authentication with OpenStack? Let us know – we are looking forward to hearing from you.

Bridge over gap between public and private cloud

Healing the public vs private cloud feature gap


I ran into an interesting article by David Linthicum on InfoWorld. He describes the functional deficiencies of private cloud solutions compared to public cloud platforms:

The feature gap between public and private clouds has grown so wide that the private cloud demos that I attend are laughable considering the subsystems that enterprises need, such as security, governance, databases, IoT, and management, versus what private clouds actually deliver.

In addition, he refers to the upfront effort necessary to put a private cloud solution in place. Overall, he concludes that companies should avoid private clouds until absolutely necessary.

I think he is right looking at the current solutions in the market, where you have to choose which side you are on. Opting for the public cloud gives you an abundance of out-of-the-box services ready to be deployed, but locks you into the ecosystem of a specific vendor like AWS. Also, total costs are not always easy to estimate and can surge unexpectedly if load increases – not always covered by compensating increases in revenue. Private clouds require a larger (fixed) upfront investment, but give you total control over every aspect of the stack – at the price of having to pay extra for every piece of functionality.

Economies of scale can explain the observed effect: Rolling out a service in the public cloud immediately serves a very large user base, while private cloud solutions usually are long-running projects and need to consider very specific requirements and customized software stacks – after all, there is probably a reason to have a private cloud.

To overcome those limitations, we apply a "one stack everywhere" approach at meshcloud. We deploy the meshStack – consisting of OpenStack for IaaS and Cloud Foundry for PaaS – in our partner data centers for public cloud usage, but also provide the same stack for operation in private data centers. This ensures that functionality developed and distributed to public cloud users is also available for private cloud usage. Users can build their infrastructure on well-established open-source platforms and hence keep their investments safe from vendor lock-in. Companies need only one skill set and have the freedom to choose whether they run their deployments publicly or privately. Therefore, we enable users to reap the benefits of a hybrid approach without compromise.

What is your experience with private vs. public cloud deployments? Share your thoughts.

PEM file layout for HAProxy

To use Loadbalancer-as-a-Service with the HAProxy driver and SSL termination, you usually acquire a certificate from a CA.

This results in three files:

  • The private key you created (PEM format)
  • The certificate itself, usually ending in .crt (PEM format)
  • The intermediate certificates, also called bundle or chain (PEM format)

Now what is the right order to concatenate those files for HAProxy?

The certificate hierarchy goes bottom-up in the PEM file, starting with your own certificate:

  1. The Certificate for your domain
  2. The intermediates in ascending order to the Root CA
  3. A Root CA, if any (usually none)
  4. Private Key

If you include the private key as well, it apparently does not matter whether it goes at the beginning or at the end, but we put it at the end.

So an easy command would be:

cat certificate.crt intermediates.pem private.key > ssl-certs.pem

You can add this file in HAProxy with a line like this for example in a frontend section:

bind *:443 ssl crt ssl-certs.pem
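A quick sanity check on the concatenated file can catch an accidental wrong order. The snippet below demonstrates it with placeholder PEM markers rather than real key material:

```shell
# build a combined PEM from placeholder files and verify the layout
tmp=$(mktemp -d)
printf '%s\n' '-----BEGIN CERTIFICATE-----' '...' '-----END CERTIFICATE-----' > "$tmp/certificate.crt"
printf '%s\n' '-----BEGIN CERTIFICATE-----' '...' '-----END CERTIFICATE-----' > "$tmp/intermediates.pem"
printf '%s\n' '-----BEGIN RSA PRIVATE KEY-----' '...' '-----END RSA PRIVATE KEY-----' > "$tmp/private.key"

cat "$tmp/certificate.crt" "$tmp/intermediates.pem" "$tmp/private.key" > "$tmp/ssl-certs.pem"

# expect one BEGIN CERTIFICATE line per certificate file ...
grep -c -- '-----BEGIN CERTIFICATE-----' "$tmp/ssl-certs.pem"   # → 2
# ... and the private key as the last block
tail -n 1 "$tmp/ssl-certs.pem"   # → -----END RSA PRIVATE KEY-----
```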

You like going deep and fixing stuff? We're always looking for great engineers! Check out our Job Openings.

Away with the pets – the cloud is more than server virtualization

The cloud brings new approaches to designing application architectures. One of them is often described as “Pet vs Cattle”. Read more

Executive Order weakens US data protection for foreigners

US President Donald Trump signed an executive order, which could further weaken the data protection level for non-US citizens (as reported by heise.de). Section 14 of the order of January 25 states:

Sec. 14.  Privacy Act.  Agencies shall, to the extent consistent with applicable law, ensure that their privacy policies exclude persons who are not United States citizens or lawful permanent residents from the protections of the Privacy Act regarding personally identifiable information.

This means US agencies shall not apply privacy regulations to foreigners. It might threaten the new Privacy Shield agreement between the EU and the US, which requires a data protection level comparable to EU standards in order to allow the transfer of personal data to the US. Companies might therefore face legal uncertainties in the future when they process personal data under US legislation. To avoid those risks, European companies should rely solely on European infrastructure providers.

Further details at heise.de (in German).