GitOps and the Rise of Everything-as-Code

GitOps is an increasingly popular term for a particular way of handling the operations of an application or IT system.

In this blog post, we will look at the definition of GitOps and at its most important core principle: Infrastructure as Code (often referred to as “IaC”).

GitOps is considered the future of operations. We will outline why, list the benefits of GitOps, and explain why you should consider going GitOps too!

What is Infrastructure as Code?

Before we dive deeper into GitOps, we have to understand the concept of Infrastructure as Code (IaC). IaC is a different way of managing IT infrastructure. Instead of performing manual tasks in a tool such as the Azure portal, the “state” of the infrastructure is centrally managed via a git repository. This could even be the same git repository that already hosts the code of an IT application. The engineer declares what the state should be, and the IaC tool takes care of achieving that state. We’ve written a guide to testing Infrastructure as Code that goes more into detail.

The opposite of the declarative approach is known as imperative. To understand the difference and the benefits of managing your infrastructure as code, consider the following example of running a virtual machine in Azure, first the imperative way and then the declarative way:

Imperative - the old way (self-written logic)

Retrieve all virtual machines. If there is no virtual machine in Azure, set up a new one; if it is already there, do nothing.

In this case, the engineer has to write logic that verifies the current state and whether the desired state is achieved, for example by calling the Azure CLI to list all virtual machines and checking whether the one that needs to be there already exists.

Even worse: in a pre-DevOps time this was a manual task done via calling the operations department: “Hey John, I need a new virtual machine in Azure!”.

Declarative - the new way (as code)

I want one virtual machine to be there in Azure.

The engineer only has to write the desired state: the tool (e.g. Terraform) takes care of actually achieving the desired state.

As you can see, managing infrastructure becomes a lot easier this way. The engineer no longer has to keep track of what is already there and what should be there. Instead, the engineer just declares what there should be, and the tool takes care of the rest.
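
The contrast can be sketched in a few lines of code. Note that the in-memory "cloud" and all names below are invented stand-ins for a real provider API, not a real SDK:

```python
# A toy in-memory "cloud", standing in for a real provider API.
cloud = {"vms": set()}

# Imperative: the engineer writes the checking logic themselves.
def ensure_vm_imperative(name):
    vms = cloud["vms"]            # retrieve all virtual machines
    if name not in vms:           # is the one we need already there?
        vms.add(name)             # no: create it
                                  # yes: do nothing

ensure_vm_imperative("web-01")

# Declarative: only state the desired result; a generic reconciler
# (the "tool", e.g. Terraform) figures out which actions are needed.
desired_state = {"vms": {"web-01", "web-02"}}

def reconcile(desired, actual):
    for vm in desired["vms"] - actual["vms"]:
        actual["vms"].add(vm)     # create missing VMs
    for vm in actual["vms"] - desired["vms"]:
        actual["vms"].remove(vm)  # delete VMs that shouldn't exist

reconcile(desired_state, cloud)
print(sorted(cloud["vms"]))       # ['web-01', 'web-02']
```

Note how the declarative version is idempotent: running `reconcile` a second time changes nothing, which is exactly the guarantee tools like Terraform give you.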

Read our blog post on providing a declarative API if you want to learn how and why we built an API that uses the declarative approach, as opposed to a regular imperative REST API.

Everything as Code

IaC has become such a success that we now see other aspects of IT being handled “as code” as well. This is usually described as “Everything-as-code” or “X-as-code”. The as-code movement has since made its way to CI/CD, security, and even code quality standards. The rule nowadays for developers is: if it can be managed as code, we do so.

I ran into an example of this myself just last week, in this case with the tool Netlify. Netlify is an easy way to build and host your front-end project. The screenshot below demonstrates how closely related “configuration as code” and the actual manual configuration in the user interface are:

Explanation of the “as code” concept
Examples like these show that engineers can do more and more in code, and less via manual work in various user interfaces.

What is GitOps?

So then, on to the main question of this post: what is GitOps?

It has to be said that although IaC and GitOps could be considered separate things, there is a good chance that people using IaC are already doing GitOps. If you have hooked up a CI/CD pipeline to your IaC repository, you’re already doing GitOps, nice job!

GitOps builds on top of the principles of Infrastructure as code (or anything as code). By declaring our desired state in code, we can now also hook up CI/CD tools to actually operate the desired state that is declared in our code.

To put it briefly: GitOps is doing IaC in Git and automating away the Ops through CI/CD. Together they become GitOps. Historically, Kubernetes was the first tool around which GitOps was heavily used.
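
To make the "automating away the Ops" part concrete, here is a deliberately simplified model of the loop. The `commit` and `pipeline_apply` names are invented for illustration; in practice the trigger would be a webhook firing your CI/CD pipeline:

```python
# A toy model of the GitOps loop: the git repository is the single source
# of truth, and every commit triggers an automated "apply" step.
repo = []            # commit history: each commit holds a full desired state
infrastructure = {}  # what is actually "running"

def commit(desired_state):
    """Merge a change into the repo (e.g. via a reviewed pull request)."""
    repo.append(desired_state)
    pipeline_apply()  # a CI/CD hook reacts to the new commit

def pipeline_apply():
    """The automated Ops part: make reality match the latest commit."""
    desired = repo[-1]
    infrastructure.clear()
    infrastructure.update(desired)

commit({"vm_size": "Standard_B2s", "vm_count": 1})
commit({"vm_size": "Standard_B4ms", "vm_count": 1})  # resize via a new commit
print(infrastructure["vm_size"])  # Standard_B4ms
```

The point of the sketch: nobody touches `infrastructure` directly. Every change goes through a commit, so the repository history doubles as an audit log of all operations.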

What are the benefits of GitOps?

Declaring and operating your infrastructure the GitOps way unlocks a lot of benefits. These are comparable to the benefits of tracking your application code in Git:

  1. Including our infrastructure as code in a git repository creates a single source of truth. By checking out the git repository, anyone at any time can figure out what the state of the infrastructure should be. There is no longer a Word document or Excel sheet lying around in some folder that supposedly contains this information.
  2. Declaring infrastructure as code, instead of writing custom code that rolls out the infrastructure, makes it less complex. This makes the infrastructure as a whole more readable. No more guessing what the code might be doing: it simply states what the state should be, which is much easier to grasp than imperative code.
  3. The infrastructure is as close to the application code as it can be. If possible, you can manage the infrastructure of your application fully in the same git repository as the application code that is run on the infrastructure. This also enables kickass workflows such as doing application & infrastructure changes in the same pull requests.
  4. When you write your infrastructure as code, you can reuse it. You could, for example, define your infrastructure once and deploy it to both a test and a staging environment from a single declaration.
  5. You are unlocking all benefits of using git. By tracking infrastructure changes in a git repository, all git benefits are automatically unlocked. Think about branching, tracking history, and most importantly, collaborative pull request workflows for reviewing and accepting changes.
  6. Given how CI/CD tools have risen alongside the success of Git, GitOps also provides excellent automation opportunities. Declaring your infrastructure as code is one thing; automating all other tasks away is the obvious next step to improve the way you do operations.

It’s pretty cool what people do with GitOps. For example, the contributors at Infracost combined the benefits of points 5 & 6 and built an open-source tool that estimates how much your infrastructure costs will increase or decrease when you open a new pull request in GitHub.
Tool estimates changes in infrastructure cost based on GitHub pull requests

GitOps enables agility

In the end, GitOps gives DevOps teams the same familiar workflow for managing IT infrastructure (and other aspects) that they are already used to for application code.

Need to change the size of the VM used? Just open up a pull request/merge request and the change will automatically be rolled out to the right environments.

The GitOps workflow ensures that teams get the same speed and agility they have when changing application code. This is key to business excellence, which is increasingly tied to a company’s ability to deliver software fast.



Cloud Foundation: 3 valuable Players on the Field

Building Cloud Foundations can be cumbersome and tedious. In this post we’ll introduce 3 of our offerings in that field that will make your life easier.

If you are an Enterprise Architect and/or responsible for your organization’s Cloud Center of Excellence or Cloud Foundation Team this post is for you.

It will show you ways to accelerate your Cloud Journey, provide insights on Cloud Foundation best-practices and finally demonstrate how our offerings play together to help you build the best Cloud Foundation.

The Cloud Foundation Maturity Model – Getting Clarity

As a member of your Cloud Foundation Team, you may have come across the Cloud Foundation Maturity Model (CFMM). It’s a strategic model that helps you understand the capabilities you’ll need to build up during your Cloud Journey, regardless of which clouds you use, where you’re currently at or what approach to implementation you have taken.

You can consider it the coach on the playing field. It gives you guidance, shows where you’ll want to go and provides insights into different implementation options that will get you there.

We have used this model successfully with leading cloud-native organizations from various industries.

Building Cloud Foundations is an Iterative Process

Lots of organizations we work with run thousands of applications in the cloud, managed by a central Cloud Foundation Team. However, they didn’t get there within a day. Implementing a Cloud Foundation is best done iteratively: you start with the basics and implement more and more sophisticated capabilities over time as you get to know more about your users and their requirements.

The Cloud Foundation Maturity Model helps you to break the journey down into small steps that systematically build upon each other.

It’s not only about how far along the Cloud Journey you are, but also what type of vehicle you’re using

When Cloud Foundation Teams are faced with large-scale cloud migration projects, they know that whatever Cloud Foundation capabilities they build will have to be highly automated in order to keep up with the coming scale of cloud demand.

That’s why we cover two dimensions when it comes to maturity within our Cloud Foundation Maturity Model:

  1. Journey Stage: Varying from Essential building blocks (1) to Industry Leading capabilities (5)
  2. Implementation Maturity: Varying from manual to fully automated

Assessing both brings you valuable insights when evaluating the current state of your Cloud Foundation and that’s exactly what the Cloud Foundation Maturity Model can be used for.

As a result, you’ll have a structured analysis of your Cloud Foundation, demonstrating your achievements as well as areas you’ll need to improve on a single page, which makes it a great tool for internal communication.

Collie – A Trustworthy Helper On Your Way To Cloud Foundation Maturity

Most organizations start their cloud journeys in high spirits on a green field. By the time you realize that you need proper cloud governance and decide to establish a Cloud Foundation, the once green grass has turned into a brownfield landscape. Collie is an open-source multi-cloud CLI that helps you explore and sort this landscape, preparing it for systematic management. It’s built for power users, single player.

Collie CLI easily provides multi-cloud transparency on cloud tenants, corresponding access rights, tags, and cost information. With regard to the Cloud Foundation Maturity Model, it helps you implement several of its building blocks.

To fully automate your cloud foundation capabilities and establish collaboration between the Cloud Foundation and your DevOps teams, you’ll need to go one step further and use a platform like meshStack that reflects this organizational model.

meshStack – Your Cloud Foundation Platform

At meshcloud, we focus on enabling DevOps teams to use cloud and speeding up their applications’ time-to-market. Providing self-service access to native cloud tenants like AWS accounts, GCP projects or Azure subscriptions is critical to achieve this.

We know that every organization is unique and comes with specific requirements, especially in regard to the integration of existing IT Landscapes. Our aim with meshStack is to help you get the basics right and enable you to build upon them.

That’s why a lot of Cloud Foundation capabilities come out-of-the-box, while others can flexibly be adapted to your organization’s needs.

In general there are 3 ways meshStack can help you to implement Cloud Foundation Building Blocks.

  • meshStack provides this Cloud Foundation Building Block out-of-the-box


    Example: When using meshStack in a multi-cloud setup, you’ll have a Self-Service Multi-Cloud Tenant Database, out-of-the-box.

  • meshStack enables strong facilities for an easy implementation of that building block


    Example: With Landing Zones and Services you’ll have everything you need to implement Modular Landing Zones.

  • There are established solution patterns that facilitate the implementation of this building block, when using meshStack


    Example: There is an established solution pattern that will help you provide managed Service Accounts with meshStack.

The following graphic demonstrates in detail, which Cloud Foundation Building Blocks are implemented by meshStack in which of the 3 aforementioned ways:


In The End It’s About the People

Our offerings aim to help organizations throughout their entire Cloud Foundation journey:

  • From understanding and defining the Cloud Foundation for your organization based on our Cloud Foundation Maturity Model
  • to analyzing and understanding existing cloud landscapes with Collie CLI
  • all the way to implementing large-scale Cloud Foundation Platforms to manage thousands of teams and applications based on meshStack.

However, establishing a Cloud Foundation within an organization is not only about building up the relevant capabilities and deciding on implementation options. It is also a huge organizational shift that requires the involvement of all stakeholders and great communication across teams and up to the C-level.

By allowing organizations to easily experience the power of self-service and providing means for internal communications, we want to enable Cloud Foundation Stakeholders to drive acceptance for change throughout the entire organization in order to pave the way for a new way to organize IT.


6 Things to Watch out for when Starting Your Cloud Journey

Enterprises plan their cloud transformation carefully and thoroughly. And that's exactly what they need to do in order to set their cloud journey up for success.

But the truth is that many organizations don't have a lot of experience when it comes to migrating to the cloud. They are up for a steep learning curve.

That's why we've compiled a list of 6 aspects you need to keep in mind when embarking on your cloud journey:

  1. Breaking up silo structures
  2. Assessing the technical expertise of your teams
  3. Understanding cloud vendor lock-in costs
  4. Understanding the shared responsibilities in the cloud
  5. Considering Managed Services
  6. Developing an agile, cloud-native way of working

Let's get to it:

1. Breaking up silo structures

Moving to the cloud requires a change in the organizational structure. Just signing a contract with AWS, GCP or Azure is not enough. Infrastructure silos focusing on databases, networks, and so on are not ideal, to say the least. Everybody working on an application has to communicate with those silos.

Developing and running applications in this scenario puts a lot of overhead responsibilities on the shoulders of DevOps teams. And it grows with each cloud platform they add.

Optimizing in silos can make each silo run their cloud platform perfectly but it won't remove inefficiencies in the overall cloud transformation effort.

A cloud foundation team that sees itself as an enabler for DevOps is the best practice. The cloud foundation can optimize for applications and go-to-market.

2. Assessing the technical expertise of your teams

You have decided on one or more cloud platforms - like AWS, Azure, or GCP - to migrate to and build on. It is now important to focus on assessing the technical expertise in your organization and upskilling your teams to enable them to work with these cloud platforms.

Migrating to the cloud will most likely - and this is often overlooked and not talked about - automate certain positions out of existence. But keeping skilled and qualified IT staff on board should be a priority: Identifying and reskilling people in these positions and offering them new and valuable opportunities within the organization is the way to go.

A cloud foundation team can offer consulting and training to support the ramp up.

3. Understanding cloud vendor lock-in costs

Enterprises must review and fully understand the costs that come with choosing a cloud service provider. The cost reduction promised by the cloud can only be achieved if the cloud transformation is done right and all costs are made explicit.

Going all-in with one cloud vendor leads to a strong dependence on their proprietary technologies. Switching costs are high and may prohibit the move to competing vendors further down the road.

Make sure to have a viable cloud exit strategy in place and go with a cloud governance solution that makes the organizational and technical transition to another vendor economically feasible.

In addition, being credibly able to switch providers gives you strong leverage in negotiations.

4. Understanding the shared responsibilities in the cloud

A general rule is that the cloud provider is responsible for the security of the cloud, while you as their customer are responsible for security in the cloud. However, there are other aspects that affect the shared responsibility model, e.g. the service model you are using (IaaS/PaaS/SaaS).

Here is an example: If you use a SaaS service like Google Docs, you don't have to take care of the VM images or databases the application runs on. If you deploy your own MySQL cluster to AWS EC2 instances, however, you are responsible for encrypting the connection via SSL/TLS.


Another important factor is to assign responsibilities clearly to the cloud foundation team and the DevOps teams. The cloud foundation can offer a security baseline with predefined cloud landing zones and takes care of organizational overhead. The DevOps teams have more freedom when working with the cloud - compared to the data center approach - and with that freedom comes the responsibility to take care of application security. The Cloud Foundation Maturity Model provides more insights on how to structure shared responsibility in a Cloud Foundation context.

5. Considering managed services

Migrating to the cloud is a major task in terms of organization, technology, and operations. Wanting to do everything in-house may be understandable but the already very busy IT teams just might not have the capacity or skill set to take on every project.

Making use of higher-level managed services may be the right choice to keep the cloud migration on track and within budget. You may want more than just infrastructure-as-a-service (IaaS) and more than just one cloud service provider: that is also why an abstraction layer that unifies all clouds down to a common denominator brings no value to your cloud transformation.

Even if you start off with a pilot project that your organization can handle capacity- and expertise-wise: The challenges will build up as you move on and broaden the scope of your cloud journey. That is a development we see quite often in the market - companies wasting time and money and then turning to external partners a good way down the road.

The same goes for intra-organizational services: Not every team should have to solve problems that other teams have already successfully overcome. Teams should be enabled to offer their solutions and services to other teams - via a cloud service marketplace - to push innovation and speed up development.

6. Developing an agile, cloud-native way of working

Going with a cloud strategy is only part of making full use of the competitive advantage the cloud can offer. Without an agile and cloud-native way of working, the potential will not be fully realized. Such a way of working is the prerequisite for moving actual workloads to the cloud and taking advantage of the scalability, flexibility, and speed the cloud can provide.

A cloud foundation or a cloud competence center should take care of the organizational overhead and enable developers to fully focus on their products.

A DevOps team lead should be able to provision a cloud account and deploy applications without interference from some kind of central IT. Offering self-service cloud account creation requires a high degree of automation. This reduces manual workload and thereby the "time-to-cloud" for developers. Using an existing ITSM tool for cloud resource provisioning, by contrast, seriously limits the usefulness of the cloud.

Moving to the cloud is a deep-rooted transformation of an IT organization and means fundamental changes in how things are done. A cloud foundation team needs to evangelize the use of the cloud and empower the teams on their way. It cannot be expected that everybody is on board with the cloud strategy right away. Some applications will have to be refactored, which is a lot of work; the transformation will only be successful if there are communication efforts to show that it’s worth it.


To learn more about the meshcloud platform, please get in touch with our sales team or book a demo with one of our product experts. We're looking forward to getting in touch with you.


The Cloud Foundation - Key to Cloud Excellence

Organizing your IT the cloud-native way: Read why the creation of a central cloud team needs to be a central piece in your cloud strategy.

A cloud journey is as much an organizational journey as it is a technological one. The establishment of a Cloud Foundation, also known as the Cloud Competence Center or Cloud Center of Excellence (CCoE), is a best practice to leverage the advantages of the cloud.

In this post on the Cloud Foundation, we want to introduce you to the core concept of such a team and why it makes such a big difference.

So let’s dive right in:

What is a Cloud Foundation?

A Cloud Foundation is a multi-disciplinary team of enterprise architects, developers and operators, network and security engineers, and system and database administrators. The team governs and enables the organization's cloud transformation process.

Enterprises that follow a cloud strategy to reduce costs and become more agile need to take on organizational transformation to leverage the cloud to the fullest.

One central piece of this puzzle is the creation of a central cloud team: Cloud Foundation, Cloud Competence Center or Cloud Center of Excellence - many names, one core concept: Centralizing cloud governance to enable more productive, more agile, and more innovative DevOps.

It is effectively the team that pioneers and paves the way that is used by DevOps teams to safely travel to and navigate the cloud.

The Advantages of a Cloud Foundation over Cloud Silos

Many IT departments in enterprises are still organized in so-called silos. When it comes to the cloud that may mean there is a division for Azure, one for AWS, and one for GCP. Every one of these silos has to build up know-how and tooling for proper cloud governance.

Developing and running applications in this scenario puts a lot of overhead responsibilities on the shoulders of DevOps teams. And it grows with each cloud platform they add.

Optimizing in silos can make each silo run their cloud platform perfectly, but it won't remove inefficiencies in the overall cloud transformation effort.

Global optimization is not possible in a silo structure: With a Cloud Foundation on the other hand you can optimize the entire cloud journey of your organization. The Cloud Foundation centralizes cloud governance competencies to enable and drive the cloud journey.

Cloud governance is not platform-specific, so it does not make sense to reinvent the cloud governance wheel for every platform in every silo. In a Cloud Foundation team, boundaries and best practices can be shared better and faster, leading to better platform-specific implementations.

Done well, this enablement function will achieve the following outcomes:

  • Accelerate cloud adoption across the organization
  • Enable and encourage innovation
  • Optimize costs (based on FinOps practices)
  • Minimize the risks involved (based on continuous compliance and compliance as code practices)

Done badly, however, a Cloud Foundation can actually end up acting as a barrier to consuming cloud within the organization.

It is important to have a value mindset and focus on the value the cloud can bring to the teams: if it is just a top-down company initiative, there will be resistance and you will lose speed.

The Main Tasks of a Cloud Foundation

Let's talk about more specific tasks for the Cloud Foundation team.

To achieve the goals we've talked about in the previous section a Cloud Foundation has to take on the following eight tasks:

  1. Implement cloud transformation and strategy
  2. Manage cloud cost and billing
  3. Manage and report cloud transformation
  4. Implement and enforce organization-wide cloud journey policies
  5. Provide guidance and training
  6. Manage cloud identities and access for DevOps teams
  7. Keep up with the latest cloud technologies
  8. Manage cloud security and compliance risks

How to build a Cloud Foundation Team

Ideally, you start building a Cloud Foundation team well before the first workloads are migrated to the cloud. But that doesn't mean there is a point at which it becomes too late to start: it is never too late to tackle the organizational transformation needed to fully capitalize on the competitive advantages of the cloud.

Let's say you have a small team that has a cloud-native use case, and they are the first lighthouse team going into the cloud. After this move, the organization itself learned a lot from actually doing and learning instead of planning a long time and never implementing anything. So sometimes the better approach is just trying because it puts you on a steeper learning curve. The cloud foundation team could be a team of architects accompanying the first five teams and then evaluate what went well, what bad, and how they can support others in building a solid practice on their cloud journey.

Regardless of the size of the business or the extent of its presence in the cloud, a Cloud Foundation team should start small. Build the team from developers, system administrators, network engineers, IT operations, and database administrators: start small, learn fast, and grow big!

Building a solid foundation first and then the house on top of it is definitely a good sequence of events. However, it is absolutely worthwhile to equip the building that houses your business success with a stable foundation, even after you have come to realize it might be built on softer ground.


To learn more about the meshcloud platform, please get in touch with our sales team or book a demo with one of our product experts. We're looking forward to getting in touch with you.


Sticky notes on a whiteboard symbolizing a tagging strategy workshop.

Your Path to a Winning Multi-Cloud Tagging Strategy

This is an introduction to cloud resource tagging and labeling: if you are concerned with building a cloud architecture, this blog post is for you! Tagging and labeling is an early-stage topic of your cloud journey. It forms the foundation of organized and structured growth.

In this post we will cover:

  • Why tagging is an integral part of every cloud journey
  • 5 steps to a winning cloud tagging strategy
  • Common use cases of cloud resource tagging
  • How to stay consistent across multiple platforms
  • How meshcloud takes tagging to the next level

What are Cloud Resource Tags?

A tag is a label assigned to a cloud resource to apply custom metadata. Anything is taggable - from the cloud tenant on the top level to resource groups to single resources like virtual machines and databases.

Tags come as key-value pairs:

The key describes the kind of tag, which is then further specified by its value. For example, the key could be environment and the values could be development, staging, or production.

There are two different kinds of tags: The ones that are automatically generated by the cloud service provider - e.g. instance or subnet IDs - and user-defined tags.

For this post, we'll focus on the user-defined tags since they enable us to freely enrich our cloud resources with the information we consider relevant.
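
In code, tags are typically just a string-to-string map attached to a resource. A minimal sketch (the resource names and tag values are invented for illustration):

```python
# Tags as key-value pairs attached to fictional cloud resources.
resources = [
    {"name": "vm-web",  "tags": {"environment": "production", "team": "shop"}},
    {"name": "vm-test", "tags": {"environment": "staging",    "team": "shop"}},
    {"name": "db-main", "tags": {"environment": "production", "team": "data"}},
]

def by_tag(resources, key, value):
    """Return the names of all resources carrying the given tag pair."""
    return [r["name"] for r in resources if r["tags"].get(key) == value]

print(by_tag(resources, "environment", "production"))  # ['vm-web', 'db-main']
```

This is exactly the kind of query that cost reports, access rules, and automation tools run against your tags, which is why consistency matters so much.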

Why a tagging strategy is an absolute must-have

One central advantage of using the cloud is rapid scalability. With it comes the necessity to keep track of what is going on in your cloud infrastructure while it is constantly growing and changing. That's where tags come in: you will need a consistent set of tags applied globally across all of your resources, following a consistent set of rules. Tagging is the cornerstone of effective cloud governance: cost allocation, reporting, chargeback and showback, cost optimization, compliance, and security can only be managed with proper tagging in place.

For example, you can build a cloud cost dashboard in under 10 minutes.

It all boils down to this mnemonic: tag early, tag often.

Five steps to a winning tag management strategy

Tagging early and tagging often requires a tag management strategy that streamlines tagging practices across all teams, platforms, and resources.

The cloud governance team - or cloud foundation team - should take the lead in defining your global tagging strategy.

Here are 5 steps to get you started:

  1. Bring the stakeholders together

    Get everyone involved in the process who will be using tags or might have something to contribute to the integration of the strategy in the overall company processes. Of course, these are DevOps representatives, but also non-technical roles from accounting or marketing or any other group using cloud resources. Meet as a group to get the full picture, hear everybody's concerns, avoid misunderstandings and save yourself the trouble of making changes later. If your organization already uses tags, start with auditing what is there.

  2. Understand the purpose

    It is important to have a common understanding of what problems cloud resource tagging is supposed to solve. Define these questions early on in the process - here are some examples of what they could be:

    Which business unit within the organization should this cost be charged to?

    Which cost centers are driving my costs up or down?

    How much does it cost to operate a product that I’m responsible for?

    Are there unused resources in my dev/test environment?

  3. Focus and keep it simple

    You will not be able to set up an all-encompassing tagging strategy that will be valid for eternity. So don't make that your objective - keep it simple and set your focus. To get started, choose a small set of required tags you will need in the short term and build on them as needed. Choose three to five pressing areas you want to understand. For example, focus on cost reporting and align these tags with internal reporting requirements. Aim for an intuitive system to build on - more layers and granularity can be added further down the road.

  4. Define the naming convention

    You will need to decide on a naming convention for your tagging system. This is the backbone of everything you're trying to accomplish with your tagging strategy and must be enforced globally. If your company uses multiple cloud platforms or is planning on doing so, take into account that the platforms have different requirements for character count, allowed characters, case-sensitivity, and so on. You can consult our tags and labels cheat sheet to help you with that.

  5. Document everything and make it count

    Make sure to document everything you agree upon in this cross-sectional team working on the tagging strategy. This documentation should cover the naming convention, the policies when to use which tags, and the reasoning behind these decisions.

An organization-wide tagging strategy should make sure that tagging stays consistent on a global level. But take into account that individual teams or applications may add additional tags for their specific needs as well.
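
A naming convention is only worth anything if it is enforced. Here is a sketch of a validator that a CI check could run; the concrete rules below (the key allowlist, lowercase values, 63-character limit) are examples you would adapt to your own convention and your platforms' constraints:

```python
import re

# Example convention: keys from a fixed allowlist, values limited to a
# conservative charset that the major platforms all accept.
ALLOWED_KEYS = {"environment", "cost-center", "team", "confidentiality"}
VALUE_PATTERN = re.compile(r"^[a-z0-9][a-z0-9._-]{0,62}$")

def validate_tags(tags):
    """Return a list of violations; an empty list means the tags are valid."""
    errors = []
    for key, value in tags.items():
        if key not in ALLOWED_KEYS:
            errors.append(f"unknown tag key: {key!r}")
        if not VALUE_PATTERN.match(value):
            errors.append(f"invalid value for {key!r}: {value!r}")
    return errors

print(validate_tags({"environment": "production"}))  # [] -> valid
print(validate_tags({"Env": "Prod!"}))               # two violations
```

Running a check like this in every pull request is a cheap way to keep the documented convention and the actual tags from drifting apart.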

Common Use Cases for Cloud Resource Tagging

We've been talking about how tagging is essential and why coming up with a tagging strategy should be an early-stage step in setting up your cloud governance.

Here are the most common use cases to show you why:

  1. Cloud Cost Management

    Gain transparency when it comes to cloud usage and costs: Tagging cost centers, business units, and specific purposes helps you keep track.

  2. Cloud Access Management

    Proper tagging enables administrators to control the access of users or user groups to resources or services.

  3. Cloud Security Management

    Tags are essential for identifying sensitive resources and keeping them secure. For example, tagging the confidentiality classification helps you find the S3 bucket that's public and definitely shouldn't be - or prevent that from happening in the first place (we'll come to that later).

  4. Automation

    The added metadata of tags enables a whole new level of automation: Many different automation tools can read these tags and utilize them to make your life easier in almost every regard concerning the previously mentioned use cases.
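The security use case above can be sketched in a few lines of Python. The resource data, tag names, and tag values below are made up for illustration; in practice an inventory like this would come from your cloud provider's APIs:

```python
# Hypothetical inventory snapshot with tags and exposure status.
resources = [
    {"name": "marketing-assets", "public": True,
     "tags": {"confidentiality": "public"}},
    {"name": "customer-exports", "public": True,
     "tags": {"confidentiality": "confidential"}},
    {"name": "hr-documents", "public": False,
     "tags": {"confidentiality": "confidential"}},
]

def find_exposed(resources):
    """Return names of resources that are publicly reachable
    although their confidentiality tag says they shouldn't be."""
    return [r["name"] for r in resources
            if r["public"] and r["tags"].get("confidentiality") != "public"]

print(find_exposed(resources))  # ['customer-exports']
```

The same cross-check of tags against actual configuration is what automation tooling does continuously and at scale.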

Challenges of Tagging in Multi-Cloud Architectures

Most companies use multiple cloud platforms and - in one way or another - struggle with the governance of their cloud architecture. Tags are here to help! BUT there are a few caveats that you need to be aware of to actually make things better.

Each cloud platform has its own tagging constraints - Google doesn't even call them tags but labels.

These questions will come up:

  • How many tags per resource are possible?
  • How many characters per tag and which characters are not allowed?
  • Is there a difference in requirements for keys and values?
  • What exceptions are there?

To help you with that we've created our Cheat Sheet for Tags and Labels on Cloud Platforms. There you can look up the differences in Azure, AWS, and GCP tagging and labeling.
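As a rough illustration, such constraints can be encoded and checked programmatically. The limits below are representative examples only - always verify them against the current provider documentation (or the cheat sheet above):

```python
# Representative tag/label constraints per platform (illustrative values;
# check the provider docs, as these limits can change).
CONSTRAINTS = {
    "aws":   {"max_key": 128, "max_value": 256},
    "azure": {"max_key": 512, "max_value": 256},
    "gcp":   {"max_key": 63,  "max_value": 63},
}

def violations(platform, key, value):
    """List which constraints a tag key/value pair would break on a platform."""
    c = CONSTRAINTS[platform]
    problems = []
    if len(key) > c["max_key"]:
        problems.append("key too long")
    if len(value) > c["max_value"]:
        problems.append("value too long")
    if platform == "gcp" and not key.islower():
        problems.append("gcp labels must be lowercase")
    return problems

print(violations("gcp", "CostCenter", "dev"))
```

Running a check like this before applying tags catches cross-platform problems early instead of at deployment time.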

Consistency in the usage and naming of tags becomes even more important when working in a multi-cloud architecture. This is especially critical if you want to run queries based on tags - inconsistencies and typos can defeat the whole point of what you were trying to achieve.

Making the Most of Tagging with meshcloud

Now that we've covered what tags are, what they are good for and how to create a tagging strategy to drastically expand the possibilities for cloud governance, we'll talk about how meshcloud takes this to a whole new level:

With meshcloud, cloud governance or cloud foundation teams can define tags globally in one single place. This is incredibly helpful in keeping tags consistent across all platforms, teams, and resources.

meshcloud enables you to set and enforce tag formats, patterns, and constraints globally and make them work with all cloud platforms. With meshcloud, you define your tags as JSON, and these can be entered in the UI either by employees themselves or by administrators only.

Tag definition example, in this case for classifying the environment of a project.

The JSON you define will render into a UI for your users.

meshcloud enables cloud foundation teams to enforce possible tag values at a very granular level. You'll never have to worry about team members making typos or using different values for your tags. It is even possible to enforce the format of values using RegEx. For example, if your cost centers look like ACME-12345, you can enforce this format globally for all clouds.
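Such a format check boils down to a single regular expression. A minimal sketch in Python, using the ACME-12345 cost center format from the example above:

```python
import re

# Pattern for cost centers of the form ACME-12345 (example format from above).
COST_CENTER = re.compile(r"ACME-\d{5}")

def is_valid_cost_center(value):
    """True only if the whole value matches the required format."""
    return COST_CENTER.fullmatch(value) is not None

print(is_valid_cost_center("ACME-12345"))  # True
print(is_valid_cost_center("acme-12345"))  # False - wrong case
print(is_valid_cost_center("ACME-123"))    # False - too few digits
```

Note the use of `fullmatch`: a partial match like `ACME-12345-extra` should be rejected, not silently accepted.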

And, remember when we discussed tag constraints on cloud platforms? We've got you covered here. If a tag value is not valid in a cloud platform, meshcloud automatically converts it to a valid value inside the cloud. For example, GCP would not allow www.meshcloud.io as a value. It will automatically be converted to www_meshcloud_io, which is a valid GCP value.
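The kind of conversion described here can be sketched as follows. This is a simplified illustration of the idea, not meshcloud's actual implementation; it relies on the fact that GCP label values only allow lowercase letters, digits, underscores, and hyphens:

```python
import re

def to_gcp_label_value(value):
    """Lowercase the input, then replace characters that are
    invalid in a GCP label value with underscores."""
    return re.sub(r"[^a-z0-9_-]", "_", value.lower())

print(to_gcp_label_value("www.meshcloud.io"))  # www_meshcloud_io
```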

Implementing your global tagging strategy across all clouds is not the only value meshcloud has to offer. With our policies, we enable our customers to set and enforce rules based on tags across all platforms, teams, projects, and landing zones. This gives cloud foundation teams even more control over who has access to what. For example, you could enforce that a certain Azure blueprint is only used for production projects. Or you can enforce that teams only create projects for the environments they have been approved for. This makes sure that teams will not create production projects without being approved first.

Authors: Wulf Schiemann and Jelle den Burger


To learn more about the meshcloud platform, please get in touch with our sales team or book a demo with one of our product experts. We look forward to getting in touch with you.


Cloud Exit Strategy: Ensure Compliance and Prevent Vendor Lock-in

A clear cloud exit strategy is absolutely necessary when moving to the cloud. Ensuring both compliance and business continuity - and avoiding vendor lock-in - are the primary reasons.

Today, large companies can no longer do without the advantages of the cloud. The competitive pressure of digitalization requires migration to the cloud. At the same time, high regulatory demands are being placed on outsourcing - especially in critical industries. And using public cloud is outsourcing!

Placing oneself in the hands of the hyperscalers involves risks. You generally don't have to worry about security: Amazon, Microsoft, and Google do a lot to keep their offers safe. Compliance and vendor lock-in are a different matter: It is important to clarify which data and solutions end up with which cloud providers. For this purpose, a clear exit strategy is required in addition to the (multi-)cloud architecture and the cloud sourcing strategy. A high-profile example is that of Dropbox leaving AWS in favor of hosting their workloads themselves.

Regulatory Requirements

In certain critical industries, a documented exit strategy is more than just a good idea: it is a regulatory requirement.

The banking sector is one of the most heavily regulated industries. The regulators also deal with the cloud use of companies. The European Banking Authority, for example, requires an exit strategy for outsourced critical or important functions in its EBA Guidelines on outsourcing arrangements under chapter 15. This includes the use of the public cloud.

The German financial supervisory authority BaFin also prescribes the development of an exit strategy in its banking supervisory requirements for IT.

4 aspects of vendor lock-in

Vendor lock-in means not being able to shift workloads from one vendor to another without too much hassle. The recent downtime at IBM shows the great advantage of being able to do this.

There are several possible reasons for not being able to do so:

  • Cost
  • Contracts
  • Skills
  • Technology

Cost is a major factor in preventing migration from one vendor to another. The vendor might charge for exporting data. In addition, costs will pile up for training staff, consultants, and lowered productivity. The larger the workload, the larger the costs. A good example to look at is Netflix: The streaming service is all in on AWS and won't be able to change that - at least not at reasonable cost.

Contracts can play a big role in vendor lock-in. Some cloud service providers make it hard to decide on a migration to an alternative vendor by implementing a continuously upward pricing model that aims at drawing their customers deeper and deeper into a factual lock-in. At some point, a partial exit may no longer be economical, and a complete and difficult withdrawal from the whole contract may be the only option.

Skills play a big role in migrating and operating workloads. Cloud architects, DevOps teams, and security experts are specialized and it takes time and money to shift that knowledge to newly adopted cloud platforms. That can be a major hurdle when considering leaving a vendor for another. Going multi-cloud from the start provides companies with a larger talent pool and that takes the trouble out of transitioning a little bit.

Technology causes vendor lock-in as well - at least when it comes to proprietary technology vendors use to differentiate. On the one hand, that's great and can offer a competitive edge. On the other hand, it can get companies locked in on this technology and hinder the adoption of the next big thing in cloud technologies.

The 4 key aspects to every cloud exit strategy

So here are 4 aspects you will have to have an eye on when building your cloud exit strategy:

  1. Most importantly: Take inventory! Knowing your assets is essential. Exit strategies often apply to critical business functions only. So it’s important to know what you have running in which cloud – an up-to-date cloud inventory is of great help.
  2. Open-source infrastructure is key. Open-source infrastructure components like Kubernetes or OpenShift clusters or open-source databases can make a move between clouds much easier. The more proprietary services you use, the harder it will be to adapt your application to run in a new cloud environment.
  3. Go multi-cloud from the beginning. Contract negotiations between enterprises and cloud providers can take a while. It's too late to start the process when it's actually time to move.
  4. Watch out for organizational lock-in. Even if from a technical perspective your application can easily be moved to a different cloud provider, there's more to it. If you are running cloud applications at scale, setting up the corresponding cloud environments and transferring permissions and configurations comes with massive complexity. Use a centralized governance system like meshcloud to keep your organizational structures independent from specific providers.

To learn more about the meshcloud platform, please get in touch with our sales team or book a demo with one of our product experts. We're looking forward to getting in touch with you.


Cloud Outlook 2020

(Multi-)Cloud Outlook in 2020

New challenges and developments in multi-cloud.

At meshcloud, we are proud to look back on a successful year 2019, in which we won multiple new customers. During this time, we've had the great chance to accompany different types of companies in their cloud transformation. While our customers belong to many different industries, e.g. Automotive, Financial Services or Utilities, we've been able to observe many similarities in their move to the cloud – showing the relevance of cloud management and cloud governance among large organisations.

At meshcloud, we support large organisations in their cloud transformation. Our platform provides consistent governance with cross-platform Identity and Access Management, Tenant management, Cloud Security & Compliance and Cost Management and is used by large enterprises to deliver secure cloud environments to thousands of teams while staying in control and reducing complexity.

Based on our experiences in this past year, here are some predictions on upcoming cloud challenges in 2020:

Experiments come to an end: Bringing the Cloud Journey to the next level

A lot of companies have surpassed the experimental stage of using cloud. They are moving towards production deployments of cloud applications, have to manage rising costs, and face tightening cloud security requirements. While in the beginning of a cloud journey we often face scattered cloud initiatives across different parts of the organisation, we now see more and more companies consolidating these initiatives and bringing them to the next level, setting up professional guardrails and preparing for wider user groups. One of the biggest challenges in this move is a reorganisation of responsibilities, especially in terms of cloud security, as usage will now extend beyond a couple of cloud experts.

In order to fulfill these requirements, there will be a consolidation of divergent cloud governance activities, bringing them together and solving them consistently across different cloud providers, often conducted by a dedicated team of (multi-)cloud experts. These new solutions have to be set up for scale in order to address the rising demand for cloud resources. Self-service and automation help to accelerate the time-to-cloud to a couple of minutes for increased productivity.

Watch out: Organizational lock-in is on the move

Obsessed with avoiding vendor lock-in on the workload level, a lot of companies have managed to transfer workloads to infrastructure like Kubernetes and data to open-source databases like MySQL, PostgreSQL or MongoDB. At the same time, the maturity of managed Kubernetes offers has greatly improved. However, as the use of cloud grows, organisational processes and integrations towards cloud platforms are being established. And they are a new way for cloud providers to keep their customers close. As most organisations use more than one cloud provider to meet all their requirements, they are confronted with multiple organisational models. This is rather challenging, as the integration of new platforms becomes time- and resource-consuming and slows down the actual adoption of the new technologies.

We are very happy to be working on a declarative definition of organizations ("org-as-code") that enables companies to define their organization once, while meshcloud takes care of translating this into the different cloud platforms. The concept is known from infrastructure as code and based on the idea of defining a desired target state in an abstract way, while the system works out the specific implementation in the background. With this in place, it becomes very easy to integrate new cloud platforms and provide technological freedom. (Learn more about org-as-code in our multi-cloud maturity model.)
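The underlying idea can be sketched in a few lines: declare the desired target state once, and let a reconciliation step work out the necessary operations. The structures and names below are illustrative, not meshcloud's actual model:

```python
# Desired target state, declared once (illustrative structure).
desired = {"projects": {"webshop", "analytics", "ml-experiments"}}

# Actual state as reported by one cloud platform.
actual = {"projects": {"webshop", "legacy-app"}}

def reconcile(desired, actual):
    """Compute which projects to create and which to remove so that
    the actual state converges to the desired state."""
    return {
        "create": sorted(desired["projects"] - actual["projects"]),
        "remove": sorted(actual["projects"] - desired["projects"]),
    }

print(reconcile(desired, actual))
# {'create': ['analytics', 'ml-experiments'], 'remove': ['legacy-app']}
```

Because the same desired state can be reconciled against each platform's actual state, adding a new cloud platform only requires a new translation layer, not a new organizational definition.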

Cloud Security Complexity is bursting

When cloud is widely used across the organisation and the number of cloud projects is continuously growing, it is essential to have clearly defined cloud security concepts as well as proper process documentation to keep transparency on ongoing activities. Especially cloud access management (e.g. provisioning and retrieving access rights to cloud tenants) must be handled consistently, as the amount of managed permissions grows exponentially over time: more users, more projects, more cloud platforms. An easy and highly automated approach will help to keep control and enables continuous governance in order to comply with cloud security frameworks and certifications.

At the same time, multi-cloud teams will have to help cloud users and DevOps teams to implement cloud security concepts (e.g. via landing zones in their cloud tenants) to achieve consistent security configurations across accounts and help them deal with the increasing responsibility of DevOps practices.

Therefore, for 2020 we see an increased demand for clear governance of (multi-)cloud environments. Consistent cloud configurations and policy definitions will be key as companies scale up their use of cloud. Not only do they have to be controlled once implemented, their definition and scalable rollout will be in focus in order to provide a basic cloud security level for all cloud activity, which helps to align cloud configurations and accelerate basic cloud security assessments, even in regulated environments.

Are these topics relevant to you in 2020? At meshcloud, we provide best practices for multi-cloud management and multi-cloud governance, integrated into a powerful software platform. Feel free to reach out and get in touch with our multi-cloud experts or book a demo.


Make or Buy Assessment for Multi-Cloud Management

Multi-Cloud Management – Make or Buy?

tl;dr:

Don't build your own (multi-)cloud management platform. We all face limits in time and resources. Don't invest this time and resources into a solution that each of your competitors will have to build, too. Focus on your core competencies to build products your customers love and make a difference in today's competitive market environment.

The start of a cloud journey

Most organizations start off using a single cloud provider. They go for cloud to leverage the benefits in speed and scale those technologies provide: on-demand access to a large pool of resources, a broad offering of stable services waiting to be used in new innovative applications, and finally a cost model that reflects actual resource usage rather than a big block of inflexible fixed costs.

You have to balance agility and control to use cloud in the enterprise


Unfortunately, in an enterprise world the cloud cannot be integrated by pulling out the company credit card and registering for an AWS account (replace AWS with any other cloud provider of your choice). Instead, the new provider will be integrated into an existing process flow: regulations have to be respected, security guidelines followed, considerations on alternative providers carried out and possible exit strategies evaluated. And rightly so: The gain in agility and speed has to be balanced with more control or (for those who don't like this word) transparency on responsibilities.

Silos, fragments and growing pains

Cloud Silos
A lot of companies initially choose to build this integration of new technologies into the existing organization themselves. This may work well in the beginning, while the number of users and projects is small, only one platform has to be integrated, the cloud is mainly used by experienced cloud-native developers and the first compliance audits haven't happened yet. At scale, such efforts can easily result in chaos and intransparency (have a look at our cloud maturity model to learn more about the different stages of multi-cloud management).

One of the reasons this happens is that there is neither a comprehensive strategy behind the activities nor a central team accountable for the result. The solution mainly emerges from bits and pieces needed on the way that result in heterogeneous cloud silos (Isn’t this how we started?).

  • Each cloud platform will follow its own process. Teams implementing the integration will do duplicate work. And teams that are new to cloud may have a hard time getting started, as they won't know how and where.

  • Only parts of cloud management will be covered, wherever the pain is most pressing: e.g. IAM integration, provisioning of cloud tenants, configuration of cloud tenants, billing. Mostly the former ones, as this is where the journey starts.

  • In the end, the use of cloud is part of a comprehensive IT strategy. However, its effectiveness can only be evaluated if you actually get an overview of the acceptance of the new technologies, the resources running in different clouds, the type of services required by your teams, etc.

  • There are some processes in place that will help you at first, but

    • are they documented and resilient enough to withstand audits or regulatory requirements?
    • are they really cloud-native, e.g. resulting in a time-to-cloud of seconds or minutes?

Why it doesn't make sense to build your own cloud management tool

Apart from the result being different from what you expected, and a high probability that you'll find yourself starting all over again trying to integrate the partial and provisional solutions into a comprehensive view of your (multi-)cloud environment, there are a couple of reasons why it doesn't make sense to build your own multi-cloud management tool in the first place.
Cloud Management Make or Buy

1) Time is against you

Building your own tool for multi-cloud management will take too much time. Planning the project, finding the required resources or building up specific cloud know-how, and implementing it will take time, even if you are fast and have unlimited monetary resources. You don't have this time, and most probably you also don't have unlimited resources.

No matter what industry you are in, you are probably facing strong competition. Traditional players like banks or automotive companies in particular have to fight for their markets, reinvent themselves and use their competitive advantages (extensive know-how, a large customer base, a known brand and a large amount of data to work with) to stay in the game.

The competition is strong and won't wait for you. And in regard to IT, you start under unfair conditions. While new players in the market can start building their business from scratch and in a cloud-native manner, you are carrying a heavy backpack of legacy infrastructure and processes.

To keep up, it's not enough to make your existing processes a bit better, you'll have to build new cloud-native processes.

2) Developers are hard to find

Finding qualified IT specialists is tough and probably one of the biggest challenges every organization, small or large, faces nowadays. To build a multi-cloud management platform, you'll need platform experts for the cloud platforms you aim to use. As this hasn't been your focus so far, you'll be looking to hire them, and they are even harder to find than regular software engineers.

Of course, you could bring in external resources that have the skills and know-how to help you out faster. However, they are expensive and, more importantly, not sustainable. Building a cloud management tool is not a one-off project. Working with cloud resources is more complex than ordering physical servers, a phone or a monitor. Everything you deal with in the cloud has a lifecycle - users, cloud projects, permissions, certifications and so on - and your tool will have to take care of it. This requires continuous governance and transparency on the state of your multi-cloud environment.

Furthermore, the cloud-native ecosystem is incredibly dynamic. Requirements for cloud management and governance evolve over time, new platforms pop up on the market, new regulations will be defined, and you'll have to comply.

These circumstances call for a software service that comes with updates and adapts to a fast-changing environment like the one we face here.

3) You have a business to run

We've figured out that time and resources are limited for all of us. But there is another point that shouldn't be neglected in a make-or-buy assessment. And that is something you definitely learn as an entrepreneur, but which is true for organizations of any size: With limited time and resources, you have to focus on your core competencies.

And these are most probably not multi-cloud integration and governance best-practices (for us they are), but rather products and services in your industry. Invest your resources in activities that create value for your customers. Work on innovation and new business models that will help you to make a difference on the market.

If your customers are IT teams, take them on the cloud journey and provide them with the tools they need to deliver software faster. It is your job to make it fun to use the cloud, to attract new talent because of the freedom and possibilities IT teams experience in your company.

Having a tool in place that takes care of the basics will relieve you from being a bottleneck and allow you to focus on the specifics of your organization: which services you want to offer, how to get cloud newbies on board, architectures for newly built applications, security and network configurations for your specific setup, and providing insights for better stability or investment decisions.

About meshcloud

meshcloud helps organizations leverage cloud technology in a cloud-native way. Our platform provides consistent governance with cross-platform Identity and Access Management, Tenant Management, Compliance & Security and Cost Governance. Large enterprises use meshcloud to deliver cloud environments to thousands of teams while staying in control and reducing complexity.

Please reach out to us for further information. Our team is more than happy to give you a free introduction to our platform.


Multi-Clouds in Banking – 3 Takeaways from the 2nd EBF Cloud Banking Conference

It’s already been a while: On July 9th, I visited the 2nd EBF Cloud Banking Forum in Brussels. The topic of the day: “Shaping a Multi-Cloud Environment”. Representatives of banks, regulators and cloud providers met to discuss how cloud computing can be used in the financial services sector, with one common goal in mind: to provide secure banking services to the people. With all the FinTechs and Neobanks showing up all over the place, it is obvious that traditional banks have to adopt new technologies to stay competitive in the market. The EBF Cloud Banking Forum brings this discussion to the European level. With joint forces, we can not only be an example to other countries, but also a pioneer in regard to other sectors.

The adoption of cloud technologies comes with some hurdles and quickly raises questions for the users: “How do I stay compliant with existing regulations?” as well as for the regulators: “How do I make clear and transparent rules?” and “How are we going to treat the overconcentration on the market?” There are basically only a handful of cloud providers to choose from, and implementing a multi-cloud strategy is one way to act against overconcentration. However, it also raises the administrative complexity of adopting cloud. Therefore, proper cloud management has to be put in place to help banks avoid having more complexity than before. From a whole day of intensive discussions, I’ve put together the three main takeaways of the day.

Identity and access management (IAM) is crucial to a bank’s cloud transformation

This topic is not unique to external environments like clouds. Defining how to manage identities and access to infrastructure resources is, and has been, a complex issue for many years. In order to mitigate the risk of abuse or unauthorized access to confidential information and critical systems, access rights have to be reduced to a minimum, following the principle of least privilege. A multi-cloud environment may raise the complexity of achieving this.
On the one hand, you want to make it easy for your users to access cloud resources, in order to speed up software development and lower the barrier for cloud adoption. On the other hand, you need to be in control of the access process, make sure it is documented and auditable to avoid the risk of undiscovered information leakage.

With meshcloud, we help large organizations like banks with a governance framework for multi-cloud environments. Our platform plugs into the existing organizational structures and identity providers like Azure AD or LDAP and unifies the way access and permissions are handled across different cloud providers. As a result, we use existing identities and provide secure SSO access to all cloud providers as well as a platform to manage fine-granular permissions centrally before replicating them to the attached platforms.
In addition, cloud providers like AWS and Azure have published services like AWS CloudTrail that monitor access and operations related to your cloud infrastructure.

An exit strategy is necessary to avoid vendor lock-in to a single cloud provider

The EBA guidelines on outsourcing arrangements include a paragraph on exit strategies, which requires banks and other financial institutions to have a well-documented exit strategy in place that ensures business continuity in case of an interruption of outsourced services. This can be the result of a failure, malfunctioning service offers as well as contractual or legal obstacles in regard to the service provider. These scenarios should be well documented and practiced, as it is crucial to be familiar with the workflows and know what to do in such exceptional circumstances.

While multi-cloud is a way to distribute the risk across more than one service provider, there are strategic steps to be considered when deciding on a sustainable cloud strategy. The use of open-source infrastructure components on different layers, such as Kubernetes, PostgreSQL or NGINX for example, can facilitate migrations from one cloud to another drastically. They can even be operated in a private datacenter. However, most companies decide on a multi-cloud strategy as a result of a best-of-breed approach. They want to use each cloud for the specific (and often proprietary) services it offers, for example in the field of machine learning or artificial intelligence. The use of such tools can accelerate development because they bring a lot of functionality out-of-the box and enable teams to focus on truly differentiating functionality. An assessment of the criticality of applications as well as the consequences of the failure of such proprietary services should be considered within an exit strategy.

At meshcloud, we help companies to avoid vendor lock-in by facilitating the use of multiple cloud platforms and the distribution of infrastructure and applications across them. We define organizational structures (teams, projects, users, permissions) in a declarative manner to enable our customers to integrate new service providers very fast and with little administrative overhead. To address the rise of higher-level infrastructure services, such as managed databases, message queues or machine learning and AI services, we offer a service marketplace that enables customers to provide a large variety of services based on an open standard (OSB API). This unifies the process of provisioning services and decouples it from the underlying infrastructure.

Configuration is key

A recent study by KPMG found that 80% of IT security incidents happen due to manual misconfigurations. By spreading infrastructure across different service providers, it gets even harder to keep control of the correctness and consistency of cloud configurations - and this does not yet consider any application-specific configurations.

To improve the security and compliance of cloud environments, our landing zones help to configure cloud accounts upon creation. This enables our customers to roll out consistent configurations in automatically created cloud accounts, according to their use case (test, development, production). To give an example, these configurations can limit the use of cloud infrastructure to specific geographical regions or blacklist certain services that are not compliant. By rolling them out consistently across all cloud accounts, our customers can relieve their development teams from defining compliant configurations individually and instead provide them with a framework that has been approved by the security department.
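Conceptually, such a landing zone policy check can be sketched like this. The environments, regions, and service names are made up for the example:

```python
# Illustrative landing-zone style policy: allowed regions and a
# service blacklist per environment (values invented for the example).
POLICIES = {
    "production": {"regions": {"eu-west-1", "eu-central-1"},
                   "blacklisted_services": {"experimental-ml"}},
}

def check_request(environment, region, service):
    """Return a list of policy violations for a resource request."""
    policy = POLICIES[environment]
    violations = []
    if region not in policy["regions"]:
        violations.append(f"region {region} not allowed")
    if service in policy["blacklisted_services"]:
        violations.append(f"service {service} is blacklisted")
    return violations

print(check_request("production", "us-east-1", "experimental-ml"))
```

Evaluating every request against a centrally approved policy like this is what spares individual teams from re-deriving compliant configurations on their own.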

To learn more about the meshcloud platform, please get in touch with our sales team or book a demo with one of our product experts. We look forward to getting in touch with you.


Multi-Cloud Stakeholders

Multi-Cloud Stakeholders – Why Multi-Cloud concerns the whole organization

In times of digital transformation, multi-cloud strategies are often implemented top-down. The CIO decides: "We are going to implement a multi-cloud strategy, because we want to avoid vendor lock-in to a single platform." While this is a very legitimate reason for a multi-cloud strategy (more can be found here), the actual use of multiple cloud platforms touches many stakeholders within the organization. They have to be taken on the journey, and their everyday work has to be considered to make the strategy bear fruit. Depending on your position within the company, your interest is either in agility or in control. So while a developer is striving to write new code rather than dealing with repetitive approval tasks, IT managers or security officers are scared to lose control due to increasing complexity.

stakeholders of multi-cloud environments

IT Leaders

With the digital transformation, the role of software increases for companies of all industries. Software delivery performance has become the key metric to improve in this context. If you want to be better than your competition, you have to build up capabilities to develop new applications and bring them to market faster than they do. As an IT leader, it is your responsibility to provide the appropriate tools to your development teams, from cloud computing platforms like Kubernetes to state-of-the-art tooling for CI/CD, monitoring, logging etc. - the whole cloud-native stack. Keeping control of the occurring costs and making sure the tools are easily accessible to your teams to avoid shadow IT are also part of your responsibilities.

Developers

In times where software developers have become a scarce resource and enterprises compete for the best talent, providing an attractive work environment is key to success. Ask five developers what they consider important in their work environment: being productive and having the right tools to work with will be among their answers. Apart from bad tooling, organizational inefficiencies can cause productivity losses. Dependencies between departments for routine tasks like the creation of cloud accounts, configuration changes, or quota management may lead to bottlenecks and unnecessary idle times. Self-service capabilities that provide autonomy help to solve this and drive productivity as well as employee satisfaction. They are at the core of cloud computing technologies, according to the NIST definition.

Cloud Operators

Being in operations, the main goal is platform and service stability. Night shifts, or at least on-call duty, are common practice, as service disruptions carry a high cost risk: customers may not be able to purchase or use a service during downtime, support incidents pile up, and a large amount of time has to be invested even after service recovery. To avoid these incidents, infrastructure has to be set up redundantly, aiming for high availability, and staff should be trained for failures so that everyone involved knows what to do in such high-pressure situations. An important prerequisite is a consistent environment that doesn't depend on individuals – automation, version control, and documentation are important here.

But operations doesn't only happen in emergency night shifts. During the day, ops teams are supposed to provide a set of services to development teams. The reality, however, is often different: instead of evaluating and provisioning new cloud services or automating tasks, ad-hoc processes and manual routine work fill the operators' days. This leads to inconsistencies and to impatient developers finding detours – shadow IT – to achieve their goals, which is a huge security risk. In multi-cloud environments, heterogeneous processes increase this complexity drastically if not taken care of, especially at scale. Again, reducing dependencies helps to avoid bottlenecks. Furthermore, it should be easy to provide services to developers. How? With a standardized framework for service provisioning like the meshMarketplace, operators can easily offer a service – e.g. a database or a logging framework – to all development teams. While operators focus on the stability of each service and the addition of new ones, the marketplace informs development teams about a service's existence and boosts its reusability.

Controllers

Multi-cloud can be used as a tool to reduce cloud costs. In many cases, however, the tools to define, compare, or evaluate the cost of different projects, applications, or deployment environments are missing or immature. Costs often occur in large vendor-dependent blocks that are hard to map and distribute to specific applications. Take AWS or Azure cloud spend as an example: a budget is set aside, and only when it is exceeded is action taken. In these cases, DevOps teams are asked to manually check their resources, to make sure they are all actually needed and no zombie workloads exist.
In private clouds it gets even worse. Platforms used in private cloud contexts often don't even provide usage information or produce an invoice. As a result, companies don't know the cost of deploying an application internally, let alone how to evaluate the most cost-efficient deployment environment for an application. To leverage the financial benefits of multi-cloud strategies, cost transparency is necessary: private clouds need usage metering and billing capabilities, and cloud spend has to be allocated to the actual applications using the resources.
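The allocation step described above can be sketched in a few lines: raw billing line items are mapped to applications via tags, and anything untagged surfaces as a separate bucket worth investigating (e.g. zombie workloads). This is a minimal illustration with hypothetical field names, not any vendor's actual billing schema.

```python
from collections import defaultdict

def allocate_costs(line_items):
    """Sum cost per project tag; untagged spend is collected under 'UNALLOCATED'.

    Field names ('cost', 'tags', 'project') are hypothetical example keys.
    """
    totals = defaultdict(float)
    for item in line_items:
        project = item.get("tags", {}).get("project", "UNALLOCATED")
        totals[project] += item["cost"]
    return dict(totals)

bill = [
    {"cost": 120.0, "tags": {"project": "webshop"}},
    {"cost": 30.5,  "tags": {"project": "webshop"}},
    {"cost": 75.0,  "tags": {}},  # untagged – possibly a zombie workload
]
print(allocate_costs(bill))  # → {'webshop': 150.5, 'UNALLOCATED': 75.0}
```

Once every provider's usage data – public or private cloud – is normalized into such per-application totals, deployment environments become comparable in terms of cost efficiency.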

To successfully implement a multi-cloud strategy, all stakeholder needs have to be addressed. Multi-cloud management tools are one way to support this. The meshcloud multi-cloud platform reduces the complexity of multi-cloud environments: it provides agility to developers and operators while offering the visibility to keep control of cost and security. To learn more about our platform, feel free to contact us or schedule a demo with one of our cloud experts.