Designing meshStack Building Blocks
In this post I want to provide some technical background on meshStack's new building blocks design that our team is actively working on. As outlined in our previous post, the key challenges we set out to solve are publishing building blocks for self-service, resource orchestration using familiar tooling, incremental automation, and enabling composition of building blocks.
Defining the Problem Space for Building Blocks
Equipped with a lot of learnings from helping cloud foundation teams build hundreds of landing zones and delivering them to thousands of application teams, we defined the problem space we want to design for:
- Automation is a must, though for many building blocks it becomes crucial only once you hit a scale of > 50 cloud tenants (→ obligatory xkcd).
- Building blocks ought to represent meaningful high-level capabilities – as such, application teams usually compose only a rather small number of them. Extrapolating from our existing experience with marketplace services, we expect most applications will have < 20 building blocks per cloud tenant. We think this is fundamentally different from orchestrating at the cloud resource level, where it's common to have hundreds or thousands of resources per application deployment.
- Cloud foundation teams will leverage building blocks to design and deliver modular landing zones. The lifecycle of building blocks like VPCs or firewall rules will consequently be tied more closely to the lifecycle of the cloud tenant than to the lifecycle of an application deployment.
- The key value proposition of a cloud foundation platform like meshStack is integrating all essential governance functions like tenant management, IAM, cost management, security and compliance in a single application. It’s thus important that building blocks seamlessly integrate with every governance function available via meshStack.
- Most smaller cloud foundation teams lack the capacity to implement all cloud foundation capabilities by themselves. The ability to tap into an established ecosystem offering a starting point in the form of reusable building blocks is an immense value add.
Orchestrating Building Blocks
We found it helpful to phrase the underlying problem for implementing building blocks in meshStack as an orchestration problem. Platform engineers and enterprise architects already interact with many different incarnations of this problem on a day-to-day basis, like Terraform, Kubernetes, or CI/CD tools like GitHub Actions.
Adding and reconciling building blocks
Our design takes inspiration from these examples, combining the relative strengths of these orchestration solutions in a way that makes sense for our problem space. The figure below describes the key elements of our design:
- Application teams can select building blocks from a catalog. Building blocks can have inputs of different types, like manually specified inputs (entered by the application team or platform engineers), outputs from other building blocks (creating a dependency between blocks), or metadata derived from other meshObjects like meshTags.
- meshStack validates and updates the building block graph. The building block graph is a DAG (directed acyclic graph) representing the dependencies between building blocks. meshStack always maintains a single source of truth for the desired state of this graph.
- block-runners are independent processes that can reconcile a certain building block implementation type. Runners connect to meshStack via the meshObject API, polling for runnable blocks. A building block is runnable when all of its inputs are satisfied.
- meshStack passes the desired state of the building block to the runner, including the desired lifecycle state of the block as well as all of its inputs.
- The block-runner reconciles the building block, for example by executing a `terraform apply` in the case of a terraform building block.
- The block-runner collects the resulting output and returns it to meshStack. This information can also include detailed execution logs to help with debugging.
- Application teams can at any time inspect the status of their cloud tenants and the associated building block graph.
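The "runnable when all inputs are satisfied" rule over the building block DAG can be sketched as follows. This is an illustrative Python model, not meshStack's actual data model; block names and states are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Block:
    name: str
    depends_on: list          # names of blocks whose outputs feed this block
    state: str = "pending"    # pending -> succeeded (simplified lifecycle)

def runnable(blocks):
    """A block is runnable once every block it depends on has produced outputs."""
    done = {b.name for b in blocks if b.state == "succeeded"}
    return [b for b in blocks
            if b.state == "pending" and set(b.depends_on) <= done]

# Hypothetical tenant graph: a firewall rule depends on the VPC's outputs.
graph = [Block("vpc", []), Block("firewall-rule", ["vpc"])]
assert [b.name for b in runnable(graph)] == ["vpc"]
graph[0].state = "succeeded"
assert [b.name for b in runnable(graph)] == ["firewall-rule"]
```

Because the graph is acyclic, repeatedly reconciling whatever `runnable` returns eventually drives every block to its desired state.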
Benefits of meshStack's Building Block Design
This design offers a number of interesting properties relevant to our problem space.
Desired state reconciliation - a typical meshStack customer will manage thousands of building blocks with meshStack across multiple cloud providers. Desired state reconciliation of each individual block makes management robust in face of the inevitable cloud failure and curbs configuration drift.
Autonomous reconciliation - when managing a huge number of building blocks, it's important to intelligently prioritize reconciling building blocks with changed inputs over detecting configuration drift, while carefully observing cloud API rate limits and automatically recovering from transient error conditions.
Predictable execution - when failures occur, it's important that cloud engineers are able to quickly troubleshoot what went wrong. With a single source of truth for block graphs and a central coordinator, reconciliation failures always occur at a defined point in the execution plan and do not propagate further.
Simple Composition - the types of composition we need to enable between building blocks are mostly simple input/output relations. For example, a firewall rule building block may need a VPC id that is produced as an output of a VPC building block. This means that we will start our design with a simple 1:1 mapping from outputs to inputs, supporting different input types. Building block implementations can perform more complex computations on inputs with familiar and more suitable tools like HCL (for a terraform building block), pushing this complexity to the edges of the system.
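A 1:1 output-to-input mapping like the firewall/VPC relation can be illustrated with a small sketch. The names (`vpc_id`, `vpc-0a1b2c`) are hypothetical, chosen only to mirror the example above:

```python
def resolve_inputs(mapping, outputs_by_block):
    """Resolve each input 1:1 from the referenced block's named output."""
    return {inp: outputs_by_block[block][out]
            for inp, (block, out) in mapping.items()}

# Outputs the hypothetical VPC block produced during its last reconciliation.
vpc_outputs = {"id": "vpc-0a1b2c", "cidr": "10.0.0.0/16"}

# The firewall block's "vpc_id" input references the VPC block's "id" output.
firewall_inputs = resolve_inputs({"vpc_id": ("vpc", "id")}, {"vpc": vpc_outputs})
assert firewall_inputs == {"vpc_id": "vpc-0a1b2c"}
```

Anything richer than such a lookup (string interpolation, CIDR math, etc.) would live inside the block implementation itself, e.g. in HCL, keeping the orchestrator simple.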
Swappable Block Definitions - it’s important that platform engineers can reuse various existing automation technologies to implement building blocks. In that sense, building block definitions act like an interface for which platform engineers can seamlessly swap the implementation.
Extensible Runner model - simple things should be simple, so meshStack will include out-of-the-box runners for common scenarios like terraform modules. Similar to GitHub Actions, our runners will be open source and easy to self-host. This enables advanced scenarios like deploying runners with access to sensitive environments like on-premises networks or special secrets, runners implementing custom automations, and so on.
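A self-hosted runner's poll-reconcile-report cycle could look roughly like the sketch below. The API shape (`poll_runnable_blocks`, `report`) is an assumption for illustration, not the actual block-runner API; `FakeApi` stands in for meshStack:

```python
def poll_once(api, execute):
    """One polling cycle: fetch runnable blocks, reconcile each, report back."""
    for job in api.poll_runnable_blocks():
        api.report(job["id"], execute(job))  # execute might run `terraform apply`

class FakeApi:
    """Stand-in for the meshObject API a self-hosted runner would talk to."""
    def __init__(self, jobs):
        self.jobs, self.reports = jobs, {}
    def poll_runnable_blocks(self):
        return self.jobs
    def report(self, job_id, result):
        self.reports[job_id] = result

api = FakeApi([{"id": "blk-1", "inputs": {"cidr": "10.0.0.0/16"}}])
poll_once(api, lambda job: {"status": "succeeded",
                            "outputs": {"vpc_id": "vpc-123"}})
assert api.reports["blk-1"]["status"] == "succeeded"
```

Because the runner only polls outward and pushes results back, it can sit inside a locked-down network segment without meshStack needing inbound access to it.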
The Migration Path for Marketplace Services Based on the Open Service Broker API
When we designed the first version of meshStack's marketplace almost five years ago in 2018, we wanted to provide a capable platform that enabled private cloud as well as public cloud use cases alike. The problem space we had in mind was to provide application teams with a PaaS experience as pioneered by platforms like Heroku and Cloud Foundry. Approaching the challenge from that angle, OSB API shines with great support for service catalogs to aid application teams' discovery of compatible services, as well as metering service usage for internal chargeback.
From the perspective of a cloud foundation team however, implementing building blocks on top of OSB API poses a few challenges. Implementing a service broker requires implementing a conformant API – a software engineering problem. While especially bigger cloud foundation teams are willing and able to develop and operate custom service brokers, we learned that smaller teams are keenly aware of their limited bandwidth.
To deal with this, we sought to enable cloud platform engineers by building on workflows they are already experts at, like writing simple scripts and IaC. With the unipipe open source project we tried to make OSB API more accessible by transparently translating it to a GitOps workflow. Despite all of our efforts, the resulting experience still fell short of our ambitions while simultaneously lacking critical composition capabilities, as outlined in our previous post on modular landing zones.
Building blocks will offer a clean migration path for customers already leveraging the OSB API marketplace - OSB API service instances will be just another type of building block implementation. We will tackle OSB API integration with an OSB API compatible block-runner in a later milestone, but our current plan is to support the migration as much as possible out of the box, without requiring changes to existing service brokers, service definitions, and service instances.
Enabling Custom Platforms through Building Blocks
One of the design areas we are actively looking into as well is exposing existing meshStack tenant replication capabilities as built-in building blocks. This will give platform engineers fine-grained control over how their modular landing zones apply to cloud tenants.
Modular Landing Zones – The next frontier for Cloud Foundations
Modular landing zones enable cloud foundation teams to deliver cloud tenants to application teams that these teams can flexibly extend and configure with optional building blocks like virtual networks, on-premise connectivity or vertically integrated DevOps toolchains. The capability to build and deliver modular landing zones is essential for delivering use-case tailored landing zones for the variety of different workloads most organizations have, from traditional lift & shift deployments to container platforms and cloud-native workloads.
Our cloud foundation platform meshStack helps cloud foundation teams deliver modular landing zones at scale with full self-service for application teams. We see a clear trend that application teams expect these landing zones to serve as internal platforms for their workloads. This means that the landing zone should not only deliver a secure cloud tenant but also come with “batteries included” building blocks that accelerate application deployment and reduce operational overhead.
Building landing zones and internal platforms is challenging. In this post I want to share our plans and vision for improving the experience for enterprise architects and platform engineers designing and building these landing zones. I will also be covering how meshStack’s current marketplace features will evolve to align with this vision.
Key Challenges delivering Modular Landing Zones
Based on our learnings helping cloud foundation teams build more than one hundred landing zones, we identified four key challenges faced by enterprise architects and platform engineers who want to deliver a modular landing zone.
- Publishing building blocks for self-service
Cloud foundation teams make building blocks available to application teams via a publishing process that enables staging and versioning. Application teams must be able to discover the building blocks available for their landing zone in self-service. Self-service makes it possible to add, modify, and remove building blocks without manual interaction by the cloud foundation team and ensures there's a single source of truth for the configuration of every cloud tenant.
- Resource orchestration using familiar tooling
Building blocks typically have to orchestrate cloud resources into a configuration that provides a reusable capability like a secured object storage bucket or virtual network. Platform engineers already use infrastructure as code tools like terraform very successfully to automate resource configuration. It must thus be possible to easily define building blocks from existing automation like terraform modules instead of adding additional complexity with yet another tool.
- Incremental automation
Many landing zone capabilities like on-premise connectivity have dependencies on legacy infrastructure like on-premise firewalls or IP address management systems that are difficult to automate. It's thus crucial that cloud foundation teams can take an incremental approach to automating building block implementations, perhaps even starting from a fully manual fulfillment approach backed by old-school ITSM and moving to a semi-automated workflow over time. Incremental automation allows cloud foundation teams to focus their efforts on those building blocks where automation provides the best return on investment.
- Enabling composition of building blocks
Building blocks have to compose seamlessly with landing zones, cloud tenants and other building blocks. This enables use cases like modeling the dependency between a “firewall rule” building block and a “default VPC” building block added to a cloud tenant by mandatory landing zone configuration.
Modular Landing Zones Support in meshStack
meshStack has historically supported the capabilities to build and deliver modular landing zones using an internal service marketplace, which we appropriately enough called “service marketplace”. The service marketplace has been an integral part of meshStack since the earliest inception of the product and many of our customers have developed a strong and successful service ecosystem for their landing zones. In fact, the number of service instances managed by our marketplace far outweighs the number of cloud tenants.
While we think that meshStack’s service marketplace fundamentally provides the right kind of self-service experience to application teams, we are highly aware of some major shortcomings in its current technical design when it comes to empowering cloud foundation teams to deliver modular landing zones.
- Leveraging familiar tooling like terraform requires a lot of plumbing with unipipe and a GitOps workflow, adding a lot of complexity to simple automation use cases like “just deploy this terraform module”.
- Starting with a manual service implementation as part of an incremental automation strategy is not possible directly from meshStack's user interface, instead requiring unipipe and manual GitOps operations via its companion CLI tooling.
- Service instances are independent of cloud tenants by default. Connecting them to a cloud tenant requires additional steps, increasing the complexity of standard landing zone use cases like a Virtual Network Service that adds a default virtual network to a cloud tenant.
- Service instances are also “flat” and do not compose with each other, requiring clever workarounds to model dependencies and interactions between different services.
After a lot of discussions with our customers and cloud foundation stakeholders we have decided to fundamentally reboot our approach to building and delivering modular landing zones with meshStack. Which brings us to…
Building Blocks – Modular Landing Zones with Ease
Going forward, building blocks will become the new universal primitive for assembling landing zones and cloud tenants in meshStack. Each building block represents an encapsulated piece of functionality provided to an application team. Explained in a single picture below, application teams can flexibly assemble building blocks on the landing zone’s “baseplate” as required to support their use case.
To design landing zones, cloud foundation teams can designate building blocks as mandatory (pink) and optional (blue) building blocks, giving application teams a great deal of flexibility while retaining essential control.
Key Design Elements of Building Blocks
On a high level, here are the key design elements of meshStack’s new building blocks.
Building blocks:
- are individually reconciled, receiving inputs and producing outputs
- can depend on other building blocks, forming a directed acyclic graph (DAG)
- can attach directly to cloud tenants (meshTenants), with each tenant having its own independent building block graph
- can be mandatory or optional components of a modular landing zone, delivered to application teams in a self-service experience similar to meshStack's current service marketplace
- provide swappable implementation options:
- “manual” building blocks enable incremental automation starting from a GUI-based manual process
- meshStack will include out-of-the-box support for deploying terraform modules as building blocks
- platform engineers can implement custom blocks using an external block-runner API
- are aware of their desired and actual state
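The last point, awareness of desired and actual state, is the hook for reconciliation. A minimal sketch of what such a record might look like; the field names are illustrative, not meshStack's actual data model:

```python
# Hypothetical building block record with a desired/actual state split.
block = {
    "definition": "terraform/vpc-module",   # swappable implementation reference
    "desired": {"lifecycle": "applied", "inputs": {"cidr": "10.0.0.0/16"}},
    "actual": {"lifecycle": "pending", "outputs": {}},
}

def needs_reconciliation(b):
    """A block needs work whenever its actual state lags its desired state."""
    return b["desired"]["lifecycle"] != b["actual"]["lifecycle"]

assert needs_reconciliation(block)       # pending != applied, so reconcile
block["actual"]["lifecycle"] = "applied"
assert not needs_reconciliation(block)   # states converged, nothing to do
```

Comparing the two states, rather than replaying an event log, is what makes reconciliation robust against missed updates and configuration drift.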
How Building Blocks address Key Challenges of Cloud Foundation Teams
The new building block design directly addresses the key challenges faced by cloud foundation teams:
- out-of-the-box support for popular IaC tooling like terraform for implementing building blocks leverages widely available skills and enables plug&play reuse of existing automation assets instead of requiring a complex GitOps pipeline setup
- incremental automation becomes a first class concept supported by meshStack’s GUI, enabling cloud foundation teams to first focus on overall landing zone design before investing into automation to solve operational challenges
- a conceptually simple yet powerful composition of building blocks enables advanced scenarios without requiring challenging API integrations or hidden backchannels between services to coordinate their functionality
Moving forward with Building Blocks
I will be sharing some more in-depth insights about the technical design of building blocks in an upcoming post. Our vision is that building blocks will ultimately supersede the existing tenant replication and marketplace functionality – unifying both in a single design that is conceptually less complex yet more flexible. This will make meshStack considerably more useful across a wider array of platform use cases.
Building on building blocks as a foundation, we plan to empower cloud foundation teams to define custom cloud platforms and landing zones more easily, for example to integrate internal developer platforms and specialized cloud providers. We will enable this by making more of the meshObject model available as APIs so that cloud foundation teams can tap into the same concepts meshStack uses to deliver out-of-the box capabilities for building AWS, Azure, GCP and other cloud platforms.
We are very excited about these changes and will ship our first MVP of building blocks this week. As part of the MVP we will first enable “manual building blocks”, followed by supporting building blocks based on terraform modules. We will be sharing more updates about our planned and upcoming features soon, including how building blocks will integrate with meshStack’s other capabilities like cost management, security and compliance.
6 Things to Watch out for when Starting Your Cloud Journey
Enterprises plan their cloud transformation carefully and thoroughly. And that's exactly what they need to do in order to set their cloud journey up for success.
But the truth is that many organizations don't have a lot of experience when it comes to migrating to the cloud. They are up for a steep learning curve.
That's why we've compiled a list of 6 aspects you need to keep in mind when embarking on your cloud journey:
- Breaking up silo structures
- Assessing the technical expertise of your teams
- Understanding cloud vendor lock-in costs
- Understanding the shared responsibilities in the cloud
- Considering Managed Services
- Develop an agile, cloud native way of working
Let's get to it:
1. Breaking up silo structures
Moving to the cloud requires a change in the organizational structure. Just signing a contract with AWS, GCP or Azure is not enough. Infrastructure silos focusing on databases, networks, and so on are not ideal, to say the least. Everybody working on an application has to communicate with those silos.
Developing and running applications in this scenario puts a lot of overhead responsibilities on the shoulders of DevOps teams. And it grows with each cloud platform they add.
Optimizing in silos can make each silo run their cloud platform perfectly but it won't remove inefficiencies in the overall cloud transformation effort.
A cloud foundation team that sees itself as an enabler for DevOps is the best practice. The cloud foundation can optimize for applications and go-to-market.
2. Assessing the technical expertise of your teams
You have decided on one or more cloud platforms - like AWS, Azure, or GCP - to migrate to and build on. It is now important to focus on assessing the technical expertise in your organization and upskilling your teams to enable them to work with these cloud platforms.
Migrating to the cloud will most likely - and this is often overlooked and not talked about - automate certain positions out of existence. But keeping skilled and qualified IT staff on board should be a priority: Identifying and reskilling people in these positions and offering them new and valuable opportunities within the organization is the way to go.
A cloud foundation team can offer consulting and training to support the ramp up.
3. Understanding cloud vendor lock-in costs
Enterprises must review and fully understand the costs that come with choosing a cloud service provider. The cost reduction promised by the cloud can only be achieved if the cloud transformation is done right and all costs are made explicit.
Going all-in with one cloud vendor leads to a strong dependence on their proprietary technologies. Switching costs are high and may prohibit the move to competing vendors further down the road.
Make sure to have a viable cloud exit strategy in place and go with a cloud governance solution that makes the organizational and technical transition to another vendor economically feasible.
In addition, being credibly able to switch providers gives you strong leverage in negotiations.
4. Understanding the shared responsibilities in the cloud
A general rule is that the cloud provider is responsible for the security of the cloud, while you as their customer are responsible for the security in the cloud. However, there are other aspects that affect the shared responsibility model, e.g. the service model you are using (IaaS/PaaS/SaaS).
Here is an example: If you use a SaaS service like Google Docs, you don't have to take care of the VM images or databases this application is running on. If you deploy your own MySQL cluster to AWS EC2 instances, however, you are responsible for encrypting the connection via SSL or TLS.
Another important factor is to assign responsibilities clearly to the cloud foundation team and the DevOps teams. The cloud foundation can offer a security baseline with predefined cloud landing zones and takes care of organizational overhead. The DevOps teams have more freedom when working with the cloud - compared to the data center approach - and with that freedom comes the responsibility to take care of application security. The Cloud Foundation Maturity Model provides more insights on how to structure shared responsibility in a Cloud Foundation context.
5. Considering managed services
Migrating to the cloud is a major task in terms of organization, technology, and operations. Wanting to do everything in-house may be understandable but the already very busy IT teams just might not have the capacity or skill set to take on every project.
Making use of higher-level managed services may be the right choice to keep the cloud migration on track and within budget. You may want to have more than just infrastructure-as-a-service (IaaS) and use more than just one cloud service provider: That's also why an abstraction layer that unifies all clouds brings no value to your cloud transformation.
Even if you start off with a pilot project that your organization can handle capacity- and expertise-wise: The challenges will build up as you move on and broaden the scope of your cloud journey. That is a development we see quite often in the market - companies wasting time and money and then turning to external partners a good way down the road.
The same goes for intra-organizational services: Not every team should have to solve problems that other teams have already successfully overcome. Teams should be enabled to offer their solutions and services to other teams - via a cloud service marketplace - to push innovation and speed up development.
6. Developing an agile, cloud-native way of working
Going with a cloud strategy is only part of making full use of the competitive advantage the cloud can offer. Without an agile and cloud-native way of working the potential will not be fully explored. It is the prerequisite to moving the actual workload to the cloud and taking advantage of the scalability, flexibility, and speed the cloud can provide.
A cloud foundation or a cloud competence center should take care of the organizational overhead and enable developers to fully focus on their products.
A DevOps team lead should be able to provision a cloud account and deploy applications without the interference of some kind of central IT. Offering a self-service cloud account creation requires a high degree of automation. This reduces manual workload and with that reduces the "time-to-cloud" for the developers. Using an existing ITSM tool for cloud resource provisioning seriously limits the usefulness of the cloud.
Moving to the cloud is a deep-rooted transformation in an IT organization and means fundamental changes in how things are done. A cloud foundation team needs to evangelize the use of the cloud and empower the teams on their way. It cannot be expected that everybody is on board with the cloud strategy right away. Some applications will have to be refactored - a lot of work - and the transformation will only succeed if communication efforts show that it's worth it.
To learn more about the meshcloud platform, please get in touch with our sales team or book a demo with one of our product experts. We're looking forward to getting in touch with you.
The Cloud Foundation - Key to Cloud Excellence
Organizing your IT the cloud-native way: Read why the creation of a central cloud team needs to be a central piece in your cloud strategy.
A cloud journey is as much an organizational journey as it is a technological one. The establishment of a Cloud Foundation, also known as the Cloud Competence Center or Cloud Center of Excellence (CCoE) is a best practice to leverage the advantages of the cloud.
In this post on the Cloud Foundation, we want to introduce you to the core concept of such a team and why it makes such a big difference.
So let’s dive right in:
What is a Cloud Foundation?
A Cloud Foundation is a multi-disciplinary team of enterprise architects, developers and operators, network and security engineers, and system and database administrators. The team governs and enables the organization's cloud transformation process.
Enterprises that follow a cloud strategy to reduce costs and become more agile need to take on organizational transformation to leverage the cloud to the fullest.
One central piece of this puzzle is the creation of a central cloud team: Cloud Foundation, Cloud Competence Center or Cloud Center of Excellence - many names, one core concept: Centralizing cloud governance to enable more productive, more agile, and more innovative DevOps.
It is effectively the team that pioneers and paves the way that is used by DevOps teams to safely travel to and navigate the cloud.
The Advantages of a Cloud Foundation over Cloud Silos
Many IT departments in enterprises are still organized in so-called silos. When it comes to the cloud that may mean there is a division for Azure, one for AWS, and one for GCP. Every one of these silos has to build up know-how and tooling for proper cloud governance.
Developing and running applications in this scenario puts a lot of overhead responsibilities on the shoulders of DevOps teams. And it grows with each cloud platform they add.
Optimizing in silos can make each silo run their cloud platform perfectly, but it won't remove inefficiencies in the overall cloud transformation effort.
Global optimization is not possible in a silo structure: With a Cloud Foundation on the other hand you can optimize the entire cloud journey of your organization. The Cloud Foundation centralizes cloud governance competencies to enable and drive the cloud journey.
Cloud governance is not platform-specific - and so it does not make sense to reinvent the cloud governance wheel for every platform in every silo. In a Cloud Foundation team boundaries and best practices can be shared better and faster leading to better platform-specific implementations.
Done well, this enablement function will achieve the following outcomes:
- Accelerate cloud adoption across the organization
- Enable and encourage innovation
- Optimize costs (based on FinOps practices)
- Minimize the risks involved (based on continuous compliance and compliance as code practices)
Done badly, however, a Cloud Foundation can actually end up acting as a barrier to consuming cloud within the organization.
It is important to have a value mindset and focus on what value the cloud can bring to the teams: If it is just a top-down company initiative there will be resistance and you will lose speed.
The Main Tasks of a Cloud Foundation
Let's talk about more specific tasks for the Cloud Foundation team.
To achieve the goals we've talked about in the previous section a Cloud Foundation has to take on the following eight tasks:
- Implement cloud transformation and strategy
- Manage cloud cost and billing
- Manage and report cloud transformation
- Implement and enforce organization-wide cloud journey policies
- Provide guidance and training
- Manage cloud identities and access for DevOps teams
- Keep up with the latest cloud technologies
- Manage cloud security and compliance risks
How to build a Cloud Foundation Team
Ideally, you start building a Cloud Foundation team well before the first workloads are migrated to the cloud. But that doesn't mean that at some point it may be too late to start: It is never too late to tackle the organizational transformation needed to fully capitalize on the competitive advantages of the cloud.
Let's say you have a small team that has a cloud-native use case, and they are the first lighthouse team going into the cloud. After this move, the organization itself has learned a lot from actually doing instead of planning for a long time and never implementing anything. So sometimes the better approach is just trying, because it puts you on a steeper learning curve. The cloud foundation team could be a team of architects accompanying the first five teams, then evaluating what went well, what went badly, and how they can support others in building a solid practice on their cloud journey.
Regardless of the size of the business or the extent of its presence in the cloud, a Cloud Foundation team should start small. Building the team from developers, system administrators, network engineers, IT operations, and database administrators: start small, learn fast, and grow big!
Building a solid foundation first and then the house on top of it is definitely the right sequence of events. However, it is absolutely worthwhile equipping the building that houses your business success with a stable foundation, even after you realize it might be built on softer ground.
To learn more about the meshcloud platform, please get in touch with our sales team or book a demo with one of our product experts. We're looking forward to getting in touch with you.
Cloud Exit Strategy: Ensure Compliance and Prevent Vendor Lock-in
A clear cloud exit strategy is absolutely necessary when moving to the cloud. Ensuring both compliance and business continuity - and avoiding vendor lock-in - are the primary reasons.
Today, large companies can no longer do without the advantages of the cloud. The competitive pressure of digitalization requires migration to the cloud. At the same time, high regulatory demands are being placed on outsourcing - especially in critical industries. And using public cloud is outsourcing!
Placing oneself in the hands of the hyperscalers involves risks. You generally don't have to worry about security: Amazon, Microsoft, and Google do a lot to keep their offerings safe. Compliance and vendor lock-in are a different matter: It is important to clarify which data and solutions end up with which cloud providers. For this purpose, a clear exit strategy is required in addition to the (multi-)cloud architecture and the cloud sourcing strategy. A high-profile example is that of Dropbox leaving AWS in favor of hosting their workloads themselves.
Regulatory Requirements
In certain critical industries, a documented exit strategy is more than just a good idea: it is a regulatory requirement.
The banking sector is one of the most heavily regulated industries, and regulators have also turned their attention to companies' use of the cloud. The European Banking Authority, for example, requires an exit strategy for outsourced critical or important functions in chapter 15 of its EBA Guidelines on outsourcing arrangements. This includes the use of the public cloud.
The German financial supervisory authority BaFin also prescribes the development of an exit strategy in its banking supervisory requirements for IT.
4 aspects of vendor lock-in
Vendor lock-in means not being able to shift workloads from one vendor to another without excessive effort. The recent downtime at IBM shows what a great advantage it is to be able to do so.
There are several possible reasons for being unable to move:
- Cost
- Contracts
- Skills
- Technology
Cost is a major factor in prohibiting migration from one vendor to another. The vendor might charge for exporting data. In addition, costs pile up for training staff, hiring consultants, and coping with lowered productivity. The larger the workload, the larger the costs. A good example to look at is Netflix: the streaming service is all in on AWS and won't be able to change that - at least not at a reasonable cost.
Contracts can play a big role in vendor lock-in. Some cloud service providers make it hard to commit to a migration to an alternative vendor by implementing a continuously upward pricing model that aims at drawing their customers deeper and deeper into a de facto lock-in. At some point, a partial exit may no longer be economical, and a complete and difficult withdrawal from the whole contract becomes the only option.
Skills play a big role in migrating and operating workloads. Cloud architects, DevOps teams, and security experts are specialized, and it takes time and money to shift that knowledge to newly adopted cloud platforms. That can be a major hurdle when considering leaving one vendor for another. Going multi-cloud from the start provides companies with a larger talent pool, which eases the transition somewhat.
Technology causes vendor lock-in as well - at least when it comes to the proprietary technology vendors use to differentiate. On the one hand, that's great and can offer a competitive edge. On the other hand, it can lock companies into this technology and hinder the adoption of the next big thing in cloud technologies.
The 4 key aspects to every cloud exit strategy
So here are 4 aspects you need to keep an eye on when building your cloud exit strategy:
- Most importantly: Take inventory! Knowing your assets is essential. Exit strategies often apply to critical business functions only. So it’s important to know what you have running in which cloud – an up-to-date cloud inventory is of great help.
- Open-source infrastructure is key. Open-source infrastructure components like Kubernetes or OpenShift clusters or open-source databases can make a move between clouds much easier. The more proprietary services you use, the harder it will be to adapt your application to run in a new cloud environment.
- Go multi-cloud from the beginning. Contract negotiations between enterprises and cloud providers can take a while. It's too late to start the process when it's actually time to move.
- Watch out for organizational lock-in. Even if, from a technical perspective, your application can easily be moved to a different cloud provider, there's more to it. If you are running cloud applications at scale, setting up the corresponding cloud environments and transferring permissions and configurations comes with massive complexity. Use a centralized governance system like meshcloud to keep your organizational structures independent from specific providers.
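The first two points can be combined in practice: once you have an up-to-date cloud inventory, you can scan it for workloads that depend on proprietary services and would be hard to move. The sketch below is a minimal illustration, not a real meshcloud export: the inventory format, the workload names, and the classification of portable services are all assumptions made for the example.

```python
# Minimal sketch: flag workloads in a cloud inventory that rely on
# proprietary managed services and would be hard to move between clouds.
# The inventory format and the service classification are illustrative
# assumptions, not a real meshcloud export format.

# Services with open-source or portable equivalents (assumed classification)
PORTABLE = {"kubernetes", "postgresql", "redis", "kafka"}

def lockin_report(inventory):
    """Group workloads by provider and list their non-portable services."""
    report = {}
    for workload in inventory:
        risky = sorted(set(workload["services"]) - PORTABLE)
        if risky:
            report.setdefault(workload["provider"], []).append(
                {"name": workload["name"], "risky_services": risky}
            )
    return report

# Hypothetical inventory of three workloads across two providers
inventory = [
    {"name": "webshop", "provider": "aws",
     "services": ["kubernetes", "dynamodb", "lambda"]},
    {"name": "analytics", "provider": "gcp",
     "services": ["bigquery", "kubernetes"]},
    {"name": "crm", "provider": "aws",
     "services": ["postgresql", "redis"]},
]

for provider, workloads in lockin_report(inventory).items():
    for w in workloads:
        print(f"{provider}: {w['name']} depends on "
              f"{', '.join(w['risky_services'])}")
```

A report like this makes the checklist actionable: workloads built only on portable components (like `crm` above) drop out, while the remaining entries show exactly where an exit would require re-engineering.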
To learn more about the meshcloud platform, please get in touch with our sales team or book a demo with one of our product experts. We're looking forward to getting in touch with you.