
AWS Landing Zone Comparison

Are you on your path to migrating your teams’ workloads to AWS, whether from on-premises or from another cloud provider? Then you have most probably stumbled upon the term “landing zone” as an important piece of the puzzle for a smoother cloud migration journey. If this term does not ring a bell, check out our blog post about the landing zone lifecycle. Simply put, a landing zone sets up your foundation in AWS according to your organization’s security and compliance requirements.

There is more than one landing zone solution out there, which poses the challenge of choosing the right landing zone implementation for the right use case.

In this blog post, we walk through AWS’ landing zone ecosystem. The goal is that after reading, you will have a clearer picture of the different AWS landing zone solutions and can choose the one that fits your organization best.

ℹ️ Note that custom-built landing zone solutions are out of the scope of this blog post.
💡 If you are interested in Landing Zone solutions for Azure or GCP, please have a look at our comparison blog posts of these cloud platforms.

AWS Landing Zone Ecosystem

When searching for the best-fit landing zone option to pursue, you will most certainly come across different solutions. We will mainly describe the landing zone solutions that are provided and maintained, or recommended, by AWS, namely AWS Control Tower (with the plain Account Factory or Account Factory for Terraform) and Landing Zone Accelerator on AWS.

There are other solutions that we will not cover in the comparison, namely AWS Landing Zone and AWS Secure Environment Accelerator (SEA). The former is on long-term support, does not receive new features and is no longer recommended by AWS. The latter targets governmental and public sector environments that require a high level of restrictions. We give a brief explanation of each solution, but if you are interested in a comparison of these solutions as well, let us know in the comments ⬇️.

AWS Control Tower (CT)

We recommend that all new landing zones start with AWS Control Tower. AWS Control Tower helps you build out an initial prescriptive landing zone configuration, use out-of-the-box guardrails and blueprints, and create new accounts using AWS Control Tower account factory.
-AWS Documentation

This is the first solution you will probably come across, and AWS repeatedly recommends it going forward. Unless you already have a landing zone customized with older approaches, you should definitely try it out.

AWS Control Tower’s main functionality is to set up a basic landing zone for you, using multiple AWS services behind the scenes such as AWS Organizations and AWS CloudFormation StackSets. It is easy to set up, as AWS provides a step-by-step walkthrough (i.e. a wizard). In that wizard, you first choose the home region of your Control Tower and optionally deny the use of services in regions you specify. Then you configure organizational units (OUs), shared accounts, CloudTrail (e.g. log retention periods) and encryption. Eventually, Control Tower creates the following:

  1. Two Organizational Units (OUs)
    Security: This OU contains shared accounts (log archive and audit accounts).
    Sandbox: This OU will be empty, to be filled with user-provisioned accounts. Although its name suggests an environment with loose restrictions, by default it carries the same guardrails as all other OUs.
  2. Three shared accounts: the management account (note that this account already exists; it is the account from which you set up CT) as well as the already mentioned accounts under the Security OU (log archive and audit).
  3. A native cloud directory with preconfigured groups and single sign-on access.
  4. 20 preventive guardrails that enforce policies (using Service Control Policies, SCPs) and 3 detective guardrails that detect configuration violations (using AWS Config); see the illustrative sketch below for what such an SCP-based guardrail can look like.
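
To give a feel for what a preventive guardrail amounts to under the hood, here is a minimal, illustrative Terraform sketch of a Service Control Policy that denies actions outside approved regions, similar in spirit to Control Tower’s region deny capability. It is not the exact policy Control Tower deploys; the real guardrails are managed by Control Tower itself, and the exemption list and regions below are assumptions for the example.

```hcl
# Illustrative only: a region-deny SCP in the spirit of Control Tower's
# preventive guardrails. Control Tower manages its own policies; this shows
# what a comparable hand-rolled guardrail could look like.
resource "aws_organizations_policy" "deny_unapproved_regions" {
  name = "deny-unapproved-regions"
  type = "SERVICE_CONTROL_POLICY"

  content = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Sid       = "DenyOutsideApprovedRegions"
      Effect    = "Deny"
      NotAction = ["iam:*", "organizations:*", "sts:*"] # keep global services usable
      Resource  = "*"
      Condition = {
        StringNotEquals = { "aws:RequestedRegion" = ["eu-central-1", "eu-west-1"] }
      }
    }]
  })
}

# Attaching the policy to an OU makes every account below it inherit the guardrail.
resource "aws_organizations_policy_attachment" "sandbox" {
  policy_id = aws_organizations_policy.deny_unapproved_regions.id
  target_id = "ou-xxxx-xxxxxxxx" # hypothetical OU id
}
```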

AWS Control Tower also offers an Account Factory where you can manually provision new accounts or enroll existing ones. These accounts are then governed by Control Tower. AWS also offers a framework around Account Factory to enable automation using a GitOps workflow, which we explain in more detail below.

AWS Control Tower Account Factory for Terraform (AFT)
AWS Control Tower Account Factory for Terraform
AFT is built on top of AWS Control Tower. It provides automation around account provisioning and further customizations using a GitOps approach.

AFT can be set up after setting up Control Tower and consists of two main steps: manually creating the AFT management account and then applying the AFT Terraform module. After applying the module, there will be four empty repositories (aft-account-customizations, aft-account-provisioning-customizations, aft-account-request, aft-global-customizations) in your configured VCS (e.g. CodeCommit, GitHub, etc.) that you need to populate with the corresponding sources.
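
To make the second step more tangible, here is a rough sketch of what applying the AFT module can look like. Treat it as an illustration rather than a reference: the account IDs are placeholders, and variable names and required inputs depend on the AFT module version, so check the module documentation before using it.

```hcl
# Illustrative sketch of deploying the AFT module from the Control Tower
# management account; verify inputs against your AFT module version.
module "aft" {
  source = "aws-ia/control_tower_account_factory/aws"

  # Control Tower shared accounts plus the manually created AFT management account
  ct_management_account_id  = "111111111111" # placeholder
  log_archive_account_id    = "222222222222" # placeholder
  audit_account_id          = "333333333333" # placeholder
  aft_management_account_id = "444444444444" # placeholder

  # Regions: Control Tower home region and a secondary region for the Terraform backend
  ct_home_region              = "eu-central-1"
  tf_backend_secondary_region = "eu-west-1"

  # VCS that will host the four AFT repositories (CodeCommit by default, GitHub etc. possible)
  vcs_provider = "codecommit"

  # Optional feature flags mentioned later in this post
  aft_feature_cloudtrail_data_events = false
  aft_vpc_endpoints                  = true
}
```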

As shown in the “AFT Deployment Module” diagram above, AFT performs multiple automated steps to provision and customize accounts, as well as to create custom resources. It does this by providing an interface to users (e.g. developers). For example, a user can push a commit to the aft-account-request repository (directly or through a pull-request-based flow) with a main.tf file that calls the account request module. The commit then triggers the configured workflow to provision an AWS account governed by CT.
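
Below is a hedged sketch of what such a main.tf in the aft-account-request repository can look like, based on the examples in the AFT documentation at the time of writing; the module path, parameter names and values are assumptions to be adapted to your setup.

```hcl
# Illustrative account request committed to the aft-account-request repository.
# Module path and parameter names follow the AFT examples; adjust as needed.
module "team_sandbox_account" {
  source = "./modules/aft-account-request"

  control_tower_parameters = {
    AccountEmail              = "aws-team-sandbox@example.com"
    AccountName               = "team-sandbox"
    ManagedOrganizationalUnit = "Sandbox"
    SSOUserEmail              = "jane.doe@example.com"
    SSOUserFirstName          = "Jane"
    SSOUserLastName           = "Doe"
  }

  # Tags end up on the AWS account itself (see the Cloud Tenant Tagging row below)
  account_tags = {
    "team"        = "platform"
    "cost-center" = "1234"
  }

  change_management_parameters = {
    change_requested_by = "Jane Doe"
    change_reason       = "Initial sandbox account for the platform team"
  }

  custom_fields = {}

  # Optional: name of an account customization to run after provisioning
  account_customizations_name = "sandbox-baseline"
}
```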

AWS provides various resources to get more familiar with AFT. For broad concepts and some technical details, we recommend the AWS AFT documentation. For deep-dive technical guidance, we recommend going through the AWS Control Tower Immersion / Activation Day - AFT workshop.

Landing Zone Accelerator on AWS

Landing Zone Accelerator on AWS is the latest approach by AWS to provide a solution for a secure Landing Zone based on best practices.

It is recommended to use it together with AWS Control Tower, but it also works in regions that don’t support Control Tower. In that case, it uses AWS Organizations directly and creates the intended OUs with the according policies applied. It also works with AWS GovCloud (US), which is not supported by AWS Control Tower. So it is a solution that can be used by every AWS customer.

The Accelerator runs entirely on AWS-native services. An AWS CloudFormation stack is used to roll out a CodePipeline that creates and updates the landing zone according to a configuration you can customize to your needs.

AWS Landing Zone Accelerator

The CodePipeline is the central component. It performs the initial rollout of the landing zone and also enables continuous integration and deployment: all that is needed to roll out changes to the landing zone is a commit with the desired config changes to the config code repository.

Rolling it out will actually result in the following environment.

Resource Hierarchy AWS Landing Zone Accelerator

With this Resource Hierarchy responsibilities are clearly segregated into different OUs and accounts.

💡 If you try to roll out the Accelerator, please use this page from the AWS documentation as a step-by-step guide, including preconditions. Without it you will easily miss some important steps.

When using the Accelerator in combination with AWS Control Tower, make sure to enter the email addresses of the management, audit and logging accounts that were already created by Control Tower, as the Landing Zone Accelerator builds on top of them. Otherwise you will run into an issue in the CodePipeline prepare stage.

AWS Landing Zone

AWS Landing Zone is currently in long-term support and will not receive any additional features. Customers who want to set up a new landing zone should review AWS Control Tower and Customizations for AWS Control Tower.
-AWS Documentation

This is the older approach to building an AWS landing zone, based on AWS Organizations. One of its key components is the Account Vending Machine (AVM), a Service Catalog product that enables preconfigured account provisioning. It creates a multi-account structure with a security, a logging and a shared services account under a “Core OU”.

At the time of writing, searching for AWS Landing Zone in the AWS documentation does not yield much information, and on many occasions you will be redirected to AWS Control Tower.

If you are already using this solution and are actually happy with it, let us know in the comments below!

AWS Secure Environment Accelerator (ASEA)

This solution has been around since 2020 and was initially built with a focus on the Government of Canada. It mainly targets environments that require a high level of security and restrictions, such as governments and public sector organizations. It offers an opinionated and prescriptive architecture for your AWS multi-account setup, while also making it possible to customize as needed, and it provides sample configurations that will build an end-to-end landing zone for you.

There is extensive AWS Secure Environment Accelerator documentation that lays out the details of the solution, including workshops for both administrators and project teams.

If you are already using this solution and are actually happy with it, let us know in the comments below!

Comparing the Options

For this comparison, we assign 1 to 3 ⭐ for the different aspects we look at. If an aspect is not covered at all, we assign a ❌. Additionally, you can find some details about why we rated an aspect the way we did; these details also provide deeper insight into how the aspects are covered by each option.

Feature Comparison

The feature comparison is based on the Cloud Foundation Maturity Model (CFMM). You can read details of what these blocks mean on the CFMM website. You can also follow the links in every row that redirect you directly to the details of the according CFMM block.

The three options compared below are AWS Control Tower with Account Factory (“CT with Account Factory”), AWS Control Tower with Account Factory for Terraform (“CT with AFT”), and Landing Zone Accelerator on AWS (“Landing Zone Accelerator”).

Resource Hierarchy
  • CT with Account Factory: ⭐⭐ Creates 2 organizational units (OUs), Security and Sandbox. It also creates 3 shared accounts: a standalone management account (not belonging to an OU) plus the log archive and security audit accounts in the Security OU. The Sandbox OU remains empty to hold newly provisioned accounts. You can create additional OUs based on your desired structure, and they will be governed by CT. AWS also offers a sample hierarchy with additional OUs that you can include manually.
  • CT with AFT: ⭐⭐⭐ In addition to what CT provides, AFT offers a GitOps approach to provision accounts under specified OUs through an automated workflow.
  • Landing Zone Accelerator: ⭐⭐⭐ By default it creates 2 OUs, Infrastructure and Security, and you can easily define more OUs in the config. Another great feature is the optional Quarantine OU, where almost everything is forbidden. It is used for newly created accounts that are still under construction by the Accelerator, so unintended access or changes during creation are avoided.

Resource Policies
  • CT with Account Factory: ⭐⭐⭐ AWS Control Tower sets up 20 preventive guardrails (implemented as SCPs) and 3 detective guardrails (implemented with AWS Config). These are the “Mandatory” guardrails enabled by default, which we consider reasonable, such as disallowing deletion of the log archive bucket or disallowing changes to the AWS IAM roles set up by CT and CloudFormation. Further “Elective” and “Strongly Recommended” guardrails are available but not enabled by default. Accounts newly provisioned with Account Factory inherit the guardrails of their parent OU.
  • CT with AFT: ⭐⭐⭐ Accounts provisioned through AFT also inherit all guardrails from their parent OU, which is covered by CT. Additional policies can be applied through the AFT customization framework.
  • Landing Zone Accelerator: ⭐⭐⭐ Reasonable guardrails are applied to the default OUs Infrastructure, Security and Quarantine. It is easy to define additional guardrails or to define them for additional, custom OUs.

Centralized Audit Logs
  • CT with Account Factory: ⭐⭐⭐ Enables CloudTrail to aggregate action and event logs from the different AWS accounts into the shared Log Archive account and records them in CloudWatch. With CT release ≥ 3.0, an organization trail is created to gather event logs at the organization level instead of a member trail that gathers logs per account.
  • CT with AFT: ⭐⭐⭐ With AFT, you can additionally enable CloudTrail data events, i.e. events performed on or within a resource. These are disabled by default as they often incur high costs; you enable them when deploying AFT by setting aft_feature_cloudtrail_data_events to true.
  • Landing Zone Accelerator: ⭐⭐⭐ Uses the Log Archive account to store the logs. Which logs are included can be configured easily.

Cloud Tenant Tagging
  • CT with Account Factory: ❌ No AWS account is tagged when setting up CT.
  • CT with AFT: ⭐ By default, provisioned accounts are not tagged, but you can tag them through the Terraform variable account_tags.
  • Landing Zone Accelerator: ❌ Only tag policies at the organization level can be defined. They only ensure that if a certain tag is set, it complies with the definition in the policy; resources without any tags are ignored. No tags can be set during account creation, so tagging has to be done outside of the Accelerator.

Tenant Provisioning/Deprovisioning
  • CT with Account Factory: ❌ Covers the initial provisioning of your multi-account setup. This does not qualify as tenant provisioning/deprovisioning, which is only partly covered by Account Factory.
  • CT with AFT: ⭐⭐ AFT’s main functionality is automation around tenant provisioning. Deprovisioning is not supported; removing the aft-account-request Terraform for a specific account only removes it from AFT management.
  • Landing Zone Accelerator: ⭐ Workload accounts can be defined in accounts-config.yaml, but only very basic account information can be entered there. Tags, service accounts, etc. cannot be defined for workload accounts.

Playground/Sandbox Environments
  • CT with Account Factory: ❌ Creates a Sandbox OU, which by default carries the same guardrails as the Security OU.
  • Landing Zone Accelerator: ❌ No predefined Sandbox OU is available.

Privileged Access Management (PAM)
  • CT with Account Factory: ⭐⭐⭐ Uses IAM Identity Center to offer preconfigured groups. You can then add users to these groups based on their role in the organization.
  • CT with AFT: ⭐⭐⭐ By default, the user email set in the account request is assigned AdministratorAccess to the account. Additionally, the groups created by CT are assigned specific permissions to that account: AWSSecurityAuditPowerUsers gets AWSPowerUserAccess, AWSControlTowerAdmins gets AWSOrganizationsFullAccess, and AWSSecurityAuditors gets AWSReadOnlyAccess.
  • Landing Zone Accelerator: ⭐⭐⭐ By default it only defines an Administrator group, but as you can base it on AWS Control Tower, you can make use of the advanced default roles created by AWS Control Tower.

Service Account Management

Virtual Networks
  • CT with Account Factory: ⭐⭐⭐ CT creates a default VPC with 3 subnets per availability zone in the management account. You can edit the VPC configuration for newly provisioned accounts in Account Factory; for example, you can allow internet access for created subnets, which leads to the creation of a NAT gateway. When provisioning a new account, CT automatically deletes the default VPC and creates a new VPC configured by CT.
  • CT with AFT: ⭐⭐⭐ Similar to the plain Account Factory, with the addition of the AFT customization framework that allows you to include further Terraform resources.
  • Landing Zone Accelerator: ⭐⭐⭐ Provides a reasonable default network setup in network-config.yaml. The default config also contains many additional options as commented code; you just have to activate them and adapt them to your needs.

On-Prem Connect
  • CT with Account Factory: ❌ The CT workflow does not offer on-prem connectivity out of the box. It can, however, be implemented using transit gateways and Site-to-Site VPN; the AWS Control Tower workshop provides a detailed guide on interconnecting Control Tower account VPCs as well as connecting them to on-prem.
  • CT with AFT: ❌ While it is not supported out of the box (e.g. by switching on a flag), the AFT customization framework enables administrators to include Terraform resources as they see fit. There are plenty of Site-to-Site VPN Terraform resources available that can be leveraged (see the sketch after this comparison).
  • Landing Zone Accelerator: ⭐⭐ Provides a transit gateway in the network config to easily connect to a hub that handles the on-prem connection. As Direct Connect gateways can also be defined in the network config, everything needed to establish an on-prem connection is available.

Cloud Interconnect
  • CT with Account Factory: ❌ Similar to on-prem connect, CT does not offer cloud interconnect out of the box, but it can be implemented like an “on-prem” connection. Additionally, the AWS Control Tower workshop offers guidance on interconnecting AWS account VPCs.
  • CT with AFT: ❌ The same as for CT itself applies.
  • Landing Zone Accelerator: ⭐⭐ Direct Connect gateways can be defined in the network config to connect to networks at other cloud providers.

Managed Key Vault

Automation in CI/CD
  • CT with Account Factory: ❌ Runs automation under the hood, not configurable by the user.
  • CT with AFT: ⭐⭐⭐ Automation is the core of AFT. It sets up a pipeline that is triggered by a code change in a version control system (VCS). It uses AWS CodeCommit natively, but also supports alternatives such as GitHub and Bitbucket.
  • Landing Zone Accelerator: ⭐⭐ The whole Accelerator framework is based on AWS CodeBuild and CodePipeline; any update to the config is rolled out via that pipeline. The only downside is that you can only use AWS services for it; integrating it into any other CI/CD tool is not possible.
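
To illustrate what “leveraging Site-to-Site VPN Terraform resources”, as mentioned in the On-Prem Connect row, can look like in practice, here is a minimal, hedged sketch of the AWS building blocks involved. The values are placeholders, and a production setup additionally needs routing, tunnel options and configuration of the on-prem VPN device.

```hcl
# Minimal illustration of the AWS building blocks for a Site-to-Site VPN
# towards on-prem; placeholder values, routing and tunnel details omitted.
resource "aws_ec2_transit_gateway" "hub" {
  description = "Central hub for VPC and on-prem connectivity"
}

resource "aws_customer_gateway" "on_prem" {
  bgp_asn    = 65000
  ip_address = "198.51.100.10" # public IP of the on-prem VPN device (placeholder)
  type       = "ipsec.1"
}

resource "aws_vpn_connection" "to_on_prem" {
  customer_gateway_id = aws_customer_gateway.on_prem.id
  transit_gateway_id  = aws_ec2_transit_gateway.hub.id
  type                = "ipsec.1"
}
```

In an AFT setup, resources like these would typically live in the aft-global-customizations or aft-account-customizations repositories so they are rolled out through the same pipeline.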

Non-Functional Comparison

Again, the options are CT with Account Factory, CT with AFT, and Landing Zone Accelerator.

Extensibility/Modularity/Scalability
  • CT with Account Factory: ⭐⭐ CT only provides the baseline setup. It can be extended with “Customizations for AWS Control Tower” (CfCT). You can also create your own OU structure, to which CT will also apply guardrails.
  • CT with AFT: ⭐⭐⭐ In addition to what can be extended and customized in CT, AFT provides a customization framework that allows users to include AWS resources with Terraform.
  • Landing Zone Accelerator: ⭐⭐ You can customize the config rather easily to your needs, but it does not seem possible to add completely new modules for other services to the Accelerator. Since all resources are defined via its own configuration DSL, you are fairly locked into the config options the Accelerator provides. The config does cover quite a lot of services (35+ services are mentioned in the AWS docs) and seems sufficient for many use cases, but if you want to integrate an AWS service that is not yet supported, it looks like you have to create a pull request against the Accelerator to get it in.

Terraform
  • CT with AFT: ⭐⭐ As its name suggests, it offers a workflow based on Terraform. It requires a bit of ClickOps initially to create the AFT management account.
  • Landing Zone Accelerator: ❌ It is completely based on AWS CloudFormation.

Learning Curve
  • CT with Account Factory: ⭐⭐⭐ It is easy to start with CT and set it up; advanced AWS knowledge is not required.
  • CT with AFT: ⭐⭐ There are many AWS services involved in the workflow, which makes it a bit harder to grasp. Also, AWS account provisioning is not performed directly via Terraform; instead, an account request is created which triggers an account provisioning process.
  • Landing Zone Accelerator: ⭐⭐ Except for some current issues, which should be related to the Accelerator being quite new on the market, rolling it out is rather easy. You only have to deploy a StackSet and the rest of the bootstrapping is done automatically for you. As a next step you set the configuration you want in 6 YAML files that are rather easy to understand. You also get a good reference implementation based on best practices that you can copy and adapt for your own configuration.

Community
  • CT with Account Factory: ⭐ A scattered community across multiple platforms. There doesn’t seem to be a dedicated place to ask questions, but you can find some answers on platforms such as Stack Overflow.
  • CT with AFT: ⭐⭐ In addition to the various community platforms, you can open a GitHub issue in the AFT repo if you encounter problems. The AFT maintainers seem to be responsive to issues, but they still don’t accept contributions at the time of writing.
  • Landing Zone Accelerator: ⭐ As it is a rather new solution, you cannot find much about it yet. The GitHub repo only has 75 stars and 12 contributors, only a few issues exist, and the responsiveness of the developers is rather low at the moment: at least 2 tickets we created or commented on did not get any response from official contributors within more than a week. Even though the community is still small, an advantage is that it is fully covered by AWS Support.

Costs
  • CT with Account Factory: ⭐⭐⭐ There are costs associated with the services CT uses, such as CloudTrail and S3 buckets. The resources it creates can incur costs depending on usage; for example, the more accounts you create, the more logs are ingested into the log archive account. AWS has a pricing page with details on different scenarios and the associated costs. Another example: in the Account Factory settings you can allow internet access for created subnets in new accounts, which leads to the creation of a NAT gateway and drives some cost.
  • CT with AFT: ⭐⭐ There are no additional costs for AFT itself, but there are costs for the resources it creates. Transparency here is limited, and you need to figure out which options to enable or disable to control which resources are created. For example, AFT creates AWS PrivateLink endpoints by default; to disable this, you need to set aft_vpc_endpoints to false when deploying the AFT Terraform module.
  • Landing Zone Accelerator: ⭐ AWS is very transparent about the costs associated with running the best-practice landing zone; at the time of writing it is around 360€ per month. If you do not use all resources configured in the default landing zone, you can save a bit of money. The main cost drivers are KMS and networking.

Time To Success
  • CT with Account Factory: ⭐⭐⭐ Setting up AWS Control Tower for the first time takes 1-2 hours, which includes going through “set up your landing zone” (roughly 15-30 minutes). This assumes you do not have an existing setup to decommission, as that would increase the time to success.
  • CT with AFT: ⭐⭐ Setting up AFT technically takes around 1 hour, which includes creating the AFT management account and deploying the AFT Terraform module. Post-steps like populating your repositories for automation are rather quick. All in all, assuming the process goes smoothly, setting up CT with AFT takes around 4 hours.
  • Landing Zone Accelerator: ⭐⭐ The concept of the Accelerator is quite promising for setting up a secure landing zone rather quickly, thanks to the best-practice defaults and the easy-to-understand configuration. Currently there is the downside that if something goes wrong while rolling out the pipeline, you may get really unhelpful error messages, which can delay the time to success significantly. These issues should vanish over time as the Accelerator matures.

Find your way to building AWS Landing Zones

Building your landing zone in AWS eventually means building on top of either AWS Organizations directly or AWS Control Tower to enable your multi-account setup. This choice also impacts which landing zone solution to pursue: AWS Organizations is merely a tool that enables a multi-account setup and does not build a landing zone by itself, while AWS Control Tower provides an abstraction over multiple AWS services, including AWS Organizations, and builds a basic landing zone that you can extend. The simplified flow chart below conveys the relation between the different solutions.

AWS Landing-Zone Solution Comparison

Unless you are using a region that is not supported by AWS Control Tower, or you need absolute control over all components of your landing zone, you should go with AWS Control Tower. Most solutions nowadays support it, and it is highly recommended by AWS. Beyond the guardrails it provides, however, it does not add much on its own, so used solo it can be considered a very basic landing zone. If all you need is a basic setup that provides a multi-account framework with a set of preventive and detective guardrails, AWS Control Tower alone is definitely worth considering.

From there you can either stay with the plain Account Factory, in which case you pretty much only benefit from CT’s guardrails when provisioning accounts, or go for Account Factory for Terraform, which additionally provides a GitOps-based framework to automatically provision accounts with customization capabilities.

If you’re interested in an end-to-end landing zone solution, consider Landing Zone Accelerator on AWS. It promises to build full-fledged AWS landing zones with automation capabilities and supports both AWS Control Tower and plain AWS Organizations. The solution was first released on 23 May 2022, so it is fairly new. We found that the product and its documentation are not yet battle-proven, and you can run into unexpected issues when trying to roll it out. As of this writing, we stumbled across a blocker that prevented us from actually deploying it, so we cannot give a full recommendation yet. We like the simplicity and customizability of the config files that steer the Accelerator and the understandable pipeline that deploys it, but we wonder whether customization beyond the services and features supported by its configuration files is feasible.

The solutions presented in this blog post are not really alternatives that compete against each other; rather, they combine and extend each other very well. For a full-fledged landing zone, we therefore recommend combining AWS Control Tower with AFT for automated account provisioning and Landing Zone Accelerator on AWS to add further highly recommended features to your landing zone.

Best-Practice AWS Landing Zone Solution Diagram

If the concerns we mentioned regarding the production-readiness of AWS Landing Zone Accelerator hold you back, or if you need full control over your landing zone, go for a custom-built landing zone, either via an AWS partner or with your own AWS team.

Did this blog post help you decide which landing zone solution you will apply to AWS? Have you already gathered your own insights on building an AWS landing zone and want to back up, add to or discuss the points from this blog post? Please leave a comment! We look forward to hearing about your experience.



Cloud Management Platform: 10 Things to Consider When Choosing One

A Cloud Management Platform helps Cloud Centers of Excellence master the complexity of their multi-cloud environments by implementing governance and control mechanisms.
In this post we provide guidance and orientation for the evaluation of Cloud Management Platforms to help you make the right choice for your organization.

What is a Cloud Management Platform?

Coined as its own product category by Gartner, Cloud Management Platforms from vendors like meshcloud, CloudCheckR or Morpheus have become the tool of choice for many organizations. A Cloud Management Platform helps them keep control of their cloud infrastructure landscape as cloud adoption increases.

Here are 10 criteria to consider when choosing a Cloud Management Platform:

(✅ 1) What is your intention of moving to the cloud?

There are different strategies that lead organizations to move to the cloud. I’ll briefly describe the two extremes and how they can impact your choice of Cloud Management Platform.

  1. Empty Datacenter with Lift&Shift: High and inflexible hardware and operations costs drive organizations to shut down their data centers and move workloads to the cloud (no matter how). This is not so much about innovation or speed; it’s a shift from CapEx to OpEx for more flexibility. What you need from a Cloud Management Platform to achieve this goal is easy provisioning of commodity services: VMs, networks and so on. Teams do not have (and do not need) in-depth cloud know-how; they just move from an actual data center to a virtual one.
  2. Accelerating with cloud-native development: For other organizations the main driver towards cloud is speed. Accelerating development, collecting customer feedback and shortening release cycles for fast iterations are crucial. Teams have to adopt new ways of working (DevOps), leverage cloud-native practices like CI/CD or Infrastructure as Code and get familiar with the cloud providers’ services. A Cloud Management Platform should be almost invisible to developers working in this mode. It should help them get access to the cloud faster and support them with standard setups for developers: Git repositories, CI/CD tooling etc. (a developer toolchain). But once the initial setup is done, they should be free to access native cloud environments (AWS accounts, Azure subscriptions, GCP projects) and choose the services they like.

Of course there are variations of these two strategies, and in some cases you might have to enable both as they are executed in parallel. So if your Cloud Management Platform supports both self-service provisioning of simple cloud services through a marketplace-like interface and access to native cloud environments, you are on the safe side.

The alternative is to look for Cloud Management Platforms that provide DevOps teams with entire cloud environments like AWS accounts, Azure subscriptions or GCP projects, leaving the choice of specific services within these platforms to the teams.

Download your Cloud Foundation Maturity Poster with over 50 capabilities covering everything you need from security to a service ecosystem. Assess where you're at and plan your roadmap to cloud success.

(✅ 2) What is the right level of abstraction for you?

This question is not new to cloud. Enterprise architects have been dealing with abstraction levels for decades, and it’s a key decision to take when looking at Cloud Management Platforms. So let’s have a look at the options.

There are 2 main approaches we see on the market:

  1. Abstraction on the resource level: When abstracting on the resource level, teams that have access to the Cloud Management Platform get offered different resource types, e.g. VMs, databases and so on. They choose which resources they need and can usually provision them via a self-service portal. Depending on the “depth” of the abstraction, they will or will not be aware of which cloud the resource is provisioned in. This approach implies that every cloud service a developer wants to use has to be integrated into the Cloud Management Platform, either by the CMP vendor or by your team. Especially when expecting a scaling demand for cloud, this can make it hard to accommodate all the different use cases.
  2. Abstraction on the tenant level: In contrast to resource-level abstraction, abstracting on the tenant level means that the Cloud Management Platform provides the capability to provision native cloud tenants, e.g. AWS accounts, Azure subscriptions, GCP projects and so on, in self-service. Developers get the freedom to access these native cloud environments and choose their preferred cloud resources directly in the cloud. Thanks to the tenant-level abstraction, however, they will not have to deal with the different tenant provisioning processes of each cloud platform, figure out how they get access to the environments, or how to extract costs. That is handled homogeneously by the Cloud Management Platform to reduce complexity and lower organizational overhead.

(✅ 3) How important is vendor lock-in to you and your organization?

Vendor lock-in is one of the drivers of multi-cloud strategies. By having multiple providers in place you reduce the dependency you have on each individual provider and can easily define an exit strategy.

In reality this is a bit more complex:

If you abstract on the resource level, you can make sure to offer only services that are available across multiple clouds, rather than proprietary and highly specialized ones. This facilitates moving from one cloud to the other. However, in this case vendor lock-in to the Cloud Management Platform itself becomes a critical risk: removing the CMP from your setup would leave you without any organizational structure or processes, as well as with badly isolated cloud resources.

If you abstract on the tenant level, you will have your organizational structure represented within each cloud platform. Most importantly, you will have isolated tenants for your workloads that people can continue to access when removing the CMP from your setup. The downside: Depending on the used resources within each cloud, migrating workload from one cloud to the other may be a bit more complex.

(✅ 4) What do you want the onboarding process for new cloud tenants to look like?

Cloud Onboarding is where everything starts. The process you set up here defines how different teams and stakeholders within your organization will get in touch with the “new” cloud offering you are building. It has a great impact on how fast and intensively cloud will be adopted and finally on the speed of digitization for your whole company. The smoother the onboarding, the lower the hurdle to access cloud resources. If you want to call yourself cloud-native, self-service is a must here.

One way to measure how mature your cloud onboarding process is, is to define your time-to-cloud: how long does it take from a product owner or developer requesting a cloud environment to actually having access to that environment? A good answer is 5 minutes. This should include all the various aspects of cloud onboarding, like assigning a budget, defining access rights, documenting it in a central tenant database and actually creating the environment. This should at least be the case for low-risk sandbox environments.

(✅ 5) What kinds of resources do you want your application teams to use?

IaaS is a classic, but most organizations are looking to leverage the cloud for higher-level services: K8s, serverless functions and more importantly managed PaaS services like PostgreSQL databases, BigQuery or Kafka just to name a few. They free you from heavy operating efforts and allow you to move faster and focus on what matters: custom functionality that stands out in the market.

A lot of Cloud Management Platforms (CMPs) originate from the IaaS era and focus on provisioning infrastructure like VMs, networks and nowadays K8s, rather than making the entire cloud portfolio accessible to teams. These commodity services are integrated into the platform and can easily be provisioned by DevOps teams. For Lift&Shift workloads this can work out nicely, because the teams do not need to deal with the specifics of each cloud platform. If they have more sophisticated requirements though, it will be hard for a CCoE to accommodate them: new services have to be integrated individually, which slows down software delivery.

(✅ 6) Are you aiming to change the way you work?

You might recall the old times when requesting a server required filling out lengthy forms and maintaining a good relationship with the server team to accelerate the process from months to weeks.

In cloud-native development, code is king. You describe everything you can as code to make it fast, consistent and easily repeatable. Leveraging cloud-native paradigms like CI/CD and Infrastructure as Code will help you shorten your release cycles and deliver new features faster. To support this shift, your Cloud Management Platform has to provide access to the native cloud APIs and allow you to use cloud-native tools like Terraform or GitLab.

(✅ 7) What is your approach to achieve compliance?

Staying compliant is one of the core goals of cloud management. Simplified a lot, there are two basic approaches to achieving compliance when moving to the cloud: preventive or reactive. You usually recognize a reactive approach by a list of policy violations you need to fix, oftentimes displayed with the classic “traffic lights”. It can require a lot of resources to remediate errors reactively once something has been detected.

In contrast, cloud environments can be secured preventively in a fully automated fashion. Landing Zones are the “tool” of your choice to accomplish that. They define the guardrails for cloud usage within your organization and prevent DevOps teams from doing things they shouldn’t: E.g. deploying outside of Europe (GDPR) or leaving S3 buckets public.

Your Cloud Management Platform should support an automated rollout of Landing Zones. That way developers cannot make basic mistakes, and security configurations are implemented consistently across different use cases. This makes it much easier for security to assess an application’s security, as every application starts from a solid baseline.
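
As a concrete example of such a preventive guardrail, the “no public S3 buckets” rule mentioned above can be enforced at the account level instead of being detected after the fact. The following is a minimal Terraform sketch of one possible implementation, typically baked into a Landing Zone baseline rather than applied by hand; it is an illustration, not a complete Landing Zone.

```hcl
# One way to enforce "no public S3 buckets" preventively for a whole account.
# Typically rolled out as part of a Landing Zone baseline, not manually.
resource "aws_s3_account_public_access_block" "account_baseline" {
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}
```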

(✅ 8) Are you ready to trust your developers?

The cloud only unfolds its full potential when you use it to enable a wide range of use cases within your organization. What does that imply? A lot of engineering moves from IT into functional departments, like production or sales. These teams know best what they want to build and even how they want to build it.

Giving them their freedom requires a new mindset and even more important: A new shared responsibility model. I don’t mean the one between your company and the cloud provider, but your internal one: between your CCoE and developers.

When choosing your Cloud Management Platform you’ll have to make sure your developers get the technological freedom they need to be successful. Specifically check, whether they can create cloud environments in self-service, use the cloud-native APIs for automation and access the large variety of services the cloud providers offer without anyone’s assistance.

(✅ 9) No native Policy Orchestration

One reason to opt for a Cloud Management Platform is to avoid lock-in to the specifics of each individual cloud platform. And yet there are some aspects you can hardly abstract away. One of them is policy orchestration. All three hyperscalers provide native policy orchestration tools, and you have most probably come across at least one of them: CloudFormation (AWS), Azure Blueprints or Google Cloud Deployment Manager.

As the cloud itself is the place where policies are enforced (the so-called policy enforcement point), it’s helpful if your Cloud Management Platform leverages this native policy orchestration. You want to have full flexibility when it comes to security and compliance, even if this may cause more effort in setting up an appropriate policy framework as part of your Landing Zone.

(✅ 10) The beauty of Desired State

As humans, we tend to think in workflows that execute one step after the other. Automation enables us to improve workflows by making them faster and more reliable (you can be sure it runs exactly the same way every time). Looking at the tooling landscape, ITSM tools are a common way to model workflows digitally. They give you the power to easily automate your workflows and save time while increasing consistency. They also facilitate collaboration by automatically triggering the next step once the previous one has been completed, reducing the need for coordination.

Here is a workflow example in the context of cloud account creation:

  1. Product owner requests an AWS account by providing her name and use case
  2. Admin receives service request and triggers automation to create the account with the provided parameters
  3. An account automation script creates the account and tags it
  4. Once the account is created, the IAM team receives a notification to provide access to the account
  5. Once access is provided, the Product owner receives a notification and can access the account

While this is already much better than handing over tasks manually, a workflow approach to cloud management comes with weaknesses, especially when you have a large-scale cloud migration project ahead of you. Oftentimes workflows still include manual steps, and in the long term they suffer from inconsistencies. How do you integrate future changes? You’ll have to implement a new workflow, which further increases complexity. And what if an error led the workflow to end early? After months and tens or hundreds of cloud environments, how do you know what the actual state of a specific cloud environment is?

That is where declarative definitions, or desired state models, come into play. Instead of defining each step individually, you define the desired outcome, e.g. an Azure subscription with access for “Tom Teamlead” and a security configuration that allows productive workloads. That desired state is then “replicated” to the cloud (creation of the cloud environment, tagging for production, rollout of a Landing Zone, providing permissions and so on). While in the beginning the results of a workflow and a desired state model may look similar, the approach to get there is fundamentally different, and the value of that materializes in the long term. Declarative automation in the context of cloud management focuses on achieving continuous compliance. What does that mean? A desired state that is defined upon the creation of a cloud environment can be compared against the actual state in the cloud (the actual existence of the environment, the actual permission set, the actual security configuration etc.). In case of divergence, the actual state can automatically be set back to the desired one. Infrastructure-as-Code is a commonly known example of a desired state approach.
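
Since Infrastructure-as-Code is the most familiar desired-state example, here is a minimal Terraform sketch to make the idea tangible. It declares what an account should look like, and every run reconciles the actual state towards it. The example uses an AWS account instead of the Azure subscription from above because Terraform’s AWS provider makes the point compactly; all values are placeholders.

```hcl
# Desired state, not a workflow: this declares what should exist.
# Every run compares it to the actual state and corrects drift.
resource "aws_organizations_account" "web_shop_prod" {
  name      = "web-shop-prod"
  email     = "aws-web-shop-prod@example.com" # placeholder
  parent_id = "ou-xxxx-xxxxxxxx"              # hypothetical production OU

  tags = {
    "owner"       = "tom.teamlead"
    "environment" = "production"
  }
}
```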

meshStack as a Cloud Management Platform follows a desired state approach that is mostly focused on organizational aspects of Cloud Management like Tenant Management, IAM, Security & Compliance or Cost Management.

Cloud Management Platforms are a great way to improve the structure of your cloud landscape and ensure transparency across multiple cloud platforms. However, the choice of platform is a strategic decision that will affect the way you work and the speed at which you’ll be able to adopt cloud in the future. Taking these 10 aspects into consideration will help you make the right choice and better understand its implications.



Cloud Foundation: 3 valuable Players on the Field

Building Cloud Foundations can be cumbersome and tedious. In this post we’ll introduce 3 of our offerings in that field that will make your life easier.

If you are an Enterprise Architect and/or responsible for your organization’s Cloud Center of Excellence or Cloud Foundation Team this post is for you.

It will show you ways to accelerate your Cloud Journey, provide insights on Cloud Foundation best-practices and finally demonstrate how our offerings play together to help you build the best Cloud Foundation.

The Cloud Foundation Maturity Model – Getting Clarity

As a member of your Cloud Foundation Team, you may have come across the Cloud Foundation Maturity Model (CFMM). It’s a strategic model that helps you understand the capabilities you’ll need to build up during your Cloud Journey, regardless of which clouds you use, where you’re currently at or what approach to implementation you have taken.

You can consider it the coach on the playing field. It gives you guidance, shows where you’ll want to go and provides insights into different implementation options that will get you there.

We have used this model successfully with leading cloud-native organizations from various industries.

Building Cloud Foundations is an Iterative Process

Lots of organizations we work with run thousands of applications in the cloud, managed by a central Cloud Foundation Team. However, they didn’t get there within a day. Implementing a Cloud Foundation is best done iteratively: you start with the basics and implement more and more sophisticated capabilities over time, as you learn more about your users and their requirements.

The Cloud Foundation Maturity Model helps you to break the journey down into small steps that systematically build upon each other.

It’s not only about how far along the Cloud Journey you are, but also what type of vehicle you’re using

When Cloud Foundation Teams are faced with large-scale cloud migration projects, they know that whatever Cloud Foundation capabilities they build they’ll have to be highly automated in order to keep up with the upcoming scale of cloud demand.

That’s why we cover two dimensions when it comes to maturity within our Cloud Foundation Maturity Model:

  1. Journey Stage: Varying from Essential building blocks (1) to Industry Leading capabilities (5)
  2. Implementation Maturity: Varying from manual to fully automated

Assessing both brings you valuable insights when evaluating the current state of your Cloud Foundation and that’s exactly what the Cloud Foundation Maturity Model can be used for.

As a result, you’ll have a structured analysis of your Cloud Foundation, demonstrating your achievements as well as areas you’ll need to improve on a single page, which makes it a great tool for internal communication.

Collie – A Trustworthy Helper On Your Way To Cloud Foundation Maturity

Most organizations start their cloud journeys in high spirits on a green field. By the time you realize that you need proper cloud governance and decide to establish a Cloud Foundation, the once green grass has turned into a brownfield landscape. Collie is an open-source multi-cloud CLI that helps you explore and sort that landscape, preparing it for systematic management. It’s built for power users, single-player.

Collie CLI easily provides multi-cloud transparency on cloud tenants, corresponding access rights, tags and cost information. In regard to the Cloud Foundation Maturity Model it will help you to implement the following building blocks:

To fully automate your cloud foundation capabilities and establish collaboration between the Cloud Foundation and your DevOps teams, you’ll need to go one step further and use a platform like meshStack that reflects this organizational model.

meshStack – Your Cloud Foundation Platform

At meshcloud, we focus on enabling DevOps teams to use cloud and speeding up their applications’ time-to-market. Providing self-service access to native cloud tenants like AWS accounts, GCP projects or Azure subscriptions is critical to achieve this.

We know that every organization is unique and comes with specific requirements, especially in regard to the integration of existing IT Landscapes. Our aim with meshStack is to help you get the basics right and enable you to build upon them.

That’s why a lot of Cloud Foundation capabilities come out-of-the-box, while others can flexibly be adapted to your organizations’ needs.

In general there are 3 ways meshStack can help you to implement Cloud Foundation Building Blocks.

  • meshStack provides this Cloud Foundation Building Block out-of-the-box


    Example: When using meshStack in a multi-cloud setup, you’ll have a Self-Service Multi-Cloud Tenant Database, out-of-the-box.

  • meshStack provides strong facilities for an easy implementation of that building block


    Example: With Landing Zones and Services you’ll have everything you need to implement Modular Landing Zones.

  • There are established solution patterns that facilitate the implementation of this building block, when using meshStack


    Example: There is an established solution pattern that will help you provide managed Service Accounts with meshStack.


In The End It’s About the People

Our offerings aim to help organizations throughout their entire Cloud Foundation journey:

  • From understanding and defining the Cloud Foundation for your organization based on our Cloud Foundation Maturity Model
  • to analyzing and understanding existing cloud landscapes with Collie CLI
  • all the way to implementing large-scale Cloud Foundation Platforms to manage thousands of teams and applications based on meshStack.

However, establishing a Cloud Foundation within an organization is not only about building up the relevant capabilities and deciding on implementation options. It also is a huge organizational shift that requires involvement of all stakeholders and great communication across teams and up to the C-Level.

By allowing organizations to easily experience the power of self-service and providing means for internal communications, we want to enable Cloud Foundation Stakeholders to drive acceptance for change throughout the entire organization in order to pave the way for a new way to organize IT.


Architecture Diagram of ITSM and CMP Integration

Why your ITSM Tool is the wrong Choice for cloud-native Cloud Management

ITSM tools have been used by large enterprises for ordering and workflow processes for a long time. They provide an easy way to order phones, monitors and other IT gadgets that could make an employee happy.

Providing Azure Subscriptions or AWS Accounts within your ITSM Tool

As cloud has become a new and common IT resource, it may be an obvious solution to use an existing ITSM tool to order specific cloud resources or entire cloud tenants.

A central IT service portal has the benefit that it consolidates all offered services that are used across the organization in a single place. Every employee knows that it exists and will probably use it as a first access point when it comes to IT requirements.

However, this general applicability is also its weakness, looking at highly specific services like cloud tenants. The cloud has much more complexity than a phone or a monitor. Here are just a couple of examples why:

  • Access and Permission Management: The user circle of a cloud project will change and evolve over time (new team members, team members leaving the company).
  • Cloud providers have heterogeneous organizational models. For example, Azure has Management Groups while AWS has Organizational Units with Service Control Policies. Both concepts are used to group cloud tenants within the organization and apply certain security guardrails to them.
  • Cloud projects have a dynamic lifecycle. Not only users and resources change over time, governance guardrails may have to be updated eventually, which can require reconfiguration of accounts for example.

So while a workflow automation as provided by ITSM tools may help when it comes to the initial cloud tenant provisioning, it will hardly be able to cope with the complexity and required cloud governance that a large scale use of cloud by DevOps teams requires.

Building a Cloud Management Platform for Cloud Resources

Due to this complexity of cloud infrastructure, a continuously evolving cloud service offer and organizational implications resulting from the use of cloud (DevOps teams), the product category of cloud management platforms has evolved.

And it certainly makes sense to address cloud complexity with a tool that has been built for this specific purpose, which is what we at meshcloud do with our platform. Our tool enables organizations to be cloud-native. While we provide a framework for cloud governance, DevOps teams are able to use the native cloud tooling they are used to – no barriers, no abstractions. It's the organizational structure that we take care of: IAM integrations, cloud access management, tenant creation, policies and billing, not the workload.

But there is another aspect that should be considered:

This may sound a bit dramatic, but a lot of central IT departments have lost the trust of their (internal) customers (DevOps teams) due to a lack of process adoption. They are associated with slow enterprise processes, legacy tooling and security restrictions that don't allow for any fun tech experiments or technological freedom.

This image problem puts IT departments in a difficult position when launching a new and isolated tool for cloud management. In order to win back the trust of the DevOps teams, they would have to market it internally to raise awareness and ensure its adoption.

Therefore, it often makes sense to use an existing IT service portal as a central entry point to the (mesh)cloud world to increase the discoverability and avoid parallel activities in this field. Furthermore, there may be adjacent service offers, that better fit into such a service portal.

So, how to combine your IT Service Portal with a new Cloud Management Platform?

The central IT service portal stays the central access point for all IT services. However, you will not directly order cloud services like AWS accounts, Azure subscriptions or GCP projects within this portal. Instead, you will use it to get access to a cloud-native cloud management platform.

In the case of meshcloud as a cloud-native multi-cloud management tool, this process looks as follows:

Let's say you are DevOps team lead Theresa and you need to get cloud access for your DevOps team to develop this cool new application that runs in the cloud.

meshcloud ITSM Integration

1) The central service portal is the initial entry point. The DevOps team lead Theresa can access it to request cloud access. Instead of providing her with individual cloud services, she will get access to meshcloud, her entry ticket to the cloud.

2 + 3) In some cases organizations want to include some approval workflows, which is something that workflow tools do pretty well. For example, Theresa may need to get a budget approved before actually getting access to the cloud. Another use case we often face is that cloud access requires some consultancy services, security trainings or a "cloud license" when used for the first time. These aspects can be covered in these two steps.

4) Once Theresa's access has been approved, the central IT service portal triggers the creation of a meshcloud account with the provided metadata. This process can easily be automated via our meshcloud API.

5) Now, our DevOps lead Theresa can access meshcloud to manage the cloud environments for her application. This includes permissions for other team members as well as the creation of cloud tenants in the available cloud platforms. In order to be able to map her activities in the cloud to the resulting application, she can provide some metadata, like an application ID, a cost center, a responsible product owner or a data security classification for the application.

6) The tenant creation and configuration is an automated process.

7) As this is a cloud-native portal, the DevOps team will have direct access to the cloud native consoles to set up and automate their deployments.

8) Cost management and chargeback can be a pain when managing a large number of projects in the cloud. Costs have to be mapped to their causing applications. meshcloud takes care of this cost allocation even across multiple clouds and provides cost data to be processed in further controlling and chargeback systems.

What's the Role of Central IT in this?

Right, we don't see central IT in this picture, but that doesn't mean it has disappeared. So I want to spend the last words on common organizational structures we see in large enterprises. Within the central IT department, we often find a group of cloud experts (enterprise architects, security and governance, platform operators), often called a Cloud Foundation. Their aim is to deliver secure cloud tenants to their internal customers (DevOps teams), to provide them with managed services like ELK stacks or databases, and to assist them with security and compliance questions. meshcloud provides them with a tool to do this in a consistent and scalable fashion.

To learn more about meshStack, please get in touch with our sales team or book a demo with one of our product experts. We look forward to hearing from you.



(Multi-)Cloud Outlook in 2020

Cloud Outlook 2020
New challenges and developments in multi-cloud.

As meshcloud, we are proud to look back on a successful year 2019, in which we won multiple new customers. During this time we've had the great chance to accompany different types of companies in their cloud transformation. While our customers belong to many different industries, e.g. Automotive, Financial Services or Utilities, we've been able to observe many similarities in their move to the cloud – showing the relevance of cloud management and cloud governance among large organisations.

At meshcloud, we support large organisations in their cloud transformation. Our platform provides consistent governance with cross-platform Identity and Access Management, Tenant management, Cloud Security & Compliance and Cost Management and is used by large enterprises to deliver secure cloud environments to thousands of teams while staying in control and reducing complexity.

Based on our experiences in this past year, here are some predictions on upcoming cloud challenges in 2020:

Experiments come to an end: Bringing the Cloud Journey to the next level

A lot of companies have surpassed the experimental stage of using cloud. They are moving towards production deployments of cloud applications, have to manage rising costs and face tightening cloud security requirements. While at the beginning of a cloud journey we often see scattered cloud initiatives across different parts of the organisation, more and more companies are now consolidating these initiatives and bringing them to the next level, setting up professional guardrails and preparing for wider user groups. One of the biggest challenges in this move is a reorganisation of responsibilities, especially in terms of cloud security, as usage will now extend beyond a handful of cloud experts.

In order to fulfill these requirements, there will be a consolidation of divergent cloud governance activities, bringing them together and solving them consistently across different cloud providers, often led by a dedicated team of (multi-)cloud experts. These new solutions have to be set up for scale in order to address the rising demand for cloud resources. Self-service and automation help to accelerate the time-to-cloud to a couple of minutes for increased productivity.

Watch out: Organizational lock-in is on the rise

Obsessed with avoiding vendor lock-in on the workload level, a lot of companies have managed to move workloads to portable infrastructure like Kubernetes and data to open-source databases like MySQL, PostgreSQL or MongoDB. At the same time, the maturity of managed Kubernetes offerings has greatly improved. However, as the use of cloud grows, organisational processes and integrations towards cloud platforms are being established. And they are a new way for cloud providers to keep their customers close. As most organisations use more than one cloud provider to meet all their requirements, they are confronted with multiple organisational models. This is rather challenging, as the integration of new platforms becomes time- and resource-consuming and slows down the actual adoption of new technologies.

We are very happy to be working on a declarative definition of organizations ("org-as-code") that enables companies to define their organization once, while meshcloud takes care of translating this into the different cloud platforms. The concept is known from infrastructure-as-code and is based on the idea of defining a desired target state in an abstract way, while the system works out the specific implementation in the background. With this in place, it becomes very easy to integrate new cloud platforms and provide technological freedom. (Learn more about org-as-code in our multi-cloud maturity model.)
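To make the idea more tangible, here is a minimal, hypothetical sketch of what a declarative organization definition could look like. The class and field names are illustrative assumptions, not meshcloud's actual schema.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative "org-as-code" sketch: the desired target state is declared once,
# and a reconciliation loop translates it into each cloud platform's native
# constructs. Names and fields are hypothetical, not meshcloud's actual format.

@dataclass
class ProjectEnvironment:
    platform: str          # e.g. "aws", "azure", "openshift"
    landing_zone: str      # which baseline configuration to apply

@dataclass
class Project:
    name: str
    cost_center: str
    owners: List[str]
    environments: List[ProjectEnvironment] = field(default_factory=list)

desired_state = Project(
    name="online-shop-checkout",
    cost_center="CC-4711",
    owners=["theresa@example.com"],
    environments=[
        ProjectEnvironment(platform="aws", landing_zone="production"),
        ProjectEnvironment(platform="azure", landing_zone="development"),
    ],
)

# A reconciler would now compare this declaration against what actually exists
# in each platform and create, update or delete tenants and permissions accordingly.
```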

Cloud Security Complexity is bursting

When cloud is widely used across the organisation and the number of cloud projects is continuously growing, it is essential to have clearly defined cloud security concepts as well as proper process documentation to keep transparency on ongoing activities. Especially cloud access management (e.g. provisioning and revoking access rights to cloud tenants) must be handled consistently, as the amount of managed permissions grows rapidly over time – it scales with the number of users, projects and cloud platforms. An easy and highly automated approach helps to keep control and enables continuous governance in order to comply with cloud security frameworks and certifications.

At the same time, multi-cloud teams will have to help cloud users and DevOps teams to implement cloud security concepts (e.g. via landing zones in their cloud tenants) to achieve consistent security configurations across accounts and to deal with the increasing responsibility that comes with DevOps practices.

Therefore, for 2020 we see an increased demand for clear governance of (multi-)cloud environments. Consistent cloud configurations and policy definitions will be key as companies scale up their use of cloud. Not only do they have to be controlled once implemented; their definition and scalable rollout will be in focus in order to provide a basic cloud security level for all cloud activity, which helps to align cloud configurations and accelerate basic cloud security assessments, even in regulated environments.

Are these topics relevant to you in 2020? At meshcloud, we provide best practices for multi-cloud management and multi-cloud governance, integrated into a powerful software platform. Feel free to reach out and get in touch with our multi-cloud experts or book a demo.


Make or Buy Assessment for Multi-Cloud Management

Multi-Cloud Management – Make or Buy?

Don't build your own (multi-)cloud management platform. We all face limits in time and resources. Don't invest them in a solution that each of your competitors will have to build, too. Focus on your core competencies to build products your customers love and make a difference in today's competitive market environment.

The start of a cloud journey

Most organizations start off using a single cloud provider. They go for cloud to leverage the benefits in speed and scale these technologies provide: on-demand access to a large pool of resources, a broad offering of stable services waiting to be used in new, innovative applications and, finally, a cost model that reflects actual resource usage rather than a big block of inflexible fixed costs.

You have to balance agility and control to use cloud in the enterprise


Unfortunately, in an enterprise world the cloud cannot be integrated by pulling out the company credit card and registering for an AWS account (replace AWS with any other cloud provider of your choice). Instead, the new provider will be integrated into an existing process flow, regulations have to be respected, security guidelines followed, considerations on alternative providers carried out and possible exit strategies evaluated. And all of this is right: the gain in agility and speed has to be balanced with more control or (for those who don't like this word) transparency on responsibilities.

Silos, fragments and growing pains

A lot of companies initially choose to build this integration of new technologies into the existing organization themselves. This may work well in the beginning – while the number of users and projects is small, only one platform has to be integrated, the cloud is mainly used by experienced cloud-native developers and the first compliance audits are yet to come. At scale, such efforts can easily result in chaos and a lack of transparency (have a look at our cloud maturity model to learn more about the different stages of multi-cloud management).

One of the reasons this happens is that there is neither a comprehensive strategy behind the activities nor a central team accountable for the result. The solution mainly emerges from bits and pieces needed along the way, resulting in heterogeneous cloud silos (isn’t this how we started?).

  • Each cloud platform will follow its own process. Teams implementing the integration will do duplicate work. And teams that are new to the cloud may have a hard time getting started, as they won't know how and where.

  • Only parts of cloud management will be covered, wherever the pain is most pressing: e.g. IAM integration, provisioning of cloud tenants, configuration of cloud tenants, billing. Mostly the former, as this is where the journey starts.

  • In the end, the use of cloud is part of a comprehensive IT strategy. However, its effectiveness can only be evaluated if I actually get an overview of the acceptance of the new technologies, the resources running in different clouds, the type of services required by my teams, etc.

  • There are some processes in place that will help me at first, but:

    • are they documented and resilient enough to withstand audits or regulatory requirements?
    • are they really cloud-native, e.g. resulting in a time-to-cloud of seconds or minutes?

Why it doesn't make sense to build your own cloud management tool

Apart from the result being different from what you expected and a high probability that you'll find yourself starting all over again, trying to integrate the partial and provisional solutions into a comprehensive view of your (multi-)cloud environment, there are a couple of reasons why it doesn't make sense to build your own multi-cloud management tool in the first place.

1) Time is against you

Building your own tool for multi-cloud management will take too much time. Planning the project, finding the required resources or building up specific cloud know-how, and implementing it all will take time, even if you are fast and have unlimited monetary resources. You don't have this time, and most probably you also don't have unlimited resources.

No matter what industry you are in, you are probably facing a strong competition. Traditional industries like banks or automotive companies in particular have to fight for their markets, have to reinvent themselves and use their competitive advantage (extensive know-how, a large customer base, a known brand and a large amount of data to work with) to stay in the game.

The competition is strong and won't wait for you. And in regard to IT, you start under unfair conditions. While new players in the market can start building their business from scratch and in a cloud-native manner, you are carrying a heavy backpack of legacy infrastructure and processes.

To keep up, it's not enough to make your existing processes a bit better; you'll have to build new cloud-native processes.

2) Developers are hard to find

Finding qualified IT specialists is tough and probably one of the biggest challenges every organization, small or large, faces nowadays. To build a multi-cloud management platform you'll need platform experts for the cloud platforms that you aim to use. As this hasn't been your focus so far, you'll be looking to hire them – and they are even harder to find than regular software engineers.

Of course you could bring in external resources that have the skills and know-how to help you out faster. However, they are expensive and, more importantly, not sustainable. Building a cloud management tool is not a one-off project. Working with cloud resources is more complex than ordering physical servers, a phone or a monitor. Everything you deal with in the cloud has a lifecycle – users, cloud projects, permissions, certifications and so on – and your tool will have to take care of it. This requires continuous governance and transparency on the state of your multi-cloud environment.

Furthermore, the cloud-native ecosystem is incredibly dynamic. Requirements for cloud management and governance evolve over time, new platforms pop up on the market, new regulations will be defined and you'll have to comply.

These circumstances scream for a software service that comes with updates and adapts to a fast-changing environment like the one we face here.

3) You have a business to run

We've figured out that time and resources are limited for all of us. But there is another point that shouldn't be neglected in a make-or-buy assessment. And that is something you definitely learn as an entrepreneur, but which is true for organizations of any size: With limited time and resources, you have to focus on your core competencies.

And these are most probably not multi-cloud integration and governance best-practices (for us they are), but rather products and services in your industry. Invest your resources in activities that create value for your customers. Work on innovation and new business models that will help you to make a difference on the market.

If your customers are IT teams, take them on the cloud journey and provide them with the tools they need to deliver software faster. It is your job to make it fun to use the cloud, to attract new talent because of the freedom and possibilities IT teams experience in your company.

Having a tool in place that takes care of the basics will relieve you from being a bottleneck and allow you to focus on the specifics of your organization: which services you want to offer, how to get cloud newbies on board, architectures for newly built applications, security and network configurations for your specific setup, and providing insights for better stability or investment decisions.

About meshcloud

meshcloud helps organizations leverage cloud technology in a cloud-native way. Our platform provides consistent governance with cross-platform Identity and Access Management, Tenant Management, Compliance & Security and Cost Governance. Large enterprises use meshcloud to deliver cloud environments to thousands of teams while staying in control and reducing complexity.

Please reach out to us for further information. Our team is more than happy to give you a free introduction to our platform.


Multi-Clouds in Banking – 3 Takeaways from the 2nd EBF Cloud Banking Conference

It’s already been a while. On July 9th, I visited the 2nd EBF Cloud Banking Forum in Brussels. The topic of the day: “Shaping a Multi-Cloud Environment”. Representatives of banks, regulators and cloud providers met to discuss how cloud computing can be used in the financial services sector, with one common goal in mind: to provide secure banking services to the people. With all the FinTechs and Neobanks showing up all over the place, it is obvious that traditional banks have to adopt new technologies to stay competitive in the market. The EBF Cloud Banking Forum brings this discussion to the European level. With joint forces, we can not only set an example for other countries, but also be a pioneer with regard to other sectors.

The adoption of cloud technologies comes with some hurdles and quickly raises questions for the users: “How do I stay compliant with existing regulations?” as well as for the regulators: “How do I make clear and transparent rules?” and “How are we going to treat the overconcentration in the market?” There are basically only a handful of cloud providers to choose from, and implementing a multi-cloud strategy is one way to act against this overconcentration. However, it also raises the administrative complexity of adopting cloud. Therefore, proper cloud management has to be put in place to help banks avoid ending up with more complexity than before. From a whole day of intensive discussions, I’ve put together the three main takeaways of the day.

Identity and access management (IAM) is crucial to a bank’s cloud transformation

This topic is not unique to external environments like the cloud. Defining how to manage identities and access to infrastructure resources is and has been a complex issue for many years. In order to mitigate the risk of abuse or unauthorized access to confidential information and critical systems, access rights have to be reduced to a minimum, following the principle of least privilege. Having a multi-cloud environment may raise the complexity of achieving this.
On the one hand, you want to make it easy for your users to access cloud resources, in order to speed up software development and lower the barrier for cloud adoption. On the other hand, you need to be in control of the access process, make sure it is documented and auditable to avoid the risk of undiscovered information leakage.

With meshcloud, we help large organizations like banks with a governance framework for multi-cloud environments. Our platform plugs into the existing organizational structures and identity providers like Azure AD or LDAP and unifies the way access and permissions are handled across different cloud providers. As a result, we use existing identities and provide secure SSO access to all cloud providers as well as a platform to manage fine-grained permissions centrally before replicating them to the attached platforms.
In addition, cloud providers like AWS and Azure have published services like AWS CloudTrail that monitor access and operations related to your cloud infrastructure.
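To illustrate, CloudTrail events can also be queried programmatically to review who accessed what. Here is a minimal sketch using boto3; the region, time window and the focus on console logins are placeholder assumptions for the example.

```python
from datetime import datetime, timedelta, timezone

import boto3

# Minimal sketch: list recent console logins recorded by AWS CloudTrail.
cloudtrail = boto3.client("cloudtrail", region_name="eu-central-1")

end = datetime.now(timezone.utc)
start = end - timedelta(days=7)

events = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "ConsoleLogin"}],
    StartTime=start,
    EndTime=end,
    MaxResults=50,
)

for event in events["Events"]:
    print(event["EventTime"], event.get("Username", "unknown"), event["EventName"])
```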

An exit strategy is necessary to avoid vendor lock-in to a single cloud provider

The EBA guidelines on outsourcing arrangements include a paragraph on exit strategies, which requires banks and other financial institutions to have a well-documented exit strategy in place that ensures business continuity in case of an interruption of outsourced services. Such an interruption can be the result of a failure or malfunctioning services, as well as of contractual or legal obstacles on the side of the service provider. These scenarios should be well documented and practiced, as it is crucial to be familiar with the workflows and know what to do in such exceptional circumstances.

While multi-cloud is a way to distribute the risk across more than one service provider, there are strategic steps to consider when deciding on a sustainable cloud strategy. The use of open-source infrastructure components on different layers – such as Kubernetes, PostgreSQL or NGINX – can drastically simplify migrations from one cloud to another. They can even be operated in a private datacenter. However, most companies decide on a multi-cloud strategy as the result of a best-of-breed approach. They want to use each cloud for the specific (and often proprietary) services it offers, for example in the field of machine learning or artificial intelligence. The use of such tools can accelerate development because they bring a lot of functionality out of the box and enable teams to focus on truly differentiating functionality. An assessment of the criticality of applications as well as of the consequences of a failure of such proprietary services should be part of an exit strategy.

At meshcloud, we help companies avoid vendor lock-in by facilitating the use of multiple cloud platforms and the distribution of infrastructure and applications across them. We define organizational structures (teams, projects, users, permissions) in a declarative manner to enable our customers to integrate new service providers very fast and with little administrative overhead. To address the rise of higher-level infrastructure services, such as managed databases, message queues or machine learning and AI services, we offer a service marketplace that enables customers to provide a large variety of services based on an open standard (OSB API). This unifies the process of provisioning services and decouples it from the underlying infrastructure.
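Because OSB API is an open standard, provisioning a service instance follows the same HTTP contract regardless of which broker serves it. The sketch below shows such a provisioning request; the broker URL, credentials and the service/plan IDs are placeholders that would normally come from the broker's catalog.

```python
import uuid

import requests

# Minimal Open Service Broker API sketch: provision a service instance.
# Broker URL, credentials, service_id and plan_id are placeholders; real values
# are obtained from the broker's /v2/catalog endpoint.
BROKER_URL = "https://broker.example.com"
AUTH = ("broker-user", "broker-password")
HEADERS = {"X-Broker-API-Version": "2.14"}

instance_id = str(uuid.uuid4())

response = requests.put(
    f"{BROKER_URL}/v2/service_instances/{instance_id}",
    params={"accepts_incomplete": "true"},   # allow asynchronous provisioning
    auth=AUTH,
    headers=HEADERS,
    json={
        "service_id": "example-postgresql-service-id",
        "plan_id": "example-small-plan-id",
        "organization_guid": "example-org-guid",
        "space_guid": "example-space-guid",
    },
    timeout=30,
)
response.raise_for_status()
print(response.status_code, response.json())  # 201 created, or 202 accepted for async
```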

Configuration is key

A recent KPMG study found that 80% of IT security incidents happen due to manual misconfigurations. By spreading infrastructure across different service providers, it gets even harder to keep control of the correctness and consistency of cloud configurations – and this does not even consider application-specific configurations yet.

To improve the security and compliance of cloud environments, our landing zones help to configure cloud accounts upon creation. This enables our customers to roll out consistent configurations in automatically created cloud accounts, according to their use case (test, development, production). To give an example, these configurations can limit the use of cloud infrastructure to specific geographical regions or blacklist certain services that are not compliant. By rolling them out consistently across all cloud accounts, our customers can relieve their development teams from defining compliant configurations individually and instead provide them with a framework that has been approved by the security department.
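On AWS, for example, such a region restriction is commonly enforced with a Service Control Policy attached to an organizational unit at account creation time. Here is a minimal boto3 sketch; the allowed regions, exempted global services and the target OU ID are placeholders.

```python
import json

import boto3

# Minimal sketch: restrict an OU to approved regions via an AWS Service Control Policy.
# Allowed regions, exempted global services and the target OU ID are placeholders.
organizations = boto3.client("organizations")

scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyOutsideApprovedRegions",
            "Effect": "Deny",
            "NotAction": ["iam:*", "organizations:*", "sts:*", "support:*"],
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {"aws:RequestedRegion": ["eu-central-1", "eu-west-1"]}
            },
        }
    ],
}

policy = organizations.create_policy(
    Name="restrict-to-eu-regions",
    Description="Landing zone baseline: deny actions outside approved regions",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)

organizations.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-xxxx-xxxxxxxx",  # placeholder OU ID
)
```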

To learn more about meshStack, please get in touch with our sales team or book a demo with one of our product experts. We're looking forward to getting in touch with you.


Multi-Cloud Deep Dive

Multi-Cloud Deep Dive with meshcloud in CloudComputing Insider

meshcloud helps enterprises build a "mesh of clouds" and make it accessible to their users.

What does that mean? With meshcloud, developers can easily access cloud resources from different cloud providers and platforms in self-service.

Why is this needed? Every cloud provider is its own silo – for companies with a multi-cloud strategy this brings a lot of complexity, on the one hand when using the different services, but especially when it comes to establishing consistent governance across the entire cloud landscape. meshcloud helps companies unify organizational processes to make working with multiple clouds simple and secure. Organizational processes include, for example, identity and access management, the automated creation and configuration of cloud accounts in the different cloud platforms, and also billing across clouds.

How exactly this works and what using the platform feels like for developers is explained by meshcloud co-founder and CTO Johannes Rudolph in the Deep Dive of CloudComputing Insider.

Read the article in CloudComputing Insider here.

If you like, you can also jump straight to the live demo. Part 1 shows how users, i.e. developers, work with the meshcloud platform:

In part 2 we show the admin and ops perspective. It provides an overview of the entire cloud landscape, the number of projects and users, and makes costs transparent across clouds.

https://youtu.be/0wuCZtut_zc

Enjoy!


Multi-Cloud Stakeholders

Multi-Cloud Stakeholders – Why Multi-Cloud concerns the whole organization

In times of digital transformation, multi-cloud strategies are often implemented top-down. The CIO decides: "We are going to implement a multi-cloud strategy, because we want to avoid vendor lock-in to a single platform." While this is a very legitimate reason for a multi-cloud strategy (more can be found here), the actual use of multiple cloud platforms touches many stakeholders within the organization. They have to be taken on the journey and their everyday work has to be considered to make the strategy bear fruit. Depending on your position within the company, your interest is either in agility or in control. So while a developer is striving to write new code rather than dealing with repetitive approval tasks, IT managers or security officers are afraid to lose control due to increasing complexity.

Stakeholders of multi-cloud environments

IT Leaders

With the digital transformation, the role of software increases for companies of all industries. Software delivery performance has become the key metric to improve in this context. If you want to be better than your competition, you have to build up capabilities to develop new applications and bring them to market faster than they do. As an IT leader, it is your responsibility to provide the appropriate tools to your development teams, from cloud computing platforms like Kubernetes to state-of-the-art tooling for CI/CD, monitoring, logging etc. – the whole cloud-native stack. Keeping control of the occurring costs and making sure the tools are easily accessible to your teams to avoid shadow IT are also part of your responsibilities.

Developers

In times where software developers have become a scarce resource and enterprises compete for the best talent, providing an attractive work environment is key to success. Ask 5 developers what they consider important in their work environment: being productive and having the right tools to work with will be among their answers. Apart from bad tooling, organizational inefficiencies can be a reason for productivity losses. Dependencies between departments for routine tasks like the creation of cloud accounts, configuration changes or quota management may lead to bottlenecks and unnecessary idle times. Self-service capabilities that provide autonomy help to solve this and drive productivity as well as employee satisfaction. They are at the core of cloud computing technologies, according to the NIST definition.

Cloud Operators

Being in operations, the main goal is to achieve platform and service stability. Night shifts or at least on-call duty are common practice, as service disruptions bear a high cost risk. Customers may not be able to purchase or use a certain service during downtimes. As a result, support incidents pile up and a large amount of time has to be invested even after service recovery. To avoid these incidents, infrastructure has to be set up redundantly, aiming for high availability. The staff should be trained for failures to make sure the people involved know what to do in such high-pressure situations. An important prerequisite to make this work is a consistent environment that doesn't depend on individuals – automation, version control and documentation are important here.

But operations don't only happen in emergency night shifts. During the day, ops teams deal with day-to-day work to provide a set of services to development teams. However, the reality is often different. Instead of the evaluation and provisioning of new cloud services or automation tasks, ad-hoc processes and manual routine tasks fill the operators' days, leading to inconsistencies and impatient developers finding their detours – shadow IT – to achieve their goals, which is a huge security risk. In multi-cloud environments, heterogeneous processes increase this complexity drastically if not taken care of, especially at scale.

Again, the reduction of dependencies will help to avoid bottlenecks. Furthermore, it should be easy to provide services to developers. How? With a standardized framework for service provisioning like the meshMarketplace, operators can easily provide a service, e.g. a database or a logging framework, to all development teams. While the operators can focus on the stability of each service as well as the addition of new services, the marketplace informs development teams about the existence of a service and boosts its reusability.

Controllers

Multi-cloud can be used as a tool to reduce cloud costs. In many cases, however, the tools to define, compare or evaluate the cost of different projects, applications or deployment environments are missing or immature. Costs often occur in large vendor-dependent blocks that are hard to map and distribute to specific applications. Take the AWS or Azure cloud spend, for example: budgets for these positions are put aside and action is only taken when they are exceeded. In these cases, DevOps teams are asked to manually check on their resources to make sure they are all actually needed and no zombie workloads exist.
In private clouds it gets even worse. Platforms used in private cloud contexts often don't even provide usage information or produce an invoice. As a result, companies don't know the cost of deploying an application internally, let alone how to evaluate the most cost-efficient deployment environment for an application. To leverage the financial benefits of multi-cloud strategies, cost transparency is necessary. Private clouds need usage metering and billing capabilities, and cloud spend has to be allocated to the actual applications using the resources.
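For the public-cloud part of this, cost data can be pulled and grouped by an application tag so that spend maps back to the applications causing it. Below is a minimal boto3 sketch against AWS Cost Explorer; the tag key and time window are placeholder assumptions.

```python
import boto3

# Minimal sketch: break down one month's AWS spend by an "application-id" cost tag
# so costs can be allocated to the applications that caused them.
cost_explorer = boto3.client("ce", region_name="us-east-1")

response = cost_explorer.get_cost_and_usage(
    TimePeriod={"Start": "2020-01-01", "End": "2020-02-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "TAG", "Key": "application-id"}],  # placeholder tag key
)

for result in response["ResultsByTime"]:
    for group in result["Groups"]:
        tag_value = group["Keys"][0]                          # e.g. "application-id$online-shop"
        amount = group["Metrics"]["UnblendedCost"]["Amount"]  # cost as a decimal string
        print(tag_value, amount)
```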

To successfully implement a multi-cloud strategy, the needs of all stakeholders have to be addressed. Multi-cloud management tools are a way to support this. The meshcloud multi-cloud platform reduces the complexity of multi-cloud environments. It provides agility to developers and operators while providing the visibility to keep control of cost and security. To learn more about our platform, feel free to contact us or schedule a demo with one of our cloud experts.


The Cloud Project Lifecycle


How you implement cloud computing technologies matters

Cloud technologies are an essential tool to increase software delivery performance. They bring the necessary automation and scalability that enable developers to spin up instances within minutes, get their applications up and running and implement CI/CD routines to develop software faster and close to customer requirements. However, the State of DevOps Report 2018 found that it's not enough to just use cloud computing technologies – it matters how you use them. "Companies that leverage all of the essential cloud characteristics are 23 times more likely to be elite performers" – a group of development teams that delivers software at the highest levels.

In this context, we want to introduce the concept of the cloud project lifecycle to illustrate the stages a cloud project traverses, how this impacts your organization and what you need to do to leverage your cloud to the fullest.

Let's consider the following situation: within the next 3 years you aim to migrate several thousand projects to the cloud. As you run a large variety of applications, you have decided to use more than one cloud platform – you have a multi-cloud strategy. Each of those projects will run through a lifecycle from its creation to its deletion. The way you design every activity within the lifecycle will have a significant effect on the efficiency of your cloud transformation.


The Cloud Project Lifecycle

Tenant Provisioning

This is what you will need to get started. Depending on the cloud platforms you work with, this activity corresponds to creating a cloud account in AWS, creating a namespace in Kubernetes or a space in Cloud Foundry.
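For Kubernetes, for instance, tenant provisioning boils down to creating a namespace. Here is a minimal sketch with the official Python client; the namespace name and labels are placeholders.

```python
from kubernetes import client, config

# Minimal sketch: provision a Kubernetes "tenant" by creating a namespace.
# Namespace name and labels are placeholders.
config.load_kube_config()  # or config.load_incluster_config() inside a cluster
core_v1 = client.CoreV1Api()

namespace = client.V1Namespace(
    metadata=client.V1ObjectMeta(
        name="team-checkout-dev",
        labels={"cost-center": "cc-4711", "environment": "dev"},
    )
)
core_v1.create_namespace(body=namespace)
```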

Tenant Configuration

Each tenant can be configured individually. Configuration can include setting up Single-Sign-On (SSO) integration, quota setting, security policy definition or VPN setups. Different platforms may offer different capabilities here.
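Staying with the Kubernetes example, quota setting can be applied right after provisioning by attaching a ResourceQuota to the namespace. The limits below are placeholders that would normally come from the chosen landing zone or use case.

```python
from kubernetes import client, config

# Minimal sketch: configure a freshly provisioned namespace with resource quotas.
# The limits are placeholders derived from the tenant's intended use case.
config.load_kube_config()
core_v1 = client.CoreV1Api()

quota = client.V1ResourceQuota(
    metadata=client.V1ObjectMeta(name="dev-baseline-quota"),
    spec=client.V1ResourceQuotaSpec(
        hard={
            "requests.cpu": "4",
            "requests.memory": "8Gi",
            "persistentvolumeclaims": "10",
        }
    ),
)
core_v1.create_namespaced_resource_quota(namespace="team-checkout-dev", body=quota)
```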

User Provisioning

Once your tenant is up and running you can invite users to start working in the platform. This includes managing their authorization and permissions.

Resource Provisioning

With a user account in the cloud, developers can start to spin up resources and deploy applications. Resource usage will be metered and checked against the configured policies. Data has to be managed according to the corresponding laws (GDPR).

User Deprovisioning

Employees change roles or leave companies. When they do, their access to the cloud has to be revoked as well.

Tenant Deprovisioning

At some point a cloud account may have to be torn down, revoking authorization and deleting resources.


How the cloud project lifecycle affects your organization

If you are facing a large cloud transformation process, you won't go through these activities only once or twice; you are operating at scale. There may be thousands of projects about to be migrated to the cloud. Tenants have to be re-configured continuously, users come and go and resources are provisioned multiple times per day. A lack of automation and standardized processes will result in immense inefficiencies, putting your cloud investments at risk. Just having cloud platforms available doesn't do the job. You'll need to integrate them into your organization and adapt your processes to the cloud-native way – a high level of automation to accelerate workflows and ensure consistency, self-service for developers to avoid dependencies and resulting bottlenecks, and transparency to provide the necessary control over your systems.

What you can do to leverage your cloud to the fullest

Back to the cloud project life cycle, you should ask yourself these two questions for each of those activities:

1.) How long does this take in my environment?
2.) How many people are involved?

If the answer is anything other than "less than a minute" and "max one person", there is still room for improvement. Cloud management portals can be a means to help you manage administrative processes. meshcloud offers a cloud management portal to unify organizational processes across different cloud technologies. Thereby the platform helps large companies to leverage their move to the cloud by significantly reducing the complexity of heterogeneous IT environments. It provides a self-service portal that facilitates configuration, user on- and offboarding and resource provisioning across clouds. Having this in place, introducing new technologies becomes a walk in the park. Find more information here.