AWS Landing Zone Comparison

Are you on your path to migrating your teams’ workloads to AWS, whether from on-premises or from another cloud provider? Then you have most probably stumbled upon the term “landing zone” as an important piece of the puzzle for a smoother cloud migration journey. If this term does not ring a bell, check out our blog post about the landing zone lifecycle. Simply put, a landing zone sets up your foundation in AWS according to your organization’s security and compliance requirements.

There is rarely just one landing zone solution to consider, which poses the challenge of choosing the right landing zone implementation for the right use case.

In this blog post we go through AWS’ ecosystem of landing zone solutions. The goal is that after reading, you will have a clearer picture of the different AWS landing zone solutions and can choose the one that best fits your organization.

ℹ️ Note that custom-built landing zone solutions are out of the scope of this blog post.
💡 If you are interested in Landing Zone solutions for Azure or GCP, please have a look at our comparison blog posts of these cloud platforms.

AWS Landing Zone Ecosystem

When starting your search for the best-fit landing zone option, you will most certainly come across different solutions. We will mainly describe the landing zone solutions that are provided and maintained, or recommended, by AWS: AWS Control Tower (with the plain account factory or Account Factory for Terraform) and Landing Zone Accelerator on AWS.

There are other solutions that we will not cover in the comparison, namely AWS Landing Zone and AWS Secure Environment Accelerator (SEA). The former is in long-term support, receives no new features, and is no longer recommended by AWS. The latter targets governmental and public-sector environments that require a high level of restrictions. We give a brief explanation of each solution below, but if you are interested in seeing these solutions compared as well, let us know in the comments ⬇️ .

AWS Control Tower (CT)

We recommend that all new landing zones start with AWS Control Tower. AWS Control Tower helps you build out an initial prescriptive landing zone configuration, use out-of-the-box guardrails and blueprints, and create new accounts using AWS Control Tower account factory.
-AWS Documentation

This is the first solution you will probably come across. AWS repeatedly recommends this approach going forward. Unless you already have your landing zone setup customized with older approaches, you should definitely try it out.

AWS Control Tower’s main functionality is to set up a basic landing zone for you using multiple AWS services behind the scenes, such as AWS Organizations and AWS CloudFormation StackSets. It is pretty easy to set up, as AWS provides a step-by-step walkthrough (i.e. a wizard). In the wizard, you first choose the home region of your Control Tower and optionally deny services in regions you specify. Then you configure organizational units (OUs), shared accounts, CloudTrail (e.g. log retention periods) and encryption. Eventually, Control Tower creates the following:

  1. Two Organizational Units (OUs)
    Security: This OU contains shared accounts (log archive and audit accounts).
    Sandbox: This OU will be empty, to be filled with user provisioned accounts. Although its name indicates an environment with loose restrictions, by default it includes similar guardrails as all other OUs.
  2. Three shared accounts: the management account (note that this account already exists; it is the account from which you set up CT) as well as the already mentioned accounts under the Security OU (log archive and audit accounts).
  3. A native cloud directory with preconfigured groups and single sign-on access.
  4. 20 preventive guardrails to enforce policies (using Service Control Policy (SCP)) and 3 detective guardrails to detect configuration violations (using AWS Config).
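To make the preventive guardrails more tangible: each one is ultimately a Service Control Policy attached to an OU. The following is an illustrative sketch of what such an SCP can look like (it is not the exact policy Control Tower deploys), denying tampering with CloudTrail similar to the mandatory guardrail that disallows changes to the CT-managed trail:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyCloudTrailTampering",
      "Effect": "Deny",
      "Action": [
        "cloudtrail:DeleteTrail",
        "cloudtrail:StopLogging",
        "cloudtrail:UpdateTrail"
      ],
      "Resource": "*"
    }
  ]
}
```

Because SCPs act as permission boundaries, even account administrators in member accounts cannot perform the denied actions.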

AWS Control Tower also offers an account factory where you can manually provision accounts or enroll existing accounts. These accounts are then governed by Control Tower. AWS also offers a framework around the account factory to enable automation using a GitOps workflow, which we explain in more detail below.

AWS Control Tower Account Factory for Terraform (AFT)
AFT is built on top of AWS Control Tower. It provides automation around account provisioning and further customizations using a GitOps approach.

AFT can be set up after Control Tower, mainly in two parts: manually creating the AFT management account and then applying the AFT Terraform module. After applying the module, there will be 4 empty repositories (aft-account-customizations, aft-account-provisioning-customizations, aft-account-request, aft-global-customizations) in your configured VCS (e.g. CodeCommit, GitHub, etc.) that you need to populate.
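As a rough orientation, deploying AFT boils down to a single Terraform module call, similar to the sketch below. The account IDs and regions are placeholder assumptions; check the AFT documentation for the full variable list:

```hcl
module "aft" {
  source = "aws-ia/control_tower_account_factory/aws"

  # Hypothetical account IDs -- replace with your own
  ct_management_account_id  = "111111111111"
  log_archive_account_id    = "222222222222"
  audit_account_id          = "333333333333"
  aft_management_account_id = "444444444444"

  ct_home_region              = "eu-central-1"
  tf_backend_secondary_region = "eu-west-1"

  # Optional feature flags mentioned later in this post
  aft_feature_cloudtrail_data_events = false
  aft_vpc_endpoints                  = false
}
```

Applying this module from the Control Tower management account bootstraps the AFT pipeline infrastructure into the AFT management account.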

As shown in the “AFT Deployment Module” diagram above, AFT performs multiple automated steps to provision and modify accounts, as well as create custom resources. It does so by providing an interface to users (e.g. developers). For example, a user can create a commit to the aft-account-request repository (directly or through a pull-request based flow) with a file that defines the account request. The commit then triggers the configured workflow to provision an AWS account governed by CT.
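For illustration, an account request in the aft-account-request repository is itself a small Terraform module call. The values below are hypothetical:

```hcl
module "sandbox_account_01" {
  source = "./modules/aft-account-request"

  control_tower_parameters = {
    AccountEmail              = "aws-sandbox-01@example.com"
    AccountName               = "sandbox-01"
    ManagedOrganizationalUnit = "Sandbox"
    SSOUserEmail              = "jane.doe@example.com"
    SSOUserFirstName          = "Jane"
    SSOUserLastName           = "Doe"
  }

  # Optional tags applied to the provisioned account
  account_tags = {
    "team"        = "platform"
    "environment" = "sandbox"
  }

  change_management_parameters = {
    change_requested_by = "Jane Doe"
    change_reason       = "Create sandbox account"
  }
}
```

Merging such a file into the repository is all it takes; the AFT pipeline picks up the commit and provisions the account into the specified OU.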

AWS provides various resources to get more familiar with AFT. For information on broad concepts and some technical details, we recommend you check AWS AFT documentation. For a deep dive technical guidance, we recommend you go through AWS Control Tower Immersion / Activation Day - AFT workshop.

Landing Zone Accelerator on AWS

Landing Zone Accelerator on AWS is the latest approach by AWS to provide a solution for a secure Landing Zone based on best practices.

It is recommended to use it together with AWS Control Tower, but it also works in regions that don’t support Control Tower. In that case, it uses AWS Organizations directly and creates the intended OUs with the according policies applied. It also works with AWS GovCloud (US), which is not supported by AWS Control Tower. So it is a solution that can be used by every AWS customer.

The Accelerator is executed entirely via AWS-native services. An AWS CloudFormation stack is used to roll out a CodePipeline that creates and updates the landing zone according to a configuration that can be customized to your needs.

AWS Landing Zone Accelerator

The CodePipeline is the central component. It does the initial rollout of the landing zone and also enables continuous integration and deployment. All that is needed to roll out changes to the landing zone is a commit to the config code repository with the config changes you would like to apply.
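For example, adding a new workload account is just such a config change. The sketch below shows roughly what an entry in the Accelerator's accounts-config.yaml looks like; the names and OU are our assumptions, so consult the Accelerator documentation for the exact schema:

```yaml
workloadAccounts:
  - name: Workload1
    description: First workload account for team A
    email: aws-workload1@example.com   # must be a unique, unused email address
    organizationalUnit: Infrastructure
```

Committing this change to the config repository triggers the pipeline, which provisions the account and moves it into the configured OU.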

Rolling it out will actually result in the following environment.

Resource Hierarchy AWS Landing Zone Accelerator

With this Resource Hierarchy responsibilities are clearly segregated into different OUs and accounts.

💡 If you try to roll out the Accelerator, please use this page from the AWS documentation for a step-by-step guide including preconditions. Without it you will easily miss some important steps.

When using the Accelerator in combination with AWS Control Tower, make sure that you enter the email addresses of the management, audit and logging accounts that were already created by Control Tower, as the Landing Zone Accelerator builds on top of them. Otherwise you will run into an issue in the CodePipeline prepare stage.

AWS Landing Zone

AWS Landing Zone is currently in long-term support and will not receive any additional features. Customers who want to set up a new landing zone should review AWS Control Tower and Customizations for AWS Control Tower.
-AWS Documentation

This is the old approach to building an AWS landing zone, based on AWS Organizations. One of its key components is the Account Vending Machine (AVM), a Service Catalog product that enables preconfigured account provisioning. It creates a multi-account structure: a security, a logging, and a shared-services account under a “Core OU”.

At the time of writing, searching for AWS Landing Zone in the AWS documentation does not provide much information, and you will be redirected to AWS Control Tower on many occasions.

If you are already using this solution and are actually happy with it, let us know in the comments below!

AWS Secure Environment Accelerator (ASEA)

This solution has been around since 2020 and was initially built with a focus on the Government of Canada. The main target of this project is environments that require a high level of security and restriction, such as governments and public-sector organizations. It offers an opinionated and prescriptive architecture for your AWS multi-account setup, while also making it possible to customize as needed. It provides sample configurations that will build you an end-to-end landing zone.

There is extensive AWS secure environment accelerator documentation that will lay out details on the solution including workshops for both administrators and project teams.

If you are already using this solution and are actually happy with it, let us know in the comments below!

Comparing the Options

For this comparison, we assign 1 to 3 ⭐ for different aspects that we look at for these options. If a certain aspect is not covered at all we assign a ❌. Additionally, you can find some details about why we rated an aspect. These details also provide a deeper insight into how these aspects are covered within the options.

Feature Comparison

The feature comparison is based on the Cloud Foundation Maturity Model (CFMM). You can read details of what these blocks mean on the CFMM website. You can also follow the links in every row that redirect you directly to the details of the according CFMM block.

AWS Control Tower with Account Factory AWS Control Tower with AFT AWS Landing Zone Accelerator
Resource Hierarchy ⭐⭐ Creates 2 organizational units (OUs), Security and Sandbox. It also creates 3 shared accounts: a standalone management account (not belonging to an OU), and the log archive and security audit accounts in the Security OU. The Sandbox OU remains empty and is meant to contain the newly provisioned accounts. You can create new OUs based on your desired structure; they will be governed by CT. AWS also offers a sample hierarchy with additional OUs that you can manually include. ⭐⭐⭐ In addition to what CT provides, AFT offers a GitOps approach to provision accounts under specified OUs through an automated workflow. ⭐⭐⭐ By default it creates 2 OUs, Infrastructure and Security. You can easily define more OUs in the config. Another great feature it provides is the option to use a Quarantine OU where almost everything is forbidden. It is used for newly created accounts that are still under construction by the accelerator. That way unintended access or changes during creation can be avoided.
Resource Policies ⭐⭐⭐ AWS Control Tower sets up 20 preventive guardrails (implemented by SCPs) and 3 detective guardrails (implemented by AWS Config). These are the “Mandatory” guardrails enabled by default, which we consider reasonable, such as disallowing deletion of the log archive bucket, disallowing changes to AWS IAM roles set up by CT and CloudFormation, etc. There are also more guardrails available that are not enabled by default, namely the “Elective” and “Strongly Recommended” guardrails. Newly provisioned accounts with Account Factory inherit guardrails from their parent OU. ⭐⭐⭐ Accounts provisioned through AFT also inherit all guardrails from their parent OU, which is covered by CT. Additional policies can be applied through the AFT customization framework. ⭐⭐⭐ Reasonable guardrails are applied to the default OUs Infrastructure, Security and Quarantine. It is easy to define additional guardrails or to define them for additional, custom OUs.
Centralized Audit Logs ⭐⭐⭐ Enables CloudTrail service to aggregate action and event logs into the shared Log Archive account from different AWS accounts and records them in CloudWatch. With CT release ≥ 3.0, an organization trail is created to gather event logs on an organization level instead of a member trail that gathers logs on a per account basis. ⭐⭐⭐ With AFT, you can also enable CloudTrail data events. Data events are events performed on or within a resource. These are disabled by default as they often incur high costs. This is done when deploying AFT by setting aft_feature_cloudtrail_data_events to true. ⭐⭐⭐ Uses the Log Archive Account to store the logs. It can be configured easily which logs shall be included.
Cloud Tenant Tagging ❌ No AWS account is tagged when setting up CT. ⭐ By default, provisioned accounts are not tagged. It is possible to tag those accounts through the Terraform variable account_tags. ❌ Only tag policies on the org level can be defined. They only ensure that if a certain tag is set, it complies with the definition in the policy; resources with no tags at all are ignored. During account creation, no tags can be set. So the tagging of resources must be done outside of the accelerator.
Tenant Provisioning/Deprovisioning ❌ CT only performs the initial provisioning of your AWS accounts. This does not qualify as tenant provisioning/deprovisioning, which is partly covered by the account factory. ⭐⭐ AFT’s main functionality is automation around tenant provisioning. Deprovisioning is not supported; removing the aft-account-request Terraform for a specific account only removes it from AFT management. ⭐ Workload accounts can be defined in the accounts-config.yaml. Only very basic account information can be entered here; tags, service accounts, etc. cannot be defined for workload accounts.
Playground/Sandbox Environments ❌ Creates a Sandbox OU. However, it also has the same guardrails as the Security OU. ❌ No predefined Sandbox OU is available
Privileged Access Management (PAM) ⭐⭐⭐ Uses the IAM Identity Center service to offer preconfigured groups. You can then add users to those groups based on their role in the organization. ⭐⭐⭐ By default, the user email set in the account request is assigned AdministratorAccess to the account. Additionally, groups created with CT are assigned specific permissions to that account: the AWSSecurityAuditPowerUsers group is assigned AWSPowerUserAccess, the AWSControlTowerAdmins group is assigned AWSOrganizationsFullAccess, and the AWSSecurityAuditors group is assigned AWSReadOnlyAccess. ⭐⭐⭐ By default it defines only an Administrator group. But as you can base it on AWS Control Tower, you can make use of the advanced default roles created by AWS Control Tower.
Service Account Management
Virtual Networks ⭐⭐⭐ CT creates a default VPC in the management account with 3 subnets per availability zone. You can edit VPC configurations for newly provisioned accounts with the account factory. For example, you can allow internet access for created subnets, which will lead to the creation of a NAT Gateway. When provisioning a new account, CT automatically deletes the default VPC and creates a new VPC configured by CT. ⭐⭐⭐ Similar to the normal account factory, with the addition of the AFT customization framework that allows you to include additional Terraform resources. ⭐⭐⭐ It provides a reasonable default network setup in the network-config.yaml. Additionally, the default config contains a lot of additional options as commented code. You just have to activate them and adapt them to your needs.
On-Prem Connect ❌ CT workflow does not offer on-prem connection out of the box. It can however be implemented using Transit Gateways and Site-to-site VPN. AWS Control Tower workshop provides a detailed guide on interconnecting control tower accounts VPCs as well as connecting them to on-prem. ❌ While it is not supported out of the box by e.g. switching on a flag, AFT customization framework enables administrators to include terraform resources as they see fit. There are a bunch of “Site-to-Site VPN” terraform resources available that can be leveraged. ⭐⭐ It provides a TransitGateway in the network config to connect easily to a hub that makes the on-prem connect. As also directConnectGateways can be defined in the network config, everything that is needed to establish an on-prem connection is available.
Cloud Interconnect ❌ Similar to on-prem connect, CT does not offer cloud interconnect out of the box, but can be implemented as an “on-prem” connection. Additionally, AWS Control Tower workshop offers guidance on interconnecting AWS accounts VPCs. ❌ Same as for CT itself applies. ⭐⭐ In the network config directConnectGateways can be defined to connect to networks at other cloud providers.
Managed Key Vault
Automation in CI/CD ❌ Runs automation under the hood, not configurable by the user. ⭐⭐⭐ Automation is the core of AFT. It sets up a pipeline that is triggered by a code change in a version control system (VCS). It uses AWS CodeCommit natively, but also supports alternatives such as GitHub and BitBucket. ⭐⭐ The whole Accelerator framework is based on AWS CodeBuild and CodePipeline. Any updates to the config are rolled out via that pipeline. The only downside is that you can only use AWS services for it; integrating it into any other CI/CD tool is not possible.

Non-Functional Comparison

AWS Control Tower with Account Factory AWS Control Tower with AFT AWS Landing Zone Accelerator
Extensibility/Modularity/Scalability ⭐⭐ CT only provides the baseline setup. It can be extended with “Customizations for Control Tower” (CfCT). You can also create your own OU structure, to which CT will also apply guardrails. ⭐⭐⭐ In addition to what can be extended and customized in CT, AFT provides a customization framework that allows users to include AWS resources with Terraform. ⭐⭐ You can customize the config rather easily to your needs, but it does not seem possible to easily add completely new modules for other services to the Accelerator. As it uses a DSL to define all resources via its config, you are pretty much locked into the config options provided by the accelerator. The config does cover quite a lot of services (35+ services are mentioned in the AWS docs) and seems sufficient for many use cases. But if you want to integrate an AWS service into your landing zone that is not yet supported by the accelerator, it looks like you have to create a pull request to the accelerator to get it in.
Terraform ⭐⭐ As its name suggests, it offers a workflow based on terraform. It requires a bit of ClickOps initially to create the AFT management account. ❌ It is completely based on AWS CloudFormation.
Learning Curve ⭐⭐⭐ It is easy to start with CT and set it up. Advanced AWS knowledge is not required. ⭐⭐ There are many AWS services involved in the workflow, making it a bit harder to grasp. Also, AWS account provisioning is not directly performed via Terraform; instead, an account request is created which triggers an account provisioning process. ⭐⭐ Except for some current issues, which should be related to the accelerator being quite new on the market, rolling it out is rather easy. You only have to deploy a StackSet and the rest of the bootstrapping is done automatically for you. As a next step you set your desired configuration in 6 YAML files that are rather easy to understand. You also get a good reference implementation based on best practices from which you can copy and adapt your own configuration.
Community ⭐ Scattered community across multiple platforms. There doesn’t seem to be a specific place to ask questions, but you can find some answers on different platforms such as Stack Overflow. ⭐⭐ In addition to the different community platforms, you can open a GitHub issue in the AFT repo in case you encounter problems. AFT maintainers seem to be responsive to issues, but they still don’t accept contributions at the time of writing. ⭐ As it is a rather new solution, you cannot find a lot around it yet. The GitHub repo only has 75 stars and 12 contributors so far, and only a few issues exist. The responsiveness of the developers is rather low at the moment: at least 2 tickets we created or commented on did not get any response from official contributors within more than a week. Even though the community around it is still rather small, an advantage is that AWS Support fully covers it.
Costs ⭐⭐⭐ There are costs associated with the services that CT uses, such as CloudTrail and S3 buckets. The resources it creates can incur costs depending on usage; for example, the more accounts created, the more logs are ingested into the log archive account. AWS has a pricing page with details on different scenarios and the associated costs. As another example, in the account factory settings you can allow internet access for created subnets in the new account, which will lead to the creation of a NAT Gateway and drive some cost. ⭐⭐ There are no additional costs for AFT itself, but the resources it creates can incur costs. Transparency in AFT is limited, and you need to figure out which options to enable or disable to control which resources are created. For example, AFT will by default create AWS PrivateLinks; to disable that, you need to set aft_vpc_endpoints=false when deploying the AFT Terraform module. ⭐ AWS is very transparent about the costs associated with running the best-practice landing zone. At the time of writing this article it is ±360€ per month. If you are not using all resources that are configured in the default landing zone, you can save a bit of money. The main cost drivers are KMS and networking.
Time To Success ⭐⭐⭐ Setting up AWS Control Tower for the first time takes 1-2 hours, which includes going through “set up your landing zone”, which takes roughly 15-30 minutes. This assumes that you do not have an existing setup that you need to decommission, as that will increase the time to success. ⭐⭐ Setting up AFT technically takes around 1 hour, which includes creating the AFT management account and deploying the AFT Terraform module. Post-steps like populating your repositories for automation are rather quick. All in all, assuming the process goes smoothly, setting up CT with AFT takes ~4 hours. ⭐⭐ The concept of the AWS accelerator is quite promising for setting up a secure landing zone rather quickly, due to the best-practice defaults and the easy-to-understand configuration. But currently there is the downside that if something goes wrong while rolling out the pipeline, you might receive really bad error messages that don’t help you resolve the issue. This can delay the time to success significantly. These issues should vanish over time as the accelerator becomes more mature.

Find your way to building AWS Landing Zones

Building your landing zone in AWS implies that it will be built on top of either AWS Organizations directly or AWS Control Tower to enable your multi-account setup. This choice also impacts which landing zone solution to pursue. AWS Organizations is merely a tool that enables a multi-account setup but does not actually build a landing zone, while AWS Control Tower provides an abstraction over multiple AWS services, including AWS Organizations, and builds a basic landing zone that you can extend. The simplified flow chart below conveys the relation between the different solutions.

AWS Landing-Zone Solution Comparison

Unless you are using a region that is not supported by AWS Control Tower, or you need absolute control over all the components of your landing zone, you should go with AWS Control Tower. Most solutions nowadays support it, and it is highly recommended by AWS. Beyond the guardrails it provides, however, it does not add much on its own and can be considered a very basic landing zone when used solo. Therefore, if you are only interested in a basic setup that provides a multi-account framework with a set of preventive and detective guardrails, AWS Control Tower alone will do.

From there you can choose to stay with the plain Account Factory, in which case you pretty much only benefit from CT’s guardrails when provisioning accounts, or go for Account Factory for Terraform, which additionally provides a GitOps-based framework to automatically provision accounts with customization capabilities.

If you’re interested in an end-to-end landing zone solution, then consider Landing Zone Accelerator on AWS. It is a solution that promises to build full-fledged AWS landing zones with automation capabilities, and it supports both AWS Control Tower and AWS Organizations. The solution was first released on 23 May 2022, so it is fairly new. We found that the product itself and its documentation are not yet battle-proven, and you can run into some unexpected issues when trying to roll it out. As of this writing, we stumbled across a blocker that prevented us from actually deploying it, and therefore we cannot provide a full recommendation yet. We like the simplicity and customizability of the config files that steer the Accelerator and the understandable pipeline that deploys it. But we are wondering whether more extensive customizations that go beyond the services and features supported by the Accelerator’s configuration files are feasible.

The different solutions presented in this blog post are actually not really alternatives that compete against each other. Instead they combine and extend each other very well. For a full-fledged Landing Zone we therefore recommend the combination of AWS Control Tower with AFT for automated account provisioning and Landing Zone Accelerator on AWS to add further highly recommended features to your Landing Zone.

Best-Practice AWS Landing Zone Solution Diagram

If the concerns we mentioned regarding the production-readiness of AWS Landing Zone Accelerator hold you back, or if you need full control over your landing zone, then go for a custom-built landing zone, either via an AWS partner or with your own AWS team.

Did this blog post help you decide which landing zone solution you will apply to AWS? Have you already gathered your own insights building an AWS landing zone and want to back up, add to, or discuss insights from this blog post? Please leave a comment! We look forward to hearing about your experience.

Google Cloud Landing Zone Comparison

When researching how to set up your Landing Zone in GCP you come across different options provided by Google. In this blog post, we compare several key aspects of the different solutions. We had a deep look into these solutions and rolled them out in a GCP Test Organization to evaluate how they perform. After reading this blog post you will understand what the different options are, which features they provide, how easy it is to use them and which option fits best to your use-case.

Your Options to Create Google Landing Zones

Google Cloud setup checklist

In the GCP web console, you can easily set up a basic cloud foundation via a wizard called “Set up your Foundation”. It is based on the Cloud setup checklist by Google. It covers the most relevant topics that should be considered for a secure and scalable cloud foundation.

It does not apply most of the configuration directly via the web console but generates Terraform modules. The wizard is thus a nice and handy frontend for configuring Terraform. You are guided very well through the different things that will be applied and how they are configured.

Google Cloud Checklist

Cloud Foundation Toolkit - Example Foundation

Based on the Cloud Foundation Toolkit, Google provides a Terraform Example Foundation. It can be used as a basis and customized to individual needs. It already applies best practices and reasonable defaults that should fit many companies.

Terraform modules are structured into different stages (bootstrap, org & resource hierarchy, environments, networking, projects, and app infrastructure). After bootstrapping the basic structure and projects by executing Terraform, it provides the option to roll out the remaining stages via CI/CD. Google Cloud Build and Jenkins are supported for this. But you still have to prepare the different stages by executing several commands manually before the deployment is triggered via CI/CD. For execution via CI/CD, it uses one Google Cloud Git repository per stage.


💡 We did not complete rolling out the Example Foundation, as we ran into permission issues we couldn’t figure out. Since we came to the conclusion that Fabric FAST is the more mature and future-proof approach, we decided not to dig deeper into rolling out the Example Foundation. Details can be found in the next sections.

Cloud Foundation Fabric FAST

Cloud Foundation Fabric is the latest approach by Google to provide a landing zone solution based on the Cloud Foundation Toolkit that is easier to apply, extend and roll out than the Terraform Example Foundation mentioned before. The result of this is the production-ready blueprint for a secure organization called Fabric FAST. It is a set of Terraform modules structured into stages (bootstrap, cicd, resource hierarchy, security, network, project factory, data platform).

Fabric FAST comes from engineers in Google Cloud's Professional Services Organization, with a combined experience of decades solving the typical technical problems faced by GCP customers.

Beyond setting up and configuring a secure Organization and Resource Hierarchy it also provides a Project Factory which enables a GitOps Workflow to provision new projects.

Initially, you need a Super Admin for Google Workspace to set up some groups and assign people to these groups. Afterward, everything is done via Terraform.

Stages of Fabric FAST

Comparing GCP Landing Zone Options

For this comparison, we assign 1 to 3 ⭐ for different aspects that we look at for the three options. If a certain aspect is not covered at all we assign a ❌. Additionally, you can find some details about why we rated an aspect. These details also provide a deeper insight into how these aspects are covered within the options.

GCP Landing Zone Feature Comparison

The feature comparison is based on the Cloud Foundation Maturity Model (CFMM). You can read details of what these blocks mean on the CFMM Website. You can also follow the links in every row that redirect you directly to the details of the according CFMM block.

Checklist Fabric FAST Example Foundation
Resource Hierarchy ⭐⭐ You can pick from 4 different hierarchies, which makes it quite easy to find one that matches your needs. Even more custom hierarchies are possible by adapting the generated Terraform code in the end. ⭐⭐ A hierarchy that fits many companies is applied by default. It is divided into common branches like networking or security, and a “Teams” branch contains all end-user projects. It can be easily customized via input variables (also see Tenant Provisioning below). Applying a completely different structure requires adapting the Terraform files. For other hierarchy structures, you can also use blueprints provided outside the context of the FAST stages; see “example foundations”. ⭐ The example foundation only rolls out one specific resource hierarchy, which is separated by environment (dev, non-prod, prod). Custom hierarchies have to be implemented on your own.
Resource Policies ⭐ 3 basic policies can be applied manually via an optional step in the wizard, they are not part of the generated Terraform code. ⭐⭐⭐ Reasonable defaults are set and additional helpful policies are proposed via code comments in the terraform files. They can be enabled on demand. ⭐⭐⭐ A good amount of reasonable default policies are rolled out via Terraform. They match the best practices provided by GCPs Cloud Foundation Toolkit.
Centralized Audit Logs ⭐⭐ Basic centralized logging is configured by default. You have to follow manual steps to push logs to BigQuery, SIEM, etc. ⭐⭐⭐ By default, it captures logs from GCP’s Cloud Audit and VPC Service Control violations. It can be further customized to capture even more logs by configuring so-called log sinks. That way you can e.g. push logs to BigQuery or a SIEM solution. ⭐⭐⭐ Centralized Audit Logs are applied and you can configure exports to BigQuery, pub/sub systems, and more.
Cloud Tenant Tagging ⭐⭐ Tenants can be tagged via the project’s YAML file of the project factory. Defaults that shall be applied to all projects can also be defined. Tags cannot be defined at the Team level yet. ⭐ It sets some default labels like environment, security contact, etc for all projects. But adding any custom tags requires modification of the terraform files.
Tenant Provisioning/Deprovisioning ⭐⭐ With the project factory, GitOps-based tenant provisioning can be applied. As the project configs should be reviewed by a cloud foundation member, it is not full self-service for the end users. When the project definition is removed again, the tenant will be deprovisioned by Terraform. ⭐ With stage 4-projects you can define projects, but you have to touch Terraform files to create or change projects. This feels more like coding rather than the configuration approach offered by Fabric FAST. So this approach of modifying Terraform does not seem to result in the best GitOps flow for managing projects.
Playground/Sandbox Environments ⭐⭐ It provides a dedicated folder for Sandbox environments. This folder has different and softer org policies applied. That allows for quicker evaluation as more complex, but secure policies don’t have to be applied here. Sandbox projects can be created via the Project Factory just like normal projects. They are just put into the Sandbox folder. Automatic expiration of Sandbox environments is not part of Fabric FAST. It has to be implemented in the GitOps flow around it. ❌ You could consider adding “Sandbox” as an additional environment with some special policies applied, but the example foundation does not provide any guidance for this.
Privileged Access Management (PAM) ⭐⭐ Leverages the use of groups instead of directly assigning roles to users. The principle of least privilege is applied by assigning only necessary roles to each group. ⭐⭐⭐ Leverages the use of groups instead of directly assigning roles to users. The principle of least privilege is applied by assigning only necessary roles to each group. Furthermore, service accounts are created for automation that can be impersonated by selected groups. ⭐⭐⭐ Leverages the use of groups instead of directly assigning roles to users. The principle of least privilege is applied by assigning necessary roles to each group. Furthermore, service accounts are created for automation that can be impersonated by selected groups.
Service Account Management ⭐⭐⭐ With the project factory, project configuration can also include service accounts provisioning alongside their permissions. By default, iam.disableServiceAccountKeyCreation organization policy is enforced on organization-level. This is a best practice that makes use of Workload Identity Federation (WIF) as an alternative to key creation. ⭐ I couldn’t find out for sure, but one service account seems to be created per project. This SA can then be impersonated by the Cloud Build SA to perform CI/CD tasks within the project. By default, iam.disableServiceAccountKeyCreation organization policy is enforced on organization-level. This is a best practice that makes use of Workload Identity Federation (WIF) as an alternative to key creation.
Virtual Networks ⭐ Separate networks per environment are created and some basic firewall rules are applied. That way you have connectivity within one environment (e.g. all production services can talk to each other securely via a VPC). Advanced options like configuring peering or VPN approaches are not provided. ⭐⭐⭐ Offers sophisticated virtual network setups based on the “hub and spoke” design. One can choose the type of connectivity between the hub and spokes: VPC Peering, Network Virtual Appliances (NVA), or VPN. ⭐⭐⭐ Offers sophisticated virtual network setups. One can choose the type of connectivity between Dual SVPC or Hub & Spoke.
On-Prem Connect ⭐⭐⭐ On-prem VPN is offered with all 3 setups of the networking stages. ⭐⭐⭐ On-Prem connectivity is provided in 3 different ways for all network setups mentioned above.
Cloud Interconnect The network documentation only mentions that Cloud Interconnects should be set up similarly to the HA VPN setup. So it seems like the basis is there, but no specific support for setting up a concrete interconnect is provided. ⭐⭐⭐ It supports Direct and Partner Interconnect.
Managed Key Vault ⭐⭐⭐ A Cloud KMS is rolled out to every environment so e.g. all production services have a way to reliably and securely share secret keys. ❌ One of the sample projects creates a KMS, but only within the project.
Automation in CI/CD ⭐⭐ Supports automation with GitHub Actions, GitLab, and Source Repo. An in-depth look into the Terraform code might be needed to get it to work. While it provides a great benefit, it could use more directed documentation. You can add this automation whenever you like: even if you applied Terraform manually for some time, you can still add CI/CD later on. ⭐ Cloud Build or Jenkins can be used to roll out the foundation. Additionally, Google Cloud Build seems to be set up for all end-user projects, so they can also quickly start with CI/CD. With Cloud Build and Jenkins, only 2 established CI/CD solutions are provided; modern CI/CD tools such as GitHub Actions or GitLab are not supported. Documentation for rolling out the modules is heavily centered around using a CI/CD tool, so you are pushed to use it right from the beginning instead of growing your solution toward CI/CD over time to keep complexity as low as possible at the start.
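As an aside to the iam.disableServiceAccountKeyCreation rows above: if you want to verify which org policies a landing zone actually applied, you can inspect the effective policy with gcloud (the organization ID below is a placeholder):

```shell
# show the effective org policy for service account key creation
# (123456789012 is a placeholder for your organization ID)
gcloud resource-manager org-policies describe \
    constraints/iam.disableServiceAccountKeyCreation \
    --organization 123456789012 --effective
```

The same command works for any other constraint, which makes it easy to compare what the different solutions roll out.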

GCP Landing Zone Non-Functional Comparison

Checklist Fabric FAST Example Foundation
Extensibility/Modularity/Scalability ⭐ The checklist generates Terraform code to roll out the resource hierarchy, networking, audit logging, and a basic IAM approach. Sadly org policies are not applied via Terraform. In general, you can extend those easy-to-understand templates with whatever you need. But they don’t provide any sophisticated structure or approaches that help you scale your foundation to something big. ⭐⭐⭐ It contains several ready-made terraform modules that the different stages utilize. Each stage has outputs that are used as input variables for the next stage. It can therefore be extendable as long as a “contract” regarding input and output variables is followed. tfvars files are created and uploaded to GCP buckets that can be accessed by different stages. ⭐⭐ Terraform code is structured into different stages. So this Terraform Module structure supports scaling. The generated default projects might be a bit cumbersome to be replaced by the projects you really want to have. In general, the example foundation is not as configurable as Fabric FAST and e.g. relies quite heavily on a folder structure based on the different environments.
Terraform ⭐ Terraform code is actually generated for most of the configs, but e.g. org policies are not part of that Terraform code and have to be applied manually in a guided dialog in the GCP web console. Additionally, the Terraform modules are very basic and don’t provide a structure that is ready for scaling to many more modules in the future. ⭐⭐⭐ A sophisticated structure for the Terraform modules is applied. Especially structuring them into different stages makes it scalable. Additionally, transferring input from one module to the next is handled in a reasonable way. Using Terragrunt might enhance this transfer even more. ⭐⭐ It has a structured and sophisticated Terraform setup, but transferring data between the steps must be done completely manually. A more automated approach like Terragrunt is not being used here.
Learning Curve ⭐⭐⭐ The learning curve is very low. You simply follow the wizard and everything you need to know is explained to a sufficient degree on the wizard pages. That way you can easily and quickly understand what it does and how to configure it. ⭐ Documentation guides through the stages, but several questions and details remain open and you have to look into the TF modules to understand details or how to customize certain areas. Especially the way IAM and PAM are handled is not documented well and is not easy to understand. Also, parameter documentation is very basic and is therefore not always helpful. ⭐ Documentation in the READMEs for the different stages is rather short and only explains the most important things. Details have to be read up in the general Google CFT docs. But it is also quite hard to understand what the modules are doing while applying them, as the steps you have to do manually are not providing you insight into the modules. It’s just some git, copy, etc commands you have to execute step by step for every stage.
Community ⭐ As the checklist is well guided by the wizard and in general creates easily understandable Terraform modules, there is not really a need for a community around it. ⭐ It is currently really hard to find resources on Fabric FAST besides the documentation by Google. Looking at the contributors and the activity on the repo, it is quite active: there are around 70 internal and external contributors to the repository. We expect the community to grow in the future. ⭐⭐ It seems to be the most commonly used solution for GCP at the moment. You can find several blog posts about it and more.
Time To Success ⭐⭐⭐ If you have a user who is Super Admin and Org Admin at hand you can really quickly set up your landing zone. Within a few hours you have everything configured and applied, and you understand what it does. ⭐⭐ Considering the quite big scope of Fabric FAST, you can also achieve your goal of a landing zone rather quickly. It will most likely take you 1-2 days to go through all stages, apply them according to your needs, and roughly understand what is deployed. Understanding the Landing Zone you built here more deeply requires a few more days. ⭐⭐ Google says you can set up your foundation within a day. Looking at the problems we had when trying to roll it out, we think it takes longer. The process is also quite error-prone as you have to execute so many commands manually. Understanding more deeply how the Landing Zone you applied here behaves exactly requires additional effort.

Which GCP Landing Zone Should I Pick?

The Terraform Example Foundation from the CFT should not be used anymore, as Fabric FAST supersedes it and provides a way cleaner and more usable approach. With the Example Foundation you have to execute more than 20 commands per stage manually (there are 6 stages in total). We also heard from one of Google's consultants that Fabric FAST is the direction they want to go in the future. When we tried to roll out the Example Foundation, we noticed that it is way more painful to use than Fabric FAST and feels way more hacky.

The Google Cloud Setup Checklist is a nice and small solution that fits well for smaller companies or companies just wanting to get started quickly with their Cloud Journey in a reasonably secure environment. It also allows growing your foundation step-by-step by extending the generated Terraform modules to your needs.

But if you have a really big cloud foundation in mind, the Terraform modules provided by the Cloud Setup Checklist might not suffice to scale to something large. It also doesn’t contain solutions for advanced problems many companies face. If you want to grow your Cloud Foundation Landing Zone, you should consider starting with Fabric FAST right from the beginning, as it applies a sophisticated IAM and PAM concept and a scalable structure for organizing your Terraform modules into stages. It also provides more features that make sense for many companies.

Starting with the Google Cloud Setup Checklist first, and adopting Fabric FAST later on is also a feasible option. Both use the same user groups and adopting the resource hierarchy, networking, and audit logging should also be possible as they don’t differ drastically.

Did this blog post help you decide which Landing Zone solution you will apply to your GCP org? If you have already gathered your own insights into building a GCP Landing Zone and want to back up, add to, or discuss insights from this blog post, please leave a comment! We look forward to hearing about your experience.

Landing zone Heroes

Mastering Landing Zones Like a Superhero

How to build a Landing Zone in Azure with our construction-kit

All beginnings are difficult. This is particularly true for your first cloud project. The sheer amount of settings required to provide a safe, compliant, and productive environment for your organization is almost overwhelming.

To turn your cloud journey from “zero to hero”, we equip you with our cloud superpowers: our Landing Zone construction-kit enables Enterprise Architects and Platform Engineers to deploy a Landing Zone from scratch.

Exploit the full potential of the cloud

In this blog post we want to show you how you can build a ready to use Landing Zone with our Landing Zone construction-kit and become the hero your company needs!

But before we get into the demo, let’s see how Landing Zones help your organization get started on its cloud journey and how you can exploit the full potential of the cloud.

By enabling the DevOps teams in your company to build scalable applications without compromising security or compliance, Landing Zones are perfect for any greenfield cloud project.

Landing zone Heroes

Benefits of using Landing Zones

  • Ensuring security & compliance
  • Preventing misconfiguration of cloud environments
  • Supporting cloud-native services
  • Saving platform engineering resources
  • Ruling the huge surface area of cloud services

Do you really need it? Test yourself!

If you're not convinced whether Landing Zones are what you need, then you might check if you find yourself in one of the following scenarios:

  • If you are uncertain what a good base security configuration is…
  • If your application teams are hesitant when it comes to the cloud, as they are not familiar with the services…
  • If you lack the resources to implement automation and security assets...

… then Landing Zones are what you and your organization need. Now, after seeing what superpowers our Landing Zone construction-kit holds for you, let’s get down to business and create a ready to use Landing Zone in Azure.

How to: Building a Landing Zone in Azure

meshcloud has developed a lightning fast way to build an out-of-the-box Landing Zone with a Landing Zone-Kit. Keep on reading or check our video to see how to get a working Landing Zone in Azure with all basic resources that are typically included.

Azure Landing Zones Terraform Module

Building a Landing Zone in Azure can be a daunting task, especially since Microsoft provides various options to pursue, from Azure resource manager (ARM) templates to Terraform modules. If you’d like to see how the different ways compare, check out our Azure Landing Zone Comparison. In the end, any approach will implement the Azure Landing Zone conceptual architecture (also called enterprise scale in other contexts), or part of it.

Azure Landing Zone conceptual architecture

That already seems like a lot, doesn’t it? Certainly not something that a solo Platform Engineer or an Enterprise Architect will find easy to do. But that’s fine, since Azure already has a ready-to-use Terraform module that will create the baseline of that architecture (image below).

Azure Landing Zone conceptual architecture (Azure enterprise-scale Terraform Module)

There are many good reasons to start out with Azure Landing Zones Terraform module:

  • Microsoft recommends it for most organizations.
  • It is lightweight, and quick to deploy.
  • Their repository includes a Wiki with detailed examples to use.
  • It paves your way to adopting GitOps from the get go.
  • It prepares your Landing Zones setup for scale.

There are some remaining questions which are not answered by using this module:

  • How can I store the terraform state of this module?
  • How can I restrict access to the terraform state file to specific users?
  • As a Platform Engineer or an Enterprise Architect, I have other resources I want to include to my cloud foundation that are not covered by this module, how can I do that?

These questions are answered by the previously mentioned Landing Zone Construction-Kit. In the next section, we will explain how to use this tool to become the new superhero by setting up a new Landing Zone with only a few quick commands.

In one hour, you will have ready-to-use Landing Zones to start your compliant and secure Cloud Journey.

How to: Using the Landing Zone construction-kit to build Landing Zones in Azure

We will utilize the collie command line tool. We are using collie because it gives you a fully automated deployment process and requires very few manual steps from your side. Let’s jump in.


Before building Landing Zones we will need to have the following:

  • An Azure Active Directory (AAD) Tenant
  • An Azure subscription
  • A highly privileged user (with the Global Administrator role and User Access Administrator on the root management group)


Follow collie-cli installation guidelines to install collie-cli. After that, we will check that collie works properly.

  1. collie -V

This shows that collie is installed; it also checks whether all required dependencies are installed.

After installing collie and its required dependencies, login with az cli to the AAD tenant where you will deploy your landing zones.

  1. az login --tenant <aadTenantPrimaryDomain>

At the beginning you will want to create a new cloud foundation with collie, which makes it super simple to organize your code and later manage multiple foundations.

  1. mkdir cloudfoundation && cd cloudfoundation
  2. collie init
  3. collie foundation new tutorial

When creating your new tutorial foundation you will go into interactive mode and be prompted to add a new cloud platform and configure it for your foundation.

Save and exit

As you can see from the screenshots above, first select add cloud platform, then choose Azure as your new platform, and finally configure your cloud platform and the Azure subscription that you will create all resources in.

Once you’re done, select save and exit.

Bootstrapping Action

Now, in order to actually build a Landing Zone, we simply make use of the Azure Landing Zones Terraform Module and include it in the Landing Zone construction-kit framework.

Let’s execute the following:

  1. collie kit bundle tutorialbundle
Selecting the KitBundle for Azure Enterprise Scale Modules
Configuring the KitBundle

What this does is the following:

  1. Downloads the required kits (bootstrap and base)
  2. Queries basic information required to configure the inputs of your foundation. These are inputs that will be passed to the downloaded kits.
  3. Bootstraps your foundation by creating an Azure object storage to store the Terraform state and a service principal with the required permissions to deploy your Landing Zones. This will also reconfigure the Terraform backend and prompt you to migrate the Terraform state to the newly created storage!
Bootstrapping at work: Terraform asks to migrate its state to the new remote storage

Rolling out the Landing Zone

Now all you need to do is to deploy your Landing Zones with this command:

  1. collie foundation deploy tutorial --module base
Collie instructs Terraform to roll out the Azure Enterprise Scale-Kit

This uses the previously configured inputs to deploy the Azure Landing Zone Terraform module. It will take roughly 30 minutes to create all the cloud resources.

That's it!

Now you have deployed your Landing Zones with our bootstrap module and the Azure Enterprise Scale module:

  • You have your Terraform state managed in your new object storage
  • Access to that storage is restricted to specific users (those included in the foundation platform engineers group).
  • Your foundation has everything defined as code and you can utilize the Landing Zone Construction Kit to update your Landing Zones.

How great is that? Almost no work to go from nothing to deploying your first Landing Zones, something that can otherwise take days, weeks, or even longer! And the best thing is that everything is kept open and modular, so you can adapt the Landing Zone to your wishes and make it your own. But you have been kick-started with all the best practices recommended and developed by Azure.

What's next?

You have seen how easy it is now to set up a working Azure Landing Zone that follows Azure’s best practices. Now you have everything at hand to go from “zero to hero”. Deploying Landing Zones (day 1 of the Cloud Landing Zone lifecycle) is already a huge step towards a smooth cloud journey. But what about operating them (i.e. day 2)?

On a larger scale, when deploying multiple Landing Zones, their management becomes essential. Upgrading your Landing Zones when new functionalities arrive, when a fix is needed, or when a new security vulnerability is identified should be as fast and effortless as possible. This is where meshcloud’s meshStack comes into play.

Book a demo now!

Running Cron Jobs on Cloud Foundry

Many cloud applications need to run some sort of scheduled tasks for chores like processing statistics or doing internal housekeeping. There are two different strategies for implementing scheduled tasks in cloud applications running on Cloud Foundry: you can either build scheduling and tasks into the application itself, or schedule and run the task in separate containers.

Some application frameworks like Spring include some built-in scheduling support. However, this scheduling support does not include a distributed coordination mechanism. This means that in an application horizontally scaled to multiple instances, each instance will run the scheduled task individually. Depending on the nature of the task, this may cause observable side effects, like emails to your customers being sent repeatedly.

It's thus preferable to have a central entity for scheduling. You could of course use e.g. a Java Spring App that needs approximately 1G of RAM to do that for you, but that would be very wasteful. Instead, we can build a simple cron scheduler that runs on 16 MB of RAM to get reliable task scheduling for just a few cents per month.

The task scheduler can then execute arbitrary scripts or code, for example to:

  • invoke an https endpoint on your application to perform the task
  • queue a message on RabbitMQ for processing by a worker
  • trigger execution of the job in a separate Cloud Foundry Task Container

meshcloud's cf-cron scheduler

Our sample repository demonstrates how to run scheduled tasks on Cloud Foundry with a very small footprint (8 to 16 MB RAM) using a traditional crontab. Traditional cron daemons need to run as root and have opinionated defaults for logging and error notifications. This makes them unsuitable for running in a containerized environment like Cloud Foundry. Instead of a system cron daemon we're thus using supercronic to run our cron tab.

How it works

This application is built using the binary buildpack and executes supercronic on the crontab file. The crontab file specifies all your cron jobs. To add additional jobs, simply add a new line specifying a schedule and command to the crontab.

Note: By default, supercronic will log all output to stderr so we redirect that to stdout in our cf manifest.
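For illustration, a crontab producing the jobs that appear in the sample's log output might look like this (supercronic also accepts schedules with extra seconds fields):

```shell
# crontab: one job per line, a schedule followed by a command
*/1 * * * * cf --version
*/2 * * * * * * echo "hello world, every 2 seconds"
```

Each line becomes a job with its own position, iteration counter, and per-job log lines.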

You can also include additional scripts and binaries to execute more complex actions. This example allows you to install apt and debian packages to use in your cronjobs. You can specify these packages in apt.yml and they will be installed during staging by the apt-buildpack, courtesy of the magic multi-buildpack.

After cf pushing this sample app to Cloud Foundry, you can see that it happily executes the jobs from the crontab in the log output:

2018-03-05T10:59:00.00+0100 [APP/PROC/WEB/0] OUT time="2018-03-05T09:59:00Z" level=info msg=starting iteration=237 job.command="echo "hello world, every 2 seconds"" job.position=1 job.schedule="*/2 * * * * * *"
2018-03-05T10:59:00.00+0100 [APP/PROC/WEB/0] OUT time="2018-03-05T09:59:00Z" level=info msg="hello world, every 2 seconds" channel=stdout iteration=237 job.command="echo "hello world, every 2 seconds"" job.position=1 job.schedule="*/2 * * * * * *"
2018-03-05T10:59:00.00+0100 [APP/PROC/WEB/0] OUT time="2018-03-05T09:59:00Z" level=info msg="job succeeded" iteration=237 job.command="echo "hello world, every 2 seconds"" job.position=1 job.schedule="*/2 * * * * * *"
2018-03-05T10:59:00.05+0100 [APP/PROC/WEB/0] OUT time="2018-03-05T09:59:00Z" level=info msg="cf version 6.34.1+bbdf81482.2018-01-17" channel=stdout iteration=7 job.command="cf --version" job.position=0 job.schedule="*/1 * * * *"
2018-03-05T10:59:00.05+0100 [APP/PROC/WEB/0] OUT time="2018-03-05T09:59:00Z" level=info msg="job succeeded" iteration=7 job.command="cf --version" job.position=0 job.schedule="*/1 * * * *"

Scheduling Cloud Foundry Tasks

While the cron container here is designed to be small and lightweight, you may want to use it to trigger more resource intensive tasks and processes. When a simple curl to an http endpoint is not enough to kick off such a task on your existing app, Cloud Foundry Tasks are a great solution to run these processes.

This sample repository thus includes instructions to install the cf cli tool which you can use to trigger such a task using a meshcloud Service User.
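As a sketch of that pattern, a cron job could shell out to the cf CLI to start a one-off task container (the app name, command, and task name below are made-up examples):

```shell
# run a one-off command in a separate task container of an existing app
cf run-task my-app "bin/process-statistics" --name nightly-stats

# list recent tasks of the app and their states (RUNNING, SUCCEEDED, FAILED)
cf tasks my-app
```

The task gets its own container with its own memory quota, so a heavyweight job doesn't compete with your web instances for resources.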

Connect Database to Cloud Foundry

Securely connecting to Service Instances on Cloud Foundry

To connect to a managed service instance on your Cloud Foundry space, most developers use service keys. A service key is a set of authentication credentials that allows you to connect to your database instance via a public IP address and port. While this is quick and easy to do, meshcloud does not recommend keeping service keys open for extended periods of time. Instead, you should delete them as soon as possible and create a new service key anytime you need access again.
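For reference, the service-key lifecycle described above boils down to three cf commands (my-mongodb is the service instance name used later in this post):

```shell
# create a service key exposing credentials for the instance
cf create-service-key my-mongodb tmp-key

# display the credentials (host, port, username, password)
cf service-key my-mongodb tmp-key

# delete the key again as soon as you no longer need access
cf delete-service-key my-mongodb tmp-key
```

Keeping the key's lifetime short limits how long the database is reachable with those credentials.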

A more secure approach that does not involve exposing a connection to your database on a public IP is to spin up a shell container on Cloud Foundry and connect to it via cf ssh. This approach is also more suitable for long running or high performance operations that require close proximity between the database and the shell.

Here's how to do it showcased for MongoDB, but a similar approach also works for our other managed services like MySQL or PostgreSQL.

1) Create an app named mongocli based on a docker container image containing the mongo cli. Tip: you can also specify a specific version using the appropriate container image tag; the example below uses :latest. Note that we tell Cloud Foundry that we need only very little RAM (128 MB), don't want a health-check on the app, and that it doesn't need an HTTP route to be reachable from the outside. After all, we just want to ssh into this app.

cf push -o mongo:latest mongocli --no-route --no-start -u none -m 128M

2) Create a binding of the service instance to your new app. This makes a connection string available to the mongocli app that it can use to connect to the database instance on a private network, just like your production app does.

cf bind-service mongocli my-mongodb

3) Start the container and let it just run a bash shell:

cf push -o mongo:latest mongocli --no-route -u none -m 128M -c bash

That's it, now we can easily ssh into the container using cf ssh mongocli and run env to find our connection string in the VCAP_SERVICES variable. The connection string looks approximately like this:

"credentials": {
"password": "abc",
"database": "db",
"uri": "mongodb://user:pw@ip1:27017,ip2:27017,ip3:27017/db",
"username": "xxx"
"syslog_drain_url": null,
"volume_mounts": [

"label": "MongoDB",
"provider": null,
"plan": "S",
"name": "my-mongodb",
"tags": [


Now you can simply run mongo mongodb://user:pw@ip1:27017,ip2:27017,ip3:27017/db and you're securely connected to your managed database instance - on a docker container running mongo shell on Cloud Foundry - connected via ssh.
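If you want to script that lookup instead of eyeballing the env output, you can extract the uri field from VCAP_SERVICES with standard shell tools; a minimal sketch assuming the JSON shape shown above (the sample payload here is a placeholder):

```shell
# sample single-line VCAP_SERVICES payload with placeholder values
VCAP_SERVICES='{"MongoDB":[{"credentials":{"uri":"mongodb://user:pw@ip1:27017/db"}}]}'

# pull out the first "uri" value with sed
MONGO_URI=$(echo "$VCAP_SERVICES" | sed -n 's/.*"uri":"\([^"]*\)".*/\1/p')
echo "$MONGO_URI"
```

Inside the mongocli container you could then simply run `mongo "$MONGO_URI"`.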

Deploying Meteor Apps on Cloud Foundry

This repository demonstrates different ways to deploy Meteor applications on Cloud Foundry.
There are at least three different ways to deploy Meteor JS applications on Cloud Foundry:

  • using a specialized meteor-js buildpack
  • using the general node-js buildpack
  • using your own docker container image

While a specialized meteor buildpack like meteor-buildpack-horse or cf-meteor-buildpack is a useful option, recent versions of meteor support simplified deployment on servers that provide just a node.js runtime environment. For this demonstration repository, we're focusing on this approach because the node.js buildpack is more widely used and better maintained.

The application we're going to deploy is located in the ./try-meteor folder of this repository. We also assume you have provisioned a Mongo DB service on your space. At meshcloud's Cloud Foundry, you can create a dedicated service instance suitable for use with meteor like this:

cf create-service MongoDB M meteor-mongo

Using the node.js Buildpack

Build a meteor app bundle

On your local machine with the meteor cli installed, build a distributable package into the deploy-buildpack folder:

cd try-meteor && meteor build ../deploy-buildpack/. --server-only --architecture os.linux.x86_64

Building generates a try-meteor.tar.gz file with the meteor application bundled as a plain node.js application, with some helpful instructions in its file:

This is a Meteor application bundle. It has only one external dependency:
Node.js v8.9.3. To run the application:

$ (cd programs/server && npm install)
$ export MONGO_URL='mongodb://user:password@host:port/databasename'
$ export ROOT_URL=''
$ export MAIL_URL='smtp://user:password@mailhost:port/'
$ node main.js

Use the PORT environment variable to set the port where the
application will listen. The default is 80, but that will require
root on most systems.

node.js buildpack wrapper

To deploy this tar.gz file on Cloud Foundry with the node.js buildpack we need to:

  1. upload and unpack the tar.gz bundle
  2. run npm install on the extracted bundle
  3. set the correct environment variables using a launcher js script

We can easily achieve that through a custom package.json that uses npm's postinstall and start script to execute these actions. You can find the package.json and all required files for the deployment in the ./deploy-buildpack folder.

Note: at the time of writing the bundles generated by meteor machine lack the meteor-deque dependency so we just explicitly add that by hand.

{
  "name": "try-meteor",
  "private": true,
  "scripts": {
    "start": "node launcher.js",
    "postinstall": "tar -xf try-meteor.tar.gz && (cd bundle/programs/server && npm install)"
  },
  "engines": {
    "node": "8.9.3"
  },
  "dependencies": {
    "meteor-deque": "~2.1.0",
    "cfenv": "1.0.4"
  }
}
Have a look at the launcher.js file if you want to change service names etc.
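The gist of what launcher.js does can be sketched like this. Note this is only a sketch: the service label ("MongoDB") and the credentials field (uri) are assumptions about the bound service and may differ on your marketplace.

```javascript
// Sketch of the launcher idea: translate Cloud Foundry environment variables
// into the variables the extracted meteor bundle expects before starting it.
// Service label "MongoDB" and credential field "uri" are assumptions.
function meteorEnvFromCf(env) {
  // VCAP_SERVICES holds the credentials of all bound services as JSON
  const services = JSON.parse(env.VCAP_SERVICES || "{}");
  const mongo = (services["MongoDB"] || [])[0];
  return {
    // meteor reads its connection string from MONGO_URL
    MONGO_URL: mongo ? mongo.credentials.uri : env.MONGO_URL,
    // ROOT_URL must point at the public route of the app
    ROOT_URL: env.ROOT_URL || "http://localhost:" + (env.PORT || 3000),
    PORT: env.PORT || "3000",
  };
}
```

The actual launcher then applies these variables to process.env and hands off to the bundle's main.js.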
The final bit that we need is a Cloud Foundry Manifest file to describe our application:

applications:
- name: try-meteor-app
  memory: 512M
  instances: 1
  services:
  - meteor-mongo

We're all set now: a simple cf push and your app should be up and running in the cloud.

Using a Docker Container

The next option is to use a docker-based deployment of the application. This requires that we build our own docker image of the application and publish it to a docker registry.

You can find the code for the docker-based deployment of our sample application in the ./deploy-docker folder. The docker image used in this example is based on the node:8-alpine base image. However, before we can build our container we need to build our meteor application and extract it:

cd try-meteor && meteor build ../deploy-docker/. --server-only --architecture os.linux.x86_64
cd ../deploy-docker && tar -xf try-meteor.tar.gz && rm try-meteor.tar.gz

The docker deployment demonstrated in this repository also uses the same launcher.js script introduced above to automatically initialize meteor environment variables from their Cloud Foundry counterparts. With that out of the way, let's build and push the docker image:

docker build -t meshcloud/meteor-cf-example .
docker push meshcloud/meteor-cf-example:latest

With the container available in a docker registry, we can push it to Cloud Foundry by specifying the docker image in the manifest:

applications:
- name: try-meteor-app-docker
  memory: 512M
  instances: 1
  docker:
    image: meshcloud/meteor-cf-example
  services:
  - meteor-mongo

Now all that's left to do is a simple cf push and your app should be up and running on Cloud Foundry in no time.

iDevBook01:deploy-docker jr (master *) $ cf push
Pushing from manifest to org meshcloud-demo / space aproject as c9f7d64c-404d-4b29-b719-b2359f6c8157...
Using manifest file /Users/jr/dev/demo/meteor/deploy-docker/manifest.yml
Getting app info...
Updating app with these attributes...
name: try-meteor-app-docker
docker image: meshcloud/meteor-cf-example
command: node launcher.js
disk quota: 1G
health check type: port
instances: 1
memory: 512M
stack: cflinuxfs2

Updating app try-meteor-app-docker...
Mapping routes...

Stopping app...

Waiting for app to start...

name: try-meteor-app-docker
requested state: started
instances: 1/1
usage: 512M x 1 instances
last uploaded: Mon 12 Feb 10:25:29 CET 2018
stack: cflinuxfs2
docker image: meshcloud/meteor-cf-example
start command: node launcher.js

state since cpu memory disk details
#0 running 2018-02-12T10:12:23Z 0.1% 38.2M of 512M 1.3M of 1G

iDevBook01:deploy-docker jr (master *) $

Cloud native deployment for Single Page Applications

Single Page Applications (SPAs) are a popular way to deliver modern web apps. With SPAs, users download executable JavaScript code that dynamically renders content in their browser, rather than retrieving pre-rendered pages from a server application. The server application is freed from rendering HTML and instead is only responsible for providing a re-usable API to its data. In this post, we're going to look at how we can leverage Cloud Foundry to do cloud native deployments for SPAs.

Delivering SPAs on Cloud Foundry

To deliver an SPA to its users, a web server only needs to serve the static assets making up the SPA. This also means that the web server can leverage HTTP caching to optimize delivery. On Cloud Foundry, the best way to serve an SPA is by using the Staticfile buildpack. A cf push using this buildpack will bundle your artifacts with an nginx server. To get started, let's assume you have all files of your web-root in your current working directory. We now create a short manifest.yml for Cloud Foundry:

applications:
- name: my-spa
  instances: 1
  memory: 64M

All we need to do now is to execute cf push and your SPA will be up and running in the cloud under a randomly-assigned URL shortly.

Scaling your SPA

The nginx container built for serving your files does not need a lot of memory, which makes it very cost-efficient to run on Cloud Foundry. To achieve high-availability and scale the service under load, we can easily use Cloud Foundry to scale it to two or more instances using cf scale my-spa -i $X.

Injecting Configuration

Cloud native applications should adhere to the 12-factor principles. One of the twelve factors is configuration through the environment. This allows us to deploy the same artifacts to different environments (e.g. dev and production) while keeping all configuration data separately. Cloud Foundry optimally supports workloads that implement this principle by providing configuration through environment variables.
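As a minimal illustration of this principle (the variable names here are made up for the example), the same code produces a dev or a production configuration depending purely on the environment it runs in:

```javascript
// Illustration of configuration through the environment: the same artifact
// runs in every stage, only the env vars differ. API_BASE_URL and LOG_LEVEL
// are hypothetical names chosen for this example.
function buildConfig(env) {
  return {
    apiBaseUrl: env.API_BASE_URL || "http://localhost:8080", // local-dev default
    logLevel: env.LOG_LEVEL || "info",
  };
}
```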

At runtime, we need to inject this configuration data into the application. Because we want to keep serving our application as static SPA assets through nginx for optimum performance, there's no server side code execution like with PHP or a different backend language. However, we can still achieve dynamic injection of configuration data through the use of environment variables and server side includes.

The staticfile buildpack has a lot of useful configuration options. The most powerful of course is that it allows us to provide our own template for nginx.conf, which is evaluated during staging. In this template, we can access environment variables configured for the app on Cloud Foundry. For reference, here's the default config used by the buildpack.

For our custom template, we'll start with this portion of an nginx.conf file:

server {
  listen <%= ENV["PORT"] %>;
  server_name localhost;

  location /inject-environment {
    default_type application/json;
    return 200 '<%= ENV["INJECT_ENVIRONMENT"] %>';
  }

  location / {
    root <%= ENV["APP_ROOT"] %>/public;
    index index.html index.htm Default.htm;
    ssi on;
  }
}
This creates a location mapping in nginx for the path /inject-environment that will respond with HTTP status code 200 OK and a static string read from the INJECT_ENVIRONMENT environment variable. This is useful if we can build our SPA to retrieve this configuration data at runtime. But what if we want to load configuration data before the JavaScript in our application executes?

We can leverage a web server feature called server-side includes (SSI) for this, which we enabled using the ssi on instruction in the nginx.conf above. In the index.html document that loads the JavaScript code of our SPA, we add an SSI instruction to include the string returned by the /inject-environment endpoint:

<!--#include virtual="/inject-environment" -->

Because a server-side include does just plain text concatenation, we need to define our environment variable to be a "smart" string. So let's make it a <script> tag that executes some JavaScript to put a JSON object into the global window object. The easiest way to define this variable is to edit our app's Cloud Foundry manifest file:

applications:
- name: my-spa
  instances: 1
  memory: 64M
  env:
    INJECT_ENVIRONMENT: |
      <script type="text/javascript">window["INJECTED_ENVIRONMENT"] = {
        production: false,
        baseUrls: {
          api: ""
        }
      };</script>
To access configuration data, your SPA can simply pick it up from the global window["INJECTED_ENVIRONMENT"] variable.
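In application code, a small accessor with a local-development fallback keeps this robust. This is only a sketch; the default values are placeholders:

```javascript
// Hypothetical accessor for the injected configuration. When the SPA is
// served outside Cloud Foundry (e.g. a local dev server without SSI),
// fall back to sensible defaults. The default values are placeholders.
function getEnvironment(win) {
  const defaults = {
    production: false,
    baseUrls: { api: "http://localhost:8080" }, // local-dev placeholder
  };
  return (win && win["INJECTED_ENVIRONMENT"]) || defaults;
}
```

In the browser you would call it as getEnvironment(window).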

Pro tip: To dynamically reconfigure your application on Cloud Foundry at runtime, change your environment variables and restage the application. To stage the new version of the application while keeping the current one available, you can use the bg-restage cli plugin.


Cloud Foundry automatically collects your application's requests as they pass through the Cloud Foundry router. Entries like this are tagged with [RTR/$] in the logs. Additionally, Cloud Foundry will also collect all logs emitted by nginx, indicated by the [APP/$] tag in the logs. Here's how that looks in practice:

Retrieving logs for my-spa panel in org meshstack / space production as

2018-01-28T15:46:59.99+0100 [APP/PROC/WEB/0] OUT, - https - - - [28/Jan/2018:14:46:59 +0000] "GET / HTTP/1.1" 200 1044
2018-01-28T15:46:59.99+0100 [RTR/0] OUT - [28/01/2018:14:46:59.994 +0000] "GET / HTTP/1.1" 200 0 1032 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Safari/537.36" x_forwarded_for:"," x_forwarded_proto:"https" vcap_request_id:908529e7-3f5a-4a5c-40c2-d79419a3e5ae response_time:0.003193066 app_id:bc1e736b-6b8c-48b0-a390-8c8ac1aeae0a app_index:0

You can customize this logging in your nginx.conf file using the log_format configuration option.

Step-by-Step: Deploying a Java Spring App to Cloud Foundry

Are you a developer who has been dreaming of someone taking care of deploying and operating your cloud-native applications? With Cloud Foundry you might have found a solution.

What is Cloud Foundry?

Cloud Foundry is a container-based open-source software for application deployment. It enables developers to deploy apps of all common programming languages, such as Java, Python, Node.js, .NET Core or Go within minutes. The platform takes care of deployment, scaling and administration of cloud-native applications throughout their lifecycle.

What you need to get started

  • An account to use Cloud Foundry. As Cloud Foundry is an open-source platform, there are multiple providers.
  • If not done yet, you should get started by installing the Cloud Foundry CLI. Type cf into your terminal to confirm that the installation was successful.
  • If not there already, you need to install Gradle.

So let's get our hands dirty now and deploy an application on the platform.

Step 1: Deploying an application to Cloud Foundry

In general, Cloud Foundry offers two ways to deploy applications. You can either let the platform build a container out of your application using the integrated buildpacks or, in case you already have a Docker container up and running, deploy that Docker container to Cloud Foundry. We will choose the first way in this example and show how to deploy a Java Spring application.

To start, we need to log in to Cloud Foundry. If you are using meshcloud, you do this by choosing a location in the meshPanel and then executing the command shown in the CLI-Access tab.

It will be something like this, depending on the location you chose.

$ cf api

Now, we need to authenticate against this endpoint, by executing:

$ cf login --sso

You will be asked for a passcode, which you also get from the meshPanel (CLI-Access).

We will use an application that is provided within the Cloud Foundry sample apps on GitHub. It is an app to store music albums. While it has an integrated in-memory database, you can easily connect it to one of the databases in our marketplace to get persistence. We will get there in a moment. Let's clone the code first:

$ git clone

and access the folder.

Once you have navigated to the app folder, you can build the app:

$ ./gradlew clean assemble

And push it to the cloud:

$ cf push

And there you are: The app is running.

You should see something like this. Have a look at the manifest.yml file in the app folder. It contains the specifications of name, memory, route, etc. for your app.

You can reach the app's web interface by copying the URL to your browser. Changes in the app will be stored in-memory. However, this is no long-term solution, as you are currently only running a single instance of your app; a restart will lead to a loss of your changes. So what we are going to do next is to bind the app to a persistent database and scale it up to 4 instances.

Step 2: Binding a Data Service to your application

A common microservice architecture consists of multiple apps and services like databases or message queues for communication. You can provision services from the Cloud Foundry Marketplace.

You reach the marketplace by executing:

$ cf marketplace

in your terminal. As a result you get a list of services that are currently offered on the platform. Let's create a MongoDB service to bind to our application.

$ cf create-service MongoDB S musicDB

We are specifying the database, choosing a plan and a name for the database. The database will be created straight away. You will see it in your service list:

$ cf services
Getting services in org meshcloud-demo / space aproject as c9f7d64c-404d-4b29-b719-b2359f6c8157...

name              service              plan              bound apps               last operation
musicDB           MongoDB              S                                           create succeeded

Now we can bind the database to our app with a single command.

$ cf bind-service spring-music musicDB

After restarting the app so that it picks up the new binding, changes within the app will be persisted to the database.

Step 3: Scaling your application

As we saw in our manifest.yml file and the CLI, we just started a single instance of the application. If you have high-availability requirements, you should always run a minimum of 2 app instances. You can easily scale your application at any time. We will now scale our app horizontally to 4 instances.

$ cf scale spring-music -i 4

Alternatively, you can also scale vertically, increasing RAM or disk of your application instance.


Congratulations! You did it.

The Cloud Foundry-ABC

If you are just about to run your first application in Cloud Foundry, this is the place to start. First of all: Congratulations! Cloud Foundry is a great technology used by developers and it is amazing how it can make your life easier. If you feel like you have been a sys admin, even though you thought of yourself as a developer, I promise, you will love Cloud Foundry. Sys admin time is over, you are finally free to work on your code. Do you think this sounds like Docker or Kubernetes? Wait for it, it can get even easier than this.

In this article, we'll just get you familiar with the most common terms of the Cloud Foundry world. So once we dig deeper, you'll know what we are talking about.


Target

There are multiple public Cloud Foundry installations across the world. meshcloud offers a couple of them within Germany. So once you start working with Cloud Foundry, you will have to define which of the installations you are communicating with. Each of these installations is called a target. In order to switch locations within the meshcloud Federation, you will also have to switch targets.

For example

cf api

will lead you to our DARZ location in Darmstadt.


CLI

"CLI" stands for command line interface. The Cloud Foundry CLI helps you interact with Cloud Foundry: push your applications, scale them, create services and so on. In order to use Cloud Foundry, you download and install the Cloud Foundry CLI tools first. Once installed, you can use your terminal or PowerShell to execute commands, which in the background communicate with the Cloud Foundry API via REST calls. Every call you make simply starts with cf.


Organization

A Cloud Foundry organization defines a single tenant on the system. Within meshcloud, this is one customer account. In order to see the organization that you are currently working with in your terminal, just type

cf orgs

and the result will be something like this:


Getting orgs as xyz...



Space

Within meshcloud, a Cloud Foundry space represents a meshcloud project. You may have multiple spaces/projects within your organization. All your applications and service instances are scoped to a specific space/project. In order to see the spaces/projects you have access to within your organization, you can type: cf spaces


Services & Marketplace

We differentiate between applications and services within Cloud Foundry. While you bring the application to the platform yourself, meshcloud offers various stateful services for you to use. Services are offered in the so-called Marketplace. To see the marketplace, just type cf marketplace and you will get a list of services to choose from. They are mainly databases and message queues. Each service is offered in various plans that may differ in size or performance. For example, there are shared database plans (our "S" plans) and dedicated database plans (our "M" and "L" plans) within meshcloud's Cloud Foundry.


And that's it. That is the basic vocabulary that you should be familiar with, when starting to use Cloud Foundry.

Hosting your WordPress Website in Cloud Foundry

If you are looking for a way to host your WordPress website and want to enjoy the freedom of the cloud, here is how you do it! This tutorial is based on Cloud Foundry.

Getting started

First, we have to download and unzip WordPress to prepare it for the Cloud Foundry deployment. In our example, we download the English version from the WordPress homepage. If you need another version, you can find it under this link.

$ wget
$ tar xzvf
$ cd WordPress

If you are using a Mac and don't have wget installed, you can install it via homebrew: brew install wget

Now we have to start with the customisation for cloud use. From within the WordPress directory, we create the ".bp-config" folder containing an "options.json" file that lists the PHP extensions we need to run WordPress.

$ mkdir .bp-config
$ nano .bp-config/options.json
$ cat .bp-config/options.json
{
  "PHP_EXTENSIONS": ["mysql", "bz2", "zlib", "curl", "mcrypt", "mbstring", "mysqli", "gd", "zip"]
}

Prepare Database

Having done that, we have to prepare our database. The first thing to do when a database is needed in Cloud Foundry is to create a service. To see which services are available in your Cloud Foundry installation, type:

$ cf marketplace
service plans description
MongoDB S*, M*, L* MongoDB Instances
MySQL S*, L*, M* MySQL Instances
PostgreSQL S*, M*, L* PostgreSQL Instances
RabbitMQ S*, M*, L* RabbitMQ Instances

For WordPress we need a MySQL database. You have to decide which plan you want to choose and give the database a descriptive name. Now you can create the service.

$ cf create-service MySQL S WordPress

After doing so, we have to wire the credentials of our newly created database into our wp-config. One of the key advantages of cloud deployment is that you can configure your software completely dynamically, so you can deploy it anywhere, anytime. To do so, we read the database connection information dynamically out of the Cloud Foundry environment instead of referencing a concrete database. The "diff" command shows the difference between the files; you don't have to run it yourself, it only shows what you have to change in order to get your site running ("-" lines are old, "+" lines are new).

$ cp wp-config-sample.php wp-config.php
$ nano wp-config.php
$ diff wp-config-sample.php wp-config.php

--- wp-config-sample.php 2015-12-16 10:58:26.000000000 +0100
+++ wp-config.php 2017-10-13 14:34:31.000000000 +0200
@@ -18,18 +18,27 @@
* @package WordPress

+$services = json_decode($_ENV['VCAP_SERVICES'], true);
+$service = $services['MySQL'][0]['credentials']; // pick the first MySQL service
// ** MySQL settings - You can get this info from your web host ** //
/** The name of the database for WordPress */
-define('DB_NAME', 'database_name_here');
+define('DB_NAME', $service['database']);

/** MySQL database username */
-define('DB_USER', 'username_here');
+define('DB_USER', $service['username']);

/** MySQL database password */
-define('DB_PASSWORD', 'password_here');
+define('DB_PASSWORD', $service['password']);

/** MySQL hostname */
-define('DB_HOST', 'localhost');
+define('DB_HOST', $service['host'] . ':' . $service['port']);

/** Database Charset to use in creating database tables. */
define('DB_CHARSET', 'utf8');
@@ -46,14 +55,14 @@
* @since 2.6.0
-define('AUTH_KEY', 'put your unique phrase here');
-define('SECURE_AUTH_KEY', 'put your unique phrase here');
-define('LOGGED_IN_KEY', 'put your unique phrase here');
-define('NONCE_KEY', 'put your unique phrase here');
-define('AUTH_SALT', 'put your unique phrase here');
-define('SECURE_AUTH_SALT', 'put your unique phrase here');
-define('LOGGED_IN_SALT', 'put your unique phrase here');
-define('NONCE_SALT', 'put your unique phrase here');
+define('AUTH_KEY', 's1,d-+8?BFD$n`|0r+sk[/O<e7.86={p');
+define('SECURE_AUTH_KEY', '=|DiG2C+Y1FLKFIZv_S#GZUvfz~Z<FkI#qGV=U25r5ofI). iZZ:atO7R6ru@ms]&C/=K5VxImw#du$}PfiO=?Qc&e-1}Jc*yI$;/87r{!}-Qq9');
+define('SECURE_AUTH_SALT', '.qUX{|m?o4j-(Bs2%6|Sf=^wT<y1dJ-+RV&BTfhEqNi`VB,+?i)DPC5r-FcJe9d<');
+define('LOGGED_IN_SALT', '&ah.M&Vij;X;[85Ox,}Z10PT#f ssFp5UjNU<?5LnK+]1ICBq*E{%8H@v(_Y#Jwz');
+define('NONCE_SALT', '2f3kM}yX5gTJ]AD~3P(/P`~N#L&y*,A-0ldJ-wTr-h(Jild]IAW{K])`^5<0qP `');

We initialised a PHP variable called "services", read the 'VCAP_SERVICES' JSON out of "$_ENV" and decoded it. Then we grabbed the credentials of the first MySQL service and saved them to the variable "service". Now we have the credentials in our "service" variable and can use them to initialise WordPress. Due to security concerns, we removed the salts of our example project; just follow the description in the config file to create your own.
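As a side note, this lookup is language-agnostic, since VCAP_SERVICES has the same structure for every buildpack. A rough Node.js equivalent of what the PHP snippet does (illustrative only, not part of the WordPress deployment) looks like this:

```javascript
// Extract the credentials of the first bound instance of a given service
// label (e.g. "MySQL") from a VCAP_SERVICES JSON string.
function firstServiceCredentials(vcapServicesJson, label) {
  const services = JSON.parse(vcapServicesJson || "{}");
  const instances = services[label] || [];
  if (instances.length === 0) {
    throw new Error("no bound service with label " + label);
  }
  return instances[0].credentials;
}
```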

If you have problems establishing the database connection, you can check the content of the environment variable by using the following command.

cf env $APPNAME

Pushing WordPress to Cloud Foundry

Before we can finally push the app, we have to create the "manifest.yml" file.

applications:
- name: wordpressmeshcloud
  memory: 256M
  services:
  - WordPress

Let Cloud Foundry do its magic now.

cf push

How to Proceed

If you followed the steps, you should now be able to reach your WordPress blog with the browser of your choice. To get the application's URL and its status, you can use the CF CLI again. Simply type:

cf app wordpressmeshcloud

You should be able to see your URL now; it's listed under "routes:" in Cloud Foundry.

For production use, make sure you use a datastore to persist your data. You can do so by using FUSE or a WordPress plugin for cloud storage.