
How to Implement a Lazy Tab Navigation System in Angular

In this blog post you’ll learn why you’d want a lazy tab navigation system in Angular and how to implement one.

Why go for a lazy tab navigation system?

We at meshcloud provide a solution to manage cloud access, users and projects across cloud platforms. One of our main goals is to make interaction with the cloud as easy as possible. Besides that, we want to reduce the effort of management workflows to a minimum, because cloud management is a complex topic with a lot of items to manage.

But how do we want to achieve this?

This is mostly a user-facing topic. Therefore we’ve improved the user experience in our Angular SPA (Single Page Application) and decreased the necessary interactions for specific management workflows. It is important that users can navigate between sections without lagging or freezing.

It is easy to say that improving the user experience is the key. But what actually makes an application feel user friendly?

We’ve invested a lot of time in finding a proper solution, and we came up with what we need: a lazy tab navigation system, which covers all of the mentioned challenges.
Lazy Tab Navigation Example

What is a lazy tab navigation system?

As the name suggests, the idea is a tab navigation system in which the content is loaded lazily. This enables us to implement scalable views with a lot of clearly separated sections. But first, let’s look at the general tab navigation system.

1. Tab navigation system

The tab navigation system is a central component which coordinates the tab creation and holds the tab state. We’ve decided to handle the tab creation based on the route config. This makes total sense because the routes are uniquely identifiable. This means we can automatically prevent the creation of multiple tabs which correspond to the same section. Besides that we have the possibility to create deep links to specific tab sections. This enables us to reference specific tab sections within other views. In general we’ll determine the tab state based on the activated route.
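Since each tab is keyed by a unique route, duplicate-tab prevention falls out naturally. A minimal sketch of that idea (registerTab and the map are illustrative, not our actual API):

```typescript
// Tabs are keyed by their unique route path, so registering the
// same section twice can never produce a second tab.
const tabsByRoute = new Map<string, { displayName: string }>();

function registerTab(routePath: string, displayName: string): void {
  if (!tabsByRoute.has(routePath)) {
    tabsByRoute.set(routePath, { displayName });
  }
}
```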

2. Lazy Loading

The other aspect is lazy loading. Lazy loading is a central Angular feature. It is a design pattern that loads NgModules as needed. For comparison, there is the option to load NgModules eagerly. With eager loading, all NgModules are loaded when the web application is accessed. This increases the bundle size, and for large applications it is not a best practice due to the load time. For a tab navigation system, eager loading is not an option because we have a lot of different sections and we want to allow multiple sub-sections. This would mean loading every section’s content up front. From a performance and UX perspective, this is not a good approach.
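In Angular, lazy loading is declared per route via loadChildren. A sketch of such a route config (the paths and module names here are illustrative, not the exact ones from our codebase):

```typescript
import { Routes } from '@angular/router';

// Each section's NgModule is only fetched when its route is
// first activated, keeping the initial bundle small.
const routes: Routes = [
  {
    path: 'projects',
    loadChildren: () => import('./projects/projects.module').then(m => m.ProjectsModule)
  },
  {
    path: 'financials',
    loadChildren: () => import('./financials/financials.module').then(m => m.FinancialsModule)
  }
];
```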

If we now combine the tab navigation system approach with the lazy loading feature we’ll get a really flexible and scalable pattern. The tab navigation system takes the role of describing the context. It is kind of a container which defines the available routes and manages the tabs. In conjunction with the lazy loading feature, we are able to load the content of the respective tab in a dedicated manner.
Lazy Tab Navigation Concept

What is necessary to implement the lazy tab navigation system?

We’ve got an overview of the lazy tab navigation system from a conceptual perspective. Now we’ll take a look at the technical realization and how we’ve implemented this approach.

We need a shared component which covers all criteria of the tab navigation system. This includes the tab creation based on the router config and tab selection based on the activated route. We call the component RouterOutletNavComponent.

Besides that we’ve defined an interface which describes each tab:

  • displayName
  • routerLink
  • selected

Our interface declares additional properties, for example to show/hide a tab depending on a specific condition, or to attach a badge with countable information. But these are not relevant for the general implementation, so we’ll leave them out for now.

We call the tab interface RouterOutletNavTab.
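Putting the three properties together, the tab interface might look like this (the field types are our assumption from the usage described above):

```typescript
export interface RouterOutletNavTab {
  // Human-readable label rendered in the tab header
  displayName: string;
  // Unique child route path the tab navigates to
  routerLink: string;
  // Whether the tab corresponds to the currently activated route
  selected: boolean;
}
```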

Of course, we didn’t implement the tab creation and tab selection logic within the component itself. We’ve implemented a service that exposes this functionality. This is a best practice to increase modularity and reusability: the component should only handle the user experience. With a clear separation we also increase the testability of the functionality. This approach should be followed every time ;)

We call the service RouterOutletNavService.
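The service’s responsibilities can be sketched framework-agnostically. The method names follow the ones used below, but the signatures here are simplified for illustration (plain arrays and strings instead of Angular’s ActivatedRoute and Observables):

```typescript
interface NavTab {
  displayName: string;
  routerLink: string;
  selected: boolean;
}

// Simplified sketch of the RouterOutletNavService responsibilities.
class RouterOutletNavServiceSketch {
  // Create one tab per configured child route path.
  initializeTabs(childPaths: string[], displayNames: Record<string, string>): NavTab[] {
    return childPaths.map(path => ({
      displayName: displayNames[path] ?? path,
      routerLink: path,
      selected: false
    }));
  }

  // Mark the tab whose route matches the navigated url as selected.
  selectTab(url: string, tabs: NavTab[]): void {
    tabs.forEach(tab => (tab.selected = url.endsWith(tab.routerLink)));
  }

  // The first configured tab is the default navigation target.
  findFirstChildRoute(tabs: NavTab[]): string | undefined {
    return tabs[0]?.routerLink;
  }
}
```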

Now let's combine the RouterOutletNavComponent, RouterOutletNavTab and RouterOutletNavService.

import { Component, OnDestroy, OnInit } from '@angular/core';
import { ActivatedRoute, NavigationEnd, Router } from '@angular/router';
import { Observable, Subscription, of } from 'rxjs';
import { filter, switchMap, tap } from 'rxjs/operators';
// RouterOutletNavTab, RouterOutletNavStyleClass and RouterOutletNavService
// are the shared building blocks introduced in this post.

@Component({
  selector: 'mst-router-outlet-nav',
  templateUrl: './router-outlet-nav.component.html',
  styleUrls: ['./router-outlet-nav.component.scss']
})
export class RouterOutletNavComponent implements OnInit, OnDestroy {

  public readonly styleClass: RouterOutletNavStyleClass = 'nav-child';

  public tabs: RouterOutletNavTab[];

  private sub: Subscription;

  constructor(
    private readonly router: Router,
    private readonly activatedRoute: ActivatedRoute,
    private readonly routerOutletNavService: RouterOutletNavService
  ) {
    this.tabs = this.routerOutletNavService.initializeTabs(this.activatedRoute);
  }

  ngOnInit() {
    /**
     * Select initial tab and navigate to child route.
     */
    this.setupInitiallySelectedTab()
      .subscribe((routerLink: string) => this.routerOutletNavService.selectTab(routerLink, this.tabs));

    /**
     * We listen to the router events to select the specific tab.
     */
    this.sub = this.router.events
      .pipe(
        filter(x => x instanceof NavigationEnd),
        switchMap((x: NavigationEnd) => {
          /**
           * If the firstChild is available then we don't determine the first child tab.
           */
          if (this.activatedRoute.firstChild) {
            return of(x.urlAfterRedirects);
          }

          /**
           * If the child route doesn't exist then we'll determine the child route and select the tab.
           */
          return this.navigateToFirstChildRoute();
        })
      )
      .subscribe({
        next: (routerLink: string) => this.routerOutletNavService.selectTab(routerLink, this.tabs)
      });
  }

  ngOnDestroy(): void {
    this.sub.unsubscribe();
  }

  private setupInitiallySelectedTab(): Observable<string> {
    /**
     * If children are applied, for example in case of redirection, then we'll use the existing destination url.
     */
    const currentUrl = this.router.url;
    if (this.activatedRoute.firstChild) {
      return of(currentUrl);
    }
    return this.navigateToFirstChildRoute();
  }

  private navigateToFirstChildRoute(): Observable<string> {
    return this.routerOutletNavService.findFirstChildRoute(this.tabs)
      .pipe(
        /**
         * This side effect is necessary to apply the determined tab route.
         */
        tap((routerLink: string) => {
          const extras = {
            relativeTo: this.activatedRoute,
            replaceUrl: true
          };

          this.router.navigate(['./', routerLink], extras);
        })
      );
  }
}

We want to support not just a tab navigation system but a lazy tab navigation system. So it is necessary to embed a RouterOutlet into the tab container. For the styling we use Bootstrap as an external dependency.

The corresponding HTML file would look like this:

<div [ngClass]="styleClass">
  <ul class="container nav nav-tabs">
    <ng-container *ngFor="let t of tabs">
      <li class="nav-item">
        <a class="nav-link" [class.active]="t.selected" [routerLink]="[t.routerLink]" [id]="t.routerLink">
          {{ t.displayName }}
        </a>
      </li>
    </ng-container>
  </ul>

  <div class="tab-content p-4">
    <router-outlet></router-outlet>
  </div>
</div>

It is a valid use case to nest multiple lazy tab navigation systems. But for simplicity we allow only one additional layer. Therefore we’ve defined two different CSS classes, ‘nav-root’ and ‘nav-child’, to tell the levels apart.

We call the style class type RouterOutletNavStyleClass.

export type RouterOutletNavStyleClass = 'nav-root' | 'nav-child';

Very well done. We've implemented our shared lazy tab navigation system.

Now we have to feed the tab navigation system with the corresponding data. The router link and whether a tab is selected are determined based on the route config and the activated route. It is really important to declare the available routes with Angular’s lazy loading approach.

But how do we get the display name of each tab? Sure, we could use the route path name. But that name is our uniquely identifiable name; it could be a human-unreadable string. From the user’s perspective, it is not a good approach to use an identifier string as the tab display name. Besides that, we need a way to determine some data up front. Keep in mind that besides the display name we also have condition and badge information in place. So we need the possibility to attach specific session data to each tab. Therefore we declare a resolver which returns a RouterOutletNavSessionData observable.

We call the abstract resolver class RouterOutletNavSessionResolver.

export interface RouterOutletNavTabData {
  displayName: string;
}

export interface RouterOutletNavSessionData {
  [key: string]: RouterOutletNavTabData;
}

export abstract class RouterOutletNavSessionResolver implements Resolve<RouterOutletNavSessionData> {

  abstract resolve(route: ActivatedRouteSnapshot, state: RouterStateSnapshot): Observable<RouterOutletNavSessionData>;
}

Now we have everything in place to use the lazy tab navigation system within a specific use case.

Lazy tab navigation system in action

At this point it makes sense to apply the lazy tab navigation system to one of our use cases. We’ve mentioned at the beginning that we provide a solution to manage cloud tenants. This includes for example the customer management. Within a customer we can manage projects, users, financials and much more. So it makes total sense to use the lazy tab navigation system with one additional level to make the management as easy as possible.

We’ll only consider the customer root level. The nested levels are out of scope for now. But in general they follow the same approach.

Our top level component would be the CustomerManagementComponent. This component declares the RouterOutletNavComponent within the HTML file and applies the ‘nav-root’ style class.

<mst-router-outlet-nav styleClass="nav-root"></mst-router-outlet-nav>

Then we’ll add all available routes to the CustomerManagementRoutingModule. Besides that, it's important to add a CustomerManagementNavSessionResolver to determine the session data up front. This provides our basis for the tab creation.

@Injectable({
  providedIn: 'root'
})
export class CustomerManagementNavSessionResolver extends RouterOutletNavSessionResolver {

  resolve(route: ActivatedRouteSnapshot, state: RouterStateSnapshot): Observable<RouterOutletNavSessionData> {

    const sessionData = {} as RouterOutletNavSessionData;

    sessionData['projects'] = this.getProjectsNavTabData();
    sessionData['access-control'] = this.getAccessControlNavTabData();
    sessionData['financials'] = this.getFinancialsNavTabData();

    return of(sessionData);
  }

  private getProjectsNavTabData(): RouterOutletNavTabData {
    return {
      displayName: 'Projects'
    };
  }

  private getAccessControlNavTabData(): RouterOutletNavTabData {
    return {
      displayName: 'Access Control'
    };
  }

  private getFinancialsNavTabData(): RouterOutletNavTabData {
    return {
      displayName: 'Financials'
    };
  }
}

const routes: Routes = [
  {
    path: '',
    component: CustomerManagementComponent,
    resolve: {
      session: CustomerManagementNavSessionResolver
    },
    children: [
      {
        path: 'projects',
        loadChildren: () => import('./projects').then(m => ...)
      },
      {
        path: 'access-control',
        loadChildren: () => import('./access-control').then(m => ...)
      },
      {
        path: 'financials',
        loadChildren: () => import('./customer-financials').then(m => ...)
      }
    ]
  }
];

@NgModule({
  imports: [RouterModule.forChild(routes)],
  exports: [RouterModule]
})
export class CustomerManagementRoutingModule { }

That's it. No additional steps are necessary. We can easily add tabs and nested lazy tab navigation systems to the CustomerManagementComponent. For example we could think about a settings section or projects overview section. There are no limits to the imagination.

As inspiration our customer management view looks like this:
Lazy Tab Navigation Angular

AWS Landing-Zone Solution Comparison


Are you on your path to migrating your teams’ workloads to AWS, whether from on-premise or from another cloud provider? Then you have most probably stumbled upon the term “Landing Zone” as an important piece of the puzzle for a smoother cloud migration journey. If this term does not ring a bell, check out our blog post about the landing zone lifecycle. Simply put, landing zones set your foundation in AWS according to your organization’s security and compliance requirements.

It is rarely the case that only one landing zone solution exists. This poses the challenge of choosing the right landing zone implementation for the right use case.

In this blog post we go through AWS’ ecosystem of landing zones. The goal is that after you finish reading, you will have a clearer picture of the different AWS landing zone solutions, helping you choose the one that fits your organization best.

ℹ️ Note that custom-built landing zone solutions are out of the scope of this blog post.
💡 If you are interested in Landing Zone solutions for Azure or GCP, please have a look at our comparison blog posts of these cloud platforms.

AWS Landing Zone Ecosystem

When starting your journey to search for the best-fit landing zone option to pursue, you will most certainly come across different solutions. We will mainly describe a set of landing zone solutions that are provided and maintained, or recommended, by AWS, namely AWS Control Tower (with the plain account factory or Account Factory for Terraform) and the AWS Landing Zone Accelerator.

There are other solutions that we will not cover in the comparison, namely AWS Landing Zone and AWS Secure Environment Accelerator (SEA). The former is on long-term support, will not receive more features, and is no longer recommended by AWS. The latter is targeted at governmental and public sector environments that require a high level of restrictions. We will give a brief explanation of each solution, but if you are interested in comparing these solutions as well, let us know in the comments ⬇️ .

AWS Control Tower (CT)

We recommend that all new landing zones start with AWS Control Tower. AWS Control Tower helps you build out an initial prescriptive landing zone configuration, use out-of-the-box guardrails and blueprints, and create new accounts using AWS Control Tower account factory.
-AWS Documentation

This is the first solution you will probably come across. AWS repeatedly recommends this approach going forward. Unless you already have your landing zone setup customized with older approaches, you should definitely try it out.

AWS Control Tower’s main functionality is that it sets up a basic landing zone for you using multiple AWS services behind the scenes, such as AWS Organizations and AWS CloudFormation StackSets. It is pretty easy to set up, as AWS provides a step-by-step walkthrough (i.e. a wizard). In that wizard, you first choose the region of your control tower, and you also have the option to deny services in regions you specify. Then you configure organizational units (OUs), shared accounts, CloudTrail (e.g. log retention periods) and encryption. Eventually, control tower will create the following:

  1. Two Organizational Units (OUs)
    Security: This OU contains shared accounts (log archive and audit accounts).
    Sandbox: This OU will be empty, to be filled with user provisioned accounts. Although its name indicates an environment with loose restrictions, by default it includes similar guardrails as all other OUs.
  2. Three shared accounts: the management account (note that this account already exists; it is the account from which you set up CT) as well as the already mentioned accounts under the Security OU (log archive and audit accounts).
  3. A native cloud directory with preconfigured groups and single sign-on access.
  4. 20 preventive guardrails to enforce policies (using Service Control Policy (SCP)) and 3 detective guardrails to detect configuration violations (using AWS Config).

AWS Control Tower also offers an account factory where you can manually provision accounts or enroll existing accounts. These accounts are then governed by control tower. AWS also offers a framework around account factory to enable automation using a GitOps workflow, which we explain in more detail below.

AWS Control Tower Account Factory for Terraform (AFT)
AWS Control Tower Account Factory for Terraform
AFT is built on top of AWS Control Tower. It provides automation around account provisioning and further customizations using a GitOps approach.

AFT can be set up after setting up control tower, mainly in two parts: manually creating the AFT management account and then applying the AFT terraform module. After applying the module, there will be 4 empty repositories (aft-account-customizations, aft-account-provisioning-customizations, aft-account-request, aft-global-customizations) in your configured VCS (e.g. CodeCommit, GitHub, etc.) that you need to populate with source repositories.

As shown in the “AFT Deployment Module” diagram above, AFT performs multiple automated steps to provision and modify accounts, as well as create custom resources. It does so by providing an interface to users (e.g. developers). For example, a user can create a commit in the aft-account-request repository (directly or through a pull-request based flow) with a file that defines the account request. The commit then triggers the configured workflow to provision an AWS account governed by CT.

AWS provides various resources to get more familiar with AFT. For information on broad concepts and some technical details, we recommend checking the AWS AFT documentation. For deep-dive technical guidance, we recommend going through the AWS Control Tower Immersion / Activation Day - AFT workshop.

Landing Zone Accelerator on AWS

Landing Zone Accelerator on AWS is the latest approach by AWS to provide a solution for a secure Landing Zone based on best practices.

It is recommended to be used together with AWS Control Tower, but it also works in regions that don’t support Control Tower. In that case, it uses AWS Organizations directly and creates the intended OUs with the according policies applied. It also works with AWS GovCloud (US), which is not supported by AWS Control Tower. So it is a solution that can be used by every AWS customer.

The Accelerator is executed completely via AWS-native services. An AWS CloudFormation stack is used to roll out a CodePipeline that creates and updates the landing zone according to a configuration that can be customized to your needs.

AWS Landing Zone Accelerator

The CodePipeline is the central component. It does the initial rollout of the landing zone and also enables continuous integration and deployment. All that is needed to roll out changes to the landing zone is a commit to the config code repository with the config changes you would like to apply.

Rolling it out will actually result in the following environment.

Resource Hierarchy AWS Landing Zone Accelerator

With this Resource Hierarchy responsibilities are clearly segregated into different OUs and accounts.

💡 If you try to roll out the Accelerator, please use this page from the AWS documentation to get a step-by-step guide including preconditions. Without it you will easily miss some important steps.

When using the Accelerator in combination with AWS Control Tower, you have to make sure that you enter the email addresses of the management, audit and logging accounts that were already created by Control Tower, as the Landing Zone Accelerator builds on top of it. Otherwise you will run into an issue in the code pipeline’s prepare stage.

AWS Landing Zone

AWS Landing Zone is currently in long-term support and will not receive any additional features. Customers who want to set up a new landing zone should review AWS Control Tower and Customizations for AWS Control Tower .
-AWS Documentation

This is the old approach to building an AWS landing zone, based on AWS Organizations. One of its key components is the Account Vending Machine (AVM), a Service Catalog product that enables preconfigured account provisioning. It creates a multi-account structure: a security, a logging, and a shared services account under a “Core OU”.

At the time of writing, searching for AWS Landing Zone in the AWS documentation does not provide much information, and you will be redirected to AWS Control Tower on many occasions.

If you are already using this solution and are actually happy with it, let us know in the comments below!

AWS Secure Environment Accelerator (ASEA)

This solution has been around since 2020 and was initially built with a focus on the Government of Canada. The main target of this project is environments that require a high level of security and restrictions, such as governments and public sector organizations. It offers an opinionated and prescriptive architecture for your AWS multi-account setup, while also making it possible to customize as needed. It provides sample configurations that will build you an end-to-end landing zone.

There is extensive AWS secure environment accelerator documentation that will lay out details on the solution including workshops for both administrators and project teams.

If you are already using this solution and are actually happy with it, let us know in the comments below!

Comparing the Options

For this comparison, we assign 1 to 3 ⭐ for the different aspects that we look at for these options. If a certain aspect is not covered at all, we assign a ❌. Additionally, you can find some details about why we rated an aspect the way we did. These details also provide a deeper insight into how these aspects are covered by the options.

Feature Comparison

The feature comparison is based on the Cloud Foundation Maturity Model (CFMM). You can read details of what these blocks mean on the CFMM website. You can also follow the links in every row that redirect you directly to the details of the according CFMM block.

AWS Control Tower with Account Factory AWS Control Tower with AFT AWS Landing Zone Accelerator
Resource Hierarchy ⭐⭐ Creates 2 Organizational Units (OUs), Security and Sandbox. It also creates 3 shared accounts: a standalone management account (not belonging to an OU), and log archive and security audit accounts in the Security OU. The Sandbox OU remains empty, to contain the newly provisioned accounts. You can create new OUs based on your desired structure; they will be governed by CT. AWS also offers a sample hierarchy with additional OUs that you can manually include. ⭐⭐⭐ In addition to what CT provides, AFT offers a GitOps approach to provision accounts under specified OUs through an automated workflow. ⭐⭐⭐ By default it creates 2 OUs, Infrastructure and Security. You can easily define more OUs in the config. Another great feature it provides is the option to use a Quarantine OU where almost everything is forbidden. It is used for newly created accounts that are still under construction by the accelerator. That way, unintended access or changes during creation can be avoided.
Resource Policies ⭐⭐⭐ AWS Control Tower sets up 20 preventive guardrails (implemented by SCPs) and 3 detective guardrails (implemented by AWS Config). Those are the “Mandatory” guardrails enabled by default, which we consider reasonable, such as disallowing deletion of the log archive bucket, disallowing changes to AWS IAM roles set up by CT and CloudFormation, etc. There are also more guardrails available that are not enabled by default: the “Elective” and “Strongly Recommended” guardrails. Newly provisioned accounts with Account Factory inherit guardrails from their parent OU. ⭐⭐⭐ Accounts provisioned through AFT also inherit all guardrails from their parent OU, which is covered by CT. Additional policies can be applied through the AFT customization framework. ⭐⭐⭐ Reasonable guardrails are applied to the default OUs Infrastructure, Security and Quarantine. It is easy to define additional guardrails or to define them for additional, custom OUs.
Centralized Audit Logs ⭐⭐⭐ Enables CloudTrail service to aggregate action and event logs into the shared Log Archive account from different AWS accounts and records them in CloudWatch. With CT release ≥ 3.0, an organization trail is created to gather event logs on an organization level instead of a member trail that gathers logs on a per account basis. ⭐⭐⭐ With AFT, you can also enable CloudTrail data events. Data events are events performed on or within a resource. These are disabled by default as they often incur high costs. This is done when deploying AFT by setting aft_feature_cloudtrail_data_events to true. ⭐⭐⭐ Uses the Log Archive Account to store the logs. It can be configured easily which logs shall be included.
Cloud Tenant Tagging ❌ No AWS account is tagged when setting up CT. ⭐ By default, provisioned accounts are not tagged. It is possible to tag those accounts through the terraform variable account_tags. ❌ Only Tag Policies on the org level can be defined. They only ensure that if a certain tag is set, it complies with the definition in the policy; resources without any tags are ignored. During account creation, no tags can be set. So the tagging of resources must be done outside of the accelerator.
Tenant Provisioning/Deprovisioning ❌ CT only covers the initial provisioning of your multi-account AWS setup. This does not qualify as tenant provisioning/deprovisioning, which is partly covered by the account factory. ⭐⭐ AFT’s main functionality is automation around tenant provisioning. Deprovisioning is not supported: removing the aft-account-request terraform for a specific account only removes it from AFT management. ⭐ Workload accounts can be defined in the accounts-config.yaml. Only the very basic account information can be entered here; tags, service accounts, etc. cannot be defined for workload accounts.
Playground/Sandbox Environments ❌ Creates a Sandbox OU. However, it also has the same guardrails as the Security OU. ❌ No predefined Sandbox OU is available
Privileged Access Management (PAM) ⭐⭐⭐ Uses the IAM Identity Center service to offer preconfigured groups. You can then add users to those groups based on their role in the organization. ⭐⭐⭐ By default, the user email set in the account request is assigned AdministratorAccess to the account. Additionally, groups created with CT are assigned specific permissions: the AWSSecurityAuditPowerUsers group is assigned AWSPowerUserAccess, the AWSControlTowerAdmins group is assigned AWSOrganizationsFullAccess, and the AWSSecurityAuditors group is assigned AWSReadOnlyAccess to that account. ⭐⭐⭐ By default it defines only an Administrator group. But as you can base it on AWS Control Tower, you can make use of the advanced default roles created by AWS Control Tower.
Service Account Management
Virtual Networks ⭐⭐⭐ CT creates a default VPC in the management account with 3 subnets per availability zone. You can edit VPC configurations for newly provisioned accounts with the account factory. For example, you can allow internet access for created subnets, which will lead to the creation of a NAT Gateway. When provisioning a new account, CT automatically deletes the default VPC and creates a new VPC configured by CT. ⭐⭐⭐ Similar to the normal account factory, with the addition of the AFT customization framework that allows you to include additional terraform resources. ⭐⭐⭐ It provides a reasonable default network setup in the network-config.yaml. Additionally, the default config contains a lot of additional options as commented code. You just have to activate them and adapt them to your needs.
On-Prem Connect ❌ The CT workflow does not offer on-prem connection out of the box. It can however be implemented using Transit Gateways and Site-to-Site VPN. The AWS Control Tower workshop provides a detailed guide on interconnecting control tower accounts’ VPCs as well as connecting them to on-prem. ❌ While it is not supported out of the box by e.g. switching on a flag, the AFT customization framework enables administrators to include terraform resources as they see fit. There are a bunch of “Site-to-Site VPN” terraform resources available that can be leveraged. ⭐⭐ It provides a TransitGateway in the network config to connect easily to a hub that handles the on-prem connection. As directConnectGateways can also be defined in the network config, everything that is needed to establish an on-prem connection is available.
Cloud Interconnect ❌ Similar to on-prem connect, CT does not offer cloud interconnect out of the box, but can be implemented as an “on-prem” connection. Additionally, AWS Control Tower workshop offers guidance on interconnecting AWS accounts VPCs. ❌ Same as for CT itself applies. ⭐⭐ In the network config directConnectGateways can be defined to connect to networks at other cloud providers.
Managed Key Vault
Automation in CI/CD ❌ Runs automation under the hood, not configurable by the user. ⭐⭐⭐ Automation is the core of AFT. It sets up a pipeline that is triggered by a code change in a version control system (VCS). It uses AWS CodeCommit natively, but also supports other alternatives such as GitHub and BitBucket. ⭐⭐ The whole Accelerator framework is based on AWS CodeBuild and CodePipeline. Any updates to the config are rolled out via that pipeline. The only downside is that you can only use the AWS services for it; integrating it into any other CI/CD tool is not possible.

Non-Functional Comparison

AWS Control Tower with Account Factory AWS Control Tower with AFT AWS Landing Zone Accelerator
Extensibility/Modularity/Scalability ⭐⭐ CT only provides the baseline setup. It can be extended with “Customizations for AWS Control Tower (CfCT)”. You can also create your own OU structure, to which CT will also apply guardrails. ⭐⭐⭐ In addition to what can be extended and customized in CT, AFT provides a customization framework that allows users to include AWS resources with terraform. ⭐⭐ You can customize the config rather easily to your needs, but it seems like it is not possible to add completely new modules for other services to the Accelerator. As it uses a DSL to define all the resources via its config, you are pretty much locked to the config options that are provided by the accelerator. The config actually covers quite a lot of services (35+ services are mentioned in the AWS docs) and seems to be sufficient for many use-cases. But if you want to integrate an AWS service into your landing zone that is not yet supported by the accelerator, it looks like you have to create a pull request to the accelerator to get it in.
Terraform ⭐⭐ As its name suggests, it offers a workflow based on terraform. It requires a bit of ClickOps initially to create the AFT management account. ❌ It is completely based on AWS CloudFormation.
Learning Curve ⭐⭐⭐ It is easy to start with CT and set it up. Advanced AWS knowledge is not required. ⭐⭐ There are many AWS services involved in the workflow, making it a bit harder to grasp. Also, AWS account provisioning is not directly performed via terraform; instead, an account request is created which triggers an account provisioning process. ⭐⭐ Except for some current issues, which should be related to the accelerator being quite new on the market, rolling it out is rather easy. You only have to deploy a StackSet and the rest of the bootstrapping is done automatically for you. As a next step you set the configuration you would like to have in 6 yaml files that are rather easy to understand. You also get a good reference implementation based on best practices from which you can copy and adapt your own configuration.
Community ⭐ Scattered community across multiple platforms. There doesn’t seem to be a specific place to ask questions, but you can find some answers on different platforms such as Stack Overflow. ⭐⭐ In addition to the different community platforms, you can open a GitHub issue in the AFT repo in case you encounter problems. The AFT maintainers seem to be responsive to issues, but they still don’t accept contributions at the time of writing. ⭐ As it is a rather new solution, you cannot find a lot around it yet. The GitHub repo only has 75 stars and 12 contributors, only a few issues exist in the repo yet, and the responsiveness of the developers is rather low at the moment. At least 2 tickets we created or commented on did not get any response from official contributors within more than a week. Even though the community around it is still rather small, an advantage is that AWS Support fully covers it.
Costs ⭐⭐⭐ There are costs associated with services that CT uses such as CloudTrail and S3 buckets. The resources it creates can incur costs depending on the usage, for example the more accounts created, the more logs are ingested into the log archive account. AWS has a pricing page with details on different scenarios and how much cost is associated. Another example is, in account factory settings, you can allow internet access for created subnets in the new account, which will lead to the creation of a NAT Gateway and drive some cost. ⭐⭐ There are no additional costs for AFT but also for resources it creates. Transparency in AFT is limited and you need to figure out which options to enable/disable to control which resources are created. For example, AFT will by default create AWS PrivateLinks. In order to disable that, you need to set aft_vpc_endpoints=false when deploying the AFT terraform module. ⭐ AWS is very transparent around costs that are associated with running the best-practice landing zone. At the time of writing this article it is ±360€ per month. If you are not using all resources that are configured in the default landing zone you can save a bit of money. Main cost drivers are the KMS and the networking.
Time To Success ⭐⭐⭐ Setting up AWS Control Tower for the first time takes 1-2 hours, which includes going through “set up your landing zone” that takes roughly 15-30 minutes. This assumes that you do not have an existing setup that you need to decommission; as that will increase the time to success. ⭐⭐ Setting up AFT technically takes around ~1 hour, which includes creating the AFT management account and deploying the AFT terraform module. Post-steps like populating your repositories for automation would rather be quick. All in all, assuming the process goes smoothly, setting up CT with AFT would take ~4 hours. ⭐⭐ The concept of AWS accelerator is quite promising to set up a secure Landing Zone rather quickly due to the best practice defaults and the easy to understand configuration. But currently there is the downside that if something goes wrong while rolling out the pipeline you might receive really bad error messages that don’t help you resolving the issue. This can delay the time to success significantly. But these issues could vanish over time when the accelerator becomes more mature.

Find your way to building AWS Landing Zones

Building your landing zone in AWS eventually implies it will be built on top of either AWS Organizations directly or AWS Control Tower to enable your multi-account setup. Using either of these would also impact your decision on which landing zone solution to pursue. AWS Organizations is merely a tool that enables a multi-account setup but does not actually build a landing zone, while AWS Control Tower provides an abstraction to multiple AWS services including AWS Organizations and builds a basic landing zone that you can extend atop. The below simplified flow chart conveys the relation between the different solutions.

AWS Landing-Zone Solution Comparison

Unless you are using a region that is not supported by AWS Control Tower, or you need absolute control over all the components of your landing zone, you should always go with AWS Control Tower. Most solutions nowadays support it, and it is highly recommended by AWS. Beyond the guardrails it provides, however, it doesn't add much on its own and can be considered a very basic landing zone when used solo. Therefore, if you're only interested in a basic landing zone setup that provides you with a multi-account framework and a set of preventive and detective guardrails, then you should definitely consider AWS Control Tower.

From there you can choose to stay with only Account Factory in which you will pretty much only benefit from CT’s guardrails when provisioning accounts, or go for Account Factory for Terraform which additionally provides a GitOps based framework to automatically provision accounts with customization capabilities.

If you’re interested in an end-to-end landing zone solution, consider AWS Landing Zone Accelerator. It is a solution that promises to build full-fledged AWS landing zones with automation capabilities, and it supports both AWS Control Tower and AWS Organizations. This solution was first released on 23 May 2022, so it is fairly new. We found that the product itself and its documentation are not yet battle-proven, and you can run into some unexpected issues when trying to roll it out. As of this writing, we stumbled across a blocker that prevented us from actually deploying it, and therefore we cannot give a full recommendation yet. We like the simplicity and customizability of the config files that steer the accelerator, and the understandable pipeline that deploys it. But we wonder whether more extensive customization, going beyond the services and features supported by the accelerator's configuration files, is feasible.

The different solutions presented in this blog post are actually not really alternatives that compete against each other. Instead they combine and extend each other very well. For a full-fledged Landing Zone we therefore recommend the combination of AWS Control Tower with AFT for automated account provisioning and Landing Zone Accelerator on AWS to add further highly recommended features to your Landing Zone.

Best-Practice AWS Landing Zone Solution Diagram

If the concerns we mentioned regarding the production-readiness of AWS Landing Zone Accelerator hold you back, or if you need full control over your landing zone, then go for a custom-built landing zone, either via an AWS partner or with your AWS team.

Did this blog post help you decide which landing zone solution you will apply to AWS? If you have already gathered your own insights into building an AWS landing zone and want to back up, add to or discuss the insights from this blog post, please leave a comment! We look forward to hearing about your experience.

Multi-Cloud Maturity Title

The path to proper multi-cloud management – a maturity model

Cloud means Multi-Cloud

The cloud is the foundation of an efficient IT operation in the digital age. More often than not, one cloud platform is not sufficient, for at least two reasons: 1) Different cloud platforms serve different purposes, and each has specific strengths and weaknesses, so embracing the plurality of platforms in a best-of-breed manner is more productive than trying to fight it. And 2) avoiding a technology lock-in on one vendor or platform and building a diverse skill set preserves technical sovereignty.
Therefore, building a proper multi-cloud management capability is key to a successful and efficient cloud strategy. To give some guidance about the technical approaches to build such a capability, we introduce the meshcloud multi-cloud management maturity model.

Multi-Cloud Maturity
The meshcloud multi-cloud management maturity model

Stage 0: No Multi-Cloud Management

The first stage is easy to reach: it is the natural stage when you do not take any action, maybe because you do not realize you have a multi-cloud challenge to solve. Its effects arise nevertheless.
Due to user disorientation, individual per-platform or ad-hoc processes get established, and you end up with a silo-structured organization burdened with unnecessary complexity due to different access routines for different platforms.
But that is not necessarily the greatest worry: people tend to shortcut complexity and lengthy processes, and public cloud vendors make it way too easy to access cloud resources within minutes with the simple (yet virtual) swipe of a company credit card. The nightmare has a name: shadow IT! You end up with individual, unconnected, unregistered public cloud accounts the company pays for but has no efficient way to manage. Everything comes down to individual initiative and discipline, which might go well, but it is anarchy: there is no way to ensure rules and regulations are followed and compliance is upheld. This has severe implications for security and regulatory duties; e.g. you cannot give a full report about data processing activities as required by the GDPR. Accounts might be shared among colleagues. When people leave the company, no one guarantees credentials are rotated. And so on.
Therefore, this stage is like purgatory: you suffer from the sins of the past and want to get out as soon as possible. To receive remission, action is necessary.

Stage 1: Manual / Semi-automated Multi-Cloud Management

So we realize: let's bring order to the chaos. We try to standardize, define common processes, and keep track of the tenants and workloads out there in the clouds or running within our own private cloud.
In Stage 1, we define a process to provision and configure tenants on the clouds we use. Someone is made responsible for a certain platform, to keep up security and governance and to provide users with what they need to get their cloud projects up and running.
Regarding implementation, we move quickly and take the tools we have and are familiar with: a ticketing system to order cloud tenants from the IT department, which someone then creates and hands over manually or with some scripting of their own. What about controlling? Let's keep track of cloud tenants in an Excel sheet, or export a list of projects per platform to see who's doing what and how much of it.
It's tedious and everyone hates it: those who want to use the cloud, because there are still different contacts for different clouds and because they have to wait so long to receive their credentials; and the operators, because they are doing dull work, and lots of it, instead of improving processes and evaluating more advanced technologies.
The cloud is all about automation, and the multi-cloud is, too! So this stage does not fulfill the promise of self-service, on-demand, scaling capacities with a few keystrokes, but falls behind the spirit that comes with cloud technologies: automate ALL THINGS.

Stage 2: Imperative / Workflow-based Multi-Cloud Management

With further investment we reach Stage 2, where we have integrated, automated processes to provision tenants and user accounts in all relevant cloud platforms. We might even have established a self-service where users are able to set up cloud projects 24/7 at their own pace and needs: a portal where they can order cloud projects or resources, and after a short while they are ready to run.
So all good now? At first, it might be. But over time, workflows and tenant configurations evolve, and after some months or years it becomes hard to tell which tenants were created and configured by which version of the process. Also, do people reliably remove resources when they are no longer needed? How are resources in different clouds connected, and how do they depend on each other? Hopefully the documentation is available and up to date; otherwise we again lose orientation between platforms and the projects consuming, and possibly spanning, them.
But what now?

Stage 3: Declarative Multi-Cloud Management

Meet declarative multi-cloud management! You already know the concept from the deployment of your workloads, where it goes by the name infrastructure as code. You describe the desired target state and the system works it out for you. Declarative multi-cloud management means the same, but on an organizational, cross-platform level: Organization as Code, as we framed it.
So you define which projects should exist on which platform and which users should have what kind of access. You save the new configuration in a versioned repository, and after a short while the whole multi-cloud federation converges to the new state you described. The version control enables you to audit every single change that happened to your multi-cloud landscape. This has several advantages:

  • Robustness: Distributed systems might suffer from weak network links which might cause process flows to stall and fail. However, with the target state clearly defined, the system can try again and again to finally reach the desired state.
  • Prevention of local changes: You know the “colleague is on vacation, but I need urgent access” situation, which might lead to a quick-and-dirty permission grant on cloud projects by an administrator. More often than not, those changes stick around longer than intended (human laziness is my witness). However, a declarative multi-cloud management system would roll back those changes at the next sync.
  • Decoupling of semantics and implementation logic: With a descriptive configuration you describe what you want, not how to do it. Therefore, flexibility remains with the system on how to implement reaching the desired state which means more freedom to improve the multi-cloud system over time.
  • Documentation and Auditing: If every configuration change is tracked in the manifest and version controlled, it is always easy to report, audit and track who had access to which system or data and how long, why and by whom.
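To make this concrete, a declarative multi-cloud configuration could look roughly like the following sketch. This is an illustrative, hypothetical format, not an actual meshcloud syntax; all project, platform and user names are examples:

projects:
  - name: webshop-prod
    platforms:
      - aws
      - cloudfoundry
    users:
      - email: alice@example.com
        role: project-admin
      - email: bob@example.com
        role: developer

A reconciliation loop then compares this desired state against the actual tenants and permissions in each cloud and creates, updates or removes whatever differs, exactly like infrastructure-as-code tools do for workloads.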

You see that declarative multi-cloud management has many advantages: it is not without reason that the paradigm turned out so successful in configuration management. Therefore, meshcloud put the declarative paradigm at the heart of its vision of an agile, yet full-control multi-cloud management. If you want to learn more, we are happy to start the conversation with you.
Reach out to us via mail, social media, phone or any events we attend. We are looking forward to hearing your thoughts!

The Cloud Project Lifecycle

The Cloud Project Lifecycle

How you implement cloud computing technologies matters

Cloud technologies are an essential tool to increase software delivery performance. They bring the necessary automation and scalability that enables developers to spin up instances within minutes, get their applications up and running and implement CI/CD routines to develop software faster and close to customer requirements. However, the State of DevOps Report 2018 found that it's not enough to just use cloud computing technologies, it matters how you use them. "Companies that leverage all of the essential cloud characteristics are 23 times more likely to be elite performers" – a group of development teams that delivers software at the highest levels.

In this context, we want to introduce the concept of the cloud project lifecycle to illustrate the stages a cloud project traverses, how this impacts your organization and what you need to do to leverage your cloud to the fullest.

Let's consider the following situation: Within the next 3 years you aim to migrate several thousand projects to the cloud. As you run a large variety of applications, you have decided to use more than one cloud platform, you have a multi-cloud strategy. Each of those projects will run through a lifecycle from its creation to its deletion. The way you design every activity within the lifecycle will have a significant effect on the efficiency of your cloud transformation.

The Cloud Project Lifecycle

The Cloud Project Lifecycle

Tenant Provisioning

This is what you will need to get started. Depending on the cloud platforms you work with, this activity corresponds to creating a cloud account in AWS, creating a namespace in Kubernetes or a space in Cloud Foundry.

Tenant Configuration

Each tenant can be configured individually. Configuration can include setting up Single-Sign-On (SSO) integration, quota setting, security policy definition or VPN setups. Different platforms may offer different capabilities here.

User Provisioning

Once your tenant is up and running you can invite users to start working in the platform. This includes managing their authorization and permissions.
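What this looks like in practice depends on the platform. In Cloud Foundry, for example, granting and later revoking a user's developer access to a space is a single CLI call each (the org, space and user names below are placeholders):

$ cf set-space-role alice@example.com my-org my-space SpaceDeveloper
$ cf unset-space-role alice@example.com my-org my-space SpaceDeveloper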

Resource Provisioning

With a user account in the cloud, developers can start to spin up resources and deploy applications. Resource usage will be metered and checked against the configured policies. Data has to be managed according to the corresponding laws (GDPR).

User Deprovisioning

Employees change roles or leave companies. When they do, their access to the cloud has to be revoked as well.

Tenant Deprovisioning

At some point a cloud account may have to be torn down, revoking authorizations and deleting resources.

How the cloud project lifecycle affects your organization

If you are facing a large cloud transformation process, you won't go through these activities only once or twice, you are operating at scale. There may be thousands of projects about to be migrated to the cloud. Tenants have to be re-configured continuously, users come and go and resources are provisioned multiple times per day. A lack of automation and standardized processes will result in immense inefficiencies, putting your cloud investments at risk. Just having cloud platforms available doesn't do the job. You'll need to integrate them into your organization and adapt your processes to the cloud-native way – a high level of automation to accelerate workflows and ensure consistency, self-service for developers to avoid dependencies and resulting bottlenecks and transparency to provide the necessary control on your systems.

What you can do to leverage your cloud to the fullest

Back to the cloud project life cycle, you should ask yourself these two questions for each of those activities:

1.) How long does this take in my environment?
2.) How many people are involved?

If the answer is anything other than "less than a minute" and "max one person", there is still room for improvement. Cloud management portals can help you manage these administrative processes. meshcloud offers a cloud management portal that unifies organizational processes across different cloud technologies. Thereby the platform helps large companies leverage their move to the cloud by significantly reducing the complexity of heterogeneous IT environments. It provides a self-service portal that facilitates configuration, user on- and offboarding and resource provisioning across clouds. Having this in place, introducing new technologies becomes a walk in the park. Find more information here.

6 Reasons for Multi-Cloud

One cloud is not enough: by now, 81% of companies follow Multi-Cloud strategies. That makes it likely that, on top of managing your cloud migration, you also have to take care of managing the use of multiple cloud platforms. But when done right, the benefits outweigh the possible administrative overhead in many ways. We asked our customers and came up with 6 reasons why the Multi-Cloud approach is a winning strategy:

Avoiding Vendor Lock-in

While all hyperscalers offer multiple datacenter locations across the world, enabling companies to spread risk and ensure availability even when using only a single provider, the fear of dependency still leads a lot of companies to the Multi-Cloud. Looking at the cloud-native landscape of the past couple of years, we see new platforms appearing on the horizon continuously. How can you make sure you are betting on the right horse if you decide to go for a single-cloud approach? (Spoiler: You can't, and here is why.)

Best-of-breed Approach

Companies run a large variety of applications. For example, there may be a mobile application along with a heavily frequented e-commerce platform. Then there are internal applications to manage travel expenses and sick days, and a new data science team uses collected data to run analyses to optimize resource management and fulfillment. The bigger the company, the greater the variation in cloud service requirements. In a best-of-breed approach, the company uses the best-suited infrastructure type and provider for each workload.

Cloud Migration

Let's think about banks or industrial companies. They have been running software for decades, long before cloud computing even existed. The move to the cloud is an essential step, as their business models are often at risk, raising the need to innovate fast. So they have two tasks to accomplish: 1) Build new applications on cloud infrastructure to enable scalability and fast adaptation to the market with practices like continuous deployment. 2) Migrate existing applications to the cloud to enable automation and gain efficiency. As a result, they have to deal with a very heterogeneous infrastructure during this cloud migration process. It makes sense to set up a system that can cope with multiple platforms and providers straight away, as this prepares for a future in which more cloud platforms will be integrated.


Mergers and Acquisitions

The big cloud vendors often offer better conditions if they get an exclusive contract in exchange. Therefore, cost-sensitive companies will compromise and choose a single provider, aware of the vendor lock-in they get themselves into as well as of the fact that the chosen cloud platform may not be the best candidate for all their workloads. It still makes sense to be prepared for Multi-Cloud scenarios: especially fast-growing B2C companies aiming for market leadership buy their smaller competitors. While these competitors may operate in the same manner, they may have chosen a different platform to start with. The systems then have to be integrated, either by migrating the workloads of the acquired company to match the rest or by enabling the parallel use of multiple platforms. Having thought of this beforehand and planned the system accordingly helps here.

Skill and Employer Attractiveness

Finally, people are different. They have different backgrounds and different preferences. And sometimes it will make sense to go for the platform your employees are best trained in and most familiar with. An example: It may be cheaper to run applications on IaaS. However, this requires your employees to be familiar with setting up and automating infrastructure using infrastructure-as-code. If you don't have these resources, you may be better off investing in a PaaS platform that may be more expensive in infrastructure costs, but better leverages existing developer resources. And it is not only the existing skill that matters: it has been found that developers are much happier if they get to work with their preferred tools. While this may sound like a nice-to-have, it actually impacts productivity as well as the attractiveness of the employer for new talent.

Cost Efficiency

Depending on the scale of your cloud spend, it may be beneficial to spread your applications across different platforms and leverage price differences between providers for different services. Hybrid cloud setups have even more room for optimization if a fine-granular metering and billing solution is set up for the private cloud environments. For smaller companies it may be beneficial to use multiple clouds, as vendors provide them with big amounts of free credits to use before having to pay.

Platform services model classification – be aware of what you need

Platform services play an increasingly important role in cloud infrastructures. They enable application operators to quickly stick together the dependencies they need to run their applications. For example, if you need a database when deploying your application, you just request a service instance for the database of your choice and connect it to your application. Done.

Specifications like the Open Service Broker API provide a standard interface to provide backing services such as databases or analytics to applications. However, the nature of a service offered by someone else to your application is not always that clearly defined and therefore you should pay attention to the exact contract the service offers.

At meshcloud we work with customers to integrate Open Service Broker API compatible services into a private multi-cloud marketplace. Based on that experience, we provide a compact check matrix for clear communication and understanding of what type of service you receive when requesting a service instance. The following abstract scheme is not necessarily complete (please comment if you have anything to add), but it gives a first idea of which questions to ask to ensure there is no misunderstanding between service owner and service consumer.

Service Model classification matrix



The gold standard, and the typical understanding of a platform service, is certainly the managed blackbox. However, there are cases where the other service models make sense, e.g. for highly customizable systems, let's say a Kubernetes cluster. Providing a K8s cluster as an unmanaged whitebox service would mean you get a fully provisioned K8s cluster and take over from there for further configuration and maintenance. You still save the time to set up and provision a cluster on your own, but don't have to bear the costs of a fully managed K8s cluster.

In any case, there should be no misunderstanding between service vendor and consumer as to what the level of support really is. Especially when procuring services becomes fast and easy and happens with a few clicks, simply assuming the vendor will take care of everything might create unpleasant surprises. Make sure you are aware of the exact service conditions, which are hopefully communicated transparently and easy to access.

Step-by-Step: Deploying a Java Spring App to Cloud Foundry

You are a developer and have been dreaming of someone taking care of deploying and operating your cloud-native applications? With Cloud Foundry you might have found a solution.

What is Cloud Foundry?

Cloud Foundry is a container-based open-source software for application deployment. It enables developers to deploy apps of all common programming languages, such as Java, Python, Node.js, .NET Core or Go within minutes. The platform takes care of deployment, scaling and administration of cloud-native applications throughout their lifecycle.

What you need to get started

  • An account to use Cloud Foundry. As Cloud Foundry is an open-source platform, there are multiple providers.
  • If you haven't yet, get started by installing the Cloud Foundry CLI. Type cf into your terminal to confirm that the installation was successful.
  • If not there already, you need to install Gradle.
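A quick way to confirm that both prerequisites are in place is to print their versions; both commands should succeed without errors:

$ cf --version
$ gradle -v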

So let's get our hands dirty now and deploy an application on the platform.

Step 1: Deploying an application to Cloud Foundry

In general, Cloud Foundry offers two ways to deploy applications: either the platform builds a container out of your application using the integrated buildpacks, or, in case you already have a Docker container up and running, you can deploy that Docker container to Cloud Foundry. We will choose the first way in this example and show how to deploy a Java Spring application.

To start, we need to log in to Cloud Foundry. If you are using meshcloud, you do this by choosing a location in the meshPanel and then executing the command shown in the CLI-Access tab.

It will be something like this, depending on the location you chose.

$ cf api

Now, we need to authenticate against this endpoint, by executing:

$ cf login --sso

You will be asked for a passcode, which you also get from the meshPanel (CLI-Access).

We will use an application provided within the Cloud Foundry sample apps on GitHub. It is an app to store music albums. While it has an integrated in-memory database, you can easily connect it to one of the databases in our marketplace for persistence. We will get there in a moment. Let's clone the code first:

$ git clone

and access the folder.

Once you navigated to the app folder, you can build the app:

$ ./gradlew clean assemble

And push it to the cloud:

$ cf push

And there you are: The app is running.

You should see something like this. Have a look at the manifest.yml file in the app folder. It contains the specifications of name, memory, route, etc. for your app.
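A minimal manifest.yml for an app like this looks roughly as follows. The values are illustrative; the actual file in the sample repo may differ:

---
applications:
- name: spring-music
  memory: 1G
  random-route: true
  path: build/libs/spring-music.jar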

You can reach the app's web interface by copying the URL into your browser. Changes in the app will be stored in-memory. However, this is no long-term solution: you are currently running only a single instance of your app, and a restart will lead to a loss of your changes. So what we are going to do next is bind the app to a persistent database and scale it out to multiple instances.

Step 2: Binding a Data Service to your application

A common microservice architecture consists of multiple apps and services like databases or message queues for communication. You can provision services from the Cloud Foundry Marketplace.

You reach the marketplace by executing:

$ cf marketplace

in your terminal. As a result you get a list of the services currently offered on the platform. Let's create a MongoDB service instance to bind to our application.

$ cf create-service MongoDB S musicDB

We specify the service, choose a plan ("S") and a name for the instance. The database will be created straight away. You will see it in your service list:

$ cf services
Getting services in org meshcloud-demo / space aproject as c9f7d64c-404d-4b29-b719-b2359f6c8157...

name              service              plan              bound apps               last operation
musicDB           MongoDB              S                                           create succeeded

Now we can bind the database to our app with a single command.

$ cf bind-service spring-music musicDB

This will cause changes within the app to be persistent.
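Note that the app only picks up the new binding, which is injected via its environment, after a restage. Assuming the app name spring-music from the sample:

$ cf restage spring-music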

Step 3: Scaling your application

As we saw in our manifest.yml file and the CLI, we started just a single instance of the application. If you have high-availability requirements, you should always run a minimum of two app instances. You can easily scale your application at any time. We will now scale our app horizontally to 4 instances.

$ cf scale spring-music -i 4

Alternatively, you can also scale vertically, increasing RAM or disk of your application instance.
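For example, to give each instance more memory and a larger disk limit (the values are examples; changing these limits restarts the app):

$ cf scale spring-music -m 1G -k 512M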


Congratulations! You did it.

The Cloud Foundry-ABC

If you are just about to run your first application in Cloud Foundry, this is the place to start. First of all: Congratulations! Cloud Foundry is a great technology, and it is amazing how it can make your life easier. If you feel like you have been a sysadmin even though you thought of yourself as a developer, I promise you will love Cloud Foundry. Sysadmin time is over; you are finally free to work on your code. Do you think this sounds like Docker or Kubernetes? Wait for it, it can get even easier than this.

In this article, we'll just get you familiar with the most common terms of the Cloud Foundry world. So once we dig deeper, you'll know what we are talking about.


Target

There are multiple public Cloud Foundry installations across the world; meshcloud offers a couple of them within Germany. So once you start working with Cloud Foundry, you have to define which of the installations you are communicating with. Each of these installations is called a target. In order to switch locations within the meshcloud Federation, you also have to switch the target.

For example

cf api

will lead you to our DARZ location in Darmstadt.


The CLI

"CLI" stands for command line interface. The Cloud Foundry CLI helps you interact with Cloud Foundry: push your applications, scale them, create services and so on. In order to use Cloud Foundry, you first download and install the Cloud Foundry CLI tools. Once installed, you can use your terminal or PowerShell to execute commands, which in the background communicate with the Cloud Foundry API via REST calls. Every call you make simply starts with cf.


Organizations

A Cloud Foundry organization defines a single tenant on the system. Within meshcloud, this is one customer account. In order to see the organization that you are currently working with in your terminal, just type

cf orgs

and the result will be something like this:


Getting orgs as xyz...



Spaces

Within meshcloud, a Cloud Foundry space represents a meshcloud project. You may have multiple spaces/projects within your organization. All your applications and service instances are scoped to a specific space/project. In order to see the spaces/projects you have access to within your organization, you can type: cf spaces


Services and the Marketplace

We differentiate between applications and services within Cloud Foundry. While you bring the application to the platform, meshcloud offers various stateful services for you to use. Services are offered in the so-called marketplace. To see the marketplace, just type cf marketplace and you will get a list of services to choose from. They are mainly databases and message queues. Each service is offered in various plans that may differ in size or performance. For example, there are shared database plans (our "S" plans) and dedicated database plans (our "M" and "L" plans) within meshcloud's Cloud Foundry.


And that's it. That is the basic vocabulary that you should be familiar with, when starting to use Cloud Foundry.

Featured image extracting key

Extract your public key from your private key

If you ever lose your public key but still have your private key, you can always compute your public key from your private key.

The following command prints your public key to stdout (with id_rsa being your private key file, e.g. under ~/.ssh):

ssh-keygen -y -f id_rsa

Or just pipe directly to a file with:

ssh-keygen -y -f id_rsa >

Now you can redistribute your public key wherever you need it.
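If you want to double-check that a recovered public key really belongs to a private key, you can compare the key material. The following self-contained sketch generates a throwaway key pair, recovers the public key and verifies that both match; in practice you would point -f at your real private key instead:

```shell
# Generate a demo RSA key pair in a temporary directory (no passphrase).
tmpdir=$(mktemp -d)
ssh-keygen -q -t rsa -b 2048 -N "" -f "$tmpdir/id_rsa"

# Keep only the key type and base64 blob of the original public key
# (the trailing comment field may differ between original and recovered keys).
cut -d' ' -f1,2 "$tmpdir/id_rsa.pub" > "$tmpdir/expected"

# Recover the public key from the private key.
ssh-keygen -y -f "$tmpdir/id_rsa" > "$tmpdir/recovered.pub"
cut -d' ' -f1,2 "$tmpdir/recovered.pub" > "$tmpdir/actual"

# Compare: identical key material means the recovery worked.
cmp -s "$tmpdir/expected" "$tmpdir/actual" && match=yes || match=no
echo "recovered key matches original: $match"

rm -r "$tmpdir"
```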

Any questions? Please comment.

Cloud Foundry Intro: How using PaaS can save you a lot of time

Learn why Cloud Foundry is useful for your application deployment and see how easy you can speed up your deployment process:

Read more