
Azure Landing Zone Comparison

If you are setting foot on the cloud adoption path, whether you want to develop your application from scratch (cloud-native) or migrate existing infrastructure to the cloud, the chances of coming across the term “Landing Zone” are fairly high.

In this blog post, we try to make your life a little easier by clarifying what a “Landing Zone” is and how you can choose between different tools to deploy a Microsoft Azure Landing Zone. If you use GCP, read the Google Cloud Landing Zone Comparison blog post.

What is a Landing Zone?

A landing zone is an environment for hosting your workloads, pre-provisioned through code. There are quite a few different areas and subjects that should be covered here, which we call Blocks. Based on our Cloud Foundation Maturity Model, you can quickly understand which CFMM blocks are crucial for your environment. Additionally, you can use this model as a starting point for designing or scaling your Landing Zone.

What does Landing Zone mean in Azure?

In Microsoft terms, “an Azure Landing Zone is the output of a multi-subscription Azure environment that accounts for scale, security, governance, networking, and identity. An Azure landing zone enables application migration, modernization, and innovation at enterprise-scale in Azure. This approach considers all platform resources that are required to support the customer's application portfolio and doesn't differentiate between infrastructure as a service or platform as a service.”

How can you deploy a Landing Zone in Azure?

In general, this is a choice that you have to make based on

  • Your organization's requirements
  • Whether you work in a team or alone
  • Your knowledge and expertise in Infrastructure as Code
  • Your preferences

As you can see in the chart below, we make two major distinctions here. The first is the toolset you can use from the Azure portal, which we call the Hybrid ClickOps approach; the second, as you may have already guessed, is using Infrastructure as Code, the GitOps approach. Both approaches have pros and cons, which you’ll find later in this blog.
Azure Landing Zone Workflow

First Approach - Hybrid ClickOps

If it is important for your company to have its baselines follow the provider’s recommendations, and you don’t want to put a lot of time into building a Landing Zone or don’t have enough platform engineers with the right expertise in your team, pursuing the Hybrid ClickOps approach (i.e. clicking through the Azure portal to create resources) is most likely the first idea that comes to mind. Microsoft has published a ready-made deployment experience called the “Accelerator”. It is based on the Azure landing zone conceptual architecture, which Microsoft claims fulfils most of their clients’ goals. With just a couple of clicks and a few inputs, you can have a landing zone that complies with most of Microsoft’s best practices at the same time. You can also choose to go enterprise and deploy it with multiple subscriptions, or go small and deploy all the required resources using only one subscription. It is worth mentioning that we call it Hybrid ClickOps since there are JSON ARM templates behind the scenes that deploy all the resources for you.

Second Approach - GitOps

If you are a platform engineer who was asked to build a landing zone for your company, working as a team or just by yourself, you will most likely prefer Infrastructure as Code (IaC) for most of the scenarios you are facing. As the most popular IaC tool at the time of writing is Terraform, we are going to take a closer look at two not entirely different instruments for that purpose.

When you search Google for Azure Landing Zone with Terraform, you’ll most likely land on one implementation approach. In reality, there are multiple implementation options that Azure presents, which are illustrated in this overview. As much as Microsoft explains the different options in their documentation, it can still be a daunting task for a platform engineer to figure out where to start. One thing we noticed is that the naming can be confusing: there is the “Azure Landing Zone Terraform Module”, which is also referenced as the “Enterprise Scale Terraform Module”. There are also the “CAF Terraform Modules”, which in turn include different implementation approaches (using a starter project with rover/standalone, or using a supermodule, both of which will be discussed further in this blog post).

These two Terraform options are not that different from each other, since the CAF Terraform modules are based on the Azure landing zone Terraform module (Enterprise Scale). The CAF Terraform modules split up the ALZ modules into different levels and add additional application landing zone related functionality, which makes them a more holistic, but also more opinionated, solution.

1. Azure landing zone terraform module - Enterprise Scale

The Azure landing zone Terraform module (also referred to as the Enterprise Scale Terraform module) is the approach Microsoft recommends to its customers. As a starting point, you can call a single module with input parameters and it will bring up almost 180 resources as a baseline for building landing zones. Those resources include a hierarchy of management groups (Decommissioned, Landing Zones, Platform, Sandboxes) with IAM resources and policies suitable for most enterprises, which can be further customized and expanded as an organization sees fit.
Azure Landing Zone Terraform Module

As you can see from the diagram directly above, it is based on the Azure landing zone conceptual architecture, but it focuses on the resource hierarchy. It shows what this TF module covers; anything greyed out is not covered by this module.

Examples and the considerations needed to execute this module are described in the wiki of their GitHub page.
These examples are designed to help build an understanding of how to use the module, ranging from basic deployments (level 100) covering the core resource hierarchy from Enterprise Scale, through to more advanced (level 300) scenarios.
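As an illustration, a minimal root-module call might look like the following sketch. The module source matches the published module; the version pin and the root_id/root_name values are assumptions you would adapt to your organization:

```hcl
# Minimal sketch of calling the Azure landing zone (Enterprise Scale) module.
# Version pin and root_id/root_name values are illustrative assumptions.
terraform {
  required_providers {
    azurerm = {
      source = "hashicorp/azurerm"
    }
  }
}

provider "azurerm" {
  features {}
}

data "azurerm_client_config" "core" {}

module "enterprise_scale" {
  source  = "Azure/caf-enterprise-scale/azurerm"
  version = "~> 1.0" # pin to the version you have actually tested

  root_parent_id = data.azurerm_client_config.core.tenant_id
  root_id        = "myorg"           # prefix for the management group hierarchy
  root_name      = "My Organization" # display name of the root management group
}
```

Running `terraform plan` against such a configuration is what produces the ~180 baseline resources mentioned above.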

Caveats:

  • The module does not create the necessary storage for your Terraform states - this is a drawback for teams not leveraging Terraform Cloud/Terraform Enterprise.
  • No step-by-step walk-through
  • Does not support different deployment modes out of the box, e.g. with an MCA setup, you cannot create new subscriptions and have to use pre-provisioned subscriptions.
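The first caveat means you have to bring your own state storage. One common approach, sketched here with hypothetical names, is to create a storage account up front and configure the standard azurerm backend:

```hcl
# The module does not create this storage for you: the resource group,
# storage account and container below (hypothetical names) must already exist.
terraform {
  backend "azurerm" {
    resource_group_name  = "rg-tfstate"
    storage_account_name = "sttfstatemyorg01"
    container_name       = "tfstate"
    key                  = "landing-zone.tfstate"
  }
}
```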

2. Azure CAF Terraform Landing Zones

This approach can be applied in two ways, both of which can be incorporated either standalone in your deployment or using the publisher’s recommended tool called rover.

Rover is a container that includes the tools required to deploy landing zones (e.g. az cli, ansible-playbook, terraform, etc.). It provides many benefits, such as a consistent developer experience and an easier transition to automation with CI/CD. More details can be found here.

The two ways are as follows:

  1. CAF supermodule
    This is a Terraform supermodule published in the aztfmod GitHub repository and verified by Microsoft, which takes a modular approach.
  2. Starter project
    This is a step-by-step guided approach to deploying Azure landing zones. It will guide you through preparing remote storage for your Terraform states and then continue with deploying your landing zones. At first, it might be a little difficult to find your way around the different files and folders in the repo, but you will get used to it eventually. The nice thing is that it generates an understandable Readme file for the next step that you can follow.

Another thing is that it segregates the environment into 5 different levels, and at each level you have a different state file. As a consequence, different teams with different levels of permission can work with the state files at the same time and only have access to their own respective level.
A deployment will typically contain:

  • A couple of level 0 landing zones. Level 0 contains core platform automation, like storage accounts, Key Vault, and RBAC
  • A few level 1 and level 2 landing zones, which are core platform governance and core platform connectivity respectively
  • Many level 3 landing zones, i.e. the application landing zone vending machine. This level usually includes components like virtual network blocks, virtual network peering objects to hubs, and route tables
  • Many level 4 applications, like an Azure Kubernetes Service cluster or API Management services and all their dependencies to deliver a service
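Putting the levels together, a typical CAF deployment can be pictured roughly like this (a sketch derived from the descriptions above, not an official layout):

```
level0  launchpad: core platform automation (state storage accounts, Key Vault, RBAC)
level1  core platform governance
level2  core platform connectivity (hubs, firewalls, private DNS, ...)
level3  application landing zone vending (VNets, peerings to hubs, route tables)
level4  applications (AKS clusters, API Management and their dependencies)
```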

Caveats:

  • Generally, it is harder to work with without using rover, since all the documentation is centered around this tool
  • You can use the modules standalone, but you should have an update process in place, so that when one module gets updated, all the related modules receive the updates as well
  • A subscription vending machine is a mechanism to generate subscriptions when you have an Enterprise Agreement or Microsoft Customer Agreement, but its documentation is not published yet

Comparing the Options

For this comparison, we assign 1 to 3 ⭐ to the different aspects we look at for these options. If a certain aspect is not covered at all, we assign a ❌. Additionally, you can find some details about why we rated an aspect the way we did. These details also provide a deeper insight into how these aspects are covered by the options.

Feature Comparison

The feature comparison is based on the Cloud Foundation Maturity Model (CFMM). You can read details of what these blocks mean on the CFMM website. You can also follow the links in every row, which redirect you directly to the details of the corresponding CFMM block.

The four options compared below are: the Azure landing zone accelerator (Enterprise-scale for small enterprises), the Azure landing zone accelerator (Enterprise scale), the Azure landing zone terraform module (Enterprise scale), and the Azure CAF Terraform modules. Where a rating was only given for some of the options, only those are listed.

Resource Hierarchy
  • Accelerator (small enterprises): ⭐⭐⭐ It creates a nice resource hierarchy with different levels. There is the Platform MG, dedicated to all the services and resources that need to be managed centrally, and the Landing Zones MG for the environments with internal or external connections.
  • Accelerator (Enterprise scale): ⭐⭐⭐ Same as the small-enterprises accelerator.
  • Terraform module: ⭐⭐⭐ The resource hierarchy it creates is based on the Azure landing zone conceptual architecture as part of the “Core Resources”. The hierarchy is meant to be customized under the Landing Zones level.
  • CAF Terraform modules: ⭐⭐⭐ Same as the Terraform module.

Resource Policies
  • All four options: ⭐⭐⭐ Assign different policies and initiatives based on the Microsoft Cloud Adoption Framework (check here). Some examples: deploy activity logs and keep them in a Log Analytics workspace, enable Microsoft Defender for different services, enable Microsoft Sentinel, prevent usage of public IPs, etc.

Centralized Audit Logs
  • Both accelerators: ⭐⭐⭐ Deploy a Log Analytics workspace and enable logging through policy.
  • Terraform module and CAF Terraform modules: ⭐⭐⭐ Deploy a Log Analytics workspace that enables centralized audit logging as part of the “Management Resources” module.

Cloud Tenant Tagging
  • Terraform module: ⭐ Provides a default_tags variable that can be adapted and applied to multiple resources and resource groups. Additionally, base_module_tags includes a tag that shows the deployment source (i.e. terraform); this is enabled by default but can be disabled with disable_base_module_tags. One downside is that it is not clear from the module structure which Azure objects will be tagged. There also aren’t any policies forcing the tags to be inherited.
  • CAF Terraform modules: ⭐⭐⭐ You can set multiple parameters here, like inherit_tags, global_tags_propagated, and tags based on a set of common criteria (naming convention, prefixes, region of the deployment, name of the environment, tags inheritance settings, etc.).

Tenant Provisioning/Deprovisioning
  • CAF Terraform modules: ⭐⭐⭐ There is a mechanism called the subscription vending machine to create subscriptions for EA and MCA. You can give it the name of a subscription under the subscriptions setting and it will re-use your current subscription or create one.

Playground/Sandbox Environments
  • All four options: ⭐ A sandbox management group will be created without any special policy.

Privileged Access Management (PAM)
  • CAF Terraform modules: ⭐⭐ Service principals are created with privileges only on each specific level; you can impersonate them to deploy modules from that level.

Service Account Management
  • No rating was given for any of the options.

Virtual Networks
  • Both accelerators: ⭐⭐⭐ Can deploy hub and spoke with Azure Firewall, hub and spoke with your own third-party NVA, or Virtual WAN (Microsoft managed).
  • Terraform module: ⭐⭐⭐ By adding deploy_connectivity_resources = true to the module’s input parameters, the module sets up a hub network with firewall and gateway subnets. These can be further customized by tweaking some networking settings; more details here.
  • CAF Terraform modules: ⭐⭐⭐ This can be implemented by running different modules in Level 2/connectivity. Modules cover Azure Firewall, hub and spoke, Virtual WAN, private DNS, etc.

On-Prem Connect
  • Accelerator (small enterprises): ⭐⭐ You can choose to deploy on-premises connectivity using Virtual WAN or Azure hub and spoke.
  • Accelerator (Enterprise scale): ⭐⭐ Same as above; here you can also define a subscription specific to this connectivity appliance.
  • Terraform module: ⭐⭐⭐ You can connect to your on-premises networks by choosing the matching option and configuration. You have the option of using Virtual WAN and then making the connection to your on-premises networks via VPN or ExpressRoute. For using Azure Arc, you have to understand the key network design considerations and recommendations for working with Azure Arc, include Azure Arc-enabled resources, and add them to your landing zone.
  • CAF Terraform modules: ⭐⭐⭐ You can connect to your on-premises networks by choosing components like a classic virtual-network-based hub and spoke, Azure Virtual WAN, an Azure Virtual WAN regional hub, site-to-site, point-to-site and ExpressRoute connectivity objects, or third-party Network Virtual Appliances, which reside in Level 2. For using Azure Arc, you have to understand the key network design considerations and recommendations for working with Azure Arc and add them to your landing zone.

Cloud Interconnect
  • Accelerator (small enterprises): ⭐⭐ You can choose to deploy the connection between clouds using Virtual WAN or Azure hub and spoke.
  • Accelerator (Enterprise scale): ⭐⭐ Same as above; here you can also define a subscription specific to this connectivity appliance.
  • Terraform module: ⭐⭐⭐ You can set up your connection between clouds by choosing the matching option and configuration. You have the option of using Virtual WAN and then using a site-to-site VPN or any other means afterwards.
  • CAF Terraform modules: ⭐⭐⭐ You can set up your connection between clouds by choosing components like a classic virtual-network-based hub and spoke, Azure Virtual WAN, an Azure Virtual WAN regional hub, site-to-site, point-to-site and ExpressRoute connectivity objects, or third-party Network Virtual Appliances, which reside in Level 2.

Managed Key Vault
  • CAF Terraform modules: ⭐⭐ Creates a key vault at each level for storing the credentials of the service principals, which are used to impersonate and access only the respective level.

Automation in CI/CD
  • Both accelerators: ⭐⭐ With Azure DevOps or GitHub Actions, specifying a security principal.
  • Terraform module: ❌ Has to be set up manually by yourself.
  • CAF Terraform modules: ⭐⭐⭐ There is thorough documentation about how to use rover with CI/CD tools.

Non-Functional Comparison

The same four options as above are compared below.

Extensibility/Modularity/Scalability
  • Accelerator (small enterprises): ⭐ It deploys around 73 resources in one subscription across different resource groups. Scalability is not something we expect from this tool, but we can change some parameters or pass some JSON templates to customize it for our needs. In general, Microsoft recommends this approach for smaller companies without many resources.
  • Accelerator (Enterprise scale): ⭐⭐ It deploys around 73 resources in 4-5 subscriptions: 3 of them for the platform landing zones (i.e. Management, Identity, Connectivity) and the other 2 for your application landing zones (i.e. Corp and Online). Scalability is not something we expect from this tool, but we can later change some parameters or pass some JSON templates to customize it for our needs.
  • Terraform module: ⭐⭐ The approach is modular. Extending it, for example by inserting new custom modules, is not documented and can be a daunting task. You can customize landing zones (under the Landing Zones management group) by adapting the module’s input parameters.
  • CAF Terraform modules: ⭐⭐⭐ This approach is completely modular; you can even use the modules standalone by feeding values to the variables in the supermodule. Or use rover, which incorporates a YAML file called ignite.yaml: a config file containing values for the variables that you can easily change, from which the related files are generated. It uses “infrastructure as data”, meaning you only need to specify data and it will do the rest.

Terraform
  • Terraform module: ⭐⭐⭐ All Terraform resources can be created by calling one module, Azure/caf-enterprise-scale/azurerm, and supplying it with a wide range of input parameters.
  • CAF Terraform modules: ⭐⭐⭐ There are a lot of different modules and examples covering most aspects, and they can be called directly from Terraform and the azurecaf provider by aztfmod.

Learning Curve
  • Both accelerators: ⭐⭐⭐ Deploying a minimum-featured landing zone based on best practices for security and governance is not a hard thing to do. There are a manageable number of options and checkboxes, so you will have some idea of what it will implement.
  • Terraform module: ⭐ Steep learning curve. Its documentation does not provide a walkthrough. It does provide examples that can be used as is, but it is not easy to dig deeper into what happens under the hood by going through the submodules.
  • CAF Terraform modules: ⭐⭐ There is quite good documentation showing the steps for running it with rover, but for using Terraform standalone you won’t find much information.

Community
  • Both accelerators: ⭐ There isn’t much discussion around them.
  • Terraform module: ⭐⭐ Their repo has 300 stargazers at the time of writing. They also have a community on Gitter where you can find answers to encountered issues, or ask your own questions.
  • CAF Terraform modules: ⭐⭐ Their repo has 285 stargazers at the time of writing. They also have a community on Gitter where you can find answers to encountered issues, or ask your own questions.

Time To Success
  • Both accelerators: ⭐⭐⭐ Very quick. If you know what your environment should look like, you can have a safeguarded environment up and running in less than an hour.
  • Terraform module: ⭐⭐ Quite quick to get up and running using their level 100 example (default configuration). Creating the initial ~180 resources takes around 35 minutes.
  • CAF Terraform modules: ⭐⭐ It needs some time to understand it first; then, with rover, it takes around 2-3 hours to have an environment with some policies and hierarchies up and running.

Conclusion

If you neither have much time to start your journey in Azure nor the required skills to develop your own landing zone with IaC, the Azure landing zone accelerator (Enterprise Scale) could be the best choice. Otherwise, you are better off going with one of the IaC solutions.

The choice of which Terraform-based tool to use has to be made based on an organization’s roadmap, but there are some aspects to consider:

Choose Azure landing zone terraform module (Enterprise-Scale) when:

  • You are working with a company that seeks Microsoft’s support, since this module is what Microsoft recommends to its customers
  • You only need a platform landing zone in place, or as a starting point
  • The foundation it deploys is enough, and you are able to further tailor it to your needs
  • You or your company only trust what the provider itself releases (the CAF modules are considered a third-party approach)

Choose CAF terraform modules when:

  • You would like to benefit from rover, because:
    • It contains the versioned tool set you need to apply landing zones
    • It contains Rover Ignite, which ingests YAML files as templates and generates both the tfvars files and readme files
    • It integrates natively with Visual Studio Code and GitHub Codespaces
    • It helps you store and retrieve Terraform state files transparently on an Azure storage account
    • It facilitates the transition to CI/CD
  • You want an application-level landing zone. A landing zone for an application is split between two levels:
    • Level 3 includes components of an application that are typically delegated to platform operations teams; you can generate level 3 landing zones with the application landing zones vending machine
    • Level 4 contains the application configuration delegated to application teams, for example an Azure Kubernetes Service cluster or API Management services; you can get these from the Azure landing zones solution accelerators, a custom solution implemented using the CAF modules, or any other Terraform code
  • You like its step-by-step guide and more thorough documentation
  • You don’t want to spend your time on segregating Terraform state and building team collaboration mechanisms, since it provides these features out of the box
  • You want to incorporate some of its specific modules into your current environment

Did this blog post help you decide which Landing Zone solution you will apply to your Azure tenant? If you have already gathered your own insights into building an Azure Landing Zone and want to back up, add to, or discuss the insights from this blog post, please leave a comment! We look forward to hearing about your experience.


Cloud Foundry Fail Over

They are among the most feared events in day-to-day business, and yet everyone has had to deal with them: server outages. Errors are human, and no data center in the world is immune to them. Sooner or later it hits everyone. But anyone who dismisses the problem, with its lost revenue, unhappy customers, and monetary damage, as force majeure may be judging a little too hastily: with a fail-over routine, moving your application from one data center to another is a matter of just a few minutes. With our (so far) two locations and our meshStack, we already provide everything needed to make applications fail-safe and redundant, without having to sit down at the drawing board and rethink your infrastructure. If the worst comes to the worst, the location of an application can be changed within a few minutes, without resources having to be kept permanently geo-redundant. Not only is the move itself fast, so is the setup. This way we bring high availability to every company, whether a one-man show or a large corporation. Highly critical infrastructures can also be set up permanently geo-redundant, but we do not cover that here.

Three Steps to Peace of Mind

  1. Set up automatic backups of your database
  2. Replicate the infrastructure in another location
  3. Restore the backup

First Things First: DNS TTL

If you want to be able to replace your infrastructure quickly, you naturally also need a flexible DNS entry. To ensure a transition that is as smooth and fast as possible, it is necessary to set the TTL (Time To Live) as low as reasonably possible. The TTL value defines how long the corresponding records are kept in DNS caches.
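As an illustration (the hostnames are made up), a record with a 60-second TTL in BIND zone-file syntax looks like this:

```
; keep the TTL low so clients re-resolve quickly after a fail-over
app.example.com.  60  IN  CNAME  my-app.cf.eu-de-darz.msh.host.
```

After a fail-over, you only change the CNAME target, and clients pick up the new location within at most 60 seconds.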

Setting Up Automatic Backups

At this point, I assume that you have already deployed your application to Cloud Foundry.

First, log in with the Cloud Foundry CLI and get an overview of the running services.

cf services

From the list of running services, pick the database service that you want to replicate.

cf service NAME

Now take the command above and replace "NAME" with the name of your service. Cloud Foundry should return all kinds of details about the specified service. The first section is actually what interests us most. It should look something like this:

Service instance: todo-db
Service: PostgreSQL
Bound apps: todo-backend
Tags:
Plan: S
Description: PostgreSQL Instances
Documentation url:
Dashboard: https://postgresql.cf.eu-de-darz.msh.host/v2/authentication/************

What we are after is the Dashboard entry. We copy the URL into our browser window and land in the database's backup manager. Here we have the option of storing cyclic backups of the database in a Swift container. OpenStack Swift is our service for object storage; it is perfectly suited for storing files such as the database dump. Creating a Swift container is child's play: in the service menu of the meshPanel, navigate to Objects under Storage and enter the name of the container. Important note: since an outage in one location can also affect the Swift storage there, create the Swift container in the other location.

Now that you are equipped with a Swift container, we still need to register it in the backup tool. The backup tool uses the Swift API, and that is exactly why we now fetch the access credentials from the meshPanel. In the meshPanel, select the location in which you created your Swift container and navigate to the bottom item in the sidebar, "Service User". There, enter a description and select OpenStack from the dropdown menu. The service user is created, and the browser downloads a .txt file with the login credentials. Caution: the credentials are handed out only once; store them safely, or create a new service user if you lose them.

This text file now contains all the necessary information, so we switch back to the database dashboard. Under "Backend Endpoints", select the "Create File Endpoint" button and open the text file created in the previous step. To keep things short, here is a small mapping of which elements from the text file (left) belong in which fields of the web interface (right):

API Endpoint --> Authentication URL
Username --> Username
Password --> Password
OS_USER_DOMAIN_NAME --> Domain
OS_PROJECT_NAME --> Project

Finally, enter the name of the container you created in one of the previous steps.

Creating a Backup Plan

We have now brought together all the components needed to create backups. Finally, we want to create a backup plan that periodically stores backups of our database in the container we just linked.

To do this, we go back to the database dashboard and select the "Create Backup Plan" button. For the backup frequency, use the Spring cron syntax. With the expression 0 0 * * * *, for example, your backup job runs hourly. The "Retention Period" specifies how many backups should be kept, and the "Retention Style" defines how that value is to be interpreted: if you choose a "Retention Period" of two and a "Retention Style" of "Day", for example, the backups of the last two days are kept. In the "File Destination" dropdown menu, select the Swift container you just linked.
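To make the retention semantics concrete, here is a small illustrative sketch (a hypothetical helper, not the backup tool's actual code) of how a "Retention Period" of two with a "Retention Style" of "Day" filters backups:

```python
from datetime import datetime, timedelta

def backups_to_keep(backups, retention_period, retention_style, now):
    """Illustrative retention filter.

    With retention_style "Day", backups from the last `retention_period`
    days are kept; any other style is treated here as a plain
    keep-the-newest-N count, as one plausible interpretation.
    """
    if retention_style == "Day":
        cutoff = now - timedelta(days=retention_period)
        return [b for b in backups if b >= cutoff]
    return sorted(backups, reverse=True)[:retention_period]

# Eight backups taken every 12 hours; Retention Period 2, Style "Day":
now = datetime(2023, 1, 10, 12, 0)
backups = [now - timedelta(hours=h) for h in range(0, 96, 12)]
kept = backups_to_keep(backups, 2, "Day", now)
# only the backups at most two days old remain
```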

Using backup plans and different file destinations, you can increase the redundancy of your backups and, for example, back up to Swift containers in several data centers at the same time.

In Case of Trouble

If the worst comes to the worst and the data center in which your application runs is no longer reachable, you can simply replicate your infrastructure in the other data center.

Replication

To do this, log in to the meshPanel and select the alternative location, then log in via the Cloud Foundry CLI. First, create the database service that your application needs.

cf marketplace #shows the services available in the marketplace
cf create-service MySQL S Name #creates a MySQL service of plan size S named "Name"

The new infrastructure initially contains only an empty database, which will be restored in the next step.
Besides the database, your application is of course also needed. Since you already deployed the application in the old infrastructure, all the adjustments required for Cloud Foundry are already done. So you simply navigate with your terminal to the folder in which the application is stored and run the deployment with cf push.

After your application has been deployed successfully, you still need to connect it to your freshly created database service. Caution: the bindings are often already defined in the application's "manifest.yml". Make sure that the name of your database service matches the one in the manifest.yml, or change the manifest afterwards.

cf bind-service myapp mydb

Restoring the Backup

Then use cf service Name to display the dashboard URL, open the dashboard, and enter your credentials from the service user's ".txt" file under the Restore section. The schema was already described in detail above and is identical here. Finally, click Restore. Your database should now be in the same state as the one currently affected by the outage.

Changing the DNS Entries

Now that your application is running on another server, you of course also need to switch over the DNS entries.