How to Implement a Lazy Tab Navigation System in Angular
In this blog post you'll learn why you'd want a lazy tab navigation system in Angular and how to implement one.
Why go for a lazy tab navigation system?
We at meshcloud provide a solution to manage cloud access, users and projects across cloud platforms. One of our main goals is to make interaction with the cloud as easy as possible. Besides that, we want to reduce the effort of management workflows to a minimum, because cloud management is a complex topic with a lot of items to manage.
But how do we want to achieve this?
This is mostly a user-facing topic. Therefore we've improved the user experience in our Angular SPA (Single Page Application) and decreased the interactions necessary for specific management workflows. It is important to be able to navigate between sections without lagging or freezing.
It is really easy to say that improving the user experience is the key. But what does it actually take to support user-friendly behavior?
We invested a lot of time in finding a proper solution and came up with a lazy tab navigation system, which covers all of the challenges mentioned above.
What is a lazy tab navigation system?
As the name suggests, the idea is a tab navigation system in which the content is loaded lazily. This enables us to implement scalable views with a lot of clearly separated sections. But first, let's look at the general tab navigation system.
1. Tab navigation system
The tab navigation system is a central component which coordinates tab creation and holds the tab state. We've decided to handle tab creation based on the route config. This makes total sense because routes are uniquely identifiable. This means we can automatically prevent the creation of multiple tabs that correspond to the same section. Besides that, we have the possibility to create deep links to specific tab sections, which enables us to reference them within other views. In general, we determine the tab state based on the activated route.
2. Lazy Loading
The other aspect is lazy loading. Lazy loading is a central Angular feature. It is a design pattern that loads NgModules as needed. For comparison, there is the option to load NgModules eagerly. With eager loading, all NgModules are loaded when the web application is accessed. This increases the bundle size, and for large applications it is not a best practice due to the load time. Eager loading is not an option for a tab navigation system because we have a lot of different sections, and we want to allow multiple sub-sections. It would mean loading every section's content up front. From a performance and UX perspective, this is not a good approach.
If we now combine the tab navigation system approach with the lazy loading feature we’ll get a really flexible and scalable pattern. The tab navigation system takes the role of describing the context. It is kind of a container which defines the available routes and manages the tabs. In conjunction with the lazy loading feature, we are able to load the content of the respective tab in a dedicated manner.
What is necessary to implement the lazy tab navigation system?
We've got an overview of the lazy tab navigation system from the conceptual perspective. Now we'll take a look at the technical realization and how we've implemented this approach.
We need a shared component which covers all criteria of the tab navigation system. This includes the tab creation based on the router config and tab selection based on the activated route. We call the component RouterOutletNavComponent.
Besides that we’ve defined an interface which describes each tab:
- displayName
- routerLink
- selected
Our interface declares additional properties, for example to show/hide a tab depending on a specific condition, or to attach a badge with countable information. But this is not relevant to the general implementation, so we'll leave it out for now.
We call the tab interface RouterOutletNavTab.
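Based on the properties listed above, a minimal version of this interface might look like the following sketch (the additional condition and badge properties mentioned before are left out):

```typescript
// Minimal sketch of the tab model described above. Only the three
// properties from the bullet list are included.
export interface RouterOutletNavTab {
  displayName: string; // human-readable label shown on the tab
  routerLink: string;  // route path segment the tab navigates to
  selected: boolean;   // whether the tab matches the activated route
}

// Example tab as it might be derived from the route config:
const projectsTab: RouterOutletNavTab = {
  displayName: 'Projects',
  routerLink: 'projects',
  selected: false
};
```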
Of course, we didn't implement the tab creation and tab selection logic within the component itself. We've implemented a service that exposes this functionality. This is a best practice that increases modularity and reusability; the component should only handle the user experience. With a clear separation we also increase the testability of the functionality. This approach should be followed every time ;)
We call the service RouterOutletNavService.
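To illustrate the idea, here is a hypothetical, framework-free sketch of the service's core logic. The real RouterOutletNavService works against Angular's route config and ActivatedRoute; the plain NavRoute input type and the URL-matching logic below are simplifying assumptions for illustration only:

```typescript
// Framework-free sketch of the RouterOutletNavService's core logic.
// The real service derives this data from the Angular route config and
// resolved session data; here we take plain inputs instead.
interface NavRoute {
  path: string;        // unique route path, used as the routerLink
  displayName: string; // label resolved from the session data
}

export interface RouterOutletNavTab {
  displayName: string;
  routerLink: string;
  selected: boolean;
}

export class RouterOutletNavService {
  /** Create one tab per child route; paths are unique, so no duplicate tabs. */
  initializeTabs(routes: NavRoute[]): RouterOutletNavTab[] {
    return routes.map(r => ({
      displayName: r.displayName,
      routerLink: r.path,
      selected: false
    }));
  }

  /** Select the tab whose routerLink appears in the current URL. */
  selectTab(url: string, tabs: RouterOutletNavTab[]): void {
    for (const tab of tabs) {
      tab.selected = url.includes(tab.routerLink);
    }
  }

  /** The first tab's route serves as the default child route. */
  findFirstChildRoute(tabs: RouterOutletNavTab[]): string | undefined {
    return tabs.length > 0 ? tabs[0].routerLink : undefined;
  }
}
```

The actual implementation returns observables and plugs into the router, but the tab bookkeeping follows this shape.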
Now let's combine the RouterOutletNavComponent, RouterOutletNavTab and RouterOutletNavService.
import { Component, Input, OnDestroy, OnInit } from '@angular/core';
import { ActivatedRoute, NavigationEnd, Router } from '@angular/router';
import { Observable, of, Subscription } from 'rxjs';
import { filter, switchMap, take, tap } from 'rxjs/operators';

@Component({
  selector: 'mst-router-outlet-nav',
  templateUrl: './router-outlet-nav.component.html',
  styleUrls: ['./router-outlet-nav.component.scss']
})
export class RouterOutletNavComponent implements OnInit, OnDestroy {

  @Input()
  public readonly styleClass: RouterOutletNavStyleClass = 'nav-child';

  public tabs: RouterOutletNavTab[];

  private sub: Subscription;

  constructor(
    private readonly router: Router,
    private readonly activatedRoute: ActivatedRoute,
    private readonly routerOutletNavService: RouterOutletNavService
  ) {
    this.tabs = this.routerOutletNavService.initializeTabs(this.activatedRoute);
  }

  ngOnInit() {
    /**
     * Select the initial tab and navigate to the child route.
     */
    this.setupInitiallySelectedTab()
      .subscribe((routerLink: string) => this.routerOutletNavService.selectTab(routerLink, this.tabs));

    /**
     * We listen to the router events to select the specific tab.
     */
    this.sub = this.router.events
      .pipe(
        filter((x): x is NavigationEnd => x instanceof NavigationEnd),
        switchMap((x: NavigationEnd) => {
          /**
           * If firstChild is available, a child route is already active
           * and we don't need to determine the first child tab.
           */
          if (this.activatedRoute.firstChild) {
            return of(x.urlAfterRedirects);
          }
          /**
           * If no child route exists, we'll determine the child route
           * and select the tab.
           */
          return this.navigateToFirstChildRoute();
        })
      )
      .subscribe({
        next: (routerLink: string) => this.routerOutletNavService.selectTab(routerLink, this.tabs)
      });
  }

  ngOnDestroy(): void {
    this.sub.unsubscribe();
  }

  private setupInitiallySelectedTab(): Observable<string> {
    /**
     * If children are applied, for example in case of a redirection,
     * we'll use the existing destination url.
     */
    const currentUrl = this.router.url;
    if (this.activatedRoute.firstChild) {
      return of(currentUrl);
    }
    return this.navigateToFirstChildRoute();
  }

  private navigateToFirstChildRoute(): Observable<string> {
    return this.routerOutletNavService.findFirstChildRoute(this.tabs)
      .pipe(
        take(1),
        /**
         * This side effect is necessary to apply the determined tab route.
         */
        tap((routerLink: string) => {
          const extras = {
            relativeTo: this.activatedRoute,
            replaceUrl: true
          };
          this.router.navigate(['./', routerLink], extras);
        })
      );
  }
}
We want to support not only a tab navigation system, but a lazy tab navigation system. So it is necessary to embed a RouterOutlet into the tab container. For the styling we use Bootstrap as an external dependency.
The corresponding HTML file would look like this:
<div [ngClass]="styleClass">
  <ul class="container nav nav-tabs">
    <ng-container *ngFor="let t of tabs">
      <li class="nav-item">
        <a class="nav-link" [class.active]="t.selected" [routerLink]="[t.routerLink]" [id]="t.routerLink">
          {{t.displayName}}
        </a>
      </li>
    </ng-container>
  </ul>
  <div class="tab-content p-4">
    <router-outlet></router-outlet>
  </div>
</div>
It is a valid use case to nest multiple lazy tab navigation systems. But for simplification we allow only one additional layer. Therefore we've defined two different CSS classes, 'nav-root' and 'nav-child', to tell the levels apart.
We call the style class type RouterOutletNavStyleClass.
export type RouterOutletNavStyleClass = 'nav-root' | 'nav-child';
Very well done. We've implemented our shared lazy tab navigation system.
Now we have to feed the tab navigation system with the corresponding data. The router link and whether a tab is selected are determined based on the route config and the activated route. It is really important to declare the available routes using Angular's lazy loading approach.
But how do we get the display name of each tab? Sure, we could use the route path name. But that name is our uniquely identifiable name; it could well be a human-unreadable string. From the user's perspective, it is not a good approach to use an identifier string as a tab display name. Besides that, we need a way to determine some data up front. Keep in mind that besides the display name we also have condition and badge information in place. So we need the possibility to attach specific session data to each tab. Therefore we declare a resolver which returns a RouterOutletNavSessionData observable.
We call the abstract resolver class RouterOutletNavSessionResolver.
export interface RouterOutletNavTabData {
  displayName: string;
}

export interface RouterOutletNavSessionData {
  [key: string]: RouterOutletNavTabData;
}

export abstract class RouterOutletNavSessionResolver implements Resolve<RouterOutletNavSessionData> {
  abstract resolve(route: ActivatedRouteSnapshot, state: RouterStateSnapshot): Observable<RouterOutletNavSessionData>;
}
Now we have everything in place to use the lazy tab navigation system within a specific use case.
Lazy tab navigation system in action
At this point it makes sense to apply the lazy tab navigation system to one of our use cases. We mentioned at the beginning that we provide a solution to manage cloud tenants. This includes, for example, customer management. Within a customer we can manage projects, users, financials and much more. So it makes total sense to use the lazy tab navigation system with one additional level to make the management as easy as possible.
We’ll only consider the customer root level. The nested levels are out of scope for now. But in general they follow the same approach.
Our top level component would be the CustomerManagementComponent. This component declares the RouterOutletNavComponent within the HTML file and applies the ‘nav-root’ style class.
...
<mst-router-outlet-nav styleClass="nav-root"></mst-router-outlet-nav>
...
Then we'll add all available routes to the CustomerManagementRoutingModule. Besides that, it's important to add a CustomerManagementNavSessionResolver to determine the session data up front. This provides the basis for the tab creation.
@Injectable({
  providedIn: 'root'
})
export class CustomerManagementNavSessionResolver extends RouterOutletNavSessionResolver {

  resolve(route: ActivatedRouteSnapshot, state: RouterStateSnapshot): Observable<RouterOutletNavSessionData> {
    const sessionData = {} as RouterOutletNavSessionData;
    sessionData['projects'] = this.getProjectsNavTabData();
    sessionData['access-control'] = this.getAccessControlNavTabData();
    sessionData['financials'] = this.getFinancialsNavTabData();
    ...
    return of(sessionData);
  }

  private getProjectsNavTabData(): RouterOutletNavTabData {
    return {
      displayName: 'Projects'
    };
  }

  private getAccessControlNavTabData(): RouterOutletNavTabData {
    return {
      displayName: 'Access Control'
    };
  }

  private getFinancialsNavTabData(): RouterOutletNavTabData {
    return {
      displayName: 'Financials'
    };
  }

  ...
}
const routes: Routes = [
  {
    path: '',
    component: CustomerManagementComponent,
    resolve: {
      session: CustomerManagementNavSessionResolver
    },
    children: [
      {
        path: 'projects',
        loadChildren: () => import('./projects').then(m => ...)
      },
      {
        path: 'access-control',
        loadChildren: () => import('./access-control').then(m => ...)
      },
      {
        path: 'financials',
        loadChildren: () => import('./customer-financials').then(m => ...)
      },
      ...
    ]
  }
];

@NgModule({
  imports: [RouterModule.forChild(routes)],
  exports: [RouterModule]
})
export class CustomerManagementRoutingModule { }
That's it. No additional steps are necessary. We can easily add tabs and nested lazy tab navigation systems to the CustomerManagementComponent. For example we could think about a settings section or projects overview section. There are no limits to the imagination.
As inspiration our customer management view looks like this:
The path to proper multi-cloud management – a maturity model
Cloud means Multi-Cloud
The cloud is the foundation of efficient IT operation in the digital age. More often than not, one cloud platform is not sufficient, for at least two reasons: 1) Different cloud platforms serve different purposes, and each has specific strengths and weaknesses, hence embracing the plurality of platforms in a best-of-breed manner is more productive than trying to fight it. And 2) avoiding a technology lock-in on one vendor or platform and building a diverse skill set preserves technical sovereignty.
Therefore, building a proper multi-cloud management capability is key to a successful and efficient cloud strategy. To give some guidance about the technical approaches to build such a capability, we introduce the meshcloud multi-cloud management maturity model.
The meshcloud multi-cloud management maturity model
Stage 0: No Multi-Cloud Management
The first stage is easy to reach – it is the natural stage when you do not take any action, maybe because you do not realize you have a multi-cloud challenge to solve. However, the effects arise nevertheless.
Due to user disorientation, individual per-platform or ad-hoc processes will be established, and you end up with a silo-structured organization burdened with unnecessary complexity due to different access routines for different platforms.
But that is not necessarily the greatest worry: people tend to shortcut complexity and lengthy processes, and public cloud vendors make it way too easy to access cloud resources within minutes with the simple (yet virtual) swipe of a company credit card. The nightmare has a name: Shadow IT! You end up with individual, unconnected, unregistered public cloud accounts the company pays for but has no efficient way to manage. Everything comes down to individual initiative and discipline, which might go well, but it's anarchy, and there is no way to ensure rules and regulations are followed and compliance is upheld. This has severe implications for security and regulatory duties, e.g. you cannot give a full report about data processing activities as required by the GDPR. Accounts might be shared among colleagues. When people leave the company, no one guarantees credentials are rotated. And so on.
Therefore, this stage is like purgatory – you suffer from sins of the past and want to get out as soon as possible. To receive remission, action is necessary.
Stage 1: Manual / Semi-automated Multi-Cloud Management
So we realize: let’s bring order to the chaos. We try to standardize, define common processes, keep track of tenants and workloads out there in the clouds or running within our own private cloud.
In Stage 1, we define a process to provision and configure tenants on the clouds we use. Someone is defined responsible for a certain platform, to keep security and governance up and to provide users with what they need to get their cloud projects up and running.
Regarding implementation, we move quickly and take the tools we have and are familiar with: a ticketing system to order cloud tenants from the IT department, which someone then creates manually or with some scripting of their own. What about controlling? Let's keep track of cloud tenants in an Excel sheet, or export a list of projects on a platform to see who's doing what and how much of it.
It's tedious and everyone hates it. Those who want to use the cloud hate it because there are still different contacts for different clouds and because they have to wait so long to receive their credentials. And operators hate it because they are doing dull work, and lots of it, instead of improving processes and evaluating further advanced technologies.
The cloud is all about automation, and the multi-cloud is, too! So this stage does not fulfill the promise of self-service, on-demand, scaling capacities with a few keystrokes, but falls behind the spirit that comes with cloud technologies: automate ALL THINGS.
Stage 2: Imperative / Workflow-based Multi-Cloud Management
With further investment we reach stage 2: integrated, automated processes to provision tenants and user accounts in all relevant cloud platforms. We might even have established a self-service where users are able to set up cloud projects 24/7 at their own pace and according to their needs. There is a portal where they can order cloud projects or resources, and after a while they are ready to run.
So all good now? At first, it might be. But over time, workflows and tenant configurations evolve and after some months or years it becomes hard to tell which tenants were created and configured by which version of the process. Also, do people reliably remove resources when they are not needed anymore? And how are resources in different clouds connected and how do they depend on each other? Hopefully, the documentation is available and up to date – otherwise, we still lose orientation between platforms and projects consuming and possibly spanning them.
But what now?
Stage 3: Declarative Multi-Cloud Management
Meet declarative multi-cloud management! You already know the concept from the deployment of your workloads, where it goes by the name infrastructure as code. You describe the desired target state and the system works it out for you. Declarative multi-cloud management means the same, but on an organizational, cross-platform level – Organization as Code, as we framed it.
So you define which projects should exist on which platform and which users should have what kind of access. You save the new configuration in a versioned repository, and after a short while the whole multi-cloud federation converges to the state you described. The version control enables you to audit every single change that happened to your multi-cloud landscape. This has several advantages:
- Robustness: Distributed systems might suffer from weak network links which might cause process flows to stall and fail. However, with the target state clearly defined, the system can try again and again to finally reach the desired state.
- Prevention of local changes: You know the "colleague is on vacation, but I need urgent access" situation, which might lead to a quick-and-dirty permission to cloud projects granted by an administrator. More often than not, those changes stick around longer than intended – human laziness is my witness. However, a declarative multi-cloud management system would roll back those changes at the next sync.
- Decoupling of semantics and implementation logic: With a declarative configuration you describe what you want, not how to do it. Therefore, flexibility remains with the system on how to reach the desired state, which means more freedom to improve the multi-cloud system over time.
- Documentation and Auditing: If every configuration change is tracked in the manifest and version controlled, it is always easy to report, audit and track who had access to which system or data, for how long, why and granted by whom.
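To make the idea more concrete, the desired-state description above could be sketched as a typed manifest. All field names, platform identifiers and values below are invented for illustration; a real Organization-as-Code system defines its own schema:

```typescript
// Hypothetical "Organization as Code" manifest, sketched as a typed
// TypeScript object. Names and values are illustrative only.
interface ProjectManifest {
  name: string;                                   // project identifier
  platforms: string[];                            // target cloud platforms
  users: { email: string; role: 'admin' | 'user' }[]; // desired access
}

const desiredState: ProjectManifest[] = [
  {
    name: 'web-shop',
    platforms: ['aws'],
    users: [{ email: 'alice@example.com', role: 'admin' }]
  },
  {
    name: 'data-lab',
    platforms: ['openstack'],
    users: [{ email: 'bob@example.com', role: 'user' }]
  }
];

// A reconciliation loop would compare desiredState against the actual
// tenants and converge the platforms toward it, retrying on failure –
// which is exactly where the robustness advantage above comes from.
```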
You see that declarative multi-cloud management has many advantages – it is for good reason that the paradigm became so successful in configuration management. Therefore, meshcloud put the declarative paradigm at the heart of its vision of an agile, but full-control multi-cloud management. If you want to learn more, we are happy to start the conversation with you.
Reach out to us via mail, social media, phone or any events we attend. We are looking forward to hearing your thoughts!
The Cloud Project Lifecycle
How you implement cloud computing technologies matters
Cloud technologies are an essential tool to increase software delivery performance. They bring the automation and scalability that enable developers to spin up instances within minutes, get their applications up and running and implement CI/CD routines to develop software faster and closer to customer requirements. However, the State of DevOps Report 2018 found that it's not enough to just use cloud computing technologies; it matters how you use them. "Companies that leverage all of the essential cloud characteristics are 23 times more likely to be elite performers" – a group of development teams that delivers software at the highest levels.
In this context, we want to introduce the concept of the cloud project lifecycle to illustrate the stages a cloud project traverses, how this impacts your organization and what you need to do to leverage your cloud to the fullest.
Let's consider the following situation: Within the next 3 years you aim to migrate several thousand projects to the cloud. As you run a large variety of applications, you have decided to use more than one cloud platform, you have a multi-cloud strategy. Each of those projects will run through a lifecycle from its creation to its deletion. The way you design every activity within the lifecycle will have a significant effect on the efficiency of your cloud transformation.
The Cloud Project Lifecycle
Tenant Provisioning
This is what you will need to get started. Depending on the cloud platforms you work with, this activity corresponds to creating a cloud account in AWS, creating a namespace in Kubernetes or a space in Cloud Foundry.
Tenant Configuration
Each tenant can be configured individually. Configuration can include setting up Single-Sign-On (SSO) integration, quota setting, security policy definition or VPN setups. Different platforms may offer different capabilities here.
User Provisioning
Once your tenant is up and running you can invite users to start working in the platform. This includes managing their authorization and permissions.
Resource Provisioning
With a user account in the cloud, developers can start to spin up resources and deploy applications. Resource usage will be metered and checked against the configured policies. Data has to be managed according to the corresponding laws (GDPR).
User Deprovisioning
Employees change roles or leave companies. When they do, their access to the cloud has to be revoked as well.
Tenant Deprovisioning
At some point a cloud account may have to be torn down, revoking authorization and deleting resources.
How the cloud project lifecycle affects your organization
If you are facing a large cloud transformation process, you won't go through these activities only once or twice, you are operating at scale. There may be thousands of projects about to be migrated to the cloud. Tenants have to be re-configured continuously, users come and go and resources are provisioned multiple times per day. A lack of automation and standardized processes will result in immense inefficiencies, putting your cloud investments at risk. Just having cloud platforms available doesn't do the job. You'll need to integrate them into your organization and adapt your processes to the cloud-native way – a high level of automation to accelerate workflows and ensure consistency, self-service for developers to avoid dependencies and resulting bottlenecks and transparency to provide the necessary control on your systems.
What you can do to leverage your cloud to the fullest
Back to the cloud project life cycle, you should ask yourself these two questions for each of those activities:
1.) How long does this take in my environment?
2.) How many people are involved?
If the answer is anything other than "less than a minute" and "max one person", there is still room for improvement. Cloud management portals can be a means to help you manage administrative processes. meshcloud offers a cloud management portal to unify organizational processes across different cloud technologies. Thereby the platform helps large companies leverage their move to the cloud by significantly reducing the complexity of heterogeneous IT environments. It provides a self-service portal that facilitates configuration, user on- and offboarding and resource provisioning across clouds. With this in place, introducing new technologies becomes a walk in the park. Find more information here.
6 Reasons for Multi-Cloud
One cloud is not enough: by now, 81% of companies follow Multi-Cloud strategies. This makes it likely that, on top of managing your cloud migration, you also have to take care of managing the use of multiple cloud platforms – but when done right, the benefits outweigh the possible administrative overhead in many ways. We asked our customers and came up with 6 reasons why the Multi-Cloud approach is a winning strategy:
Avoiding Vendor Lock-in
While all hyperscalers offer multiple datacenter locations across the world, enabling companies to spread risk and ensure availability even when using only a single provider, the fear of dependency leads a lot of companies to the Multi-Cloud. Looking at the cloud-native landscape of the past couple of years, we see new platforms appearing on the horizon continuously. How can you make sure you are betting on the right horse if you decide to go for a single cloud approach? (Spoiler: You can't, and here is why.)
Best-of-breed Approach
Companies run a large variety of applications. For example there may be a mobile application along with a heavily frequented e-commerce platform. Then there are internal applications to manage travel expenses and sick days and a new data science team uses collected data to run analyses to optimize resource management and fulfillment. The bigger the company the greater the variation in cloud service requirements. A best-of-breed approach describes an approach where the company uses the best suitable infrastructure type and provider for each workload.
Cloud Migration
Let's think about banks or industrial companies. They have been running software for decades, long before cloud computing even existed. The move to the cloud is an essential step, as their business models are often at risk, raising the need to innovate fast. So they have two tasks to accomplish: 1) Build new applications based on cloud infrastructure to enable scalability and fast adaptation to the market with practices like continuous deployment. 2) Migrate existing applications to the cloud to enable automation and gain efficiency. As a result, they will have to deal with a very heterogeneous infrastructure during this cloud migration process. It makes sense to set up a system that is able to cope with multiple platforms and providers straight away, as this prepares for a future in which more cloud platforms will be integrated.
Acquisitions
The big cloud vendors often offer better conditions if they get an exclusive contract in exchange. Therefore, cost-sensitive companies will compromise and choose a single provider, being aware of the vendor lock-in they get themselves into as well as the fact that the chosen cloud platform may not be the best candidate for all their workloads. It still makes sense to be prepared for Multi-Cloud scenarios. Especially fast growing B2C companies aiming for market leadership buy their smaller competitors. While these competitors may operate in the same manner, they may have chosen a different platform to start with. The systems have to be integrated, either by migrating the workloads of the acquired company to match the rest or by enabling the parallel use of multiple platforms. Having thought of this beforehand and planned the system accordingly will help here.
Skill and Employer Attractiveness
Finally, people are different. They have different backgrounds and different preferences. And sometimes it will make sense to go for the platform your employees are best trained in and most familiar with. An example: it may be cheaper to run applications on IaaS. However, this requires your employees to be familiar with setting up and automating infrastructure using infrastructure-as-code. If you don't have these resources, you may be better off investing in a PaaS platform that may be more expensive in infrastructure costs, but better leverages existing developer resources. It is not only the existing skill that matters: it has been found that developers are much happier if they get to work with their preferred tools. While this may sound like a nice-to-have, it actually impacts productivity as well as the attractiveness of the employer for new talent.
Cost Efficiency
Depending on the scale of your cloud spend, it may be beneficial to spread your applications across different platforms and leverage price differences between providers for different services. Hybrid Cloud setups have even more room for optimization if a fine-granular metering and billing solution is in place for the private cloud environments. For smaller companies it may be beneficial to use multiple clouds, as vendors provide them with big amounts of free credits to use before having to pay.
Platform services model classification – be aware of what you need
Platform services play an increasingly important role in cloud infrastructures. They enable application operators to quickly stick together the dependencies they need to run their applications. For example, when you deploy your application and need a database, you just request a service instance for the database of your choice and connect it to your application. Done.
Specifications like the Open Service Broker API provide a standard interface for offering backing services such as databases or analytics to applications. However, the nature of a service offered by someone else to your application is not always that clearly defined, and therefore you should pay attention to the exact contract the service offers.
At meshcloud we work with customers to integrate Open Service Broker API compatible services into a private multi-cloud marketplace. Based on that experience, we provide a compact check matrix for clear communication and a shared understanding of what type of service you receive when requesting a service instance. The following abstract scheme is not necessarily complete (please comment if you have anything to add), but it gives a first idea of which questions to ask to ensure there is no misunderstanding between service owner and service consumer.
Service Model classification matrix
The matrix spans two dimensions. The first is the degree of insight you have into the service:

- Whitebox: You see details of the service deployment, e.g. the service instance is deployed into an IaaS space you have access to.
- Blackbox: You only have a defined interface to control your service instance (e.g. the Open Service Broker API), but no insight into where or how the service is deployed or operated.

The second is the degree of operational responsibility the vendor takes:

- Unmanaged: The service vendor does not undertake any measures to ensure the service is running smoothly. You get a service instance provisioned and from there you're on your own with operations, but you don't need to start from scratch. You usually only pay for the bare resources consumed by the service.
- Operated: The service vendor monitors execution of the service itself and recovers the service when it fails. However, how this affects your application or your data is out of scope for the vendor; as a service consumer you need to add your own routines on top of the service instance.
- Managed: The service vendor takes over responsibility for availability and, often, data backup and recovery. Further, service updates are conducted by the service owner without your involvement as a service consumer. This is the most comfortable, but also most expensive way of consuming a service.

Combining the two dimensions yields six service models:

- Unmanaged Whitebox: The service provisions an instance in your space. You have full access to its internal wiring and start customizing it for your purposes from there. Example: an IaC template for a MySQL Galera cluster.
- Operated Whitebox: A tricky model, as the service vendor offers operation support, but service consumers are also able to access the service internals. E.g. the vendor offers joint internal monitoring and provides automatic recovery, but if you break anything, you need to recreate the service instance on your own. Example: AWS RDS.
- Managed Whitebox: Would normally not work, because the service vendor is responsible for smooth operations and therefore protects the internals from outside access. The exception is supported operations, where service operators support you in running your own systems. Examples: monitoring and incident response.
- Unmanaged Blackbox: Would not work, as no one would control the service instance. Example: an appliance as a service without an SLA.
- Operated Blackbox: The service vendor has automatic tooling in place to check service health and recover the instance. Service consumers cannot access the internals, so the vendor has exclusive control and hence should be able to provide a consistent service experience. Examples: DBaaS, MLaaS.
- Managed Blackbox: The most common perception of a platform service. You request a service instance and use it for your application without any further effort. Example: AWS DynamoDB.
The gold standard and typical understanding of a platform service is certainly the managed blackbox. However, there are cases where the other service models make sense, e.g. for highly customizable systems – let's say a Kubernetes cluster. Providing a K8s cluster as an unmanaged whitebox service means you get a fully provisioned K8s cluster and take over further configuration and maintenance from there. You still save the time to set up and provision a cluster on your own, but don't have to bear the costs of a fully managed K8s cluster.
In any case, there should be no misunderstanding between service vendor and consumer as to what the level of support really is. Especially when procuring services becomes fast and easy and happens with a few clicks, simply assuming the vendor will take care of everything can create unpleasant surprises. Make sure you are aware of the exact service conditions, which should be communicated transparently and be easy to access.
Step-by-Step: Deploying a Java Spring App to Cloud Foundry
You are a developer and have been dreaming of someone to take care of deploying and operating your cloud-native applications? With Cloud Foundry you might have found a solution to this.
What is Cloud Foundry?
Cloud Foundry is a container-based open-source software for application deployment. It enables developers to deploy apps of all common programming languages, such as Java, Python, Node.js, .NET Core or Go within minutes. The platform takes care of deployment, scaling and administration of cloud-native applications throughout their lifecycle.
What you need to get started
- An account to use Cloud Foundry. As Cloud Foundry is an open-source platform, there are multiple providers.
- The Cloud Foundry CLI. If you haven't installed it yet, do so now and type cf in your terminal to confirm that the installation was successful.
- Gradle, if it is not installed already.
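If you like, you can check all prerequisites in one go with a small shell snippet (the tool list below is just what this guide assumes):

```shell
# Check whether the tools used in this guide are on the PATH.
# cf = Cloud Foundry CLI, gradle = build tool, git = needed for cloning the sample.
for tool in cf gradle git; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: missing - please install it first"
  fi
done
```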
So let's get our hands dirty now and deploy an application on the platform.
Step 1: Deploying an application to Cloud Foundry
In general, Cloud Foundry offers two ways to deploy applications. You can either let the platform build a container out of your application and the integrated buildpacks or, in case you already have a Docker container up and running, you can deploy that Docker container to Cloud Foundry. We will choose the first way in this example and show how to deploy a Java Spring application.
To start, we need to log in to Cloud Foundry. If you are using meshcloud, you do this by choosing a location in the meshPanel and then executing the command shown in the CLI-Access tab.
It will be something like this, depending on the location you chose:
$ cf api https://api.cf.eu-de-netde.msh.host
Now, we need to authenticate against this endpoint, by executing:
$ cf login --sso
You will be asked for a passcode, which you also get from the meshPanel (CLI-Access).
We will use an application that is provided within the Cloud Foundry sample apps on Github. It is an app to store music albums. While it has an integrated in-memory database, you can easily connect it to one of the databases in our marketplace to get persistence. We will get there in a moment. Let's clone the code first:
$ git clone https://github.com/cloudfoundry-samples/spring-music.git
and access the folder.
Once you navigated to the app folder, you can build the app:
$ ./gradlew clean assemble
And push it to the cloud:
$ cf push
And there you are: The app is running.
You should see something like this. Have a look at the manifest.yml file in the app folder. It contains the specifications of name, memory, route, etc. for your app.
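For orientation, a minimal manifest.yml for this sample could look roughly like the following (treat the exact values as an illustration; the file in the cloned repository is the authoritative version):

```yaml
---
applications:
- name: spring-music
  memory: 1G
  random-route: true
  path: build/libs/spring-music-1.0.jar
```

With this file in place, a plain cf push needs no further arguments, because name, memory, route and artifact path are all taken from the manifest.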
You can reach the app's web interface by copying the URL into your browser. Changes in the app will be stored in-memory. However, this is no long-term solution, as you are currently only running a single instance of your app, and a restart will lead to a loss of your changes. So what we are going to do next is bind the app to a persistent database and scale it up to 4 instances.
Step 2: Binding a Data Service to your application
A common microservice architecture consists of multiple apps and services like databases or message queues for communication. You can provision services from the Cloud Foundry Marketplace.
You reach the marketplace by executing:
$ cf marketplace
in your terminal. As a result you get a list of services that are currently offered on the platform. Let's create a MongoDB service to bind to our application.
$ cf create-service MongoDB S musicDB
We specify the service (MongoDB), choose a plan (S) and a name (musicDB) for the database. The database will be created straight away. You will see it in your service list:
$ cf services
Getting services in org meshcloud-demo / space aproject as c9f7d64c-404d-4b29-b719-b2359f6c8157...
OK
name service plan bound apps last operation
musicDB MongoDB S create succeeded
Now we can bind the database to our app with a single command.
$ cf bind-service spring-music musicDB
This will cause changes within the app to be persistent. Note that for the binding to take effect, you need to restart or restage the app (cf restage spring-music).
Step 3: Scaling your application
As we saw in our manifest.yml file and the CLI output, we just started a single instance of the application. If you have high-availability requirements, you should always run a minimum of 2 app instances. You can easily scale your application at any time. We will now scale our app horizontally to 4 instances.
$ cf scale spring-music -i 4
Alternatively, you can also scale vertically, increasing the RAM or disk of your application instances (e.g. cf scale spring-music -m 2G).
Congratulations! You did it.
The Cloud Foundry-ABC
If you are just about to run your first application on Cloud Foundry, this is the place to start. First of all: congratulations! Cloud Foundry is a great technology used by developers, and it is amazing how much it can make your life easier. If you feel like you have been a sys admin even though you thought of yourself as a developer, I promise you will love Cloud Foundry. Sys admin time is over; you are finally free to work on your code. Do you think this sounds like Docker or Kubernetes? Wait for it, it can get even easier than that.
In this article, we'll just get you familiar with the most common terms of the Cloud Foundry world. So once we dig deeper, you'll know what we are talking about.
Target
There are multiple public Cloud Foundry installations across the world; meshcloud offers a couple of them within Germany. So once you start working with Cloud Foundry, you will have to define which of these installations you are communicating with. Each of these installations is called a target. In order to switch locations within the meshcloud Federation, you will also have to switch targets.
For example
cf api https://api.cf.eu-de-darz.msh.host
will lead you to our DARZ location in Darmstadt.
CLI
"CLI" stands for command line interface. The Cloud Foundry CLI will help you to interact with Cloud Foundry, to push your applications, scale them, create services and so on. In order to use Cloud Foundry, you will download and install the Cloud Foundry CLI tools first. Once you installed it, you can use your terminal or Powershell to execute functions, which in the background communicate to the Cloud Foundry API via REST calls. Every call you make to a function, simply starts with cf
.
Organization
A Cloud Foundry organization defines a single tenant on the system. Within meshcloud, this is one customer account. In order to see the organization that you are currently working with, just type
cf orgs
and the result will be something like this:
Getting orgs as xyz...
name
meshcloud-demo
Space
Within meshcloud, a Cloud Foundry space represents a meshcloud project. You may have multiple spaces/projects within your organization. All your applications and service instances are scoped to a specific space/project. In order to see the spaces/projects you have access to within your organization, you can type:
cf spaces
Services
We differentiate between applications and services within Cloud Foundry. While you bring the application to the platform, meshcloud offers various stateful services for you to use. Services are offered in the so-called Marketplace. To see the marketplace, just type:
cf marketplace
and you will see a list of services to choose from. They are mainly databases and message queues. Each service is offered in various plans, that may differ in size or performance. For example there are shared database plans (our "S" plans) and dedicated database plans (our "M" and "L" plans) within meshcloud's Cloud Foundry.
And that's it. That is the basic vocabulary that you should be familiar with, when starting to use Cloud Foundry.
Extract your public key from your private key
If you should ever lose your public key, but still have your private key, you can always compute your public key from your private key.
The following command prints your public key to stdout (with id_rsa being your private key file, e.g. under ~/.ssh):
ssh-keygen -y -f id_rsa
Or just pipe directly to a file with:
ssh-keygen -y -f id_rsa > id_rsa.pub
Now you can redistribute your public key wherever you need it.
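You can convince yourself that this round trip works by trying it on a throwaway key first (the file paths below are arbitrary demo paths, not your real key):

```shell
# Generate a throwaway RSA key pair without a passphrase (demo only!)
ssh-keygen -t rsa -b 2048 -N "" -C "demo" -f /tmp/demo_key -q
# Recompute the public key from the private key
ssh-keygen -y -f /tmp/demo_key > /tmp/demo_key.recovered.pub
# Compare key type and key material (the comment field may differ) -
# these two lines should be identical:
awk '{print $1, $2}' /tmp/demo_key.pub
awk '{print $1, $2}' /tmp/demo_key.recovered.pub
```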
Any questions? Please comment.
Cloud Foundry Intro: How using PaaS can save you a lot of time
Learn why Cloud Foundry is useful for your application deployment and see how easy you can speed up your deployment process:
No Pets Allowed – Embracing the Cloud is More Than Server Virtualization
The cloud comes along with new approaches to application architecture. One of those is described by the “pet vs. cattle” analogy.