In times of digital transformation, multi-cloud strategies are often implemented top-down. The CIO decides: “We are going to implement a multi-cloud strategy, because we want to avoid vendor lock-in to a single platform.” While this is a very legitimate reason for a multi-cloud strategy (more can be found here), the actual use of multiple cloud platforms touches many stakeholders within the organization. They have to be taken along on the journey, and their everyday work has to be considered for the strategy to bear fruit. Depending on your position within the company, your interest lies either in agility or in control. So while a developer strives to write new code rather than deal with repetitive approval tasks, IT managers and security officers fear losing control due to increasing complexity.
With the digital transformation, the role of software is growing for companies across all industries. Software delivery performance has become the key metric to improve in this context. If you want to be better than your competition, you have to build up the capability to develop new applications and bring them to market faster than they do. As an IT leader, it is your responsibility to provide the appropriate tools to your development teams, from cloud computing platforms like Kubernetes to state-of-the-art tooling for CI/CD, monitoring, logging etc. – the whole cloud-native stack. Keeping control of the costs incurred and making sure the tools are easily accessible to your teams to avoid shadow IT are also part of your responsibilities.
In times where software developers have become a scarce resource and enterprises compete for the best talent, providing an attractive work environment is key to success. Ask five developers what they consider important in their work environment: being productive and having the right tools to work with will be among their answers. Apart from bad tooling, organizational inefficiencies can be a cause of productivity losses. Dependencies between departments for routine tasks like the creation of cloud accounts, configuration changes or quota management can lead to bottlenecks and unnecessary idle time. Self-service capabilities that provide autonomy help to solve this and drive productivity as well as employee satisfaction. They are at the core of cloud computing technologies, according to the NIST definition.
For operations teams, the main goal is platform and service stability. Night shifts, or at least on-call duty, are common practice, as service disruptions carry a high cost risk: customers may not be able to purchase or use a service during downtimes. As a result, support incidents pile up, and a large amount of time has to be invested even after service recovery. To avoid these incidents, infrastructure has to be set up redundantly, aiming for high availability. Staff should be trained for failures to make sure everyone involved knows what to do in such high-pressure situations. An important prerequisite is a consistent environment that doesn’t depend on individuals – automation, version control and documentation are key here. But operations don’t only happen in emergency night shifts. During the day, ops teams handle the day-to-day work of providing a set of services to development teams. The reality, however, often looks different: instead of evaluating and provisioning new cloud services or working on automation, ad-hoc processes and manual routine tasks fill the operators’ days. This leads to inconsistencies and to impatient developers finding detours – shadow IT – to achieve their goals, which is a huge security risk. In multi-cloud environments, heterogeneous processes increase this complexity drastically if not taken care of, especially at scale. Again, reducing dependencies helps to avoid bottlenecks. Furthermore, it should be easy to provide services to developers. How? With a standardized framework for service provisioning like the meshMarketplace, operators can easily provide a service – e.g. a database or a logging framework – to all development teams. While the operators focus on the stability of each service as well as the addition of new services, the marketplace informs development teams about the service’s existence and boosts its reusability.
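To make the idea of a standardized provisioning framework concrete, here is a minimal sketch in the style of the Open Service Broker API, a common standard for marketplace-style self-service provisioning. The service and plan IDs below are hypothetical placeholders, not real meshMarketplace identifiers:

```python
import json
import uuid

# Sketch of an Open Service Broker style provisioning request – the kind of
# standardized interface a marketplace can expose so that every service
# (database, logging stack, ...) is requested the same way.
# The service_id and plan_id values below are hypothetical.

def build_provision_request(service_id: str, plan_id: str, org: str, space: str) -> dict:
    """Build the JSON body for PUT /v2/service_instances/{instance_id}."""
    return {
        "service_id": service_id,     # which service, e.g. a PostgreSQL offering
        "plan_id": plan_id,           # which plan/size of that service
        "organization_guid": org,     # who is requesting it
        "space_guid": space,          # target environment
    }

instance_id = str(uuid.uuid4())
body = build_provision_request("postgres-service", "small-plan", "team-a", "dev")
print(f"PUT /v2/service_instances/{instance_id}")
print(json.dumps(body, indent=2))
```

Because every service is provisioned through the same request shape, operators only maintain the broker behind it, while developers get a uniform self-service interface.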
Multi-cloud can be used as a tool to reduce cloud costs. In many cases, however, the tools to define, compare or evaluate the cost of different projects, applications or deployment environments are missing or immature. Costs often occur in large, vendor-dependent blocks that are hard to map and distribute to specific applications. Take the AWS or Azure cloud spend, for example: budgets are set aside for these positions, and action is only taken once they are exceeded. In these cases, DevOps teams are asked to manually check their resources to make sure they are all actually needed and no zombie workloads exist.
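Such a zombie-workload check is easy to automate once usage data is available. The sketch below uses a hypothetical inventory structure; in practice the cost and utilization figures would come from the provider’s billing and monitoring APIs:

```python
from dataclasses import dataclass

# Hypothetical inventory record; real data would come from billing/monitoring APIs.
@dataclass
class Workload:
    name: str
    monthly_cost: float      # e.g. USD per month
    cpu_utilization: float   # average utilization over the last 30 days, 0.0–1.0

def find_zombie_workloads(workloads, utilization_threshold=0.01):
    """Flag workloads that cost money but show virtually no usage."""
    return [w for w in workloads if w.cpu_utilization < utilization_threshold]

inventory = [
    Workload("web-frontend", 420.0, 0.35),
    Workload("legacy-batch", 180.0, 0.001),  # likely a zombie
    Workload("staging-db", 95.0, 0.0),       # likely a zombie
]

for w in find_zombie_workloads(inventory):
    print(f"{w.name}: ${w.monthly_cost:.2f}/month at {w.cpu_utilization:.1%} utilization")
```

Running such a check on a schedule replaces the ad-hoc “please verify your resources” emails with a recurring, auditable report.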
In private clouds, it gets even worse. Platforms used in private cloud contexts often don’t provide usage information or produce an invoice at all. As a result, companies don’t know the cost of deploying an application internally, let alone how to identify the most cost-efficient deployment environment for an application. To leverage the financial benefits of multi-cloud strategies, cost transparency is necessary: private clouds need usage metering and billing capabilities, and cloud spend has to be allocated to the applications actually using the resources.
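The allocation step itself is simple arithmetic once usage is metered: multiply each application’s metered consumption by an internal unit rate. The meters and rates below are illustrative assumptions, not real price lists:

```python
# Sketch of allocating a shared private-cloud bill to applications based on
# metered usage. Meter names and unit rates are illustrative assumptions.

USAGE = {  # metered consumption per application
    "shop-frontend": {"cpu_core_hours": 1200, "storage_gb_months": 50},
    "analytics":     {"cpu_core_hours": 3000, "storage_gb_months": 400},
}

RATES = {"cpu_core_hours": 0.04, "storage_gb_months": 0.10}  # internal unit prices

def allocate_costs(usage, rates):
    """Return the cost per application, derived from usage meters and unit rates."""
    return {
        app: round(sum(qty * rates[meter] for meter, qty in meters.items()), 2)
        for app, meters in usage.items()
    }

for app, cost in allocate_costs(USAGE, RATES).items():
    print(f"{app}: {cost:.2f}")
```

With per-application figures like these, comparing deployment environments by cost efficiency becomes a data-driven decision instead of guesswork.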
To successfully implement a multi-cloud strategy, the needs of all stakeholders have to be addressed. Multi-cloud management tools are a way to support this. The meshcloud multi-cloud platform reduces the complexity of multi-cloud environments. It provides agility to developers and operators while offering the visibility to keep control of cost and security. To learn more about our platform, feel free to contact us or schedule a demo with one of our cloud experts.