AWS Account Deletion

How to Delete AWS Accounts (via UI & API)

Two ways to delete AWS accounts

In this article, we will explain how to delete AWS Accounts.

There are two convenient ways:

  1. Via the user interface of AWS Organizations, using the account-closure feature introduced at the end of March 2022.
    This way of deleting AWS accounts is great, since you can manage all account closures directly from the AWS root account. You are not required to log into each individual member account.
  2. By automating the AWS account deletion. This is possible both via the REST API and via the AWS CLI.

1. Delete AWS accounts via the user interface

To be able to close an AWS account, you will need administrator access to the so-called root account (the management account of your AWS organization). This account hosts all of the individual member accounts, one of which you will delete.

  • Go to the AWS organizations page where you can see a hierarchy of all AWS accounts in your organization. Find the AWS account that you want to delete and open it.

AWS Account Overview

  • In the detailed view of the AWS Account, you will find a “Close” button on the top right. Click on it.

AWS Account Deletion

  • This will open up a form that verifies whether you really want to close the AWS account. It will also make you aware of some of the conditions that apply (more on that below).

2. Delete AWS accounts via API or CLI

Good news for those of you who love automation: this time, AWS also developed API and CLI methods to close accounts! Read on to see how this works.


The process for the AWS CLI is straightforward. Grab the 12-digit AWS account ID and execute the following command:

# How to use
aws organizations close-account --account-id <value>

# Example
aws organizations close-account --account-id 123456789012

Learn more about this CLI command in the AWS CLI documentation.


Using the AWS API is also an option. AWS introduced a new CloseAccount action for this purpose. You only need the AccountId to execute this operation. Read more in the AWS API documentation on how to use the operation exactly.
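For scripting, the same CloseAccount action is also exposed through the AWS SDKs. Below is a minimal sketch using boto3's Organizations client. The `validate_account_id` helper is our own addition, not part of any AWS API, and the sketch assumes credentials for the organization's management account are already configured:

```python
import re

def validate_account_id(account_id: str) -> bool:
    """AWS account IDs are exactly 12 digits."""
    return bool(re.fullmatch(r"\d{12}", account_id))

def close_aws_account(account_id: str) -> None:
    """Close a member account via the Organizations CloseAccount action.

    Must run with credentials of the organization's management (root) account.
    """
    if not validate_account_id(account_id):
        raise ValueError(f"not a 12-digit AWS account ID: {account_id!r}")
    # Imported lazily so the validation helper works without boto3 installed.
    import boto3
    client = boto3.client("organizations")
    client.close_account(AccountId=account_id)
```

Like the CLI command, calling `close_account` kicks off the 90-day post-closure period described below; it does not delete the account instantly.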

Limitations of AWS account deletion

There are some limitations to deleting AWS accounts. We will list them here for you:

  • Closing an AWS account does not mean it gets deleted immediately. There is a 90-day grace period in which you have the chance to reopen it by contacting AWS support. (AWS calls this the “Post-Closure Period”.)
  • After the post-closure period passes, all content in the account is automatically deleted.
  • In a given month, an organization can close only 10% of its member accounts. If your organization has, for example, 300 accounts, you can close up to 30 accounts per month. Keep this in mind when designing your cloud tenant deprovisioning process.
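The 10% quota from the list above is easy to sketch in code. Note that the exact rounding AWS applies is an assumption here (we round down):

```python
import math

def max_closures_per_month(total_member_accounts: int) -> int:
    """At most 10% of member accounts can be closed per month.

    Rounding down is an assumption; check the current AWS quota docs.
    """
    return math.floor(total_member_accounts * 0.10)

print(max_closures_per_month(300))  # 30, matching the example above
```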


GitOps and the Rise of Everything-as-Code

GitOps is a term that is getting more and more popular to describe a certain way of handling the operations of an application or IT system.

In this blog post, we will look at the definition of GitOps and at its most important core principle: Infrastructure as Code (often referred to as “IaC”).

GitOps is considered the future of operations, and we will outline why, what the benefits of GitOps are, and why you should consider going GitOps too!

What is Infrastructure-as-code?

Before we dive deeper into GitOps, we have to understand the concept of Infrastructure as Code (IaC). IaC is a different way of managing IT infrastructure. Instead of performing manual tasks in a tool such as the Azure portal, the “state” of the infrastructure is centrally managed via a git repository. This could even be the same git repository that already hosts the code of an IT application. The engineer declares what the state should be, and the IaC tool takes care of achieving that state. We’ve written a guide to testing Infrastructure as Code that goes more into detail.

The opposite of doing declarative coding is known as imperative. To understand the difference and benefits of doing your infrastructure as code, see the following example for running a virtual machine in Azure, in an imperative way and declarative way:

Imperative - the old way (self-written logic)

Retrieve all virtual machines. If there is no virtual machine in Azure, set up a new virtual machine in Azure, if it is already there, do nothing.

The engineer in this case has to write logic and verify the state and whether that state is achieved, for example by calling the Azure CLI for all virtual machines and checking whether the one that needs to be there is already there or not.

Even worse: in a pre-DevOps time this was a manual task done via calling the operations department: “Hey John, I need a new virtual machine in Azure!”.

Declarative - the new way (as code)

I want one virtual machine to be there in Azure.

The engineer only has to write the desired state: the tool (e.g. Terraform) takes care of actually achieving the desired state.

As you can see, managing infrastructure becomes a lot easier this way. The engineer no longer has to keep track of what is already there and what should be there. Instead, the engineer just declares what there should be and the tool takes care of doing so.
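The imperative and declarative examples above can be sketched in a few lines of Python. This is a toy model, not real Azure or Terraform code: the `cloud` dict, the reconciler, and the VM names are all invented for illustration.

```python
# Toy model of a cloud: the set of virtual machines that currently exist.
cloud = {"vms": set()}

# Imperative: the engineer writes the checking-and-acting logic themselves.
def ensure_vm_imperative(name: str) -> None:
    existing = cloud["vms"]   # "retrieve all virtual machines"
    if name not in existing:  # "if it is not there, set it up;
        existing.add(name)    #  if it is already there, do nothing"

# Declarative: the engineer only states the desired state; a generic
# reconciler (the "Terraform" of this toy) takes care of achieving it.
desired_state = {"vms": {"web-01"}}

def reconcile(desired: dict) -> None:
    cloud["vms"] |= desired["vms"] - cloud["vms"]  # create what is missing
    cloud["vms"] -= cloud["vms"] - desired["vms"]  # remove what shouldn't exist

reconcile(desired_state)
print(cloud["vms"])  # {'web-01'}
```

The point of the toy reconciler: the engineer writes `desired_state` once, while the diff-and-act logic lives in the tool and is reused for every resource.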

Read our blog post on providing a declarative API if you want to learn how and why we built an API that uses the declarative approach, as opposed to a regular imperative REST API.

Everything as Code

IaC has become such a success that we now see other aspects of IT being handled “as code” as well. This is usually described as “Everything-as-code” or “X-as-code”. The as-code movement has made its way to CI/CD, security, and even code quality standards. The rule nowadays for developers is: if it can be managed as code, do so.

Just last week, I ran into an example of this myself, in this case with the tool Netlify. Netlify is an easy way to build and host your front-end project. The screenshot below demonstrates how closely related “configuration as code” and the actual “manual” configuration in the user interface are:

Explanation of the "as code" concept
Examples like these show that engineers can do more and more in code, and less via manual work in various user interfaces.

What is GitOps?

So then, to the main question of this post: what is GitOps?

It has to be said that although IaC and GitOps could be considered separate things, there is a good chance that people using IaC are already doing GitOps. If you have hooked up a CI/CD pipeline to your IaC repository, you’re already doing GitOps, nice job!

GitOps builds on top of the principles of Infrastructure as code (or anything as code). By declaring our desired state in code, we can now also hook up CI/CD tools to actually operate the desired state that is declared in our code.

To put it briefly: GitOps is doing IaC in Git and automating away the Ops through CI/CD. Together they become GitOps. Historically, Kubernetes was the first ecosystem to heavily adopt GitOps.

What are the benefits of GitOps?

By declaring and operating your infrastructure in a GitOps way a lot of benefits are unlocked. These are comparable to the benefits of tracking your application code in Git:

  1. Including our infrastructure as code in a git repository creates a single source of truth. By being able to check out the git repository, anyone at any time can figure out what the state of the infrastructure should be. There is no longer a Word document or Excel sheet laying around in some folder that should contain this information.
  2. When making infrastructure code declarative, instead of writing custom code that rolls out the infrastructure, it becomes less complex. This makes the infrastructure as a whole more readable. No need to guess what the code might be doing: it simply states what the state should be, which is much easier to grasp than imperative code.
  3. The infrastructure is as close to the application code as it can be. If possible, you can manage the infrastructure of your application fully in the same git repository as the application code that is run on the infrastructure. This also enables kickass workflows such as doing application & infrastructure changes in the same pull requests.
  4. When you write your infrastructure as code, then you allow yourself to reuse infrastructure. You could for example define your infrastructure once, and deploy it both for a test and a staging environment from a single declaration.
  5. You are unlocking all benefits of using git. By tracking infrastructure changes in a git repository, all git benefits are automatically unlocked. Think about branching, tracking history, and most importantly, collaborative pull request workflows for reviewing and accepting changes.
  6. Given the rise of CI/CD tools out of the success of Git, it is also obvious that GitOps provides the best automation opportunities. Declaring your infrastructure as code is one thing, but automating all other tasks away is an obvious next step to improve the way you do operations.

It’s pretty cool what people do with GitOps. For example, the contributors at Infracost combined the benefits of points 5 & 6 and built an open-source tool that estimates how much your infrastructure costs will increase or decrease when you open a new pull request in GitHub.
Tool estimates changes in infrastructure cost based on github pull requests

GitOps enables agility

In the end, GitOps gives DevOps teams the same familiar workflow for managing IT infrastructure (and other aspects) that they already know from managing application code.

Need to change the size of the VM used? Just open up a pull request/merge request and the change will automatically be rolled out to the right environments.

The GitOps workflow ensures that teams get the same speed and agility they have when changing application code. This is key to business excellence, which is increasingly tied to a company’s ability to deliver software fast.


Open Source Cloud Cost Dashboard in under 10 Minutes

Cloud cost dashboard built in Google Data Studio displaying cloud billing information in tables and charts.
You can build a cloud cost dashboard like this in under 10 minutes.

Three easy steps to get to your cloud cost dashboard

Everyone loves good dashboards. If done well, a cloud cost dashboard gives you all the vital information you need in a single overview. This is incredibly important when managing your cloud costs.

In this blog post, we will show you how you can build a cloud cost dashboard. See your cloud expenditure at a glance in AWS, Microsoft Azure & (soon to be) Google Cloud. The best part: it is completely free and doesn't take longer than 10 minutes. This is thanks to our open-source multi-cloud CLI Collie which we recently launched on GitHub. To get to your cloud cost dashboard, we will follow along with these steps:

  • Step 1: Preparing cloud cost data with metadata using tags
  • Step 2: Extracting the data from the clouds
  • Step 3: Building a cloud cost dashboard

At the end of this guide, you will have a cloud cost dashboard that looks something like this.

To follow along with this blog post, you need access to Google Data Studio. It should be included for free in G-Suite. You could also use another dashboard tool such as Microsoft Power BI, as long as it supports importing CSV files.

Step 1: Preparing cloud cost data with metadata using tags

Your cloud cost dashboard is only as good as the data it is built upon. That's why the first and most important step is to prepare the necessary cost data with the right metadata. Without proper metadata, it is going to be difficult to filter and view information from certain angles. One vital part of building good cost data from the public cloud providers is the use of tags. The more tags you use, the more questions you can answer for yourself or your management:

  • Which team (or department) is spending the most in the cloud?
  • Which cost center is spending the most in the cloud?
  • Which cloud platform has the highest usage?
  • How much are we spending on development stages?
  • How does the expenditure of cloud-native projects compare to lift-and-shift projects?
  • Whom do I need to contact for more information about this project?

Step 2: Extracting the data from the clouds

Once you are happy with the metadata you applied to your projects, it is time to export the cost data. This allows it to be imported into your cloud cost dashboard later on. To make the export as easy as possible, we will use our recently launched Collie CLI. Collie can export all cost data with one single command in a CSV file. We will then prepare this CSV file using Google Sheets so it can be used as a data source for the dashboard.

To do this export with Collie we need to execute the following steps:

  1. Before installing: make sure you have properly set up the CLIs of the cloud platforms.
  2. Install Collie as explained here.
  3. Run the cost export command for a given time interval. Make sure to use whole months as the cost data is on a monthly basis. The following command would work for Q1 & Q2 of 2021:

collie tenant costs --from 2021-01-01 --to 2021-06-30 -o csv > q1_q2_2021_export.csv

  4. You should now have a CSV export with the cost data of your cloud(s). The metadata tags are provided as extra columns, which is important for when we build your cloud cost dashboard.
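Before moving the export into Google Sheets, you can already sanity-check it with a few lines of Python. The column names `from` and `cost` in this sketch are assumptions; check the header row of your own Collie export and adjust accordingly.

```python
import csv
import io
from collections import defaultdict

def cost_per_month(csv_text: str) -> dict:
    """Sum the 'cost' column per 'from' month (column names assumed)."""
    totals = defaultdict(float)
    for row in csv.DictReader(io.StringIO(csv_text)):
        totals[row["from"]] += float(row["cost"])
    return dict(totals)

# Tiny made-up sample in the assumed export format:
sample = """from,platform,cost
2021-01-01,AWS,120.50
2021-01-01,Azure,80.00
2021-02-01,AWS,99.99
"""
print(cost_per_month(sample))  # {'2021-01-01': 200.5, '2021-02-01': 99.99}
```

If these monthly totals roughly match what you see in your cloud providers' billing consoles, the export is good to go.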

Next up, we will import the CSV data into a Google Sheets spreadsheet. Follow these steps:

  1. Create a new Google Sheet.
  2. At the menu at the top, open "File" and click "Import". Navigate to the "Upload" tab in the dialog and upload the CSV file from before in this dialog.
  3. Make sure to untick the checkbox that says "Convert text to numbers, dates, and formulas". Confirm the import by clicking "Import Data".
  4. Open the new spreadsheet by clicking "Open now". Make sure to name this new spreadsheet something that you can remember later.

That's it! We now have a well-prepared spreadsheet that we will use to power your new cloud cost dashboard.

Step 3: Building your cloud cost dashboard!

Okay, the data is ready! We can start building your cloud cost dashboard now. To make things easier, we have already prepared a template for you. You can find it here. Let's link it to the data source that we set up before. Follow along the next steps to do so:

  1. Open the dashboard link mentioned above and make a copy of the dashboard. To do so, click the settings icon and click "Make a copy" as shown in the screenshot below.

Google Data Studio screenshot showing how to make a copy.

  2. Data Studio will now ask you to enter a new data source. We will use the Google Sheet from before. To do so, click the dropdown below 'New Datasource' and click 'Create new data source'.
  3. Select the connector called 'Google Spreadsheets' and a new window should pop up that allows you to search for Google Sheets in your Google Drive. Find the one you created before and confirm the connection by clicking the blue 'Connect' button at the top right.
  4. Before creating this new data source, we need to clean up our data types a tiny bit. In the menu that just opened in front of you, scroll to the 'from' dimension and change its type from 'Date & Time' to 'Date & Time → Year Month'.
  5. Confirm the creation of the data source. Click the "Add to Report" button at the top right.
  6. At last, copy over your new report by clicking "Copy report".

That's it! You're looking at your new cloud cost dashboard. The cloud cost dashboard template we created offers various features:

  • Viewing total costs on a monthly basis
  • Viewing costs per cloud platform
  • Filtering on dates
  • Viewing costs across various tags, in our case:
    • Owner (who owns this cloud account)
    • Cost center (a way of allocating costs to budgets)
    • Environment (development, production, etc.)
    • Zone (cloud-native or lift-and-shift project)
    • Department (which unit owns the project)
  • Viewing tenants (and their metadata) with the highest cost.

There is a good chance that you have different metadata than what we used in the dashboard template, which might break one or more charts. We recommend tweaking your cloud cost dashboard to your needs, but we hope that this template helps you off to a great start 🚀

Get Started with your own Cloud Cost Dashboard!

Now that you have reached the end of this post, you see that it doesn't have to be rocket science to build a powerful cloud cost dashboard. Leverage the power of our open-source Collie CLI and you have the necessary cost data extracted within minutes. Duplicate our Google Data Studio Dashboard, connect it to the CSV export and you're done!

What questions are you going to answer with your new cloud cost dashboard? Let us know in the comments below!

What's next?

Collecting costs isn't the only thing that Collie can do 😉 Curious what else Collie CLI can do for you? Head over to our meshcloud GitHub page and find out!


Want to move your cloud financials to the next level? Head over to our cost management solution and learn how we can help you!

business credit card

Saving Cloud Costs: 3 Proven Strategies

Learn how to cut down cloud costs on a strategic level. Deleting unused instances is fine, but here are 3 organizational-level strategies to start saving cloud costs.

From our customers - large multi-cloud enterprises - we learned the top 3 strategies to save on cloud costs:

  1. They predict and commit to cloud usage to save cloud costs.
  2. They use multiple cloud platforms to leverage the competitiveness of the cloud market in negotiations.
  3. They use enterprise contracts to crack down on cloud spending.

If you are an executive or manager responsible for cloud cost management at your business, this blog post is for you.

Let’s kick things off with strategy #1

1. Commit to cloud usage to save cloud costs

Why cloud consumption prediction results in discounts

If you are familiar with cloud computing, you have probably heard of something called reserved instances:

You, the customer, promise the vendor that you will use a certain service for a given amount of time. Let's say a small virtual machine for an entire year.

The cloud provider is thrilled! Your commitment helps them predict the demand for cloud computing for the upcoming year.

In return for this prediction, you receive a discount from the cloud provider and can save cloud costs.

Predicting cloud consumption is a win-win for both you and the cloud provider.

Committing to cloud usage on an organizational level

This same concept applies on an organizational level:

If an enterprise knows it will spend ~1 million euros a month on cloud computing, the provider is much better aware of the demand that is coming. This also results in a significant discount.

The risk of such a commitment is that your organization will pay that amount of money - regardless of what the actual consumption is in the end.

The more accurate your organization's cloud consumption prediction is, the more costs you save. Any committed spend you do not actually consume is wasted, and anything above the commitment is charged at a (usually higher) on-demand rate.
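That trade-off can be sketched in a few lines. The numbers, the commitment model, and the 1.2x on-demand markup below are invented for illustration; real contract terms vary per provider and negotiation.

```python
def monthly_bill(committed: float, consumed: float,
                 on_demand_markup: float = 1.2) -> dict:
    """You always pay the commitment; overflow is billed at the higher
    on-demand rate (markup factor is an invented example value)."""
    wasted = max(committed - consumed, 0.0)
    overflow = max(consumed - committed, 0.0)
    return {
        "paid": committed + overflow * on_demand_markup,
        "wasted": wasted,
    }

# Committed to 1M EUR/month:
print(monthly_bill(1_000_000, 800_000))    # under-consumption: 200k wasted
print(monthly_bill(1_000_000, 1_200_000))  # overflow billed at the markup
```

Both scenarios cost you more than a perfectly accurate prediction would, which is exactly why forecasting accuracy is worth investing in.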

2. Go multi-cloud to apply negotiation pressure

Be aware that the cloud providers want you!

Even though cloud computing is a relatively new development, cloud providers know they are likely to keep your business for years. This is why they're eager to onboard you onto their cloud platform.

Once your business-critical applications are on their cloud, they know it is very tough to reverse that decision and they have locked you into their platform.

Show you (can) run on multiple clouds

By showing that you are capable of running your cloud business on one of the other platforms, you strengthen your position as a potential client. Let them know you can move your business-critical applications to one of the competitors: they will include even more benefits or discounts to get you to sign. A multi-cloud strategy shows that you do not depend on any particular cloud provider and can switch to one of the others if needed.

3. Get Started with Enterprise Contracts

What is an enterprise contract and why do I need one?

When starting with one of the clouds, it will be straightforward to get going and book your first services. While doing so, you will pay the so-called on-demand fee.

The on-demand fee is a great option when trying out a cloud platform and evaluating whether you want to build there. It is not so great when you want to take things to the next level.

When building or migrating business-critical applications you will most likely need an enterprise contract with the provider.

An enterprise contract gives you the possibility for a volume discount and opens the door for negotiations. They are also easier to process for your financial department.

We will provide concrete steps you can take for the "big three": Microsoft Azure, Amazon Web Services, and Google Cloud.

Azure Enterprise Agreement

It is possible to create an Enterprise Agreement (EA) with Microsoft for the sole purpose of using Azure. But you most likely already have an EA in place for other services such as Windows or Office.

If this is the case, add Azure to the existing EA by making an upfront monetary commitment for a minimum three-year period.

This monetary commitment is quite low: it starts at $100 per month.

Be aware, though, that this is intentional: Microsoft knows that once your enterprise is on their cloud platform, they have their foot in the door and you're likely to grow a lot more from there.

The EA offers built-in savings ranging from 15 to 45 percent, based on the committed spend. The more you buy, the better the discount.
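As a rough illustration of what a discount in that range means, here is a one-liner. The 30 percent figure and the spend number are arbitrary picks for the example, not quotes from Microsoft:

```python
def discounted_spend(list_price: float, discount_pct: float) -> float:
    """Price after an EA-style percentage discount (example math only)."""
    return list_price * (1 - discount_pct / 100)

# 100k EUR of list-price consumption at a hypothetical 30% EA discount:
print(discounted_spend(100_000, 30))
```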

Lastly, be aware that there is a minimum user requirement for Azure: you will need at least 500 users committed to the EA. The only exception is the public sector, which has a requirement of 250 users.

If you want, you can also talk directly with an Azure Sales representative here.

AWS: Enterprise Discount Program

AWS does not state it anywhere, but when searching for it online, you will find it: The AWS Enterprise Discount Program (EDP). This is the enterprise contract of AWS that allows you to get volume discounts by doing annual prepayment commitments.

As an EDP comes with an NDA, there is no public information on the minimum commitment size or the discount you can receive with it.

But it is almost always better than paying the on-demand fee, especially if you're an enterprise that operates (or plans to operate) at a large scale. You can get started by contacting AWS sales.

Google Cloud: contact sales

As of writing, Google does not advertise an enterprise contract, and no information can be found online about them offering any kind of enterprise contract or discount program.

Yet, there is a good chance that you're able to have an enterprise contract with them anyway.

The best thing you can do is get in contact with their sales team and discuss potential options.


There is no need to pay the regular on-demand pricing that cloud providers offer. There are multiple levers you can pull to improve your enterprise's negotiation position: commit to scale and receive a "scale discount" via one of the enterprise agreements, and/or adopt a multi-cloud strategy. The cloud market is highly competitive and cloud providers would love to do business with you: use this to your advantage.