How to Reduce RDS Costs on AWS

Veteran AWS users know that AWS can be costly if not optimised. Some companies use services that measure AWS expenditure in fine detail; if you do the same, you may discover that the database element of your AWS usage can be a large part of your overall spend.

RDS is unlikely to be as cost- and performance-optimised as a custom-tuned instance running on EC2, where you can apply specific operating system and file system optimisations. This post sets out strategies and concepts that will help you save money on RDS specifically, although some parts also apply to databases you run yourself on EC2.

Instance Size

The size and type of the instance is a big factor in the overall spend, and getting the right size for your needs is a tricky exercise. There are two parts of an instance that are relevant to look at: CPU and memory.


To determine the right CPU size, consider the following:

  • How many concurrent connections do you expect the DB server to handle?
  • How much data is your DB server expected to process? Think about simple reads and writes, large batch processing, and long reporting queries.
  • Are your indexes optimised, or is your DB filtering a lot of rows instead?

Plan to leave some breathing room in your CPU usage. If average or sustained CPU usage sits around 50%, or worse 70%, you are leaving very little room for spikes in usage, which can result in incidents.
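The headroom guidance above can be sketched as a simple check. The 50% and 70% thresholds come from the text and are assumptions you should tune for your own workload:

```python
# Sketch: classify sustained CPU usage on a DB instance by how much
# headroom it leaves for spikes. Thresholds (50% warning, 70% critical)
# are assumptions based on the guidance above.

def cpu_headroom_status(avg_cpu_percent: float) -> str:
    """Classify sustained average CPU usage on a DB instance."""
    if avg_cpu_percent >= 70:
        return "critical: almost no room for spikes, consider a larger instance"
    if avg_cpu_percent >= 50:
        return "warning: limited headroom, watch for usage spikes"
    return "ok: comfortable headroom"
```

Feeding this the average CPU figure from your monitoring gives a quick first-pass signal before any resizing decision.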


Memory is very important to a DB server. To explain why, consider the concept of the 'working dataset'. As long as the data the DB server needs fits into memory, retrieving it is dramatically faster than retrieving it from disk, on the order of hundreds of thousands of times faster.

Once the data size begins to exceed the available memory, performance starts to fall off a cliff. However, you do not need the entire database to fit into memory; it is enough that the part of the data that is accessed often fits.

Another way to explain this is the concept of 'hot' and 'cold' data. Hot data (data that is frequently referenced) needs to be in cache for the database to perform well, whereas cold data can comfortably live on disk and be read on the rare occasions it is needed.
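As a rough sketch of the working-set idea, here is a check of whether your hot data fits in cache. The assumption that the engine can dedicate about 75% of instance RAM to caching is illustrative only; engines reserve memory for connections, sorting, and other work, and the cache size is usually tunable:

```python
# Sketch: rough check of whether the hot portion of your data fits in the
# memory available for caching. The 0.75 cache fraction is an assumption;
# the real figure depends on the engine and its configuration.

def working_set_fits(hot_data_gb: float, instance_ram_gb: float,
                     cache_fraction: float = 0.75) -> bool:
    """Return True if the estimated hot data fits in the memory the DB
    engine can realistically dedicate to caching."""
    usable_cache_gb = instance_ram_gb * cache_fraction
    return hot_data_gb <= usable_cache_gb
```

For example, roughly 20 GB of hot data fits comfortably on a 32 GB instance, while 30 GB would not.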


Disks

As implied above with memory size, IO is very important for databases, and disks are a core part of that.

The IOPS of the disk are a factor in database performance and determine peak or burst read and write rates. Disk size determines how much data you can retain, as well as the headroom for database operations such as altering tables.

As of this writing, there are two types of disk that we use: gp2 (General Purpose SSD) and what we call reservedIO (AWS's Provisioned IOPS, io1).

gp2 is an SSD whose baseline IOPS are determined by a calculation of roughly 300 IOPS per 100 GB (3 IOPS per GB). Because of this, one way to increase IOPS on gp2 is simply to increase the disk size.

gp2 is a relatively cheap option at reasonable sizes.
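The gp2 calculation above can be written down directly. The 100 IOPS floor and 16,000 IOPS ceiling are AWS's documented gp2 limits at the time of writing:

```python
# Sketch of the gp2 baseline IOPS calculation: 3 IOPS per GB, with a
# floor of 100 IOPS and a ceiling of 16,000 IOPS (AWS-documented gp2
# limits at the time of writing).

def gp2_baseline_iops(size_gb: int) -> int:
    """Baseline IOPS for a gp2 volume of the given size."""
    return min(max(3 * size_gb, 100), 16_000)
```

So a 100 GB volume gets a 300 IOPS baseline, a tiny volume still gets the 100 IOPS floor, and growth past roughly 5.3 TB buys no further baseline IOPS.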

reservedIO is essentially a disk for which AWS guarantees a certain number of IOPS whenever your application needs it. It is the more expensive option.


Backups and Snapshots

Backups and snapshots also cost money, in some cases a lot of money. The larger the disk, the larger the snapshots. The per-GB price of snapshots is, however, cheaper than the disk price.

Be mindful of how long you retain RDS snapshots, and consider alternative backup methods for longer-term storage.
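A back-of-the-envelope estimate makes snapshot retention costs concrete. The price per GB-month used here is an assumed illustrative figure, so check current AWS pricing for your region:

```python
# Sketch: rough monthly cost of retained snapshot storage. The default
# price of $0.095 per GB-month is an assumption for illustration only;
# look up current RDS snapshot pricing for your region.

def monthly_snapshot_cost(total_snapshot_gb: float,
                          price_per_gb_month: float = 0.095) -> float:
    """Estimated monthly cost (USD) of retained RDS snapshot storage."""
    return total_snapshot_gb * price_per_gb_month
```

At that assumed rate, a terabyte of retained snapshots costs on the order of $95 a month, which is why long retention periods deserve scrutiny.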

AWS has a feature that can export RDS snapshots to S3 as Parquet files, which can then be read by other AWS services. Note, however, that the export includes only the data itself; database users and their access rights are not included.


Here are some tips for implementing cost-saving changes on RDS:

  1. Disks are easy; instances are hard. Increasing the size or changing the type of a disk is easy and can be done online.
  2. Downgrading the disk type (from reservedIO to gp2) is also easy and can be done online. This lets you switch a disk to reservedIO for a particular heavy operation on the DB and then put it back to gp2 afterwards.
  3. Decreasing the disk size requires downtime.
  4. Increasing or decreasing the instance size requires downtime. Additionally, before decreasing the instance size, do some stress testing to validate that you won't have issues after the downgrade.
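As a sketch of how an online disk-type change from tip 2 might be scripted, here is a helper that builds request parameters in the shape expected by boto3's `modify_db_instance`. It only constructs the dictionary (actually applying it requires an AWS client and credentials), and the default of 1,000 IOPS for Provisioned IOPS volumes is an assumption for illustration:

```python
# Sketch: build parameters for an online storage-type change via the RDS
# ModifyDBInstance API (the shape used by boto3's modify_db_instance).
# This only constructs the request; applying it needs an AWS client.

def disk_change_params(instance_id, storage_type, iops=None):
    """Parameters to switch an instance's storage type, e.g. gp2 <-> io1."""
    params = {
        "DBInstanceIdentifier": instance_id,
        "StorageType": storage_type,
        "ApplyImmediately": True,  # apply online, not in the next window
    }
    if storage_type == "io1":
        # Provisioned IOPS volumes require an explicit IOPS figure;
        # 1000 here is an assumed placeholder default.
        params["Iops"] = iops or 1000
    return params
```

With boto3 this would be applied as `rds.modify_db_instance(**disk_change_params("mydb", "io1", 3000))`, and reversed afterwards with `storage_type="gp2"`.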


It is important to be mindful of the particular parts of RDS that can increase costs. Consider your application's usage and scalability requirements, and focus on the key bottlenecks that affect performance and cost.

Contact us if you need expert advice on reducing your RDS costs.