How to Reduce RDS Costs on AWS
Veteran AWS users know that AWS can be costly, especially when not optimised. Some companies use third-party services to measure their AWS expenditure in fine detail; if you do the same, you may discover that RDS, Amazon's managed relational database service, accounts for a large share of your overall spend. In this post we discuss techniques you can use to reduce RDS costs.
An RDS instance is unlikely to be as cost- and performance-optimised as a custom-tuned database running on EC2, where you can apply specific operating system and file system optimisations. This post explains strategies and concepts that will help you save money on RDS specifically, and some of it also applies to running your own databases on EC2.
Reduce RDS Costs Through Instance Rightsizing
The size and type of the instance is a big factor in the overall spend, and getting it right for your needs is a tricky exercise. Two parts of an instance are relevant here: CPU and memory.
To determine the right CPU size, you should consider the following:
- How many concurrent connections do you expect the database server to handle?
- How much data is it expected to process with simple reads and writes?
- Do you run large batch jobs or long reporting queries?
- Are your indexes optimised, or is the database filtering a lot of rows instead?
Plan to leave some breathing room in your CPU usage. If average CPU usage sits at 50%, or worse 70%, you are leaving very little room for spikes, which can result in incidents.
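The headroom rule above can be sketched as a simple check. The 50% threshold here is an illustrative default taken from the guidance above, not an AWS recommendation; tune it to your own spike profile.

```python
def cpu_headroom_ok(avg_cpu_percent, target_max_percent=50.0):
    """Return True if average CPU usage leaves enough room for spikes.

    target_max_percent is an assumed threshold: at 50% average
    utilisation, a roughly 2x traffic spike can still be absorbed.
    """
    return avg_cpu_percent <= target_max_percent

# A server averaging 30% CPU can absorb a spike; one averaging 70% cannot.
print(cpu_headroom_ok(30))  # True
print(cpu_headroom_ok(70))  # False
```

In practice you would feed this the average of a CloudWatch `CPUUtilization` series rather than a single number.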
Memory is very important to a database server. To explain why, consider the concept of the ‘working dataset’. As long as the data the server needs fits into memory, retrieving it can be on the order of 400,000 times faster than retrieving it from disk.
Once the data size begins to exceed the available memory, performance falls off a cliff. However, you do not need the entire database to fit into memory; it is enough that the frequently accessed part of the data fits.
Another way to frame this is the distinction between ‘hot’ and ‘cold’ data. Hot data (data that is frequently referenced) needs to be in cache for the database to perform well, whereas cold data can comfortably live on disk until it is needed.
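A quick way to reason about rightsizing memory is to compare your hot data size against the cache your instance can actually dedicate to it. This is a rough sketch: the 75% buffer-pool fraction is an assumption (a common starting point for MySQL's InnoDB buffer pool), not a rule, and other engines differ.

```python
def working_set_fits(hot_data_gb, instance_memory_gb, buffer_pool_fraction=0.75):
    """Check whether the hot ('working') dataset fits in the DB cache.

    buffer_pool_fraction is an assumption: InnoDB's buffer pool is
    commonly sized at ~75% of instance memory; adjust for your engine.
    """
    cache_gb = instance_memory_gb * buffer_pool_fraction
    return hot_data_gb <= cache_gb

# 40 GB of hot data on a 64 GB instance (~48 GB cache) fits;
# the same hot data on a 32 GB instance (~24 GB cache) does not.
print(working_set_fits(40, 64))  # True
print(working_set_fits(40, 32))  # False
```

If the working set fits on a smaller instance class, that is a candidate for downsizing; if it barely fits, leave headroom for growth.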
As the discussion of memory implies, I/O is very important for databases, and disks are a core part of that.
The IOPS of the disk are a factor in database performance and determine burst and peak read/write capacity. Disk size also determines how much data you can retain, as well as your headroom for operations such as altering tables.
As of this writing, there are two disk types commonly used with RDS: General Purpose SSD (gp2) and Provisioned IOPS SSD (io1).
gp2 is an SSD whose baseline IOPS are derived from the volume size: roughly 3 IOPS per GB of provisioned space. For that reason, one way to increase IOPS on gp2 is simply to increase the disk size.
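The gp2 baseline formula can be written out directly. The 3 IOPS/GB rate, the 100 IOPS floor, and the 16,000 IOPS ceiling below reflect AWS's published gp2 behaviour at the time of writing; verify against current EBS documentation before sizing a volume.

```python
def gp2_baseline_iops(size_gb):
    """Baseline IOPS for a gp2 volume: 3 IOPS per provisioned GB,
    with a floor of 100 IOPS and a ceiling of 16,000 IOPS."""
    return min(max(3 * size_gb, 100), 16000)

# To reach ~3,000 baseline IOPS you need roughly 1,000 GB of gp2 storage.
print(gp2_baseline_iops(1000))  # 3000
print(gp2_baseline_iops(20))    # 100 (the floor applies)
```

This is why over-provisioning gp2 storage is sometimes cheaper than switching to Provisioned IOPS: the extra gigabytes buy IOPS as a side effect.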
gp2 is a relatively cheap option at reasonable sizes.
io1 (Provisioned IOPS) is essentially a disk for which AWS guarantees a specified number of IOPS, should your application need it. It is considerably more expensive.
Reduce RDS Costs of Backups
Backups and snapshots also cost money, in some cases a lot of it. The larger the disk, the higher the snapshot cost, although the per-GB price of snapshot storage is lower than the disk price.
Be mindful of how long you store RDS snapshots, and consider alternative backing up methods for longer term storage.
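To see how retention length drives cost, a back-of-the-envelope estimate helps. The per-GB-month price below is an assumed illustrative figure, not a quote; check current RDS backup storage pricing for your region.

```python
def snapshot_retention_cost(snapshot_gb, months_retained, price_per_gb_month=0.095):
    """Estimate the cumulative cost of keeping one snapshot for N months.

    price_per_gb_month is an assumed figure for illustration only;
    look up the actual rate for your region and storage class.
    """
    return snapshot_gb * price_per_gb_month * months_retained

# Keeping a 500 GB snapshot for 12 months at the assumed rate:
print(round(snapshot_retention_cost(500, 12), 2))  # 570.0
```

Run this across your snapshot inventory and the long tail of old snapshots usually stands out as an easy saving.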
AWS offers a feature that exports RDS snapshots to Amazon S3 as Apache Parquet files, which can then be read by other AWS services. Note that the export contains the data only, not the database users or permissions that control access to it.
Reduce RDS Costs Through Performance Tuning and Optimization
Another way to facilitate a reduction in RDS instance size is to make sure your database is optimally tuned and your queries are well written and appropriately indexed. A well-optimised workload requires less CPU and less disk I/O to deliver the same level of performance. The difference in CPU or disk I/O consumption between an instance whose workload has been optimised and one that has been neglected for a long time can be very large: a 50x difference is not unheard of, and between 2x and 10x is typical. Here are a couple of CPU usage screenshots from a client’s system on which we carried out our MySQL performance tuning and optimization service.
Before we analyzed and optimized the workload:
After we analyzed and optimized the workload:
As you can see, in this particular case the optimizations have enabled our client to reduce their RDS instance’s CPU requirements by approximately 3x peak and 7x average.
Here are some tips for implementing cost-saving changes on RDS:
- Disks are easy, instances are hard: increasing the size of a disk is straightforward and can be done online.
- Changing the disk type (e.g. from io1 back to gp2) is also easy and can be done online. This lets you switch a disk to io1 (Provisioned IOPS) for a particular heavy operation on the database and then put it back to gp2 afterwards.
- Decreasing the disk size requires downtime: RDS does not shrink storage in place, so you need to migrate the data to a new instance with a smaller volume.
- Increasing or decreasing the instance size requires downtime. Additionally, before decreasing the instance size you should run some stress testing to validate that you will not have issues after the downgrade.
It is important to be mindful of the parts of RDS that can drive up costs. Consider your application’s usage and scalability requirements, and focus on the key bottlenecks that affect performance and cost.
Contact us if you need expert advice on reducing your RDS costs.