Here I installed PostgreSQL on EC2, but Graviton2 is available on RDS as well (in preview) with the db.m6g instance class. So, finally, the big difference is the price: the Graviton2 m6gd.2xlarge is 20% cheaper than the x86 m5d.2xlarge.
The compilation time for the PostgreSQL sources was 11% slower on ARM: 3 minutes 39 seconds. The PostgreSQL shared buffer cache hits were 13% faster on x86: 896280 LIOPS/thread vs. 780651 LIOPS/thread, but that is the most favorable database work: all in shared buffers, with limited calls, roundtrips, and context switches. However, when running pgbench, ARM had nearly the same performance for the prepared statement protocol and was even a bit faster with the simple protocol. And that is finally what most database applications are doing.
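The pgbench comparison above can be reproduced with something like the following. This is a sketch: the scale factor, duration, client count, and database name are my illustrative assumptions, not the exact settings used for the numbers in this post.

```shell
# Illustrative pgbench runs (scale, duration, and database name are assumptions):
pgbench -i -s 100 bench                      # initialize a scale-100 dataset
pgbench -M prepared -c 8 -j 8 -T 600 bench   # prepared statement protocol, 8 clients
pgbench -M simple   -c 8 -j 8 -T 600 bench   # simple query protocol
```

Using 8 clients and 8 threads matches the 8 vCPUs of the instances compared here, so the run is CPU-bound on both architectures.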
The software is free, and the EC2 running hours for Graviton2 are 20% cheaper. Here is the price of the instance I used.
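As a quick sanity check on the 20% figure, here is a small calculation with on-demand prices. The dollar amounts are my assumptions for us-east-1 (check the current EC2 pricing page for your region):

```shell
# Hypothetical on-demand $/hour prices (assumptions, not from this post):
x86_price=0.452     # m5d.2xlarge
arm_price=0.3616    # m6gd.2xlarge
# Relative saving of the Graviton2 instance:
awk -v a="$arm_price" -v x="$x86_price" \
  'BEGIN { printf "Graviton2 is %.0f%% cheaper\n", (1 - a/x) * 100 }'
```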
I'll install PostgreSQL here and measure LIOPS with PGIO:

sudo yum install -y git gcc readline-devel zlib-devel bison bison-devel flex
( cd postgres && ./configure && make all && sudo make install )
( cd postgres/contrib && sudo make install )
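After "make install", a cluster still has to be created and started. A minimal sketch, assuming the default --prefix=/usr/local/pgsql from configure (adjust the paths to your configure options):

```shell
# Create and start a cluster (paths assume the default /usr/local/pgsql prefix):
sudo useradd postgres || true                  # user may already exist
sudo mkdir -p /usr/local/pgsql/data
sudo chown postgres /usr/local/pgsql/data
sudo -u postgres /usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data
sudo -u postgres /usr/local/pgsql/bin/pg_ctl -D /usr/local/pgsql/data -l /tmp/pg.log start
```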
As I want to compare the CPU performance during a long run, I'll use larger (and non-burstable) instances: m5d.2xlarge for x86_64 and m6gd.2xlarge for aarch64, which both have 8 vCPUs and 32 GB of RAM. I've installed the "Amazon Linux 2 AMI (HVM)" on the m5d.2xlarge (x86_64) and the "Amazon ECS-Optimized Amazon Linux 2 AMI (ARM)" on the m6gd.2xlarge (aarch64).
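For reference, launching the two instance types with the AWS CLI could look like this. The AMI ids and key name are placeholders, not real values: look up the current Amazon Linux 2 AMI ids for your region.

```shell
# ami-xxxxxxxx / ami-yyyyyyyy and my-key are placeholders, NOT real values
aws ec2 run-instances --instance-type m5d.2xlarge --count 1 \
  --image-id ami-xxxxxxxx --key-name my-key      # x86_64 Amazon Linux 2
aws ec2 run-instances --instance-type m6gd.2xlarge --count 1 \
  --image-id ami-yyyyyyyy --key-name my-key      # arm64 (aarch64) Amazon Linux 2
```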
This is a good occasion to test the Graviton2 ARM processors, and you can follow along with this blog post on those instances.
On the AWS free tier, you can run a t2.micro instance for 750 hours per month during the first 12 months after your sign-up date. And currently, until June 2021, you can also run a t4g.micro. But be careful: when the free trial ends, or if your usage exceeds the free trial restrictions, you'll pay the standard pay-as-you-go rates.