I had a chance to test a system with Intel 320 SSD drives (NewRelic provided me with access to the server) and to compare their performance with SAS hard drives.
System specification:
- Dell PowerEdge R610
- Memory: 48GB
- CPU: Intel(R) Xeon(R) CPU X5650
- RAID controller: Perc H800
- RAID configuration: RAID 5 over 11 disks + 1 hot spare. RAID 5 was chosen for capacity reasons: in this configuration, using 600GB disks, we get 5.5TB of usable space
- SSD drives: Intel 320 SSD 600GB
- HDD drives: Seagate Cheetah 15K 600GB 16MB Cache SAS
- Filesystem: XFS, mkfs.xfs -s size=4096, mount -o nobarrier
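For reference, the filesystem setup above boils down to two commands. This is a minimal sketch: the device name /dev/sdb and the mount point /mnt/data are placeholders, and skipping write barriers is reasonable here only because the array sits behind the H800's 1GB controller cache.

# Placeholders: /dev/sdb is the RAID 5 virtual disk, /mnt/data the MySQL data mount.
mkfs.xfs -s size=4096 /dev/sdb           # 4KB sectors, matching the SSD page size
mount -o nobarrier /dev/sdb /mnt/data    # barriers skipped; controller cache absorbs flushes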
Benchmark:
For the benchmark I took a sysbench OLTP read-write workload with a uniform distribution: 256 tables of 50 million rows each, which gives about 3TB of data in total.
To vary the memory-to-data ratio, I vary the number of tables from 256 (3TB) down to 32 (375GB).
As the backend database I use Percona Server 5.5.19.
I should mention that at these data sizes the sysbench workload is pretty nasty: MySQL mostly reads and writes pages, constantly replacing pages in the buffer pool. This, however, lets us see the best-case scenario for SSDs under MySQL; the final result shows the best possible gain.
I take measurements every 10 seconds to see how stable the results are. A sketch of the invocation is below.
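This is roughly what the benchmark invocation looks like, assuming sysbench 0.5 with the bundled oltp.lua script. The thread count, run time, and connection options are placeholders; only the table count, table size, uniform distribution, and 10-second reporting come from the description above.

# Load 256 tables with 50M rows each (~3TB); use --oltp-tables-count=32 for the 375GB run.
sysbench --test=oltp.lua --mysql-user=root --mysql-socket=/tmp/mysql.sock \
  --oltp-tables-count=256 --oltp-table-size=50000000 prepare

# Read-write run with a uniform access pattern; throughput reported every 10 seconds.
sysbench --test=oltp.lua --mysql-user=root --mysql-socket=/tmp/mysql.sock \
  --oltp-tables-count=256 --oltp-table-size=50000000 \
  --oltp-read-only=off --rand-type=uniform \
  --num-threads=64 --max-time=1800 --max-requests=0 \
  --report-interval=10 run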
Results:

Tables    HDD    SSD   Ratio (SSD/HDD)
    32   1226   1644   1.340946
    64    140    571   4.078571
    96    101    506   5.009901
   128     89    486   5.460674
   192     79    484   6.126582
   256     75    495   6.600000
As you can see, on the larger data sizes we get a 5-6x improvement. On 32 tables (375GB of data), however, the results become unstable.
Here is a graph of the time series, with measurements taken every 10 seconds.
It looks like we are seeing symptoms of the flushing problem; this is something to investigate later.
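One way to investigate that later is to watch the checkpoint age during the run: if the distance between the log sequence number and the last checkpoint keeps climbing toward the redo log capacity, InnoDB is forced into aggressive flushing and throughput dips. A minimal sketch (connection options are placeholders):

# Sample the InnoDB log position and checkpoint every 10 seconds;
# their difference is the checkpoint age in bytes.
while true; do
  mysql -e "SHOW ENGINE INNODB STATUS\G" \
    | grep -E 'Log sequence number|Last checkpoint at'
  sleep 10
done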
The scripts and raw results are available in the Benchmarks project on Launchpad.
Were the HDDs on a SAN? The reason I ask is that the Dell R610 has only 6 x 2.5″ drive bays.
Thanks,
AD
We are using MD1200s for the spinning disks and MD1220s for the SSDs. The shelves are attached to PERC H800s with 1GB of cache.