Open-E Knowledgebase

How-to dramatically improve IO performance?

Article ID: 1293
Last updated: 13 Jul, 2011
Posted: 22 Jun, 2011

  1. How to measure IO performance using IOmeter

There are plenty of benchmarking tools available, but storage professionals mostly use IOmeter. Unfortunately, IOmeter is a little tricky to use and you really need to read the user manual first. I have frequently seen users trying to run IOmeter tests without success. Being human, most of us hate reading the manual, and with IOmeter this can lead to problems. I hope this short post will help you get the results you want. First off, you need to know that IOmeter recognizes 2 different volume types:
• un-partitioned disks (blue disk icon), or
• formatted disks (yellow disk icon with a red slash through it)
With un-partitioned disks you can start the test at once, but to run with formatted disks you need a test file. The test file must be placed in the root directory and named iobw.tst. By default IOmeter will create the test file if it is not found; the problem is that nowadays volumes are very big and letting IOmeter create the file is very slow. It is much faster to create the test file using the TestFileCreator.exe from Open-E. Run it to create iobw.tst with any size you desire; for example, run TestFileCreator.exe 100G to create a file of exactly 100GB.

Download TestFileCreator.exe:

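If TestFileCreator.exe is not at hand, the test file can also be pre-created with a small script. Below is a minimal Python sketch that writes a zero-filled file of the requested size in large chunks; only the file name iobw.tst and its placement in the root directory come from the IOmeter requirements above, while the chunk size and zero fill are arbitrary choices for illustration.

# create_iobw.py - pre-create an IOmeter test file (iobw.tst) of a given size.
# Illustrative alternative to TestFileCreator.exe, not a replacement for it.
# Usage example: python create_iobw.py 100G D:\

import os
import sys

UNITS = {"K": 1024, "M": 1024**2, "G": 1024**3, "T": 1024**4}

def parse_size(text):
    """Turn a size string such as '100G' into a number of bytes."""
    text = text.strip().upper()
    if text[-1] in UNITS:
        return int(float(text[:-1]) * UNITS[text[-1]])
    return int(text)

def create_test_file(size_bytes, target_dir, chunk_mb=64):
    """Write a zero-filled file named iobw.tst in the root of target_dir."""
    path = os.path.join(target_dir, "iobw.tst")
    chunk = b"\0" * (chunk_mb * 1024 * 1024)
    written = 0
    with open(path, "wb") as f:
        while written < size_bytes:
            to_write = min(len(chunk), size_bytes - written)
            f.write(chunk[:to_write])
            written += to_write
    return path

if __name__ == "__main__":
    size = parse_size(sys.argv[1])                      # e.g. "100G"
    target = sys.argv[2] if len(sys.argv) > 2 else "."  # e.g. the volume root
    print("Created", create_test_file(size, target))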
To find out your storage performance there are a few typical test configurations you may want to run. Here are some example results from an FC volume created on DSS V6 with a dual 4Gb FC HBA and MPIO.



So, if your goal is to obtain the maximum MB/sec, please create a test setup with 2-4 workers, using a block size of 256kB and 100% sequential read or write. In our case the sequential read shows the best result: 772 MB/sec! Please make your settings very carefully and make sure all workers use the same test configuration. If you forget to add your [256kB, 100% sequential read] configuration to every worker you will be surprised by very low results, because the default test settings use a 2kB block with a mixed random/sequential and read/write pattern. So instead of the desired 772MB/sec you might see 100 times less, i.e. ~7MB/sec.
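To see why the block size matters so much, remember that throughput is simply the IO rate multiplied by the transfer (block) size. The short Python sketch below only reproduces that arithmetic; the IO/sec values in it are round numbers implied by the 772 MB/sec and ~7 MB/sec figures above, not separate measurements.

# throughput.py - throughput (MB/sec) is IO/sec multiplied by the block size.

def throughput_mb_s(iops, block_kb):
    """Return throughput in MB/sec for a given IO rate and block size."""
    return iops * block_kb / 1024.0

# A sequential workload with 256kB blocks needs only ~3,000 IO/sec to reach ~750 MB/sec:
print(throughput_mb_s(3000, 256))   # -> 750.0
# The IOmeter default 2kB mixed pattern at a similar IO rate yields only a few MB/sec:
print(throughput_mb_s(3500, 2))     # -> ~6.8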
  2. What can be done to improve IO performance

Hard disk performance is limited by the speed of its mechanical parts, and there is no way around this.
Servers use RAID for redundancy and for performance. Thanks to RAID technology it is possible to scale beyond the performance of a single hard disk.
A team of disks works faster, but applications keep demanding more performance as technology evolves.
There are storage offerings using DDR RAM or SLC flash that claim 2-20 GB/s throughput or 100K to millions of random IO/sec,
but the cost of such systems is extremely high.

Recently, RAID controller vendors like LSI and Adaptec have started to offer SSD cache options. If your applications require higher random IO/sec, such solutions are worth looking into.
Adaptec claims 8 times faster IO than HDD-only arrays.
LSI claims even 50 times faster IO. They simulated web servers that re-read hot-spot data: with CacheCade enabled the array reached 14,896 IO/sec, compared to 273 IO/sec with CacheCade disabled.
Such SATA/SSD hybrids can significantly improve applications with random IO patterns. They work with a single SSD as a minimum, so the solution is relatively inexpensive.
Adaptec maxCache™ 64GB SSD Cache Performance Kit SRP = $1,795
The LSI CacheCade software pack has a suggested retail price of $270; adding an Intel X25-E Extreme 64GB at about $700 brings the total to roughly $1,000.

It would be very interesting to know what kind of random IO/sec can be reached using SSDs only. We created a RAID 5 with 4 * Intel X25-M SATA SSDs. The RAID 5 was fully initialized and then the IOmeter test was started.
Results below:

RAID 5 with 4* INTEL X25-M SATA SSD 120GB

To put the above results into perspective, we ran the same test pattern on a RAID 5 with 12 * SAS 15K HDDs (Seagate ST373455SS).
Results below:

RAID 5 with 12* SAS 15K

The comparison was not quite fair, because the HDD array has 12 drives and the SSD array only 4.
We used the cheaper (MLC), non-enterprise version of the SSD. The main problem with MLC SSDs is write endurance, which is about 11TB for the 120GB drive and 15TB for the 160GB drive.
If the drives are in a RAID 5, the write endurance can be multiplied by n-1. There is also the option to leave some spare space for the wear-leveling mechanism; for example, 10% reserved space extends write endurance by about 2.8 times.
So 4 * 160GB SSDs in RAID 5 with 10% reserved space give 3 (n-1) * 2.8 * 15 TB = 126 TB of write endurance. If your application is not heavily write-oriented, you can consider this for production.
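As a quick sanity check of the arithmetic above, here is a small Python sketch. The per-drive endurance figure and the 2.8x factor for 10% reserved space are simply the numbers quoted in this article, so treat the result as a rough planning estimate rather than a vendor guarantee.

# endurance.py - rough RAID 5 write endurance estimate, using the figures quoted above.

def raid5_write_endurance_tb(drives, per_drive_tb, reserve_factor=1.0):
    """Per the simplified rule above: array endurance is (n-1) drives' worth of
    writes, and reserved space for wear leveling multiplies the per-drive
    endurance by reserve_factor."""
    return (drives - 1) * per_drive_tb * reserve_factor

# 4 * 160GB MLC SSD (about 15 TB endurance each), 10% space reserved (~2.8x):
print(round(raid5_write_endurance_tb(4, 15, 2.8), 1))   # -> 126.0 TB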
If the application is heavily write-oriented, you need to consider enterprise SSDs. Here is the write endurance comparison:


How much is 15TB? Writing 24/7 at a sustained 20MB/sec, 15TB will be written in about 9 days.
How much is 2PB? Writing 24/7 at a sustained 20MB/sec, 2PB will be written in about 3.4 years.
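The same kind of back-of-the-envelope estimate can be scripted. The Python sketch below converts an endurance figure into days of continuous writing at a given rate, assuming decimal units (1 TB = 1,000,000 MB); the 20MB/sec rate is the one used in the examples above.

# lifetime.py - how long a given write endurance lasts at a sustained write rate.

def days_to_write(total_tb, rate_mb_s):
    """Days needed to write total_tb terabytes at rate_mb_s MB/sec, 24/7."""
    seconds = total_tb * 1_000_000 / rate_mb_s   # 1 TB = 1,000,000 MB (decimal units)
    return seconds / 86400                       # 86,400 seconds per day

print(days_to_write(15, 20))           # -> ~8.7 days for 15TB ("about 9 days" above)
print(days_to_write(2000, 20) / 365)   # -> ~3.2 years for 2PB (close to the ~3.4 years above)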
The RAID 5 array with 4 * Intel X25-E 64GB drives costs $2,880, which is about $15 per GB.
The RAID 5 array with 4 * Intel X25-M 120GB drives costs $1,600, which is about $4.4 per GB.
In comparison, a RAID 5 array with 4 * Seagate 148GB 15K SAS HDDs makes 4 * $300 = $1,200, which is about $1.3 per GB.
This makes the SSD option about 12 times more expensive per GB, but at least 3 to 8 times faster with random IO.
We did not test enterprise SSDs, only mainstream SSDs. The random IO with enterprise SSDs would be even better.