OCZ Rally2 in Linux Software RAID0

Introduction

With Solid-State Drives (SSDs) on the verge of mass consumer adoption, we're left wondering what kind of performance we can expect from these drives. We already know SSDs require less power than drives with moving parts, but what kind of performance gains will we see? To get an idea, we took one of the faster drives on the market, the OCZ Rally2, and ran it through our benchmarking process. To make things more interesting (and to see how much performance we could squeeze from the technology), later in this article we combine two of the drives in a RAID0 configuration. The Rally2 relies on dual-channel technology, much like you'll find in modern PC RAM configurations. The flash inside the drive is separated into two parts, allowing for faster writes, reads, and access times.

A Closer Look

For 2GB of storage, the Rally2's casing is hardly what I expected to find. This drive is sleek: tiny, sturdy, and attractive. The chassis and cap are both made from aluminum, while the end opposite the USB connector is smoked plastic with an LED inside to indicate activity. Check it out compared to my older 256MB SanDisk Cruzer and even a first-generation iPod shuffle. With its tiny size, we had no trouble plugging two of them in right next to each other.

Single Drive Performance

We decided to use Linux as our benchmarking platform for this review, as most of the benchmarking utilities we need either come installed or are easily available. In Windows, we sometimes find ourselves downloading several benchmarking suites to achieve the same purpose. The simplest tests are done with the hdparm command. hdparm is short for "hard drive parameters," and adjusts just that. Specifically, hdparm offers a wide set of tools for ATA drives, such as adjusting DMA settings. For other drives the options are much more limited, as they rely more on hardware settings than software. hdparm lets us benchmark with the -t and -T flags, in a command such as
# hdparm -tT /dev/sda
Here, -t gives us device read timings, and -T gives us cache read timings. More specifically, -t measures sequential disk performance by reading from the very start of the disk, which is usually faster than other areas on a spinning hard drive. This is a little different for solid state storage, but it still gives you a good idea of the device's top speed. -T, on the other hand, gives you the speed of the drive's cache without any actual drive activity. This is less important in most cases, but a helpful benchmark nonetheless. Let's see how a single drive performs.
/dev/sda
  Timing Cached Reads: 1777.98MB/sec
  Timing Buffered Reads: 23.38MB/sec
The buffered reads aren't too shabby at all, not far from the 28MB/sec maximum the drive is advertised at. For reference, my laptop's hard drive provides the following:
/dev/hda
  Timing Cached Reads: 1829.20MB/sec
  Timing Buffered Reads: 30.11MB/sec
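Individual runs vary a bit, so it's worth repeating the test and eyeballing an average. A quick shell loop handles that (a minimal sketch; point it at whichever device you're testing):
# for i in 1 2 3; do hdparm -tT /dev/sda; done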
Running the benchmark a few times like this gives you a good idea of the average speeds. Unfortunately, transfer speeds aren't all that matter in this day and age. We imagine SSDs may end up in a lot of servers as well, and in a server environment hard drive access times are very important. Companies have to get information to the guest as soon as possible, and part of that transfer depends on how fast the drive can locate the requested data. Spinning hard drives have a major disadvantage here, as they have to physically spin to reach a location. Solid state drives, however, are nothing but circuitry with no moving parts, so access times should be much improved. Using a utility called Seeker, we found the Rally2's access time to be a mere 6.63 milliseconds. That's nearly three times as fast as my laptop's hard drive at 18.43 milliseconds. It seems we've found another advantage the Rally2 and solid state drives have over spinning drives. On a side note, running the Rally2 on my desktop gave an access time of 3.79 milliseconds. I'm not sure how to account for this, aside from an increase in processor speed and possibly a difference in the USB controllers.

As a final test, we chose to use the Linux command dd to measure sustained write speeds. dd is a common UNIX program whose primary purpose is the low-level copying and conversion of files (per Wikipedia). We simply read from /dev/zero (which outputs zeroes) and wrote 1GB of it to the drive.
# time dd if=/dev/zero of=/mnt/disk/test bs=1024 count=1000000
This is where if is the input file, of is the output file, bs is the block size, and count is the number of blocks to copy. We're writing approximately 1GB of data to the drive, which left us with a 14 MB/sec write speed. Not too shabby at all; compare this to the 11 MB/sec write speed of my laptop's hard drive. Impressive, right? Warning: make sure of is set to a new FILE. If you point it at a raw device such as /dev/hda, dd will overwrite data starting at the beginning of your disk. Say goodbye to your partition tables!
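One caveat worth keeping in mind: by default, dd reports completion once the data has been handed to the kernel, so some of it may still be sitting in RAM rather than on the drive. A common trick is to time dd and sync together so that buffered data counts against the clock (a minimal sketch using our mount point from above):
# time sh -c "dd if=/dev/zero of=/mnt/disk/test bs=1024 count=1000000 && sync"
Now that we've seen what a single drive is capable of, what do you think the results would be if the drives were put into a software-based RAID0 configuration? Would we see performance gains that rival the speed of a traditional disk? Let's hope so.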

OCZ Rally2 in RAID0

In some tests, we've seen Linux's software RAID capabilities outperform those of dedicated hardware solutions. Additionally, the only way we can create a RAID configuration with two USB drives is through software. So, for our testing purposes, we're going to use the mdadm command to create a RAID0 array across our two drives. In RAID0, data is split up and written across both drives, allowing for faster read and write speeds.

Preparation

To avoid potential errors, we used fdisk to create a partition on each drive, and we chose the XFS filesystem for the array since it handles large files well. With those steps completed, we were ready to create our RAID device.
# mdadm --create /dev/md1 --level=0 --raid-devices=2 \
    /dev/sda1 /dev/sdb1 --auto=yes
# mount /dev/md1 /mnt/usb
Looking a little closer at the mdadm command, the options we specified are fairly self-explanatory. --create specifies which node in the /dev filesystem you want to create the array at; the md* series of names usually refers to RAID devices. --level specifies what kind of RAID you want to create. We're not going to go into the other types here; just know that RAID0 usually gives the fastest results with the least amount of failure protection. --raid-devices specifies how many devices you're using (followed by the actual devices), and finally, --auto=yes creates the device node for you if it doesn't already exist, which is quite useful for USB drives that come and go.
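For completeness, here's the whole preparation sequence as a sketch, including the mkfs step that has to happen before the array can be mounted (device names and the mount point are from our setup; adjust to match yours):
# fdisk /dev/sda              (create one partition spanning the drive)
# fdisk /dev/sdb              (same for the second drive)
# mdadm --create /dev/md1 --level=0 --raid-devices=2 \
    /dev/sda1 /dev/sdb1 --auto=yes
# mkfs.xfs /dev/md1           (put the XFS filesystem on the array)
# mount /dev/md1 /mnt/usb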

Benchmarking RAID0

Now that the array is created and ready to be tested, we're going to run it through the usual benchmarks we used earlier.
# time dd if=/dev/zero of=/mnt/disk/test bs=1024 count=4000000
4.1 GB copied, 177.541 seconds, 23.1 MB/s
23.1MB/sec is a big improvement over the single drive's 14MB/sec, roughly 65% faster. Since data is being written to two drives at once, we're seeing faster speeds as a result. Next, we're going to use Seeker to find the access time of the array.
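Seeker is a tiny standalone benchmark that simply takes a device as its argument; assuming you've compiled it into the current directory, the invocation is just a sketch like this:
# ./seeker /dev/md1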
Results: 177 seeks/second, 5.64 ms random access time
That's a full millisecond faster than the single drive. It may seem like a small improvement, but when you consider that your average 7200RPM hard drive's access time is in excess of 12 ms, it looks much bigger. Unfortunately, I didn't have a chance to test the RAID setup on my desktop, so I can't verify whether the times are faster there as well. If you recall, my desktop showed a single drive having an access time of 3.79 ms, versus this machine's 6.63 ms. hdparm also gives us a jump in performance:
RAID
Timing Cached Reads: 495.49MB/sec
Timing Buffered Reads: 37.44MB/sec
We're seeing mixed results here. The cached reads are down, but we think that's just overhead from running the RAID; there's nothing horribly wrong with this speed. On the other hand, we're seeing approximately a 60% performance gain in buffered reads! That's the kind of performance jump we want to see. We wanted to see if we could push this any farther, but hdparm doesn't give us many options for a USB device. The one option we found that gives us a boost is the read-ahead, which changes the amount of data read in advance. The default setting on our system was 512, and we bumped it up to 1024 and then 2048 to see if we gained anything.
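With no value attached, the -a flag simply reports the current read-ahead setting, which makes for a quick sanity check before you change anything:
# hdparm -a /dev/md1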
# hdparm -a1024 /dev/md1
Cached = Same
Buffered = 41.12MB/sec

# hdparm -a2048 /dev/md1
Cached = 35.65MB/sec
Buffered = 42.65MB/sec
As you can see, buffered reads have gained more than 80% over a single drive, approaching double the speed. This applies strictly to sequential reads, but if you can squeeze out extra performance, why not? If you work with large files on the drive especially, you'll appreciate the faster read-ahead. This setting won't, however, affect write speeds.

As a last benchmark, we wanted to see the worst-case scenario for our drives. To do this, we mounted the RAID device with the "sync" option, which takes RAM out of the equation: data is written straight to the device rather than being buffered through RAM for speed. For example, have you ever pulled a USB drive from a Windows box without properly unmounting it? Chances are you lost some files you'd recently saved. Although the file-transfer progress dialog had closed, some data was still being written to the drive from RAM, so you interrupted the transfer and corrupted whatever you'd just written. With the sync option, the dialog won't close until the data is actually on the drive.
# mount -o sync /dev/md1 /mnt/usb
Let's use the dd command to write a gigabyte of data to the drive.
1.0 GB copied, 125.706 seconds, 8.1 MB/s 
That's a giant drop in performance, but you're unlikely to come across it unless you're already utilizing all your RAM for other purposes.
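As an aside, if you'd rather not remount with sync, newer versions of GNU dd can force the flush themselves: conv=fdatasync issues a single flush at the end of the copy, so the reported time includes getting the data onto the device (a minimal sketch using our mount point from above):
# dd if=/dev/zero of=/mnt/usb/test bs=1M count=1024 conv=fdatasync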

Conclusion & Possibilities

Given our benchmarks, we can safely say that using USB drives in a RAID0 can bestow upon you a solution faster than traditional hard drives, given the right hardware. From writing, to reading, to access times, our RAID0 device outperformed the 7200RPM hard drive inside my laptop, and probably outperforms many other ATA-based drives out there. What can you do with a RAID0 device like this? If you've got a database below 4GB in size, I'd recommend running it from these drives; you're going to see huge gains in response times and read speeds. Queries inserting data may lag behind traditional drives, but we feel that in some cases this setup will outperform them. We just wish we had the resources to verify this claim. Additionally, Vista's ReadyBoost seems to be all the rage these days, and you don't have to feel left out in Linux: compared against spinning drives, this setup would make a fantastic swap partition. If I didn't have several gigs of RAM in my box already, you can bet this is what I'd be doing.

The idea of this article was to explore solid state storage options and give an idea of what we'll be in for once solid state hard drives are upon us. Based on our benchmarks, we believe solid state drives will eventually take the performance crown from current spinning models. We're already seeing faster access times, and we imagine read and write speeds will be better in some respects than others. SSD manufacturers have already promised speeds faster than what our RAID0 has given us; if we get our hands on an SSD in the future, you can expect an article comparing those claims to what we've found.
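If you want to try the swap idea yourself, the setup is just a couple of commands against the array (a sketch, assuming the /dev/md1 device from earlier with nothing on it you care about):
# mkswap /dev/md1
# swapon -p 10 /dev/md1   (-p gives it higher priority than slower swap devices)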

Sean Potter

Editor-in-Chief
I've been a dedicated Linux user for over two decades, and have been building computers and servers for even longer. My professional career has taken me down the path of simultaneous systems administration and web development, which allows me to constantly strengthen my Linux-fu.
