Battleship Mtron


Author: Dominick V. Strippoli


"Solid State Raid 0 Performance Explored using the Mtron Pro 16GB SSD"

As a prerequisite to this article, reading last week's Mtron Professional 16GB Solid State Drive review will give you great insight into current high end solid state technology. Thanks to our good friend Shawn at, we had an additional 7 drives shipped overnight for this review. So, as you can see in the "Battleship Mtron" picture above, our test bench had a total of 9 Mtron 16GB SSDs (Solid State Drives) in Raid 0. As you can imagine, a drive setup like this can cost upwards of $7,000 at the present time. We hoarded and jumped onto this technology fast at NLH. We decided, heck, why not try to test out the theoretical throughput maximums of these SSDs? Yes, I know exactly what you're thinking: who in their right mind would have a drive setup like this? This is what we do here at Next Level Hardware: we take hardware to the absolute maximum. When something is already fast enough, we try to make it faster.

Before we begin with the review at hand, here are a few short takeaways from last week's Mtron 16GB single drive review.

-The Mtron 16GB Pro SSD became the fastest SATA drive in the world in read operations and general usage, as compared to the WD Raptor 150. The drive produced a staggering 111 MB/s sustained read and a 0.1 ms access time. Read performance in real world scenarios was boosted by incredible multiples over the Raptor, while NAND flash based sustained write and short write operations still trailed the WD Raptor 150 by up to 23% and were the only negative of the article. Mtron highly recommends using the NVIDIA 680i chipset or a pure hardware Raid controller for maximum performance with the drive. There is an apparent Intel ICH9R throttling issue, and the drive's sustained transfer is capped at 81 MB/s when using Intel motherboards. Current pricing on the Mtron Pro line is still stuck at $50 per gigabyte, or $799 per drive. Ouch!-

Today we are going to be comparing and scaling the Mtron 16GB Pro under Raid 0 on a pure hardware Raid controller. Throughout the review we will have an assortment of different drive types and controllers at our disposal, and we will come across bandwidth limitations that we never knew existed until today. I must say, this drive setup has definitely made my PC end-user experience incredible. One word currently popping into my head to describe it is: Ludicrous!! :)

Our test bed consists of:

Intel QX9650 Processor, Corsair Dominator DDR3-1800, Gigabyte X38T-DQ6, Sapphire HD 2900 XT, Silverstone OP1000

Raid Controllers Used: Areca ARC-1220, Areca ARC-1231ML

Alternate HDD's Used: Maxtor DiamondMax 300GB IDE, WD Raptor 150 SATA

For our first boot-up we obviously chose to start off with the lightest combo: a 2 drive Raid 0 setup on the Areca ARC-1220 hardware Raid controller. Our HDTach benchmark results displayed an almost perfect 2X scaling multiple with the drives. As you can see, our sustained read was a hair under 240 MB/s (two drives x 120 MB/s), which would be absolutely perfect Raid 0 scaling.
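As a sanity check, the ideal scaling math is simple enough to sketch (a back-of-the-envelope model, not a benchmark; the 120 MB/s per-drive figure is Mtron's rated spec from our single drive review):

```python
# Ideal RAID 0 read scaling: with perfect striping, sustained read is simply
# the number of drives multiplied by the per-drive sustained read rate.

def ideal_raid0_read(drives: int, per_drive_mb_s: float = 120.0) -> float:
    """Ideal (perfectly scaling) RAID 0 sustained read in MB/s."""
    return drives * per_drive_mb_s

print(ideal_raid0_read(2))  # 240.0 -- matches the "hair under 240 MB/s" measured
print(ideal_raid0_read(5))  # 600.0 -- what five drives should do, in theory
```

Real arrays rarely hit this number exactly, but as you will see, these drives come remarkably close when the controller is not the bottleneck.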

Now comes the interesting part, where I promised to tell you about bandwidth limitations with these drives. The Mtron Pro SSDs are so powerful (speaking in terms of read performance and IOPS, input/output operations per second) that current generation hardware Raid controllers are getting stressed to their complete maximum. Since these hardware Raid cards are inserted into PCIe 8X slots and run with either full 4x or 8x lane compatibility, the cards do have plenty of theoretical bandwidth. However, the onboard processor differs between the midrange ARC-1220 and the high end ARC-1231ML. The card we were initially using for the review was an Areca ARC-1220 with the 400 MHz Intel IOP333 processor. Take a quick look at our next HDTach shot of 5 drives in Raid 0 and then we can do some more explaining:

Since we already know that these Mtron SSDs have the theoretical capability to scale in almost perfect multiples under Raid 0, something is definitely wrong with the picture above. Five drives put out only 386 MB/s sustained read when we should easily be anywhere from 550 to 600 MB/s. After countless hours of research on the Areca ARC-1220, I finally stumbled across a gentleman's very informative post on one of the hardware review forums explaining the theoretical throughput maximum of the ARC-1220 controller. The limitation happens to be right around 400 to 450 MB/s on the 1220. I had one of my suppliers overnight me an Areca ARC-1231ML and I junked the 1220 immediately. The ARC-1231ML is an otherwise identical Raid controller; the differences are upgradeable cache, 12 SATA ports, and, the number one reason for upgrading, the high end 800 MHz Intel IOP341 processor onboard. I simply plugged my existing array into the new controller and BOOM. Look what magically appeared:

Right off the bat, using the Areca ARC-1231ML and the same 5 drives, sustained read went up to 608 MB/s and burst jumped a couple hundred points to 1200 MB/s. This means our old controller had us capped at 400 MB/s of throughput and took an extra 200 MB/s of untapped power away from the Mtron units. However, knowing my full intention of achieving close to 1 GB/s throughput with 9 or 10 drives, I did some research on the ARC-1231ML as well. It turns out not many people really run into this kind of problem using current generation mechanical HDDs; single consumer level Raid controllers are not usually expected to scale to 800 to 1000 MB/s of sustained throughput. But the only information I could find on the 1231ML led me to believe it runs out of steam right around 800 MB/s. So, again, this was more bad news for me, but I decided to continue on with my testing anyway. Our final drive setup was the extremely expensive, yet impressive, 9 X Mtron 16GB Professionals in Raid 0. Here is our bandwidth shot using HDTach:

Again you can see that we have hit another limitation, this time on the expensive and high end Areca ARC-1231ML. But this time I am not mad, nor curious, nor doing any more research on the subject! Our limitation is once again a capped-out controller processor, this time the high end, enterprise-praised IOP341. With an uncapped controller we should theoretically be at 1100 MB/s sustained read right now, which would uncork an additional 300 MB/s out of our current setup. This article will have to make do with only 830 MB/s sustained read. :)
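Putting the two controllers side by side, the behavior we measured fits a simple clipped-scaling model (a rough sketch with our own measured numbers plugged in, not anything from Areca's documentation; 121.6 MB/s per drive is what the drives actually delivered on the 1231ML):

```python
# Observed behavior: array throughput is the ideal striped read, clipped by
# whatever ceiling the controller's I/O processor can sustain.

def array_read(drives, per_drive=121.6, controller_cap=float("inf")):
    """Sustained read (MB/s) = ideal RAID 0 read, clipped at the controller cap."""
    return min(drives * per_drive, controller_cap)

print(array_read(5, controller_cap=400))  # ARC-1220 (IOP333): stuck near 400 MB/s
print(array_read(5, controller_cap=830))  # ARC-1231ML (IOP341): the full 608 MB/s
print(array_read(9, controller_cap=830))  # ARC-1231ML: 9 drives hit the 830 cap
```

The 386 MB/s we saw on the ARC-1220 sits just under the model's 400 MB/s clip, and the 9 drive result lands right on the 1231ML's ceiling, which is why adding drives past that point buys nothing.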

Later on in this very article you will see how the 9 drive cap affects high level server performance using IOMeter, but for now let us move into synthetic and real world performance testing. We will follow a very similar article outline to our single drive review while adding the rest of the drives into the test mix. Our synthetic testing is self explanatory, but as usual for my real world testing I always like to include a small disclaimer: Next Level Hardware uses the traditional stopwatch timing technique for real world performance measuring. So, although our results will be as close to exact as humanly possible, you always have to factor in a slim margin of error.

Our first test will be booting Windows Vista Ultimate Edition 32-bit. The timed reading you see in the screenshot is the average of 5 startups and shutdowns on the drives. Vista boot time is measured from the moment the first bar moves on the Vista screen, and timing is stopped when the hourglass cursor finishes loading services/resident programs on the desktop.

Again, as stated in the single drive review, you can clearly see that sheer random file reads, access time and drive latency are what govern operating system boot speed. We are seeing identical boot loading times when scaling under Raid 0 as compared to the single Mtron 16GB drive.

Single Drive Vista Boot Up Video

I have always liked analogies relating cars to computers, because drag racing happens to be a huge interest of mine outside of the technology and hardware world. Application loading and boot performance received the greatest benefit from the single Mtron, just as I predicted. The largest benefit of swapping your single mechanical rotating HDD for an SSD comes from random file reads and app loading. But when raiding SSDs you are only adding horsepower, not torque, if you can follow the logic. Here it comes: horsepower is the speed at which you hit the wall; torque is how far you blast through the wall and how much damage you inflict on it. In drag racing, torque is what gets you off the starting line and through the first 60 feet of the race; horsepower is what wins the race up top. Now place that analogy into computers: imagine torque = access time/latency and horsepower = sustained read. A single SSD will give you below 0.1 ms access time, so your torque is going to create the snappy feeling and instantaneous file loads. Even though you are adding additional horsepower (more drives in Raid 0), your torque remains the same. So unless you are loading apps/games with a few heavier duty file loads (i.e., larger chunks of files being loaded, not small little chunks), the extra horsepower only benefits the load operation in that specific instance. For the most part the Raid 0 array scaled incredibly and Windows feels much nicer as a total package, but specific game/app loads are not something you should be jumping for joy about in a Raid 0 array, and definitely not why you should be adding that second Mtron to your arsenal. However, as discussed previously, a single Mtron 16GB SSD will improve OS boot speed by up to 130% in some cases when using Vista.
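For the curious, the analogy can be sketched as a toy load-time model (entirely illustrative; the file count and data size below are made-up assumptions, not measurements from any of our tests):

```python
# Toy model of the horsepower/torque idea: loading pays one access-time
# penalty per file (torque) plus a streaming cost for the bytes (horsepower).
# RAID 0 multiplies the streaming rate but leaves the latency term alone.

def load_time_s(files, total_mb, access_ms, read_mb_s):
    """Approximate load time: per-file seek penalty + bulk transfer time."""
    return files * (access_ms / 1000.0) + total_mb / read_mb_s

# Hypothetical load of many tiny files: mostly latency-bound.
single = load_time_s(files=5000, total_mb=20, access_ms=0.1, read_mb_s=120)
raided = load_time_s(files=5000, total_mb=20, access_ms=0.1, read_mb_s=600)
print(round(single, 2), round(raided, 2))  # 0.67 vs 0.53: 5x the throughput,
                                           # only a modest load-time gain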

Our next series of tests will all be based on synthetic performance testing. The first measurement will be recorded using a program called HDTach by Simpli Software, a tool that measures raw hard drive sustained read and access time.

As you can see, the raw power of the Mtron Solid State Drive is apparent in HDTach. It displays an average read increase of almost 30 MB/s over the mechanical WD Raptor 150 in a single drive configuration, and the scaling under Raid 0 is just astounding. Scaling 5 drives to 608 MB/s means that, given a proper pure hardware Raid controller setup, these drives scale in 121.6 MB/s multiples, actually outperforming the Mtron rated spec of 120 MB/s sustained read. On the other hand, while the 9 drive Raid 0 setup's 830 MB/s sustained read is a pretty number that will pounce on anything else currently on the market, it is just not what we were expecting. Theoretically these drives have an additional 300 MB/s untapped and just waiting to be ripped out, but unfortunately we would need an ultra premium enterprise level controller capable of over 800 MB/s of throughput. Once again, the most important number that you should be looking at is random access time. NAND solid state will increase the snappy feeling of random file reads by over 80 times when compared to the WD Raptor 150.

Just to confirm these numbers, I have also used Lavalys Everest Diskbench to complement HDTach. Later on we will use the IOMeter test suite as well for server level performance.

Again, in a single drive configuration, the performance difference measured in Everest Diskbench displays a full 90% increase in random read performance and a 40% increase in linear read at the middle benchmark point. Not to mention, our Raid 0 scaling figures look pretty much identical, verifying the validity of our initial HDTach benchmark.

The next synthetic benchmark is the PCMark05 Hard Drive Test Suite. For this test, Anand saw much higher single drive performance margins with his older MSD unit than we did with the new Professional MSP unit in this review. The only difference was that Anand used the NVIDIA 680i chipset, which Mtron recommends. Our problems here lie in throughput and SSD performance on the Intel motherboard. Based on feedback from my last article, new Mtron users reported that the drives sometimes performed well and sometimes produced lower numbers than expected, and on rare occasions the BIOS took 2 minutes to recognize the drives every time the computer was turned on. These issues are simply nonexistent on the NVIDIA 680i. From my personal experience using a Gigabyte Intel X38 motherboard, I can recommend the purchase of an Mtron drive with no compatibility issues whatsoever. But you will not have the best performance on an X38 without either a pure hardware controller or actually going out to the store and buying an NVIDIA 680i board. I really hope Intel works out the solid state Southbridge bugs ASAP so I can ultimately get a cap free test suite. Because of my Intel Gigabyte X38 motherboard, I am pretty sure these PCMark scores are slightly capped for the single drive.

Even without the NVIDIA 680i chipset, we still see an average single drive HDD performance increase of 65% over the WD Raptor 150 on this test suite. But back to uncapping the performance with a pure hardware controller: as you can see, with a two drive Raid 0 setup the HDD General Usage score did not merely double, it increased 5X over the single drive. That confirms to me that Intel's X38 is indeed holding back the single drive performance figures quite significantly. Even more incredible is the 9 drive setup on the Areca ARC-1231ML hardware controller: synthetic PCMark05 HDD General Usage is a whopping 16X faster than a single drive.

We are now moving on to the server level testing portion, using a widely known benchmarking tool called IOMeter. IOMeter is used to find specific holes and test all aspects of your file system completely, both single threaded and multi-threaded (server level applications). The measurement is IOPS, or input/output operations per second, the standard measurement for all enterprise systems. Since this is the Mtron Professional drive (built with a focus on server apps), we are going to be testing with a specific focus on server level/multi-threaded performance. Our test configuration will display the standard database/web server benchmarks of random (non sequential) reads at 4k and 8k, as well as maximum IOPS using a 512 byte (non sequential) access specification. To get these results, a minimum 10GB test configuration is stored on the volume and 20 outstanding I/Os are selected. To give you an idea of the scope of read server performance with this drive, I have included a Maxtor DiamondMax IDE as well as a WD Raptor 150 in my testing.

We have now come to another portion of the article where I am going to have to explain why my drives stopped scaling. The nine drive test results are not quite as good as we would have hoped. With small random reads, multi-threaded performance is only slightly higher than the 5 drive setup, which implies the controller card does not like doing over 50K IOPS. More importantly, it is quite obvious that these SATA controllers have never had such an array strapped to them before. With a normal mechanical SATA, SCSI, SAS, etc. drive setup, the controllers are feeding small random read multiples of 190 to 300 IOPS per mechanical drive. With a single solid state Mtron drive, they are feeding close to 70X more operations per second, 16,000 to 19,000 IOPS from a single drive! So you can imagine the stress on the controller when scaling 9 Mtron drives as compared to 9 mechanical drives. This is very disappointing, as I was hoping to achieve close to 100,000 IOPS at 512 bytes and over 85,000 IOPS at 4k with this Raid 0 setup. The good news is that with the right Raid controller, properly configured for a true enterprise setup, I am almost 100% certain that this drive configuration will do very close to 100,000 IOPS at a random 512 byte read, and definitely close to or over 85,000 IOPS at a random 4k read. To give you an idea of just how fast these drives really are in a server level random read configuration, we will use 8k as an example.

A 15K SCSI Seagate Savvio U320 drive will perform at 252 IOPS on a random 8k read using 20 I/Os. Consider that a standard Raid 1 redundancy setup on a simple web server. A single Mtron will perform up to 33X faster when random reading from the same file system on that identical server. Even more impressive, and incredible to say the least: on your standard web server using an 8k random file read structure, a 9 drive Mtron Professional 16GB enterprise setup will perform up to 212X faster than a 15K Seagate Savvio SCSI on that identical server. Do we need any more confirmation that these drives are just downright sick? Although they perform astoundingly in random read operations, random write is still very sub-par on flash technology. Even though we are not benchmarking random write IOPS, I will give you some quick insight. Write performance is not yet a perfect and refined process on NAND flash, and you will not have a drive that writes file operations as well as a high end U320 15K SCSI or SATA 10K setup. There is a company called EasyCo, in PA, USA, that I have been talking with directly about this NAND flash write issue. They are working on a process called MFT technology and offer a simple MFT driver that claims to increase random write performance on a single drive to up to 15,000 IOPS. Doug Dumitru explained to me that this technology will take your standard Mtron 16GB Professional drive and turn it into an enterprise behemoth. Anyway, a short IOMeter sum-up: the single drive performs up to 33X faster than SCSI/SAS mechanical drives in a web server 8k random read configuration, and Raid scaling increases IOPS dramatically.
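Working the 8k numbers out explicitly (the 252 IOPS Savvio figure and the 33X/212X multiples come from our testing; the products below are just arithmetic on those measurements):

```python
# The 8k random-read comparison in absolute terms.

savvio_8k_iops = 252           # 15K Seagate Savvio U320, 8k random read, 20 I/Os
single_mtron_8k = 252 * 33     # "up to 33X faster" -> 8,316 IOPS
nine_mtron_8k = 252 * 212      # "up to 212X faster" -> 53,424 IOPS

print(single_mtron_8k, nine_mtron_8k)  # 8316 53424
```

Roughly 53,000 IOPS from nine SATA drives also squares with the earlier observation that the controller tops out somewhere past 50K IOPS.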

Because of IOMeter's high reliability as an enterprise benchmark tool, I chose to use it to test sustained read and write performance as well. As you can see, a single Raptor will outperform a single Mtron 16GB Pro in general write operations by 2 MB/s in this example. Raiding two Mtron drives will completely eliminate any and all write issues on this drive. To me personally, this is the single most beneficial reason for raiding 2 Mtron drives in a gaming/overclocking consumer desktop: you substantially eliminate the current NAND general write bottleneck.

Our next benchmark is Windows Vista specific. PCMark Vantage is a newly created tool from Futuremark to measure the complete performance of your Vista operating system. Although Vantage is a synthetic result, I still believe it will give you a rough idea of how your computer performs compared to many others around the world. The funny part about this test is that these drives are indeed the fastest drives in the world: shortly after finishing my Vantage testing, I saw my "Solid State Test Rig" at #1 on Futuremark's Hall of Fame listing. :)

Here is a quick Vantage comparison shot along with the Raid 0 scaling.


We will now move on to non synthetic "real world" testing. The first test is game load testing. Each game was timed for three separate readings, with a Windows reboot between each reading. The times you see are the average of the three readings.

As you can see across all of the games, the single Mtron 16GB averages a load speed increase of 68% over the Western Digital Raptor 150. The speed increase with this solid state drive is truly incredible. Now, please remember the horsepower/torque analogy that I discussed earlier in this article: even though we are adding more horsepower (more drives and sustained throughput), latency and random access time (torque) remain the same. Quake 4 displayed an identical load time, telling me that this game loads a large number of small blocks of files. However, games that do a little more large file seeking on the drive displayed minor load time improvements while scaling in Raid. FEAR is the only game that actually scaled tremendously with more drives: when I loaded up FEAR on the 9 drive setup, Level 1 was pretty much loaded as soon as I clicked the mouse. Pretty incredible, to say the least. Based on all of my results, not to mention having the ability to personally get a taste of all of these different test setups, I am going to say the ultimate current choice in SSD technology for gaming is a 2 X 16GB Mtron Pro Raid 0 setup.

The next loading test is based on the well known application Adobe Photoshop. I am using the older version 7.1. We double-click a 2MB JPEG file on the Windows Vista desktop to start timing, and we stop timing when the file finishes loading and Photoshop is fully idle.

This really is a bad test for these drives, because load speed is already fast enough on the mechanical WD Raptor. Although we saw a tremendous speed increase when moving to the Mtron SSD, we have pretty much hit a load time wall; I don't think any increase in file performance will speed up this application's load any more than it already is.

As promised, we are now moving on to more write intensive real world testing. The next test is a combined read/write operation: various applications are timed for total installation time using the progress bar indicator. The install programs are launched from the desktop and installed to the Program Files directory, measuring both read and write performance at the same time. In last week's single drive review we showed the WD Raptor 150 performing write operations much more cleanly than the single NAND flash based Mtron device.

This last test is a great tool, because combined read/write operation is basically a general performance spec for operating system usage: the faster your drive setup is here, the better your computer is going to perform overall when working in the OS environment. This is the test where the Raid 0 array really shines and puts a butt whooping on every other rig currently out on the market. Taking a look at the Photoshop and 3DMark03 installation times, we see that by adding additional drives in Raid 0, we once again eliminate the inherent NAND flash write speed issues. Crysis is a huge 6GB installation and required just about 3 minutes 20 seconds to install on the single Mtron 16GB Pro drive. In comparison, the mechanical WD Raptor 150 installed Crysis around 20 seconds faster than the Mtron unit, displaying once again the NAND write problems at hand. However, throw a few of these puppies in Raid 0 and you have one of the fastest rigs on the planet: 9 drives in Raid 0 cut the total Crysis installation time down to less than a minute (57 seconds), compared to 3 minutes 20 seconds on the single Mtron. That is a HUGE difference in performance. So, based on these intriguing results, I once again vote that a 2 drive Mtron 16GB Pro Raid 0 setup is going to be the hot ticket for a gaming/overclocking/high end computer.
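To put the Crysis timings in throughput terms (simple arithmetic on the measured times; since an install is a combined read/write operation, treat these as rough effective rates, not pure write specs):

```python
# Effective install throughput for the 6GB Crysis install, from the timings above.

def install_rate_mb_s(size_gb: float, seconds: float) -> float:
    """Effective throughput in MB/s for an install of size_gb completed in seconds."""
    return size_gb * 1024 / seconds

print(round(install_rate_mb_s(6, 200), 1))  # single Mtron, 3m20s: ~30.7 MB/s
print(round(install_rate_mb_s(6, 57), 1))   # 9-drive RAID 0, 57s: ~107.8 MB/s
```

Roughly a 3.5X effective gain, which shows how thoroughly the array buries the single drive's NAND write handicap.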

Our last real world test will be a standard Windows Vista 1 gigabyte folder copy (from drive to drive). We take a folder with over 100 individual files totaling 1 gigabyte and simply copy and paste it to a new location on the drive.

Single drive folder copy performance was identical between the WD Raptor 150 and the Mtron 16GB Pro. But add a second Mtron 16GB Pro into the mix and you will cut your combined read/write transfer time almost in half. Add 9 drives, and copy/pasting 1GB takes under 4 seconds flat. These drives are incredible, people!
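For perspective, the effective rate of that 9 drive copy works out as follows (straightforward arithmetic on the timing above; a copy both reads and writes on the same array, so the drives are actually moving about twice this much data):

```python
# Effective copy rate for the 1GB drive-to-drive folder copy on 9 drives.

copy_mb = 1024              # 1 gigabyte folder
nine_drive_seconds = 4      # measured: just under 4 seconds
print(copy_mb / nine_drive_seconds)  # 256.0 MB/s effective copy rate
```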

Power consumption is still another incredible feat of these solid state drives. The Mtron unit uses a maximum of 2.95 watts at load in a single drive configuration, compared to the 10 watt full seek load of the WD Raptor. So, for the roughly 26.5 watts of load it takes to power 9 Mtron drives in Raid 0, you would only be able to power about two and a half Raptors. Talk about efficiency!
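Worked out explicitly (using the load figures above; strictly speaking, the 9 drive budget covers only two whole Raptors' seek load):

```python
# Power-efficiency arithmetic: 2.95 W per Mtron at load versus a 10 W
# full-seek load per WD Raptor 150.

mtron_w, raptor_w = 2.95, 10.0
nine_mtron_load = round(9 * mtron_w, 2)          # 26.55 W for the whole array
raptors_in_budget = nine_mtron_load // raptor_w  # whole Raptors in that budget
print(nine_mtron_load, raptors_in_budget)        # 26.55 W buys just 2 full Raptors
```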

In conclusion, not only is a single Mtron 16GB Professional drive the fastest consumer SATA drive in the world, but we were able to eliminate the NAND flash write issues completely by adding a second drive in Raid 0. My complete "Holiday Wish List" recommendation for an ultra high end gaming/overclocking desktop is definitely going to include two of these bad boys in Raid 0. Not only are you going to obtain a sustained read of close to 240 MB/s along with near zero latency, but you are going to achieve "true" IOMeter measured sustained write speeds of 152 MB/s. Back to the topic of our absolutely sick and disturbingly fast 9 drive Raid 0 setup: what we have here is an ultimate enterprise performance solution. The Pro drive comes with a 5 year manufacturer warranty which is completely valid in the enterprise market, and the manufacturer rates the lifespan of these drives at 140 years at 50GB of reads/writes per day. The only thing that bothers me in an enterprise scenario is that we really do not have any real world longevity data on SSDs other than what the manufacturer puts on paper. Real world results from enterprise gurus such as Dave Graham indicate that NAND based SSDs actually deteriorate rather rapidly in a server environment, so a large scale corporation would be much better suited waiting for more longevity data before implementing any kind of setup like this in a server. As long as you have a very thorough backup solution and you do not mind sacrificing some redundancy for more performance, these drives will simply annihilate the SCSI/SAS and SATA Raid and single drive competition on the market today. With many new SSD products on the horizon, including the IO Drive by FusionIO, not to mention the already available and super fast RamSan, 2008 is going to be a prosperous and beneficial year for storage technology. Cheers.

Email the Author: Dominick Strippoli


This drive can be ordered directly from

Volume Discounts Are Available