Microsoft Hyper-V 2008 R2 IDE vs. SCSI Performance

by Brandon November 30, 2009 02:46 PM

Virtualization has been a hot topic in tech for several years now and shows no signs of going out of style any time soon. I think there are some really great uses for it, such as server and application consolidation, but I also think there is a lot of hype and glamour surrounding the topic. Will virtualizing your servers really save you money and time? That depends, in my opinion. Only you know what applications you have and how well they currently perform. The only way you'll know whether virtual machines will help or hurt you is by testing.

The department I work for currently uses Microsoft Virtual Server 2005 R2 on top of Windows Server 2003 for the majority of our virtual machines. When Microsoft introduced Hyper-V with Windows Server 2008, we discussed upgrading, but ultimately held off on the new Microsoft virtualization solution. Hyper-V had many mixed reviews upon its debut, so we wanted to wait to see what later revisions could deliver. Also, at the time, Virtual Server 2005 met our needs just fine. Since VS2005 wasn't broken ... you get the idea.

Time has passed since then and Microsoft has recently released Windows Server 2008 R2 with an updated version of Hyper-V. There have been several improvements to Hyper-V since its initial release. Virtual Server 2005 has started to show its age, compared to all the other virtualization solutions now available, so now is a great time to do some testing of Hyper-V 2008 R2. [more]

As users of Virtual Server 2005, my co-workers and I have known that using SCSI virtual disks (with Virtual Machine Additions installed) offers advantages over using IDE disks in VS2005. Switching from the IDE to a SCSI interface made a noticeable improvement in virtual machine performance. When Hyper-V was originally released, however, booting off SCSI disks wasn't supported. "How version 1.0!" we all thought. Removing features from new products isn't new for Microsoft, after all. Well, booting from SCSI disks isn't supported in Server 2008 R2, either. I was disappointed, but this time the question "Does it really matter?" came to mind.

I decided to test this on one of our newer servers, freshly reloaded from scratch with Windows Server 2008 R2 Standard with Hyper-V. For the test, I'd create a new Windows 7 virtual machine with two extra drives attached. One drive would be connected to the virtual IDE controller; the other would be connected to a new virtual SCSI controller. Both drives would be the same physical size (30GB) and both would be created as fixed-size virtual disks.

The test platform specs are as follows.

Test Server:

  • HP ProLiant DL380 G5
  • Dual Intel Xeon E5450 quad core 3.00GHz CPUs
  • 14GB ECC memory
  • HP NC373i gigabit server adapter (1x 1Gbps connection)
  • Drive C is an HP Smart Array P400 controller (512 MB) with 2x HP 68GB 15k RPM SAS drives in RAID-1 (cache enabled)
  • Drive E is an HP Smart Array P800 controller (512 MB) connected to an external MSA50 drive array with 10x HP 68GB 15k RPM SAS drives in RAID-10 (cache enabled)
  • Windows Server 2008 R2 with Hyper-V installed on drive C
  • Virtual machine VHD files on drive E

Test Virtual Machine:

  • Windows 7 Enterprise RTM 32-bit
  • 2GB memory
  • Drive C (boot) is a 40GB fixed-size NTFS volume connected to IDE controller 0, port 0 on the virtual machine.
  • Drive E (IDETest) is a 30GB fixed-size NTFS volume connected to IDE controller 1, port 0 on the virtual machine.
  • Drive F (SCSITest) is a 30GB fixed-size NTFS volume connected to SCSI controller 0, ID 0 on the virtual machine.
  • UAC is turned off, no updates applied (fresh installation for testing)
  • No antivirus installed
  • Logged in as an administrator

Below are some screenshots of the virtual machine's disk configuration, along with the Hyper-V settings for the virtual machine.

[Screenshots: virtual machine settings and virtual machine disk configuration]

Disclaimer time. I've worked in the IT field for many years now and am knowledgeable in many areas of IT infrastructure and application development. I am relatively new to Windows Server 2008 R2 with Hyper-V, however. I'm also not a professional reviewer and benchmarker. I do my best to test, record, and interpret the results, but you may come to different conclusions. If you do, let me know (but play nice)! I also encourage you to repeat similar test steps if you have the time and resources. I'm doing this for fun and learning and didn't receive compensation from any product vendors (or their competitors!).

First round of tests: ATTO Disk Benchmark

To begin, I decided to go with good old-fashioned disk benchmarking. I searched Google for disk benchmarking software and found ATTO Disk Benchmark v2.45, available as a free download from ATTO Technology, Inc. I've seen this software used on various hardware review sites, too.

For each test, I set ATTO to use the following parameters:

  • Transfer Size: 0.5 to 8192.0 KB
  • Total Length: 2GB
  • Direct I/O checked
  • Overlapped I/O selected
  • Queue Depth 4

The tests were run three times in a row for each virtual drive. The IDE disk was tested first. The virtual machine was then rebooted before the SCSI disk was tested. Below are screenshots of each test. Look in the application title bar for the test iteration.

[Screenshots: ATTO IDE tests 1, 2, and 3]

[Screenshots: ATTO SCSI tests 1, 2, and 3]

Do these test results look strikingly similar to you, too? The trends for the SCSI and IDE disks are almost identical. To get a better visual comparison, I decided to plot the data in Excel. The graph below compares the average of the IDE read and write runs to the average of the SCSI read and write runs.

[Graph: IDE vs. SCSI performance]

From the graph, it's easy to see that IDE and SCSI reads/writes are right on par with each other. Sure, there are some differences here and there, but I think the margin of error in this testing method makes it too close to call. There were variances in each run of ATTO, so, for now, it's looking like these virtual disks perform virtually (pun intended) equally.

One thing I wondered about is the behavior when the tests hit the 512KB mark. Until then, reads/writes scale nicely. For both SCSI and IDE, read and write performance basically "meet" at the 512KB mark and then diverge, with priority clearly given to reads for transfer sizes greater than 512KB. Beyond that point, write performance tops out at about 200 MB/sec and holds steady, while read performance tops out at 500 MB/sec. I tried ATTO on the virtual host computer with the same settings and no virtual machines running and saw similar results. I then ran ATTO on some other computers, such as my laptop and desktop, and did not see that behavior: reads and writes on those machines stayed nearly equal after a 4KB transfer size.

I believe the cause of the performance differences, in this case, is the HP array controller's cache. Transfer rates in the hundreds of megabytes per second are pretty high, after all. The test is still valid, in my opinion, because you'll want your controller's cache enabled when running virtual machines in a production environment. In fact, the only other place I was able to replicate this behavior with ATTO Disk Benchmark was on a Lenovo X61 tablet with an Intel 32GB solid-state disk (SSDSA2SH032G1GN). On that drive, there was a similar separation of read and write performance, this time at the 4KB mark.

Second round of tests: Microsoft's SQLIO utility

I wasn't completely satisfied with the ATTO benchmark after reviewing the results. For one, I wasn't sure ATTO Disk Benchmark was necessarily the right tool for this scenario (SAS drives in RAID-10, virtual environment, etc.). Second, I wanted something that could differentiate between random and sequential testing and give the virtual drives a longer workout. The ability to import test results directly into Excel would be a big help, too.

A co-worker recommended a utility from Microsoft called SQLIO. The name is deceptive: the utility does not depend on Microsoft SQL Server, nor does it have much to do with SQL Server at all. It should not be confused with another Microsoft utility with a similar name. After some Google searching, I came across a really great tutorial, with examples and video, by Brent Ozar.

I used Brent's sample configuration batch file to run my tests. All I changed were the drive letters on the "-d" parameter. I followed the rest of his instructions and let the testing run over the weekend on the IDE and SCSI virtual disks. When I ran the USP_Import_SQLIO_TestPass stored procedure to populate the SQLIO_TestPass table from the imported data, I specified "Hyper-V IDE" or "Hyper-V SCSI" in the sixth parameter. I then imported the data into an Excel pivot table. The entire process was pretty easy to follow. Testing each disk took about six hours.
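For reference, SQLIO prints a short text summary after each test pass. Here's a quick Python sketch of my own (not part of Brent's scripts) that pulls the headline numbers out of that summary. The field names mirror SQLIO's output format; the sample values here are made up.

```python
import re

# Sample of the summary lines SQLIO prints after one pass.
# The layout matches the tool's output; the numbers are invented.
SAMPLE = """\
IOs/sec:  2052.79
MBs/sec:    16.03
latency metrics:
Min_Latency(ms): 0
Avg_Latency(ms): 3
Max_Latency(ms): 21
"""

PATTERNS = {
    "ios_per_sec": r"IOs/sec:\s+([\d.]+)",
    "mbs_per_sec": r"MBs/sec:\s+([\d.]+)",
    "min_latency_ms": r"Min_Latency\(ms\):\s+([\d.]+)",
    "avg_latency_ms": r"Avg_Latency\(ms\):\s+([\d.]+)",
    "max_latency_ms": r"Max_Latency\(ms\):\s+([\d.]+)",
}

def parse_sqlio(text):
    """Pull the headline numbers out of one SQLIO test pass."""
    results = {}
    for key, pattern in PATTERNS.items():
        match = re.search(pattern, text)
        if match:
            results[key] = float(match.group(1))
    return results

if __name__ == "__main__":
    print(parse_sqlio(SAMPLE))
```

Brent's scripts do the same job with a stored procedure and a pivot table; this is just a lighter-weight way to eyeball a single pass.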

Below is a pivot chart of the test results. The "128" values on the horizontal axis represent 128 outstanding requests and the "2, 4, 8, 32, 64" values represent the number of threads. I only included the "128" outstanding requests values because having them all in the graph made it unreadable. The numbers in the graph area are the "Max MB/sec" column.

[Chart: SQLIO results]

Overall, again, the performance of Hyper-V virtual SCSI and IDE disks is close to identical. The most important values are Max MB/sec and Max IOs/sec. For the random read and write tests, IOs/sec match almost exactly for both IDE and SCSI across all tests. I would attribute this to the controller's cache. On sequential reads, however, the SCSI virtual disk has a slight edge over the IDE disk: its Max MB/sec is about 10MB/sec higher than the IDE disk's for each run. Another interesting series to note is the Max Latency for sequential writes (shorter is better). As the thread count increased, the IDE disk's latency reached its best value at eight threads but climbed back up by 128 threads. The SCSI disk's latency, however, started low and increased almost linearly.

Conclusion

Whew, that was a lot of testing ... at least for someone who doesn't do that type of work all the time. In the end, I don't think you'll see a huge difference between virtual IDE and virtual SCSI disk performance in Hyper-V 2008 R2. It's true that the SQLIO tests revealed some slight advantages to virtual SCSI disks, but I believe you'd only experience this under very controlled situations.

What other kinds of tests could I have done? I'm sure there are plenty out there that I'm not aware of. I felt this was a good start, though, considering I couldn't find much out there regarding IDE vs. SCSI performance in Hyper-V. One additional test I thought of after I finished was placing a constant load on the virtual machine's CPU during the SQLIO tests. I've run out of time with this hardware, though, so maybe I'll save that for another day.

I hope you found this post helpful. If you have any questions, please leave a comment and I will answer as soon as I can. Thanks for reading!


Computers

Backups to the rescue!

by Brandon November 21, 2009 07:30 PM

This post is a friendly reminder regarding the importance of regularly backing up your computer. Data disasters happen. Are you prepared?

I recently had a chance to put my backups to the test. I have a secondary computer here that I use for games, fun, and general messing around. I use it a few times during the week, but mostly on weekends. This week, I happened to fire it up on Tuesday evening. My attention was drawn elsewhere after I initially pressed the power button. When I returned to find it hung on the disk detection screen, I knew there was a problem.

On this PC, I used a RAID 0 array for the C drive. For my non-technical readers, RAID is a technology that lets you combine multiple hard drives for speed, data redundancy, or a combination of both. In my case, I used RAID 0 to combine two physical drives into one logical drive. Basically, this means the operating system and applications treat the two hard drives as one. The advantages of RAID 0 are that it combines the capacity of the hard drives and generally increases disk performance by a noticeable amount.

The main disadvantage of RAID 0 is that if one of your hard drives fails, the data on the remaining drive(s) is useless. This is because RAID 0 splits your data, like your videos, pictures, songs, and operating system files, across the hard drives (two or more) in the array. If part of the data is gone, as when a hard drive fails, you're toast.
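For the more technical readers, here's a toy Python sketch (purely illustrative; real RAID striping happens in the controller or driver, nothing like this) showing how RAID 0 deals fixed-size chunks round-robin across drives, and why one dead drive leaves only useless fragments behind.

```python
def stripe(data, num_drives, stripe_size):
    """Deal `data` out in stripe_size chunks, round-robin across drives (RAID 0)."""
    drives = [[] for _ in range(num_drives)]
    for i in range(0, len(data), stripe_size):
        chunk = data[i:i + stripe_size]
        drives[(i // stripe_size) % num_drives].append(chunk)
    return drives

def rebuild(drives):
    """Reassemble the original data; every drive must be intact."""
    out = []
    for row in range(max(len(d) for d in drives)):
        for d in drives:
            if row < len(d):
                out.append(d[row])
    return b"".join(out)

if __name__ == "__main__":
    data = b"My vacation photos and operating system files..."
    drives = stripe(data, num_drives=2, stripe_size=4)
    print(drives[0])  # half the chunks live here...
    print(drives[1])  # ...and the other half here.
    assert rebuild(drives) == data  # fine while both drives work
    # If drive 1 dies, all that's left of your data is an unreadable half:
    print(b"".join(drives[0]))
```

Notice that neither drive alone contains anything usable; that's the trade-off you accept for the speed and capacity gains.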

Aware of the data loss risk, I used RAID 0 on this computer for the disk performance improvement. Having made the decision to go with RAID 0, I knew regularly backing up the C drive would be important. Keep in mind, regular backups are important regardless of whether you are using RAID. I believe they're especially important when using RAID 0, however, because this level of RAID does not provide any data redundancy. Other levels of RAID do, and I'd recommend them for storing important data. I won't go into the differences here, but you can check out the Wikipedia articles above or do some general Internet searching to learn the benefits of each RAID level.

The drive that failed was a Western Digital Raptor 150GB. When a hard drive fails, it can be time consuming to track down exactly which drive is the culprit in multi-drive systems. I had a good idea which one was at fault because the hard drives in this system are in individual trays, each with its own indicator light. The light on the faulty drive was lit solid while the computer was trying to detect the installed hard drives at startup. To verify this drive was bad, I removed all other hard drives from the system and downloaded the Western Digital diagnostic tools from their site. Each hard drive manufacturer generally offers a similar tool for helping you identify hard drive problems. I ran the tool and it verified the drive was bad. Check out the screenshot below. I ran the full scan, like the software suggested, and eventually received status codes (0222 & 0225) indicating the drive had failed.

I replaced the drive a few days later with an updated Western Digital model. I've had good luck with Western Digital in the past, though one friend of mine has sworn them off completely. Why go back to a manufacturer whose drive failed on me? The simple fact is that standard hard drives are mechanical, and thus will eventually wear out and fail, no matter who made them. I, for instance, tended to avoid Maxtor drives before the company was bought by Seagate. Ask ten tech guys what their favorite brand of hard drive is and why, and you'll get ten different answers. It's a lot like asking someone about their favorite kind of car.

For my backup solution, I chose to use Acronis True Image Home to back up to an external hard drive. I started using Acronis True Image a few years ago when it was at version 11. They recently released version 2010, which is what my latest backup was created with. I've used Acronis software many times at work to back up and restore data on various machines, but this was the first time I had to use it to recover personal data that would otherwise have been gone for good without a reliable backup. After booting the PC from the Acronis recovery CD, I selected the backup archive on my external hard drive and chose to restore it to the new internal hard drive. About an hour and a half later, I saw the message below.

Even before I started the restore, I was confident I'd get my data back. It's always a relief to see this screen, though! After that, getting back to normal was as simple as removing the Acronis CD and restarting the computer.

Lessons learned:

I got lucky because I took the time to set up a backup schedule and made the necessary adjustments to it throughout the years to make sure backups were getting created successfully. My backup solution, which was a USB external hard drive plugged into the computer I was backing up, worked out fine in this case. When creating backups, however, you need to consider all possibilities. Instead of an internal hard drive failure, where would I be now if the house caught on fire or if someone stole all my computer equipment?

With the wide availability of high-speed Internet service, many online backup service providers have emerged. Their goal is to back up your data, encrypted over the Internet, to a secured data center that's protected against fire/flood/theft (hopefully!), rather than an external hard drive sitting no more than 6 feet away from your PC.

Advantages to a service such as this include:

  • Safe, offsite storage of your data.
  • Having your data available on any Internet connected PC, secured by your account credentials.
  • The ability to retrieve older versions of your files (this varies by service provider).

There are some disadvantages, too:

  • It can take a long time to back up all your data over the Internet.
  • Limitations of the service provider, such as storage limits or platforms supported (Mac and Linux not supported by all).
  • Cost, which is usually a monthly or yearly fee.
  • Doesn't back up your entire computer (just photos, documents, etc.)
  • Generally limited to one computer backed up per account.

Despite the drawbacks, I think I'm going to look into a service like this. It's one thing to lose the hard drive of my secondary PC. I'd be pretty unhappy, though, if I lost all the digital photos I have from the last ten years or so. If I subscribe to one of these services I'll post a follow-up.

I kept the geek level to a minimum in this post so all readers, regardless of computer skills, can understand the importance of backups. If you have a technical question, feel free to ask it in the comments.


Computers

Team Fortress 2 Engineer Halloween Costume

by Brandon November 9, 2009 10:17 PM

[Photo: me as the TF2 Engineer]

Halloween is, perhaps, one of my favorite "holidays" of all time. Having a late October birthday, many of my childhood birthday parties were Halloween-themed. Now that I'm all grown up, it's great that people in their 20s, 30s and beyond can still celebrate this time of year in costume, albeit at a very different kind of party from when I was a kid.

The question "What's my costume this year?" is something I ask myself as each October approaches. I'll re-use a costume if I'm attending different Halloween gatherings than the previous year, but I like to mix it up a bit if I'll be seeing the same friends as before. Rather than buy a costume for 2009, I thought it would be fun to either make something or use things I already had. No slutty nurse for me this year!

The inspiration for this year's attire came from one of my all-time favorite video games, Team Fortress 2. Team Fortress 2 (or TF2, as it's commonly known) is a first-person shooter available for the Sony PlayStation 3, Microsoft Xbox 360, and PC. Trust me, though, you'll get the most enjoyment out of it on a PC. For those of you not in the know, TF2 is a team-based (duh!) online multiplayer game in which the character you choose has specific roles and abilities and serves as part of a team of several other players, opposing a team of similar characters. Teams are differentiated by wearing either red or blue and consist of real-life fellow human-player geeks at computers. No "bots" (computer-controlled players) allowed. I could go on, but you should read more here.

My costume for this year was the TF2 Engineer (aka "Engi"). I wanted to do this last year but already had a costume by the time I thought of it. You can find many other examples of TF2 costumes by searching Google. I couldn't find many blog posts on how people created their TF2 costumes, so I thought I'd put this out there for you.

I'll be up front with you. This crap was expensive! Approximately $200 was spent this season on my Halloween wear. If you're shocked by this, pause here, re-read the little blurb under the big "Nodnarb.Net" at the top of this site, and return.

The seemingly ridiculous-for-a-Halloween-costume cost was planned for and even had a purpose! My goal was to buy items that were 1) real and 2) re-usable in everyday life. I do a lot of handiwork around the house and have been wanting some of these items (for real!) for a while. A few of these items are a stretch for point No. 2, but, otherwise, my goals were met. Allow me to tell you about it.

The yellow hard hat and brazing goggles were bought from Amazon.com for about $8 and $8.25. These are the items that fall into the "more likely than not to be used at some near point in the future, probably" category. If I should find myself needing to enter a construction site or doing a little gas welding, I'll be covered. The goggles may even come in handy when using my Dremel tool. Goals one and two have been met, but just barely!

The stubble was 100% grown by me for absolutely free.

The red shirt was on sale at Kohl's for about $12 and the patches were made from felt fabric bought at Michaels for about $2. Many special thanks to my wonderful girlfriend for designing the wrench logo in Illustrator, making the patches, attaching them to the shirt (via safety pins) and finding the majority of this stuff to begin with. The Engi's sleeves are rolled up, you know, for safety. You learned about this in middle school shop class, unless you took home ec.

The overalls are Carhartt dark brown bib/unlined, purchased from Getzs.com for about $64 because they were in stock, the right color and I'd receive them by Halloween. This pair of overalls definitely meets goal two. They're rugged and warm, which will be perfect for shoveling snow in the near future. A party guest even complimented me on them and told me they're a great brand.

The gloves were about $3 (Engi only wears the right one) and the pipe wrench was about $20 from Lowe's. Also from Lowe's were the knee pads ($10) and a 15-foot yellow extension cord ($15). The tool belt was about $50. I already had the boots, which are Red Wings.

So that's it! Now you can geek out and make your own next year. Final thoughts:

The best parts of the costume were the ability to easily store multiple beverages in the tool belt and to use the pipe wrench as a real man's bottle opener.

The worst part was being mistaken for another well-known engineer type by the non-gamer community... or someone else entirely unrelated.

[PicasaAlbum:Halloween2009]


Holidays | LOL