Virtual Infrastructures: Which is Faster, NFS or iSCSI?

Part 1. The setup

In my current position I am responsible for the support and management of all aspects of my employer’s IT infrastructure. This ranges from firewalls and network switches to servers and printers to phones and desktops. While my employer is a not-for-profit agency, we have been able to implement an enterprise-class infrastructure thanks in part to several grants and donations we have received. It is my hope to make these enterprise-class methods & best practices available to small businesses and educational institutions at a fraction of the cost through the use of user-friendly open-source software.

At the core of these best practices is the use of virtualization technology, which has numerous benefits such as:

consolidate under-utilized physical servers to reduce hardware, electricity, and cooling costs

isolate applications to individual virtual machines to prevent incompatibility issues

run multiple operating systems utilizing a single hardware platform

If you have only one virtual machine host server, then local storage is sufficient; however, if you have several servers working in conjunction with each other, then you need some sort of centralized storage to utilize those servers to their fullest potential. Centralized storage usually consists of a SAN or NAS that makes disk space available to the servers using the NFS, iSCSI, or Fibre Channel protocols.

I have not yet had the opportunity to work with Fibre Channel due to the costs associated with it, but for the past few years I have been using the NFS & iSCSI protocols in virtual infrastructures. There are numerous discussions on the Internet stating that one is better than the other, but few have gone into the details as to why.

Recently I have been involved in several discussions with some of my peers regarding the different filesystems (EXT3, EXT4, UFS, ZFS) available in today’s mainstream Linux distributions. Each has its own set of requirements, benefits, and drawbacks. As a result of these discussions I decided that I wanted to see for myself which protocol and filesystem provided the best centralized storage performance for virtualization host servers.

For hardware, my test environment consists of a Supermicro Xeon 3450 based server with 16GB of RAM and a 500GB hard drive connected via cross-over cable to an HP ProLiant N40L MicroServer with 8GB of RAM, a 250GB hard drive, and two 1.5TB Western Digital Black hard drives. For software, the Supermicro server is running VMware vSphere 5.0 and the HP server will serve as the centralized storage running FreeNAS, CentOS, Ubuntu, & OpenFiler. The servers are connected via dedicated network cards and a cross-over cable to eliminate the possibility of a network switch’s backplane bandwidth affecting performance.

After a bit of research I came up with four procedures/applications that would consistently test each of the protocols and filesystems on the different distributions. The applications are installed in a Windows XP virtual machine hosted on the Supermicro’s local datastore but run against a secondary thin-provisioned 80GB virtual hard disk that is hosted on the HP’s datastore.

  1. Use the vSphere client to copy a 3.05GB ISO file from the Supermicro’s local datastore to the HP’s datastore (and vice-versa) and time how long it takes.
  2. Run ATTO Disk Benchmark v2.47 inside the virtual machine with all of the default settings. This program transfers data ranging from 0.5KB to 8MB both to (write) and from (read) the secondary virtual disk and measures the transfer rate.
  3. Run Intel NAS Performance Test v1.7.1 inside the virtual machine with all of the default settings. This program runs around 12 different benchmark tests on the secondary virtual disk:

HD Video Playback, 2x HD Playback, 4x HD Playback

HD Video Record

HD Playback & Record

Content Creation

Office Productivity

File copy to/from NAS

Dir copy to/from NAS

Photo Album

  4. Run Iometer inside the virtual machine using the procedure and config file provided at http://technodrone.blogspot.com/2010/06/benchmarking-your-disk-io.html. Iometer creates a 50GB file on the secondary virtual disk, and the following tests are run for 5 minutes each with a 2 minute lead-up in an attempt to produce more “real world” scenarios (a rough sketch of what these access specifications mean in practice follows the list below):

4K; 100% Read; 0% random (Regular NTFS Workload 1)

4K; 75% Read; 0% random (Regular NTFS Workload 2)

4K; 50% Read; 0% random (Regular NTFS Workload 3)

4K; 25% Read; 0% random (Regular NTFS Workload 4)

4K; 0% Read; 0% random (Regular NTFS Workload 5)

8K; 100% Read; 0% random (Exchange Workload 1)

8K; 75% Read; 0% random (Exchange Workload 2)

8K; 50% Read; 0% random (Exchange Workload 3)

8K; 25% Read; 0% random (Exchange Workload 4)

8K; 0% Read; 0% random (Exchange Workload 5)

8K; 50% Read; 50% random (Exchange Workload 6)

64K; 100% Read; 100% sequential (SQL Workload 1)

64K; 100% Write; 100% sequential (SQL Workload 2)

64K; 100% Read; 100% random (SQL Workload 3)

64K; 100% Write; 100% random (SQL Workload 4)

256K; 100% Read; 100% sequential (Backup)

256K; 100% Write; 100% sequential (Restore)
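
To make these access specifications a bit more concrete, here is a rough Python sketch of what a spec such as “8K; 75% Read; 0% random” amounts to: fixed-size transfers against a test file, with the read/write mix and the random/sequential mix chosen by percentage. This is not Iometer itself; the file name, size, and duration are arbitrary stand-ins, and the I/O goes through the operating system’s cache, so it only illustrates the parameters rather than reproducing Iometer’s results.

    import os
    import random
    import time

    BLOCK_SIZE = 8 * 1024            # "8K" transfer size
    READ_PCT = 75                    # "75% Read"
    RANDOM_PCT = 0                   # "0% random" -> sequential offsets
    TEST_FILE = "iometer_like.bin"   # arbitrary test file name
    FILE_SIZE = 256 * 1024 * 1024    # small stand-in for Iometer's 50GB file
    DURATION = 10                    # seconds (the real tests ran 5 minutes)

    # Create a zero-filled test file of the desired size.
    with open(TEST_FILE, "wb") as f:
        f.truncate(FILE_SIZE)

    blocks = FILE_SIZE // BLOCK_SIZE
    payload = os.urandom(BLOCK_SIZE)
    ops = 0
    next_block = 0

    with open(TEST_FILE, "r+b") as f:
        end = time.time() + DURATION
        while time.time() < end:
            # Pick the next offset: random or sequential, per the spec.
            if random.randrange(100) < RANDOM_PCT:
                block = random.randrange(blocks)
            else:
                block = next_block % blocks
                next_block += 1
            f.seek(block * BLOCK_SIZE)
            # Pick read or write, per the spec.
            if random.randrange(100) < READ_PCT:
                f.read(BLOCK_SIZE)
            else:
                f.write(payload)
            ops += 1

    print(f"{ops / DURATION:.0f} IOPS, {ops * BLOCK_SIZE / DURATION / 1e6:.1f} MB/s")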

Ideally each of the above tests would be run a minimum of three times and the results averaged to get the final results for each test for each OS, filesystem, & protocol. However, doing so would have taken far longer than I can spare for this project, so each test was run just once.
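
For reference, the throughput figures reported throughout the results are simply the size of the file divided by the measured copy time, and repeated runs would just be averaged. A minimal sketch (the numbers below are illustrative, not measurements from these tests):

    # Average transfer rate in MB/s for a copy of size_mb that took minutes:seconds.
    def throughput_mb_s(size_mb, minutes, seconds):
        return size_mb / (minutes * 60 + seconds)

    # e.g. a roughly 3,120MB ISO copied in 2m 16s works out to about 22.9 MB/s
    print(f"{throughput_mb_s(3120, 2, 16):.2f} MB/s")

    # With three repeated runs, the final figure would be their average.
    runs = [22.9, 23.4, 22.6]   # hypothetical repeat measurements
    print(f"{sum(runs) / len(runs):.2f} MB/s average")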

I’m not going to go into considerable detail about each of the operating systems I’m testing as each could be a full article in itself. Instead I’m going to just cover the basic setup of each and focus more on the results.

Part 2. FreeNAS 8

FreeNAS 8.0.4 x64 ZFS NFS

The first operating system tested was FreeNAS 8.0.4 x64. I installed the operating system to the 250GB hard drive and created a ZFS mirror of the two 1.5TB hard drives. A secondary subnet IP address is assigned to a dedicated NIC that can be accessed from the storage NIC on the ESXi host, and an NFS share is created on the ZFS mirror. The NFS share is mounted inside of ESXi and the first set of tests begins.

  1. Copying the 3.05GB ISO file to the NFS datastore took 16m 35s for a throughput of 3.04 MB/s. Copying the ISO from the NFS datastore took 2m 16s for a throughput of 22.71 MB/s.
  2. Screen capture of the ATTO benchmark results:

  3. Screen capture of the Intel NASPT benchmark results:

  4. Summary of the Iometer results after taking approximately 4 hours to create the 50GB test file:

It would appear that the overall results show good read performance but slow write performance for FreeNAS 8 running NFS via a ZFS mirror.

FreeNAS 8.0.4 x64 ZFS iSCSI

Next the NFS share is removed from ESXi and FreeNAS and replaced with an iSCSI extent. Unfortunately FreeNAS 8 doesn’t support device extents on a ZFS mirror, so a file extent had to be used. A file extent is simply a single file that is shared and looks like a single disk to the ESXi host. This is not recommended in production environments because, if something happens to that file, all of the virtual machines could be lost. All other settings remain the same as in the first test.

  1. Copying the 3.05GB ISO file to the iSCSI datastore took 49s for a throughput of 61.63 MB/s. Copying the ISO from the iSCSI datastore took 2m 10s for a throughput of 23.23 MB/s.
  2. Screen capture of the ATTO benchmark results:

  3. Screen capture of the Intel NASPT benchmark results:

  4. Summary of the Iometer results after taking approximately 30 min to create the 50GB test file:

It would appear that the overall results show a slight drop in read performance but considerable improvement in write performance for FreeNAS 8 running iSCSI via a ZFS mirror.

FreeNAS 8.0.4 x64 UFS NFS

Next the iSCSI file extent and the ZFS mirror are deleted and replaced with a UFS mirror and NFS share.

  1. Copying the 3.05GB ISO file to the NFS datastore took 58m 28s for a throughput of 0.86 MB/s. Copying the ISO from the NFS datastore took 3m 4s for a throughput of 16.41 MB/s.
  2. Screen capture of the ATTO benchmark results:

  3. Screen capture of the Intel NASPT benchmark results:

  4. Summary of the Iometer results, after it took over 24 hours and crashed FreeNAS while creating only 30GB of the 50GB test file:

FreeNAS 8.0.4 x64 UFS iSCSI

Next the NFS share is removed from ESXi and FreeNAS and replaced with an iSCSI extent. Again, since FreeNAS 8 doesn’t support device extents on the mirror, a file extent had to be used.

  1. Copying the 3.05GB ISO file to the iSCSI datastore took 1m 26s for a throughput of 35.12 MB/s. Copying the ISO from the iSCSI datastore took 2m 14s for a throughput of 22.54 MB/s.
  2. Screen capture of the ATTO benchmark results:

  3. Screen capture of the Intel NASPT benchmark results:

  4. Summary of the Iometer results after taking approximately 30 min to create the 50GB test file:

FreeNAS 8 Summary of results

In order to get a better look at each of the test results I put them all into LibreOffice Calc so that I could see each test side by side and bold/underline the top performer of each (a small scripted version of the same comparison is sketched after these summaries):

  1. Summary of the copy ISO test for each filesystem and protocol:

  2. Summary of the ATTO test for each filesystem and protocol:

  3. Summary of the Intel NASPT test for each filesystem and protocol:

  4. Summary of the Iometer test for each filesystem and protocol:
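
For anyone who would rather script this comparison than build it in a spreadsheet, here is a minimal Python sketch of the same idea, using the copy-ISO throughputs from the sections above as sample data (the remaining tests would be filled in the same way):

    # Compare each configuration's result for a test and flag the top
    # performer - the equivalent of bolding/underlining it in Calc.
    results_mb_s = {
        "ISO copy to datastore": {
            "ZFS/NFS": 3.04, "ZFS/iSCSI": 61.63, "UFS/NFS": 0.86, "UFS/iSCSI": 35.12,
        },
        "ISO copy from datastore": {
            "ZFS/NFS": 22.71, "ZFS/iSCSI": 23.23, "UFS/NFS": 16.41, "UFS/iSCSI": 22.54,
        },
    }

    for test, configs in results_mb_s.items():
        best = max(configs, key=configs.get)
        row = "  ".join(f"{name} {value:6.2f}" for name, value in configs.items())
        print(f"{test:<25} {row}  -> best: {best}")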

Based upon each of these summaries it would appear that ZFS is definitely the winner between the two filesystems. It is interesting to note that if the greater need is for a datastore with faster reads, then NFS is the way to go, but if faster writes are needed, then iSCSI is the way to go. However, in a production environment I cannot recommend the use of iSCSI in FreeNAS unless a hardware RAID solution is used for the ZFS drive space so that file extents are not used.

Up Next: Part 3. CentOS

By: Cory Claflin

(Source: Skynet Solutions)