When upgrading Cisco Unified Communications from fixed appliances to virtual servers running on a Cisco UCS system, best practice is to use Fibre Channel connected shared storage for the datastore.
So what would a small, fast storage system that meets the needs of a Cisco UC on UCS upgrade look like? The specs are listed here: http://docwiki.cisco.com/wiki/UC_Virtualization_Storage_System_Design_Requirements
Condensing the information from the Cisco article:
- Storage block size: 4K
- LUN Size: 500GB-1.5TB
- IOPS: ~200 IOPS per Cisco Voice application at the 500-user design size
- Connectivity: Fibre Channel preferred
- Data types: CUCM storage access is on average 93-98% sequential writes with a 4KB I/O block size.
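The specs above can be turned into a quick sizing sanity check. The sketch below is my own arithmetic, not a Cisco tool: the per-application IOPS figure and write fraction come from the list above, and the eight-application count is a hypothetical example.

```python
# Rough sizing check against the Cisco specs above.
# Assumptions (mine, not Cisco's tooling): each voice application VM
# needs ~200 IOPS at the 500-user design size, and the workload is
# ~95% sequential 4K writes (midpoint of the 93-98% range).

APP_IOPS = 200          # per Cisco voice application at 500 users
BLOCK_SIZE_KB = 4       # storage block size from the spec
WRITE_FRACTION = 0.95   # ~93-98% sequential writes, midpoint

def required_iops(num_apps: int) -> int:
    """Aggregate IOPS the shared datastore must sustain."""
    return num_apps * APP_IOPS

def write_throughput_mbps(iops: int) -> float:
    """Approximate write throughput in MB/s for 4K blocks."""
    return iops * WRITE_FRACTION * BLOCK_SIZE_KB / 1024

apps = 8  # e.g. CUCM publisher/subscribers, Unity Connection, etc.
print(required_iops(apps))                     # 1600 IOPS aggregate
print(round(write_throughput_mbps(1600), 1))   # ~5.9 MB/s of writes
```

Note how small the raw throughput number is: for voice workloads the challenge is sustaining many small sequential writes, not bulk bandwidth.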
The Adcap design that meets this specification is based on the Cisco C240. For an expanded discussion of the use case, read this post: Cisco Nexenta ZFS Storage Use Case – Unified Communications Upgrade. The base solution includes:
- Cisco C240 value bundle server with:
  - 64GB memory
  - Emulex dual-port 8Gb FC controller
  - Cisco VIC 1225 dual 10GE CNA
- 10 hard drives total: 2 for the NexentaStor OS, 7 for RAIDZ2, 1 as a hot spare.
- Using 300GB 10K RPM drives provides 2.1TB of raw storage.
- NexentaStor software with the Fibre Channel plugin.
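The capacity figures in the list above can be checked with a couple of lines of arithmetic. This is my own back-of-envelope calculation, not Nexenta output, and it ignores filesystem metadata overhead:

```python
# Capacity sanity check for the drive layout above: 10 drives total,
# 2 for the OS, 7 in a RAIDZ2 data pool, 1 hot spare.
# RAIDZ2 dedicates two drives' worth of space to parity.

DRIVE_GB = 300
raidz2_drives = 7
parity_drives = 2   # double parity in RAIDZ2

raw_gb = raidz2_drives * DRIVE_GB
usable_gb = (raidz2_drives - parity_drives) * DRIVE_GB

print(raw_gb)     # 2100 GB raw, matching the 2.1TB above
print(usable_gb)  # 1500 GB usable before filesystem overhead
```

The ~1.5TB usable figure lands conveniently at the top of Cisco's 500GB-1.5TB LUN size guidance.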
This is a good storage system design for 100-500 users. If the voice system has more than 500 users, or the organization prefers to have no single point of failure in the voice system, my recommendation is a high-availability storage system design. There are basically two choices for storage redundancy with Cisco voice applications:
- Use two of the C240s with onboard storage, and map different application cluster members to different boxes.
- Deploy a high-availability Nexenta storage system using dual C240s and multiple external JBODs.
To verify that the Adcap Cisco Nexenta ZFS Storage System meets these requirements, I set up a very similar lab configuration. The processor in my test C240 is a little faster and it has a little more memory than the recommended configuration, but processor utilization has been low and the memory is plenty for a single box of drives.
I am also testing the effect of data compression in this configuration, which will be the subject of a future post. There have been some good posts written about ZFS compression. Basically, if the data is more than about 12% compressible, the LZJB algorithm that is enabled by default on the ZFS file system will use the server's readily available processing power to improve write speed by compressing data before it is written. Because Cisco Call Manager's storage access is roughly 95% sequential writes, this should improve speed significantly.
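The intuition behind the compression benefit can be modeled in a few lines. This is my own simplified model, not a ZFS benchmark: if data compresses by ratio r, the disks only absorb (1 - r) of the logical bytes, so the sustainable logical write rate rises by roughly 1/(1 - r), assuming the CPU is not the bottleneck. The 100 MB/s baseline is an illustrative placeholder, not a measured figure.

```python
# Back-of-envelope estimate of the write-speed benefit of ZFS LZJB
# compression (my model, not a measured result). If data compresses
# by ratio r, only (1 - r) of the logical bytes reach the disks.

def effective_write_mbps(disk_mbps: float, compress_ratio: float) -> float:
    """Logical write throughput when disks absorb compressed bytes."""
    return disk_mbps / (1.0 - compress_ratio)

# 0.12 is roughly the break-even compressibility mentioned above;
# database-style CUCM data often does considerably better.
for r in (0.12, 0.30, 0.50):
    print(r, round(effective_write_mbps(100.0, r), 1))
# 0.12 -> 113.6, 0.30 -> 142.9, 0.50 -> 200.0
```

In other words, 50% compressible data roughly doubles the logical write rate the same spindles can sustain, which is why compression pays off so well on a write-heavy voice workload.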
Because a Cisco Unified Communications system often has multiple virtual servers in it, I set up a version of the test environment that has 8 IOMeter Virtual Machines. See my Cisco Nexenta Benchmarking post for a more complete description of the test environment.
In this case, I have enabled Fibre Channel storage access from the virtual machines to the Cisco C240 using an Emulex dual-port 8Gb FC HBA, with a Cisco Nexus operating as the Fibre Channel switch. Two FC zones are set up, and multipathing is enabled. The virtual machines automatically set up multipathing over the two iSCSI links as well as the FC connections, but since I wanted to isolate the test to FC, I removed the iSCSI connection so the VMware virtual machines were forced to use Fibre Channel. The first picture below shows the FC interfaces in the NexentaStor GUI, and the second shows the virtual FC HBAs on the VMware IOMeter machines accessing the LUNs on the NexentaStor box via Fibre Channel.
Each of the voice test volumes on the Nexenta storage system is set up as RAIDZ2 with six drives and a spare, which gives a total of 1.62TB of usable space. I set up a 1TB LUN on the volume, because I wanted to give each of the 8 virtual machines as much storage space as possible for the test. Each VM gets 110GB of storage, which is larger than the 96GB of memory in the C240, so the performance testing should be fairly accurate.
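The reasoning in that last sentence can be checked directly. This is my own arithmetic on the numbers above: the test working set must both fit the LUN and exceed the server's RAM, so that results reflect the disks rather than ZFS's read cache (ARC).

```python
# Quick check that the test layout defeats the read cache:
# each VM's test file should exceed the C240's RAM so results
# reflect disk performance, not cache hits.

lun_gb = 1000      # 1TB LUN carved from the RAIDZ2 volume
vms = 8
per_vm_gb = 110    # test storage assigned to each IOMeter VM
ram_gb = 96        # memory in the test C240

total_test_gb = vms * per_vm_gb
print(total_test_gb)             # 880 GB, fits within the 1TB LUN
print(per_vm_gb > ram_gb)        # True: working set exceeds cache
```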
The purpose of the test is to make sure that each voice virtual machine can get at least 200 IOPS from the shared storage system. There is no Cisco Voice preset in IOMeter, but there are selections that come fairly close. The closest IOMeter access specification to 4K blocks, 95% write, sequential data is 4K blocks, 100% write, 0% random. This is the test that matters most.
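The mapping between the target workload and the IOMeter selection can be made explicit. The data structure below is my own illustration; IOMeter's actual configuration file format differs.

```python
# The Cisco voice workload vs. the closest IOMeter access
# specification, expressed as a small data structure (field names
# are mine, not IOMeter's).

from dataclasses import dataclass

@dataclass
class AccessSpec:
    block_size_kb: int
    write_pct: int   # percent of operations that are writes
    random_pct: int  # percent of accesses that are random

target = AccessSpec(block_size_kb=4, write_pct=95, random_pct=0)
iometer_run = AccessSpec(block_size_kb=4, write_pct=100, random_pct=0)

# The only difference is 95% vs. 100% writes, which makes the test
# slightly pessimistic: an all-write workload is the harder case.
print(iometer_run.write_pct - target.write_pct)  # 5
```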
However, since I was already going to the effort of setting up a testing environment, I figured I would run some additional read and write tests to explore the boundaries of the performance. The main advantages ZFS has over other storage systems in this specific use case are efficient compression on writes and a large memory for read caching. The C240 hardware is equivalent, at the CPU, memory, and chipset level, to more expensive commercial storage arrays, which should help as well.
The results were impressive:
Chart from one of the 8 virtual machines showing IOPS, latency, and throughput in Mbps during the one-hour test:
The Cisco Nexenta ZFS Storage System delivers far more performance than Cisco voice applications require. With an average of 746 IOPS per virtual machine, the system easily exceeds the 200 IOPS per virtual machine that Cisco recommends for a typical 500-user voice system installation!
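Putting the measured result against the requirement makes the headroom concrete. The per-VM numbers come from the test above; the aggregate and margin calculations are mine.

```python
# Measured result vs. requirement (per-VM figures from the test
# above; the margin arithmetic is my own).

required_per_vm = 200   # Cisco guidance per voice VM at 500 users
measured_per_vm = 746   # average IOPS per VM over the 1-hour run
vms = 8

aggregate_iops = measured_per_vm * vms
margin = measured_per_vm / required_per_vm

print(aggregate_iops)        # 5968 IOPS across all 8 VMs at once
print(round(margin, 2))      # 3.73x the required per-VM IOPS
```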
If you are looking to upgrade your Cisco voice environment, this is a good system to consider. It has the performance, reliability, expandability, and management capabilities of an enterprise-class storage system at an entry-level price.
Author: Rolf Versluis
Last Updated: April 4th, 2013