Introduced in vSphere 5.0, the VSA (vSphere Storage Appliance) allows you to create a two- or three-node cluster without shared storage. I decided to use the latest version, 5.1.3, since it has several advantages over the original release. For example, the vCenter Server can manage multiple VSA clusters or be a virtual machine inside the cluster itself. I decided to create a simple two-node cluster with a separate vCenter Server running Windows Server 2003 64-bit. My vCenter Server had two CPUs and 6 GB of RAM.
Here are the steps and issues I encountered.
01. Install ESXi 5.1 on two servers. My servers had 8 GB of RAM (6 GB minimum), four NICs, and two disk drives. I did not have hardware RAID (required for support), but I had two 40 GB disks per ESXi server. I had to remove my original drive since the VSA installer kept complaining about different disk sizes and having less than 30 GB of space.
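The disk complaints can be caught before you even launch the installer. This is my own sketch of the two rules the installer enforced on my hosts (equal disk sizes across hosts and a 30 GB minimum), not VMware code:

```python
# Hypothetical sketch: mimic the two disk checks the VSA installer
# complained about on my hosts. Sizes are in GB.
def check_vsa_disks(host_disks, min_gb=30):
    """host_disks maps host name -> list of data-disk sizes in GB."""
    problems = []
    all_sizes = {size for disks in host_disks.values() for size in disks}
    if len(all_sizes) > 1:
        problems.append("disk sizes differ across hosts: %s" % sorted(all_sizes))
    for host, disks in sorted(host_disks.items()):
        for size in disks:
            if size < min_gb:
                problems.append("%s: %d GB disk is under the %d GB minimum"
                                % (host, size, min_gb))
    return problems

# My layout: two 40 GB disks per ESXi host -> no complaints.
print(check_vsa_disks({"esxi1": [40, 40], "esxi2": [40, 40]}))  # []
# A mismatched, undersized drive like my original one fails both checks:
print(check_vsa_disks({"esxi1": [40, 20], "esxi2": [40, 40]}))
```
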
02. Install a brand-new vCenter Server. Mine ran Windows Server 2003 SP2. It is imperative that you upgrade Internet Explorer: 2003 ships with IE 6.0, which does not work well with the VSA Manager tab. It took me a while to find the bug report about IE 6.
03. Download the latest version of the vSphere Storage Appliance from vmware.com/downloads and burn it to a DVD.
04. Create a datacenter (mine was called Training) and add the two ESXi hosts. Make sure the ESXi servers have static IP addresses.
05. Start the installation of the VSA. Simply run the vmware-vsamanager.exe executable on the vCenter Server; I found it in the D:\Installers folder.
06. Select the Language of your choice and accept the License Agreement.
07. Accept the IP address of the vCenter Server (automatically populated) and the communication ports that will be used.
08. Provide the password for the vmwarevcsadmin user (I used Vmware1!). This is the VSA service account.
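The service-account password has to be complex; the exact policy is in the install PDF. The rules below (8+ characters with upper, lower, digit, and special characters) are my assumption of a typical VMware-style policy, sketched as a quick checker:

```python
import re

# Hypothetical password check for the vmwarevcsadmin service account.
# The exact policy lives in the VSA install PDF; these four character
# classes plus the 8-character minimum are my assumption.
def password_ok(pw):
    return (len(pw) >= 8
            and re.search(r"[A-Z]", pw) is not None
            and re.search(r"[a-z]", pw) is not None
            and re.search(r"[0-9]", pw) is not None
            and re.search(r"[^A-Za-z0-9]", pw) is not None)

print(password_ok("Vmware1!"))  # True
print(password_ok("vmware"))   # False
```
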
09. Use the 60-day demo license.
10. Click on Install and eventually Finish.
Warnings and issues I encountered or did not expect
The only issue I encountered during the installation was a complaint that the Windows Firewall was not running. Strange, since it had been disabled after the vCenter Server install, as it should have been. I started the firewall, re-ran the installer, and then disabled the firewall again. At the end, I checked the VMware VSA Cluster Service and it was running. I launched the vSphere Client, connected to the vCenter Server, and verified that the VSA plug-in was enabled. I was prepared to enable it manually to see the VSA Manager tab in the vSphere Client, but did not have to.
11. Launch the vSphere Client and connect to the vCenter Server. Log in as administrator.
12. Select the datacenter and look for the new tab called VSA Manager.
13. Select New Installation.
14. Select your datacenter.
15. Select the two hosts. Here is where you may run into issues with the hardware being used. The installer recognized that I had unsupported hardware, but after a couple of changes it let me select them. These issues can be related to lack of RAID, no redundancy at the network level (how you build the switches), lack of EVC compatibility (easily fixed; info at the bottom of this document), and so forth.
16. Select the IP address of the VSA cluster and the IP address of the VSA Cluster Service.
17. Specify the storage capacity to use. Here is a screenshot of the entire set of questions.
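Keep in mind that the VSA mirrors each datastore to the other node (network RAID 1), so the usable capacity is roughly half of what you allocate. Here is my own back-of-the-envelope arithmetic, not VMware's exact formula (which also subtracts formatting and metadata overhead):

```python
def usable_vsa_capacity_gb(allocated_per_host_gb, hosts=2):
    """Rough estimate: every VSA datastore is mirrored on another
    node, so exported capacity is about half of the raw allocation."""
    raw = allocated_per_host_gb * hosts
    return raw / 2

# Two hosts each contributing 40 GB -> roughly 40 GB of usable,
# mirrored storage across the cluster (before overhead).
print(usable_vsa_capacity_gb(40))
```
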
18. Select format disks on first access.
19. Click on Install and walk away for a while. Once it finishes, the cluster will be ready to use.
20. Click on Finish.
1. Although the installation and configuration steps are simple, there are many gotchas. There are requirements regarding storage, virtual switches, port groups, and so forth. Read the entire VSA installation PDF to avoid some of the issues I encountered.
For example, the network- and EVC-related information below is pasted from the install PDF.
When the networking configuration on the ESXi hosts has been changed, or if you need to manually configure the NICs selected for different port groups, the following considerations apply:
Each ESXi host must contain at least one vSwitch.
Configure five port groups on each host, named exactly as follows: VSA-Front End Network, VM Network, Management Network, VSA-Back End Network, VSA-VMotion.
For each port group, configure NIC teaming so that it has at least one active and one standby NIC.
If a NIC is active for the Management Network and VM Network port groups, it should not be active for the VSA-Front End Network port group. Use the standby NIC instead.
If a NIC is active for the VSA-Back End Network port group, it should not be active for the VSA-VMotion port group. You can use the standby NIC.
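The active/standby rules above are easy to get wrong when you build the switches by hand. The port-group names come from the PDF excerpt; the validation logic below is my own sketch for eyeballing a planned teaming layout before the installer rejects it:

```python
# Sketch: validate a planned NIC-teaming layout against the VSA rules
# quoted above. Each port group maps to (active NIC, standby NIC).
REQUIRED_PORT_GROUPS = [
    "VSA-Front End Network", "VM Network", "Management Network",
    "VSA-Back End Network", "VSA-VMotion",
]

def validate_teaming(layout):
    errors = []
    for pg in REQUIRED_PORT_GROUPS:
        if pg not in layout:
            errors.append("missing port group: %s" % pg)

    def active(pg):
        return layout.get(pg, (None, None))[0]

    # A NIC active for Management/VM Network must not be active for Front End.
    if active("VSA-Front End Network") in (active("Management Network"),
                                           active("VM Network")):
        errors.append("VSA-Front End Network shares an active NIC "
                      "with Management/VM Network")
    # A NIC active for Back End must not be active for VMotion.
    if active("VSA-Back End Network") == active("VSA-VMotion"):
        errors.append("VSA-Back End Network shares an active NIC "
                      "with VSA-VMotion")
    return errors

# Example layout with two NIC pairs, active/standby flipped where required:
plan = {
    "Management Network": ("vmnic0", "vmnic1"),
    "VM Network": ("vmnic0", "vmnic1"),
    "VSA-Front End Network": ("vmnic1", "vmnic0"),
    "VSA-Back End Network": ("vmnic2", "vmnic3"),
    "VSA-VMotion": ("vmnic3", "vmnic2"),
}
print(validate_teaming(plan))  # []
```
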
If your configuration does not allow you to enable EVC mode on the HA cluster, disable EVC mode by setting the evc.config property to false in the VSAManager/WEB-
2. If you want to test the VSA in a nested environment (not supported, but it works), check out
What follows is a capture of one vCenter Server managing multiple VSAs.