Sunday, November 27, 2016

PowerCLI 6.5 and Virtual SAN

Storage Module Updates

The PowerCLI Storage module has been a big focus of this release. A lot of functionality has been added around vSAN, VVols, and the handling of virtual disks. The vSAN cmdlets have grown to more than a dozen, covering the entire lifecycle of a vSAN cluster. The entire vSAN cluster creation process can be automated with PowerCLI, as well as running health tests, updating the HCL database, and much more! (A quick example follows the list below.)
  • Get-VsanClusterConfiguration
  • Get-VsanDisk
  • Get-VsanDiskGroup
  • Get-VsanFaultDomain
  • Get-VsanResyncingComponent
  • Get-VsanSpaceUsage
  • New-VsanDisk
  • New-VsanDiskGroup
  • New-VsanFaultDomain
  • Remove-VsanDisk
  • Remove-VsanDiskGroup
  • Remove-VsanFaultDomain
  • Set-VsanClusterConfiguration
  • Set-VsanFaultDomain
  • Test-VsanClusterHealth
  • Test-VsanNetworkPerformance
  • Test-VsanStoragePerformance
  • Test-VsanVMCreation
  • Update-VsanHclDatabase
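
For example, here is a minimal sketch of checking on an existing vSAN cluster with the new cmdlets. The vCenter address and cluster name are placeholders, and the parameter names are worth verifying with Get-Help:

    # Connect to vCenter (placeholder address; you will be prompted for credentials)
    Connect-VIServer -Server vcenter.lab.local

    # Point at the vSAN-enabled cluster
    $cluster = Get-Cluster -Name "VSAN-Cluster"

    # Review the cluster-level vSAN configuration and space usage
    Get-VsanClusterConfiguration -Cluster $cluster
    Get-VsanSpaceUsage -Cluster $cluster

    # Run the built-in health and VM-creation tests
    Test-VsanClusterHealth -Cluster $cluster
    Test-VsanVMCreation -Cluster $cluster

    # Refresh the HCL database from VMware (requires internet access)
    Update-VsanHclDatabase
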
vSphere 6.5 introduces a new way to manage virtual disks. Instead of managing a VM's hard disks through the VM, they can now be managed independently with new PowerCLI cmdlets, decoupling a virtual disk's lifecycle from the lifecycle of a VM. This adds a ton of flexibility! (A quick example follows the list below.)
  • Copy-VDisk
  • Get-VDisk
  • Move-VDisk
  • New-VDisk
  • Remove-VDisk
  • Set-VDisk
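
A minimal sketch of the new independent-disk workflow. The datastore and disk names are placeholders, and the exact parameters should be confirmed with Get-Help New-VDisk:

    # Create a 10 GB virtual disk that is not attached to any VM
    $ds   = Get-Datastore -Name "vsanDatastore"
    $disk = New-VDisk -Name "standalone-disk" -CapacityGB 10 -Datastore $ds

    # List the independent disks on the datastore
    Get-VDisk -Datastore $ds

    # Grow the disk to 20 GB, then remove it when no longer needed
    $disk | Set-VDisk -CapacityGB 20 -Confirm:$false
    $disk | Remove-VDisk -Confirm:$false
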
From: http://blogs.vmware.com/PowerCLI/2016/11/new-release-powercli-6-5-r1.html

Friday, November 25, 2016

Virtual SAN and iSCSI

How to configure iSCSI Luns with Virtual SAN 6.5

Maximums: You can have a maximum of 1,024 LUNs and 128 targets per cluster. Also, these LUNs are not meant to be used by other ESXi hosts; they are meant to be used by external (non-vSphere) environments.

Step 1: In the Web Client, select the cluster, then go to Configure and iSCSI Targets. The iSCSI service is disabled by default. Click Edit in the top right corner to enable it.


Step 2: Click Edit and enable the Virtual SAN iSCSI service. Select the iSCSI network, the port to use (3260 is the default), and your authentication preferences (CHAP and Mutual CHAP are supported). Also, select the storage policy to use for the object (FTT=0, or something else if you desire).
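
PowerCLI 6.5 R1 has no dedicated vSAN iSCSI cmdlets (none appear in the list in the previous post), but you can check the service state from a host with Get-EsxCli. A minimal sketch, assuming the esxcli vsan iscsi namespace on ESXi 6.5 exposes a status get command (verify on your own host):

    # esxcli handle for one of the vSAN hosts (host address is a placeholder)
    $esxcli = Get-EsxCli -VMHost (Get-VMHost "10.1.1.11") -V2

    # Equivalent of: esxcli vsan iscsi status get (assumed namespace path)
    $esxcli.vsan.iscsi.status.get.Invoke()
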


Step 3: Click the green plus sign to add your first iSCSI target. Notice that, right after enabling iSCSI, there are none yet.


Step 4: Select the LUN ID (0 is the default) and the size of the LUN (10 GB in this case).




Step 5: Get additional information and master the esxcli vsan iscsi namespace.
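
A sketch of doing that exploration from PowerCLI rather than an SSH session. The sub-namespaces used below (target, target.lun) and the argument name are assumptions based on the esxcli vsan iscsi help output, so verify them on your own host:

    $esxcli = Get-EsxCli -VMHost (Get-VMHost "10.1.1.11") -V2

    # Equivalent of: esxcli vsan iscsi target list
    $esxcli.vsan.iscsi.target.list.Invoke()

    # Equivalent of: esxcli vsan iscsi target lun list --target <alias>
    # ('target1' is a hypothetical alias; check the help output for the flag name)
    $esxcli.vsan.iscsi.target.lun.list.Invoke(@{target = "target1"})
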





Step 6: Test your configuration. In this case, a Windows 7 PC was used to connect to the iSCSI target and format the iSCSI LUN. Click Start, type iscsi, and select iSCSI Initiator. Enable the service when prompted.


Step 7: Go back to the Web Client and find out which host owns the iSCSI LUN/object. In this case, it was 10.1.1.11 (upper right corner).


Step 8: Enter that IP address on the Targets tab and connect.
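
Side note: on Windows 7 this is GUI-only, but on Windows 8/Server 2012 and later the same connection can be scripted with the built-in iSCSI cmdlets (a sketch; 10.1.1.11 is the owner address found in Step 7):

    # Start the Microsoft iSCSI initiator service
    Start-Service MSiSCSI

    # Point the initiator at the vSAN iSCSI owner and connect
    New-IscsiTargetPortal -TargetPortalAddress 10.1.1.11
    Get-IscsiTarget | Connect-IscsiTarget
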


Step 9: On the Windows PC, click Start, type disk management, and select Create and format hard disk partitions. You should see a second disk that needs to be initialized.


Step 10: Select the new 10 GB disk and format it.
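
Steps 9 and 10 are also scriptable on Windows 8/Server 2012 and later with the Storage module (a sketch; it grabs whichever disk is still uninitialized, so make sure that is the iSCSI LUN):

    # Initialize the raw disk and format it as one NTFS volume
    Get-Disk | Where-Object PartitionStyle -eq 'RAW' |
        Initialize-Disk -PartitionStyle GPT -PassThru |
        New-Partition -AssignDriveLetter -UseMaximumSize |
        Format-Volume -FileSystem NTFS -NewFileSystemLabel "vsan-iscsi"
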





Final Note: You can also configure iSCSI initiator groups to define which initiators can access the targets. To do so, click the green plus sign to add an initiator group. Before doing this, go to the Windows PC, click on the Configuration tab of the iSCSI initiator, and note the initiator name (IQN).
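
On Windows 8/Server 2012 and later you can also read the initiator name without opening the GUI (a sketch):

    # The NodeAddress is the IQN to add to the vSAN initiator group
    (Get-InitiatorPort | Where-Object ConnectionType -eq 'iSCSI').NodeAddress
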










Thursday, November 24, 2016

How to configure a Nested Virtual SAN Cluster 6.5

How to configure a Virtual SAN cluster for your home lab

1. Install ESXi on a physical host. In this case, the server uses the IP 10.1.1.2 (10.1.1.1 has my vCenter appliance). Connect to it using the new Host Client: just type the IP or the hostname and log in as root.


2. On the physical host, create two internal standard switches. Do not connect them to an uplink. The first one will eventually be used for vMotion between the nested ESXi hosts and the second one for Virtual SAN traffic. Enable promiscuous mode (critical!!!) on vSwitch1 and vSwitch2.



3. On vSwitch1, create a port group called vMotion. On vSwitch2, create a port group called vsan. This is what my configuration looked like by the time I was finished.
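
If you prefer PowerCLI to the Host Client for steps 2 and 3, here is a minimal sketch (assumes you are already connected to the physical host or vCenter with Connect-VIServer; the host address is a placeholder):

    $vmhost = Get-VMHost "10.1.1.2"

    # Two internal vSwitches with no uplinks
    $vs1 = New-VirtualSwitch -VMHost $vmhost -Name vSwitch1
    $vs2 = New-VirtualSwitch -VMHost $vmhost -Name vSwitch2

    # Promiscuous mode is critical for nested ESXi networking
    $vs1, $vs2 | Get-SecurityPolicy | Set-SecurityPolicy -AllowPromiscuous $true

    # Port groups for vMotion and vSAN traffic
    New-VirtualPortGroup -VirtualSwitch $vs1 -Name vMotion
    New-VirtualPortGroup -VirtualSwitch $vs2 -Name vsan
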



4. Using the Host Client (or the Web Client), create three virtual machines with 2 vCPUs, 6 GB of memory, three vNICs, and three disks (4 GB of memory is not enough for Virtual SAN). Connect the vNICs to VM Network, vMotion, and vsan. Make the disks 10 GB, 5 GB, and 50 GB. (I ended up raising the memory to 8 GB, though.)
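
A sketch of the same VMs in PowerCLI (the vmkernel65Guest guest ID for nested ESXi 6.5 and the exact parameter list are worth verifying with Get-Help New-VM):

    # Three nested ESXi VMs: 2 vCPUs, 6 GB RAM, three NICs, three disks
    foreach ($n in 1..3) {
        New-VM -Name "nested-esxi0$n" -VMHost (Get-VMHost "10.1.1.2") `
            -NumCpu 2 -MemoryGB 6 `
            -DiskGB 10, 5, 50 `
            -NetworkName "VM Network", "vMotion", "vsan" `
            -GuestId vmkernel65Guest
    }
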




5. Once you have created the three future nested ESXi hosts, install ESXi on them one by one. Do not clone them. Then use the DCUI to change their hostnames and IP addresses. This is what mine looked like.




6. Connect to your vCenter Server using the Web Client (not the new HTML5 client), create a datacenter, and add the three hosts. It should look something like this by the time you finish. Some of my hosts had SSH enabled; that explains the warnings.
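
The PowerCLI equivalent of step 6 (a sketch; the host IPs, datacenter name, and root password are placeholders):

    Connect-VIServer -Server 10.1.1.1

    # Create the datacenter and add the three nested hosts
    $dc = New-Datacenter -Location (Get-Folder -NoRecursion) -Name "NestedLab"
    foreach ($ip in "10.1.1.11", "10.1.1.12", "10.1.1.13") {
        Add-VMHost -Name $ip -Location $dc -User root -Password "VMware1!" -Force
    }
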



7. Create the VMkernel ports for Virtual SAN and vMotion on the three nested ESXi hosts. I used the 10 network for management, the 11 network for vMotion, and the 12 network for vSAN. Make sure to test every network with ping once you finish.
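
A PowerCLI sketch of step 7. Assumptions: inside each nested host, vmnic1 carries vMotion and vmnic2 carries vSAN traffic, and the 10.1.11.x / 10.1.12.x ranges stand in for your own 11 and 12 networks:

    $n = 1
    foreach ($vmhost in Get-VMHost) {
        # One vSwitch per extra vmnic inside each nested host
        $vsMotion = New-VirtualSwitch -VMHost $vmhost -Name vSwitch1 -Nic vmnic1
        $vsVsan   = New-VirtualSwitch -VMHost $vmhost -Name vSwitch2 -Nic vmnic2

        # VMkernel ports for vMotion and vSAN (placeholder IP ranges)
        New-VMHostNetworkAdapter -VMHost $vmhost -VirtualSwitch $vsMotion -PortGroup vMotion `
            -IP "10.1.11.1$n" -SubnetMask 255.255.255.0 -VMotionEnabled $true
        New-VMHostNetworkAdapter -VMHost $vmhost -VirtualSwitch $vsVsan -PortGroup vsan `
            -IP "10.1.12.1$n" -SubnetMask 255.255.255.0 -VsanTrafficEnabled $true
        $n++
    }
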


8. Go to your ESXi hosts and mark the 5 GB drive as an SSD (flash) device.
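
In the Web Client this is the Mark as Flash button on the host's storage devices view; the same fake-SSD trick can also be done with an esxcli SATP claim rule (a sketch via Get-EsxCli; the device identifier is a placeholder and the argument names are assumptions, so check esxcli storage nmp satp rule add --help):

    $esxcli = Get-EsxCli -VMHost (Get-VMHost "10.1.1.11") -V2

    # Tag the 5 GB device as SSD, then reclaim it so the flag takes effect
    $dev = "mpx.vmhba0:C0:T1:L0"   # placeholder device identifier
    $esxcli.storage.nmp.satp.rule.add.Invoke(@{satp = "VMW_SATP_LOCAL"; device = $dev; option = "enable_ssd"})
    $esxcli.storage.core.claiming.reclaim.Invoke(@{device = $dev})
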


9. Right-click the datacenter and create a cluster. Name it and enable Virtual SAN; do not enable anything else for now. Then drag and drop the three hosts into it. You have the choice of Automatic or Manual disk claiming (I went with Automatic in this case); otherwise, create the disk groups manually once you finish.
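
Step 9 in PowerCLI (a sketch; New-Cluster's -VsanEnabled and -VsanDiskClaimMode switches existed in PowerCLI 6.x, but confirm with Get-Help New-Cluster):

    # vSAN-enabled cluster with automatic disk claiming
    $cluster = New-Cluster -Location (Get-Datacenter "NestedLab") -Name "VSAN-Cluster" `
        -VsanEnabled -VsanDiskClaimMode Automatic

    # Drag-and-drop equivalent: move the three hosts into the cluster
    Get-VMHost | Move-VMHost -Destination $cluster
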


10. Now take a look: if you did it right, the vsanDatastore should be around 150 GB (3 x 50 GB).
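
A quick way to confirm from PowerCLI (cluster name is the placeholder used above):

    # Should report roughly 150 GB of raw capacity (3 x 50 GB)
    Get-Datastore -Name vsanDatastore | Select-Object Name, CapacityGB, FreeSpaceGB
    Get-VsanSpaceUsage -Cluster (Get-Cluster "VSAN-Cluster")
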


Final Note: Here are some captures and commands (including how to log in to the RVC, the Ruby vSphere Console).










Sunday, November 20, 2016

vCenter Appliance 6.5 Installation and Configuration

How to install and configure the vCenter Appliance 6.5

Step 1: Download the software and burn it to a DVD if you want. Explore the contents and notice the vcsa-ui-installer folder, which contains the graphical installers for different operating systems: the deployment can be done from Windows, Linux, or Mac.
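
There is also a vcsa-cli-installer folder on the ISO for fully scripted deployments driven by a JSON template. Invoked from Windows PowerShell it looks roughly like this (a sketch; the drive letter is a placeholder and the template name is one of the examples shipped in the templates folder, so check what your ISO contains):

    # Run both deployment stages unattended from the mounted ISO
    & "E:\vcsa-cli-installer\win32\vcsa-deploy.exe" install `
        --accept-eula `
        "E:\vcsa-cli-installer\templates\install\embedded_vCSA_on_ESXi.json"
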





Step 2: Double-click the installer and start the configuration of the appliance. The deployment is divided into two stages.










Step 3: Once Stage 1 is complete, start Stage 2.






Step 4: Once the install is done, you can open the console of the VM, connect to the VAMI port (5480), or point to https://name_of_vcenter to see the shortcuts to the Web Client and the new HTML5 client.