Sunday, December 25, 2016

vCenter Migration from Windows to Linux

Step 1:  Download the vCenter Server Appliance ISO and copy the Migration Assistant tool folder to the Windows vCenter Server. Inside the folder you will find the executable; double-click VMware-Migration-Assistant to launch it.




Step 2:  Launch the appliance installer and double click on the Migrate icon to start Stage 1. Go through the wizard.












Step 3:  Start Stage 2 of the migration and verify the new vCenter once the operation is finished.










Friday, December 23, 2016

vCenter Appliance Partitions


From:
http://www.virtuallyghetto.com/2016/11/updates-to-vmdk-partitions-disk-resizing-in-vcsa-6-5.html

Vimtop Main Options

What is vimtop?  Vimtop is a command found on the vCenter Server Appliance that gives you a lot of performance-related information. Run vimtop without any options to use it interactively; this shows CPU, memory, and process information.
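Here is a minimal session sketch for getting into the tool. It assumes SSH and the Bash shell are enabled on the appliance, and vcenter.example.com is a placeholder hostname; the single-key options described below are then used inside the interactive view.

    ssh root@vcenter.example.com   # connect to the appliance (placeholder hostname)
    vimtop                         # no options = interactive CPU, memory and process view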


Vimtop has an "h" option for help. Press Escape once you are done looking at these options.


Vimtop has a "k" option for disk related information. Look for Read and Write operations.


Vimtop has an "o" option for network related information. Look for dropped packets.


Final Note:

"P" option = pauses the screen
"S" option = sets the refresh rate in seconds
"Q" option = quit vimtop

Monday, December 19, 2016

VMFS-6 Improvements

Paths

ESXi hosts running version 6.5 can now support up to 2,000 paths in total, an increase from the 1,024 paths supported in previous versions of vSphere.

Devices

ESXi hosts running version 6.5 can now support up to 512 devices. This is a two-fold increase from previous versions of ESXi where the number of devices supported per host was limited to 256.

512e Advanced Format Device Support

The storage industry is hitting capacity limits with the 512N (native) sector size currently used in rotating storage media. To address this, the industry has introduced Advanced Format (AF) drives, which use a 4K native sector size. AF drives allow disk vendors to build higher-capacity drives that also provide better performance, more efficient space utilization, and improved reliability and error correction.
Because legacy applications and operating systems may not support 4KN drives, the industry has proposed an intermediate step: 4K-sector drives that operate in 512-byte emulation (512e) mode. These drives have a physical sector size of 4K but a logical sector size of 512 bytes and are called 512e drives. They are now supported on vSphere 6.5 for VMFS and RDMs (Raw Device Mappings).
File Block Format:
VMFS-6 introduces two new block sizes, referred to as small file block (SFB) and large file block (LFB). While the SFB size can range from 64KB to 1MB for future use-cases, VMFS-6 in vSphere 6.5 is utilizing an SFB size of 1MB only. The LFB size is set to 512MB.
Thin disks created on VMFS-6 are initially backed with SFBs. Thick disks created on VMFS-6 are allocated LFBs as much as possible. For the portion of the thick disk which does not fit into an LFB, SFBs are allocated.
These enhancements should result in much faster file creation times. This is especially true with swap file creation so long as the swap file can be created with all LFBs. Swap files are always thickly provisioned.
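As a rough worked example of that allocation rule (a sketch only; the disk size below is illustrative and actual allocation is handled internally by VMFS), a thick disk is carved into 512MB LFBs first and the remainder into 1MB SFBs:

    # Illustration of the VMFS-6 allocation rule described above: thick disks
    # get 512MB LFBs where possible and the remainder gets 1MB SFBs.
    DISK_MB=$((10 * 1024 + 300))   # a hypothetical 10.3GB thick disk
    LFB_MB=512                     # large file block size
    SFB_MB=1                       # small file block size in vSphere 6.5
    LFB_COUNT=$((DISK_MB / LFB_MB))                # 20 large file blocks
    SFB_COUNT=$(((DISK_MB % LFB_MB) / SFB_MB))     # 300 small file blocks
    echo "LFBs: $LFB_COUNT, SFBs: $SFB_COUNT"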
VMFS Creation:
Using these new enhancements, the initialization and creation of a new VMFS datastore has been significantly improved in ESXi 6.5. For a 32TB volume, VMFS creation time was roughly halved: creating a 32TB VMFS-6 volume on ESXi 6.5 takes about half the time of creating a 32TB VMFS-5 volume on ESXi 6.0 U2.
Concurrency Improvements
This next feature introduces lock contention improvements along with improved resignaturing and scanning. Some of the lock mechanisms on VMFS were largely responsible for the biggest delays in parallel device scanning and filesystem probing on ESXi. Since vSphere 6.5 has higher limits on the number of devices and paths, a big part of enabling this support was redesigning device discovery and filesystem probing to be highly parallel.
These improvements are significant for Site Recovery Manager, especially during a failover event, as the changes lead to improved resignaturing and rescan/device discovery.
There are also benefits to Thin provisioning operations. Previous versions of VMFS only allowed one transaction at a time per host on a given filesystem. VMFS-6 supports multiple concurrent transactions at a time per host on a given filesystem. This results in improved IOPS for multi-threaded workloads on thin files.
Upgrading to VMFS-6
A datastore filesystem upgrade from VMFS-5 (or earlier versions) to VMFS-6 is not supported. Customers upgrading from older versions of vSphere to the 6.5 release should continue to use their existing VMFS-5 (or older) datastores until they can create new VMFS-6 datastores.
Since no direct in-place filesystem upgrade is supported, customers should use virtual machine migration techniques such as Storage vMotion to move VMs from the old datastore to a new VMFS-6 datastore.
Hot Extend VMDK Beyond 2TB
Prior to ESXi 6.5, thin virtual disks could only be hot-extended if their size was below 2TB while the VM was powered on. If a VMDK was already 2TB or larger, or the expand operation would push it past 2TB, the hot extend operation failed, and administrators typically had to shut down the virtual machine to grow it beyond 2TB. This behavior has changed in vSphere 6.5, and hot extend no longer has this limitation.

How to back up the vCenter Appliance 6.5

Backing Up and Restoring the vCenter Appliance

Step 1: Deploy an FTP server. In this case, an Ubuntu Linux instance was used. Below are the instructions to install and configure the FTP server.

A. Add the package by typing the following: sudo apt-get update ; sudo apt-get install vsftpd

B. Edit the /etc/vsftpd.conf file using gedit or vi (GUI vs. CLI tools). Uncomment two lines,
write_enable=YES and local_umask=022, and add the following four lines: allow_writeable_chroot=YES, pasv_enable=YES, pasv_min_port=40000 and pasv_max_port=40100.
Verify your steps.



C. Restart the ftp process by typing sudo service vsftpd restart. Verify your steps.



D. Add the ftp user by typing sudo useradd -m ftpuser -s /usr/sbin/nologin.

E. Give the ftpuser a password by typing sudo passwd ftpuser. Verify your steps.




F. Add a new line containing /usr/sbin/nologin to the /etc/shells file. Here is what it should look like.



G. Type chmod -R 777 /home/ftpuser to open up the FTP user's home directory. (The full A through G sequence is consolidated in the sketch below.)
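For reference, here is the whole A through G sequence collected into one sketch. It assumes the stock Ubuntu vsftpd.conf, where write_enable and local_umask ship commented out exactly as shown:

    # Install vsftpd
    sudo apt-get update && sudo apt-get install -y vsftpd

    # Uncomment the two existing options in /etc/vsftpd.conf
    sudo sed -i -e 's/^#write_enable=YES/write_enable=YES/' -e 's/^#local_umask=022/local_umask=022/' /etc/vsftpd.conf

    # Append the four new options
    printf '%s\n' 'allow_writeable_chroot=YES' 'pasv_enable=YES' \
        'pasv_min_port=40000' 'pasv_max_port=40100' | sudo tee -a /etc/vsftpd.conf

    # Restart the service so the changes take effect
    sudo service vsftpd restart

    # Create the FTP user with no interactive shell and set a password
    sudo useradd -m ftpuser -s /usr/sbin/nologin
    sudo passwd ftpuser

    # Allow nologin as a valid shell so vsftpd accepts the account
    echo '/usr/sbin/nologin' | sudo tee -a /etc/shells

    # Open up the backup target directory
    sudo chmod -R 777 /home/ftpuser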

Step 2:  Connect to the vCenter Appliance Management Interface on port 5480 (https://&lt;vcenter&gt;:5480) and log in as root.



Step 3: Click on Backup.


Step 4: Specify the protocol, the username and name of the backup.


Step 5: Verify that the backup starts and wait a few minutes.


Step 6. Confirm that the backup worked.


Step 7: Look at the results.


Note: Should the backup fail, SSH (for example, with PuTTY) into the vCenter Appliance and go to the /var/log/vmware/applmgmt folder. There is a log called backup.log that records backup-related information.
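A quick way to watch that log over SSH, assuming the Bash shell is enabled on the appliance (the hostname is a placeholder):

    # Follow the backup log on the vCenter Appliance
    ssh root@vcenter.example.com
    tail -f /var/log/vmware/applmgmt/backup.log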

Step 8:  To restore, simply deploy a new appliance using the ISO. Towards the bottom of the installer options, you will see the Restore choice.



Step 9: Once the restore is complete, connect to the new vCenter Server and verify functionality.


Saturday, December 10, 2016

VMFSsparse vs SEsparse

VMFSsparse is a virtual disk format used when a VM snapshot is taken or when linked clones are created off the VM. VMFSsparse is implemented on top of VMFS and I/Os issued to a snapshot VM are processed by the VMFSsparse layer. VMFSsparse is essentially a redo-log that grows from empty (immediately after a VM snapshot is taken) to the size of its base VMDK (when the entire VMDK is re-written with new data after the VM snapshotting). This redo-log is just another file in the VMFS namespace and upon snapshot creation the base VMDK attached to the VM is changed to the newly created sparse VMDK.

Because VMFSsparse is implemented above the VMFS layer, it maintains its own metadata structures in order to address the data blocks contained in the redo-log. The block size of a redo-log is one sector size (512 bytes). Therefore the granularity of read and write from redo-logs can be as small as one sector. When I/O is issued from a VM snapshot, vSphere determines whether the data resides in the base VMDK (if it was never written after a VM snapshot) or if it resides in the redo-log (if it was written after the VM snapshot operation) and the I/O is serviced from the right place. The I/O performance depends on various factors, such as I/O type (read vs. write), whether the data exists in the redo-log or the base VMDK, snapshot level, redo-log size, and type of base VMDK.

I/O type: After a VM snapshot takes place, if a read I/O is issued, it is either serviced by the base VMDK or the redo-log, depending on where the latest data resides. For write I/Os, if it is the first write to the block after the snapshot operation, new blocks are allocated in the redo-log file, and data is written after updating the redo-log metadata about the existence of the data in the redo-log and its physical location. If the write I/O is issued to a block that is already available in the redo-log, then it is re-written with new data.

Snapshot depth: When a VM snapshot is created for the first time, the snapshot depth is 1. If another snapshot is created for the same VM, the depth becomes 2, and the sparse virtual disks from snapshot depth 1 become the base virtual disks for snapshot depth 2. As snapshot depth increases, performance decreases because of the need to traverse multiple levels of metadata to locate the latest version of a data block.

I/O access pattern and physical location of data: The physical location of data is also a significant criterion for snapshot performance. For a sequential I/O access, having the entire data available in a single VMDK file would perform better compared to aggregating data from multiple levels of snapshots such as the base VMDK and the sparse VMDK from one or more levels.

Base VMDK type: The base VMDK type impacts the performance of certain I/O operations. After a snapshot, if the base VMDK is thin format [4] and has not yet fully inflated, a write to an unallocated block in the base thin VMDK leads to two operations: (1) allocate and zero the blocks in the base thin VMDK, and (2) allocate and write the actual data in the snapshot VMDK. There will be some performance degradation during these relatively rare scenarios.

SEsparse: SEsparse is a new virtual disk format that is similar to VMFSsparse (redo-logs) with some enhancements and new functionality. One difference between SEsparse and VMFSsparse is the block size: 4KB for SEsparse compared to 512 bytes for VMFSsparse. Most of the performance aspects of VMFSsparse discussed above (impact of I/O type, snapshot depth, physical location of data, base VMDK type, and so on) apply to the SEsparse format as well.

In addition to the change in block size, the main distinction of the SEsparse virtual disk format is space efficiency. With support from VMware Tools running in the guest operating system, blocks deleted by the guest file system are marked, and commands are issued to the SEsparse layer in the hypervisor to unmap those blocks. This helps reclaim space allocated by SEsparse once the guest operating system has deleted that data. SEsparse also has some optimizations in vSphere 5.5, such as coalescing of I/Os, that improve its performance for certain operations compared to VMFSsparse.
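One quick way to see which format a snapshot is using is to look at the redo-log files in the VM's folder. This is a hedged sketch; the datastore and VM names are placeholders, and the naming in the comments reflects commonly observed behavior (VMFSsparse redo-logs appear as -delta.vmdk files, SEsparse redo-logs as -sesparse.vmdk files):

    # From an ESXi shell, list the snapshot redo-log files for a VM (placeholder paths)
    ls -lh /vmfs/volumes/datastore1/myvm/ | grep -E 'delta|sesparse'
    # myvm-000001-delta.vmdk     -> VMFSsparse redo-log (512-byte grain)
    # myvm-000001-sesparse.vmdk  -> SEsparse redo-log (4KB grain)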

http://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/techpaper/sesparse-vsphere55-perf-white-paper.pdf

Friday, December 9, 2016

Encrypted vMotion

Encrypted vSphere vMotion

Starting with vSphere 6.5, vSphere vMotion always uses encryption when migrating encrypted virtual machines. For virtual machines that are not encrypted, you can select one of the encrypted vSphere vMotion options.

Encrypted vSphere vMotion secures confidentiality, integrity, and authenticity of data that is transferred with vSphere vMotion. Encrypted vSphere vMotion supports all variants of vSphere vMotion for unencrypted virtual machines, including migration across vCenter Server systems. Migration across vCenter Server systems is not supported for encrypted virtual machines.

For encrypted disks, the data is transmitted encrypted. For disks that are not encrypted, Storage vMotion encryption is not supported.

For virtual machines that are encrypted, migration with vSphere vMotion always uses encrypted vSphere vMotion. You cannot turn off encrypted vSphere vMotion for encrypted virtual machines.

For virtual machines that are not encrypted, you can set encrypted vSphere vMotion to one of the following states. The default is Opportunistic. 


Disabled

Do not use encrypted vSphere vMotion.

Opportunistic

Use encrypted vSphere vMotion if source and destination hosts support it. Only ESXi versions 6.5 and later use encrypted vSphere vMotion.

Required

Allow only encrypted vSphere vMotion. If the source or destination host does not support encrypted vSphere vMotion, migration with vSphere vMotion is not allowed.

When you encrypt a virtual machine, the virtual machine keeps a record of the current encrypted vSphere vMotion setting. If you later disable encryption for the virtual machine, the encrypted vMotion setting remains at Required until you change the setting explicitly. You can change the settings using Edit Settings.
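If you want to confirm the per-VM state from the command line rather than through Edit Settings, the setting is kept with the VM's configuration. The sketch below is assumption-heavy: the option name migrate.encryptionMode and the paths are placeholders based on observed VMX contents, not documented names.

    # Hedged sketch: look for the encrypted vMotion entry in a VM's VMX file
    grep -i 'encryptionmode' /vmfs/volumes/datastore1/myvm/myvm.vmx
    # migrate.encryptionMode = "opportunistic"   <- example of what the entry may look like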


Virtual Machine Encryption Best Practices
Follow virtual machine encryption best practices to avoid problems later, for example, when you generate a vm-support bundle. 


Follow these general best practices to avoid problems. 


Do not encrypt any vCenter Server Appliance virtual machines. 

If your ESXi host crashes, retrieve the support bundle as soon as possible. The host key must be available if you want to generate a support bundle that uses a password, or if you want to decrypt the core dump. If the host is rebooted, it is possible that the host key changes and you can no longer generate a support bundle with a password or decrypt core dumps in the support bundle with the host key. 

Manage KMS cluster names carefully. If the KMS cluster name changes for a KMS that is already in use, any VM that is encrypted with keys from that KMS enters an invalid state during power on or register. In that case, remove the KMS from the vCenter Server and add it with the cluster name that you used initially. 

Do not edit VMX files and VMDK descriptor files. These files contain the encryption bundle. It is possible that your changes make the virtual machine unrecoverable, and that the recovery problem cannot be fixed. 

The encryption process encrypts data on the host before it is written to storage. Backend storage features such as deduplication and compression might not be effective for encrypted virtual machines. Consider storage tradeoffs when using vSphere Virtual Machine Encryption. 

Encryption is CPU intensive. AES-NI significantly improves encryption performance. Enable AES-NI in your BIOS.
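A quick check that the AES-NI flag is actually exposed to a Linux guest or Linux management box (this verifies the CPU flag only; the BIOS setting itself still has to be confirmed in firmware):

    # Check for the AES-NI CPU flag on a Linux system
    grep -m1 -o 'aes' /proc/cpuinfo && echo "AES-NI flag present" || echo "AES-NI flag not found"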

Wednesday, December 7, 2016

vCenter High Availability Illustrated

What is vCenter High Availability?

vCenter High Availability is a new feature of vSphere 6.5. It only works with the Linux-based vCenter Server Appliance. By the time you are done, you end up with an active node, a passive node, and a witness node. In 6.5, there is an RTO of about 5 minutes, which varies depending on load and hardware. File-level replication is done through Linux rsync (asynchronous). Native PostgreSQL replication handles VCDB and VUMDB replication (synchronous).

Requirements:

SSH needs to be enabled on the vCenter prior to implementing vCenter HA; it will fail otherwise. The heartbeat IP addresses need to be on a different subnet. I used one switch with the Management Network, VM Network, and VM Network 2. The documentation states that your vCenter should have 4 vCPUs and 16GB of RAM (Small configuration). Tiny was used in this case (2 vCPUs and 10GB). This works, but it is not supported.
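Before launching the wizard, the prerequisites above can be sanity-checked from the command line. This is only a sketch; the heartbeat addresses are hypothetical placeholders (only 10.1.1.150, the active node, comes from the lab below):

    # Confirm SSH is enabled on the active vCenter appliance (vCenter HA setup fails without it)
    ssh root@10.1.1.150 'echo SSH is reachable'

    # Make sure the planned heartbeat addresses are not already in use on the network
    # (placeholder addresses; they get assigned to the nodes during the vCenter HA wizard)
    ping -c 2 10.1.2.151 || echo "address appears free"
    ping -c 2 10.1.2.152 || echo "address appears free"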

Here is the architecture illustrated:
Here are the configuration steps:

Step 1: Deploy a vCenter Server Appliance. Create a cluster and enable HA. If you want to enable DRS, you will need 3 hosts. In this demo, a two-node cluster was built with HA only. A basic installation allows all 3 appliances to be placed on the same host, although you can modify this during the setup. Look at the specs of my original vCenter appliance; this is NOT supported. You really need 4 vCPUs and 16GB (Small configuration, not Tiny).


Step 2: Select your vCenter, click on the Configure tab and select vCenter HA. Click on Configure in the upper right corner. Two choices exist, Basic and Advanced. Basic is more automated but will NOT work if the vCenter appliance is not in the inventory (meaning a VM inside a datacenter managed by that same vCenter Server).



Step 3: Select the heartbeat IP address for the active vCenter, and specify the IP addresses for the passive (backup) vCenter and the witness. The active vCenter had an IP of 10.1.1.150 and the heartbeat network was the 10.1.1.2 network.



Step 4: Now specify the IP addresses to be used by the passive node and the witness.


Step 5: Review the information, click Next, then click Finish and wait. Monitor the recent tasks; this deployment will take a while. This is what you should see by the time you are done.


Step 6: Verify the results. You should have all 3 vms.


Step 7: Select the active vCenter, click on Monitor and select vCenter HA. All three vms should be UP. Then, take a look at the Configure tab.



Step 8: Notice that the other two VMs (passive and witness) only use about 1GB of RAM. With the Basic deployment, the witness is created with only 1 vCPU and 1GB, yet the passive node has the same number of vCPUs and amount of RAM as the active one.



Step 9: Test the failover. Select the vCenter appliance and click on Initiate Failover in the upper right corner. All three VMs will continue running, but the passive one will take over and start all the services.



Step 10: After about one minute, the Web Client will disconnect. If you reload the page, you should see something like this (browser dependent; I used Chrome).


Step 11: My failover took about 12 minutes since all three VMs were on the same host and I only had 16GB of RAM on that host. The hypervisor activated ballooning and gave the RAM formerly used by the original active node to the one taking over. This is what I saw once it succeeded. Notice how .151 is now the active one instead of .150 (the original one).