Wednesday, July 8, 2015

VSAN 6 RVC (Ruby vSphere Console) Examples

Taking RVC for a test drive; it's been too long:

1. Use PuTTY to SSH into the vCenter Server appliance and log in as root

login as: root

VMware vCenter Server Appliance 6.0.0

Type: vCenter Server with an embedded Platform Services Controller

root@10.1.1.3's password:
Last login: Wed Jul  8 17:55:58 2015 from 10.1.1.200
Connected to service

    * List APIs: "help api list"
    * List Plugins: "help pi list"
    * Enable BASH access: "shell.set --enabled True"
    * Launch BASH: "shell"

2. Run the rvc command and log in as administrator@vsphere.local (not as root!)

Command> rvc administrator@vsphere.local@localhost
Warning: Permanently added 'localhost' (vim) to the list of known hosts
password:
Welcome to RVC. Try the 'help' command.
0 /
1 localhost/

3. Start learning your way around RVC with the help command

> help
Namespaces:
basic
vm
permissions
connection
device
vsan
perf
datastore
spbm
vm_guest
diagnostics
vnc
role
esxcli
snapshot
vds
cluster
syslog
statsinterval
find
issue
host
mark
vim
datacenter
alarm
vmrc
resource_pool
core

4. Use the help command followed by any of the namespaces shown above

> help vsan
Commands:
clear_disks_cache: Clear cached disks information
enable_vsan_on_cluster: Enable VSAN on a cluster
disable_vsan_on_cluster: Disable VSAN on a cluster
cluster_change_checksum: Enable/Disable VSAN checksum enforcement on a cluster
cluster_change_autoclaim: Enable/Disable autoclaim on a VSAN cluster
host_consume_disks: Consumes all eligible disks on a host
host_wipe_vsan_disks: Wipes content of all VSAN disks on hosts, by default wipe all disk groups
host_info: Print VSAN info about a host
cluster_info: Print VSAN config info about a cluster or hosts
disks_info: Print physical disk info about a host
cluster_set_default_policy: Set default policy on a cluster
object_info: Fetch information about a VSAN object
disk_object_info: Fetch information about all VSAN objects on a given physical disk
cmmds_find: CMMDS Find
fix_renamed_vms: This command can be used to rename some VMs which get renamed by the VC in case of storage inaccessibility. It is possible for some VMs to get renamed to vmx file path. eg. "/vmfs/volumes/vsanDatastore/foo/foo.vmx". This command will rename this VM to "foo". This is the best we can do. This VM may have been named something else but we have no way to know. In this best effort command, we simply rename it to the name of its config file (without the full path and .vmx extension ofcourse!).
vm_object_info: Fetch VSAN object information about a VM
disks_stats: Show stats on all disks in VSAN
whatif_host_failures: Simulates how host failures impact VSAN resource usage
observer: Run observer
observer_process_statsfile: Analyze an offline observer stats file and produce static HTML
resync_dashboard: Resyncing dashboard
vm_perf_stats: VM perf stats
enter_maintenance_mode: Put hosts into maintenance mode
Choices for vsan-mode: ensureObjectAccessibility, evacuateAllData, noAction
lldpnetmap: Gather LLDP mapping information from a set of hosts
check_limits: Gathers (and checks) counters against limits
object_reconfigure: Reconfigure a VSAN object
obj_status_report: Print component status for objects in the cluster.
apply_license_to_cluster: Apply license to VSAN
support_information: Command to collect vsan support information
check_state: Checks state of VMs and VSAN objects
reapply_vsan_vmknic_config: Unbinds and rebinds VSAN to its vmknics
recover_spbm: SPBM Recovery
vmdk_stats: Print read cache and capacity stats for vmdks.
Disk Capacity (GB):
Disk Size: Size of the vmdk
Used Capacity: MD capacity used by this vmdk
Data Size: Size of data on this vmdk
Read Cache (GB):
Used: RC used by this vmdk
Reserved: RC reserved by this vmdk
v2_ondisk_upgrade: Upgrade a cluster to VSAN 2.0
scrubber_info: Print scrubber info about objects on this host or cluster
host_evacuate_data: Evacuate hosts from VSAN cluster
host_exit_evacuation: Exit hosts' evacuation, bring them back to VSAN cluster as data containers
host_claim_disks_differently: Tags all devices of a certain model as certain type of device
host_wipe_non_vsan_disk: Wipe disks with partitions other than VSAN partitions
proactive_rebalance: Configure proactive rebalance for Virtual SAN
proactive_rebalance_info: Retrieve proactive rebalance status for Virtual SAN
purge_inaccessible_vswp_objects: Search and delete inaccessible vswp objects on a virtual SAN cluster.

5. Type vsan. and press the TAB key twice to see the available commands

> vsan. [tab] [tab]
vsan.apply_license_to_cluster         vsan.host_wipe_non_vsan_disk
vsan.check_limits                     vsan.host_wipe_vsan_disks
vsan.check_state                      vsan.lldpnetmap
vsan.clear_disks_cache                vsan.obj_status_report
vsan.cluster_change_autoclaim         vsan.object_info
vsan.cluster_change_checksum          vsan.object_reconfigure
vsan.cluster_info                     vsan.observer
vsan.cluster_set_default_policy       vsan.observer_process_statsfile
vsan.cmmds_find                       vsan.proactive_rebalance
vsan.disable_vsan_on_cluster          vsan.proactive_rebalance_info
vsan.disk_object_info                 vsan.purge_inaccessible_vswp_objects
vsan.disks_info                       vsan.reapply_vsan_vmknic_config
vsan.disks_stats                      vsan.recover_spbm
vsan.enable_vsan_on_cluster           vsan.resync_dashboard
vsan.enter_maintenance_mode           vsan.scrubber_info
vsan.fix_renamed_vms                  vsan.support_information
vsan.host_claim_disks_differently     vsan.v2_ondisk_upgrade
vsan.host_consume_disks               vsan.vm_object_info
vsan.host_evacuate_data               vsan.vm_perf_stats
vsan.host_exit_evacuation             vsan.vmdk_stats
vsan.host_info                        vsan.whatif_host_failures

6. Use the cd (change directory) and ls (list) commands to navigate the inventory tree

> cd localhost
/localhost> ls
0 VSAN Datacenter (datacenter)
/localhost> ls 0
0 storage/
1 computers [host]/
2 networks [network]/
3 datastores [datastore]/
4 vms [vm]/

/localhost> cd 1
/localhost/VSAN Datacenter/computers> ls
0 VSAN Cluster (cluster): cpu 13 GHz, memory 6 GB
1 10.1.1.1 (standalone): cpu 7 GHz, memory 10 GB
2 10.1.1.2 (standalone): cpu 7 GHz, memory 11 GB

/localhost/VSAN Datacenter/computers> cd 0
/localhost/VSAN Datacenter/computers/VSAN Cluster> ls
0 hosts/
1 resourcePool [Resources]: cpu 13.02/13.02/normal, mem 6.00/6.00/normal
/localhost/VSAN Datacenter/computers/VSAN Cluster> cd 0
/localhost/VSAN Datacenter/computers/VSAN Cluster/hosts> ls
0 10.1.1.82 (host): cpu 2*1*2.39 GHz, memory 6.00 GB
1 10.1.1.83 (host): cpu 2*1*2.39 GHz, memory 6.00 GB
2 10.1.1.81 (host): cpu 4*1*2.39 GHz, memory 8.00 GB

/localhost/VSAN Datacenter/computers/VSAN Cluster/hosts> cd 0
/localhost/VSAN Datacenter/computers/VSAN Cluster/hosts/10.1.1.82> ls
0 vms/
1 datastores/
2 networks/
/localhost/VSAN Datacenter/computers/VSAN Cluster/hosts/10.1.1.82> cd networks
/localhost/VSAN Datacenter/computers/VSAN Cluster/hosts/10.1.1.82/networks> ls
0 VM Network

> cd /localhost/"VSAN Datacenter"

7. Use the mark command to create shortcuts, then run some vsan commands against them

/localhost/VSAN Datacenter> mark cluster ~/computers/"VSAN Cluster"

/localhost/VSAN Datacenter> vsan.whatif_host_failures ~cluster
Simulating 1 host failures:

+-----------------+----------------------------+-----------------------------------+
| Resource        | Usage right now            | Usage after failure/re-protection |
+-----------------+----------------------------+-----------------------------------+
| HDD capacity    |   7% used (27.84 GB free)  |  11% used (17.86 GB free)         |
| Components      |   0% used (1874 available) |   0% used (1312 available)        |
| RC reservations |   0% used (20.98 GB free)  |   0% used (13.99 GB free)         |
+-----------------+----------------------------+-----------------------------------+
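The post-failure numbers check out arithmetically: losing one host removes that host's two magnetic disks from the raw capacity, while the used bytes survive and get re-protected on the remaining hosts. A back-of-the-envelope sketch (assuming each of the 3 hosts contributes two 4.99 GB magnetic disks, matching the disks_stats output later in this post):

```shell
# Back-of-the-envelope check of the whatif_host_failures table above.
# Assumption: 3 hosts x 2 x 4.99 GB magnetic disks; used bytes are
# re-protected while the failed host's capacity is lost.
awk 'BEGIN {
  md_per_host = 2 * 4.99        # GB of MD capacity per host
  total = 3 * md_per_host       # ~29.94 GB raw HDD capacity
  used  = 0.07 * total          # 7% used -> ~2.10 GB
  after = total - md_per_host   # one host gone -> ~19.96 GB
  printf "%.2f GB free after one host failure\n", after - used
}'
```

That lands on the 17.86 GB free shown in the table; the used percentage climbs to ~11% only because the same ~2.1 GB of data now sits on a smaller pool.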

/localhost/VSAN Datacenter> vsan.resync_dashboard
missing argument 'cluster_or_host'
/localhost/VSAN Datacenter> vsan.resync_dashboard ~cluster
2015-07-08 18:13:35 +0000: Querying all VMs on VSAN ...
2015-07-08 18:13:35 +0000: Querying all objects in the system from 10.1.1.82 ...
2015-07-08 18:13:36 +0000: Got all the info, computing table ...
+-----------+-----------------+---------------+
| VM/Object | Syncing objects | Bytes to sync |
+-----------+-----------------+---------------+
+-----------+-----------------+---------------+
| Total     | 0               | 0.00 GB       |
+-----------+-----------------+---------------+

/localhost/VSAN Datacenter> vsan.disks_stats ~cluster
2015-07-08 18:14:21 +0000: Fetching VSAN disk info from 10.1.1.81 (may take a moment) ...
2015-07-08 18:14:21 +0000: Fetching VSAN disk info from 10.1.1.83 (may take a moment) ...
2015-07-08 18:14:21 +0000: Fetching VSAN disk info from 10.1.1.82 (may take a moment) ...
2015-07-08 18:14:24 +0000: Done fetching VSAN disk infos
+---------------------+-----------+-------+------+----------+------+----------+---------+
|                     |           |       | Num  | Capacity |      |          | Status  |
| DisplayName         | Host      | isSSD | Comp | Total    | Used | Reserved | Health  |
+---------------------+-----------+-------+------+----------+------+----------+---------+
| mpx.vmhba1:C0:T2:L0 | 10.1.1.81 | SSD   | 0    | 5.00 GB  | 0 %  | 0 %      | OK (v2) |
| mpx.vmhba1:C0:T1:L0 | 10.1.1.81 | MD    | 0    | 4.99 GB  | 7 %  | 0 %      | OK (v2) |
+---------------------+-----------+-------+------+----------+------+----------+---------+
| mpx.vmhba1:C0:T4:L0 | 10.1.1.81 | SSD   | 0    | 5.00 GB  | 0 %  | 0 %      | OK (v2) |
| mpx.vmhba1:C0:T5:L0 | 10.1.1.81 | MD    | 0    | 4.99 GB  | 7 %  | 0 %      | OK (v2) |
+---------------------+-----------+-------+------+----------+------+----------+---------+
| mpx.vmhba1:C0:T1:L0 | 10.1.1.82 | SSD   | 0    | 5.00 GB  | 0 %  | 0 %      | OK (v2) |
| mpx.vmhba1:C0:T3:L0 | 10.1.1.82 | MD    | 0    | 4.99 GB  | 7 %  | 0 %      | OK (v2) |
+---------------------+-----------+-------+------+----------+------+----------+---------+
| mpx.vmhba1:C0:T2:L0 | 10.1.1.82 | SSD   | 0    | 5.00 GB  | 0 %  | 0 %      | OK (v2) |
| mpx.vmhba1:C0:T5:L0 | 10.1.1.82 | MD    | 0    | 4.99 GB  | 7 %  | 0 %      | OK (v2) |
+---------------------+-----------+-------+------+----------+------+----------+---------+
| mpx.vmhba1:C0:T2:L0 | 10.1.1.83 | SSD   | 0    | 5.00 GB  | 0 %  | 0 %      | OK (v2) |
| mpx.vmhba1:C0:T5:L0 | 10.1.1.83 | MD    | 0    | 4.99 GB  | 7 %  | 0 %      | OK (v2) |
+---------------------+-----------+-------+------+----------+------+----------+---------+
| mpx.vmhba1:C0:T1:L0 | 10.1.1.83 | SSD   | 0    | 5.00 GB  | 0 %  | 0 %      | OK (v2) |
| mpx.vmhba1:C0:T3:L0 | 10.1.1.83 | MD    | 0    | 4.99 GB  | 7 %  | 0 %      | OK (v2) |
+---------------------+-----------+-------+------+----------+------+----------+---------+

/localhost/VSAN Datacenter/computers/VSAN Cluster/hosts> cd /
> cd 1
/localhost> mark esxi ~cluster/hosts/10.1.1.81

/localhost> vsan.host_info ~esxi
2015-07-08 18:20:46 +0000: Fetching host info from 10.1.1.81 (may take a moment) ...
Product: VMware ESXi 6.0.0 build-2159203
VSAN enabled: yes
Cluster info:
  Cluster role: master
  Cluster UUID: 529b150c-c557-299b-58e1-c3bcc8a543dc
  Node UUID: 54c7edfe-a5a1-458b-734a-000c29554efb
  Member UUIDs: ["54c7edfe-a5a1-458b-734a-000c29554efb", "54c7f36b-8867-9610-8438-000c29bc9ece", "54c7f0bc-0185-23d2-9611-000c296a6e1f"] (3)
Node evacuated: no
Storage info:
  Auto claim: no
  Checksum enforced: no
  Disk Mappings:
    SSD: Local VMware Disk (mpx.vmhba1:C0:T2:L0) - 5 GB, v2
    MD: Local VMware Disk (mpx.vmhba1:C0:T1:L0) - 5 GB, v2
    SSD: Local VMware Disk (mpx.vmhba1:C0:T4:L0) - 5 GB, v2
    MD: Local VMware Disk (mpx.vmhba1:C0:T5:L0) - 5 GB, v2
FaultDomainInfo:
  Not configured
NetworkInfo:
  Adapter: vmk2 (10.1.3.81)

8. Use the "q" command to quit

/localhost> q
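Once the paths and marks feel familiar, the same checks can be run without an interactive session. As a hedged sketch (assuming the -c/--cmd option documented in the RVC README, which queues commands to evaluate after login; the cluster path matches the examples above), a one-shot health check might look like:

```shell
# Hypothetical non-interactive run; rvc still prompts for the password
# unless it is embedded in the user@host connection spec.
rvc -c 'vsan.check_state "/localhost/VSAN Datacenter/computers/VSAN Cluster"' \
    -c 'exit' \
    administrator@vsphere.local@localhost
```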

