pktcap-uw is an enhanced packet capture tool introduced in ESXi 5.5 that replaces the legacy tcpdump-uw utility. It can capture packets at physical NICs, VMkernel ports, or individual switch ports, and it is considerably more flexible and powerful than the older tool. The main options for this command follow.
To get help on how to use the command:
pktcap-uw -h | more
To capture frames using a particular VMkernel port:
pktcap-uw --vmk vmk0
To capture frames using an uplink:
pktcap-uw --uplink vmnic0
To capture frames using a particular switch port:
pktcap-uw --switchport 10
To redirect output to a file:
pktcap-uw --vmk vmk0 -o /myfile
Note: Press Ctrl+C to end the capture session.
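Captures written with -o are standard pcap files. As a quick sketch (reusing the example path above), they can be read back with tcpdump-uw on the host, or copied off and opened in Wireshark:

```shell
# Capture 100 packets on vmk0 to a pcap file (-c limits the packet count),
# then read the file back with tcpdump-uw.
pktcap-uw --vmk vmk0 -c 100 -o /myfile
tcpdump-uw -nr /myfile
```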
Tuesday, August 19, 2014
Mastering esxcfg-advcfg
esxcfg-advcfg is a command that allows an admin to view and modify advanced parameters on an ESXi host. What follows is a series of examples showing how to view and modify some of these parameters.
Step 1: Type the command without any arguments (or with the "-h" option) to view the available options.
Step 2: Use the "-l" option to view a list of the parameters. You can pipe (|) the output to the more command to view these parameters one page at a time.
Step 3: Use the "-g" option to get the value of a parameter. In this case, the output shows the hostname of this ESXi host.
Step 4: Use the "-g" and "-s" options to view and then modify a parameter. This example shows how to indicate which VMkernel port is to be used for vMotion.
Step 5: View and modify NFS-related parameters. This example changes the default limit on NFS volumes.
Step 6: Starting in 5.5, you also see parameters for VSAN (Virtual SAN). Here you can see that 60 minutes is the default timeout before objects are rebuilt when a host is down.
Step 7: This next parameter lets an admin make the annoying "SSH is enabled" warning go away.
Step 8: This example shows how to view and modify the value for Normal Shares.
Step 9: This last example shows how to modify the default ballooning percentage for VMs.
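The steps above can be sketched with concrete commands. The parameter paths below (/Misc/HostName, /NFS/MaxVolumes, /UserVars/SuppressShellWarning) are common examples and may vary by ESXi build, so verify them against the output of -l first:

```shell
# Step 2: list all advanced parameters, one page at a time
esxcfg-advcfg -l | more

# Step 3: get the value of a single parameter (the hostname here)
esxcfg-advcfg -g /Misc/HostName

# Step 5: view, then raise, the NFS volume limit
esxcfg-advcfg -g /NFS/MaxVolumes
esxcfg-advcfg -s 64 /NFS/MaxVolumes

# Step 7: suppress the "SSH is enabled" warning
esxcfg-advcfg -s 1 /UserVars/SuppressShellWarning
```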
vSphere VSAN. Performance too slow?
Storage Controllers previously supported for Virtual SAN that are no longer supported (2081431)
Purpose
This article provides information on the list of controllers that are no longer supported with Virtual SAN.
Resolution
As part of VMware's ongoing testing and certification efforts on Virtual SAN compatible hardware, VMware has decided to remove these controllers from the Virtual SAN compatibility list. While fully functional, these controllers offer IO throughput that is too low to sustain the performance requirements of most VMware environments.
Because of the low queue depth offered by these controllers, even a moderate IO rate could result in IO operations timing out, especially during disk rebuild operations. In this event, the controller may be unable to cope with both rebuild activity and running virtual machine IO, causing elongated rebuild times and slow application responsiveness. To avoid issues such as this, VMware is removing these controllers from the Hardware Compatibility List.
If you have purchased Virtual SAN for use with these controllers, contact VMware Customer Service for the next steps.
Note: These Fujitsu controllers are removed from the Virtual SAN Compatibility Guide as they were added prematurely. These controllers are not affected by low queue depth.
- Fujitsu PRAID CP400i
- Fujitsu PRAID CM400i
- Fujitsu PRAID EM400i
Wednesday, August 6, 2014
Learning How To Use Esxtop
Esxtop explained in 10 minutes.
What it is:
Esxtop is a command found on ESXi hosts that can be used to diagnose issues with CPU, memory, disk, and network. By default, esxtop shows CPU-related activity, but this can be changed to display memory, disk, and network activity by pressing different keys on the keyboard.
Esxtop defaults to CPU-related information; that is the "c" view.
What you see here is one virtual machine using a CPU at 100%. Notice how this VM is using logical CPU 0. Notice the fields %USED and %RDY: the first is about 100% while the other is close to 0%. This shows that the VM is almost never waiting for access to the logical CPU. Notice that the field %MLMTD equals 0, since no CPU limit is in place.
What follows is an example of two identical VMs fighting for the same CPU. They have equal shares. Notice how %USED drops to about 50% while %RDY jumps to about 50%: while one VM uses the CPU, the other waits, and vice versa.
What follows is an example of one VM having twice the CPU shares of the other. Notice the relationship of shares to %USED and %RDY, and the 66% vs. 33% ratio.
After powering off the other VM, a limit was implemented for the remaining VM, capping its CPU at close to 50%.
Here is the "m" view for memory. At the top you can see that this ESXi host has 4 GB of RAM (PMEM /MB). Notice this view shows Transparent Page Sharing, ballooning, compression, and swapping activity. Looking at the VM line, notice this VM was given 4 GB, and the hypervisor has granted it without a problem (GRANT). Also notice that there is no ballooning, compression, or swapping yet.
A similar capture, this time after implementing a memory limit on this VM. This triggers the memory reclamation techniques previously mentioned.
You can see disk activity at the HBA level with the "d" view. Notice DAVG (device average latency) and KAVG (kernel average latency). You can also see reads per second, writes per second, and so forth.
You can see disk activity at the individual device level with the "u" view. This displays both disks with VMFS file systems and NFS datastores. This capture shows the internal disk and the CD/DVD; no activity was taking place when the capture was made. Other fields of importance are ACTV and QUED: nothing active and nothing in the queue.
You can see per-VM disk information with the lowercase "v" view. Notice the latency figures for reads and writes.
Here is the "n" view for networking. Notice the fields for dropped outbound and inbound packets.
You can press "f" to add or remove columns in any view. Fields marked with "*" are showing; fields without it are not.
Press "h" for help.
Press "s" to change how often esxtop refreshes. The default is 5 seconds; this can be lowered or raised.
Press "q" to quit the utility.
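Beyond interactive use, esxtop also has a batch mode that is handy for capturing counters over time for later analysis. The flags below are standard esxtop options (-b batch mode, -d delay between samples in seconds, -n number of iterations):

```shell
# Collect 12 samples at 5-second intervals into a CSV file
# that can be opened in perfmon or a spreadsheet.
esxtop -b -d 5 -n 12 > esxtop-stats.csv
```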
Tuesday, August 5, 2014
Solaris 11 + ZFS as an NFS server for ESXi
How to configure a Solaris 11 NFS Server
Step 1: Start the installation of Solaris 11. In this case, a VM was created with 1 vCPU, 3 GB of RAM, and two 20 GB disks, with the second disk to be used for NFS. Go through the screens and answer the questions as needed.
Step 2: Reboot the Solaris server after the install and log in.
Step 3: Change the IP address of the NFS server with the following commands.
# sudo ipadm create-addr -T static -a local=10.1.1.80/24 net0
# ipadm show-if; ipadm show-addr
Step 4: Create an NFS share using ZFS and the second disk, with the commands shown in the capture.
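As a sketch of what those captured commands typically look like: the disk name c2t1d0 and the pool/dataset names below are assumptions (verify the actual disk name with the format command), and the share.nfs property is the Solaris 11.1+ syntax.

```shell
# Create a pool on the second disk and a dataset to export
zpool create nfspool c2t1d0
zfs create nfspool/vmstore

# Enable NFS sharing on the dataset
zfs set share.nfs=on nfspool/vmstore
```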
Step 5: Mount the file system on the ESXi host using esxcfg-nas.
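On the ESXi side, the mount can be sketched as follows. The share path and the datastore label match the assumed names from the ZFS sketch above; -o is the NFS server address and -s is the exported path:

```shell
# Add the NFS datastore, then list mounts to confirm
esxcfg-nas -a -o 10.1.1.80 -s /nfspool/vmstore solaris-nfs
esxcfg-nas -l
```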
Saturday, August 2, 2014
Just one more Xen guy??? Checking out XenServer.
How to install and configure XenServer and XenCenter
Sometimes it's good to compare different competing solutions. Today I felt like installing Xen Server since it had been a while. Here are the steps.
1. Download the software and burn it to a DVD.
2. Boot your physical server (or virtual machine) from the DVD and answer the typical questions. A virtual machine inside an ESXi host was used in this case, with 2 CPUs and 2 GB of RAM, configured as a 64-bit Linux VM.
3. Once the installation finishes, reboot the XenServer host.
4. Launch your browser and connect to the XenServer host to download XenCenter.
5. Install the MSI.
6. Launch the XenCenter utility.
7. Add your XenServer host to your inventory.
8. Create a test VM by clicking the New VM icon.