Sysadmin Today #13: VMware Performance Best Practices

In this episode, I discuss best practices for setting up and maximizing a VMware environment, including CPU, memory, storage, and network optimizations.

Host: Paul Joyner

Disk Performance
Device latency should not exceed 25 ms | Kernel and queue latency should not exceed 2 ms | Total latency should not exceed 27 ms
Counters to use:
Queue command latency
Physical device read latency
Command latency
Kernel command latency
Kernel read latency
Bus resets
Queue read latency
Physical device write latency
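As a minimal sketch (not from the episode), the thresholds above can be applied to a latency sample programmatically. The counter values passed in are hypothetical averages you might read from esxtop or the vCenter performance charts; the function and names are illustrative, not a VMware API.

```python
# Latency ceilings from the show notes, in milliseconds.
DEVICE_MAX_MS = 25.0        # physical device latency ceiling
KERNEL_QUEUE_MAX_MS = 2.0   # kernel and queue latency ceiling
TOTAL_MAX_MS = 27.0         # combined device + kernel ceiling (25 + 2)

def check_disk_latency(device_ms: float, kernel_ms: float, queue_ms: float) -> list[str]:
    """Return warnings for any threshold the sample exceeds (empty list = OK)."""
    warnings = []
    if device_ms > DEVICE_MAX_MS:
        warnings.append(f"device latency {device_ms:.1f} ms > {DEVICE_MAX_MS:.0f} ms")
    if kernel_ms > KERNEL_QUEUE_MAX_MS:
        warnings.append(f"kernel latency {kernel_ms:.1f} ms > {KERNEL_QUEUE_MAX_MS:.0f} ms")
    if queue_ms > KERNEL_QUEUE_MAX_MS:
        warnings.append(f"queue latency {queue_ms:.1f} ms > {KERNEL_QUEUE_MAX_MS:.0f} ms")
    # Total is taken here as device + kernel latency, matching the 25 + 2 = 27 ms figures.
    if device_ms + kernel_ms > TOTAL_MAX_MS:
        warnings.append(f"total latency {device_ms + kernel_ms:.1f} ms > {TOTAL_MAX_MS:.0f} ms")
    return warnings

# Example: a healthy sample produces no warnings.
print(check_disk_latency(12.0, 0.5, 0.3))  # → []
```

A sample that breaches the device ceiling (for example, 30 ms device latency) will also breach the total ceiling, so alerting on both helps distinguish a slow array from kernel-side queuing.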

Network Performance
Look for dropped packets, monitor usage in KBps, and check for errors on the physical switch port.
Counters to use:
Data receive rate
Data transmit rate
Receive packets dropped
Transmit packets dropped
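A minimal sketch (not from the episode) of how the dropped-packet counters above can be turned into a drop percentage. The function name and sample numbers are hypothetical; the point is that any sustained non-zero drop rate warrants checking both the vSwitch and the physical switch port.

```python
def packet_drop_rate(delivered: int, dropped: int) -> float:
    """Percentage of packets dropped over a sampling interval (0.0 if no traffic)."""
    total = delivered + dropped
    return (dropped / total) * 100.0 if total else 0.0

# Example: 10 drops out of 10,000 total packets is a 0.1 % drop rate.
print(f"{packet_drop_rate(9990, 10):.2f}%")
```

Run the same calculation separately for the receive and transmit counters, since a drop on only one side often points to a different bottleneck (e.g., an undersized ring buffer versus an oversubscribed uplink).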

MEM Performance for VMware Essentials Customers

Performance Best Practice for VMware 6.0

Veeam One Monitoring


4 thoughts on “Sysadmin Today #13: VMware Performance Best Practices”

  1. Paul, why do you automatically recommend a SAN for the SMB? I would argue that a SAN has no place in an SMB. Just configure one or more beefy ESXi hosts with one big RAID 10 (OBR10). Less complex and much faster than what a SAN can provide.

    1. Ryan,
      Thanks for the feedback. VMware's core features were designed on the principle of using shared storage, which can be a SAN or an NFS-capable NAS. The beauty of VMware is its ability to avoid a single point of failure, and having all your eggs in one basket is not a best practice. vSAN is very affordable, and there are some very cost-effective NFS systems as well. As for speed, a proper SAN can provide much higher I/O than local storage. If you are only running one or two virtual machines, then yes, a single server is a good choice depending on its load, but once you get to three or more, a multi-server environment is highly recommended.

      1. Paul,
        A SAN does have its place in larger enterprise environments. With a SAN you still have all your eggs in one basket, and arguably in a worse situation. Take a look at StarWind's Virtual SAN (the product is completely free). You could also just go with two hosts with the same specs and use Veeam Backup & Replication for a much less complex solution. You really don't need a SAN in the SMB; it doesn't belong.

        1. Ryan,

          I will professionally state that we agree to disagree. SANs and higher-end NAS hardware are designed for high availability; they provide a higher level of fault tolerance than a server with internal RAID storage. I did check out your link, and with VMware I would opt for VMware's vSAN technology, which does the same thing but with better integration and performance, and it is cost-effective. Again, in some very limited and small rollouts a single server with internal storage may work out great, but once you start running three or more VMs, Remote Desktop Services/virtual desktops, SQL, and other demanding workloads (which a lot of SMBs do), having a storage platform that is not only supported but offers fault tolerance and performance is vital. That is why I think VMware's vSAN shines: it uses the internal storage of each server with a single SSD (you can add more) for caching and provides fault tolerance between two or more host servers. SAN/NAS hardware comes in all sizes, with price points targeted at both SMB and enterprise markets. I would also argue that SANs do not require the high level of complexity you are referring to; the only complex aspect, if you can call it that, is that you need an additional switch, or a switch with a VLAN carved out. Ryan, I really do appreciate your comments; it is beneficial to get different viewpoints.
