FAQ: Using SSDs with ESXi
Most state-of-the-art enterprise storage architectures make use of SSD (Solid State Disk) storage in one way or another, and - with prices inevitably dropping - SSDs have become affordable even for home use. What is their benefit? Since they are based on Flash memory, SSDs offer much higher throughput and much lower latency than traditional magnetic hard disks. I can well remember my delight when I equipped my home PC with an SSD for the first time and saw Windows booting ten times faster than before, in only a few seconds ... and I always wondered how VMware ESXi and the VMs it runs would benefit from SSD storage.
Well, a while ago I upgraded the two ESXi boxes that make up my Small Budget Hosted Virtual Lab (SBHVL) to include Intel Haswell CPUs, and one of them is also running with 2x Intel 240GB SSDs now. It's time to write about what I have learnt about ESXi and SSDs: In this blog post I will summarize how ESXi can make use of local SSDs in general, and specifically what you need to think about when using them as regular datastores.
How VMware ESXi can make use of local SSDs
ESXi can use locally attached SSDs in multiple ways:
- as Host swap cache (since 5.0): You can configure ESXi to use a portion of an SSD-backed datastore as swap memory shared by all VMs. This is only useful if you plan to heavily overcommit the RAM of your host (e.g. in VDI scenarios). Swapping out a VM's memory to disk is the last resort if all other memory reclamation methods (like page sharing and memory compression) have already been fully utilized, and it will usually have a significant performance impact. However, swapping to SSD is less bad than swapping to hard disks and will reduce this impact.
For details on how to configure the Host swap cache see the vSphere Resource Management documentation.
- as Virtual Flash (since 5.5): Since vSphere 5.5 you can format an SSD with the Virtual Flash File System (VFFS) and use it either as Host swap cache (see above) or as a configurable write-through read cache for selected VMs. I consider the latter much more useful than a swap cache, because it allows you to use an SSD as a drive cache for a VM's virtual disks that are stored on regular hard disks. However, it requires Enterprise Plus licensing. The vSphere 5.5 Storage documentation includes details on how to set this up, and a quick command line check for eligible SSDs is sketched after this list.
- as part of a Virtual SAN (VSAN) (coming soon): VSAN is in public beta right now and will require vSphere 5.5 once it is generally available. It allows you to combine the local storage of multiple ESXi hosts into a dynamic and resilient shared pool, and it even requires at least one SSD per host, which is used as a write buffer and read cache. For further information you can read Duncan Epping's introduction to VSAN and the white paper What's new in VMware Virtual SAN (VSAN).
- as a regular datastore: Of course you can also just format your SSD disks with VMFS and use them as regular datastores for your VMs. This is fully supported by VMware, and the remainder of this post will focus on this usage scenario.
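For the Virtual Flash scenario mentioned above there is also a quick way to see from the ESXi shell which local SSDs the host considers eligible. The following is only a sketch from memory - I am assuming the esxcli vflash namespace introduced with vSphere 5.5, so verify the exact command against the vSphere 5.5 documentation on your build:

# List the SSD devices that are eligible for use with Virtual Flash (vSphere 5.5 and later)
esxcli storage vflash device list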
Checks and fakes ...
After you have built an SSD disk into your host you should first check whether it was properly detected as SSD or not. In the vSphere Client you need to look at Host/Configuration/Storage/Devices. The Drive Type will be shown there either as non-SSD or SSD:
SSD display in vSphere Client
Alternatively, on the command line you can run esxcli storage core device list to list all disk devices and their properties. If the output includes the line
Is SSD: true
then the disk is properly detected as SSD. In the rare event that an SSD is not properly detected you can use storage claim rules to force its type to SSD. The vSphere 5 docs include detailed instructions on how to do this. This is also useful if you want to fake a regular hard disk to be an SSD, e.g. for testing purposes!
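As a sketch of what such a claim rule looks like on the ESXi shell (naa.xxx is a placeholder for your own device id, and you should double-check the exact syntax against the vSphere 5 storage documentation):

# Add a SATP claim rule that tags the local device as SSD
esxcli storage nmp satp rule add --satp VMW_SATP_LOCAL --device naa.xxx --option "enable_ssd"
# Reclaim the device so the new rule takes effect (a reboot works as well)
esxcli storage core claiming reclaim -d naa.xxx
# Verify: the device should now report "Is SSD: true"
esxcli storage core device list -d naa.xxx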
Once you have created a VMFS datastore on a properly detected (or faked) SSD disk and put a VM on this datastore, its virtual disks will inherit the SSD property. That means the Guest OS will be able to detect a virtual disk residing on an SSD datastore as a virtual SSD disk and treat it accordingly. For this detection to work, ESXi 5.x, VM hardware version 8 or later and a VMFS5 datastore are required (see the vSphere docs)!
Again, for testing purposes you can also fake a single virtual disk to appear as SSD (regardless of the underlying datastore's type) by setting a parameter like scsiX:Y.virtualSSD = 1 in the VM's configuration file. See William Lam's post about this for details.
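As a minimal example, assuming the disk you want to fake is the second disk on the first SCSI controller (scsi0:1 - adjust this to your own disk numbering), the entry in the .vmx file would look like this:

scsi0:1.virtualSSD = "1"

(Values in .vmx files are usually enclosed in quotes.)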
Finally you may want to check whether the Guest OS in the VM properly detects its virtual disk as SSD. This is important at least for modern Windows versions, because they then put various system optimizations in place. For Windows 7 (and 2008 R2) there seems to be no easy way to tell whether it has detected the SSD; you need to check indirectly whether the system optimizations have been applied or not - this MSDN blog post will help. With Windows 8 (and 2012) it is much easier: open the Control Panel applet Defragment and optimise your drives and it will clearly list your drives' media types:
Defragment and optimise your drives in Windows 8
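If you prefer the command line over the Control Panel applet, the following one-liner should show the same information on Windows 8 / Server 2012 - this assumes the Storage PowerShell module that ships with these versions, so it will not work on Windows 7:

# Lists all physical disks and the media type Windows has detected (SSD vs. HDD)
Get-PhysicalDisk | Select-Object FriendlyName, MediaType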
What about lifetime?
SSDs are said to have a limited lifetime (compared to hard disks), because their Flash-based cells can only bear a certain number of (re)write cycles before they fail. Nevertheless most consumer grade SSDs are sold with a 5-year warranty - under the assumption that you write an average of at most 20 GB per day to the disk.
That means you can estimate the (remaining) lifetime of an SSD disk by monitoring its write volume. The topic Estimate SSD Lifetime in the vSphere docs explains how to do this:
- Determine the device id of the SSD by listing the disks with
esxcli storage core device list
- Display the statistics of the SSD by running
esxcli storage core device stats get -d=device_id
Displaying disk statistics with ESXi
These statistics are reset to 0 when the host reboots, so just check the host's uptime with the uptime command and you will get an idea of how many GB were written per day on average.
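As a rough sketch of that calculation (naa.xxx is a placeholder for your device id, and I am assuming the stats output reports a "Blocks Written" counter in 512-byte blocks - verify both against your own output):

# Show the write statistics of the SSD (the counter is reset at every reboot)
esxcli storage core device stats get -d=naa.xxx | grep -i "blocks written"
# Show how long the host has been running
uptime
# Example: 419,430,400 blocks written x 512 bytes = ~200 GB in total;
# with an uptime of 10 days that is ~20 GB written per day on average.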
Does ESXi support TRIM?
Since the write cycles of each Flash cell are limited, the controller of an SSD will try to distribute all writes evenly over the complete disk. It also carefully tracks which cells are already in use and which are no longer in use and can be overwritten. Here lies a problem: the controller has no awareness or understanding of the file system (e.g. Windows NTFS) that the OS uses on the disk and thus cannot easily tell on its own whether a block is in use or not. As the number of known free Flash cells decreases, the write performance of the SSD decreases as well, because it heavily depends on the number of cells that can be written to simultaneously.
To address this issue the ATA TRIM command was introduced many years ago. Modern Operating Systems use the TRIM command to inform the SSD controller when they delete a block so that it can add the associated Flash cell to its free list and knows that it can be overwritten.
So, does ESXi support TRIM? I tried really hard to find out, but it looks like today you cannot find an official and reliable source clearly stating whether ESXi supports TRIM or not. Most non-VMware sources state (in blog posts etc.) that ESXi does not support TRIM, but without providing a reliable source.
However, while researching I found out that the SCSI equivalent of the ATA TRIM command is the UNMAP command, and this rang a bell with me: In vSphere 5.0, reclamation of deleted VMFS blocks with the help of SCSI UNMAP commands was introduced as part of the vStorage APIs for Array Integration (VAAI). When vSphere 5.0 was released this functionality was enabled by default (if the storage array supported it), but this was soon changed, because it had undesired side effects in some situations. Today VAAI space reclamation is a manual process that is triggered by running the command
vmkfstools -y nn
on a datastore. The VMware KB article 2014849 explains this in detail and also mentions how you can check whether this is supported on a disk or not: The command
esxcli storage core device vaai status get -d device_id
will display the line
Delete Status: Supported
if SCSI UNMAP can be used on a disk to perform space reclamation. And guess what: This is the case with the SSD disks that I have in my ESXi host! And consequently I was able to run vmkfstools -y on them!
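Putting it together, the whole check-and-reclaim sequence looks roughly like this on the ESXi shell (device id and datastore name are placeholders; the percentage passed to vmkfstools -y determines how much of the free space is temporarily claimed for the reclamation, see KB 2014849):

# Check whether the device supports SCSI UNMAP ("Delete Status: Supported")
esxcli storage core device vaai status get -d naa.xxx
# Change into the datastore that resides on the SSD
cd /vmfs/volumes/my-ssd-datastore
# Reclaim deleted blocks, using 60% of the free space for the temporary balloon file
vmkfstools -y 60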
But will this really send TRIM commands to the SSD? With this question in my mind I used my Google-Fu again and finally stumbled upon this French blog post by Raphaël Schitz, who discovered the same and asks: SSD + VAAI = TRIM? Like me, he is unable to definitely answer this question ...
If someone from VMware reads this and is able to answer this question, I would be grateful to hear from them - please end our days of uncertainty about TRIM support in ESXi!
What if ESXi does not support TRIM?
In the early days of SSDs, TRIM support was very important to keep the drive healthy and fast. But today's SSD controllers have become much more intelligent - they are able to detect unused pages on their own and free up Flash cells with a so-called background garbage collection process. So it is debatable whether TRIM is really still needed today. But hey, if Windows supports TRIM then ESXi should do so, too, right?! At least I would feel more confident then ...