VSAN 6.5 to 6.6 Upgrade Issues with CLOMD Liveness

Before attempting any upgrade in a production environment I always try to test the process and functionality in a lab first. With this in mind I wanted to test the upgrade from VSAN 6.5 to 6.6 in my home lab, and unfortunately I initially didn’t have a whole lot of success. I’ve now fixed all the issues, and in case anyone has the same problems I’d like to make sure the resolution is readily available. I haven’t had the time to pin down the root cause, but I have resolved the issues.

Firstly, let me make sure you understand: this is on UNSUPPORTED hardware. These issues may never appear in a fully supported and compliant production environment, and I have not seen these VSAN upgrade issues in a fully supported environment. However, we all tend to run our labs on unsupported hardware, so I’m sure I won’t be the only one to come across these issues, and in case other people do, the resolution is pretty simple. I have seen the same issues three times in three separate (unsupported) environments.

The upgrade was from VSAN 6.5 to VSAN 6.6. VSAN isn’t a stand-alone product, it is built into vSphere, so the upgrade is as simple as upgrading ESXi. I was running ESXi 6.5.0 (Build 4887370) and the upgrade was to ESXi 6.5.0 (Build 5310538).
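
If you aren’t patching through Update Manager, a host can also be upgraded directly from an offline bundle in the ESXi shell. This is only a rough sketch: the datastore path and the image profile name below are examples, so list the profiles contained in your bundle first and use the name it reports. Put the host into Maintenance Mode before updating and reboot afterwards.

esxcli software sources profile list -d /vmfs/volumes/datastore1/ESXi650-offline-bundle.zip
esxcli software profile update -d /vmfs/volumes/datastore1/ESXi650-offline-bundle.zip -p ESXi-6.5.0-20170404001-standard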

It has been a long (and I mean a LONG) time since I have seen an ESXi purple screen. But soon after upgrading my environment to ESXi 6.5 (5310538) my hosts started purple screening. I had to take a screenshot because this is a rare sight. It only happened once, and since the fixes below were applied it has never happened again.

[Screenshot: ESXi purple diagnostic screen]

The VSAN upgrade process is very straightforward to perform:

  • Upgrade vCenter Server
  • Upgrade ESXi hosts
  • Upgrade the disk format version

Straight after the upgrade I started receiving vMotion alerts and my VMs wouldn’t migrate between hosts. There didn’t appear to be any configuration issue with vMotion, and it was working perfectly fine before the upgrade. I tested connectivity with a vmkping between hosts on the vMotion vmkernel IPs and it failed. There was no network connectivity between hosts on the vMotion vmkernel port!

The vMotion fix:
I found that simply deleting the existing vMotion vmkernel and re-creating it with the exact same configuration fixed the issue. I had to do this on every host in the cluster, and vMotion started working again.
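
For reference, the test and the fix look roughly like this from the ESXi shell. This is just a sketch: it assumes the vMotion vmkernel is vmk1 on a standard vSwitch port group named vMotion, and the IP addresses are examples. A distributed switch or the dedicated vMotion TCP/IP stack needs slightly different options.

vmkping -I vmk1 192.168.20.12
esxcli network ip interface remove --interface-name=vmk1
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=vMotion
esxcli network ip interface ipv4 set -i vmk1 -I 192.168.20.11 -N 255.255.255.0 -t static
esxcli network ip interface tag add -i vmk1 -t VMotion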

CLOMD Liveness

This brings me to the next issue, which was far more critical: CLOMD liveness. After I resolved the vMotion alerts, I ran a quick health check on VSAN and found that my hosts were now reporting a “CLOMD Liveness” issue. This is concerning because CLOMD (Cluster Level Object Manager Daemon) is a key component of VSAN. CLOMD runs on every ESXi host in a VSAN cluster and is responsible for creating new objects, communication between hosts for data moves and evacuations, and the repair of existing VSAN objects. To put it simply, it is a critical component for creating any new objects on VSAN.

[Screenshot: vSAN health check reporting a CLOMD liveness issue]

If you want to test this out (in a test environment), SSH to your ESXi hosts, stop the CLOMD daemon by running “/etc/init.d/clomd stop”, and then try to create new objects or run the VM creation proactive VSAN test and see what happens. You will get the error “Cannot complete file creation operation”.
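
On the host, the test is simply:

/etc/init.d/clomd stop
/etc/init.d/clomd status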

[Screenshot: “Cannot complete file creation operation” error]

And the output from the proactive VSAN test is “Failed to create object. A CLOM is not attached. This could indicate that the clomd daemon is not running”.

[Screenshot: proactive VSAN test output]

If CLOMD isn’t running you’re not at risk of losing any data; it just means that new objects can’t be created. I would still suggest that it is critical to get it running again.

A CLOMD liveness issue can occur for a number of reasons. The VMware KB article is here: https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2109873

To check whether the CLOMD service/daemon is running, execute the following command on each host:

/etc/init.d/clomd status

The results showed that the CLOMD service was not running, and even after restarting the service it would stop running a short time later.
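
If you hit the same problem, restarting the daemon and keeping an eye on its log makes the behaviour easy to see (this is the standard clomd log location on ESXi):

/etc/init.d/clomd start
/etc/init.d/clomd status
tail /var/log/clomd.log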

[Screenshot: clomd status showing the daemon not running]

The VSAN CLOMD Liveness fix:
Learning from the vmkernel issues, I immediately tried deleting and re-creating the VSAN vmkernel on each host and this fixed the issue. However, this is a little more delicate than the vMotion process, because when you delete the VSAN vmkernel you instantly partition that host, so you need to be careful how you do it.

Place the host in Maintenance Mode first! We aren’t going to lose any data so you don’t need to evacuate it; however, I would recommend you at least select “Ensure data accessibility from other hosts”. Selecting “No Data Migration” is generally only suggested if you are shutting down all nodes in the VSAN cluster, or for a non-intrusive action like a quick reboot.

Once the host is in Maintenance Mode you can delete the existing vmkernel and re-create a new one with the same settings. I would then reboot the host for good measure. Once the host is back up, exit Maintenance Mode and move on to the next host.
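
The same steps can be run from the ESXi shell. The outline below is only a sketch: it assumes vmk2 carries the VSAN traffic on a standard vSwitch port group named vSAN and the addressing is an example, so adjust it to your own configuration.

esxcli system maintenanceMode set -e true -m ensureObjectAccessibility
esxcli vsan network remove -i vmk2
esxcli network ip interface remove --interface-name=vmk2
esxcli network ip interface add --interface-name=vmk2 --portgroup-name=vSAN
esxcli network ip interface ipv4 set -i vmk2 -I 192.168.30.11 -N 255.255.255.0 -t static
esxcli vsan network ipv4 add -i vmk2
reboot

Once the host has rebooted and re-joined the cluster, take it out of Maintenance Mode with “esxcli system maintenanceMode set -e false”.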

Again, I stress that I have only seen this issue on unsupported hardware.

My VSAN Upgrade Process

  1. Upgrade vCenter
  2. Upgrade each ESXi server
  3. Upgrade the disk format version
  4. Run a VSAN health check!
  5. If you have a CLOMD issue, then for each ESXi host in the VSAN cluster:
    1. Place a host in Maintenance Mode
    2. Delete and re-create the vMotion vmkernel
    3. Delete and re-create the VSAN vmkernel
    4. Reboot the ESXi host
    5. Move on to the next host
  6. Run a VSAN health check again (a couple of quick per-host checks are shown below)
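
After each host comes back up, a couple of quick checks from the ESXi shell confirm that it has re-joined the cluster and that CLOMD is healthy; on vSAN 6.6 hosts you should also be able to run the health check from the command line with “esxcli vsan health cluster list”.

esxcli vsan cluster get
/etc/init.d/clomd status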

 


 


SuperMicro VSAN HCIBench

After spending a lot of time and money building up my home lab environment, the first thing I wanted to do was test it out. I wanted to know what sort of performance I would get from this little VSAN lab. In my haste to get my hardware I opted for an NVMe M.2 SSD that I expected to perform well, but it wasn’t ever going to break any records. It was available at the time and at the right price, so I bought it. Now that my lab is built, I really want to know how it actually performs and whether my eagerness paid off or if it’ll come back to bite me. Regardless of the hardware, this is a home lab configuration built on the SuperMicro E200-8D platform with an all flash VSAN. How good can it be?

Lab Specs

Here are my hardware details. I have 3x SuperMicro servers in a VSAN cluster, each running the same hardware, connected via a 10Gb network.

Product | Details
SuperMicro E200-8D | SYS-E200-8D
CPU | Intel XEON-D 1528 1.9GHz (6 core)
RAM | 64GB ECC UDIMM (4 x 16GB)
Capacity Disk | 1TB 2.5″ SanDisk X400
Cache Disk | 128GB NVMe M.2 SanDisk X400
Network | 10GBase-T with 9000 MTU
ESXi | ESXi 6.5.0 (Build 4887370)

Lab Test with HCIBench


VMware Flings publishes an awesome little appliance called HCIBench, which is a Hyper-Converged Infrastructure benchmarking tool. You can download it from the VMware Flings website. This is a very simple tool that makes performance testing of an HCI POC or home lab an extremely simple task. Run it in your home lab and let me know what you get; I’d like to get some comparisons with other home lab environments.

I won’t go into much detail around the install process because it is very simple and the Install Instructions are very clear and well written. The gist of the install goes like this:

  1. Download and import OVA.
  2. Enter the network configuration.
  3. Log into the website at http://ipaddress:8080.
  4. Username is “root” and the password was setup in the OVA deployment.
  5. Enter all of your vCenter details.
  6. Press the button to download Vdbench and then upload it. Due to licensing constraints, you must download Vdbench yourself.
  7. Tick the “Easy Run” for automated VSAN testing.
  8. Validate and then start the Test.

Once the test has started you will get a progress screen.

[Screenshot: HCIBench test progress screen]

The HCIBench tool will deploy the necessary VMs to your environment, configure them and wait for them to respond on the network. You will need to either provide a DHCP network or tick the box to get the HCIBench tool to allocate IPs to the worker VMs.

[Screenshot: HCIBench deploying the worker VMs]

It will take a while for the VMs to be deployed and then they will prepare the disks before the actual test starts. This takes about 10 minutes or more. Once everything is ready the test will start.

[Screenshot: HCIBench test running]

It will take a couple of hours to do a full test. While it was running I logged in to esxtop and took a few quick screenshots of the current disk activity.

[Screenshots: esxtop disk activity during the test]

Results

After a few hours of testing I had the results. I wasn’t really surprised at the figures; they seem to be exactly what I was expecting to get from the SanDisk X400 disks. According to the UserBenchmark website the expected 4K write throughput for the X400 is 63.7MB/s, and my throughput was 62.81MB/s. Now it’s time to buy a Samsung 960 EVO M.2 SSD and do the test again 🙂

Datastore | SuperMicro_VSAN
VMs | 6
IOPS | 16078.98
Throughput | 62.81 MB/s
Latency | 23.8660 ms
Read latency | 16.0298 ms
Write latency | 42.1727 ms


SuperMicro Build – BIOS and IPMI

Upgrading the BIOS and IPMI firmware is not strictly necessary, but I do highly recommend at least the IPMI update. Depending on the age of your system the BIOS may already be up to date; I purchased three servers at the same time from MITXPC and two of them already had the updated BIOS firmware. Upgrading the firmware can be a daunting task, especially if you have just bought a new server. The last thing you want to do is break your new server by upgrading the firmware incorrectly. Luckily the process is pretty straightforward, so I went ahead and did it just to be sure. If you aren’t comfortable taking responsibility for upgrading your firmware then just upgrade the IPMI software.

Upgrading the BIOS Firmware and IPMI

Download the SuperMicro BIOS update and IPMI update files from the website here:

https://www.supermicro.com/support/bios/firmware.aspx

https://www.supermicro.com/support/resources/results.aspx

The IPMI update will allow you to use the iKVM/HTML5 console rather than the older Java-based console. This is much easier to use and much more secure.

In order to upgrade the BIOS firmware you need to boot to a USB disk and apply the firmware update from there. I use Rufus to create the bootable USB disk. There is nothing specific to configure in Rufus other than using a FAT32 file system and selecting the FreeDOS bootable image.

[Screenshot: Rufus settings for the bootable USB disk]

Once this is finished, copy the relevant files to the USB drive. You can see below that I have also copied the SuperMicro IPMI tool. This is handy in case you need to reset the IPMI settings.

IPMICFG_1.26.0_20161227 – SuperMicro IPMI Configuration Tool

REDFISH_X10_346 – SuperMicro IPMI Update

X10SDVF6_A03 – SuperMicro BIOS Update

[Screenshot: files copied to the USB drive]

Insert the USB drive into your SuperMicro E200 and boot from the USB. You will need to select the non-UEFI bootable device that represents your USB drive.

[Screenshot: selecting the non-UEFI USB boot device]

When FreeDOS has booted, change the directory to the BIOS Update folder and then type:

FLASH.bat

[Screenshot: running FLASH.bat from FreeDOS]

The BIOS update will now run. It takes a while, so be patient. Once it is complete you will see a confirmation to power off the system. For good measure, SuperMicro also recommends resetting the CMOS.

The BIOS update is now complete. The next step is to update the IPMI.

The IPMI update can be done from the UI very easily. If you haven’t already configured the IPMI, log into the BIOS and on the IPMI tab configure your networking settings.

Type the IP address of the IPMI into a web browser and then enter the default username and password (both in capitals):

Username: ADMIN

Password: ADMIN

[Screenshot: IPMI login page]

Under the Maintenance tab, select Firmware Update.

[Screenshot: IPMI Maintenance tab, Firmware Update]

Press the button to “Enter Maintenance Mode” and then upload your IPMI Update file.

[Screenshot: uploading the IPMI update file]

Once the files are uploaded you will need to un-tick the preserve-settings options. This is required by SuperMicro as part of the IPMI upgrade; if you leave these options ticked the upgrade may fail.

[Screenshot: preserve-settings options un-ticked]

The upgrade may take a while and the system will reboot once it has finished. Once the upgrade is complete you can see the Firmware Revision is now updated.

[Screenshot: updated firmware revision]

After the IPMI upgrade is complete, you can launch an iKVM/HTML5 remote console session.

Your hardware is now ready to use. The next blog post will demonstrate the VMware Validated Design and the Automated Deployment Toolkit, which will build the entire SDDC stack in my home lab, FULLY AUTOMATED.

Home Lab Build Series

Introduction – Home Lab Build – From the Start

Part 1 – SuperMicro vs Intel NUC

Part 2 – SuperMicro Build – The Components

Part 3 – SuperMicro Build – The Installation

Part 4 – SuperMicro Build – BIOS and IPMI

Part 5 – Networking Configuration

Part 6 – VVD – Automated Deployment Toolkit – vSphere, VSAN, NSX and vDP

Part 7 – VVD – Automated Deployment Toolkit – vRA, vRO, vROps and Log Insight



SuperMicro Build – Installation

In the previous blog we talked about the hardware options for the SuperMicro and what you might get for your money. Now you have bought your server, you’ve bought all of your components and you’re ready to start installing everything. I am not going to do a full step-by-step guide; rather, I will give you all the information you need to do the install yourself, without writing an article that is too long and boring. If you have any questions about the installation then let me know.

It is extremely simple to open up the SuperMicro E200-8D; it is literally a few screws and you’re in. So take the lid off and let’s get started!

Installing the 3rd Fan

The SuperMicro E200 ships with 2 case fans. These are more than capable of keeping the unit cool; however, my systems are not in an air-conditioned room, so for added peace of mind on those hot days I opted for a 3rd fan.

There are 7x screws around the face plate that first need to be undone; then you can remove the front panel and access the fans. Don’t forget to remove the blanking plate from the front panel. There’s no point installing a 3rd fan if it can’t suck any air in! Unfortunately this is a very easy step to miss, and I’ve seen it done on multiple occasions.

Just bend the blanking plate backwards and forwards a couple of times and the tabs at the bottom will break.

The 3rd fan is very easy to fit: just a couple of screws in the bottom of the fan to secure it, then plug it in. The fan headers are clearly labelled FAN1, FAN2, FAN3 and FAN4 in the motherboard manual. I have highlighted the picture below to show where the plugs are. I’ve used FAN1, FAN2 and FAN3. I have also paid careful attention to route the fan cables out of the airflow path and tucked them out of the way.

Installing the RAM and NVMe

The RAM and NVMe are very simple to install, so no need for instructions. The NVMe has a single screw and the RAM just drops straight in. Here are some pictures, just because.

Installing the HDD

This single topic is what has prompted me to write this article. Honestly, this can cause major issues if not installed correctly. First I’m going to show you a few pictures of what NOT TO DO.

In case the above pictures weren’t abundantly clear to you, there are 2 major issues here that could break your system.

  1. The HDD is mounted the wrong way around and the cables are bent and rubbing against the fan housing.
  2. The excess HDD cables are wound up in a roll and jammed in front of the fans, stopping most of the air from circulating freely over the CPU and NVMe card. This is a great way to overheat your system real fast!

Ok, so now that you’ve seen what not to do, it seems like it should be pretty easy to do it right the first time.

Face the HDD with the cables towards the back of the unit. This will give you a lot of room to play with.

The power and SATA cables are a little more difficult to find space for, but try to place them neatly and don’t obstruct the airflow to your critical components. I’ve seen a few different variations of the cables that are delivered with the SuperMicros, so depending on what type of cables you have, this could be much easier for you than it was for me.

I actually made my own cables by reducing the size of the cables that were delivered with my unit. If you can find a set of new power cables that are more suitable then please let me know as I would love to buy them. What you need is a Female Molex to Female SATA power cable that is at least 20cm long. This would remove the major bulk of useless cables from your system and leave it nice and open to promote the best air flow.

Here are some pictures of how I configured my system.

Here is a quick comparison of the bad and good installation. In the first picture you can see that the bundle of cables is all but stopping the air circulation. The second picture shows a wide open zone for air to flow freely.

Rack Mount Brackets

One of the reasons I chose the SuperMicro E200 for my home lab is its portability, so it might seem a bit weird to then fit it in a rack. There are a couple of reasons for this, but mostly it is because I can separate a cool zone and a hot zone within my rack and provide better cooling efficiency for my hardware. If it takes me 10 minutes to remove the rack mount brackets, that is a small price to pay for the improved cooling, better security, better protection, ease of use and better looks.

The rack mount brackets are secured to the side of the unit and when fitted to a rack they provide a physically separate zone that forces cold air to enter in the front and hot air to be vented out the back. The rack mount brackets also provide a secure mounting tray for the power supply.

There are screw holes in the side of the SuperMicro E200 and the rack mount brackets just bolt straight up. It is extremely simple to do. Here are a few pictures to illustrate the installation.

Server Rack

Buying the right server rack can be really hard. At first I purchased a small network cabinet but the length was a problem. My 10Gb switch was too long to fit in this short cabinet, so I had to start looking at other options. A colleague of mine had an old full size server rack that he didn’t need so we swapped our cabinets over. I couldn’t be happier with that decision. Yes the 42RU server rack is large and I don’t need all of the capacity but it has provided me with a unique opportunity to create a “cool zone” within the server rack that feeds cold air into my servers and lets hot air vent out the back. Considering my lab environment is locked up in my garage with no air-conditioning and not a lot of air flow, a full size rack with a controlled cold zone keeps my hardware humming along.

Along with my SuperMicro E200 servers, I have a full size tower server that serves as a NAS, a full length 10Gb switch and an old cheap 2RU server that might come in handy one day.

You can see from the picture below that I have sealed the door with some plastic sheets and in front of the servers there is a 30cm air gap that serves as the cold zone.

[Photo: server rack with the sealed cold zone]

If you look closely at the above picture, you will see that at the top of the rack there is a duct running into the cold zone. This duct feeds cold air to a 150mm hydroponics fan, which pushes approximately 200cfm of air through a water-cooled radiator and into the cold zone. The whole setup is electronically controlled via a thermostat placed inside the rack, and the system is engaged when the temperature rises past 30 degrees Celsius. This ensures that on those hot days in the garage my lab stays cool.


SuperMicro Build – The Components

The SuperMicro E200-8D and E300-8D are excellent options for a home lab, especially because of their small size, low power consumption and enterprise ready hardware. If you haven’t already read my first blog post, you can find my SuperMicro vs Intel NUC post here.

So you’ve bought a nice shiny new SuperMicro E200-8D and now you’re ready to start building your home lab, right? Not quite. These units don’t generally come as plug-and-play units; there is some assembly required. In my case this includes the RAM, NVMe M.2 SSD, 2.5” SATA SSD, an additional case fan and rack mount brackets. But it doesn’t stop there! Before we start to build our home lab, we need to update the BIOS firmware and the IPMI software, which enables the use of an HTML5 console session instead of the old Java console. I will cover the steps in more detail over the next few blog posts: first the hardware selection, then the install guide and finally the BIOS and IPMI updates.

So let’s get started.

Bill of Materials

I purchased all of my hardware from Eric Yui at MITXPC. The prices and available hardware may vary, so if you’re interested you should check the MITXPC website for current stock and pricing. Don’t forget to use William Lam’s virtuallyGhetto discount! In case you didn’t know, William Lam has secured a 2% discount from MITXPC for the community. You can find all of the details here.

I am by no means recommending that you buy the same hardware that I have; you should buy the hardware that suits your requirements and is within your price range. I will outline the hardware options with a good, better and best option and you can make your own choices. Please add to the comments if you have any relevant experience with different products that you prefer.

First, here is what I bought.

Product | Part Number | Price
SuperMicro E200-8D | SYS-E200-8D | $799.99
64GB ECC UDIMM RAM (4 x 16GB) | TBA | $329.95
1TB 2.5” SSD | SanDisk X400 | $299.95
128GB NVMe M.2 SSD | Plextor PX-128S2G | $59.99
1x Additional Case Fan | FAN-0065L4 | $9.95
Rack Mount Brackets | SMC-MCP-290-10110-0B | $44.95
Subtotal | | $1544.74
virtuallyGhetto 2% Discount | VIRTUALLYGHETTO2OFF | -$30.89
TOTAL | | US $1513.85

RAM

[Photo: RAM modules]

 | ECC | Capacity | Speed | Price
Good | Non-ECC | 64GB | 2133MHz | $300
Better | ECC UDIMM | 64GB | 2133MHz | $400
Best | ECC RDIMM | 128GB | 2400MHz | $1,000

This is a pretty simple decision: what RAM do you fit to your SuperMicro E200-8D? Even a quick look at the table above makes the choice pretty clear. In my opinion the only question here is what capacity of RAM you need, 64GB or 128GB. That’s really about the hardest thing you’ll have to consider.

Regarding ECC or non-ECC, the price doesn’t change much and the SuperMicro E200 is restricted to 64GB of non-ECC RAM. I can’t imagine why you would need ECC RDIMM (Registered) RAM in your home lab; I have it listed as the “best” option, but that is purely on specs. My honest opinion is that the best option for your SuperMicro home lab is the ECC UDIMM (Unregistered) RAM. For the price it’s a good buy and you aren’t restricted to 64GB, which means you can have up to 128GB of RAM capacity if you so desire.

This leaves one major decision: what capacity of RAM do you buy? I opted for 64GB of ECC UDIMM RAM, which cost me $330. Unfortunately, in recent months the price of RAM has increased significantly and it is now approximately $400. I have covered this topic fairly heavily in my previous blog post – SuperMicro vs Intel NUC.

I’ll make the decision as simple as I can for you. How many VMs are you planning on running, and how much RAM vs CPU do they require? The SuperMicro E200-8D has 11.4GHz of CPU processing power (1.9GHz x 6 cores). Divide the amount of RAM you think you’ll need by 11.4GHz and that will give you the approximate RAM-to-CPU ratio. If this ratio fits what you need in your environment, then buy that amount of RAM.

128GB RAM / 11.4GHz = 11.2GB RAM per 1GHz of CPU

64GB RAM / 11.4GHz = 5.6GB of RAM per 1GHz of CPU

There are more considerations to factor in to the above calculations that I have covered in my previous post (like VSAN RAM usage), so have a read through that and make a decision on your capacity. As I said, I opted for 64GB RAM (4x16GB) ECC UDIMM.

NVMe M.2

[Photo: NVMe M.2 SSD]

 | Sequential Read (MB/s) | Sequential Write (MB/s) | 4K Random Read (IOPS) | 4K Random Write (IOPS) | Price (approx.)
Good | 500 | 300 | 90,000 | 50,000 | $80
Better | 1,500 | 600 | 150,000 | 80,000 | $150
Best | 3,000 | 2,000 | 300,000 | 100,000 | $300+

The SuperMicro E200-8D contains an NVMe M.2 slot on the motherboard that accepts an 80mm PCI-E x4 SSD. When selecting an NVMe M.2 SSD there are a few things you should consider: performance, size, cost per GB and the bus interface.

Because I am building a VSAN environment, my NVMe card will be used as the caching tier in my VSAN storage, so I don’t need a large-capacity card. Duncan Epping has detailed the flash cache calculation for VSAN here. I am running a single 1TB SSD in each ESXi host, so using the 10% rule (roughly 100GB of cache for 1TB of capacity) my 128GB NVMe SSD is actually oversized; however, they don’t generally come much smaller and the price was great, so I grabbed it.

If I were to do it again, I would probably invest in a higher performance NVMe M.2 SSD. The SuperMicro supports a PCI-E 3.0 x4 interface which can provide higher performance than a SATA 3 interface and this can make a significant difference to the read/write caching performance in your VSAN environment. I went with the “good” option and I should have probably used the “better” option. What you really need to ask yourself is how much performance do you really need and how deep are your pockets?

If you do opt for a really fast NVMe SSD then make sure you also install the 3rd fan. These small and powerful SSDs can generate a lot of heat and the 3rd fan blows air straight over the NVMe SSD which will help to ensure longevity of your hardware.
My preference would be to buy the Samsung 960 EVO M.2 PCI-E, which can be found on Amazon starting from US$130. The 960 EVO in a 250GB size provides 3,000MB/s sequential read, 1,900MB/s sequential write and up to 300,000 IOPS. You can find more information here.

2.5” HDD

[Photo: SanDisk X400 SSD]

 | HDD/SSD | Capacity | Sequential Read (MB/s) | Sequential Write (MB/s) | Price (approx.)
Good | 7200 RPM HDD | 2TB+ | 200 | 150 | $100
Better | SSD | 1TB | 500 | 300 | $300
Best | SSD | 2TB | 550 | 400 | $800

There is a huge range of 2.5” SATA drives that are more than suitable for the SuperMicro E200. The major concerns here are capacity and price. If you are running VSAN like I am, then even a 7200rpm HDD is a really good option due to its high capacity and cheaper price. Performance is still a concern, but VSAN uses a fast NVMe M.2 SSD for read and write caching to provide better performance. The capacity disk will still need reasonably good read performance because not all reads will be served from the high-performance NVMe cache. So performance still matters, but with only one disk slot available in the E200-8D you need to use this space wisely and get the most out of your capacity disk.

Because I am running VSAN and I wanted to enable the compression and de-duplication capabilities, I am restricted to an all flash VSAN with an SSD as the capacity disk. If you were running a hybrid VSAN then you could pair a high-capacity 7200rpm HDD with a high-performance NVMe card and end up with an excellent disk setup for your home lab.

In my situation, I attempted to get the largest SSD capacity that was within my budget. I ended up purchasing the SanDisk X400 1TB SSD for US$300. This gives me 500MB/s sequential read and 350MB/s sequential write. The Samsung 850 EVO 1TB was a close contender, but for an extra US$100 you only get slightly faster sequential write speed, which isn’t important for the VSAN capacity tier anyway. The SanDisk X400 provided me with 1TB of capacity and more than enough performance for the capacity disk tier.

When comparing the NVMe and SSD disks, I have found that this UserBenchmark website has been extremely useful to compare various brands and their performance.

Scalability

Now that you have considered the above RAM, NVMe and HDD requirements, the one final thing I would ask you to consider is whether you will scale out or scale up when you require additional resources.

What this essentially means is that once you have consumed all of your resources, will you buy additional servers (with the same resources) or will you replace the components within your server with higher-spec items? In my opinion the main constraint is CPU processing power (11.4GHz), so the best option is to lean towards scaling out.
Why is this important? Well, cost. If you have spec’d your servers with high-performance items that are lower in capacity, then you will probably need to buy additional servers sooner rather than later, which could leave you out of pocket a lot of money. The “best” option is really above and beyond, but it shows the capability of the SuperMicro E200 as a high-performance and high-capacity platform. For a home lab I will always stick with the “better” options, as I feel that this provides great performance, more than enough capacity and is very cost efficient.

Overall Component Price

 

 | RAM | NVMe | HDD | Price
Good | 64GB Non-ECC | 500 MB/s | HDD, 2TB | $480
Better | 64GB ECC UDIMM | 1,500 MB/s | SSD, 1TB | $850
Best | 128GB ECC RDIMM | 3,000 MB/s | SSD, 2TB | $2100

I hope that you can now understand the additional hardware components that need to be purchased with the SuperMicro and can make an informed decision as to what your options are. I would also hope that you have a realistic performance expectation based on the Good, Better and Best components. If you refer to the above table you can clearly see that there is a massive step up from the Better to the Best options. I’d like to say that you get what you pay for, but in this scenario I don’t think anyone requires the performance, capacity or availability that comes with the Best options. I personally sit somewhere between the Good and Better options, but if I were to do it again I would factor in spending $800 on the SuperMicro and another $800 on the additional components. With a little perspective, the SuperMicro looks quite cheap when you are willing to spend just as much on the components as you are on the SuperMicro E200 itself. This is where the real cost (and performance) is: the components.

Continue on to Part 3 to follow along with the Installation and common mistakes people make.


Home Lab Build – From the Start

Most of my colleagues and just about anyone in the IT industry have a home lab. Is there a better way to continue learning your chosen technology? Most of us learn from experience, and what better way to gain experience than by building your own enterprise environment at home? I have set out to rebuild my old home lab environment and I will be detailing the configuration throughout the process so that anyone can build something similar at home. I have chosen the hardware platform (SuperMicro E200-8D) and will be detailing the process to build out an entire VMware SDDC lab environment.

There are multiple approaches to building your lab at home. Not many people have the capability or money to run enterprise hardware in their home lab; there are sacrifices to be made, and most often these decisions come down to noise, space, power consumption and cost.

If you don’t have a home lab already, build one, break it, fix it, maintain it and learn from your experience!


SuperMicro vs Intel NUC

A couple of weeks ago I was talking to William Lam (http://www.virtuallyghetto.com/) and Alan Renouf (http://www.virtu-al.net/) about their exciting USB to SDDC demonstration, in which they were using an Intel NUC to deploy a VMware SDDC environment to a single node using VSAN. I offered them the opportunity to test the same capability on one of my SuperMicro E200-8D servers and they took me up on it. Since then I have been approached by a number of people asking why I chose the SuperMicro E200 for my home lab over the Intel NUC. I’ve never written a blog before, but I thought this might be a good way to “cut out the middle man”, so to speak. So here goes: my reasons for choosing the SuperMicro over the Intel NUC.

My previous home labs have generally been made up of used enterprise servers that can be picked up cheaply. These used servers are loud, power hungry and heavy. My goal was to firstly consume less power and secondly make my lab somewhat portable. These requirements appear to be popular amongst the community at the moment and there are a lot of stories about people using Intel NUCs to achieve these outcomes. I started to look around and it was fairly obvious that there were two stand-out options: the Intel NUC and the SuperMicro E200. I was left with a decision to be made and had to ask myself some additional questions about what I really wanted in my home lab. I came up with the following requirements.

  1. Minimal power consumption.
  2. Small and lightweight.
  3. Capable of a good consolidation ratio of VMs to Host.
  4. Capable of using all flash VSAN (albeit in an unsupported configuration).
  5. Enough scalability to expand the lab in the future.
  6. Good availability. These need to be up when I am doing demonstrations to customers.
  7. Ability to set up an enterprise-like environment for comparisons with customer environments.

The next step was to compare my options. The following table takes information from the respective vendor sites and addresses my specific requirements plus a couple of additional considerations. This table made my decision easy. It became quite obvious to me that the SuperMicro was the superior option for my home lab; in fact, the SuperMicro is an enterprise-ready solution.

Feature | SuperMicro E200-8D | Intel NUC 7th Gen
ESXi 6.5 Compatible | Native install works | Requires NIC drivers
CPU Type | XEON D-1528 | Intel i7-7567U
CPU Capacity | 6 cores / 12 threads, 1.9 GHz – 2.2 GHz | Dual core, 3.5 GHz – 4.0 GHz
RAM Type | 4x DDR4 | 2x DDR4 SODIMM
RAM Capacity | 128GB ECC RDIMM or 64GB Non-ECC UDIMM | 32GB Non-ECC
Intel Optane Ready | Yes | Yes
HDD Capacity | 1x 2.5” | 1x 2.5”
NVMe / M.2 SATA | YES | YES
SATADOM | YES | NO
Micro SDXC | NO | YES
1Gbe Networking | 2x 1Gbe | 1x 1Gbe
10Gbe Networking | 2x 10Gbe | 1x Thunderbolt 40Gbps
Wireless | NO | 802.11ac
IPMI | YES | NO
Power Consumption | 60W | 64W
Rack Mounting | YES | NO
SR-IOV Support | YES | NO
Video Port | VGA only | HDMI 4k
Noise Comparison | 2x fans | Fan-less
USB Capacity | 2 ports | 4 ports
Price Comparison | US$799 | US$630

Hopefully the above table has also helped you with your decision. The reason I opted for the SuperMicro E200 is that although it does cost a little bit more, it is an enterprise-ready solution that accepts ECC memory, uses a XEON CPU, has a larger RAM capacity and a larger CPU capacity.

To provide more information, here are the more detailed comparisons between the SuperMicro E200 and the Intel NUC.

ESXi 6.5 Compatible

Both the NUC and the SuperMicro require additional drivers and configuration on top of the native ESXi installation in order to work properly, so this point is more for information than for defining the obvious choice. The 1Gbe NICs on the SuperMicro work with the native ESXi drivers and will work out of the box; you need to install additional drivers to get the 10Gbe NICs working. The 10Gbe drivers are supported with ESXi 6.0 and can be found on the VMware Downloads page here: https://my.vmware.com/web/vmware/details?downloadGroup=DT-ESXI60-INTEL-IXGBE-451&productId=491

The NUC requires additional drivers in order to get the 1Gbe NIC to work. This means that you will need to either build a custom image or install the drivers locally after ESXi is installed. William Lam has detailed the options and procedures here: http://www.virtuallyghetto.com/2017/02/update-on-intel-nuc-7th-gen-kaby-lake-esxi-6-x.html

While you are creating your custom ESXi image for your NUC or SuperMicro, I would recommend you remove the native vmw-ahci driver VIB from your image. This will force your storage controller to use the sata-ahci driver instead. ESXi 6.5 contains many new drivers; however, the standard image still contains both versions, and VMware doesn’t choose to default to one of them because they may not be 100% feature-comparable. In this case, if you review the storage controller support on the VMware compatibility list, it clearly states that it is supported with the sata-ahci driver. The native vmw-ahci driver does not perform well and you will see a massive performance improvement by using the sata-ahci driver. Anthony Spiteri has done an excellent job detailing the issue and resolution here: http://anthonyspiteri.net/homelab-supermicro-5020d-tnt4-storage-driver-performance-issues-and-fix/
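
If you have already installed ESXi and don’t want to rebuild the image, the same result should be achievable on the host itself by removing the VIB directly; put the host into maintenance mode first and reboot afterwards. The second command simply confirms which ahci driver VIBs remain.

esxcli software vib remove -n vmw-ahci
esxcli software vib list | grep ahci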

Networking

The SuperMicro has a significantly better networking capability than the Intel NUC. Looking at the Intel NUC you could probably use Thunderbolt to Ethernet adapters or USB to Ethernet adapters and build yourself a redundant networking capability. At the end of the day, the SuperMicro has 2x 10Gbe NICs and 2x 1Gbe NICs. Because I want to run an all flash VSAN configuration, I want to use the 10Gbe networking capability to optimise my VSAN performance.

During my 10Gbe vs 1Gbe networking considerations, I also considered the SuperMicro E300-8D due to its 10Gb SFP configuration (and expansion PCIe slot). It is very hard to find a 10GBase-T network switch at a reasonable price, and I ended up spending the most money on my 10GBase-T 48-port Dell switch. In hindsight, the SuperMicro E300-8D could have been a viable option because I would have been able to run a VSAN-supported storage controller, and a 10Gb SFP switch is much easier to find at a reasonable price. Of course there is no comparison with the Intel NUC because it doesn’t have 10Gb NICs, let alone multiple 10Gb NICs. I eventually decided that a 10Gb networking capability would not only provide me with the performance but also the scalability to not need replacing in a couple of years’ time.

Storage

The Intel NUC and SuperMicro both have similar storage capabilities. If you want to run VSAN then you will need to buy an NVMe card for the caching tier and a 2.5″ HDD (or SSD) for the capacity tier. The SuperMicro has more SATA ports on the motherboard but no space to mount any additional drives, so there is not much point in even considering them.

The one thing that you can seriously consider here is to look at the SuperMicro E300-8D. The E300 has a smaller CPU (XEON D-1518 4 core 2.2GHz) but it is larger in size due to a PCIe x8 slot. This would be a great place to use a VSAN supported storage controller!

IPMI

I love that the SuperMicro has a built in IPMI port. This allows me to view a console screen or mount an ISO over the network. To put it simply, I don’t need to go out to my garage to manage my lab.

Noise

Yes, this is where the Intel NUC wins. The NUC doesn’t have any fans and therefore doesn’t make any noise. You could put these inside your house and you wouldn’t know they’re there. In comparison, the SuperMicro could be considered quite loud. This wasn’t an issue for me because my lab is in my garage; plus, once you turn on my 10Gb 48-port switch, you can disregard any noise that the SuperMicro might be making. Did I really want to sacrifice my cooling capability and running temperature in order to reduce the noise? No, I want fans pushing as much air through my lab as possible to keep it cool, and a bit of noise is worth it. In fact, the SuperMicro comes with 2x 40mm fans and a spare slot for a 3rd fan, which I promptly populated.

Take a look at Paul Braren’s blog at TinkerTry where he analyses the noise from the SuperMicro servers – https://tinkertry.com/supermicro-superserver-sys-e200-8d-and-sys-e300-are-here

Power Consumption

One of my most critical requirements was lower power consumption. I have had some pretty high electricity bills in the past while running large rack mount servers. I haven’t measured the actual power consumption of the Intel NUC and the SuperMicro; however, I would be very confident that the Intel NUC consumes less power. Both units are very low in power consumption compared to a large rack mount server, so they both meet my requirement.

Rack Mount

This was a big bonus. The SuperMicro E200-8D has rack mount brackets. Not only does this make my lab neat and tidy, it’s also easily expandable and I can build a hot/cold zone within my rack. Where I live it can get very hot in summer (40 degrees Celsius, or about 104 degrees Fahrenheit), so keeping my lab cool is a must. By using rack mount panels I have been able to separate the front fan intake on the SuperMicros from the rear hot outlets. I can then duct cold air into the front of the rack and keep my lab operating temperature at a respectable level. If I were to use the Intel NUCs then I would have no way of keeping them cool during summer.

The below diagrams show the rack mount configuration and part numbers for both the E200-8D (MCP-290-10110-0B) and the E300-8D (MCP-290-30002-0B). Although I could not find these listed for sale anywhere, Eric at MITXPC was able to source the rack mount brackets for me.

Video Port

This might seem simple; however, do you have an HDMI-capable monitor in your home lab? I don’t. I’m using a fairly old monitor with VGA and DVI ports. The Intel NUC may be 4K capable and offer an HDMI port, which would be great for a media PC, but why would you need this in your home lab? If the Intel NUC also had a VGA port then it might be comparable, but it only offers an HDMI port. The SuperMicro VGA port also comes in handy when you turn up at a customer site and they don’t have an HDMI port.

USB Capacity

This is a downside for the SuperMicro as it only has 2x USB ports. Because I am running VSAN, neither my internal NVMe nor my SSD can be used to boot ESXi, so I use a small USB drive which runs ESXi. While installing ESXi to the SuperMicros I found myself short of USB ports: one is required for the bootable USB drive that ESXi installs to, another for the ESXi install media, and a third for a USB keyboard to click through the install. 3x USB ports would have been nice, but I mounted the ESXi image over the IPMI connection and clicked through the install process from the comfort of my lounge room, using the IPMI console screen.

This is where the Intel NUC does provide a Micro SDXC slot, which you could very well utilise as the ESXi install location. The NUC also has 4x USB ports.

SATADOM

The SuperMicro SATA DOM (Disk on Module) is a small SATA3 (6Gb/s) flash memory module designed to be inserted into a SATA connector, providing high-performance solid state storage that simulates a hard disk drive (HDD). The SATADOM could be used as a boot disk for the ESXi installation rather than a bootable USB, and is available in 128GB, 64GB, 32GB and 16GB sizes.

[Image: SuperMicro SATA DOM]

CPU and RAM

The biggest considerations here are performance, capacity and availability, in all of which the SuperMicro exceeds the Intel NUC by leaps and bounds. This makes the cost of the SuperMicro look cheap when compared to the Intel NUC based on the numbers, as detailed below. At a high level, the SuperMicro can use either ECC or non-ECC RAM, it uses full-sized RAM slots rather than SODIMM, the RAM capacity is 4x larger at 128GB, and the CPU capacity is nearly double. This makes the SuperMicro a lot cheaper than the Intel NUC once you start to consider purchasing more than a single unit.

The RAM capacity is a massive point in favour of the SuperMicros. This is incredibly important for the consolidation ratio of VMs to hosts, especially when running all flash VSAN. You must remember to take into consideration that VSAN will consume a large chunk of your RAM. For ease of calculation I will use 10GB as my VSAN memory consumption, although the actual number was 10.5GB. Details of how to calculate your VSAN memory requirements can be found here: https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2113954

Let’s assume you opt for the Intel NUC with a maximum of 32GB RAM. You instantly lose 10GB to VSAN, so you’re left with 22GB RAM to use in your environment. Each Intel NUC will provide you with 7GHz of CPU processing power and 22GB of RAM. This leaves you with 3.14GB RAM for each 1GHz of CPU used. From my previous analysis of my home lab, my VMs average between 200MHz and 500MHz CPU usage. I will use 500MHz (0.5GHz) for my calculations as a conservative estimate of my consolidation ratio.

7GHz / 0.5GHz = 14 VMs per NUC

I have approximately 65 VMs in my home lab, which would mean I require 5x Intel NUCs just to meet my current capacity requirements at a consolidation ratio of 14 VMs per host. What’s worse is that I would be highly unlikely to actually get 14 VMs on each Intel NUC because I only have 22GB RAM available to use. Each of the 14 VMs would have approximately 1.5GB RAM allocated.

22GB RAM / 14 VMs = 1.57GB RAM per VM

Based on the above calculations the RAM is a massive constraint on the use of Intel NUCs in a home lab environment. Realistically, based on RAM consumption I would need 9 Intel NUCs in my lab. I have used an estimated 3GB RAM required per VM for the below calculations.

(3GB RAM per VM x 65 VMs) / 22GB RAM per NUC = 8.86 (9) NUCs

Each SuperMicro E200-8D has 11.4GHz of CPU processing power and 128GB RAM (less 10GB for VSAN). Applying the same calculations as above:

11.4GHz / 0.5GHz = 22.8 VMs per SuperMicro

118GB RAM / 22.8 VMs = 5.18GB RAM per VM per SuperMicro

As you can see from the above calculation, with 128GB RAM in the SuperMicro the CPU becomes the constraining factor, leaving 5.18GB of RAM for each VM using 0.5GHz of CPU. This is fairly typical of a VMware environment, where RAM is more heavily utilised than CPU, so you are better off ensuring you have more RAM than CPU.

Let’s work out what my consolidation ratio will be based on my actual RAM requirements of 3GB RAM per VM.

(3GB RAM per VM x 65 VMs) / 118GB RAM per SuperMicro = 1.65 (2) SuperMicros

The calculations make it very obvious from anyone’s perspective: to suit my needs I need 2x SuperMicros or 9x Intel NUCs. I could have stuck with the 2x SuperMicros and set up a 2-node VSAN configuration utilising the virtual witness appliance as the 3rd node; however, I wanted to make this enterprise-ready, so I opted to meet the minimum of 3 nodes for VSAN.

If you factor in the cost of 128GB of ECC RAM then it gets more expensive. Because I was going to buy 3x SuperMicro servers anyway (for VSAN), why not be more price-conscious and use 64GB non-ECC RAM per server? This meant that my lab was more realistically sized, with 3 servers at 64GB RAM each = 192GB RAM, and the cost was $1499 per SuperMicro E200-8D (including disks).

Price

If it isn’t already obvious from all of the above why I opted to build my lab with SuperMicro E200 servers, do the math on the cost:

9x Intel NUCs x $630 each = $5,670 USD

3x SuperMicros x $799 each = $2,397 USD

The costs I have used above are estimates based on a quick search; you may find cheaper prices if you look harder. I haven’t factored in the cost of the RAM, SSD or NVMe, as these would be similar additional costs regardless of choosing the NUC or SuperMicro. There are other considerations that may affect the cost comparison of each unit, just to list a couple:

  • The supported SSD and NVMe cards could warrant a difference in price.
  • The RAM costs could vary between choosing to use SODIMM or ECC RAM.
  • The rack mount brackets on the SuperMicro servers are an additional cost.
  • The SuperMicros could likely consume more power during daily use.

If you have read all the way to the end, you are obviously just as interested in getting the “right” configuration for your home lab as I was. Seriously though, if at the end of this article you’re still leaning towards the NUC then just do it. You’re not going to be disappointed.

I purchased my SuperMicro servers from MITXPC as they specialise in micro systems. I found them via Amazon; however, the prices on Amazon are significantly more expensive than if you go direct. If you’re interested, ask to speak with Eric Yui, as he has been very helpful to me and will look after you.

