Home Lab Build – From the Start

Most of my colleagues, and just about anyone in the IT industry, have a home lab. Is there a better way to continue learning your chosen technology? Most of us learn from experience, and what better way to gain experience than by building your own enterprise environment at home. I have set out to rebuild my old home lab environment, and I will be detailing the configuration throughout the process so that anyone can build something similar at home. I have chosen the hardware platform (SuperMicro E200-8D) and will be detailing the process to build out an entire VMware SDDC lab environment.

There are multiple approaches to building your lab at home. Not many people have the capability or money to run enterprise hardware in their home lab, so there are sacrifices to be made, and most often these decisions come down to noise, space, power consumption and cost.

If you don’t have a home lab already, build one, break it, fix it, maintain it and learn from your experience!

Home Lab Build Series

Introduction – Home Lab Build – From the Start

Part 1 – SuperMicro vs Intel NUC

Part 2 – SuperMicro Build – The Components

Part 3 – SuperMicro Build – The Installation

Part 4 – SuperMicro Build – BIOS and IPMI

Part 5 – Networking Configuration

Part 6 – VVD – Automated Deployment Toolkit – vSphere, VSAN, NSX and vDP

Part 7 – VVD – Automated Deployment Toolkit – vRA, vRO, vROps and Log Insight

Next – SuperMicro vs Intel NUC

What experience do you need to apply for a role with VMware Professional Services?

I wrote this article on LinkedIn a while ago and wanted to share it here as well.

VMware Professional Services in Australia are always looking for suitable candidates to join their team of consultants. Have you ever read those job descriptions and wondered if you have what it takes to work for VMware PSO?

Do you have years of vCenter and vSphere experience? This experience, although valuable, is only a very small part of a consultant’s role. Clients engage VMware consultants to design their vSphere environments as a solid foundation for the deployment of our extensive suite of SDDC products. It is in the SDDC space where VMware consultants demonstrate their value.

This narrows down the field of candidates significantly, doesn’t it? Have you deployed VMware’s entire suite of SDDC products? Don’t worry, not many people outside of VMware have. VMware’s increasing range of products and capabilities means that your unique experience may just put you ahead of the rest. Here are some simplified criteria that may assist you in determining if you should apply.

Let’s start with the basics. You should meet all of these criteria:

  • A passion for VMware. Whether you run your own blog, read all the books, build your own home lab or just want to learn, you should love what you do.
  • Consulting capabilities are critical. You will be expected to work directly with clients that range from senior managers to junior techs.
  • Communicate clearly and confidently, be well presented, and have good documentation experience.
  • Fit in with the team. Your job is not always about who you work for, but who you work with. It is the people that work with you every day that make your job enjoyable. Share your knowledge, contribute to the team’s success, build relationships, and enjoy yourself.

Here is the difficult part. These are the technical skills and experience that VMware consultants really need.

You may only fit a few of these criteria, but that could be all you need. Not many people outside of VMware have experience with the following products. So how do you get shortlisted and score yourself an interview?

vRealize Automation (vRA)

This product is now considered our “core” knowledge. All consultants need to know vRA. Don’t worry though, new applicants can leverage their experience with PowerCLI, Orchestrator or vCloud Director to successfully apply for a role. Do your research, learn about vRA, Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and how vRA integrates with other VMware products.

Application Services

You may have never heard of it but its role in PaaS is crucial. Have you previously automated the deployment of applications on Windows or Linux? This will get your foot in the door. Application Services leverages PowerShell, CMD and Bash scripts to orchestrate the installation of various applications. If you can install applications from scripts, then we want to hear from you.

vRealize Orchestrator (vRO)

Often referred to as VMware’s best kept secret. If you use vCenter you already own it, but I bet you don’t use it! Why? A long time ago, vRO was difficult to use unless you were a JavaScript programmer. These days 90% of what you need is available out of the box, and only a minimal understanding of JavaScript is required. Deploy it, use it, learn it. vRO provides centralised integration with VMware products and extensibility to 3rd party products. If you already have great JavaScript capabilities, then put your hat in the ring now.

vRealize Operations Manager (vROps)

Management, monitoring and analysis of customers’ environments is always high on the list of required experience. If you have used vROps, great! Why not take it a step further and integrate custom dashboards with 3rd party products, talk about super metrics, or integrate vROps with vRO. If you can demonstrate your ability to use alerts to execute vRO workflows that automatically resolve health issues on client sites, then… where do I sign?

vRealize Code Stream (DevOps)

You have worked on infrastructure your entire career, but have you ever even had a conversation with a software developer? This is the situation for 99% of VMware engineers. If words like continuous integration, continuous delivery, Artifactory, Jenkins, Yum, Git or any other developer products and terms mean anything to you, then we may have a role for you.

Horizon EUC

End User Computing is a massive area that encompasses a large range of products. If you have experience with Horizon View, Horizon DaaS, App Volumes, User Environment Manager, AirWatch, or Workspace ONE then do yourself a favour and apply now. There is, however, a lack of VMware-specific experience in the community, so we have opened our doors to applicants with Citrix and Microsoft VDI consulting experience.


NSX

I have left the best for last. NSX is the most difficult to recruit for. There is very little NSX experience within the community, which means that any relevant networking experience is acceptable. Higher certification levels and extensive experience with physical network design and configuration will be highly regarded. Candidates with the flexibility to work on other VMware products (as above) will be put to the top of the list.


I hope that this will help you decide whether to apply for that next role that is advertised with VMware PSO. Long gone are the days when 5+ years of vSphere experience was necessary.

The next blogs

As this site is new and there is very little content, I would like to outline what I’m currently documenting and will soon post. In my role I tend to focus on EUC and architecture, so there will be a heavy focus on these areas. In addition to my Home Lab details, I will create categories for EUC, SDDC, Automation and general training resources for career development within the VMware space.

Building my home lab

This blog will outline the configuration and installation of my home lab environment: how I designed the lab, what my considerations were, and the setup details. I have built a custom ESXi ISO image with drivers and configurations already included for the SuperMicro E200 servers, which I will share with the community. As I build my home lab I am documenting the specific configurations for networking, security, automation, VDI, AirWatch, etc. These documents will all be posted soon.

The VMware Validated Design – Automated Deployment Tool

If you haven’t seen the VMware Validated Design (VVD), it is a semi-complete design that is pre-validated by VMware. You can find the documentation here: https://www.vmware.com/support/pubs/vmware-validated-design-pubs.html. In addition to the VVD documentation, VMware Professional Services and select partners have access to an in-house deployment tool that automates the entire SDDC deployment, including vCenter, VSAN, NSX, vRealize Automation, Log Insight, vRealize Operations Manager, Site Recovery Manager, vRealize Orchestrator and vRealize Business. The deployment tool can deploy the entire SDDC stack in half a day; however, the underlying configuration details required to get to this stage are quite complex. As I go through the process in my home lab, I will detail these complexities and capabilities for your interest.

VMware Verify

Do you use 2-Factor Authentication to access your environments? VMware have released a built-in 2-Factor authentication capability for vIDM called VMware Verify. Rather than typing in a code generated from your personal token in order to authenticate, VMware Verify uses push notifications to an app on your mobile device. This means that your 2-Factor auth is as simple as accepting the notification on your mobile. I will detail the configuration process to set up VMware Verify with vIDM.

User-Cert 2-Factor Authentication with vIDM and Horizon Access Points

One of my customers had a fairly unique use-case for their Horizon VDI external access requirements. The requirement was fairly simple: they wanted physical devices, provisioned by the business and supplied to external users, to be the only devices capable of connecting from the internet to their internal VDI environment. The business wanted to ensure that only their approved devices were able to connect to their classified environment. Working with a colleague, Anthony Urquhart, we assigned CA-signed certificates to the devices, and the Horizon Access Points accept these as a form of 2-Factor authentication without any notification or inconvenience to the end user. If a device without an approved certificate attempts to connect, the connection is declined before the user is even prompted for credentials.

Horizon Access Point Architecture and API

I am often asked by customers and colleagues to assist with deploying Horizon Access Points and how they should be connected. VMware recommend a 3-NIC configuration; however, this is often not suitable for customers running a DMZ. I will detail how to architect and deploy Horizon Access Points using various methods, including the API.

Horizon Access Point integration with vIDM, Horizon and AirWatch

Horizon Access Point 2.8 is fully capable of providing a single unified access point to broker external connections to Horizon, vIDM and AirWatch. When Access Point 2.9 was released, it was renamed the Unified Access Gateway to accommodate this new capability. I will document the deployment and configuration of Horizon Access Points to allow the consolidation of external access brokering to a single point of entry.

Horizon DaaS

Horizon DaaS, if you haven’t heard of it before, is a multi-tenant-capable VDI solution geared towards Service Providers. I have worked with a number of large VDI Service Providers and will document the capabilities, configuration, common pitfalls and any other considerations when implementing and using Horizon DaaS.

SuperMicro vs Intel NUC

A couple of weeks ago I was talking to William Lam (http://www.virtuallyghetto.com/) and Alan Renouf (http://www.virtu-al.net/) about their exciting USB to SDDC demonstration, in which they used an Intel NUC to deploy a VMware SDDC environment to a single node using VSAN. I offered them the opportunity to test the same capability with one of my SuperMicro E200-8D servers, and they took me up on it. Since then, a number of people have asked me why I chose the SuperMicro E200 for my home lab over the Intel NUC. I’ve never written a blog before, but I thought this might be a good way to “cut out the middle man”, so to speak. So here goes: my reasons for choosing the SuperMicro over the Intel NUC.

My previous home labs have generally been made up of used enterprise servers that can be picked up cheaply. These used servers are loud, power hungry and heavy. My goal was firstly to consume less power and secondly to make my lab somewhat portable. These requirements appear to be popular amongst the community at the moment, and there are a lot of stories about people using Intel NUCs to achieve these outcomes. I started to look around, and it was fairly obvious that there were two stand-out options: the Intel NUC and the SuperMicro E200. With a decision to be made, I had to ask myself some additional questions about what I really wanted in my home lab. I came up with the following requirements.

  1. Minimal power consumption.
  2. Small and lightweight.
  3. Capable of a good consolidation ratio of VMs to Host.
  4. Capable of using all flash VSAN (albeit in an unsupported configuration).
  5. Enough scalability to expand the lab in the future.
  6. Good availability. These need to be up when I am doing demonstrations to customers.
  7. Ability to set up an enterprise-like environment for comparisons with customer environments.

The next step was to compare my options. The following table takes information from the respective vendor sites and addresses my specific requirements, plus a couple of additional considerations. This table made my decision easy: it became quite obvious that the SuperMicro was the superior option for my home lab; in fact, the SuperMicro is an enterprise-ready solution.

| | SuperMicro E200-8D | Intel NUC 7th Gen |
| --- | --- | --- |
| ESXi 6.5 Compatible | Native install works | Requires NIC drivers |
| CPU Type | XEON D-1528 | Intel i7-7567U |
| CPU Capacity | 6 cores / 12 threads, 1.9–2.2 GHz | 2 cores, 3.5–4.0 GHz |
| RAM Type | 4x DDR4 (ECC or non-ECC) | 2x DDR4 SODIMM |
| RAM Capacity | 128GB | 32GB Non-ECC, Intel Optane ready |
| HDD Capacity | 1x 2.5", 1x M.2 NVMe | 1x 2.5", 1x M.2 NVMe, Micro SDXC |
| 1Gbe Networking | 2x 1Gbe | 1x 1Gbe |
| 10Gbe Networking | 2x 10Gbe | 1x Thunderbolt 40Gbps |
| IPMI | Yes | No |
| Power Consumption | Low | Lower |
| Rack Mounting | Brackets available | No |
| SR-IOV Support | Yes | No |
| Video Port | VGA only | HDMI only |
| Noise Comparison | 2x 40mm fans | Fanless |
| USB Capacity | 2 ports | 4 ports |
| Price Comparison | ~$799 USD | ~$630 USD |

Hopefully the above table has also helped you with your decision. The reason I opted for the SuperMicro E200 is that, although it does cost a little bit more, it is an enterprise-ready solution that accepts ECC memory, uses a XEON CPU, and has larger RAM and CPU capacities.

To provide more information, here are the more detailed comparisons between the SuperMicro E200 and the Intel NUC.

ESXi 6.5 Compatible

Both the NUC and the SuperMicro require additional drivers and configuration on top of the native ESXi installation in order to work properly, so this point is more for information than a deciding factor. The 1Gbe NICs on the SuperMicro work with the native ESXi drivers and will work out of the box, but you need to install additional drivers to get the 10Gbe NICs working. The 10Gbe drivers are supported with ESXi 6.0 and can be found on the VMware Downloads page here: https://my.vmware.com/web/vmware/details?downloadGroup=DT-ESXI60-INTEL-IXGBE-451&productId=491

The NUC requires additional drivers in order to get the 1Gbe NIC to work. This means that you will need to either build a custom image or install the drivers locally after ESXi is installed. William Lam has detailed the options and procedures here: http://www.virtuallyghetto.com/2017/02/update-on-intel-nuc-7th-gen-kaby-lake-esxi-6-x.html

While you are creating your custom ESXi image for your NUC or SuperMicro, I would recommend you remove the native vmw-ahci driver vib from your image. This will force your storage controller to use the newer sata-ahci drivers. ESXi 6.5 contains many new drivers; however, the standard image still contains both the native drivers and a newer version. VMware don’t choose to default to the newer version of the drivers because they may not be 100% feature-comparable. In this case, if you review the storage controller support on the VMware compatibility list, it clearly states that it is supported with the sata-ahci drivers. The native vmw-ahci drivers do not perform well, and you will see a massive performance improvement by using the new drivers. Anthony Spiteri has done an excellent job detailing the issue and resolution here: http://anthonyspiteri.net/homelab-supermicro-5020d-tnt4-storage-driver-performance-issues-and-fix/
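If your hosts are already installed and you don’t want to rebuild the image, a commonly used alternative is to disable the native AHCI module on each host so that ESXi falls back to the sata-ahci driver. A minimal sketch, assuming SSH access to the ESXi shell is enabled (a reboot is required for the change to take effect):

```shell
# Disable the native AHCI driver module; the host falls back to sata-ahci.
esxcli system module set --enabled=false --module=vmw_ahci

# Verify the module state before rebooting.
esxcli system module list | grep -i ahci

# The change takes effect after a reboot.
reboot
```

Disabling the module is less invasive than removing the vib, and is easy to reverse later with `--enabled=true`.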


Networking

The SuperMicro has significantly better networking capability than the Intel NUC. Looking at the Intel NUC, you could probably use Thunderbolt-to-Ethernet or USB-to-Ethernet adapters to build yourself some networking redundancy, but at the end of the day the SuperMicro has 2x 10Gbe NICs and 2x 1Gbe NICs. Because I want to run an all-flash VSAN configuration, I want to use the 10Gbe networking capability to optimise my VSAN performance.

During my 10Gbe vs 1Gbe networking considerations, I also looked at the SuperMicro E300-8D due to its 10Gb SFP configuration (and expansion PCIe slot). It is very hard to find a 10GBase-T network switch at a reasonable price, and I ended up spending the most money on my 48-port 10GBase-T Dell switch. In hindsight, the SuperMicro E300-8D could have been a viable option, because I would have been able to run a VSAN-supported storage controller, and a 10Gb SFP switch is much easier to find at a reasonable price. Of course there is no comparison with the Intel NUC, because it doesn’t have a 10Gb NIC, let alone multiple. I eventually decided that 10Gb networking would provide not only performance but also scalability, and would not need to be replaced in a couple of years’ time.


Storage

The Intel NUC and SuperMicro have similar storage capabilities. If you want to run VSAN, you will need to buy an NVMe card for the caching tier and a 2.5″ HDD (or SSD) for the capacity tier. The SuperMicro has more SATA ports on the motherboard, but no space to mount any additional drives, so there is not much point in considering them.

One thing you can seriously consider here is the SuperMicro E300-8D. The E300 has a smaller CPU (XEON D-1518, 4 cores at 2.2GHz) but is larger in size due to a PCIe x8 slot. This would be a great place to use a VSAN-supported storage controller!


IPMI

I love that the SuperMicro has a built-in IPMI port. This allows me to view a console screen or mount an ISO over the network. To put it simply, I don’t need to go out to my garage to manage my lab.


Noise

Yes, this is where the Intel NUC wins. The NUC doesn’t have any fans and therefore doesn’t make any noise. You could put these inside your house and you wouldn’t know they’re there. In comparison, the SuperMicro could be considered quite loud. This wasn’t an issue for me because my lab is in my garage, and once you turn on my 48-port 10Gb switch, you can disregard any noise the SuperMicro might be making. Did I really want to sacrifice cooling capability and running temperature in order to reduce the noise? No. I want fans pushing as much air through my lab as possible to keep it cool, and a bit of noise is worth it. In fact, the SuperMicro comes with 2x 40mm fans and a spare slot for a 3rd fan, which I promptly populated.

Take a look at Paul Braren’s blog at TinkerTry where he analyses the noise from the SuperMicro servers – https://tinkertry.com/supermicro-superserver-sys-e200-8d-and-sys-e300-are-here

Power Consumption

One of my most critical requirements was lower power consumption; I have had some pretty high electricity bills in the past while running large rack mount servers. I haven’t measured the actual power consumption of the Intel NUC or the SuperMicro; however, I would be very confident that the Intel NUC consumes less power. Both units are very low in power consumption compared to a large rack mount server, so they both meet my requirement.

Rack Mount

This was a big bonus. The SuperMicro E200-8D has rack mount brackets. Not only does this make my lab neat and tidy, it’s also easily expandable, and I can build a hot/cold zone within my rack. Where I live it can get very hot in summer (40 degrees Celsius, or 104 Fahrenheit), so keeping my lab cool is a must. By using rack mount panels I have been able to separate the front fan intakes on the SuperMicros from the rear hot outlets. I can then duct cold air into the front of the rack and keep my lab operating temperature at a respectable level. If I were to use the Intel NUCs, I would have no way of keeping them cool during summer.

The below diagrams show the rack mount configuration and part numbers for both the E200-8D (MCP-290-10110-0B) and the E300-8D (MCP-290-30002-0B). Although I could not find these listed for sale anywhere, Eric at MITXPC was able to source the rack mount brackets for me.

Video Port

This might seem simple, however do you have an HDMI-capable monitor in your home lab? I don’t. I’m using a fairly old monitor with VGA and DVI ports. The Intel NUC may be 4K capable and offer an HDMI port, which would be great for a media PC, but why would you need this in your home lab? If the Intel NUC also had a VGA port it might be comparable, but it only offers HDMI. The SuperMicro’s VGA port also comes in handy when you turn up at a customer site and they don’t have an HDMI-capable screen.

USB Capacity

This is a downside for the SuperMicro, as it only has 2x USB ports. Because I am running VSAN, neither my internal NVMe nor my SSD can be used to boot ESXi, so I use a small USB drive to run ESXi. While installing ESXi on the SuperMicros I found myself short of USB ports: one is required for the bootable USB drive that ESXi installs to, another for the ESXi install media, and a third for a USB keyboard to click through the install. 3x USB ports would have been nice, but instead I mounted the ESXi image over the IPMI connection and clicked through the install process from the comfort of my lounge room, using the IPMI console screen.

The Intel NUC, on the other hand, provides a Micro SDXC slot which you could very well utilise as the ESXi install location. The NUC also has 4x USB ports.


SATA DOM

The SuperMicro SATA DOM (Disk on Module) is a small SATA3 (6Gb/s) flash memory module designed to be inserted into a SATA connector, providing high-performance solid state storage that simulates a hard disk drive (HDD). The SATA DOM device can be used as a boot disk for the ESXi installation rather than a bootable USB drive, and is available in 16GB, 32GB, 64GB and 128GB sizes.



RAM and CPU Capacity

The biggest considerations here are performance, capacity and availability, in all of which the SuperMicro exceeds the Intel NUC by leaps and bounds. Based on the numbers detailed below, this makes the cost of the SuperMicro look cheap compared to the Intel NUC. At a high level, the SuperMicro can use either ECC or non-ECC RAM, it uses full-sized RAM slots rather than SODIMM, its RAM capacity is 4x larger at 128GB, and its CPU capacity is nearly double. This makes the SuperMicro a lot cheaper than the Intel NUC once you start to consider purchasing more than a single unit.

The RAM capacity is a massive point in favour of the SuperMicros. This is incredibly important for the consolidation ratio of VMs to hosts, especially when running all-flash VSAN. You must remember to take into consideration that VSAN will consume a large chunk of your RAM. For ease of calculation I will use 10GB as my VSAN memory consumption; the actual number was 10.5GB. Details of how to calculate your VSAN memory requirements can be found here: https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2113954

Let’s assume you opt for the Intel NUC with a maximum of 32GB RAM. You instantly lose 10GB to VSAN, so you’re left with 22GB RAM to use in your environment. Each Intel NUC will provide you with 7GHz of CPU processing power and 22GB of RAM, which leaves you with 3.14GB RAM for each 1GHz of CPU used. From my previous analysis of my home lab, my VMs average between 200MHz and 500MHz of CPU usage. I will use 500MHz (0.5GHz) for my calculations as a conservative estimate of my consolidation ratio.

7GHz / 0.5GHz = 14 VMs per NUC

I have approximately 65 VMs in my home lab and this would mean that I require 5x Intel NUCs just to meet my current capacity requirements with a consolidation ratio of 14 VMs per host. What’s worse is that I would be highly unlikely to actually get 14 VMs on each Intel NUC because I only have 22GB RAM available to use. Each of the 14 VMs would have approximately 1.5GB RAM allocated.

22GB RAM / 14 VMs = 1.57GB RAM per VM

Based on the above calculations, RAM is a massive constraint on the use of Intel NUCs in a home lab environment. Realistically, based on RAM consumption, I would need 9 Intel NUCs in my lab. I have used an estimate of 3GB RAM per VM for the below calculations.

(3GB RAM per VM x 65 VMs) / 22GB RAM per NUC = 8.86 (9) NUCs

Each SuperMicro E200-8D has 11.4GHz CPU processing power and 128GB RAM (less 10GB for VSAN). Applying the same calculations as above.

11.4GHz / 0.5GHz = 22.8 VMs per SuperMicro

118GB RAM / 22.8 VMs = 5.18GB RAM per VM per SuperMicro

As you can see from the above calculation, with 128GB RAM in the SuperMicro the CPU becomes the constraining factor, leaving 5.18GB of RAM for each VM using 0.5GHz of CPU. This is consistent with a typical VMware environment, where RAM is more heavily utilised than CPU, so you are better off ensuring you have more RAM than CPU.

Let’s work out what my consolidation ratio will be based on my actual RAM requirements of 3GB RAM per VM.

(3GB RAM per VM x 65 VMs) / 118GB RAM per SuperMicro = 1.65 (2) SuperMicros

The calculations make it very obvious from anyone’s perspective: to suit my needs I need 2x SuperMicros or 9x Intel NUCs. I could have stuck with the 2x SuperMicros and set up a 2-node VSAN configuration utilising the virtual witness appliance as the 3rd node; however, I want to make this enterprise-ready, so I opted to meet the minimum of 3 nodes for VSAN.
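The arithmetic above is easy to sanity-check. A small sketch, assuming the figures used in this post (65 VMs at 3GB RAM each, with 10GB of host RAM reserved for VSAN):

```shell
#!/bin/sh
# Re-run the consolidation maths: 65 VMs at ~3GB RAM each, with ~10GB of
# host RAM reserved for VSAN. ceil(x) is computed as int(x) + (x > int(x)).
awk 'BEGIN {
  vms = 65; ram_per_vm = 3; vsan = 10
  nuc_ram = 32 - vsan       # 22GB usable per Intel NUC
  smc_ram = 128 - vsan      # 118GB usable per SuperMicro E200-8D
  need = vms * ram_per_vm   # 195GB of RAM required in total

  n = need / nuc_ram; s = need / smc_ram
  printf "NUCs needed: %d\n", int(n) + (n > int(n))          # 9
  printf "SuperMicros needed: %d\n", int(s) + (s > int(s))   # 2
}'
```

Plugging in different per-VM RAM estimates or VSAN overheads is a quick way to re-size the lab for your own workload.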

If you factor in the cost of 128GB of ECC RAM, it gets more expensive. Because I was going to buy 3x SuperMicro servers anyway (for VSAN), why not be more price conscious and use 64GB of non-ECC RAM per server? This meant my lab was more realistically sized, with 3 servers at 64GB RAM each (192GB in total), and the cost was $1,499 per SuperMicro E200-8D (including disks).


Cost Comparison

If it isn’t already obvious from all of the above why I opted to build my lab with SuperMicro E200 servers, do the math on the cost.

9x Intel NUCs x $630 each = $5,670 USD

3x SuperMicros x $799 each = $2,397 USD

The costs I have used above are estimates based on a quick search; you may find cheaper prices if you look harder. I haven’t factored in the cost of the RAM, SSD or NVMe, as these would be similar additional costs regardless of choosing the NUC or the SuperMicro. There are other considerations that may affect the cost comparison of each unit, to list a few:

  • The supported SSD and NVMe cards could warrant a difference in price.
  • The RAM costs could vary between choosing to use SODIMM or ECC RAM.
  • The rack mount brackets on the SuperMicro servers are an additional cost.
  • The SuperMicros could likely consume more power during daily use.

If you have read all the way to the end, you are obviously just as interested in getting the “right” configuration for your home lab as I was. Seriously though, if at the end of this article you’re still leaning towards the NUC then just do it. You’re not going to be disappointed.

I purchased my SuperMicro servers from MITXPC as they specialise in micro systems. I found them via Amazon; however, the prices on Amazon are significantly more expensive than if you go direct. If you’re interested, ask to speak with Eric Yui, as he has been very helpful to me and will look after you.

