SuperMicro Build – The Components

The SuperMicro E200-8D and E300-8D are excellent options for a home lab, especially because of their small size, low power consumption and enterprise-ready hardware. If you haven’t already read my first blog post, you can find my SuperMicro vs Intel NUC post here.

So you’ve bought a nice shiny new SuperMicro E200-8D and now you’re ready to start building your home lab, right? Not quite. These units don’t generally come plug and play; there is some assembly required. In my case this included the RAM, NVMe M.2 SSD, 2.5” SATA SSD, an additional case fan and rack mount brackets. But it doesn’t stop there! Before we start to build our home lab, we need to update the BIOS firmware and the IPMI software, which enables the use of an HTML console session instead of the old Java console. I will cover the steps in more detail over the next few blog posts: first the hardware selection, then the install guide and finally the BIOS and IPMI updates.

So let’s get started.

Bill of Materials

I purchased all of my hardware from Eric Yui at MITXPC. The prices and available hardware may vary, so if you’re interested you should check the MITXPC website for current stock and pricing. Don’t forget to use William Lam’s virtuallyGhetto discount! In case you didn’t know, William Lam has secured a 2% discount from MITXPC for the community. You can find all of the details here.

I am by no means recommending that you buy the same hardware that I did; you should buy the hardware that suits your requirements and fits your price range. I will outline the hardware options as good, better and best, and you can make your own choices. Please add to the comments if you have any relevant experience with different products that you prefer.

First, here is what I bought.

Product                         Part Number            Price
SuperMicro E200-8D              SYS-E200-8D            $799.99
64GB ECC UDIMM RAM (4 x 16GB)   TBA                    $329.95
1TB 2.5” SSD                    SanDisk X400           $299.95
128GB NVMe M.2 SSD              Plextor PX-128S2G      $59.99
1x Additional Case Fan          FAN-0065L4             $9.95
Rack Mount Brackets             SMC-MCP-290-10110-0B   $44.95
Subtotal                                               $1544.74
virtuallyGhetto 2% Discount     VIRTUALLYGHETTO2OFF    -$30.89
TOTAL                                                  US $1513.85

RAM


         ECC         Capacity   Speed     Price
Good     Non-ECC     64GB       2133MHz   $300
Better   ECC UDIMM   64GB       2133MHz   $400
Best     ECC RDIMM   128GB      2400MHz   $1,000

This is a pretty simple decision: what RAM should you fit to your SuperMicro E200-8D? The table above makes the choice fairly clear. In my opinion the only real question is what capacity of RAM you need: 64GB or 128GB. That’s about the hardest thing you’ll have to consider.

As for ECC vs non-ECC: the price doesn’t change much, and the SuperMicro E200 is restricted to 64GB of non-ECC RAM. I can’t imagine why you would need ECC RDIMM (Registered) RAM in your home lab; I have the ECC RDIMM listed as the “best” option, but that is purely on specs. My honest opinion is that the best option for your SuperMicro home lab is the ECC UDIMM (Unregistered) RAM. For the price it’s a good buy, and you aren’t restricted to 64GB the way you are with non-ECC, which means up to 128GB of RAM capacity if you so desire.

This leaves one major decision: what capacity of RAM to buy. I opted for 64GB of ECC UDIMM RAM, which cost me $330. Unfortunately, in recent months the price of RAM has increased significantly and it is now approximately $400. I covered this topic fairly heavily in my previous blog post – SuperMicro vs Intel NUC.

I’ll make the decision as simple as I can for you. How many VMs are you planning on running, and how much RAM vs CPU do they require? The SuperMicro E200-8D has 11.4GHz of CPU processing power (1.9GHz x 6 cores). Divide the amount of RAM you think you’ll need by 11.4GHz and that will give you the approximate RAM to CPU ratio. If this ratio fits what you need in your environment, then buy that amount of RAM.

128GB RAM / 11.4GHz = 11.2GB RAM per 1GHz of CPU

64GB RAM / 11.4GHz = 5.6GB of RAM per 1GHz of CPU

There are more considerations to factor into the above calculations that I have covered in my previous post (like VSAN RAM usage), so have a read through that and make a decision on your capacity. As I said, I opted for 64GB RAM (4x16GB) ECC UDIMM.
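
If you want to rerun this arithmetic with your own numbers, here is a minimal PowerShell sketch of the same calculation (the host figures are the ones used above; swap in your own planned capacity):

$cpuGHz = 1.9 * 6            # E200-8D: 6 cores at 1.9GHz = 11.4GHz
$ramGB  = 64                 # planned RAM capacity (64 or 128)
$ratio  = $ramGB / $cpuGHz   # GB of RAM per 1GHz of CPU
"{0:N1}GB RAM per 1GHz of CPU" -f $ratio   # 5.6 for 64GB, 11.2 for 128GB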

NVMe M.2


         Sequential Read (MB/s)   Sequential Write (MB/s)   4K Random Read (IOPS)   4K Random Write (IOPS)   Price (approx.)
Good     500                      300                       90,000                  50,000                   $80
Better   1,500                    600                       150,000                 80,000                   $150
Best     3,000                    2,000                     300,000                 100,000                  $300+

The SuperMicro E200-8D contains an NVMe M.2 slot on the motherboard that accepts an 80mm PCI-E x4 SSD. When selecting an NVMe M.2 SSD there are a few things you should consider: performance, capacity, cost per GB and the bus interface.

Because I am building a VSAN environment, my NVMe card will be used as the caching tier in my VSAN storage, so I don’t need a large capacity card. Duncan Epping has detailed the flash cache calculation for VSAN here. I am running a single 1TB SSD in each ESXi host, so using the 10% rule my 128GB NVMe SSD is actually oversized; however, they don’t generally come much smaller and the price was great, so I grabbed it.
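
For reference, here is the 10% rule as a quick PowerShell sketch, assuming a single 1TB capacity drive per host as in my build:

$capacityGB = 1000                # one 1TB SSD in the capacity tier
$cacheGB    = $capacityGB * 0.1   # cache tier sized at roughly 10% of capacity
"Suggested cache tier: $cacheGB GB"   # ~100GB, so a 128GB NVMe SSD has headroom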

If I were to do it again, I would probably invest in a higher performance NVMe M.2 SSD. The SuperMicro supports a PCI-E 3.0 x4 interface, which can provide higher performance than a SATA 3 interface, and this can make a significant difference to the read/write caching performance in your VSAN environment. I went with the “good” option and probably should have used the “better” option. What you really need to ask yourself is how much performance you really need and how deep your pockets are.

If you do opt for a really fast NVMe SSD then make sure you also install the 3rd fan. These small and powerful SSDs can generate a lot of heat, and the 3rd fan blows air straight over the NVMe SSD, which will help ensure the longevity of your hardware.

My preference would be to buy the Samsung 960 EVO M.2 PCI-E, which can be found on Amazon starting from US$130. The 960 EVO in a 250GB size provides 3,000MB/s sequential read, 1,900MB/s sequential write and up to 300,000 IOPS. You can find more information here.

2.5” HDD


         HDD/SSD        Capacity   Sequential Read (MB/s)   Sequential Write (MB/s)   Price (approx.)
Good     7200 RPM HDD   2TB+       200                      150                       $100
Better   SSD            1TB        500                      300                       $300
Best     SSD            2TB        550                      400                       $800

There is a huge range of 2.5” SATA drives that are more than suitable for the SuperMicro E200. The major concerns here are capacity and price. If you are running VSAN like I am, then even a 7200rpm HDD is a really good option due to its high capacity and cheaper price. Performance is still a concern, but VSAN uses a fast NVMe M.2 SSD for read and write caching to provide better performance. The capacity disk will still need reasonably good read performance, because not all reads will be served from the high performance NVMe cache, so performance still matters. But with only one disk slot available in the E200-8D, you need to use the space wisely and get the most out of your capacity disk.

Because I am running VSAN and wanted to enable the compression and de-duplication capabilities, I am restricted to an all-flash VSAN with an SSD for the capacity disk. If you were running a hybrid VSAN, you could pair a high capacity 7200rpm HDD with a high performance NVMe card and end up with an excellent disk setup for your home lab.

In my situation, I attempted to get the largest SSD capacity within my budget. I ended up purchasing the SanDisk X400 1TB SSD for US$300, which gives me 500MB/s sequential read and 350MB/s sequential write. The Samsung 850 EVO 1TB was a close contender, but for an extra US$100 you only get slightly faster sequential write speed, which isn’t important for the VSAN capacity tier anyway. The SanDisk X400 provided me with 1TB capacity and more than enough performance for the capacity disk tier.

When comparing NVMe and SSD disks, I have found the UserBenchmark website extremely useful for comparing various brands and their performance.

Scalability

Now that you have considered the RAM, NVMe and HDD requirements above, the one final thing I would ask you to consider is whether you will scale out or scale up when you require additional resources.

What this essentially means is: once you have consumed all of your resources, will you buy additional servers (with the same resources) or will you replace the components within your server with higher spec items? In my opinion the main constraint is CPU processing power (11.4GHz), so the best option is to lean towards scaling out.

Why is this important? Cost. If you have spec’d your servers with high performance items that are lower in capacity, then you will probably need to buy additional servers sooner rather than later, which could leave you a lot of money out of pocket. The “best” option is really above and beyond, but it shows the capability of the SuperMicro E200 as a high performance and high capacity platform. For a home lab, I will always stick with the “better” options, as I feel they provide great performance, more than enough capacity, and are very cost efficient.

Overall Component Price

         RAM               NVMe         HDD        Price
Good     64GB Non-ECC      500 MB/s     HDD, 2TB   $480
Better   64GB ECC UDIMM    1,500 MB/s   SSD, 1TB   $850
Best     128GB ECC RDIMM   3,000 MB/s   SSD, 2TB   $2,100

I hope that you now understand the additional hardware components that need to be purchased with the SuperMicro and can make an informed decision about your options. I would also hope that you have a realistic performance expectation based on the Good, Better and Best components. If you refer to the above table you can clearly see that there is a massive step up from the Better to the Best options. I’d like to say that you get what you pay for, but in this scenario I don’t think anyone requires the performance, capacity or availability that comes with the Best options. I personally sit somewhere between the Good and Better options, but if I were to do it again I would factor in spending $800 on the SuperMicro and another $800 on the additional components. With a little perspective, the SuperMicro looks quite cheap when you are willing to spend just as much on the components as on the SuperMicro E200 itself. This is where the real cost (and performance) is: the components.

Continue on to Part 3 to follow along with the Installation and common mistakes people make.

Home Lab Build Series

Introduction – Home Lab Build – From the Start

Part 1 – SuperMicro vs Intel NUC

Part 2 – SuperMicro Build – The Components

Part 3 – SuperMicro Build – The Installation

Part 4 – SuperMicro Build – BIOS and IPMI

Part 5 – Networking Configuration

Part 6 – VVD – Automated Deployment Toolkit – vSphere, VSAN, NSX and vDP

Part 7 – VVD – Automated Deployment Toolkit – vRA, vRO, vROps and Log Insight



The next blogs

As this site is new and there is very little content, I would like to outline what I’m currently documenting, which will soon be posted. In my role I tend to focus on EUC and architecture, so there will be a heavy focus on these areas. In addition to my Home Lab details, I will create categories for EUC, SDDC, Automation and general training resources for career development within the VMware space.

Building my home lab

This blog will outline the configuration and installation of my home lab environment: how I designed the lab, what my considerations were and the setup details. I have built a custom ESXi ISO image with drivers and configurations already included for the SuperMicro E200 servers, and I will share this with the community. As I build my home lab I am documenting the specific configurations for networking, security, automation, VDI, AirWatch, etc. These documents will all be posted soon.

The VMware Validated Design – Automated Deployment Tool

If you haven’t seen the VMware Validated Design (VVD), it is a semi-complete design that is pre-validated by VMware. If you are not aware of the VVD you can find the documentation here: https://www.vmware.com/support/pubs/vmware-validated-design-pubs.html. In addition to the VVD documentation, VMware Professional Services and select partners have access to an in-house deployment tool that automates the entire SDDC deployment, including vCenter, VSAN, NSX, vRealize Automation, Log Insight, vRealize Operations Manager, Site Recovery Manager, vRealize Orchestrator and vRealize Business. The deployment tool can deploy the entire SDDC stack in half a day; however, the underlying configuration details required to get to this stage are quite complex. As I go through the process in my home lab, I will detail these complexities and capabilities for your interest.

VMware Verify

Do you use 2-Factor Authentication to access your environments? VMware have released a built-in 2-Factor Authentication capability for vIDM called VMware Verify. Rather than typing in a code generated by your personal token in order to authenticate, VMware Verify uses push notifications to an app on your mobile device. This means that your 2-Factor auth is as simple as accepting the notification on your mobile. I will detail the configuration process to set up VMware Verify with vIDM.

User-Cert 2-Factor Authentication with vIDM and Horizon Access Points

One of my customers had a fairly unique use-case for their Horizon VDI external access. The requirement was fairly simple: they wanted physical devices to be the only devices capable of connecting from the internet to their internal VDI environment. These devices were provisioned by the business and supplied to the external users, and the business wanted to ensure that only their approved devices were able to connect to their classified environment. Working with a colleague, Anthony Urquhart, we assigned CA-signed certificates to the devices, and the Horizon Access Points accepted these as a form of 2-Factor Authentication without any notification or inconvenience to the end user. If a device without an approved certificate attempted to connect, the user would be declined a connection before being prompted for credentials.

Horizon Access Point Architecture and API

I am often asked by customers and colleagues to assist with deploying Horizon Access Point and how it should be connected. VMware recommend a 3-NIC configuration; however, this is often not a suitable configuration for customers running a DMZ. I will detail how to architect and deploy Horizon Access Points using various methods, including the API.

Horizon Access Point integration with vIDM, Horizon and AirWatch

Horizon Access Point 2.8 is fully capable of providing a single unified access point to broker external connections to Horizon, vIDM and AirWatch. When Access Point 2.9 was released it was renamed to the Unified Access Gateway to accommodate this new capability. I will document the deployment and configuration of Horizon Access Points to allow the consolidation of external access brokering to a single point of entry.

Horizon DaaS

Horizon DaaS, if you haven’t heard of it before, is a multi-tenant capable VDI solution that is geared towards Service Providers. I have worked with a number of large VDI Service Providers and will document the capabilities, configuration, common pitfalls and any other considerations when implementing and using Horizon DaaS.

SuperMicro vs Intel NUC

A couple of weeks ago I was talking to William Lam (http://www.virtuallyghetto.com/) and Alan Renouf (http://www.virtu-al.net/) about their exciting USB to SDDC demonstration, in which they were using an Intel NUC to deploy a VMware SDDC environment to a single node using VSAN. I offered them the opportunity to test the same capability with one of my SuperMicro E200-8D servers, and they took me up on it. Since then I have been approached by a number of people asking why I chose the SuperMicro E200 for my home lab over the Intel NUC. I’ve never written a blog before, but I thought this might be a good way to “cut out the middle man”, so to speak. So here it goes: my reasons for choosing the SuperMicro over the Intel NUC.

My previous home labs have generally been made up of used enterprise servers that can be picked up cheaply. These used servers are loud, power hungry and heavy. My goals were firstly to consume less power and secondly to make my lab somewhat portable. These requirements appear to be popular in the community at the moment, and there are a lot of stories of people using Intel NUCs to achieve these outcomes. I started to look around and it was fairly obvious that there were two stand-out options: the Intel NUC and the SuperMicro E200. I had a decision to make, and had to ask myself some additional questions about what I really wanted in my home lab. I came up with the following requirements.

  1. Minimal power consumption.
  2. Small and lightweight.
  3. Capable of a good consolidation ratio of VMs to Host.
  4. Capable of using all flash VSAN (albeit in an unsupported configuration).
  5. Enough scalability to expand the lab in the future.
  6. Good availability. These need to be up when I am doing demonstrations to customers.
  7. Ability to set up an enterprise-like environment for comparisons with customer environments.

The next step was to compare my options. The following table takes information from the respective vendor sites and addresses my specific requirements, plus a couple of additional considerations. This table made the decision easy for me: it became quite obvious that the SuperMicro was the superior option for my home lab; in fact, the SuperMicro is an enterprise-ready solution.

                      SuperMicro E200-8D                     Intel NUC 7th Gen
ESXi 6.5 Compatible   Native install works                   Requires NIC drivers
CPU Type              XEON D-1528                            Intel i7-7567U
CPU Capacity          6 cores / 12 threads, 1.9 – 2.2 GHz    2 cores, 3.5 – 4.0 GHz
RAM Type              4x DDR4                                2x DDR4 SODIMM
RAM Capacity          128GB ECC RDIMM / 64GB Non-ECC UDIMM   32GB Non-ECC
Intel Optane Ready    Yes                                    Yes
HDD Capacity          1x 2.5”                                1x 2.5”
NVMe / M.2 SATA       YES                                    YES
SATADOM               YES                                    NO
Micro SDXC            NO                                     YES
1Gbe Networking       2x 1Gbe                                1x 1Gbe
10Gbe Networking      2x 10Gbe                               1x Thunderbolt 40Gbps
Wireless              NO                                     802.11ac
IPMI                  YES                                    NO
Power Consumption     60W                                    64W
Rack Mounting         YES                                    NO
SR-IOV Support        YES                                    NO
Video Port            VGA only                               HDMI 4k
Noise Comparison      2x Fans                                Fan-less
USB Capacity          2 Ports                                4 Ports
Price Comparison      US$799                                 US$630

Hopefully the above table has also helped you with your decision. The reason I opted for the SuperMicro E200 is that although it does cost a little bit more, it is an enterprise-ready solution that accepts ECC memory, uses a XEON CPU, has a larger RAM capacity and a larger CPU capacity.

To provide more information, here are the more detailed comparisons between the SuperMicro E200 and the Intel NUC.

ESXi 6.5 Compatible

Both the NUC and the SuperMicro require additional drivers and configuration on top of the native ESXi installation in order to work properly, so this point is more for information than a deciding factor. The 1Gbe NICs on the SuperMicro work with the native ESXi drivers and will work out of the box; you need to install additional drivers to get the 10Gbe NICs to work. The 10Gbe drivers are supported with ESXi 6.0 and can be found on the VMware Downloads page here: https://my.vmware.com/web/vmware/details?downloadGroup=DT-ESXI60-INTEL-IXGBE-451&productId=491

The NUC requires additional drivers to get its 1Gbe NIC to work. This means that you will need to either build a custom image or install the drivers locally after ESXi is installed. William Lam has detailed the options and procedures here: http://www.virtuallyghetto.com/2017/02/update-on-intel-nuc-7th-gen-kaby-lake-esxi-6-x.html

While you are creating your custom ESXi image for your NUC or SuperMicro, I would recommend you remove the native vmw-ahci driver vib from the image. This will force your storage controller to use the newer sata-ahci drivers. ESXi 6.5 contains many new drivers; however, the standard image still contains both the native drivers and a newer version. VMware don’t default to the newer version of the drivers because they may not be 100% feature comparable. In this case, if you review the storage controller support on the VMware compatibility list, it clearly states that the controller is supported with the sata-ahci drivers. The native vmw-ahci drivers do not perform well, and you will see a massive performance improvement by using the new drivers. Anthony Spiteri has done an excellent job detailing the issues and the resolution here: http://anthonyspiteri.net/homelab-supermicro-5020d-tnt4-storage-driver-performance-issues-and-fix/
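
If you build the custom image with PowerCLI’s Image Builder, removing the vib only takes a few lines. This is a minimal sketch: the offline bundle file name and profile names below are placeholders for whichever ESXi 6.5 bundle you are using.

Add-EsxSoftwareDepot .\ESXi-6.5.0-offline-bundle.zip        # placeholder bundle name
$img = New-EsxImageProfile -CloneProfile 'ESXi-6.5.0-standard' -Name 'ESXi-6.5.0-no-vmw-ahci' -Vendor 'homelab'
Remove-EsxSoftwarePackage -ImageProfile $img -SoftwarePackage 'vmw-ahci'   # drop the native driver
Export-EsxImageProfile -ImageProfile $img -ExportToIso -FilePath .\ESXi-6.5.0-custom.iso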

Networking

The SuperMicro has significantly better networking capability than the Intel NUC. With the Intel NUC you could probably use Thunderbolt-to-Ethernet or USB-to-Ethernet adapters to build yourself some networking redundancy, but at the end of the day the SuperMicro has 2x 10Gbe NICs and 2x 1Gbe NICs. Because I want to run an all-flash VSAN configuration, I want to use the 10Gbe networking capability to optimise my VSAN performance.

During my 10Gbe vs 1Gbe networking considerations, I also looked at the SuperMicro E300-8D due to its 10Gb SFP configuration (and expansion PCIe slot). It is very hard to find a 10GBase-T network switch at a reasonable price, and I ended up spending the most money on my 48-port 10GBase-T Dell switch. In hindsight, the SuperMicro E300-8D could have been a viable option, because I would have been able to run a VSAN-supported storage controller, and a 10Gb SFP switch is much easier to find at a reasonable price. Of course there is no comparison with the Intel NUC, because it doesn’t have 10Gb NICs, let alone multiple 10Gb NICs. I eventually decided that a 10Gb networking capability would not only provide me with performance but also scalability, and would not need to be replaced in a couple of years’ time.

Storage

The Intel NUC and SuperMicro have similar storage capabilities. If you want to run VSAN then you will need to buy an NVMe card for the caching tier and a 2.5″ HDD (or SSD) for the capacity tier. The SuperMicro has more SATA ports on the motherboard but no space to mount any additional drives, so there is not much point in considering them.

The one thing you can seriously consider here is the SuperMicro E300-8D. The E300 has a smaller CPU (XEON D-1518, 4 cores at 2.2GHz) but is larger in size due to a PCIe x8 slot. This would be a great place to use a VSAN-supported storage controller!

IPMI

I love that the SuperMicro has a built-in IPMI port. It allows me to view a console screen or mount an ISO over the network. To put it simply, I don’t need to go out to my garage to manage my lab.

Noise

Yes, this is where the Intel NUC wins. The NUC doesn’t have any fans and therefore doesn’t make any noise. You could put these inside your house and you wouldn’t know they’re there. In comparison, the SuperMicro could be considered quite loud. This wasn’t an issue for me because my lab is in my garage, and once you turn on my 48-port 10Gb switch, you can disregard any noise the SuperMicro might be making. Did I really want to sacrifice cooling capability and running temperature to reduce the noise? No. I want fans pushing as much air through my lab as possible to keep it cool, and a bit of noise is worth it. In fact, the SuperMicro comes with 2x 40mm fans and a spare slot for a 3rd fan, which I populated straight away.

Take a look at Paul Braren’s blog at TinkerTry, where he analyses the noise from the SuperMicro servers: https://tinkertry.com/supermicro-superserver-sys-e200-8d-and-sys-e300-are-here

Power Consumption

One of my most critical requirements was low power consumption; I have had some pretty high electricity bills in the past while running large rack mount servers. I haven’t measured the actual power consumption of the Intel NUC and the SuperMicro, however I would be very confident that the Intel NUC consumes less power. Both units are very low in power consumption compared to a large rack mount server, so they both meet my requirement.

Rack Mount

This was a big bonus. The SuperMicro E200-8D has rack mount brackets. Not only does this make my lab neat and tidy, it’s also easily expandable, and I can build a hot/cold zone within my rack. Where I live it can get very hot in summer (40 degrees Celsius, or 104 degrees Fahrenheit), so keeping my lab cool is a must. By using rack mount panels I have been able to separate the front fan intakes on the SuperMicros from the rear hot outlets. I can then duct cold air into the front of the rack and keep my lab operating temperature at a respectable level. If I were using Intel NUCs, I would have no way of keeping them cool during summer.

The rack mount bracket part numbers are MCP-290-10110-0B for the E200-8D and MCP-290-30002-0B for the E300-8D. Although I could not find these listed for sale anywhere, Eric at MITXPC was able to source them for me.

Video Port

This might seem simple, but do you have an HDMI-capable monitor in your home lab? I don’t. I’m using a fairly old monitor with VGA and DVI ports. The Intel NUC may be 4k capable and offer an HDMI port, which would be great for a media PC, but why would you need this in your home lab? If the Intel NUC also had a VGA port then it might be comparable, but it only offers HDMI. The SuperMicro’s VGA port also comes in handy when you turn up at a customer site and they don’t have a HDMI port.

USB Capacity

This is a downside for the SuperMicro, as it only has 2x USB ports. Because I am running VSAN, neither my internal NVMe nor my SSD can be used to boot ESXi, so I use a small USB drive to run ESXi. While installing ESXi to the SuperMicros I found myself short of USB ports: one is required for the bootable USB drive that ESXi is installed to, another for the ESXi install media, and a third for a USB keyboard to click through the install. 3x USB ports would have been nice, but instead I mounted the ESXi image over the IPMI connection and clicked through the install process from the comfort of my lounge room, using the IPMI console screen.

This is where the Intel NUC provides a Micro SDXC slot, which you could very well utilise as the ESXi install location. The NUC also has 4x USB ports, double that of the SuperMicro.

SATADOM

The SuperMicro SATA DOM (Disk on Module) is a small SATA3 (6Gb/s) flash memory module designed to be inserted into a SATA connector, providing high performance solid state storage that simulates a hard disk drive (HDD). The SATADOM device can be used as a boot disk for the ESXi installation rather than a bootable USB drive, and is available in 16GB, 32GB, 64GB and 128GB sizes.


CPU and RAM

The biggest considerations here are performance, capacity and availability, in all of which the SuperMicro exceeds the Intel NUC by leaps and bounds. This makes the cost of the SuperMicro look cheap when compared to the Intel NUC based on the numbers, as detailed below. At a high level, the SuperMicro can use either ECC or non-ECC RAM, it uses full-sized RAM slots rather than SODIMM, its RAM capacity is 4x larger at 128GB and its CPU capacity is nearly double. This makes the SuperMicro a lot cheaper than the Intel NUC once you start to consider purchasing more than a single unit.

The RAM capacity is a massive point in favour of the SuperMicros. This is incredibly important for the consolidation ratio of VMs to hosts, especially when running all-flash VSAN. You must remember to take into consideration that VSAN will consume a large chunk of your RAM. For ease of calculation I will use 10GB as my VSAN memory consumption (the actual number was 10.5GB). Details of how to calculate your VSAN memory requirements can be found here: https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2113954

Let’s assume you opt for the Intel NUC with a maximum of 32GB RAM. You instantly lose 10GB to VSAN, so you’re left with 22GB RAM to use in your environment. Each Intel NUC will provide you with 7GHz of CPU processing power and 22GB of RAM, which leaves you with 3.14GB RAM for each 1GHz of CPU used. From previous analysis of my home lab, my VMs average between 200MHz and 500MHz of CPU usage. I will use 500MHz (0.5GHz) for my calculations as a conservative estimate of the consolidation ratio.

7GHz / 0.5GHz = 14 VMs per NUC

I have approximately 65 VMs in my home lab, which means I would require 5x Intel NUCs just to meet my current capacity requirements at a consolidation ratio of 14 VMs per host. What’s worse is that I would be highly unlikely to actually get 14 VMs on each Intel NUC, because I only have 22GB RAM available to use: each of the 14 VMs could have only approximately 1.5GB RAM allocated.

22GB RAM / 14 VMs = 1.57GB RAM per VM

Based on the above calculations, RAM is a massive constraint on the use of Intel NUCs in a home lab environment. Realistically, based on RAM consumption, I would need 9 Intel NUCs in my lab. I have used an estimate of 3GB RAM per VM for the calculations below.

(3GB RAM per VM x 65 VMs) / 22GB RAM per NUC = 8.86 (9) NUCs

Each SuperMicro E200-8D has 11.4GHz of CPU processing power and 128GB RAM (less 10GB for VSAN). Applying the same calculations as above:

11.4GHz / 0.5GHz = 22.8 VMs per SuperMicro

118GB RAM / 22.8 VMs = 5.18GB RAM per VM per SuperMicro

As you can see from the above calculations, with 128GB RAM in the SuperMicro the CPU becomes the constraining factor, leaving 5.18GB of RAM for each VM using 0.5GHz of CPU. This is fairly consistent with a VMware environment, where RAM is more heavily utilised than CPU, so you are better off ensuring you have more RAM than CPU.

Let’s work out what my consolidation ratio will be based on my actual RAM requirement of 3GB RAM per VM.

(3GB RAM per VM x 65 VMs) / 118GB RAM per SuperMicro = 1.65 (2) SuperMicros
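
The same arithmetic as a small PowerShell sketch, so you can rerun it with your own VM count and per-VM RAM figure (the 10GB VSAN overhead and 3GB per VM are the assumptions used above):

function Get-HostsRequired {
    param([int]$VMCount, [double]$RamPerVmGB, [double]$HostRamGB, [double]$VsanOverheadGB = 10)
    $usableGB = $HostRamGB - $VsanOverheadGB                  # RAM left after the VSAN overhead
    [math]::Ceiling(($VMCount * $RamPerVmGB) / $usableGB)
}
Get-HostsRequired -VMCount 65 -RamPerVmGB 3 -HostRamGB 32    # Intel NUC: 9
Get-HostsRequired -VMCount 65 -RamPerVmGB 3 -HostRamGB 128   # SuperMicro E200-8D: 2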

The calculations make it very obvious from anyone’s perspective: to suit my needs I need 2x SuperMicros or 9x Intel NUCs. I could have stuck with 2x SuperMicros and set up a 2-node VSAN configuration utilising the virtual witness appliance as the 3rd node, however I wanted to make this enterprise-ready, so I opted to meet the minimum of 3 nodes for VSAN.

If you factor in the cost of 128GB of ECC RAM, it gets more expensive. Because I was going to buy 3x SuperMicro servers anyway (for VSAN), why not be more price conscious and use 64GB of non-ECC RAM per server? This meant my lab was more realistically sized, with 3 servers at 64GB RAM each = 192GB RAM, and the cost was $1499 per SuperMicro E200-8D (including disks).

Price

If it isn’t already obvious from all of the above why I opted to build my lab with SuperMicro E200 servers, do the math on the cost.

9x Intel NUCs x $630 each = $5,670 USD

3x SuperMicros x $799 each = $2,397 USD

The costs I have used above are estimates based on a quick search; you may find cheaper prices if you look harder. I haven’t factored in the cost of the RAM, SSD or NVMe, as these would be similar additional costs regardless of whether you choose the NUC or the SuperMicro. There are other considerations that may affect the cost comparison of each unit, just to list a couple:

  • The supported SSD and NVMe cards could warrant a difference in price.
  • The RAM costs could vary between choosing to use SODIMM or ECC RAM.
  • The rack mount brackets on the SuperMicro servers are an additional cost.
  • The SuperMicros could likely consume more power during daily use.

If you have read all the way to the end, you are obviously just as interested as I was in getting the “right” configuration for your home lab. Seriously though, if at the end of this article you’re still leaning towards the NUC, then just do it. You’re not going to be disappointed.

I purchased my SuperMicro servers from MITXPC, as they specialise in micro systems. I found them via Amazon; however, the prices on Amazon are significantly higher than if you go direct. If you’re interested, ask to speak with Eric Yui, who has been very helpful to me and will look after you.


Save storage by reducing media size

I found that the media on my home server was starting to take up a lot of space, so I set out to find a better way to manage it. First I thought I would simply delete videos older than a certain date (with some exceptions), but I quickly came to the conclusion: why delete it, if I can first resize it? So I wrote a script to recursively search through my media and reduce the size of any videos that meet my requirements.

As the script runs, it reports on its progress with the number of files processed and the size reduction from the original.


The script uses ffmpeg, so you will need to ensure it is installed and then add its /bin directory to the Windows PATH. The script utilises both FFMPEG and FFPROBE, and both executables should exist in the ffmpeg /bin directory. To add the folder to the PATH (a PowerShell alternative follows the steps below):

  1. Start the System Control Panel applet (Start – Settings – Control Panel – System).
  2. Select the Advanced tab.
  3. Click the Environment Variables button.
  4. Under System Variables, select Path, then click Edit.
  5. You’ll see a list of folders with a “;” separator.
  6. Add the ffmpeg /bin folder to the end of the list, e.g. ;C:\Program Files\ffmpeg-20171027-5834cba-win64-static\bin
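
If you prefer to set this from an elevated PowerShell prompt instead of the Control Panel, something like the following should work (adjust the folder to match your ffmpeg version):

$ffmpegBin = 'C:\Program Files\ffmpeg-20171027-5834cba-win64-static\bin'         # your ffmpeg \bin folder
$current   = [Environment]::GetEnvironmentVariable('Path', 'Machine')
[Environment]::SetEnvironmentVariable('Path', "$current;$ffmpegBin", 'Machine')  # persists; open a new shell to pick it up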

The script can be executed with the following parameters:

-Directory
-OptimizeAfterDays
-ValidateOnly
-DeleteOriginalVideo
-ffmpegQuality
-ConstantRateFactor

-Directory
The directory to search for the media. This directory will be searched recursively.

-OptimizeAfterDays
The number of days a file must exist for before it is optimized. If the file creation date is older than the threshold, the media will be optimized.

-ValidateOnly
This is a switch. Including this parameter in the command will stop the script from optimizing or deleting any files. It will only report on what files are going to be modified.

-DeleteOriginalVideo
This is a switch. Including this parameter in the command will cause the script to delete the original input file after a successful optimization.

-ffmpegQuality
This parameter allows you to tab-complete the possible settings. The quality sets the output resolution: 480p, 720p or 1080p. The parameter values are “hd480”, “hd720” and “hd1080”.

-ConstantRateFactor
The constant rate factor defines the rate control for the x264 encoding process. A lower rate factor means higher quality. You can set this between 0 and 51; a setting between 21 and 24 is a very good range to choose from.

Examples:
.\OptimizeMedia.ps1 -Directory "C:\Temp" -OptimizeAfterDays "30" -ValidateOnly
This will search c:\Temp for any media files that were created more than 30 days ago. No files will be optimised or deleted. Only a validation will run.

.\OptimizeMedia.ps1 -Directory "C:\Temp" -ffmpegQuality hd480 -ConstantRateFactor 21 -OptimizeAfterDays 30 -DeleteOriginalVideo
This will search c:\Temp for any media files that were created more than 30 days ago. Media files will be optimised to a lower quality (480p) and size (CRF21), then the original file will be deleted.
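
For reference, the core of the script boils down to something like this hypothetical, simplified sketch: recurse through the directory, keep the files older than the threshold, then re-encode each one with ffmpeg (the real script adds validation, progress reporting and size comparison):

$cutoff = (Get-Date).AddDays(-30)
Get-ChildItem -Path 'C:\Temp' -Recurse -Include *.mp4, *.mkv, *.avi |
    Where-Object { $_.CreationTime -lt $cutoff } |
    ForEach-Object {
        $out = Join-Path $_.DirectoryName ($_.BaseName + '_optimized.mp4')
        # libx264 re-encode: -crf sets quality (lower = better), -s sets the output resolution
        & ffmpeg -i $_.FullName -c:v libx264 -crf 23 -s hd720 -c:a copy $out
    }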

 

Downloads

PowerShell Script – OptimizeMedia.ps1
Download ffmpeg – https://www.ffmpeg.org/download.html

Automated Secure Desktop Solution

 

A lot of security breaches in desktop environments occur because desktop admins make unauthorized changes in order to “get the job done”. It is easy enough to lock down a desktop so that end-users have only minimal access rights, but the more you lock down a desktop, the more you can affect user productivity. A very simple example is when a desktop admin adds a standard user to the local admin security group, providing the user with additional capabilities as a shortcut to resolve their productivity issues. This is the event that I have chosen to focus on in demonstrating the integration of our products to automate a secure desktop solution. The event itself is irrelevant in the context of the solution and has been chosen purely for demonstration purposes; this demonstration of an Automated Secure Desktop solution can therefore be customized to suit any number of use-cases. The power in this solution is the integration of vRealize Orchestrator, which executes a workflow using the content from the trigger to achieve the desired outcome. The outcome that I have configured, for demonstration purposes, is to isolate the desktop using NSX.


In order to integrate the VMware products that form the Automated Secure Desktop solution, we first need to understand the flow of information. At a high level this can be defined as “Monitor, Detect, Orchestrate and Remediate”.

  • The monitoring is done using the Log Insight agent to monitor the Windows Event Log.
  • The security breach is detected using a custom query in the Log Insight Server.
  • The orchestration step is the most critical step and it leverages vRealize Orchestrator to perform a number of custom tasks.
  • The remediation is accomplished by using NSX to isolate the desktop.

The power behind using this framework is that it forms a highly customizable solution that can be used to achieve a number of outcomes for the customer. This specific solution relies upon Log Insight and vRealize Orchestrator to be used at the “Detect” and “Orchestrate” steps, however the monitoring and remediation can be customized. For example, you don’t need to use NSX to isolate a desktop, you could choose to just enable a higher level of debug logging, restrict access to the internet, refresh the desktop or simply email or SMS the user to alert them of the security issue.

Monitoring for a security incident

We are using a Windows 10 virtual desktop as the end user device, and the Log Insight agent is installed within the image. The Log Insight Windows Agent collects events from Windows event channels and forwards them to the Log Insight server. By default, the Log Insight Windows Agent collects events from the Application, System, and Security channels. Within the Windows 10 operating system we configure the local security policy to log modifications of local security groups to the Security channel. These logs are then made available in the Log Insight server for further analysis.


Detect the incident has occurred

Log Insight is an excellent log management and analytics platform that makes it very simple to detect when the incident occurs. A custom query within Log Insight is created to analyze the ingested logs, and an alert is raised when the event is found. Rather than sending an email alert, the Log Insight query is configured to send a webhook, which is essentially an API call that includes the content of the alert as a JSON block.


It is at this point that a webhook shim needs to be utilized to translate the API call from Log Insight into a format that vRealize Orchestrator will accept. The webhook shim is a critical component that facilitates the integration of Log Insight and vRealize Operations Manager with vRealize Orchestrator. The simplest way to deploy the webhook shim is via a Docker container on a PhotonOS VM. The VMware Cloud Management Blog has outlined the details here: VMware Blog – Webhook Shim on Docker

Other resources to investigate (as shown in the video):
VMware Blog – Webhook Shim
StorageGumbo blog by John Dias

Orchestration of the tasks

vRealize Orchestrator is the central component that brings the entire solution together, and it allows us to expand the capabilities of this solution to more than just an automated remediation task. During the orchestration stage in this demonstration, we analyzed the alert content from Log Insight and then used the data to retrieve additional user details from Active Directory, display a pop-up message on the user’s desktop, send an email to the relevant authorities and then execute the remediation task. The remediation task in this case was to isolate the desktop from the network; however, by using vRealize Orchestrator the remediation task can be configured to be any other action that suits the customer’s requirements. Examples might include:

  • Increase the debug logging level on the desktop.
  • Send SMS alert messages.
  • Recompose the desktop back to default settings.
  • Isolate the desktop from the internet.
  • Restrict application installs.
  • Or so many other tasks that can be configured in vRealize Orchestrator.


Automatic Remediation of the incident

As part of the orchestration stage, the final task is to execute the remediation task. We are using a Horizon virtual desktop that is connected to the corporate network by an NSX virtual switch. NSX provides a significant capability for security and control of desktop infrastructure by employing software defined policies that are applied to workloads and users. Within seconds of the breach occurring the desktop has been effectively isolated from the network by applying a security tag to the virtual desktop.


Video demonstration

The automated secure desktop solution has integrated Horizon virtual desktops, Log Insight, vRealize Orchestrator and NSX to provide a solution that detects that an incident has occurred and automatically remediates the issue without any administrative input. This has been a fairly straightforward demonstration of what is a highly customizable solution. I have chosen to leverage Log Insight to detect the incident and execute a workflow within vRealize Orchestrator; however, there are other options available to get to this stage. Not only does vRealize Orchestrator have a public API, it can also be integrated with external solutions through plug-ins, or you could use vRealize Orchestrator to ingest the data directly via email, SNMP traps or API calls to third party products.

This solution has been demonstrated publicly in order to provoke thought and discussion about the integration and possible solutions that the wider VMware product suite can offer. I would be interested to hear about similar solutions that have been used at customer sites and how you approached the integration.

 

VSAN 6.5 to 6.6 Upgrade Issues with CLOMD Liveness

Before attempting any upgrade in a production environment, I always try to test the process and functionality in a lab first. With this in mind I wanted to test the upgrade of VSAN 6.5 to 6.6 in my home lab, and unfortunately I initially didn’t have a whole lot of success. I’ve now fixed all the issues, and in case anyone has the same problems, I’d like to make sure the resolution is readily available. I haven’t had the time to determine the root cause, but I have resolved the issues.

Firstly, let me make sure you understand: this is on UNSUPPORTED hardware. These issues may never appear in a fully supported and compliant production environment; I have not seen these VSAN upgrade issues in a fully supported environment. However, we all tend to run our labs on unsupported hardware, so I’m sure I won’t be the only one who comes across these issues, and just in case others do, the resolution is pretty simple. I have seen the same issues three times in three separate (unsupported) environments.

The upgrade was from VSAN 6.5 to VSAN 6.6. As VSAN isn’t a stand-alone product but is built into vSphere, the upgrade performed is as simple as upgrading ESXi. I was running ESXi 6.5.0 (Build 4887370) and the upgrade was to ESXi 6.5.0 (Build 5310538).

It has been a long (and I mean a LONG) time since I have seen an ESXi purple screen, but soon after upgrading my environment to ESXi 6.5 (5310538) my hosts started purple screening. I had to take a screen shot because this is a rare sight. It only happened once, and since the fixes below were applied it has never happened again.


The VSAN upgrade process itself is very straightforward to perform:

  • Upgrade vCenter Server
  • Upgrade ESXi hosts
  • Upgrade the disk format version

Straight after the upgrade I started receiving vMotion alerts and my VMs wouldn’t migrate between hosts. There didn’t appear to be any configuration issue with vMotion, and it was working perfectly fine before the upgrade. I tested the connectivity using a vmkping between hosts on the vMotion vmkernel IPs and it failed: there was no network connectivity between hosts on the vMotion vmkernel port!

The vMotion fix:
I found that simply deleting the existing vMotion vmkernel and recreating a new vmkernel with the exact same configuration fixed the issue. I had to do this on all hosts within the cluster, and vMotion started working again.

CLOMD Liveness

This brings me to the next issue, which was a lot more critical: CLOMD liveness. After I resolved the vMotion alerts, I ran a quick health check on VSAN and found that my hosts were now reporting a “CLOMD Liveness” issue. This is concerning because CLOMD (Cluster Level Object Manager Daemon) is a key component of VSAN. CLOMD runs on every ESXi host in a VSAN cluster and is responsible for creating new objects, communication between hosts for data moves and evacuations, and the repair of existing VSAN objects. To put it simply, it is a critical component for creating any new objects on VSAN.


If you want to test this out (in a test environment), SSH to your ESXi hosts and stop the CLOMD daemon by running “/etc/init.d/clomd stop”, then try to create new objects or run a VM creation proactive VSAN test and see what happens. You will get the error “Cannot complete file creation operation”.


And the output from the proactive VSAN test is “Failed to create object. A CLOM is not attached. This could indicate that the clomd daemon is not running”.


If CLOMD isn’t running you’re not at risk of losing any data; it just means that new data can’t be created. I would still suggest that it is critical to get it running again.

The CLOMD Liveness can occur for a number of reasons. The VMware KB article is here: https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2109873

In order to check the CLOMD service/daemon was running on the hosts you can execute the following command on each host:

/etc/init.d/clomd status

The results showed that the CLOMD service was not running and even after re-starting the service, it would stop running a short time later.


The VSAN CLOMD liveness fix:
Learning from the vMotion issue, I immediately tried deleting and re-creating the VSAN vmkernel on each host, and this fixed the issue. However, this is a little more difficult than the vMotion process, because when you delete the VSAN vmkernel you instantly partition that host, so you need to be careful how you do it.

Place the host in Maintenance Mode first! We aren’t going to lose any data, so you don’t need to evacuate it; however, I would recommend you at least select “Ensure data accessibility from other hosts”. Selecting “No data migration” is generally only suggested if you are shutting down all nodes in the VSAN cluster, or for a non-intrusive action like a quick reboot.

Once the host is in Maintenance Mode you can delete the existing vmkernel and re-create a new one with the same settings. I would then reboot the host for good measure. Once the host is back up, you can exit Maintenance Mode and move on to the next host.

Again, I stress that I have only seen this issue on un-supported hardware.

My VSAN Upgrade Process

  1. Upgrade vCenter
  2. Upgrade each ESXi server
  3. Upgrade the disk format version
  4. Run a VSAN health check!
  5. If you have a CLOMD issue, then for each ESXi host in the VSAN cluster (scripted in the PowerCLI sketch after this list):
    1. Place a host in Maintenance Mode
    2. Delete and re-create the vMotion vmkernel
    3. Delete and re-create the VSAN vmkernel
    4. Reboot the ESXi host
    5. Move on to the next host
  6. Run a VSAN health check again
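
If you have a number of hosts to work through, steps 5.1 to 5.5 can be scripted with PowerCLI. The following is a hedged sketch rather than a tested runbook: the host name, vmkernel name, vSwitch and port group are placeholders for your own configuration, and you should confirm each host is healthy before moving to the next.

# assumes an existing Connect-VIServer session and standard vSwitch networking
$vmhost = Get-VMHost -Name 'esxi01.lab.local'
Set-VMHost -VMHost $vmhost -State Maintenance -VsanDataMigrationMode EnsureAccessibility

# capture the existing vMotion vmkernel settings, then remove it and its port group
$vmk = Get-VMHostNetworkAdapter -VMHost $vmhost -VMKernel -Name 'vmk1'
Remove-VMHostNetworkAdapter -Nic $vmk -Confirm:$false
Get-VirtualPortGroup -VMHost $vmhost -Name 'vMotion' | Remove-VirtualPortGroup -Confirm:$false

# re-create the vmkernel with the same settings
New-VMHostNetworkAdapter -VMHost $vmhost -VirtualSwitch 'vSwitch0' -PortGroup 'vMotion' `
    -IP $vmk.IP -SubnetMask $vmk.SubnetMask -VMotionEnabled:$true
# repeat the delete/re-create for the VSAN vmkernel, using -VsanTrafficEnabled:$true

Restart-VMHost -VMHost $vmhost -Confirm:$false   # reboot for good measure
# ...wait for the host to come back, then take it out of Maintenance Mode:
Set-VMHost -VMHost $vmhost -State Connected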

 


 


Single Node SuperMicro Home Lab

Building a home lab can be an expensive endeavour, so if there’s a much cheaper and easier option that still achieves the same outcome, why not do it? Who needs all that physical hardware when you can build your entire lab environment from a single SuperMicro server? The SuperMicro E200-8D and E300-8D are both micro servers that are ideal for this type of home lab build. Have a look at my previous article on this topic (SuperMicro vs Intel NUC), where I explain why the SuperMicro is such a great option. They are micro servers that take up next to no space, consume minimal power and provide 128GB of RAM capacity.

Thanks to a colleague of mine, Dale Shaw (@Shawski500), who has loaned me his SuperMicro E200-8D server with 128GB RAM, I am able to show the process of building out a home lab on a single server.

Home Lab Concept

Ok, so the concept here is pretty simple: take a single server with 128GB RAM and build 4 nested ESXi hosts with 32GB RAM each that share the resources of the single physical host. Why 4 nested ESXi hosts? Not only does the RAM split nicely at 32GB, this also allows you to build a couple of 2-node clusters in your environment (i.e. management and compute clusters).

In perfect timing, William Lam (virtuallyGhetto) has just published two new blogs that we can leverage to assist with our home lab build: the Project USB to SDDC tool and his nested ESXi virtual appliances.

Utilising one or both of the above capabilities, we can simplify our home lab build. If you haven’t tried it, this is a great opportunity to try out Project USB to SDDC to kick-start your home lab build. William has already tried it on the SuperMicro E200-8D, and without any effort the SDDC environment was up and running. If we can do it on the floor of the Melbourne Convention Centre, then you can do it at home!

As is often the case with a home lab build, the idea is to manually install all of the components in order to learn how they work, break things, fix them and make it your own. So this article will provide you with the details you need to build your home lab using the nested ESXi virtual appliances that William offers. How you then choose to build your actual lab environment is up to you.

What You’ll Need

Let’s get started with the essentials. Here is what you’ll need to get started to build your new lab.

  • Server with sufficient RAM and CPU (I’m using a SuperMicro E200-8D with 128GB RAM)
  • Local disk or NAS for storage
  • ESXi 6.5d iso
  • ESXi 6.5d virtual appliance
  • Virtual router (pfSense or similar)
  • Nested VM for AD, DNS, DHCP, CA…etc

Building the Physical ESXi host

This is where all the critical configuration happens, so don’t rush into building the nested ESXi hosts straight away. The first step is to prep your server (BIOS and IPMI updates, network configuration, BIOS settings and all the normal stuff), then install ESXi to it. I won’t go into any detail on this process, as you should be familiar with installing ESXi 🙂

My physical ESXi host is a SuperMicro E200-8D with 6 CPUs and 128GB RAM. I’m also using local SSD storage rather than my NAS, just for this demonstration. I will configure VSAN within the nested environment based on this underlying single 1TB SSD and NVMe cache.


Network

The networking configuration on the physical ESXi host is important to get right; if it isn’t, your nested lab won’t be able to communicate between ESXi hosts. A massive benefit of the SuperMicro servers is that they have multiple NICs, so I can run separate vSwitches for my nested environment. I’ve built a separate vSwitch called “Nested ESXi” and assigned it my 10Gbe NICs. The physical ESXi management is on its own vSwitch, the default “vSwitch0”, and is assigned two 1Gbe NICs.


On the “Nested ESXi” vSwitch I have created a single port group, also called “Nested ESXi”. The network settings for the Nested ESXi switch and port group need the following configuration (a PowerCLI sketch follows the list):

  • Allow Promiscuous Mode.
  • Allow Forged Transmits.
  • Allow MAC Changes.
  • VLAN 4095, which is a “trunk” port group and will allow you to run multiple VLANs in your nested lab.
  • MTU needs to be set to Jumbo Frames if you are going to use NSX in your nested lab.
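
Here is roughly what that configuration looks like in PowerCLI, as a sketch; the host name and vmnic numbers are mine, so adjust them to match your 10Gbe ports.

$vmhost = Get-VMHost -Name 'physical-esxi.lab.local'
$vsw = New-VirtualSwitch -VMHost $vmhost -Name 'Nested ESXi' -Nic vmnic2, vmnic3 -Mtu 9000
New-VirtualPortGroup -VirtualSwitch $vsw -Name 'Nested ESXi' -VLanId 4095   # 4095 trunks all VLANs

# let the nested hosts' virtual MACs pass through the switch
Get-SecurityPolicy -VirtualSwitch $vsw |
    Set-SecurityPolicy -AllowPromiscuous $true -ForgedTransmits $true -MacChanges $true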


Storage

The next step is to configure the storage. In my case I am going to run a nested VSAN lab, and this SuperMicro E200-8D is fitted with a 350GB NVMe and a 1TB SSD, so I need to create a local datastore for each of these storage tiers.


Nested ESXi Hosts

Now that the underlying networking and storage are configured, we can start to deploy our nested ESXi hosts. You can deploy as many or as few nested hosts as you like. This is now an extremely simple process thanks to the nested ESXi appliances: simply deploy the ova file 4 times to build 4 nested ESXi hosts. You will need to configure each host’s management settings during deployment. At this point you need to decide on your Management VLAN ID and the host IP addresses. At this early stage DNS isn’t critical, but if you’ve already decided on your DNS server IP address then enter all the details during the deployment.

The VLAN ID will likely be 0 or blank. Because the physical port group is configured as VLAN 4095 (a trunk port group), you can use multiple VLANs in your nested environment, with either a Management VLAN or no VLAN. Once the nested ESXi hosts are configured, we will deploy a virtual router that will be configured with our nested home lab VLANs, and we can then configure VLANs for VSAN, vMotion, nested management, etc. For now, all we need is for the ESXi hosts to communicate with each other without routing to any other VLANs, so just make sure they’re all configured on the same network and are accessible from your home network. Don’t power on the nested ESXi hosts yet.

Now that the nested ESXi hosts are deployed, we need to configure them before powering them on. This includes the CPU and RAM resources and the storage configuration (a PowerCLI sketch of this follows the list below). You should now have 4 virtual ESXi hosts on your physical ESXi server.


  • Each nested ESXi host will be deployed with 2 NICs. Check that both of these are connected to the “Nested ESXi” port group and set to “connected”.
  • If you are going to run VSAN on your nested home lab like I am, then configure each nested ESXi host with 3 HDDs in suitable sizes.
    • Hard Disk 1 shouldn’t be modified, as this is where ESXi is installed.
    • Hard Disk 2 is configured as the read/write cache and is connected to the “Local NVMe” datastore.
    • Hard Disk 3 is your VSAN capacity disk and should be as large as you can afford. It should be connected to the “Local SSD” datastore.
  • The CPU should be set to use all of the available CPU cores.
  • The RAM is set to the shared amount, in my case 32GB.
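
The per-host configuration can also be scripted. A sketch, assuming the four appliances are named nested-esxi-01 to nested-esxi-04 and using the two local datastores created in the storage step (the disk sizes are examples, not recommendations):

foreach ($name in 'nested-esxi-01','nested-esxi-02','nested-esxi-03','nested-esxi-04') {
    $vm = Get-VM -Name $name
    Set-VM -VM $vm -NumCpu 6 -MemoryGB 32 -Confirm:$false         # share the cores, split the RAM
    New-HardDisk -VM $vm -CapacityGB 40  -Datastore 'Local NVMe'  # Hard Disk 2: VSAN cache tier
    New-HardDisk -VM $vm -CapacityGB 200 -Datastore 'Local SSD'   # Hard Disk 3: VSAN capacity tier
}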


Configure all of your nested ESXi hosts in the same way, and then power them all on.

Accessing Your Nested Lab

There are a number of ways in which you can configure access to your new lab, and this entirely depends on what you have available. You have deployed your nested ESXi hosts on your physical home network, so you can now connect to each of the ESXi hosts and configure them to suit your new lab environment.

The next issue will be building out all of your VMs within your nested lab and the nested networking configuration. You could simply deploy all of your VMs to your physical home network, on the same subnet as your ESXi management. This will work, but it’s not really what I’d build a home lab for. I’ve configured this nested home lab to use a trunk port group so that I can run multiple VLANs in my home lab. I want to be able to deploy and use NSX and VSAN, both of which require VLAN IDs and communication between ESXi hosts. In order to start using VLAN IDs within your nested lab and configure routing between those VLANs, you’re going to need a nested virtual router. There are many options out there, but for simplicity’s sake I have used a pfSense configuration. This is downloaded in the form of an ISO file, and when you boot from the ISO it will build the virtual router for you.

Here is a quick overview of my pfSense configuration for this lab, with the WAN network being the untagged native network and the LAN networks the nested VLANs. If you want to do something similar then let me know and I’ll try to put together a more detailed “next steps” follow up with nested networking configuration, vCenter deployment, VSAN configuration and NSX.

You now have 4 ESXi hosts running on a single SuperMicro server that can be used to build your home lab however you like.

SSL Certificate Tool (CertGenVVD)

 

It’s always one of the parts of a new implementation that I don’t look forward to: generating CA signed SSL certificates for all of the various VMware products. This is something that I’ve done a lot of times in my years at VMware, but I still avoid doing it if possible. Not surprisingly, a lot of customers reach out to VMware for support when renewing certificates too. The process you have to go through just to generate the certificates is time consuming and prone to error.

  • First you have to write out the config files for all of the certificates.
  • Then generate a .csr file for each of those certificates.
  • Submit the .csr and get a CA signed SSL certificate back.
  • Download the root and intermediary certificates.
  • Create SSL chains with the root, intermediary and SSL certificate. This is where one of the most common mistakes occurs: mixing up the chain certificate ordering.
  • Using OpenSSL you can then create a range of .pem, .p7b or .pfx files, depending on the specific product you’re implementing.
  • And then you can start to install the SSL certificates for each product (a rough sketch of this manual flow follows the list).
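
To illustrate why this is tedious, here is roughly what the manual flow looks like for a single certificate; the file names are illustrative only:

  # Generate a private key and CSR from a hand-written config file
  openssl req -new -nodes -newkey rsa:2048 -keyout vcenter.key -out vcenter.csr -config vcenter.cfg

  # ...submit vcenter.csr to the CA, then download vcenter.crt plus the
  # root and intermediary certificates...

  # Build the chain; the ordering (leaf, intermediary, root) is where mistakes happen
  Get-Content vcenter.crt, intermediary.cer, root.cer | Set-Content vcenter-chain.pem

  # Convert to whatever format the product expects, for example PKCS#12
  openssl pkcs12 -export -in vcenter-chain.pem -inkey vcenter.key -out vcenter.pfx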

If you haven’t done this process hundreds of times, it’s quite a time consuming task, and if you get it wrong it takes a lot of time to resolve the issues. This is just one of those things that I don’t think anyone really enjoys doing. Until now, that is. I’ve spent the last week playing with the VVD CertGen tool and I’ve actually enjoyed my time doing it. So much so that I’ve even written a PowerShell script to make the process even easier, and I’d like to share my work with the community.

First of all, I am no PowerShell expert and of course I can’t take responsibility for anything that happens with this script. The VMware CertGen tool does all the hard work, my script simply takes the input from a .csv file and then creates all of the config files which are then input into the CertGen tool. I’ve wrapped it all up into a simple process that anyone can use. The CertGen tool outputs CA Signed SSL Certificates for all of the products and automatically creates the various different certificate formats that each product requires. All that is left to do is upload the SSL certificate to the product.

CertGen Tool and Scripts

The first thing you need to do is review the VMware KB article KB2146215 on the CertGen tool. This article provides the instructions to use the CertGen tool. I will cover the basic steps; however, the KB article details the pre-requisites and configuration of the CA servers, the supported platforms, product compatibility, and it also explains use-cases outside of what I’ll explain here. This blog article will cover the use of my script to automatically generate the configuration files from a .csv, plus a simplified set of instructions for using the CertGen tool.

At the bottom of the KB Article, in the attachments section, download the CertGenVVD zip file.

Extract the zip file to a location that will be easy to access via command line. This can be simply c:\Temp. The zip file contains the “ConfigFiles” folder, a “default.txt” file and the “CertGenVVD-3.0.ps1” script file.

Open the “ConfigFiles” folder and delete all of the existing config files, or just delete the entire folder; the script will re-create it anyway. Normally you would use these files to manually update the configuration details for each of your products. We don’t need to do this because we will use a csv file and build all of these files with the script. You can also delete the “default.txt” file, as we won’t need it.

Download my Certificate Config Tool, which includes the csv configuration file “CertConfig.csv” and the “CertConfig.ps1” script. Extract this zip file to the same location as the CertGen tool, so that my scripts sit alongside the CertGenVVD files.

I have offered the above instructions so that you can download the most up to date version of the CertGenVVD tool and use it in conjunction with my script. If you would rather take a simpler approach and download a pre-configured package, then you can download the Cert Tool zip file here, which contains my configuration scripts plus the CertGenVVD-3.0.ps1 script in a pre-configured directory. Just download the zip file and extract it to a directory like C:\Temp.

Cert Tool Package Download

Creating the SSL Certificates

I first created this spreadsheet to be used with the VMware Validated Design (VVD) Configuration Workbook and the values are linked to the configuration cells within the VVD workbook. When using the VVD Deployment Tool the certificate configuration is entirely automated from generation of the configuration files and all the way to implementing the certificates for each of the products. I have simply exported the spreadsheet as a csv file and shared it as-is so that it can be more widely used outside of the VVD process.

Update the Cert Config csv

The first step is to update the values within the csv file. I have pre-populated the configuration details that I used in a test lab so that you can see how it works.

  • Every row with a “Name” on it relates to an individual certificate that will be created.
  • If the DNS1 column contains an “n/a” then the certificate for that row will be skipped. I have included certificates for a number of fake hosts in the configuration csv that you can leave as n/a, or delete the rows if you don’t need them.
  • Some products require additional SANs (Subject Alternative Names), therefore each DNS column references an additional SAN for each certificate. If you don’t require additional names, leave the cells blank.
  • The domain name needs to be populated because the PowerShell script uses the short DNS name separately. The script will combine the short DNS name and the Domain Name to create the FQDN (see the sketch after this list).
  • Some products require the IP address. You can populate that here, or leave it blank for the products that you only want tied to a DNS record and not locked to an IP address.
  • The FileName column is the name of the configuration file that gets created. The name and folder structure of the signed certificates is created by the CertGenVVD tool and is based on the Common Name inside the certificate (the FQDN).
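
As a rough illustration of what the script does with those columns (the header names below are hypothetical; use the headers that ship in the csv):

  # Hypothetical column names, for illustration only
  $rows = Import-Csv .\CertConfig.csv
  foreach ($row in $rows | Where-Object { $_.DNS1 -ne "n/a" }) {
      # Short DNS name + domain name = the certificate Common Name (FQDN)
      $fqdn = "$($row.DNS1).$($row.DomainName)"
      Write-Host "Config file $($row.FileName) will be generated for $fqdn"
  }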

Once the csv file is complete, save it with the same filename “CertConfig.csv” in the same directory as the “CertConfig.ps1” file. The script expects the csv to be in the same folder as the script, as does the CertGenVVD script.

Prepare the Microsoft CA Server

To use a Microsoft Certificate Authority server you must ensure that the server meets the pre-requisites that the CertGenVVD script requires. This is fairly simple to do if you have administrator rights to the CA.

As part of the Certificate Authority services, you must ensure that the following additional role services are installed and configured:

  • Certificate Authority Web Enrollment
  • Certificate Authority Web Service

You will also need a Certificate Template that is used to sign the certificates. Open your CA server settings, expand the folder structure, right click on “Certificate Templates” and select “Manage“. Right click the “Web Server” template and select “Duplicate Template“. I created a VMware-specific template with the following configuration.

  • Template Name – VMware.
  • Compatibility of Windows Server 2003 and upwards.
  • In the Subject Name tab, make sure “Supply in the request” is selected.
  • In the Extensions tab.
    • Delete all the application policies.
    • In Key Usage select “Signature is proof of origin (nonrepudiation)”.

Close the Certificate Templates Console and add the new VMware Certificate Template to the CA by right clicking on the “Certificate Templates” folder, select “New” and then select “Certificate Template to Issue“. Find the “VMware” certificate and click OK.
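
If you want to confirm the template was published correctly, a quick check from the CA server should list it (certutil ships with Windows):

  # Lists the templates this CA can issue; the new "VMware" template should appear
  certutil -CATemplates | Select-String "VMware"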

Prepare the Operating System

On the Windows operating system that you intend to execute the scripts from, you will need to install OpenSSL and Java. Without these installed the CertGenVVD script will not work.

You should download the most up to date versions online, however for ease of use I am using the following versions that are bundled with the VVD Deployment Tool.

Win32 OpenSSL
Java 8u60

Download and install OpenSSL and Java. Once these are installed you will need to set your environment PATHs to include these products. To do this, right click on your computer, go to “Properties” and then “Advanced System Settings“. In the “Advanced” tab click on “Environment Variables“.

Create a new System Variable called JAVA_HOME and enter the path to the Java application folder.

Scroll down through the “System Variables” and find “Path“. Edit the Path variable and add the OpenSSL and Java paths to the end of the variable, using a semicolon “;” as the separator.
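
If you prefer to set these from PowerShell instead of the UI, something like the following works from an elevated prompt. The install paths are assumptions based on the versions above, so adjust them to match your system:

  # Machine-wide JAVA_HOME (the path is an assumption; point it at your Java folder)
  [Environment]::SetEnvironmentVariable("JAVA_HOME", "C:\Program Files\Java\jre1.8.0_60", "Machine")

  # Append OpenSSL and Java to the machine Path, semicolon separated
  $path = [Environment]::GetEnvironmentVariable("Path", "Machine")
  [Environment]::SetEnvironmentVariable("Path", "$path;C:\OpenSSL-Win32\bin;C:\Program Files\Java\jre1.8.0_60\bin", "Machine")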

Execute the CertConfig Script

  1. Change Directory to the location of the CertConfig.ps1 script. In my case this is C:\Temp\CertTool
  2. Execute the “CertConfig.ps1” script
  3. Answer the default configuration questions:
    1. Organisation
    2. OU
    3. Location
    4. State
    5. Country
    6. Key Size (Default is set to 2048)

That’s it! You will now see a new folder called “ConfigFiles” within the Cert Tool directory, fully populated with the configuration files for each of your certificates.

Execute the CertGenVVD Script

  1. Set the execution policy to remote signed with the following command.
    Set-ExecutionPolicy RemoteSigned
  2. Do a test run of the CertGenVVD script by first running it with the -validate parameter. This will check that everything is configured correctly and ready to issue the CA signed certificates.
    ./CertGenVVD-3.0.ps1 -validate
  3. Execute the “CertGenVVD-3.0.ps1” script with the required parameters (as defined in the KB article KB2146215).
    ./CertGenVVD-3.0.ps1 -MSCASigned -attrib “CertificateTemplate:VMware” -config “labrat.local\labrat-CA” -username labrat\Administrator -password VMware1!

The -attrib parameter references the CA server’s Certificate Template that will be used to sign these certificates. You created this when preparing the CA server.

The -config parameter is the name of your CA Server.

You will be asked to enter a password for the p12/pem certificates. This is required.

It will only take a minute and the script will do all the rest of the work. When the script is finished you will be presented with a list of the certificates that were generated, which will be located in a new directory called “SignedByMSCACerts“.


SuperMicro VSAN HCIBench

After spending a lot of time and money building up my home lab environment, the first thing I wanted to do was test it out. I wanted to know what sort of performance I would get from this little VSAN lab. In my haste to get my hardware I opted for an NVMe M.2 SSD that I expected to perform well, but it was never going to break any records. It was available at the time and at the right price, so I bought it. Now that my lab is built, I really want to know how it actually performs and whether my eagerness paid off or if it’ll come back to bite me. Regardless of the hardware, this is a home lab configuration built on the SuperMicro E200-8D platform with an all flash VSAN. How good can it be?

Lab Specs

Here are my hardware details. I have 3x SuperMicro servers in a VSAN cluster, each running the same hardware, connected via a 10Gb network.

Product             Details
SuperMicro E200-8D  SYS-E200-8D
CPU                 Intel XEON-D 1528 1.9GHz (6 core)
RAM                 64GB ECC UDIMM RAM (4 x 16GB)
Capacity Disk       1TB 2.5″ SanDisk X400
Cache Disk          128GB NVMe M.2 Plextor PX-128S2G
Network             10GBase-T with 9000 MTU
ESXi                ESXi Version 6.5.0 (4887370)

Lab Test with HCIBench

VMware Flings publishes an awesome little appliance called HCIBench, which is a Hyper-Converged Infrastructure benchmarking tool; you can download it from the VMware Flings website. This is a very simple tool that makes performance testing of an HCI POC or home lab an extremely simple task. Run it in your home lab and let me know what you get; I’d like to get some comparisons from other home lab environments.

I won’t go into much detail around the install process because it is very simple and the Install Instructions are very clear and well written. The gist of the install goes like this:

  1. Download and import OVA.
  2. Enter the network configuration.
  3. Log into the website at http://ipaddress:8080.
  4. Username is “root” and the password was setup in the OVA deployment.
  5. Enter all of your vCenter details.
  6. Press the button to download Vdbench and then upload it. Due to licensing constraints, you must download Vdbench yourself.
  7. Tick the “Easy Run” for automated VSAN testing.
  8. Validate and then start the Test.

Once the test has started you will see a progress screen.

The HCIBench tool will deploy the necessary VMs to your environment, configure them and wait for them to respond on the network. You will need to either provide a DHCP network or tick the box to get the HCIBench tool to allocate IPs to the worker VMs.

It will take a while for the VMs to be deployed and then they will prepare the disks before the actual test starts. This takes about 10 minutes or more. Once everything is ready the test will start.

It will take a couple of hours to do a full test. While it was running I logged in to esxtop to keep an eye on the live disk activity.

Results

After a few hours of testing I had the results. I wasn’t really surprised at the figures; they are almost exactly what I expected from the SanDisk X400 disks. According to the UserBenchmark website, the expected 4k write throughput for the X400 is 63.7MB/s, and my throughput was 62.81MB/s. Now it’s time to buy a Samsung 960 EVO M.2 SSD and do the test again 🙂

Datastore SuperMicro_VSAN
VMs 6
IOPS 16078.98 IOPS
THROUGHPUT 62.81 MB/s
LATENCY 23.8660 ms
R_LATENCY 16.0298 ms
W_LATENCY 42.1727 ms
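
As a quick sanity check, the IOPS and throughput figures in the table are consistent with a 4KB block size (the same block size the UserBenchmark comparison above assumes): 16,078.98 IOPS × 4 KB ≈ 64,316 KB/s, or about 62.8 MB/s, which matches the reported throughput.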

SuperMicro Build – BIOS and IPMI

Upgrading the BIOS and IPMI firmware is not strictly necessary, but I highly recommend at least the IPMI update. Depending on the age of your system, the BIOS may already be up to date; I purchased three servers at the same time from MITXPC and two of them already had the updated BIOS firmware. Upgrading the firmware can be a daunting task, especially if you have just bought a new server. The last thing you want to do is break your new server by upgrading the firmware incorrectly. Luckily the process is pretty straight forward, so I went ahead and did it just to be sure. If you aren’t comfortable taking responsibility for upgrading your firmware, then just upgrade the IPMI software.

Upgrading the BIOS Firmware and IPMI

Download the SuperMicro BIOS update and IPMI update files from the website here:

https://www.supermicro.com/support/bios/firmware.aspx

https://www.supermicro.com/support/resources/results.aspx

The IPMI update will allow you to use the iKVM/HTML5 console rather than the older JAVA based console. This is much easier to use and much more secure.

In order to upgrade the BIOS firmware you need to boot to a USB disk and apply the firmware update from there. I use Rufus to create the bootable USB disk. There is nothing special to configure in Rufus other than using a FAT32 file system and selecting the FreeDOS bootable image.

Once this is finished, copy the relevant files to the USB drive. You can see below that I have also copied the SuperMicro IPMI tool, which is handy in case you need to reset the IPMI settings (a couple of example commands follow the file list).

IPMICFG_1.26.0_20161227 – SuperMicro IPMI Configuration Tool

REDFISH_X10_346 – SuperMicro IPMI Update

X10SDVF6_A03 – SuperMicro BIOS Update
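
For reference, two IPMICFG commands I find handy when run from the bootable USB (flags as per SuperMicro’s IPMICFG documentation):

  IPMICFG -m     # shows the current IPMI IP and MAC address
  IPMICFG -fd    # resets the IPMI configuration to factory defaults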

Insert the USB drive into your SuperMicro E200 and boot to the USB. You will need to select the non-UEFI bootable device that represents your USB drive.

When FreeDOS has booted, change the directory to the BIOS Update folder and then type:

FLASH.bat

The BIOS update will now run. It takes a while, so be patient. Once it is complete you will see a confirmation to power off the system. For good measure, SuperMicro also recommends resetting the CMOS.

The BIOS update is now complete. The next step is to update the IPMI.

The IPMI update can be done from the UI very easily. If you haven’t already configured the IPMI, log into the BIOS and on the IPMI tab configure your networking settings.

Type the IP address of the IPMI into a web browser and then enter the default username and password (both must be in capitals):

Username: ADMIN

Password: ADMIN

Under the Maintenance tab, select Firmware Update.

Press the button to “Enter Maintenance Mode” and then upload your IPMI Update file.

Once the files are uploaded you will need to un-tick the “Preserve settings” options. This is required by SuperMicro as part of the IPMI upgrade; if you leave these options ticked the upgrade may fail.

The upgrade may take a while, and the system will reboot once it has finished. When it’s done, the Firmware Revision shown in the IPMI UI will reflect the new version.

After the IPMI upgrade is complete, you can launch an iKVM/HTML5 remote console session.

Your hardware is now ready to use. The next blog post will be demonstrating the VMware Validated Design and the Automated Deployment Toolkit, which will build the entire SDDC stack in my home lab, FULLY AUTOMATED.

Home Lab Build Series

Introduction – Home Lab Build – From the Start

Part 1 – SuperMicro vs Intel NUC

Part 2 – SuperMicro Build – The Components

Part 3 – SuperMicro Build – The Installation

Part 4 – SuperMicro Build – BIOS and IPMI

Part 5 – Networking Configuration

Part 6 – VVD – Automated Deployment Toolkit – vSphere, VSAN, NSX and vDP

Part 7 – VVD – Automated Deployment Toolkit – vRA, vRO, vROps and Log Insight



SuperMicro Build – The Installation

In the previous blog we talked about the hardware options for the SuperMicro and what you might get for your money. Now you have bought your server and all of your components, and you’re ready to start installing everything. I am not going to do a full step by step guide; rather, I will give you all the information you need to do the install yourself, without writing an article that is too long and boring. If you have any questions about the installation then let me know.

It is extremely simple to open up the SuperMicro E200-8D, it is literally a few screws and you’re in. So take the lid off it and let’s get started!

Installing the 3rd Fan

The SuperMicro E200 ships with 2 case fans. These are more than capable of keeping the unit cool; however, my systems are not in an air-conditioned room, so for added peace of mind on those hot days I opted for a 3rd fan.

There are 7x screws around the face plate that first need to be undone, then you can remove the front panel and access the fans. Don’t forget to remove the blanking plate from the front panel; there’s no point installing a 3rd fan if it can’t suck any air in! Unfortunately this is a very easy step to miss, and I’ve seen it done on multiple occasions.

Just bend the blanking plate backwards and forwards a couple times and the tabs at the bottom will break.

The 3rd fan is very easy to fit: just a couple of screws in the bottom of the fan to secure it, then plug it in. The fan headers are clearly labelled FAN1, FAN2, FAN3 and FAN4 in the motherboard manual. I’ve used FAN1, FAN2 and FAN3. I also paid careful attention to route the fan cables out of the airflow path and tucked them out of the way.

Installing the RAM and NVMe

The RAM and NVMe are very simple to install, so no need for instructions. The NVMe has a single screw and the RAM just drops straight in.

Installing the HDD

This single topic is what prompted me to write this article. Honestly, this can cause major issues if not installed correctly. First, let’s look at what NOT TO DO.

There are 2 major issues with a bad install that could break your system:

  1. The HDD is mounted the wrong way around and the cables are bent and rubbing against the fan housing.
  2. The excess HDD cables are wound up in a roll and jammed in front of the fans, stopping most of the air from freely circulating over the CPU and NVMe card. This is a great way to overheat your system real fast!

Ok, so now that you’ve seen what not to do, it should be pretty easy to do it right the first time.

Face the HDD with the cables towards the back of the unit. This will give you a lot of room to play with.

The power and SATA cables are a little more difficult to find space for, but try to place them neatly and don’t obstruct the airflow to your critical components. I’ve seen a few different variations in the types of cables that are delivered with the SuperMicros, so depending on what type of cables you have, this could be much easier for you than it was for me.

I actually made my own cables by reducing the size of the cables that were delivered with my unit. If you can find a set of new power cables that are more suitable then please let me know as I would love to buy them. What you need is a Female Molex to Female SATA power cable that is at least 20cm long. This would remove the major bulk of useless cables from your system and leave it nice and open to promote the best air flow.

The difference between a bad and a good installation is stark: with the cables bundled in front of the fans, air circulation is all but stopped; with them routed out of the way, there is a wide open zone for air to flow freely.

Rack Mount Brackets

One of the reasons I chose the SuperMicro E200 for my home lab is its portability, so it might seem a bit weird to then fit it in a rack. There are a couple of reasons for this, but mostly it is because I can separate a cool zone and a hot zone within my rack and provide better cooling efficiency for my hardware. If it takes me 10 minutes to remove the rack mount brackets, that’s a small price to pay for improved cooling, better security, better protection, easier use, and better looks too.

The rack mount brackets are secured to the side of the unit and when fitted to a rack they provide a physically separate zone that forces cold air to enter in the front and hot air to be vented out the back. The rack mount brackets also provide a secure mounting tray for the power supply.

There are screw holes in the side of the SuperMicro E200 and the rack mount brackets just bolt straight up. It is extremely simple to do.

Server Rack

Buying the right server rack can be really hard. At first I purchased a small network cabinet, but the length was a problem: my 10Gb switch was too long to fit in the short cabinet, so I had to start looking at other options. A colleague of mine had an old full size server rack that he didn’t need, so we swapped our cabinets over. I couldn’t be happier with that decision. Yes, the 42RU server rack is large and I don’t need all of the capacity, but it has provided me with a unique opportunity to create a “cool zone” within the server rack that feeds cold air into my servers and lets hot air vent out the back. Considering my lab environment is locked up in my garage with no air-conditioning and not a lot of air flow, a full size rack with a controlled cold zone keeps my hardware humming along.

Along with my SuperMicro E200 servers, I have a full size tower server that serves as a NAS, a full length 10Gb switch and an old cheap 2RU server that might come in handy one day.

I have sealed the door with some plastic sheets, and in front of the servers there is a 30cm air gap that serves as the cold zone.

At the top of the rack there is a duct running into the cold zone. This duct feeds cold air to a 150mm hydroponics fan, which pushes approximately 200cfm of air through a water cooled radiator and into the cold zone. The whole setup is electronically controlled via a thermostat placed inside the rack, and it kicks in when the temperature rises past 30 degrees Celsius. This ensures that on those hot days in the garage my lab stays cool.


Home Lab Build – From the Start

Most of my colleagues, and just about everyone in the IT industry, have a home lab. Is there a better way to continue learning your chosen technology? Most of us learn from experience, and how better to gain experience than by building your own enterprise environment at home? I have set out to rebuild my old home lab environment, and I will be detailing the configuration throughout the process so that anyone can build something similar in their home. I have chosen the hardware platform (SuperMicro E200-8D) and will be detailing the process to build out an entire VMware SDDC lab environment.

There are multiple approaches to building your lab at home. Not many people have the capability or money to run enterprise hardware in their home lab; there are sacrifices to be made, and most often these decisions come down to noise, space, power consumption and cost.

If you don’t have a home lab already, build one, break it, fix it, maintain it and learn from your experience!
