Disable TLS 1.2 on VMware Unified Access Gateway (UAG)

I was asked today how to change the TLS settings on a UAG appliance. While I wouldn’t recommend doing this unless you really know what you’re doing, I figured it would be a good example of how to use the REST API to modify the settings on a UAG.

Log into the UAG GUI via https://{IP Address}:9443/admin 

Go to the System Configuration

Within the System configuration you can see that TLS 1.2 is enabled.

When you try to disable TLS 1.2 in the GUI, you’ll find that the control itself is disabled and the setting can’t be changed.



The settings that you can’t change in the GUI (particularly TLS 1.2) can be changed via the API. I chose to use the Postman API tool, and that is what this guide will focus on. Use an API GET request to first review the current settings. To review the available API calls, go to
https://{IP Address}:9443/rest/swagger.yaml

Once you open Postman, type in the UAG API URL –
https://{IP Address}:9443/rest/v1/config/system

Select Basic Auth and type in your username and password.

Select the Headers tab and add a Key for “Content-Type” with a value of “application/json”.

Click “Send” and you will get the results in the window below, in JSON format.
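If you prefer the command line over Postman, the same GET can be issued with curl. This is a sketch only: the placeholder address and the admin username are assumptions for your environment, and -k is needed because the admin interface typically uses a self-signed certificate.

```shell
# Retrieve the current UAG system settings as JSON.
# Replace {IP Address} with your UAG appliance; curl prompts for the password.
curl -k -u admin 'https://{IP Address}:9443/rest/v1/config/system'
```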


Now that you have validated the existing config and retrieved a properly formatted JSON block with the current settings, you can use a PUT request to make the necessary changes.

Copy the JSON block and then change the API call from GET to PUT.

Open the “Body” tab, select the “raw” input method and change the input type to “JSON (application/json)”.

Paste the JSON block into the window below and change “tls12Enabled” to false.

Click “Send” and the results will be displayed in the bottom window.
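The same PUT can be scripted end-to-end with curl. A sketch, under the assumption that you first save the GET response to a file, flip the flag, and push it back; the address, credentials and file names are placeholders for your environment.

```shell
# 1. Save the current settings returned by the GET.
curl -k -u admin 'https://{IP Address}:9443/rest/v1/config/system' -o system.json

# 2. Change "tls12Enabled" to false (simple substitution; assumes the key
#    appears as "tls12Enabled": true in the response).
sed 's/"tls12Enabled": *true/"tls12Enabled": false/' system.json > system-new.json

# 3. PUT the edited settings back to the UAG.
curl -k -u admin -X PUT \
  -H 'Content-Type: application/json' \
  --data @system-new.json \
  'https://{IP Address}:9443/rest/v1/config/system'
```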

You can now log back into the UAG System Configuration and see that the TLS 1.2 setting is disabled.

Save storage by reducing media size

I found that the media on my home server was starting to take up a lot of space, so I set out to find a better way to manage it. At first I thought I would simply delete videos older than a certain date (with some exceptions), but I quickly came to the conclusion: “why delete it, if I can first resize it?”. So I wrote a script to recursively search through my media and reduce the size of any videos that meet my requirements.

As the script runs it will report on the progress with the number of files processed and the reduced size from the original.

The script uses ffmpeg, so you will need to ensure it is installed and then add its /bin directory to the Windows PATH. The script utilises both FFMPEG and FFPROBE; both of these executables should exist in the FFMPEG /bin directory.

  1. Start the System Control Panel applet (Start – Settings – Control Panel – System).
  2. Select the Advanced tab.
  3. Click the Environment Variables button.
  4. Under System Variables, select Path, then click Edit.
  5. You’ll see a list of folders with a “;” separator.
  6. Add the ffmpeg /bin folder to the end of the list, e.g. ;C:\Program Files\ffmpeg-20171027-5834cba-win64-static\bin

The script can be executed with the following parameters:

  • Directory – The directory to search for the media. This directory is searched recursively.
  • OptimizeAfterDays – The number of days a file must exist before it is optimised. If the file creation date is older than this threshold, the media will be optimised.
  • ValidateOnly – A switch. Including this parameter stops the script from optimising or deleting any files; it will only report on which files would be modified.
  • A further switch stops the script from deleting the original input file.
  • ffmpegQuality – This parameter allows you to tab-complete the possible settings. The quality sets the output resolution to 480p, 720p or 1080p; the accepted values are “hd480”, “hd720” and “hd1080”.
  • ConstantRateFactor – The constant rate factor defines the rate control for the x264 encoding process. A lower rate factor means higher quality. It can be set between 0 and 51; a setting between 21 and 24 is a very good range to choose from.
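To illustrate what these parameters translate to, here is roughly the kind of ffmpeg command the script builds for each file. This is a sketch: the input and output file names are placeholders, and the exact flags OptimizeMedia.ps1 passes may differ.

```shell
# Re-encode a video at 480p with CRF 21, copying the audio stream unchanged.
# -s hd480 sets the output frame size; -crf controls x264 output quality.
ffmpeg -i input.mkv -c:v libx264 -crf 21 -s hd480 -c:a copy output.mkv
```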

.\OptimizeMedia.ps1 -Directory "C:\Temp" -OptimizeAfterDays "30" -ValidateOnly
This will search c:\Temp for any media files that were created more than 30 days ago. No files will be optimised or deleted. Only a validation will run.

.\OptimizeMedia.ps1 -Directory "C:\Temp" -ffmpegQuality hd480 -ConstantRateFactor 21 -OptimizeAfterDays 30 -DeleteOriginalVideo
This will search c:\Temp for any media files that were created more than 30 days ago. Media files will be optimised to a lower quality (480p) and size (CRF21), then the original file will be deleted.



PowerShell Script – OptimizeMedia.ps1
Download ffmpeg – https://www.ffmpeg.org/download.html

Single Node SuperMicro Home Lab

Building a home lab can be an expensive endeavour, so if there’s a much cheaper and easier option that still achieves the same outcome, why not do it? Who needs all that physical hardware when you can build your entire lab environment from a single SuperMicro server? The SuperMicro E200-8D and E300-8D are both micro servers that are ideal for this type of home lab build. Have a look at my previous article on this topic (SuperMicro vs Intel NUC) where I explain why the SuperMicro is such a great option. They are micro servers that take up next to no space, consume minimal power and provide you with 128GB RAM capacity.

Thanks to my colleague Dale Shaw (@Shawski500), who has loaned me his SuperMicro E200-8D server with 128GB RAM, I am able to show the process of building out a home lab on a single server.

Home Lab Concept

Ok, so the concept here is pretty simple: take a single server with 128GB RAM and build 4 nested ESXi hosts with 32GB RAM each that share the resources of the single physical host. Why 4 nested ESXi hosts? Not only does the RAM split nicely at 32GB, it also allows you to build a couple of 2-node clusters in your environment (i.e. management and compute clusters).
In perfect timing, William Lam (virtuallyGhetto) has just published 2 new blogs that we can leverage to assist us with our home lab build.

Utilising one or both of the above capabilities, we can simplify our home lab build. If you haven’t tried it, this is a great opportunity to try out Project USB to SDDC in order to kick start your home lab build. William has already tried it on the SuperMicro E200-8D and without any effort the SDDC environment was up and running. If we can do it on the floor of the Melbourne Convention Centre, then you can do it at home!

As is often the case with a home lab build, the idea is to manually install all of the components in order to learn how they work, break things, fix them and make it your own. So this article will provide you with the details you need to build your home lab using the ESXi virtual appliances that William offers. How you then choose to build your actual lab environment is up to you.

What You’ll Need

Let’s get started with the essentials. Here is what you’ll need to build your new lab.

  • Server with sufficient RAM and CPU (I’m using a SuperMicro E200-8D with 128GB RAM)
  • Local disk or NAS for storage
  • ESXi 6.5d iso
  • ESXi 6.5d virtual appliance
  • Virtual router (pfsense or similar)
  • Nested VM for AD, DNS, DHCP, CA…etc

Building the Physical ESXi host

This is where all the critical configuration is, so don’t rush into building the nested ESXi hosts straight away. The first step is to prep your server (BIOS and IPMI Updates, Network configuration, BIOS settings and all the normal stuff) then install ESXi to it. I won’t go into any details around this process as you should be familiar with installing ESXi 🙂

Here is my Physical ESXi host. Just to confirm, it is a SuperMicro E200-8D with 6 CPUs and 128GB RAM. I’m also using local SSD storage rather than my NAS, just for this demonstration. I will configure VSAN within the nested environment based on this underlying single 1TB SSD and NVMe cache.


The networking configuration on the physical ESXi host is important to get right. If it isn’t, your nested lab won’t be able to communicate between ESXi hosts. A massive benefit of the SuperMicro servers is that they have multiple NICs, so I can run separate vSwitches for my nested environment. I’ve built a separate vSwitch called “Nested ESXi” and assigned it my 10GbE NICs. The physical ESXi management is on its own vSwitch, the default “vSwitch0”, which is assigned to two 1GbE NICs.

On the “Nested ESXi” vSwitch I have created a single port group, also called “Nested ESXi”. The network settings for the Nested ESXi vSwitch and port group need the following configuration:

  • Allow Promiscuous Mode.
  • Allow Forged Transmits.
  • Allow MAC Changes.
  • VLAN 4095, which is a “trunk” port group and will allow you to run multiple VLANs in your nested lab.
  • MTU needs to be set to Jumbo Frames if you are going to use NSX in your nested lab.
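If you prefer to script it, the same settings can be applied from the ESXi shell with esxcli. This is a sketch assuming the vSwitch and port group names used in this lab; substitute your own names.

```shell
# Allow promiscuous mode, forged transmits and MAC changes on the vSwitch.
esxcli network vswitch standard policy security set -v "Nested ESXi" \
  --allow-promiscuous=true --allow-forged-transmits=true --allow-mac-change=true

# Enable jumbo frames on the vSwitch (needed if you plan to run NSX).
esxcli network vswitch standard set -v "Nested ESXi" -m 9000

# Tag the port group as a trunk (VLAN 4095) so it passes multiple VLANs.
esxcli network vswitch standard portgroup set -p "Nested ESXi" --vlan-id 4095
```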


The next step is to configure the storage. In my case I am going to run a nested VSAN lab and the SuperMicro E200-8D is fitted with a 350GB NVMe and a 1TB SSD, so I need to create local datastores for each of these storage tiers.

Nested ESXi Hosts

Now that our underlying networking and storage is configured, we can start to deploy our nested ESXi hosts. You can deploy as many or as few nested hosts as you like. This is now an extremely simple process thanks to the nested ESXi appliances: simply deploy the OVA file 4 times to build 4 nested ESXi hosts. You will need to configure each host during deployment with its management configuration. At this point you need to decide on your Management VLAN ID and the host IP addresses. At this early stage DNS isn’t critical, but if you’ve already decided on your DNS server IP address then enter all the details during the deployment.

The VLAN ID will likely be 0 or blank. Because the physical port group is configured as VLAN 4095 (a trunk port group), you can use multiple VLANs in your nested environment, with either a Management VLAN or no VLAN. Once we have configured our nested ESXi hosts, we will deploy a virtual router that will then be configured with our nested home lab VLANs, and we can configure VLANs for VSAN, vMotion, Nested Management…etc. For now, all we need is for the ESXi hosts to communicate with each other without routing to any other VLANs, so just make sure they’re all configured on the same network and are accessible from your home network. Don’t power on the nested ESXi hosts yet.

Now that the nested ESXi hosts are deployed, we need to configure them before powering them on. This includes the CPU and RAM resources and the storage configuration. You should now have 4 virtual ESXi hosts on your physical ESXi server.

  • Each nested ESXi host will be deployed with 2 NICs. Check that both of these are connected to the “Nested ESXi” port group and set to “connected”.
  • If you are going to run VSAN on your nested home lab like I am, then configure each nested ESXi host with 3 HDDs in suitable sizes.
    • Hard Disk 1 shouldn’t be modified as this is where ESXi is installed.
    • Hard Disk 2 is configured as the read/write cache and is connected to the “Local NVMe” datastore.
    • Hard Disk 3 is your VSAN capacity disk and should be as large as you can afford. It should be connected to the “Local SSD” datastore.
  • The CPU should be set to use all of the available CPU cores.
  • The RAM is set to the shared amount, in my case 32GB.

Configure all of your nested ESXi hosts in the same way, and then power them all on.

Accessing Your Nested Lab

There are a number of ways in which you can configure access to your new lab and this entirely depends on what you have available to you. You have deployed your nested ESXi hosts on your physical home network, so you can now connect to each of the ESXi hosts and configure them to suit your new lab environment.

The next issue will be building out all of your VMs within your nested lab and the nested networking configuration. You could simply deploy all of your VMs to your physical home network on the same subnet as your ESXi management. This will work, but it’s not really what I’d build a home lab for. I’ve configured this nested home lab to use a trunk port group so that I can run multiple VLANs in my home lab. I want to be able to deploy and use NSX and VSAN, both of which require VLAN IDs and communication between ESXi hosts. In order to start using VLAN IDs within your nested lab and configure routing between these VLANs, you’re going to need a nested virtual router. There are many options out there, but for simplicity’s sake I have used a pfSense configuration. This is downloaded in the form of an ISO file, and when booted from the ISO it will build the virtual router for you.

Here is a quick overview of my pfSense configuration for this lab, with the WAN network being the untagged native network and the LAN networks the nested VLANs. If you want to do something similar then let me know and I’ll try to put together a more detailed “next steps” follow-up covering nested networking configuration, vCenter deployment, VSAN configuration and NSX.

You now have 4 ESXi hosts running on a single SuperMicro server that can be used to build your home lab however you like. Here is what my new lab environment looks like.
