Nesting Microsoft Azure Stack POC on ESXi

Recently, Microsoft announced a public proof of concept of their latest cloud software.  The intent of this software is to provide a hybrid cloud platform with the look and feel of their public cloud, Azure.

Now, the technical requirements to run this proof of concept are pretty steep.  They call for a single server running Microsoft’s hypervisor, Hyper-V, on which multiple virtual machines are built and delivered to create the other components of the proof of concept’s architecture.

The physical host not only needs to satisfy the memory and CPU requirements, but also the physical disk requirements.  The physical disk requirement is non-negotiable.  Across my multiple attempts to install this on physical equipment, I tried fibre channel and iSCSI disks and ultimately found that Storage Spaces, the Microsoft technology behind the requirement, cannot use LUNs.  Unfortunately, I was unable to provide the required four physical disks.  I was using a Cisco UCS B200 M2 (an older blade, but one that does support Windows 2012 R2, which was also a requirement of the proof of concept), and that particular device type can’t present that many physical disks.
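If you want to check ahead of time whether Storage Spaces will accept the disks you have presented, a quick sketch (run inside the Windows host) is:

```powershell
# List the disks Storage Spaces considers eligible for pooling.
# In my experience, LUN-backed (FC/iSCSI) disks do not show up as poolable.
Get-PhysicalDisk -CanPool $true |
    Select-Object FriendlyName, BusType, Size, CanPool
```

If your four disks don’t appear in this output, the POC installer’s storage space creation will fail.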

I decided I’d give nested virtualization a try for this POC.  I know it isn’t specifically supported for the POC, but we all know we can nest hypervisors and get some mileage out of that technique, especially in a test scenario.  So, after nearly 30 installation attempts on ESXi 6.0, I believe I have a pretty decent guide on how to successfully get this proof of concept installed in a monster VM configuration.  Let’s start with the configuration of the ESXi host.

ESXi Host Configuration

During my tests, I originally tried ESXi 5.5 Update 3 (I forget the specific build number).  Unfortunately, I had nothing but blue screen problems with Windows 2016 on ESXi 5.5.  It was recommended to me to use ESXi 6.0 instead, so I would highly recommend you use ESXi 6.0 Update 1b (build number specifics in the image below):

ESXi Version

For the host networking, since this was a standalone host, I used a standard vSwitch.  I created a single port group and enabled all of the security options (promiscuous mode, MAC address changes and forged transmits).
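If you prefer to script the host side, the same port group and security settings can be applied with VMware PowerCLI.  This is just a sketch; the host, vSwitch, and port group names are placeholders for whatever you use in your lab:

```powershell
# Connect to the standalone host and create the port group.
Connect-VIServer -Server "esxi01.lab.local"
$vswitch = Get-VirtualSwitch -VMHost (Get-VMHost) -Name "vSwitch0"
New-VirtualPortGroup -VirtualSwitch $vswitch -Name "NestedPOC"

# Enable all three security options on the new port group.
Get-VirtualPortGroup -Name "NestedPOC" | Get-SecurityPolicy |
    Set-SecurityPolicy -AllowPromiscuous $true -MacChanges $true -ForgedTransmits $true
```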

Virtual Machine Configuration

First, let’s start with the virtual machine hardware version on ESXi 6.  Make sure you use hardware version 11.  I think many of my issues on ESXi 5.5 were related to hardware version 10 and the VMware Tools version for that ESXi 5.5 build.

Moving on to the virtual machine CPU configuration, there are a couple of settings I would make sure are enabled, as below:

Nested VM CPU Settings

The first setting here is the hardware virtualization setting.  Make sure to check the box for Expose hardware assisted virtualization to the guest OS.  This is essential for nested hypervisors on ESXi.  Also, I’m not exactly sure if it’s still required these days, but I set the CPU/MMU Virtualization setting to Hardware CPU and MMU instead of Automatic.  You’ll also notice that I presented my 12 CPUs as 2 sockets with 6 cores each.
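For reference, to the best of my knowledge these CPU settings map to the following entries in the VM’s .vmx file (a sketch only; if you edit the file by hand, do so with the VM powered off):

```
numvcpus = "12"
cpuid.coresPerSocket = "6"
vhv.enable = "TRUE"
monitor.virtual_exec = "hardware"
monitor.virtual_mmu = "hardware"
```

Here `vhv.enable` corresponds to the Expose hardware assisted virtualization checkbox, and the two `monitor.*` keys correspond to the Hardware CPU and MMU selection.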

For the memory configuration, I reserved all of the virtual machine’s memory (mainly to eliminate the need for a 128GB swap file on the datastore).

Nested VM Memory Settings
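If you’d rather set the reservation from PowerCLI than the UI, something like this should work (the VM name is a placeholder):

```powershell
# Reserve the full 128GB so ESXi doesn't need to back the guest with a swap file.
Get-VM -Name "AzureStackPOC" | Get-VMResourceConfiguration |
    Set-VMResourceConfiguration -MemReservationGB 128
```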

For the disk configuration, I set the boot hard disk to use the SCSI controller (which was set to LSI Logic SAS).  The four disks for the storage space I configured differently: I added each of them to the SATA controller (like below):

VM SAS Disk Config

I had some issues using the SCSI controller for these four disks.  Many of those issues resulted in consistent failures when creating the storage space (mostly in the form of a read-only configuration being applied to it).  It wasn’t until I set the virtual device node to the SATA controller that I was finally able to resolve many of those issues.
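For reference, the four storage-space disks end up on the SATA controller as .vmx entries along these lines (a sketch; the file names and datastore layout will differ in your environment):

```
sata0.present = "TRUE"
sata0:0.present = "TRUE"
sata0:0.fileName = "AzureStackPOC_1.vmdk"
sata0:1.present = "TRUE"
sata0:1.fileName = "AzureStackPOC_2.vmdk"
sata0:2.present = "TRUE"
sata0:2.fileName = "AzureStackPOC_3.vmdk"
sata0:3.present = "TRUE"
sata0:3.fileName = "AzureStackPOC_4.vmdk"
```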

Lastly, and this should seem like a no-brainer, configure the virtual machine to use the guest operating system type of Microsoft Windows Server 2016, like so:

VM OS Configuration Information
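In the .vmx file, that guest OS selection shows up as a single key (to the best of my knowledge, this is the identifier hardware version 11 uses for Windows Server 2016):

```
guestOS = "windows9srv-64"
```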

Final Notes

After you finish your Windows 2016 installation, make sure you install the VMware Tools package that matches the build number listed above.

Do note that the installation of the POC in this virtual machine is going to take a while.  The installation has to go through 124 steps, including, if I remember correctly, two reboots early on (mainly to install Hyper-V and to join the monster VM to the Windows domain that is created during the installation process).  I can highlight plenty of failure points I ran into and may document those in a later blog post.

Now, when running the installer’s PowerShell script, unless you already have a DHCP instance on the network where you place this monster VM, prepare to use the -NATVMStaticIP and -NATVMStaticGateway parameters.  Also, I highly recommend using the -Verbose parameter.  The log files created by the installer were great troubleshooting tools for identifying problems with certain parts of the architecture and their state during the install process.
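Putting those parameters together, the invocation looks roughly like this (the script name is from memory and the addresses are placeholders; check the POC kit you downloaded for the exact script name and parameter format):

```powershell
# Run from an elevated PowerShell prompt inside the monster VM.
.\DeployAzureStack.ps1 -Verbose `
    -NATVMStaticIP "192.168.100.10" `
    -NATVMStaticGateway "192.168.100.1"
```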

Hopefully this helps anyone else looking to give this proof of concept a try, even if you can’t satisfy that pesky physical disk requirement.


About snoopj

vExpert 2014/2015/2016, Cisco Champion 2015/2016. Virtualization and data center enthusiast. Working too long and too hard in the technology field since college graduation in 2000.
