
My first FlexPod! (Part 5 – Hyper-V Cluster)

April 17, 2016 By Boudewijn Plomp

This is part 5 of the following blog series:

  • My first FlexPod! (Part 1 – Introduction)
  • My first FlexPod! (Part 2 – Hardware Overview)
  • My first FlexPod! (Part 3 – Hardware Configuration)
  • My first FlexPod! (Part 4 – Quality of Services)
  • My first FlexPod! (Part 5 – Hyper-V Cluster)
  • My first FlexPod! (Part 6 – System Center)

I hope parts 1, 2, 3 and 4 were informative. In this part it is time to talk about an actual Hyper-V Cluster on UCS Blade Servers. Hosting a Hyper-V Cluster on UCS is not rocket science; the deployment is just as straightforward as with other brands. The only difference is that you are (more) flexible with your network configuration and you get the benefits of stateless computing, if that is what you want.

In our case we had eight physical UCS B200 M3 Blade Servers available, as specified in part 2. When you have pre-configured UCS Manager with all the required pools and policies, you are ready to create a Server Profile. You can then assign the new Server Profile to a physical server (equipment), boot it and begin installing the Operating System.

On our FlexPod we started with a six-node Hyper-V Cluster based on Windows Server 2012 R2 and SCVMM (System Center Virtual Machine Manager) 2012 R2. I’m not going to give you a step-by-step deployment or explain how it all fits together in SCVMM; Cisco has many validated designs that describe it in full detail. Instead we will focus on some details of a Server Profile and the network configuration within the OS.

Overview:

For this example we use a six-node Hyper-V Cluster:

Hyper-V Cluster and Shared Storage [HVC01] 6

Each Hyper-V Server has hostname ‘HVS0x‘, and the Failover Cluster has hostname ‘HVC01‘.
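To give you an idea, forming such a cluster from PowerShell only takes a few lines. Below is a minimal sketch, assuming the Failover Clustering feature is installed on every node; the management IP address is just a placeholder:

```powershell
# Minimal sketch: install the feature on each node, validate, then form the cluster.
# 'HVS01'..'HVS06' and 'HVC01' follow the naming scheme above; the IP is a placeholder.
Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools

$nodes = 'HVS01','HVS02','HVS03','HVS04','HVS05','HVS06'

# Validate the configuration first, then create the cluster without claiming storage yet.
Test-Cluster -Node $nodes
New-Cluster -Name 'HVC01' -Node $nodes -StaticAddress '192.168.10.50' -NoStorage
```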

Remote Disks:

The Hyper-V Servers share the following remote disks, hosted on the NetApp Storage Array:

  • QUORUM (Witness Disk)
  • CSV01 (Cluster Shared Volume)
  • CSV02 (Cluster Shared Volume)
  • CSV03 (Cluster Shared Volume)
  • CSV04 (Cluster Shared Volume)

Our NetApp Storage Array is configured in 7-Mode and is split into two aggregates. We host ‘QUORUM‘, ‘CSV01‘ and ‘CSV03‘ on the first aggregate, and we host ‘CSV02‘ and ‘CSV04‘ on the second aggregate.

NOTE: You might ask, why four CSVs instead of two? Well, that has to do with VM backups. There is a known issue with VM backups (snapshots) on Hyper-V when the VMs are stored on a CSV: in some scenarios, running multiple VM backups simultaneously can cause a CSV to go into a paused state. I have seen this in many Hyper-V environments, unless you use SMB3 or local storage. A rule of thumb is to have as many CSVs as the number of nodes, with a maximum of four CSVs. I don’t want to go into detail, but I can tell you it does matter.
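Once the LUNs are visible to all nodes, wiring them into the cluster could look like the sketch below. The ‘Cluster Disk n‘ names are assumptions; in practice you would first check which cluster disk corresponds to QUORUM and to each CSV:

```powershell
# Add all disks that are visible to the cluster but not yet clustered.
Get-ClusterAvailableDisk -Cluster 'HVC01' | Add-ClusterDisk

# Use the small QUORUM LUN as witness disk.
Set-ClusterQuorum -Cluster 'HVC01' -NodeAndDiskMajority 'Cluster Disk 1'

# Promote the remaining disks to Cluster Shared Volumes (CSV01..CSV04).
'Cluster Disk 2','Cluster Disk 3','Cluster Disk 4','Cluster Disk 5' |
    ForEach-Object { Add-ClusterSharedVolume -Cluster 'HVC01' -Name $_ }
```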

Boot from SAN

The Blade Servers we use have no local disks; instead they boot from SAN. On the NetApp Storage Array we created a unique LUN for each Hyper-V Server, and we configured each LUN with ID 0 to be able to boot from SAN.

NetApp - Boot from SAN - LUN ID 0

Optionally, you can store these LUNs on a single volume and enable deduplication on that volume. Of course, setting LUN ID 0 is not enough to boot from SAN; you still need to configure a Boot Policy and such in UCS Manager.
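For the storage side, the NetApp PowerShell Toolkit (DataONTAP module) can script this. The sketch below is from memory and not verified against a live 7-Mode system, so treat the cmdlet and parameter names as assumptions and check them against your toolkit version:

```powershell
# Rough sketch only; assumes an igroup 'HVS01' containing that host's iSCSI IQN already exists.
Import-Module DataONTAP
Connect-NaController -Name 'netapp01'          # controller name is a placeholder

# One boot LUN per Hyper-V host, mapped with LUN ID 0 so the host can boot from it.
New-NaLun -Path '/vol/boot_vol/HVS01_boot' -Size 50g -Type windows_2008
Add-NaLunMap -Path '/vol/boot_vol/HVS01_boot' -InitiatorGroup 'HVS01' -Id 0

# Optionally enable deduplication on the shared boot volume.
Enable-NaSis -Path '/vol/boot_vol'
```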

NOTE: I would have preferred FCoE (Fibre Channel over Ethernet) on the NetApp Storage Array. The fact is, at that time our NetApp devices were delivered with Ethernet-based NICs. It was a total surprise to me, because I expected to have FCoE. Long story short: the choice was FC or iSCSI only, and eventually we kept iSCSI. Although iSCSI works perfectly fine, I recommend using FC or FCoE for storage connectivity, especially if you are going to use boot from SAN. Not because I prefer FC, but because on UCS it is more straightforward and has some advantages. Also, most network devices (like Nexus Switches) already have QoS configured, with traffic classified as Ethernet or FC.

vNICs (virtual Network Interface Cards):

For the Hyper-V Cluster we needed the following vNICs:

  1. iSCSI-A
  2. iSCSI-B
  3. Management
  4. Cluster
  5. Live Migration
  6. VM-Ethernet-A
  7. VM-Ethernet-B

In the beginning we also had vNICs called ‘VM-iSCSI-A’ and ‘VM-iSCSI-B’. We added those vNICs to offer raw iSCSI connectivity within VMs (guest OS), but one year later we removed them because we did not use them anymore. I mention it because it is that easy to add more vNICs and separate their traffic, without having to share ‘iSCSI-A’ or ‘iSCSI-B’ with your VMs.

As mentioned in the previous parts, you don’t use NIC Teaming on UCS Servers; UCS offers FF (Fabric Failover). It is up to you how you distribute the vNICs between Fabric A and B, and whether you use FF or not. To illustrate, I configured the vNICs as follows:

UCS Server - vNIC and FF [HVS0x]

vNICs (iSCSI for the host OS):

The vNICs ‘iSCSI-A‘ and ‘iSCSI-B‘ (iSCSI for the host OS) connect to Fabric A and Fabric B respectively, without FF. FF is disabled because we use MPIO (Multi-Path I/O). Although FF can work for iSCSI, it is certainly not recommended; this is even stated in a white paper from Cisco/NetApp.
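From the Windows side, the host iSCSI and MPIO configuration comes down to a few cmdlets. This is a sketch with placeholder addresses; it assumes the NetApp iSCSI target portals are reachable on the iSCSI-A and iSCSI-B subnets:

```powershell
# Install MPIO and let the Microsoft DSM claim iSCSI devices (a reboot may be required).
Install-WindowsFeature -Name Multipath-IO
Enable-MSDSMAutomaticClaim -BusType iSCSI
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR   # round robin across both fabrics

# Make sure the iSCSI initiator service is running.
Set-Service -Name MSiSCSI -StartupType Automatic
Start-Service -Name MSiSCSI

# One target portal per fabric, then connect with multipathing enabled (addresses are placeholders).
New-IscsiTargetPortal -TargetPortalAddress '10.0.11.10' -InitiatorPortalAddress '10.0.11.101'
New-IscsiTargetPortal -TargetPortalAddress '10.0.12.10' -InitiatorPortalAddress '10.0.12.101'
Get-IscsiTarget | Connect-IscsiTarget -IsMultipathEnabled $true -IsPersistent $true
```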

vNICs (Management for the host OS):

The vNICs ‘Management‘, ‘Cluster‘ and ‘Live Migration‘ (Management for the host OS) connect to Fabric A, with FF to Fabric B. Of course you can distribute them between Fabric A and B, but for simplicity we kept all three on Fabric A.
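Once the cluster exists, you can map these three networks to their roles and pin live migration to its own network. A sketch, assuming the cluster networks end up with these names (check Get-ClusterNetwork for the real ones) and with a placeholder subnet:

```powershell
# Cluster network roles: 3 = cluster and client, 1 = cluster only, 0 = none.
(Get-ClusterNetwork -Name 'Management').Role     = 3
(Get-ClusterNetwork -Name 'Cluster').Role        = 1
(Get-ClusterNetwork -Name 'Live Migration').Role = 1

# Restrict live migration to its dedicated network (run on every node; subnet is a placeholder).
Enable-VMMigration
Set-VMHost -UseAnyNetworkForMigration $false -MaximumVirtualMachineMigrations 2
Add-VMMigrationNetwork -Subnet '10.0.20.0/24'
```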

vNICs (vSwitches for the guest OS):

The vNICs ‘VM-Ethernet-A‘ and ‘VM-Ethernet-B‘ (vSwitches for the guest OS) connect to Fabric A and Fabric B respectively, each with FF to the other Fabric. The reason we created two of them is that we are hosting a Secure Multi-Tenancy environment. We wanted to distribute tenants between the Fabrics to have as much bandwidth available as possible. We gave each tenant an ID number: the odd numbers connect to Fabric A, and the even numbers connect to Fabric B.
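On the Hyper-V hosts this translates to two external vSwitches, one per uplink, and connecting each tenant’s VMs to the switch that matches its ID. A sketch, assuming the Windows adapters have been renamed to match their UCS vNIC labels (more on that further below); the VM name is a placeholder:

```powershell
# One external vSwitch per uplink, not shared with the management OS.
New-VMSwitch -Name 'VM-Ethernet-A' -NetAdapterName 'VM-Ethernet-A' -AllowManagementOS $false
New-VMSwitch -Name 'VM-Ethernet-B' -NetAdapterName 'VM-Ethernet-B' -AllowManagementOS $false

# Odd tenant IDs land on Fabric A, even IDs on Fabric B.
$tenantId = 12
$switch   = if ($tenantId % 2) { 'VM-Ethernet-A' } else { 'VM-Ethernet-B' }
Get-VMNetworkAdapter -VMName 'TENANT12-VM01' | Connect-VMNetworkAdapter -SwitchName $switch
```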

To get seven vNICs you simply add them to a Server Profile. In UCS Manager you cannot label them as shown above; you have to give them short labels, which cannot be renamed afterwards. So it is essential that you keep them logical or keep track of their purpose. In UCS Manager the vNICs are shown like this:

UCS - vNICs (labels) 2

To give you a better understanding, here is another view within a UCS Server Profile:

UCS - vNICs (labels) 1

You might ask, why do you see ‘iSCSI_Eth0‘ and ‘iSCSI_Eth1‘ twice? Well, to configure boot from iSCSI you need to configure a so-called overlay iSCSI vNIC on top of the vNIC. If you don’t boot from SAN (with iSCSI), you won’t need an overlay vNIC to support iSCSI connectivity.
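Because the labels in UCS are short, it helps to make the Windows adapter names match them. One way is to identify each adapter by the MAC address that UCS assigned from your MAC pool and rename it accordingly; the MAC address below is a placeholder:

```powershell
# List the adapters with their MAC addresses, then rename them to match the UCS vNIC labels.
Get-NetAdapter | Sort-Object MacAddress | Format-Table Name, MacAddress, Status

Get-NetAdapter | Where-Object MacAddress -eq '00-25-B5-0A-00-1F' |
    Rename-NetAdapter -NewName 'iSCSI-A'
```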

When you add a vNIC you also have to configure the right policies and assign VLANs. For instance:

  • You want to connect it to Fabric A and enable FF
  • You want to connect it to certain VLANs
  • You want to apply an MTU size of 1500 or 9000 (Jumbo Frames)
  • You want to apply a certain Adapter Policy
  • You want to apply a certain QoS Policy
  • You might want to apply a certain VMQ Policy

The following is just an example of the vNIC properties:

UCS Manager - vNIC (VM-Ethernet)
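Here is a quick way to verify from Windows that the MTU you configured on the vNIC (1500 or 9000) is actually in effect end-to-end; the interface name and target address are placeholders:

```powershell
# Show the MTU that Windows sees on the interface.
Get-NetIPInterface -InterfaceAlias 'iSCSI-A' -AddressFamily IPv4 |
    Format-Table InterfaceAlias, NlMtu

# 8972 bytes of payload + 28 bytes of ICMP/IP header = 9000; -f sets Don't Fragment.
ping.exe -f -l 8972 10.0.11.10
```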

Here is an example of an Adapter Policy (for an Ethernet Adapter in Windows):

UCS - Adapter Policy (Eth Windows)

VMQ Policy:

If you use Hyper-V and have 10GbE vNICs available for vSwitches, you should definitely use VMQ. Most network interfaces come with a pre-defined number of queues; for example, an Intel X520/540 10GbE NIC offers 64 VMQs per port. If you remember from part 2, a Cisco VIC 1240 + Port Expander allows you to create 256 vNICs or vHBAs per server. This number also defines the number of queues for RSS or VMQ you can assign to a server. In our case we added 7 vNICs (plus the 2 overlay iSCSI vNICs) to the Server Profile, which leaves 256 − 9 = 247 queues that can be assigned to vNICs with a VMQ Policy.

So if you add a vNIC that is going to be used for a vSwitch (like ‘VM-Ethernet-A‘) and you want to have 32 VMQs available, you should configure a VMQ Policy with a number of 33 VMQs and apply that policy to the vNIC. Here is an example of a VMQ Policy:

UCS - VMQ Policy

NOTE: One VMQ is always reserved for the system; that’s why you add one more, which comes down to 33.
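On the Windows side you can then check and tune how those queues are spread over the CPU cores. Another sketch; the processor numbers are placeholders and depend on your CPU/NUMA layout:

```powershell
# Show the VMQ settings and which VMs actually got a queue.
Get-NetAdapterVmq -Name 'VM-Ethernet-A','VM-Ethernet-B'
Get-NetAdapterVmqQueue -Name 'VM-Ethernet-A'

# Spread the queues of each uplink over a different set of cores (placeholder values).
Set-NetAdapterVmq -Name 'VM-Ethernet-A' -BaseProcessorNumber 2  -MaxProcessors 8
Set-NetAdapterVmq -Name 'VM-Ethernet-B' -BaseProcessorNumber 18 -MaxProcessors 8
```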

Adapter Properties (Windows):

Unlike normal NICs (e.g. Intel or Broadcom), you will notice that the properties of a vNIC (hosted on a Cisco VIC) are somewhat limited:

UCS - vNIC Windows Properties 2

In fact, they are not limited at all; it is just that everything is controlled by UCS. This way you can make sure UCS applies the optimal settings for the vNIC.

BIOS Policy:

For a Hyper-V Server there are some best practices for the BIOS Policy. Here are a few examples:

UCS - BIOS Policy (Hyper-V - Main)

UCS - BIOS Policy (Hyper-V - Boot Options)

UCS - BIOS Policy (Hyper-V - Advanced - RAS Memory)

UCS - BIOS Policy (Hyper-V - Advanced - Processor)

UCS - BIOS Policy (Hyper-V - Advanced - Intel Directed IO)

It is too much detail to show our entire configuration, but I think you get the idea.

Other than this, it is just a normal Hyper-V Server and Hyper-V Cluster configuration/deployment, as with any other type of hardware.

P.S.: I have published this part just recently. I am going to review it and might add some more information later. Please be aware that Cisco has many validated designs that describe the entire deployment in detail.

OK, that’s it for now. I hope this part was informative. Click on the link below to continue with the next part.

NEXT >>> My first FlexPod! (Part 6 – System Center)

Filed Under: Cisco, Hyper-V, Microsoft, Unified Computing System Tagged With: Cisco FlexPod, Cisco UCS, Cisco VIC, Converged Network Adapter, Hyper-V, Network Interface Card, vNIC

About Boudewijn Plomp

Boudewijn works as a Senior IT Consultant. His expertise is mainly focused on Microsoft infrastructures, together with Azure cloud services.
