Deploy Storage Spaces Direct 4 Node Cluster

I have been doing a lot of work with Storage Spaces Direct lately for a project I am working on.  We are deploying a Hyper-Converged Microsoft solution using Windows Server 2016, Storage Spaces Direct (S2D), and Software Defined Networking (SDNv2).  This blog covers my experience deploying a 4 node Storage Spaces Direct cluster as part of a POC to load test our hardware design.  There are a good number of resources out there now that will guide you on your path to deploying S2D clusters; this is my experience deploying it with the help of several of them.

Really quick: a Hyper-Converged solution, in a nutshell, is Compute, Storage, and Network running on the same hardware.  I will link the various sites that I used along the way, but this blog is written the way I deployed my final POC solution.

The Hardware

Compute Cluster
  • Model: PowerEdge R730xd Server
  • NIC: 2 x Mellanox ConnectX-4 Lx Dual Port 25GbE DA/SFP Network Adapter
  • Storage Adapter: HBA330 Mini Controller, PERC H330 Adapter for FB
  • Boot Device: 2 x 480 GB SSD Read Intensive (RAID 1)
  • Drives: 4 x 1.6 TB SSD Write Intensive (Caching), 8 x 6 TB 7.2K HDD (Capacity)
  • CPU: 2 x Intel® Xeon® 2.1 GHz 18C/36T
  • Memory: 512 GB
  • TOR Switches: 2 x Dell Z9100-ON
  • OOB Switch: Dell S3048-ON

Note:  For Storage Spaces Direct to work, your storage adapters must support a non-RAID (pass-through) configuration.  There are RAID cards out there that offer a software pass-through mode, but that is not going to work.  We are using the HBA330 Mini Controller / PERC H330 Adapter combo, which lets me present the storage disks without RAID while still keeping a RAID 1 configuration for the OS disks.
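
Once Windows is on a node, a quick sanity check I like to run (my own addition, not from any of the guides) is to confirm the HBA is presenting the data drives directly and that they are eligible for Storage Spaces:

# Data drives should show CanPool = True and a BusType of SAS or SATA, not RAID
Get-PhysicalDisk | Sort-Object Size | Format-Table FriendlyName, MediaType, BusType, CanPool, Size -AutoSize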

 

Deploying Storage Spaces Direct

A quick overview of the installation task:

  1. Configure Physical Network Switches
  2. Prepare the Servers and Storage
  3. Install Windows Server 2016
  4. Configure Windows Server
  5. Configure Networking Parameters
  6. Configure Windows Operating System
  7. Create the Windows Failover Cluster
  8. Enable Storage Spaces Direct
  9. Configure Storage Spaces Direct
  10. Various other crap I did.
  11. Configure a Cluster Witness

 

I.  Configure The Physical Network Switches

I will be honest: I am not a network guy and I have no clue how to configure the physical switches.  However, I will leave the information I handed off to our network guys to help guide you.  Since we used Dell Z9100s, that is really the only hardware I have information on.

There isn’t a lot documented specifically for the physical network side.  Most of what I found was aimed at getting a Hyper-Converged solution ready, so I followed those instructions.

Here is a Microsoft Doc link for SDNv2 Configuration of Network Equipment:

Plan a Software Defined Network Infrastructure

Here is a GitHub link to some Dell Force 10 Examples:

Dell Force10 Configuration Examples

At the end of the day the following should be configured on your TOR Switches:

  • Enable Datacenter Bridging
  • Jumbo Frames with 9014 MTU Configured
  • Priority Flow Control Configured
  • Enable CEE/ETS on Ports
  • Make sure your VLANs are tagged as well.

NOTE:  Please do a little more research on the physical network part.  Once again, I am not a network guy; I passed the information off to our network guys and they did all the configuration.  If your TOR switches are not configured correctly, the entire environment will be affected.

 

II.  Prepare the Servers and Storage

The first thing I did was make sure all my servers were prepped.  That includes things like reserving iDRAC IP addresses in our IPAM tool, creating the service account that I will use for my Hyper-V cluster, and so on.

  1. Reserve IP addresses for iDrac, Mgmt Network, SMB networks.
    • We will create a SET Switch later with 3 VM Networks.  We will have a Management Network and two SMB Networks called SMB1 and SMB2.
  2. Assign iDrac IP addresses
  3. Update Firmware and Drivers via the Dell LifeCycle Controller
  4. Configure the BIOS
    • Enable UEFI Boot Mode
    • Enable TPM
    • Enable Secure Boot
    • Configure System Profile:  Performance
    • Configure Memory Operating Mode:  Optimizer Mode
    • Enable Hyperthreading
    • Enable Virtualization Technology
    • Enable Logical Processor
    • Enable SR-IOV Global Engine
    • Disable Devices
      • Disabled Intel NIC’s
      • Disabled External USB Ports
      • Disabled PXE Boot
  5. Configure RAID for OS Disk

 

III.  Install Windows Server 2016 Datacenter (Core)

I am going to be using Windows Server 2016 Datacenter Core.  There are always arguments within an organization over Core versus Desktop Experience.  We had planned to use Nano Server, but after the announcement from Microsoft a few weeks ago we are glad we never got that far.  At this point Microsoft recommends using Core, so that is what we will use.  The one argument you might get is that the support staff doesn’t have PowerShell down and needs some kind of GUI to do their job.  With S2D the majority of the work is in PowerShell anyway, so even with Desktop Experience installed they would still need to get their skills caught up to manage and support this new technology using PowerShell.

Not much more I can add here.  If you are using iDRAC you can configure the Lifecycle Controller to deploy it unattended, or just go the old-fashioned way of attaching an ISO to the virtual media and deploying it that way.

 

IV.  Configure Windows Server

The next few sections can be done in any order.  If you have DHCP in your environment, your server most likely already has network connectivity, so there are only a few things you would need to enable in the Server Configuration window, like RDP.  I may or may not have created more work for myself, but at this step I did the following:

Since I don’t have DHCP in our datacenter, the first thing I did was configure a static IP address and VLAN on one interface, in the same management network I will use again later for my Management vNIC.  This is basic stuff that can be done either through the Server Configuration console in Core or with PowerShell.  Once you have the network configured it is much easier to run PowerShell against these boxes; at least for me, because you cannot paste anything into the Dell iDRAC virtual console.

# Set IP Address on Interface Slot 2 Port 1
New-NetIPAddress -InterfaceAlias "Slot 2 Port 1" -IPAddress "10.91.92.245" -PrefixLength 23 -DefaultGateway 10.91.92.1

# Set DNS Address
Set-DnsClientServerAddress -InterfaceAlias "Slot 2 Port 1" -ServerAddresses 10.91.232.251

# Set VLAN ID
Set-NetAdapter -Name "Slot 2 Port 1" -VlanID 92

So basically we just set the host’s IP address to 10.91.92.245 with a subnet mask of 255.255.254.0, assigned it the gateway 10.91.92.1 and the DNS server 10.91.232.251, and set the VLAN for this interface to 92.
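
With that management IP in place you can do the rest of the work over PowerShell remoting instead of the iDRAC console.  A quick sketch, assuming you are on a management box and the node is still in a workgroup (so you need to trust it and supply its local admin credentials):

# Trust the new node (it is not domain joined yet) and open a remote session to it
Set-Item WSMan:\localhost\Client\TrustedHosts -Value "10.91.92.245" -Concatenate -Force
Enter-PSSession -ComputerName 10.91.92.245 -Credential (Get-Credential)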

NOTE:  If you set a static IP address on one of the NICs that you will be using for the SET switch in the upcoming steps, remember to come back and remove the VLAN ID from that interface.
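
When you come back to do that, clearing the tag is a one-liner (setting the VLAN ID to 0 removes it):

# Remove the VLAN tag from the physical interface before it joins the SET switch
Set-NetAdapter -Name "Slot 2 Port 1" -VlanID 0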

  1. Enabled Remote Desktop
    # Enable Remote Desktop
    Set-ItemProperty -Path "HKLM:\System\CurrentControlSet\Control\Terminal Server" -Name "fDenyTSConnections" -Value 0 -Verbose
  2. Changed Time Zone
    # Change Timezone to Central Standard Time (UTC-06:00) Central Time (US & Canada)
    tzutil /s "Central Standard Time"
  3. Change Host Name
    # Change HostName
    Rename-Computer -NewName <COMPUTERNAME>

    Note:  Don’t restart right now.  You can restart after the Hyper-V Feature is installed.

  4. Disabled Windows Firewall.  (In production you might not want to do this; instead of being lazy, open only the ports you need.)

    # Disable Windows Firewall
    Set-NetFirewallProfile -Enabled False

  5. Install Windows Roles and Features.
    # Install Roles and Features
    Install-WindowsFeature -Name "Data-Center-Bridging","Failover-Clustering","Hyper-V","RSAT-Clustering-PowerShell","Hyper-V-PowerShell" -IncludeManagementTools -Verbose

You can now restart your server.
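
If you are driving all four nodes from a management box over remoting, a hedged sketch for bouncing them in one shot (node names are placeholders and you need admin rights on each node):

# Restart all four nodes and wait for PowerShell remoting to come back
Restart-Computer -ComputerName NODE1,NODE2,NODE3,NODE4 -Force -Wait -For PowerShell -Timeout 900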

V.  Configure Networking Parameters

There are a few things in these steps that you can do, and probably should do, for your production network.  I never configured the Virtual Machine Queue (VMQ) settings.  It is on by default, but there is some tuning you can do that may give you better overall performance.  I think the biggest reason I didn’t configure it was the lack of information out there on tuning it for Windows Server 2016.  The Lenovo PDF I found online, “Microsoft Storage Spaces Direct (S2D) Deployment Guide”, has a short section on how to configure it, and the guide has some good information on deploying S2D in general.
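
For reference, the sort of VMQ tuning that guide describes comes down to checking the defaults and, if needed, pinning each physical NIC’s queues to a range of cores.  Treat the numbers below as placeholders only; the right values depend on your core count and NUMA layout:

# Check the current VMQ settings on the physical adapters
Get-NetAdapterVmq -Name "Slot 2 Port 1","Slot 2 Port 2","Slot 3 Port 1","Slot 3 Port 2"

# Example only: pin one adapter's queues to a base processor and cap the processor count
Set-NetAdapterVmq -Name "Slot 2 Port 1" -BaseProcessorNumber 2 -MaxProcessors 8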

At this point we are going to configure the networking parameters.  We will create our SET switch and our VM networks.

  1.  Enable Network Quality of Service (QoS)
    # Configure a QoS policy for SMB-Direct
    New-NetQosPolicy "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3
    
    # Turn on Flow Control for SMB
    Enable-NetQosFlowControl -Priority 3
    
    # Make sure flow control is off for other traffic
    Disable-NetQosFlowControl -Priority 0,1,2,4,5,6,7
    
    # Apply a Quality of Service (QoS) policy to the target adapters
    Enable-NetAdapterQos -Name "Slot 2 Port 2","Slot 2 Port 1","Slot 3 Port 2","Slot 3 Port 1"
    
    # Give SMB Direct a minimum bandwidth of 50%
    New-NetQosTrafficClass "SMB" -Priority 3 -BandwidthPercentage 50 -Algorithm ETS
    
    # Disable Flow Control on physical adapters
    Set-NetAdapterAdvancedProperty -Name "Slot 2 Port 2" -RegistryKeyword "*FlowControl" -RegistryValue 0
    Set-NetAdapterAdvancedProperty -Name "Slot 2 Port 1" -RegistryKeyword "*FlowControl" -RegistryValue 0
    Set-NetAdapterAdvancedProperty -Name "Slot 3 Port 2" -RegistryKeyword "*FlowControl" -RegistryValue 0
    Set-NetAdapterAdvancedProperty -Name "Slot 3 Port 1" -RegistryKeyword "*FlowControl" -RegistryValue 0
  2. Create a Hyper-V Switch (SET Switch)
    # Create the Switch Embedded Teaming (SET) with all 4 ports
    New-VMSwitch -Name S2DSwitch -NetAdapterName "Slot 2 Port 1","Slot 2 Port 2","Slot 3 Port 1","Slot 3 Port 2" -EnableEmbeddedTeaming $true -AllowManagementOS $false
    
    # Add host vNICs to the vSwitch just created
    Add-VMNetworkAdapter -SwitchName S2DSwitch -Name Management -ManagementOS
    Add-VMNetworkAdapter -SwitchName S2DSwitch -Name SMB1 -ManagementOS
    Add-VMNetworkAdapter -SwitchName S2DSwitch -Name SMB2 -ManagementOS
    
    
    # Assign the vNICs to a vLAN
    Set-VMNetworkAdapterVlan -VMNetworkAdapterName SMB1 -VlanId 94 -Access -ManagementOS
    Set-VMNetworkAdapterVlan -VMNetworkAdapterName SMB2 -VlanId 95 -Access -ManagementOS
    Set-VMNetworkAdapterVlan -VMNetworkAdapterName Management -VlanId 92 -Access -ManagementOS
    
    # Verify VLANID
    Get-VMNetworkAdapterVlan -ManagementOS
    
    # Restart vNIC Adapter
    Restart-NetAdapter "vEthernet (SMB1)"
    Restart-NetAdapter "vEthernet (SMB2)"
     
    # Enable RDMA on the vNICs just created
    Enable-NetAdapterRDMA -Name "vEthernet (SMB1)","vEthernet (SMB2)"
     
    # Verify RDMA Capabilities
    Get-SmbClientNetworkInterface
    
    
  3. Configure Network Interfaces
    # Set IP Address for Virtual NIC's
    
    #Host Management Adapter
    New-NetIpaddress -InterfaceAlias 'vEthernet (Management)' -IPAddress 10.91.92.249 -DefaultGateway 10.91.92.1 -PrefixLength 23 -AddressFamily IPv4 -Verbose
    #DNS Server Address
    Set-DnsClientServerAddress -InterfaceAlias 'vEthernet (Management)' -ServerAddresses 10.91.232.251
    
    #SMB1 Adapter
    New-NetIpaddress -InterfaceAlias 'vEthernet (SMB1)' -IPAddress 10.91.94.249 -PrefixLength 24 -AddressFamily IPv4 -Verbose
    
    #SMB2 Adapter
    New-NetIpaddress -InterfaceAlias 'vEthernet (SMB2)' -IPAddress 10.91.95.249 -PrefixLength 24 -AddressFamily IPv4 -Verbose

    Note: If you assigned a VLAN ID to one of the physical interfaces at the start, this is where you would remove that VLAN ID from the interface.

  4. Configure Jumbo Packets on NIC’s
    # Set Jumbo Packet on Physical NIC's
    
    Get-NetAdapterAdvancedProperty -Name "Slot 2 Port 1" -RegistryKeyword "*jumbopacket" | Set-NetAdapterAdvancedProperty -RegistryValue 9014
    Get-NetAdapterAdvancedProperty -Name "Slot 2 Port 2" -RegistryKeyword "*jumbopacket" | Set-NetAdapterAdvancedProperty -RegistryValue 9014
    Get-NetAdapterAdvancedProperty -Name "Slot 3 Port 2" -RegistryKeyword "*jumbopacket" | Set-NetAdapterAdvancedProperty -RegistryValue 9014
    Get-NetAdapterAdvancedProperty -Name "Slot 3 Port 1" -RegistryKeyword "*jumbopacket" | Set-NetAdapterAdvancedProperty -RegistryValue 9014
    
    # Set Jumbo Packet on SMB vNIC's
    Get-NetAdapterAdvancedProperty -Name "vEthernet (SMB1)" -RegistryKeyword "*jumbopacket" | Set-NetAdapterAdvancedProperty -RegistryValue 9014
    Get-NetAdapterAdvancedProperty -Name "vEthernet (SMB2)" -RegistryKeyword "*jumbopacket" | Set-NetAdapterAdvancedProperty -RegistryValue 9014
  5. Associate vNICs For RDMA to Physical
    # Associate vNICs configured for RDMA with the physical adapters connected to the switch (see the note below for why)
    Set-VMNetworkAdapterTeamMapping -VMNetworkAdapterName "SMB1" -ManagementOS -PhysicalNetAdapterName "Slot 2 Port 1"
    Set-VMNetworkAdapterTeamMapping -VMNetworkAdapterName "SMB2" -ManagementOS -PhysicalNetAdapterName "Slot 3 Port 2"

    Now, about the step to associate the RDMA-configured vNICs with physical adapters: this was a step in the Microsoft guide “Hyper-Converged solution using Storage Spaces Direct in Windows Server 2016.”  I reached out to confirm what it really does and why we need it.  My worry was that, since we have 4 ports in our SET switch, this would limit the SMB traffic to just those two ports.  I was told it would not; to be honest I don’t remember exactly why it needs to be done, but it has to do with telling SMB Direct which two physical NICs to use.
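
    Once the vNICs, VLANs, and team mappings are in place, a few quick checks will confirm everything is wired the way you expect.  This is my own checklist rather than anything from the guides, and the ping target stands in for another node’s SMB1 address:

    # Confirm each SMB vNIC is mapped to the intended physical adapter
    Get-VMNetworkAdapterTeamMapping -ManagementOS

    # Confirm RDMA is enabled on the SMB vNICs
    Get-NetAdapterRdma | Where-Object Enabled

    # Confirm jumbo frames make it end to end (8972 data bytes + 28 bytes of IP/ICMP headers = the 9000 byte MTU behind the 9014 jumbo packet setting)
    ping -f -l 8972 10.91.94.250

    # Once SMB traffic has flowed, confirm multichannel is using the RDMA-capable vNICs
    Get-SmbMultichannelConnection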

VI.  Configure Windows Operating System.

We are at the point where we finish configuring the Windows Server operating system.  We will also install any drivers we may need.  This could have been done earlier, but hey, I forgot.  At least I remembered before going into production, right?

  1. Install Mellanox Drivers
    • At the time of this blog the latest driver for my card was 1.7.  You can run the .exe from a PowerShell prompt and it will start the installer.  There is a silent install switch as well if needed.
  2. Join Computer to Domain

    Add-Computer -DomainName <FQDN> -Credential (Get-Credential) -Restart

    This will add the computer to your domain and restart after prompting for domain credentials.

  3. Add Domain Accounts to Local Administrator Group

    # Adds Hyper-V Cluster Service Account to Local Admins Group
    Net localgroup Administrators <DOMAINNAME>\svcact_hypervcluster /add

    This will add the service account I have configured to create my cluster, etc.

  4. Set Proxy Server Address

    # Set Proxy Server
    netsh Winhttp set proxy <PROXY SERVER ADDRESS:Port>

    Unless you use a proxy this isn’t necessary.  I didn’t really need to set it, but did anyway.

  5. Run Windows Updates

    # Run Windows Updates
    wuauclt /detectnow

    Our current environment has GPOs that manage the Windows Update source.  All this command does is have the agent detect whether there are any updates.  You can also run Windows Update from the server configuration console.  Later on I also use the Cluster-Aware Updating tool to update my clusters.

  6. Activate Windows

    # Activate Windows
    slmgr /ipk "<PRODUCT ID>"
    slmgr /ato

 

VII.  Create the Windows Failover Cluster

We start by running the Test-Cluster PowerShell command.  Once we have verified everything is good, we can move forward with actually creating our cluster.

  1. Run a Cluster Validation

    # Run Cluster Validation
    Test-Cluster -Node NODE1,NODE2,NODE3,NODE4 -Include "Storage Spaces Direct","Inventory","Network","System Configuration"

  2. Create Windows Failover Cluster

    # Create Cluster
    New-Cluster -Name CLUSTERNAME -Node NODE1,NODE2,NODE3,NODE4 -NoStorage -StaticAddress 10.91.92.240 -Verbose

    When creating your cluster, make sure you include the -NoStorage switch.  Also, if you are assigning your cluster a static IP, make sure you include the -StaticAddress switch as well.

  3. Check the status of Cluster Storage

    # Check Status of Cluster Storage
    Get-StorageSubSystem

  4. Rename Cluster Networks
    # Update the cluster networks that were created by default
    # First, look at what's there
    Get-ClusterNetwork | ft Name, Role, Address
    # Change the cluster network names so they're consistent with the individual nodes
    (Get-ClusterNetwork -Name "Cluster Network 1").Name = "Management"
    (Get-ClusterNetwork -Name "Cluster Network 2").Name = "SMB1"
    (Get-ClusterNetwork -Name "Cluster Network 3").Name = "SMB2"
  5. Check Cluster Network Names and Roles are Set Properly
    # Check to make sure the cluster network names and roles are set properly
    Get-ClusterNetwork | ft Name, Role, Address
  6. Run Clean Disk Script
    If you have used these machines in the past, you should clean the disks and get them ready for S2D.  The script below can be found in various places.

    You can find the original script in section 3.4 of the Microsoft guide:
    Hyper-Converged solution using Storage Spaces Direct in Windows Server 2016

    # Run the clean disk script against every node in the cluster
    icm (Get-Cluster -Name PTC-Jabil-POC | Get-ClusterNode) {
        Update-StorageProviderCache
        Get-StoragePool | ? IsPrimordial -eq $false | Set-StoragePool -IsReadOnly:$false -ErrorAction SilentlyContinue
        Get-StoragePool | ? IsPrimordial -eq $false | Get-VirtualDisk | Remove-VirtualDisk -Confirm:$false -ErrorAction SilentlyContinue
        Get-StoragePool | ? IsPrimordial -eq $false | Remove-StoragePool -Confirm:$false -ErrorAction SilentlyContinue
        Get-PhysicalDisk | Reset-PhysicalDisk -ErrorAction SilentlyContinue
        Get-Disk | ? Number -ne $null | ? IsBoot -ne $true | ? IsSystem -ne $true | ? PartitionStyle -ne RAW | % {
            $_ | Set-Disk -isoffline:$false
            $_ | Set-Disk -isreadonly:$false
            $_ | Clear-Disk -RemoveData -RemoveOEM -Confirm:$false
            $_ | Set-Disk -isreadonly:$true
            $_ | Set-Disk -isoffline:$true
        }
        Get-Disk | ? Number -ne $null | ? IsBoot -ne $true | ? IsSystem -ne $true | ? PartitionStyle -eq RAW | Group -NoElement -Property FriendlyName
    } | Sort -Property PsComputerName, Count

 

VIII.  Enable Storage Spaces Direct

In this step we are going to enable Storage Spaces Direct on the Windows Failover Cluster we just finished building and validating.  The process enables the feature and also creates a storage pool that we will later carve volumes out of.

  1. Enable Storage Spaces Direct
    # Enable Storage Spaces Direct
    $cluster = New-CimSession <CLUSTERNAME> 
    Enable-ClusterStorageSpacesDirect -CimSession $cluster -PoolFriendlyName "S2DPool" -Verbose
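
    After the cmdlet finishes it is worth eyeballing what it created before moving on.  These checks are my own habit, run from one of the cluster nodes:

    # Verify the pool and the storage tiers that Enable-ClusterStorageSpacesDirect created
    Get-StoragePool -FriendlyName "S2DPool" | Format-Table FriendlyName, Size, AllocatedSize, HealthStatus -AutoSize
    Get-StorageTier | Format-Table FriendlyName, ResiliencySettingName, MediaType -AutoSize

    # Confirm the write-intensive SSDs were claimed for cache (their Usage should show Journal)
    Get-PhysicalDisk | Group-Object Usage, MediaType -NoElement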

IX.  Configure Storage Spaces Direct

We now need to create our volumes.  There are various ways you can create them: sized for capacity or for performance, multi-tiered, and so on.  Yet again, a good resource is the Storage Spaces Direct documentation on the Microsoft Docs site, which explains volumes and best practices.  I suggest reading Planning volumes in Storage Spaces Direct first, then moving on to create your volumes.  That article covers choosing how many volumes to create, which all comes down to your workload and needs.  With my configuration I followed the recommendations from Microsoft and created 8 volumes since I have a 4 node cluster.

I didn’t mention it earlier, but with my configuration each node has about 48 TB of physical capacity and 6.4 TB of cache, so roughly 192 TB of raw capacity across the four nodes.  Based on Cosmos Darwin’s S2D calculator, using 100% mirroring with a storage efficiency of 33%, that works out to about 56 TB of usable capacity (a bit less than a straight 192 / 3, since the calculator keeps some raw capacity in reserve for repairs).

So I will create eight 7 TB volumes, all based on the storage tier called Performance.  That means I will be using the Mirror resiliency setting, which gets me the best performance at the sacrifice of storage space.

  1. Create Volumes
    # Create Volumes
    New-Volume -StoragePoolFriendlyName "S2DPool" -FriendlyName "Mirror_Vol-01" -FileSystem CSVFS_ReFS -StorageTierFriendlyNames Performance -StorageTierSizes 7TB
    New-Volume -StoragePoolFriendlyName "S2DPool" -FriendlyName "Mirror_Vol-02" -FileSystem CSVFS_ReFS -StorageTierFriendlyNames Performance -StorageTierSizes 7TB
    New-Volume -StoragePoolFriendlyName "S2DPool" -FriendlyName "Mirror_Vol-03" -FileSystem CSVFS_ReFS -StorageTierFriendlyNames Performance -StorageTierSizes 7TB
    New-Volume -StoragePoolFriendlyName "S2DPool" -FriendlyName "Mirror_Vol-04" -FileSystem CSVFS_ReFS -StorageTierFriendlyNames Performance -StorageTierSizes 7TB
    New-Volume -StoragePoolFriendlyName "S2DPool" -FriendlyName "Mirror_Vol-05" -FileSystem CSVFS_ReFS -StorageTierFriendlyNames Performance -StorageTierSizes 7TB
    New-Volume -StoragePoolFriendlyName "S2DPool" -FriendlyName "Mirror_Vol-06" -FileSystem CSVFS_ReFS -StorageTierFriendlyNames Performance -StorageTierSizes 7TB
    New-Volume -StoragePoolFriendlyName "S2DPool" -FriendlyName "Mirror_Vol-07" -FileSystem CSVFS_ReFS -StorageTierFriendlyNames Performance -StorageTierSizes 7TB
    New-Volume -StoragePoolFriendlyName "S2DPool" -FriendlyName "Mirror_Vol-08" -FileSystem CSVFS_ReFS -StorageTierFriendlyNames Performance -StorageTierSizes 7TB
  2. Rename Cluster Shared Volume Folders
    # Rename the Cluster Shared Volume folders to match the volume names
    CD C:\ClusterStorage
    REN Volume1 Mirror_Vol-01
    REN Volume2 Mirror_Vol-02
    REN Volume3 Mirror_Vol-03
    REN Volume4 Mirror_Vol-04
    REN Volume5 Mirror_Vol-05
    REN Volume6 Mirror_Vol-06
    REN Volume7 Mirror_Vol-07
    REN Volume8 Mirror_Vol-08

What I like to do is make sure my CSV folders match the actual volume names, which is what the commands above did for me.
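
To double check that the folder names now line up, the cluster can report each CSV’s friendly path:

# Confirm each CSV's folder path matches its volume name
Get-ClusterSharedVolume | Select-Object Name, @{Name='Path';Expression={$_.SharedVolumeInfo.FriendlyVolumeName}}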

X.  Various Other Crap I did.

There are a few other random things that I did to my cluster.  I am not sure how much of it anyone else will want or need to do.  I configured Live Migration to use SMB and Kerberos; I had some major issues with Live Migration on my first few clusters, which had nothing to do with the solution and everything to do with something in my environment.  There is also Virtual Machine Queue (VMQ), which you may want to configure.  At this point I installed the Microsoft Monitoring Agent and connected it to my OMS workspace.  I also ran Windows Update again via the Cluster-Aware Updating tool.  Finally, there was one more setting I found in one of Dell’s documents, a hardware timeout for the Spaceport driver, so I configured that as well.

  1. Configure Live Migration
    # Configure Live Migration
    Set-VMHost -VirtualMachineMigrationAuthenticationType Kerberos -VirtualMachineMigrationPerformanceOption SMB
  2. Configure Hardware Timeout
    # Configure Hardware Timeout
    Set-ItemProperty -Path HKLM:\SYSTEM\CurrentControlSet\Services\spaceport\Parameters -Name HwTimeout -Value 0x00002710 -Verbose
    Restart-Computer -Force
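
One more note on updates: the Windows Update pass I mentioned can be driven by Cluster-Aware Updating once the cluster exists.  A minimal sketch, with the cluster name as a placeholder:

# Kick off an on-demand Cluster-Aware Updating run against the cluster
Invoke-CauRun -ClusterName <CLUSTERNAME> -CauPluginName Microsoft.WindowsUpdatePlugin -MaxFailedNodes 1 -MaxRetriesPerNode 2 -Force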

 

At this point you are ready to build out VMs and play with S2D.  You can also continue on and deploy a Scale-Out File Server (SOFS) if you like, but I don’t need one.

 

Configure A Cluster Witness

One thing I have not covered yet is configuring a Witness for the cluster.  There are two ways you can deploy a Witness for a failover cluster like this; since it is not connected to a SAN, you cannot just present a 1 GB disk and make that your quorum witness.  Currently you can choose between a Cloud Witness and a File Share Witness.  I tried the Cloud Witness, but since I need a proxy to reach Microsoft and the Cloud Witness only uses a default port that you cannot change, I fell back to a File Share Witness on a Scale-Out File Server (SOFS).

I had to wait until I had built out my cluster; then I created two VMs and built out a Scale-Out File Server that I have now dedicated to all of my File Share Witness configurations.

# Configure a File Share Witness
Set-ClusterQuorum -FileShareWitness <File Share Witness Path>
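
For completeness, the Cloud Witness variant I originally tried looks like the sketch below; the storage account name and key are placeholders, and it needs outbound HTTPS access to Azure (which is exactly where my proxy got in the way):

# Configure a Cloud Witness instead of a File Share Witness
Set-ClusterQuorum -CloudWitness -AccountName <STORAGE ACCOUNT NAME> -AccessKey <STORAGE ACCOUNT ACCESS KEY>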

 

Final Thoughts by Kris (That is me!)

Storage Spaces Direct is a very cool new technology from Microsoft.  With the proper hardware configuration this can be a very cost-effective way to provide storage and compute to your business.  Just think about a 2 node cluster for a remote location, with no TOR switches needed.  I still need to blog about my VMFleet results on the above cluster; one day I might have the time to do it.

There is one thing you should know: if you are going to manage the fabric using VMM 2016, I suggest you deploy your S2D cluster using VMM, especially if you plan on a fully Hyper-Converged deployment with SDNv2 as well.  You cannot bring an existing, already-built S2D cluster under VMM management without breaking it in the process.  There are blogs that walk you through working around some of the issues, but you might as well avoid them and build the clusters with VMM if you are going to manage them with VMM.  Why can’t you manage a pre-existing S2D cluster?  Basically it comes down to the SET switch: VMM wants to create its own and cannot use the SET switch you already created.

I am really looking forward to finishing this project.  Not because I am tired of it, but because I will have successfully deployed three Hyper-Converged clusters, managed by VMM, across two continents.

I do have another blog in the works on how I created my Management Cluster using VMM.  That will be published soon.  There are some slightly different ways to do it with VMM and some things I am not quite sure I am doing exactly by the book.  :)

 
