Building a Microsoft Hyper-Converged Private Cloud Solution

For my next blog series I have decided to follow a project I am about to start with a larger client that is moving from VMware to a Microsoft Hyper-Converged solution built on Windows Server 2016 and System Center 2016.  This first part is an overview of the project, what we are planning, and our design.  Along the way I hope to link to every blog and site I use for help deploying the entire solution.  I think giving credit where credit is due is very important, so forgive me if I used some of your information and didn't include your blog.

Our solution and requirements are as follows.  We will be building this environment in two datacenters: one here in the United States and another in Singapore.  The entire management environment will live in the US datacenter, with a few exceptions, including some ConfigMgr distribution points, some management jump servers, an OpsMgr gateway server, a VMM library server, and of course the components needed for SDNv2 such as Network Controllers, Gateways, and SLB MUXes.  The management environment will run on its own 4-node hyper-converged cluster, which will host all the SQL instances, System Center components, and so on.  We will be using System Center 2016 Data Protection Manager with Azure integration for Azure Backup.  We have also included Integration Packs for Orchestrator from a company called Kelverion.

Based on the sizing requirements we received, we will have two 16-node compute/storage clusters here in the states and a single 16-node compute/storage cluster in Singapore, dedicated solely to our VMs.

All four clusters will be built on identical hardware.  The exact configuration is still being discussed, but we will be using Dell PowerEdge R730xd servers for compute/storage, Dell Z9100 switches for the ToR switches, and Dell S3048 switches for OOB management.  The current hardware discussion points to a basic per-node configuration:

2 x 3.5" internal HDD for the OS
4 x 1.6 TB SSD for cache
4 x 4 TB 7.2K 3.5" internal HDD for capacity
8 x 4 TB 7.2K HDD for capacity

This should give each node 48 TB of usable capacity and 6.4 TB of cache.  Once we deploy Storage Spaces Direct with three-way mirror resiliency (roughly 33% efficiency), each 16-node cluster should work out to about 250.7 TB usable, 501.3 TB consumed by resiliency, and 16 TB held in reserve.
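To sanity-check that arithmetic, here is a quick back-of-the-envelope sketch (Python just for convenience).  It assumes three-way mirror resiliency at roughly 33% efficiency and one node's worth of capacity drives (4 x 4 TB) held in reserve for rebuilds, which is what makes the numbers line up:

```python
# Rough Storage Spaces Direct capacity math for one 16-node cluster,
# using the drive counts listed above. Assumptions: three-way mirror
# (~33% efficiency) and 4 x 4 TB of capacity kept in reserve.
nodes = 16
capacity_drives_per_node = 12      # 4 + 8 x 4 TB HDDs
drive_tb = 4
cache_tb_per_node = 4 * 1.6        # 4 x 1.6 TB SSD cache drives

raw_tb = nodes * capacity_drives_per_node * drive_tb   # 768 TB raw
reserve_tb = 4 * drive_tb                              # 16 TB reserve
usable_tb = (raw_tb - reserve_tb) / 3                  # ~250.7 TB usable
resiliency_tb = usable_tb * 2                          # ~501.3 TB for mirror copies

print(f"Raw: {raw_tb} TB, cache/node: {cache_tb_per_node} TB, "
      f"usable: {usable_tb:.1f} TB, resiliency: {resiliency_tb:.1f} TB, "
      f"reserve: {reserve_tb} TB")
```

The usable, resiliency, and reserve figures sum back to the 768 TB of raw capacity, which is a handy consistency check when you change drive counts or sizes.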

We haven’t decided on memory but are leaning toward 516 GB per node, which would give us about 8,256 GB of memory per cluster.

We are still discussing CPUs as well.  However, more than likely we will go with two 14-core Intel Xeon processors per node, which gives us 28 cores per node.  At a 4:1 vCPU-to-core ratio, that works out to about 112 vCPUs per node and a total of 1,792 vCPUs per cluster.
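The compute and memory sizing above can be sketched the same way (again Python for convenience, using the per-node figures discussed so far: two 14-core sockets, a 4:1 oversubscription ratio, 516 GB of RAM, and 16 nodes per cluster):

```python
# Per-cluster vCPU and memory sizing from the per-node numbers above.
nodes = 16
sockets, cores_per_socket = 2, 14
vcpu_ratio = 4                      # 4:1 vCPU-to-physical-core ratio
ram_gb_per_node = 516

cores_per_node = sockets * cores_per_socket    # 28 cores
vcpus_per_node = cores_per_node * vcpu_ratio   # 112 vCPUs
vcpus_per_cluster = vcpus_per_node * nodes     # 1,792 vCPUs
ram_gb_per_cluster = ram_gb_per_node * nodes   # 8,256 GB

print(f"{vcpus_per_node} vCPUs/node, {vcpus_per_cluster} vCPUs/cluster, "
      f"{ram_gb_per_cluster} GB RAM/cluster")
```

Adjusting the oversubscription ratio is the easiest lever here; a more conservative 2:1 ratio would halve the vCPU totals without any hardware change.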

Now, as I move forward with design and planning I will add more parts to this series, including my SDNv2 designs and more.

So, I am hoping to have part 2 up soon, which will start covering the design in more detail.

