When I first started this blog series, I thought I could knock it out in a week or so. As I sit here on a Saturday night, a few weeks after posting the first entry in this series, I realize I might have bitten off more than I can chew. I also missed a lot in that first post on planning. At least as I start my journey, what I had planned to do and what I might actually end up doing are turning out to be two different things.
I want to go back to capacity planning before I continue with this series. I talked about planning for the SQL cluster and the file share cluster that are required by the App Service resource provider, but I didn't go very deep into the capacity planning that needs to be done around those two solutions. I think I did a decent job covering capacity planning for the App Service server roles themselves; I just didn't dig into the supporting infrastructure.
I currently don't have the answers to some of my questions. For instance, for the SQL server required by the App Service RP, how much space do we need to plan for the two* databases that will reside on that SQL instance? If we host those SQL servers on an external host, will the bandwidth limitations of the site-to-site VPN have any effect on the functionality of our PaaS services? Will the size of these databases depend on how many apps are created, how many nodes are in each scale unit, and so on?
I have the same questions about capacity planning for the file share. When I create the file share cluster, how much data should I plan for? I am assuming this will change based on how many apps are deployed and how many nodes we have in each scale unit. Again, will bandwidth limitations have any effect on the files and configurations that are stored for each application? There are also future considerations around having multiple scale units in a region.
Another area of planning I don't think I touched on was the location of these clusters. Are we deploying them externally, or are we building clusters on Azure Stack VMs? There are some things to think about when deploying clusters on Azure or Azure Stack. For instance, Active Directory and the Windows Failover Clustering role, which normally needs a service account to function. (Note: there are ways to do this without AD, but I won't even go there.) Also, how do we make these cluster resources publicly available? There are some tricks that need to be done with external load balancers and more.
For Active Directory, our external host won't have any issues. However, since we are planning for various possible solutions, how do we plan to bring Active Directory into an Azure Stack VM deployment? Do we create a completely new Active Directory environment for each Stack we deploy? Can we route traffic to an on-premises domain to authenticate against? Will that even be allowed by security at various companies? Right now we are only working with Azure Active Directory when we deploy Stacks. Will our plans change for any client that wants to deploy a Stack using ADFS? There are a lot of questions about identity and authentication, and to be honest, identity isn't my strong suit; my background is in systems management.

I will say that over the last few days I have been experimenting with Azure's new Domain Services. I have successfully created a site-to-site VPN connection between Azure and Azure Stack and joined a few Azure Stack VMs to my Azure Active Directory Domain Services domain. So the plan may change again, from deploying and managing a completely separate Active Directory environment for each Stack to using Azure's new Domain Services for authentication. I will blog about that experience here soon and might include it in this series as a solution for Active Directory.
I really don't have any final thoughts for this post. The post itself was a thought! :) This was more a way to get myself thinking about what kind of capacity planning we need to do in more detail. I know that as I continue with this project I will have more questions, and I hope I can answer them as I go.