Hosting an existing ASP.NET Framework web app on Azure Service Fabric using Docker

Want to move existing ASP.NET Framework (as in not .NET Core) web apps to the Azure cloud? Then you may want to consider deploying them as Docker containers to Azure Service Fabric. This article will show you how.

Why Docker?

Docker allows you to package your app as a standalone container that includes everything needed to run it. You deploy not just your app, but your whole environment. If a Docker container works correctly on your machine, it will also work correctly when you deploy it somewhere else. No more “works on my machine” situations. And since you have one complete, pre-configured package, deployment is a breeze.

Docker also allows you to run multiple apps on one machine in isolation, as if they were VMs. But they are not VMs: containers are much lighter-weight, which allows you to run many more of them on one machine. Each container has its own file system, processes and so on. In the case of an ASP.NET Framework container, for example, each app runs in its own clean IIS instance.

Why Azure Service Fabric?

Your apps will be deployed to a minimum of three nodes (virtual machines) and Service Fabric will orchestrate them for you. For example, if an app is running on node A and that node fails, Service Fabric will immediately start the app on node B and you will have no downtime. Another example: if an app on node A receives a heavy load, Service Fabric will start your app on another node and the load will be divided between them.

Azure Service Fabric can also help you save costs. Usually when you deploy a web app to Azure, you use an App Service. An App Service is tied to a VM, so you basically have one VM per app. A Service Fabric cluster has a certain number of VMs and your app will run on one or more of them; Service Fabric controls that for you. With Service Fabric you can, for example, run ten apps on a five-node cluster, which means you pay for five VMs instead of ten. Of course you have to make sure the nodes can handle the load, but even if you misjudge it, you can add or remove nodes later. This scaling can even be done automatically.

Visual Studio Team Services / Automation

This article will not demonstrate using Visual Studio Team Services (VSTS or TFS) for building the Docker image and publishing to Azure. Although that is definitely more convenient and requires fewer prerequisites, in my experience the build often fails because the ASP.NET Docker images take a long time to download if the build agent doesn’t have them cached yet. The free tier of VSTS only allows 30 minutes per build step, and this is often not enough to download the image. However, this shouldn’t be an issue in the future once the images are cached on all build agents, so I will dedicate a future article to doing this with VSTS.

Also note that this is specifically about web apps using ASP.NET Framework (4.7, 4.6 etc.), not about ASP.NET Core.

Prerequisites:

- Visual Studio
- Docker for Windows (switched to Windows containers)
- Azure PowerShell (the AzureRM module)
- The Service Fabric SDK (for the Service Fabric PowerShell cmdlets)
- An Azure subscription

All of the above should already be installed on your machine before you follow the steps in this article. I will not go over installing these.

1. Publish your app to a folder

Open your web app’s project in Visual Studio, select the web app project if the solution contains several projects, and go to Build > Publish <name of your app>. The output folder can be whatever you want; the default location is fine.
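If you prefer scripting this step, the same folder publish can be done with MSBuild from a Developer Command Prompt. A sketch under assumptions: the project name and the output path below are placeholders for your own.

```
REM Hypothetical project name and output path; adjust to your solution.
msbuild MyWebApp.csproj /p:Configuration=Release /p:DeployOnBuild=true ^
  /p:WebPublishMethod=FileSystem /p:DeployDefaultTarget=WebPublish ^
  /p:publishUrl=bin\Release\PublishOutput
```

This produces the same publish output folder as the Visual Studio wizard, which is handy if you later want to automate the build.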

2. Create a docker image

Navigate to the output folder (<your project folder>\bin\Release\PublishOutput by default) and create a new file called Dockerfile (no extension). Open this file with Notepad or another text editor and add the following:

FROM microsoft/aspnet:4.7
ARG site_root=.
ADD ${site_root} /inetpub/wwwroot

Note that the above assumes your project uses ASP.NET Framework 4.7. If your project uses an older version you can substitute the image name on the first line with microsoft/aspnet:4.6.2 or microsoft/aspnet:3.5.

Open a command prompt or PowerShell and navigate to the output folder. Then run the following command. Replace “pikedevapp” with the name of your app; note that Docker image names must be lowercase.

docker build -t pikedevapp .

The above command may take a while to complete as it will download the microsoft/aspnet Docker image, which is more than 10GB. When the command completes you can use the following command and you should see an image with your app’s name listed.

docker images
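Optionally, before pushing the image anywhere, you can run it locally to verify it works. A sketch; the container name is just an example, and on Windows containers you may need to browse to the container’s IP address instead of localhost:

```
docker run -d --name pikedevapp-test -p 8080:80 pikedevapp

REM On some Windows Docker setups, port mapping to localhost does not work.
REM Find the container's IP on the default "nat" network and browse to that instead:
docker inspect -f "{{.NetworkSettings.Networks.nat.IPAddress}}" pikedevapp-test
```

When you are done testing, remove the container with docker rm -f pikedevapp-test.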

3. Log onto your Azure account on PowerShell

Open PowerShell and run the following command to log onto your Azure account. Running this command without any parameters will open a popup with the Microsoft login page.

Login-AzureRmAccount

If the correct subscription is not active after logging in, you can select the desired subscription with the following command.

Select-AzureRmSubscription -SubscriptionName <your subscription name>

4. Create a secure Azure Service Fabric Cluster

In this article I will assume that you don’t have an Azure Service Fabric Cluster created yet. If you do, you can skip a few steps.

Create a new Azure resource group with the following command. I will call it “Cluster” and place it in the West Europe region, but you can of course choose your own values.

New-AzureRmResourceGroup -name Cluster -location WestEurope

We will create a secure Azure Service Fabric Cluster and for that we need a certificate. We will save this certificate in an Azure Key Vault. Create a new Azure Key Vault with the following command. Again, you can modify the parameter values to your liking.

New-AzureRmKeyVault -vaultname PikedevClusterKeys -location WestEurope -ResourceGroupName Cluster -EnabledForDeployment

Edit the following with your own values and then run it to create the actual Azure Service Fabric Cluster and its self-signed certificate. You can save this to a script file and run it or just copy and paste the whole thing into PowerShell.

# General stuff
$location="WestEurope"
$resourcegroupname="Cluster"
$keyvaultname = "PikedevClusterKeys"
$name = "pikedevcluster"

# Create this folder before running this script
$certificateoutputfolder="D:\temp\"
# Edit to match with above values
$certificatesubjectname="pikedevcluster.westeurope.cloudapp.azure.com"
# Change this but keep in mind the password policy
$certificatepassword="Password4u!" | ConvertTo-SecureString -AsPlainText -Force

# For the sake of demoing, I will create a single node cluster. When you actually deploy, be sure to use at the very least 3 nodes.
$clustersize=1

# The VM type for the nodes. I'm using the cheap A2 v2 one for this demo. 
$vmsku = "Standard_A2_v2"

# Create the actual Azure Service Fabric cluster and its self-signed certificate
New-AzureRmServiceFabricCluster -Name $name -ResourceGroupName $resourcegroupname -Location $location -KeyVaultName $keyvaultname `
-ClusterSize $clustersize -VmSku $vmsku -OS WindowsServer2016DatacenterwithContainers `
-CertificateSubjectName $certificatesubjectname -CertificatePassword $certificatepassword -CertificateOutputFolder $certificateoutputfolder

When running this command you will be asked to choose a VmPassword, which is the password for logging onto the nodes.

The last command may take a while to complete. When it completes, navigate to the folder you set as the certificate output folder (“D:\temp\” in the above example). You should see the newly created certificate (.pfx file). Open (double-click) this file to start the Certificate Import wizard and complete the steps to import the certificate. You can keep all default values in the wizard, but on the private key protection screen be sure to check “Mark this key as exportable”.
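If you prefer PowerShell over the wizard, the import can also be sketched with Import-PfxCertificate; the file name below is a placeholder for the .pfx that was generated for you:

```
# Import the cluster certificate into the current user's store, marked exportable.
# The file name is hypothetical; use the .pfx created in your output folder.
Import-PfxCertificate -FilePath "D:\temp\<generated certificate>.pfx" `
  -CertStoreLocation Cert:\CurrentUser\My `
  -Password $certificatepassword -Exportable
```

This assumes you run it in the same session, so $certificatepassword from the earlier script is still set.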

5. Create an Azure Container Registry

Now that we have a container image, we will create an Azure Container Registry to host it. The Azure Service Fabric Cluster will fetch the container from this registry when we deploy. Run the following command to create a new Azure Container Registry and fetch its info into a PowerShell variable called $registry.

$registry = New-AzureRMContainerRegistry -Name "PikedevClusterContainers" `
            -ResourceGroupName "Cluster" -EnableAdminUser -Sku Basic

Now run the following command to retrieve the authentication credentials for this new container registry. The credentials will be saved in the $registrycredentials variable.

$registrycredentials = Get-AzureRmContainerRegistryCredential -Registry $registry

Make sure to perform the next steps in this same PowerShell session, because we will use these variables.

6. Push the container to the registry

Run the following command to log into your newly created container registry using Docker.

docker login $registry.LoginServer -u $registrycredentials.Username -p $registrycredentials.Password

Tag your image with the command below to specify the URL of your registry, the name for the image in the registry, and the version. We will simply use “latest” as the version and keep the same name.

docker tag pikedevapp pikedevclustercontainers.azurecr.io/pikedevapp:latest

Then run the following command to push the image to your registry.

docker push pikedevclustercontainers.azurecr.io/pikedevapp:latest

Lowercase is necessary in these commands; Docker repository names may not contain uppercase characters.

7. Create a docker-compose file

Now return to the app’s output folder and create another file: docker-compose.yml.

version: '3'

services:
  # Set the below to the name of your app, to be used in Service Fabric
  PikedevApp:
    # Edit the below to match the tag from the last step
    image: pikedevclustercontainers.azurecr.io/pikedevapp:latest
    deploy:
      mode: replicated
      # The number of instances you want of the app
      replicas: 1
    ports:
      #Below means <port in Service Fabric>:<port in container>
      - "80:80"

Make sure the indentation in the file stays as shown in the example above, as this affects how the values are grouped.

Port 80 is opened by default so we won’t have to change anything in the cluster’s load balancer. However, when you deploy a second app to the cluster, you will need to specify a different port because 80 will be in use. For example you can use “81:80”. The second port (the port in the container) should always be 80 because inside the container your app is hosted in IIS. You will then also need to add a rule to open port 81 in your Service Fabric Cluster’s load balancer, which is automatically created with your cluster. You can find it in the Azure portal.
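For example, a second app’s docker-compose.yml could look like this (the app and image names below are hypothetical):

```yaml
version: '3'

services:
  SecondApp:
    image: pikedevclustercontainers.azurecr.io/secondapp:latest
    deploy:
      mode: replicated
      replicas: 1
    ports:
      # <port in Service Fabric>:<port in container>
      - "81:80"
```

Only the first number changes; inside its own container each app still listens on port 80 in IIS.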

8. Deploy the container to Azure Service Fabric

The last step is to use Docker Compose to deploy the container to your Azure Service Fabric Cluster. Run the following command to do so.

New-ServiceFabricComposeDeployment -DeploymentName PikedevApp -Compose docker-compose.yml -RegistryUserName $registrycredentials.username -RegistryPassword $registrycredentials.password

When this command completes, you should be able to access your app when you browse to the URL of your Azure Service Fabric Cluster. In this article’s case it would be “PikedevCluster.WestEurope.cloudapp.azure.com”. Note that the actual URL is “PikedevCluster.WestEurope.cloudapp.azure.com:80”, but we can leave out the port since 80 is the standard port for HTTP. When you run a second app on for example port 81, you can access it by going to “PikedevCluster.WestEurope.cloudapp.azure.com:81”.
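If New-ServiceFabricComposeDeployment instead complains that there is no active cluster connection, connect to the cluster first using the certificate you imported in step 4. A sketch, assuming the certificate’s thumbprint has been put in a $thumbprint variable:

```
# Connect to the cluster's management endpoint (port 19000) using the client certificate.
Connect-ServiceFabricCluster -ConnectionEndpoint "pikedevcluster.westeurope.cloudapp.azure.com:19000" `
  -X509Credential -ServerCertThumbprint $thumbprint `
  -FindType FindByThumbprint -FindValue $thumbprint `
  -StoreLocation CurrentUser -StoreName My
```

You can find the thumbprint on the certificate’s details tab in the Windows certificate manager, or via Get-ChildItem Cert:\CurrentUser\My.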

Posted by Pikedev
