.NET 6 Minimal API on AWS Lambda

Minimal API in .NET 6 is a great feature. Combined with top-level statements and global using directives you can have a web app that returns “Hello World” with only one source file containing the following few lines:

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

app.MapGet("/", () => "Hello World!");

app.Run();

Of course this can still be expanded by adding controller support, authorization etc. But in any case we can drastically reduce boilerplate code in our web apps. So naturally we also want this in our .NET web apps hosted on AWS Lambda, right? Luckily the folks at AWS have our back, but they haven't really promoted it much yet as far as I can see.

The key to making this work in a Lambda is to add the Amazon.Lambda.AspNetCoreServer.Hosting package to your project through NuGet. This allows you to add the following line to the startup logic:

builder.Services.AddAWSLambdaHosting(LambdaEventSource.RestApi);

This will set up your app to process requests coming from an AWS API Gateway REST API. The LambdaEventSource can also be HttpApi or ApplicationLoadBalancer. The final code simply becomes:

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddAWSLambdaHosting(LambdaEventSource.RestApi);

var app = builder.Build();

app.MapGet("/", () => "Hello World!");

app.Run();

That’s it. Have fun changing your Lambda-hosted web apps into using Minimal API!


Why AWS is a good choice for your .NET application

Say you're planning a new .NET application and deciding on a cloud platform. The obvious choice would seem to be Azure, since both are Microsoft products. However, perhaps surprisingly, .NET is also a first-class citizen on AWS. After developing .NET applications on AWS for several years now, I would like to share why I recommend AWS as a solid cloud platform choice for .NET applications.

1. First-class C#/.NET support for AWS Lambda

AWS Lambda is the main serverless component of AWS. It is the AWS counterpart of Azure Functions. To oversimplify it, you use it to run code in the cloud without worrying about what it runs on. Additionally, it is pay-per-use and has a very generous monthly free tier. During months of development and testing with a full development team we weren't charged a single penny. So definitely don't be afraid to give it a try.

There are many use cases for a serverless component such as Lambda, but perhaps the most common one is processing events or data. It can subscribe to events from message queues, file storage, databases etc. Or you can just manually feed it some data to process. All of this can be done with C#/.NET code, which is great, but nothing new compared to Azure Functions, right? The real power lies in Lambda's support for ASP.NET Core.

ASP.NET Core

Basically you can run your whole ASP.NET Core web application, for example a REST API, on a single Lambda. No separate Lambda per API method, but your whole set of API controllers in one Lambda. So how is this done? Well, Lambdas process data, usually in JSON format. On AWS we can use API Gateway to accept HTTP requests, which then get converted to JSON and handed to the Lambda to process. The Lambda's result then gets converted to an HTTP response and returned to the requester. The ASP.NET Core framework doesn't know about any of this and just treats the request as if it came from any web server, so it will route the request to the correct controller and method.

API Gateway converts your HTTP request to something Lambda understands. The Amazon.Lambda.AspNetCoreServer package converts the Lambda input into something ASP.NET Core understands. ASP.NET Core handles the request just like it would if it came from a regular web server and routes it to the correct controller and method.
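To give an idea of what that looks like in code, here is a minimal sketch of the Lambda entry point for the controller-based hosting model with the Amazon.Lambda.AspNetCoreServer package (the class, namespace and Startup names are just placeholders for your own project):

using Amazon.Lambda.AspNetCoreServer;
using Microsoft.AspNetCore.Hosting;

namespace MyWebApi
{
    // The base class translates API Gateway REST API events into ASP.NET Core requests
    public class LambdaEntryPoint : APIGatewayProxyFunction
    {
        protected override void Init(IWebHostBuilder builder)
        {
            // Reuse the same Startup class as the regular Kestrel-hosted version of the app
            builder.UseStartup<Startup>();
        }
    }
}

The Lambda's handler configuration then points at this class, and from there everything is regular ASP.NET Core.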

This way you get all the ASP.NET Core goodies for free, such as dependency injection, authorization, configuration etc. Serverless hosting of your ASP.NET Core web application with only two cloud components and barely a noticeable difference in the code? Read more about it here.

2. Infrastructure-as-code in C#

If you’re not yet part of the everything-as-code movement you should probably get moving because you’re missing out! I also plan to cover this subject more on this blog. It’s great to be able to specify your whole cloud infrastructure in code instead of logging onto the Azure Portal or AWS Console jungles to click all your components together. Small cloud component configuration change? Edit it in code and push it. Assuming you have good CI/CD, you have now applied your configuration change.

AWS offers CloudFormation as an infrastructure-as-code solution, similar to Azure's ARM templates. Simply defining your whole infrastructure as a CloudFormation template can still be quite messy though. All the different components, each with their own settings, can add up to quite a large template, and the JSON or YAML format doesn't provide much of an overview. This is where the AWS Cloud Development Kit (CDK) comes in. The CDK allows you to use C# to define your infrastructure. Not just that, but it provides so-called "constructs" that make it easier to create and configure cloud components. For example you can create an S3 bucket (AWS file storage component) using:

var bucket = new Bucket();

And a Lambda using:

var function = new Function();

Constructor parameters left out for clarity.

Not only is this all you need to write to create these components, both of these objects will now also have several helper methods to extend their configuration. For instance, you can now say:

bucket.GrantReadWrite(function);

And the CDK will take care of creating roles and policies to allow your Lambda to access this S3 bucket for reading and writing. Go take a look at what monstrosities you need to have in a CloudFormation template to achieve the same, or how much clicking and typing it takes in the AWS Console, and you will understand how powerful this is.
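To make this concrete, here is a minimal sketch of a CDK v2 stack that puts the three snippets above together (the construct IDs, the handler string and the asset path are placeholders, and it assumes the Amazon.CDK.Lib NuGet package is referenced):

using Amazon.CDK;
using Amazon.CDK.AWS.Lambda;
using Amazon.CDK.AWS.S3;
using Constructs;

public class StorageStack : Stack
{
    public StorageStack(Construct scope, string id, IStackProps props = null) : base(scope, id, props)
    {
        // S3 bucket (AWS file storage component)
        var bucket = new Bucket(this, "FilesBucket");

        // Lambda function; handler and code path are placeholders for your own project
        var function = new Function(this, "ProcessingFunction", new FunctionProps
        {
            Runtime = Runtime.DOTNET_6,
            Handler = "MyApp::MyApp.Function::FunctionHandler",
            Code = Code.FromAsset("./src/MyApp/bin/Release/net6.0/publish")
        });

        // Creates the IAM role and policies that allow the Lambda to read and write the bucket
        bucket.GrantReadWrite(function);
    }
}

Running cdk deploy then synthesizes this into a CloudFormation template and deploys it for you.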

3. Full-featured .NET SDK

I guess this one is pretty much stating the obvious, but it's still worth mentioning. The AWS SDK for .NET is split into per-service packages that are easily installed through NuGet. Pretty much all the main AWS services can be interacted with using this SDK. Many services also have higher-level helper methods that simplify using them even more. For example, reading a file from an AWS S3 bucket (file storage):

var stream = await amazonS3Client.GetObjectStreamAsync("<bucket name>", "<bucket file path and name>", null);

Now you have a stream of the file. Want to write a file to that same storage?

await amazonS3Client.UploadObjectFromStreamAsync("<bucket name>", "<bucket file path and name>", stream, null);

Simple as that. Obviously this is only the tip of the iceberg. The SDK provides an extensive list of APIs for almost every service, so you can perform even the most advanced operations.
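Put together, a minimal sketch of reading a file and writing it back under a different key could look like this (the bucket name and keys are placeholders; with .NET 6 top-level statements and implicit usings this is essentially the whole program):

using Amazon.S3;

var amazonS3Client = new AmazonS3Client();

// Read a file from the bucket as a stream
var stream = await amazonS3Client.GetObjectStreamAsync("my-bucket", "incoming/report.csv", null);

// Buffer it in memory to keep the example simple (fine for small files)
using var buffer = new MemoryStream();
await stream.CopyToAsync(buffer);
buffer.Position = 0;

// Write the buffered content to another key in the same bucket
await amazonS3Client.UploadObjectFromStreamAsync("my-bucket", "processed/report.csv", buffer, null);

The last argument is an optional set of additional request properties, which you can simply leave as null.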

4. Easy SPA hosting

This isn't directly related to .NET, but indirectly it definitely plays a part. All of the frontends I currently create are Single Page Applications (SPAs), built with Angular, React, Blazor WASM or any other such framework, and this seems to be the case for other .NET projects around me as well. It makes sense to host this frontend on the same cloud platform, and of course this should be pain-free. I've found this to be the case on AWS. There are two relevant services, one of them optional.

AWS S3

The main service for hosting a SPA is AWS S3. This is the cloud file storage offering from AWS. In S3 you create so-called Buckets, which hold your files and folders. A Bucket also has an option called "static website hosting", which turns the bucket into a web root directory.

If you want to host, for example, an Angular application on S3, just build a production release and add the complete contents of the publish directory to the S3 bucket. This means all the HTML, JS, CSS, images and whatnot. Now make sure public access is enabled on the bucket and then enable "static website hosting" in the properties. A unique URL will be automatically generated for your bucket. Visit the URL and behold your Angular application.
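If you're using the CDK from point 2, a minimal sketch of such a bucket, including uploading your build output, could look like this (it assumes the code lives inside a CDK stack class like the one sketched earlier; the construct IDs and the dist path are placeholders, and depending on your account's public access settings you may need to tune the BlockPublicAccess options):

using Amazon.CDK.AWS.S3;
using Amazon.CDK.AWS.S3.Deployment;

// Bucket configured for static website hosting
var spaBucket = new Bucket(this, "SpaBucket", new BucketProps
{
    WebsiteIndexDocument = "index.html",
    WebsiteErrorDocument = "index.html",
    PublicReadAccess = true,
    BlockPublicAccess = BlockPublicAccess.BLOCK_ACLS
});

// Upload the production build output (e.g. the Angular dist folder) into the bucket
new BucketDeployment(this, "DeploySpa", new BucketDeploymentProps
{
    Sources = new[] { Source.Asset("./dist/my-app") },
    DestinationBucket = spaBucket
});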

AWS CloudFront

So now that you have your basic SPA hosting set up, you can optionally take it to the next level with AWS CloudFront. CloudFront is the CDN offering from AWS. You can use it to deliver your SPA at high speeds to your users all around the globe. And remember, that initial page load is crucial to satisfy those impatient users! CloudFront also offers additional security such as protection from DDoS attacks and you can use it to link your custom TLS certificate and domain to your SPA.

5. Good container support

This one is also indirectly related to .NET, but good to know if you’re deciding on a cloud platform. And it’s definitely related to .NET if you take into account that nowadays containers should be in the toolbox of any .NET developer!

AWS has multiple container services, but the one I like to use is AWS Elastic Container Service (ECS). ECS has a launch type called Fargate, which is basically serverless container hosting. So just like Lambda, you can use it to run your code in the cloud without worrying about any servers, and you pay per use.

So when to use ECS instead of Lambda? Well, Lambda has a hard timeout of 15 minutes. This means that if you have a long-running task you won't be able to run it on Lambda. Additionally, ECS allows more CPU and memory to be allocated to run your code. These two things combined make ECS an excellent choice for heavy tasks like big imports and exports or big data processing.

If you want to run, for example, a .NET Core console application as a serverless container on AWS, here's what you need to do:

  1. Create a docker image using one of the .NET Core base images
  2. Create a container registry on AWS Elastic Container Registry (ECR)
  3. Push your docker image to ECR
  4. Create an ECS Task Definition for your image
  5. Create an ECS Cluster
  6. Run your Task Definition on your Cluster with Fargate as your launch type
  7. Stop the task if you want to stop your application. Or if the application exits by itself when it’s done then the task will be stopped automatically
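The steps above are mostly point-and-click or CLI work, but to give an idea of step 6: starting the task can also be done from C# with the SDK from point 3. A minimal sketch (the cluster, task definition and subnet values are placeholders):

using System.Collections.Generic;
using Amazon.ECS;
using Amazon.ECS.Model;

var ecsClient = new AmazonECSClient();

// Run one instance of the task definition on the cluster using the Fargate launch type
var response = await ecsClient.RunTaskAsync(new RunTaskRequest
{
    Cluster = "my-cluster",
    TaskDefinition = "my-console-app-task",
    LaunchType = LaunchType.FARGATE,
    Count = 1,
    NetworkConfiguration = new NetworkConfiguration
    {
        // Fargate tasks need a VPC subnet to run in
        AwsvpcConfiguration = new AwsVpcConfiguration
        {
            Subnets = new List<string> { "subnet-0123456789abcdef0" },
            AssignPublicIp = AssignPublicIp.ENABLED
        }
    }
});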

The basic serverless stack for .NET applications on AWS

So these are my five reasons why I think AWS is a good choice for .NET applications. Of course there are also some downsides and annoyances to AWS and I will cover those in a separate article, but nothing to dismiss AWS over. To finish things up I just want to sum up the basic cloud stack you can use for your next cloud-native serverless .NET application on AWS:

  • Web API: AWS Lambda + API Gateway
  • Short-processing tasks: AWS Lambda
  • Long-processing tasks: AWS ECS
  • File storage: AWS S3
  • Web frontend: AWS S3 + AWS CloudFront
  • Relational database: AWS Aurora
  • NoSQL database: AWS DynamoDB
  • Message queue: AWS SQS
  • Event bus: AWS EventBridge

Good luck!


Custom commit message tags to control your CI build on Azure DevOps

Having an extensive CI build for your pull requests on Azure DevOps can be a great way to guard the quality of your codebase. Compile errors, style errors and regressions can all be caught before they reach the main code branch. However, running your full set of automated checks every time can cause unwanted delays in your development process. Using commit message tags can be a great way to avoid this problem.

A real world example

Firstly, I will show an example of how I am personally using this in my current project. Our pull request CI build includes the following tasks:

  • Build all backend projects
  • Run unit tests
  • Run unit-integration tests
  • Build frontend project

From this selection, the last two tasks take the longest time. However, not every backend project contains unit-integration tests and the frontend is often untouched in pull requests. This is why we have introduced the commit tags [skip-it] and [skip-fe] to skip the unit-integration tests and the frontend build respectively. This has greatly reduced the waiting time for the CI build of pull requests that contain no frontend changes or that only touch backend projects without unit-integration tests.

Set build variables based on custom commit tags

We use our custom commit tags simply by retrieving the latest commit message and checking if it contains any of our tags. This can be done by using the Azure DevOps Services REST API. Personally I used PowerShell to create the script for these actions. PowerShell is included with all the build agents, is cross-platform and is very flexible, so I definitely recommend it for these types of scripts.

- powershell: | 
   $baseUrl = "$($env:SYSTEM_TEAMFOUNDATIONCOLLECTIONURI)$($env:SYSTEM_TEAMPROJECTID)/_apis/git/repositories/$($env:BUILD_REPOSITORY_ID)"
   $headers = @{ Authorization = "Bearer $($env:SYSTEM_ACCESSTOKEN)" }
   
   $url = "$($baseUrl)/pullrequests/$($env:SYSTEM_PULLREQUEST_PULLREQUESTID)?api-version=5.1"
   $pr = Invoke-RestMethod -Uri $url -Headers $headers
   
   $url = "$($baseUrl)/commits/$($pr.lastMergeSourceCommit.commitId)?api-version=5.1"
   $commit = Invoke-RestMethod -Uri $url -Headers $headers
   
   if ($commit.comment.ToLowerInvariant().Contains("[skip-it]")) {
     Write-Host "##vso[task.setvariable variable=skip_it]true"
   } else {
     Write-Host "##vso[task.setvariable variable=skip_it]false"
   }
  displayName: Process custom commit tags

We query the Azure DevOps Services REST API twice: first to retrieve the details of the current pull request, and then to retrieve the last commit on the pull request's source branch. We use the pull request data to get the last commit ID and the commit data to get the commit message. If the commit message contains our tag [skip-it], we create a build variable named skip_it with the value true. Note that the script authenticates with the pipeline's OAuth token ($env:SYSTEM_ACCESSTOKEN), so make sure that token is made available to scripts in your pipeline.

Using build variables to skip a build task

Build tasks can have a condition to determine if they should run or not. This is done by adding the condition property.

- task: DotNetCoreCLI@2
  condition: and(succeeded(), ne(variables.skip_it, true))
  displayName: 'Run integration tests'
  inputs:
    command: test
    projects: '**/*IntegrationTests.csproj'

In this example the condition expression uses ne which means “not equal”. With this condition the task will only run if the build variable skip_it does not equal the value true. So now we just have to add [skip-it] to our commit message when we push to the pull request and it will skip the integration tests.


Azure Container Service (AKS) vs Azure Service Fabric

Azure Service Fabric and the new Azure Container Service (AKS) are both great container orchestration services on Azure. Which one is currently the best to start using? That's what I was wondering, so I looked up some facts. Let me preface this article by saying that Service Fabric actually has several use cases. For example, it has a comprehensive programming model that is very well suited to creating microservices. These services can then be orchestrated as a cluster by Service Fabric. However, it is also possible to let Service Fabric orchestrate your containers, and that is the only part I'm talking about here because it is the only part that is comparable to Azure Container Service (AKS). So if you still have to build your (micro)services, then do look into the complete Service Fabric offering. If you're looking for which container orchestration service to use on Azure then this article might help you a bit.

If you're not yet familiar with AKS you may be thinking it doesn't make sense as an abbreviation for "Azure Container Service", and that's correct. The "K" actually stands for Kubernetes, which is what this service is all about: managed Kubernetes on Azure.

What they both do

Just in case you're not really familiar with either of these services, here is what they both do. This is not the complete list, but I feel these are worth mentioning.

  • They coordinate a cluster of computers (nodes) connected to work as a single unit. This means you just deploy to the cluster and it will take care of running it on the node that is currently best suited for it
  • They provide high availability for your application. In other words, they make sure your app is always running. If your app is running on a certain node and that node fails, the cluster will spawn your app on a different node
  • They allow you to scale your apps very easily. Your cluster can’t handle the load? Just add a couple more nodes
  • They re-balance your nodes based on resource consumption. Two apps running on a node and they get into a “this node ain’t big enough for the both of us” dispute? The cluster is the sheriff that will end the fight and send one of them to another node. But there’s also another side; have an extra node that isn’t doing much? Might as well spawn an instance of another app on there to optimally use your resources
  • They enable you to upgrade your apps without downtime by temporarily running your app on another node while it is being updated
  • They monitor the health of your apps and restart them when necessary
  • They allow you to apply resource governance to your apps so you can limit the amount of resources a certain app can use on a node
  • They give you a complete cluster and container orchestration platform on Azure for free; you only pay for the nodes (Azure VMs) that are being used by the cluster

What's different in Azure Service Fabric

  • It supports Windows containers. AKS only supports Linux containers, so you don't really have a choice if you want to lift-and-shift some legacy .NET applications to containers and the cloud. Do keep the time of writing of this article in mind though, because Windows container support is also expected on AKS around Q2 of 2018
  • It's already a first-class Azure service, meaning you have all the standard tools available to you such as the Azure CLI, PowerShell, the REST API etc. And the Azure Portal can be used to configure many things in your cluster. Again it should be noted that AKS is aiming to become a similarly complete Azure service in the future
  • It has great tooling in Visual Studio 2017. With a few clicks you can set up your Service Fabric container package, complete with configuration files and a deployment script. And you can then deploy it to your cluster right from Visual Studio just as easily

What's different in Azure Container Service (AKS)

  • It's based on the open source Kubernetes, so you can expect quick fixes to issues and other benefits from community involvement. Also, Kubernetes is built upon over a decade of experience of running production workloads at Google and, let's face it, they need highly available and highly scalable systems
  • The configuration is a bit more complex because you have to define all the components in your cluster, such as load balancers and endpoints. In Service Fabric more of this is done for you automatically. However, this also means that in AKS you can have a more fine-grained configuration

Which one to choose?

Honestly, it depends on when you are reading this article. At the moment AKS is still in preview, which is why it is not as complete an Azure service as Service Fabric. However, this is going to change rapidly. Everything I wrote about what's different in Service Fabric will eventually also be implemented for AKS. Kubernetes is an industry standard container orchestrator. Meanwhile, Service Fabric is a microservices framework that can also orchestrate containers instead of services made with the framework. If you're looking for the right Azure service to host your container cluster, choose AKS. If you're planning to build a microservices application from scratch, do look into Service Fabric's programming model because it's pretty great, but my personal recommendation would be: create your microservices using .NET Core and run them in Linux containers on AKS.


Creating (ASP) .NET Core Docker containers that can be built anywhere

When you create a new ASP.NET Core application in Visual Studio and add Docker support (Project -> Docker Support), a nice Dockerfile gets created for you. It will look like this:

FROM microsoft/aspnetcore:2.0
ARG source
WORKDIR /app
EXPOSE 80
COPY ${source:-obj/Docker/publish} .
ENTRYPOINT ["dotnet", "WebApplication.dll"]

And indeed, when you run your container from Visual Studio, this Dockerfile works.

That is what I experienced. However, I then wanted to set up continuous integration on Visual Studio Team Services (VSTS) and the build failed in the Docker Build phase. For some reason I couldn't figure out how to get the build output copied into the container on the VSTS build agent. That's when I found a different way of building an ASP.NET Core app container: building the app inside the container.

The microsoft/aspnetcore-build Docker image contains the complete .NET Core SDK as well as other common requirements for an ASP.NET Core build environment, for example Node.js. This means this image can be used to build your .NET Core app from source. So I built a Dockerfile that uses two Docker images: one to build the app and one to run the app. Here’s what it looks like. I added comments to explain the steps.

# Use the ASP.NET Core build environment image
FROM microsoft/aspnetcore-build:2.0 AS build-env
WORKDIR /app

# Copy all local source and project files to the container
COPY . .

# Restore packages in the container
RUN dotnet restore

# Create a Release build and publish files to a folder named "out"
RUN dotnet publish -c Release -o out

# Use the ASP.NET Core runtime image
FROM microsoft/aspnetcore:2.0
WORKDIR /app

# Copy the published files from the "out" folder in the build environment image to the runtime image
COPY --from=build-env /app/out .

# Configure the app entrypoint
ENTRYPOINT ["dotnet", "WebApplication.dll"]

You most likely also need to remove the line containing just “*” in your .dockerignore file to allow copying all source files

Now this container can be built on any system that has Docker installed, using only your source files. No need to worry about the state of the build environment on that system.


Hosting an existing ASP.NET Framework web app on Azure Service Fabric using Docker

Want to move existing ASP.NET Framework (as in not .NET Core) web apps to the Azure cloud? Then you may want to consider deploying them as Docker containers to Azure Service Fabric. This article will show you how.

Why Docker?

Docker allows you to package your app as a standalone container that includes everything needed to run. You will not just deploy your app, but actually your whole environment. When you create a Docker container and it works correctly on your machine, then it will for sure also work correctly when you deploy it somewhere else. No more “works on my machine” situations. And since you have one complete and pre-configured package, deployment is a breeze.

Docker also allows you to run multiple apps on one machine in isolation as if they were VMs, but they are not VMs. They are much lighter-weight than VMs and therefore allow you to run many more of them on one machine. Each container will have its own file system, processes etc. For example in the case of an ASP.NET Framework container, each app will run in its own clean IIS instance.

Why Azure Service Fabric?

Your apps will be deployed to a minimum of three nodes (virtual machines) and Service Fabric will orchestrate them for you. For example, you have an app running on Node A and that node gets a failure? Service Fabric will immediately start up that app on Node B and you will have no downtime. Another example, you have an app running on Node A and you receive a heavy load? Service Fabric will start up your app on another node and the load will be divided between them.

Azure Service Fabric can also help you save costs. Usually when you deploy a web app to Azure, you will use an App Service. An App Service is tied to a VM, so you basically have one VM per app. A Service Fabric cluster has a certain number of VMs and your app will run on one or more of them. Service Fabric will control that for you. With Service Fabric you can for example run ten apps on a five node cluster, so that means you pay for five VMs instead of ten. Of course you have to keep in mind that the nodes can handle the load. However, even if you misjudge it, you can add or remove more nodes later. This scaling can even be done automatically.

Visual Studio Team Services / Automation

This article will not demonstrate using Visual Studio Team Services (VSTS or TFS) for building the Docker image and publishing to Azure. Although it's definitely more convenient and requires fewer prerequisites, in my experience the build often fails because the ASP.NET Docker images take a long time to download if the build agent doesn't have them cached yet. The free tier of VSTS only allows 30 minutes per build step and this is often not enough to download the image. However, this shouldn't be an issue anymore in the future when the images are cached on all build agents, so I will dedicate a future article to doing this with VSTS.

Also note that this is specifically about web apps using ASP.NET Framework (4.7, 4.6 etc.), not about ASP.NET Core.

Prerequisites:

The tools used in this article (Visual Studio, Docker for Windows, the Azure PowerShell module and the Service Fabric SDK) should already be installed on your machine before you follow the steps. I will not go over installing these.

1. Publish your app to a folder

Open your web app's project in Visual Studio, select the web app project if there are several projects in the solution and go to Build > Publish <name of your app>. The output folder can be whatever you want; the default location is fine.

2. Create a docker image

Navigate to the output folder (<your project folder>/bin/Release/PublishOutput if you kept the default location) and create a new file called Dockerfile (no extension). Open this file with Notepad or another text editor and add the following:

FROM microsoft/aspnet:4.7
ARG site_root=.
ADD ${site_root} /inetpub/wwwroot

Note that the above assumes your project uses ASP.NET Framework 4.7. If your project uses an older version you can substitute the image name on the first line with microsoft/aspnet:4.6.2 or microsoft/aspnet:3.5.

Open a command prompt or PowerShell and navigate to the output folder. Then run the following command. Replace "pikedevapp" with the name of your app (Docker image names must be lowercase).

docker build -t pikedevapp .

The above command may take a while to complete as it will download the microsoft/aspnet Docker image, which is more than 10GB. When the command completes you can use the following command and you should see an image with your app’s name listed.

docker images

3. Log onto your Azure account on PowerShell

Open PowerShell and run the following command to log onto your Azure account. Running this command without any parameters will open a popup with the Microsoft login page.

Login-AzureRmAccount

If the correct subscription is not active after logging in, you can select the desired subscription with the following command.

Select-AzureRmSubscription -SubscriptionName <your subscription name>

4. Create a secure Azure Service Fabric Cluster

In this article I will assume that you don’t have an Azure Service Fabric Cluster created yet. If you do, you can skip a few steps.

Create a new Azure resource group with the following command. I will call it “Cluster” and place it in the West Europe region, but you can of course choose your own values.

New-AzureRmResourceGroup -name Cluster -location WestEurope

We will create a secure Azure Service Fabric Cluster and for that we need a certificate. We will save this certificate in an Azure Key Vault. Create a new Azure Key Vault with the following command. Again, you can modify the parameter values to your liking.

New-AzureRmKeyVault -vaultname PikedevClusterKeys -location WestEurope -ResourceGroupName Cluster -EnabledForDeployment

Edit the following with your own values and then run it to create the actual Azure Service Fabric Cluster and its self-signed certificate. You can save this to a script file and run it or just copy and paste the whole thing into PowerShell.

# General stuff
$location="WestEurope"
$resourcegroupname="Cluster"
$keyvaultname = "PikedevClusterKeys"
$name = "pikedevcluster"

# Create this folder before running this script
$certificateoutputfolder="D:\temp\"
# Edit to match with above values
$certificatesubjectname="pikedevcluster.westeurope.cloudapp.azure.com"
# Change this but keep in mind the password policy
$certificatepassword="Password4u!" | ConvertTo-SecureString -AsPlainText -Force

# For the sake of demoing, I will create a single node cluster. When you actually deploy, be sure to use at the very least 3 nodes.
$clustersize=1

# The VM type for the nodes. I'm using the cheap A2 v2 one for this demo. 
$vmsku = "Standard_A2_v2"

# Create the actual Azure Service Fabric cluster and its self-signed certificate
New-AzureRmServiceFabricCluster -Name $name -ResourceGroupName $resourcegroupname -Location $location -KeyVaultName $keyvaultname `
-ClusterSize $clustersize -VmSku $vmsku -OS WindowsServer2016DatacenterwithContainers `
-CertificateSubjectName $certificatesubjectname -CertificatePassword $certificatepassword -CertificateOutputFolder $certificateoutputfolder

When running this command you will be asked to choose a VmPassword which is the password to log onto the nodes

The last command may take a while to complete. When it has completed, navigate to the folder you've set as the certificate output folder ("D:\temp\" in the above example). You should see the newly created certificate (.pfx file). Open (double-click) this file to start the Certificate Import wizard and complete the steps to import the certificate. You can keep all default values in the wizard, but on the private key protection page be sure to check "Mark this key as exportable".

5. Create an Azure Container Registry

Now that we have a container image, we will create an Azure Container Registry to host it. The Azure Service Fabric Cluster will fetch the image from this registry when we deploy. Run the following command to create a new Azure Container Registry and fetch its info into a PowerShell variable called $registry.

$registry = New-AzureRMContainerRegistry -Name "PikedevClusterContainers" `
            -ResourceGroupName "Cluster" -EnableAdminUser -Sku Basic

Now run the following command to retrieve the authentication credentials for this new container registry. The credentials will be saved in the $registrycredentials variable.

$registrycredentials = Get-AzureRmContainerRegistryCredential -Registry $registry

Make sure to perform the next steps in this same PowerShell session because we will use these variables

6. Push the container to the registry

Run the following command to log into your newly created container registry using Docker.

docker login $registry.LoginServer -u $registrycredentials.Username -p $registrycredentials.Password

Tag your image with the command below to specify the URL of your registry, the name for the image in the registry and the version. We will simply use "latest" as the version and keep the same name.

docker tag pikedevapp pikedevclustercontainers.azurecr.io/pikedevapp:latest

Then run the following command to push the image to your registry.

docker push pikedevclustercontainers.azurecr.io/pikedevapp:latest

Lowercase is necessary in these commands

7. Create a docker-compose file

Now return to the app’s output folder and create another file: docker-compose.yml.

version: '3'

services:
  # Set the below to the name of your app, to be used in Service Fabric
  PikedevApp:
    # Edit the below to match the tag from the last step
    image: pikedevclustercontainers.azurecr.io/pikedevapp:latest
    deploy:
      mode: replicated
      # The number of instances you want of the app
      replicas: 1
    ports:
      #Below means <port in Service Fabric>:<port in container>
      - "80:80"

Make sure the indentation in the file stays as shown in the example above, as this affects how the values are grouped

Port 80 is opened by default so we won't have to change anything in the cluster's load balancer. However, when you deploy a second app to the cluster, you will need to specify a different port because 80 will be in use. For example you can use "81:80". The second port (the port in the container) should always be 80 because inside the container your app is hosted in IIS. You will then also need to add a rule to open port 81 in your Service Fabric Cluster's load balancer, which is automatically created with your cluster. You can find it in the Azure portal.

8. Deploy the container to Azure Service Fabric

The last step is to deploy the container to your Azure Service Fabric Cluster as a compose deployment. Make sure you are connected to your cluster in PowerShell (using Connect-ServiceFabricCluster with the certificate you imported earlier), then run the following command.

New-ServiceFabricComposeDeployment -DeploymentName PikedevApp -Compose docker-compose.yml -RegistryUserName $registrycredentials.username -RegistryPassword $registrycredentials.password

When this command completes, you should be able to access your app when you browse to the url of your Azure Service Fabric Cluster. In this article’s case it would be “PikedevCluster.WestEurope.cloudapp.azure.com”. Note that the actual URL is “PikedevCluster.WestEurope.cloudapp.azure.com:80”, but we can leave out the port since 80 is the standard port for HTTP. When you run a second app on for example port 81, you can access it by going to “PikedevCluster.WestEurope.cloudapp.azure.com:81”.
