Deploying .NET Core apps to Kubernetes with GitLab

This article is a part of the GitLab for .NET developers series.

GitLab is an awesome tool that I have used in different organisations for years. It is, however, often overlooked in the .NET space. One of the reasons is that GitLab has always been oriented towards more dynamic stacks, while .NET used to be a rather rigid, enterprise-oriented space.

When it comes to continuous delivery, I believe that GitLab is one of the best, if not the best, integrated tools on the market today. It allows organisations to create a fully-functional end-to-end development flow. It covers you from the moment you get an idea and put it in an issue, through managing your project with Kanban boards and keeping your source code safe in Git repositories, to building and deploying your software.

The Auto DevOps feature in particular allows developers to enable the CI/CD pipeline with a click of a mouse. GitLab will build and test the code, scan it for quality issues, and deploy it to different environments.

When it comes to .NET, the issue is that the Auto DevOps feature simply doesn’t support .NET solutions. That’s because GitLab uses Heroku buildpacks for building and testing the code, and CodeClimate for static analysis. None of those tools support .NET out of the box.

It is, however, not a huge impediment if you work in an environment where Kubernetes is the main application hosting platform and you deploy your applications in containers. GitLab supports building Docker containers for anything, as long as you have a Dockerfile in your repository root. When it comes to testing, you aren’t covered by any ready-made features, but it’s not hard to configure the CI pipeline to run dotnet test. Finally, deployments to Kubernetes using the default GitLab pipeline are possible, but the pipeline includes Postgres, which isn’t necessarily the tool of choice in the .NET world. It doesn’t hurt to have Postgres there, since you can easily disable it in the pipeline, but it creates some noise in the configuration files.
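As a taste of what the test part could look like, here is a minimal dotnet test job sketch (the stage, job name and SDK image are my assumptions, not something the Auto DevOps template provides); I will cover testing in more detail in a later post:

test:
  stage: test
  image: mcr.microsoft.com/dotnet/core/sdk:3.1
  script:
    # Run all tests in the solution in Release configuration
    - dotnet test --configuration Release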

Now, I am going to explain how we use GitLab in .NET environments successfully with some minor customisations of the Auto DevOps process.

GitLab runners

GitLab uses runners to execute pipeline jobs, and there are quite a few flavours of runners out there. A runner supports one or more executors, which actually execute jobs. For this article, I will use the Kubernetes runner, but my instructions are equally relevant for the Docker executor hosted on a Shell or Docker+Machine runner. The Kubernetes runner is one of the GitLab-managed Kubernetes apps, so you can install it from the Applications tab on the Kubernetes cluster configuration page.

Auto deploy image

GitLab uses its own custom image for the deployment steps in the pipeline. The image has tools like helm and kubectl installed. It can be used as-is, but I created a fork that uses Helm 3, so I can avoid using Tiller, and that adds two jobs I find useful.

All jobs are just bash functions.

The first new job is use_version. It allows me to generate the application version at the beginning of the pipeline and then use it as the container tag when building the image from the Dockerfile. Then, I can use the same tag when deploying the new release to Kubernetes.

Here is the use_version function that I added:

function use_version() {
  # Pick up the version generated earlier in the pipeline, if it exists
  [ -f version.sh ] && source ./version.sh
  # '+' is not allowed in container tags, so replace it with '-'
  tag=${APPLICATION_VERSION/+/-}
  # Fall back to the default CI_APPLICATION_TAG if no version was generated
  export CI_APPLICATION_TAG=${tag:-$CI_APPLICATION_TAG}
}

I will show where the version.sh file is generated later in this article.

Pipeline template

Since I use the same custom jobs in multiple .NET repositories, I use an awesome GitLab feature: job templates. It allows me to use ready-made jobs as-is, or to extend job templates with custom settings when needed.

My CI template file is hosted in the ci-templates repository on my private GitLab instance.

The file is quite long, so I am going to explain it bit by bit.

First, I use my custom auto deploy image in the deployment job template. It is almost identical to GitLab’s own deployment job template; the only difference is that it uses my custom auto deploy image, which is pulled from registry.ubiquitous.no/gitlab/auto-deploy-image:latest. If you decide to follow the same path and host the custom image in your own GitLab container registry, you’d need to change this path. If you want to use the default image from GitLab, here is the image reference: registry.gitlab.com/gitlab-org/cluster-integration/auto-deploy-image:v0.17.0. It uses Helm 2, but that shouldn’t be a problem.
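A skeleton of that job template, showing only the part you would typically change (the script itself mirrors GitLab’s own deployment job and is omitted here):

.deployment:
  # Custom auto deploy image with Helm 3; swap in GitLab's
  # registry.gitlab.com/gitlab-org/cluster-integration/auto-deploy-image:v0.17.0
  # if you prefer the default image
  image: registry.ubiquitous.no/gitlab/auto-deploy-image:latest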

Then we come to the job that uses Git to generate the application version. The job uses the set_version function defined at the end of the file. The function uses git describe --tags to generate a unique version based on repository tags. It is not a SemVer version, so you might change the function to use MinVer instead, but then you’d need to change the image from docker:git to either a .NET Core image with the MinVer CLI global tool installed in the pipeline, or the .NET test image, which I described above.
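A sketch of what the version job template might look like (the exact script in my template differs; the essential parts are git describe --tags and the version.sh file passed on as an artifact so that use_version can source it later):

.version:
  image: docker:git
  script:
    # Generate a unique version from the repository tags and
    # save it to version.sh for later jobs to source
    - echo "export APPLICATION_VERSION=$(git describe --tags)" > version.sh
  artifacts:
    paths:
      - version.sh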

Next comes the Kaniko cache job. I use Kaniko, a tool that builds container images without a Docker daemon. It is invaluable when using the Kubernetes executor, since you won’t need the Docker-in-Docker service. Kaniko can be “pre-warmed” with the necessary build and runtime images before building the application container. Here I combine this feature with the GitLab cache feature: the job downloads the necessary images, and GitLab uploads them to the distributed cache under the kaniko-cache cache key. You can read more about configuring the distributed cache in the GitLab documentation. That said, it is merely an optimisation, and you don’t really need to pre-warm Kaniko; it just saves a few seconds during the build.
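Roughly, the cache job looks like this (a sketch only: the cache directory name is my choice, and you need an image that ships the /kaniko/warmer binary together with a shell, so check what your setup offers):

.kanikocache:
  # image: <an image that provides /kaniko/warmer and a shell>
  cache:
    key: kaniko-cache
    paths:
      - cache/
  script:
    # Pre-pull the build and runtime base images into the local cache directory;
    # GitLab then uploads that directory to the distributed cache
    - /kaniko/warmer --cache-dir=$CI_PROJECT_DIR/cache
        --image=mcr.microsoft.com/dotnet/core/sdk:3.1
        --image=mcr.microsoft.com/dotnet/core/aspnet:3.1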

The last job template in the file is the build job. The default build job in GitLab uses Docker and Docker-in-Docker, and I don’t want that. GitLab is adopting Kaniko, so maybe in a few months I won’t need a custom job for this, but right now I do.

One thing to mention is that I use the executor:debug-v0.18.0 image. First, in order to start Kaniko with custom arguments as a command-line tool, you need to use the debug image. Second, the latest version has some issues with RUN commands in Dockerfiles that use regular expressions. My Dockerfile has those, so I use the most recent version that handles everything properly, which is v0.18.0.

The build job uses cached images from the pre-warm job and the kaniko-cache key. Again, it is entirely optional.
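Stripped down, the build job template looks something like the sketch below. The registry login and executor flags follow the common Kaniko-on-GitLab pattern; my real template wires the version in through use_version and the CACHE_REGISTRY_* variables mentioned later, so treat this purely as an illustration:

.dockerbuild:
  image:
    name: gcr.io/kaniko-project/executor:debug-v0.18.0
    entrypoint: [""]
  cache:
    key: kaniko-cache
    paths:
      - cache/
    policy: pull
  script:
    # Authenticate against the GitLab container registry
    - echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_REGISTRY_PASSWORD\"}}}" > /kaniko/.docker/config.json
    # Pick up the version generated by the version job and use it as the image tag
    - source version.sh
    - /kaniko/executor
        --context $CI_PROJECT_DIR
        --dockerfile $CI_PROJECT_DIR/Dockerfile
        --cache-dir $CI_PROJECT_DIR/cache
        --destination $CI_REGISTRY_IMAGE:$APPLICATION_VERSION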

Helm chart

Perhaps the most important change that I had to make in order to make it all work is the Helm chart. I believe it would all work with the default GitLab Auto DevOps Helm chart too, but it has some Postgres stuff that I don’t need. I also added some things that I find useful for my own environment.

In short, you can find my chart on GitHub, and I am now going to explain how to use it.

GitLab doesn’t force you to use their own charts. In fact, if your repository contains the chart directory in the repository root, the pipeline will use that chart.

When using a custom shared chart, you need to set three CI variables, so GitLab knows where to find it:

  • AUTO_DEVOPS_CHART: ubiquitous/auto-deploy-aspnetcore
  • AUTO_DEVOPS_CHART_REPOSITORY: https://ubiquitousas.github.io/helm-charts
  • AUTO_DEVOPS_CHART_REPOSITORY_NAME: ubiquitous

I usually set these settings on the group level, so I avoid setting them for each project in the group.

In short, the chart has the following differences, compared with the default GitLab Auto DevOps chart:

  • No Postgres, no init container with database migrations, no DATABASE_URL
  • The health probe points to /health, where you would usually host the health check endpoint using ASP.NET Core health checks.
  • The readiness probe points to /ping, which you have to implement yourself with a route or a controller. You can also change it to /health using the values override file.
  • Added optional podEnvironment and podPorts collections. The first one allows you to add environment variables without using K8S_-prefixed CI variables (which are used for the application secret). The second one lets you add more ports, since GitLab only adds the default port (5000) as HTTP. See the values sketch after this list.
  • Added ASPNETCORE_ENVIRONMENT to the deployment’s environment variables:
        env:
        - name: ASPNETCORE_ENVIRONMENT
          value: {{ .Values.gitlab.envName | quote }}
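
A values override for a typical project might then look like the sketch below. The readinessProbe path follows the standard auto-deploy-app values, while the exact schema of podEnvironment and podPorts is my assumption here, so check the chart’s values.yaml; the variable name, port and file location are only examples:

# values override file; where the pipeline picks it up depends on your auto deploy image configuration
readinessProbe:
  path: /health
podEnvironment:
  # Extra environment variables for the application pod
  - name: ConnectionStrings__Default
    value: "Server=mydb;Database=myapp"
podPorts:
  # Additional container ports besides the default HTTP port 5000
  - name: grpc
    containerPort: 5001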

Project Dockerfile

Now I need to create a Dockerfile in the project repository, so my pipeline can use it to build the container.

Here is what I normally use (layered build):

ARG BUILDER_IMG=mcr.microsoft.com/dotnet/core/sdk:3.1
ARG RUNNER_IMG=mcr.microsoft.com/dotnet/core/aspnet:3.1

FROM $BUILDER_IMG AS builder

WORKDIR /app

# copy csproj and restore as distinct layers
COPY /src/*/*.csproj ./src/
RUN for file in $(ls src/*.csproj); do mkdir -p ./${file%.*}/ && mv $file ./${file%.*}/; done
RUN dotnet restore -nowarn:msb3202,nu1503 src/MyApp -r linux-x64

# copy everything else, build and publish the final binaries
COPY ./src ./src
RUN dotnet publish src/MyApp -c Release -r linux-x64 --no-restore --no-self-contained -clp:NoSummary -o /app/publish \
/p:PublishReadyToRun=true,PublishSingleFile=false

# Create final runtime image
FROM $RUNNER_IMG AS runner

USER 1001

WORKDIR /app
COPY --from=builder /app/publish .

ENV ALLOWED_HOSTS "*"
ENV ASPNETCORE_URLS "http://*:5000"

EXPOSE 5000
ENTRYPOINT ["./MyApp"]

The layered build allows caching layers, like the one with the restored NuGet packages. To cache those layers, you need a private Docker registry that Kaniko can push the cached layers to. I already added support for this to the container build job using the CACHE_REGISTRY_* variables. However, I noticed that it doesn’t speed up the build much. Sometimes, when the registry becomes slow to respond, it takes more time to pull and push the cached layers than to build them, so use it with caution or don’t use it at all. If those variables aren’t configured, the job will still work, so you can simply ignore caching.
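
If you do want layer caching, it boils down to passing the cache flags to the Kaniko executor in the build job, roughly like this (the $CACHE_REGISTRY_IMAGE variable name below is hypothetical; check the build job template for the variables it actually expects):

# Extra flags for the /kaniko/executor call when a cache registry is configured
--cache=true
--cache-repo=$CACHE_REGISTRY_IMAGE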

Project pipeline

Now I can use my template in the project repository. When using templates, the CI file becomes really simple:

include:
  - project: "gitlab/ci-templates"
    file: "/autodevops-ci.yml"

stages:
  - prebuild
  - build
  - deploy
  - cleanup

version:
  extends: .version
  stage: prebuild

warmer:
  extends: .kanikocache
  stage: prebuild

build:
  extends: .dockerbuild
  stage: build
  dependencies:
    - version

.deploy:
  retry: 2
  extends: .deployment
  allow_failure: false
  dependencies:
    - version
    - build

deploy:
  extends: .deploy
  stage: deploy
  environment:
    name: production
    url: http://myproject.$KUBE_INGRESS_BASE_DOMAIN
    kubernetes:
      namespace: myproject

There’s nothing really to see here: I have this YAML in the .gitlab-ci.yml file in the project repository, and it all works. Here I import the CI template, which I described above, from a repository in the same GitLab instance. The group name is gitlab and the project name is ci-templates. If you place the template file somewhere else, you’d need to change the include section:

include:
  - project: "gitlab/ci-templates"
    file: "/autodevops-ci.yml"

If you have configured your project or group with the Helm chart details pointing to my custom chart (it is public), then when you push changes to the repository, your application will be built and deployed to the production environment.

In the next post, I will explain how to include the dotnet test step in the pipeline, along with parsing the test results so they show up in the merge request.

