GitLab, .NET Core, Kubernetes, and Pulumi

This article is a part of the GitLab for .NET developer series.

We actively use a modified GitLab AutoDevOps pipeline that supports .NET applications better than the original one; I described our approach in the previous article.

The GitLab AutoDevOps feature uses Helm, so I had to create my own Helm chart with some amendments. However, the chart is still very rigid. Rigidity is a common issue with Helm charts, and trying to develop a chart that covers every deviation from the default is a road to hell: templates become overloaded with conditional statements, and more and more settings pop up in the values.yaml file. That was also what triggered me to create my own version of the GitLab AutoDevOps chart in the first place, since the original contains too much PostgreSQL-specific machinery that I rarely use.

Of course, one option could be to keep a local chart in each repository, using the original chart as its base. But then we lose centralised governance of the deployment process, because the local charts won’t be updated when the master chart changes.

The issue is especially relevant for deployments that require some custom infrastructure. GitLab (incorrectly) assumes that everyone in the world can happily use PostgreSQL, but that’s not the case. Creating a number of subcharts to support every possible combination of infrastructure components is also a tedious job, and it doesn’t give developers any control over what gets deployed anyway.

That’s why I decided to try Pulumi, the popular infrastructure-as-code tool. Pulumi offers private accounts for free, but their published plans are steeply priced, so that might be an issue for some. Nevertheless, it was worth a try.

Deploy image

I use the modified auto-deploy-image container (see the previous article). Pulumi needs NodeJS, so I decided to use the Alpine Node image as the base image instead. I also added the Pulumi CLI to the list of tools that I need in the container. Here is my final Dockerfile:

ARG HELM_VERSION
ARG KUBECTL_VERSION

#FROM alpine:3.9
FROM node:alpine

# https://github.com/sgerrand/alpine-pkg-glibc
ARG GLIBC_VERSION

COPY src/ build/

# Install Dependencies
RUN apk add --no-cache openssl curl tar gzip bash jq \
  && curl -sSL -o /etc/apk/keys/sgerrand.rsa.pub https://alpine-pkgs.sgerrand.com/sgerrand.rsa.pub \
  && curl -sSL -O https://github.com/sgerrand/alpine-pkg-glibc/releases/download/${GLIBC_VERSION}/glibc-${GLIBC_VERSION}.apk \
  && apk add glibc-${GLIBC_VERSION}.apk \
  && apk add ruby jq \
  && rm glibc-${GLIBC_VERSION}.apk \
  && curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl \
  && mv ./kubectl /usr/bin/kubectl \
  && chmod +x /usr/bin/kubectl \
  && curl -L https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash \
  && curl -sL https://sentry.io/get-cli/ | bash \
  && curl -fsSL https://get.pulumi.com/ | bash

ENV PATH=$PATH:/root/.pulumi/bin

RUN ln -s /build/bin/* /usr/local/bin/

I kept Helm there since I use the same image for all deployments and I don’t want to break anything.

Pulumi project

Then, I created a new .NET Core WebAPI project locally. Leaving everything as it was, I added a Pulumi project to the repository by executing the following commands in the repository root:

mkdir pulumi && cd pulumi
pulumi new kubernetes-typescript

There, I entered the project name and kept the default stack name, dev.

You can check the guidelines for Kubernetes in the Pulumi docs.

I could’ve used the .NET version of the Pulumi API, but I’ve done some Pulumi work with TypeScript before, so for the first round I felt more comfortable using the familiar API.

Pulumi decided to use npm, and I prefer Yarn, so I had to run yarn once in the pulumi folder and remove the package-lock.json afterwards, keeping the yarn.lock file instead.

Converting the AutoDevOps chart

I didn’t want to reinvent the wheel, so I spent some time converting the YAML from my AutoDevOps chart to TypeScript and the Pulumi API. I had done it before for a different reason; some Kubernetes types have changed since then, but it wasn’t an issue.

Settings

First, I figured out a few interfaces for the deployment settings. Basically, I needed to replace most of the chart configuration in the values.yaml file. Most of those values get set by the deployment pipeline, so I had to make them configurable. Pulumi separates configuration per stack, so for each GitLab environment I’d need a separate stack. This also gives me more control over how different environments get deployed.

So, I created an autoDevOps directory inside the pulumi directory of my repo and added the settings.ts file. It took me a while to figure out all the settings I need; here is the end result:

import * as pulumi from "@pulumi/pulumi";

interface DeploySettings {
    namespace: string,
    release: string,
    replicas: number,
    image: string,
    imageTag: string,
    imagePullSecret: string,
    url: string
}

interface AppSettings {
    name: string,
    tier: string,
    track: string,
    secretName: string,
    secretChecksum: string,
    port: number
}

interface GitLabSettings {
    app: string,
    env: string,
    envName: string,
    envUrl: string
}

interface ServiceSettings {
    enabled: boolean,
    type: string,
    externalPort: number
}

interface TlsSettings {
    enabled: boolean,
    secretName: string
}

interface IngressSettings {
    enabled: boolean,
    tls: TlsSettings
}

interface PrometheusSettings {
    metrics: boolean,
    path: string
}

export interface AutoDevOpsSettings {
    deploy: DeploySettings,
    application: AppSettings,
    gitlab: GitLabSettings,
    service: ServiceSettings,
    ingress: IngressSettings,
    prometheus: PrometheusSettings
}

export default class Config {
    private config: pulumi.Config;
    
    constructor(config?: pulumi.Config) {
        this.config = config ?? new pulumi.Config();
    }

    getAutoDevOpsSettings(): AutoDevOpsSettings {
        return {
            deploy: this.config.requireObject<DeploySettings>("deploy"),
            application: this.config.requireObject<AppSettings>("app"),
            gitlab: this.config.requireObject<GitLabSettings>("gitlab"),
            service: this.config.requireObject<ServiceSettings>("service"),
            ingress: this.config.requireObject<IngressSettings>("ingress"),
            prometheus: this.config.requireObject<PrometheusSettings>("prometheus"),
        };
    }
}

The settings structure mainly repeats the values.yaml structure, with some deviations. I also left out the settings for liveness and readiness probes, for reasons I will explain later. The Config class allows me to load all the settings at once. I use this combined AutoDevOpsSettings interface in the deployment code.

AutoDevOps code

Next, I added the index.ts file to the same directory (pulumi/autoDevOps). Here it gets a bit crazy, but not as crazy as the Helm chart. Maybe it’s just that I am much less proficient with Go templates, but writing TypeScript code was bliss compared with those nasty YAML files.

The code here is the result of several iterations. My first approach was to use functions, but then I moved closer to how Pulumi does things and switched to class instances instead.

The file starts with some imports and type declarations:

import * as k8s from "@pulumi/kubernetes";
import {AutoDevOpsSettings} from "./settings";
import * as inputs from "@pulumi/kubernetes/types/input";
import * as pulumi from "@pulumi/pulumi";

type returnValue<T> = (parameter?: string) => T;
type configure<T> = (element: T) => T;

export interface AutoDevOpsResult {
    deployment: k8s.apps.v1.Deployment;
    service: k8s.core.v1.Service | undefined;
    ingress: k8s.networking.v1beta1.Ingress | undefined;
}

Here I have two types for higher-order functions, which allow me to avoid if statements in some cases. I also wanted the deployment code to be flexible and allow the deployment to be customised, so I created the configure type for that purpose, to be used for callbacks.
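
For illustration, a configure callback could look something like this (the callback name and the resource limits here are just an example, not part of the actual deployment code):

// Illustrative only: a callback that returns a copy of the container spec
// with resource limits added, keeping the deployment code free of if statements.
const withResourceLimits: configure<k8s.types.input.core.v1.Container> = container => ({
    ...container,
    resources: {limits: {cpu: "500m", memory: "256Mi"}}
});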

Next, I added the class itself:

export default class AutoDevOps {
    private settings: AutoDevOpsSettings;
    result: AutoDevOpsResult;
}

Then, I added the most complex function, which creates a deployment. It mainly repeats the Helm chart’s deployment template. The function also has a couple of callback parameters, which allow me to configure the deployment and add things that I might need. Essentially, this is the part I was missing when using Helm.

private createDeployment(
    sidecars?: k8s.types.input.core.v1.Container[],
    configureContainer?: configure<k8s.types.input.core.v1.Container>,
    configurePod?: configure<k8s.types.input.core.v1.PodSpec>
): k8s.apps.v1.Deployment {

    const appLabels = {
        app: this.settings.application.name,
        release: this.settings.deploy.release,
        track: this.settings.application.track,
        tier: this.settings.application.tier,
    };

    const gitlabAnnotations = {
        "app.gitlab.com/app": this.settings.gitlab.app,
        "app.gitlab.com/env": this.settings.gitlab.env
    };

    const envFrom = AutoDevOps.valueOrUndefined<k8s.types.input.core.v1.EnvFromSource[]>(
        this.settings.application.secretName, x => [{secretRef: {name: x}}]
    );

    const container: k8s.types.input.core.v1.Container =
        {
            name: this.settings.application.name,
            image: `${this.settings.deploy.image}:${this.settings.deploy.imageTag}`,
            imagePullPolicy: "IfNotPresent",
            envFrom: envFrom,
            env: [
                {name: "ASPNETCORE_ENVIRONMENT", value: this.settings.gitlab.envName},
                {name: "GITLAB_ENVIRONMENT_NAME", value: this.settings.gitlab.envName},
                {name: "GITLAB_ENVIRONMENT_URL", value: this.settings.gitlab.envUrl}
            ],
            ports: [
                {name: "web", containerPort: this.settings.application.port}
            ],
        };
    const configuredContainer = AutoDevOps.configure(container, configureContainer);

    const podSpec: k8s.types.input.core.v1.PodSpec = {
        imagePullSecrets: [{name: this.settings.deploy.imagePullSecret}],
        containers: sidecars === undefined 
            ? [configuredContainer] 
            : [configuredContainer, ...sidecars],
        terminationGracePeriodSeconds: 60,
    };

    const deploymentArgs: k8s.apps.v1.DeploymentArgs = {
        metadata: {
            name: this.fullName(),
            labels: {...appLabels, heritage: "Pulumi"},
            namespace: this.settings.deploy.namespace
        },
        spec: {
            selector: {matchLabels: appLabels},
            replicas: this.settings.deploy.replicas,
            template: {
                metadata: {
                    labels: appLabels,
                    annotations: {
                        ...gitlabAnnotations,
                        "checksum/application-secrets": 
                            this.settings.application.secretChecksum
                    },
                    namespace: this.settings.deploy.namespace
                },
                spec: AutoDevOps.configure(podSpec, configurePod),
            }
        }
    };

    return new k8s.apps.v1.Deployment(this.pulumiName("deployment"), deploymentArgs);
}

In the createDeployment function I use a few small helper functions, like valueOrUndefined and configure, which I will list later in this article.

Notice that I am able to pass definitions for sidecar containers and two functions to configure the container and the pod spec before executing the deployment.
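
For instance, a sidecar definition passed to the AutoDevOps constructor could look like this sketch (the container name and image are purely illustrative):

// Hypothetical sidecar: a log forwarder running next to the application container
const sidecars: k8s.types.input.core.v1.Container[] = [{
    name: "log-forwarder",
    image: "fluent/fluent-bit:1.5",
    ports: [{name: "fb-metrics", containerPort: 2020}]
}];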

The second function creates a service. It is the smallest function in the class.

private createService(annotations?: { [key: string]: string }): k8s.core.v1.Service | undefined {
    const serviceLabels = {
        app: this.settings.application.name,
        release: this.settings.deploy.release,
        heritage: "Pulumi"
    };

    const serviceAnnotations = this.settings.prometheus.metrics
        ? {
            ...annotations,
            "prometheus.io/scrape": "true",
            "prometheus.io/port": this.settings.service.externalPort.toString()
        } : annotations;

    return new k8s.core.v1.Service(this.pulumiName("service"),
        {
            metadata: {
                name: this.fullName(),
                namespace: this.settings.deploy.namespace,
                annotations: serviceAnnotations,
                labels: serviceLabels
            },
            spec: {
                type: this.settings.service.type,
                ports: [{
                    port: this.settings.application.port,
                    targetPort: this.settings.service.externalPort,
                    protocol: "TCP",
                    name: this.settings.application.name
                }],
                selector: {
                    app: this.settings.application.name,
                    tier: this.settings.application.tier
                }
            }
        });
}

I can still pass custom annotations to the service if I need to.

The last function creates an ingress. Here the code gets more complex again, mainly due to TLS concerns.

private createIngress(service: k8s.core.v1.Service, annotations?: { [key: string]: string }): k8s.networking.v1beta1.Ingress {
    const ingressLabels = {
        app: this.settings.application.name,
        release: this.settings.deploy.release,
        heritage: "Pulumi"
    };

    let ingressAnnotations: { [key: string]: string } = {};
    if (annotations !== undefined) 
        ingressAnnotations = {...ingressAnnotations, ...annotations};
    if (this.settings.ingress.tls.enabled)
        ingressAnnotations = {
            ...ingressAnnotations,
            "kubernetes.io/tls-acme": "true",
            "kubernetes.io/ingress.class": "nginx"
        };
    if (this.settings.prometheus.metrics)
        ingressAnnotations = {
            ...ingressAnnotations,
            "nginx.ingress.kubernetes.io/server-snippet": 
                "location /metrics { deny all; }"
        };

    const hostName = AutoDevOps.getHostnameFromUrl(this.settings.deploy.url);
    const tls: k8s.types.input.networking.v1beta1.IngressTLS[] | undefined = 
        this.settings.ingress.tls.enabled
        ? [{
            secretName: AutoDevOps.valueOrDefault(
                this.settings.ingress.tls.secretName, 
                this.settings.application.name + "-tls"
            ),
            hosts: [hostName]
        }] : undefined;

    const ingressArgs: k8s.types.input.networking.v1beta1.Ingress = {
        metadata: {
            name: this.fullName(),
            namespace: this.settings.deploy.namespace,
            annotations: ingressAnnotations,
            labels: ingressLabels
        },
        spec: {
            tls: tls,
            rules: [{
                host: hostName,
                http: {
                    paths: [{
                        path: "/",
                        backend: {
                            serviceName: service.metadata.name,
                            servicePort: this.settings.service.externalPort
                        }
                    }]
                }
            }]
        }
    };

    return new k8s.networking.v1beta1.Ingress(this.pulumiName("ingress"), ingressArgs);
}

The function needs the service instance returned by createService, and accepts optional custom annotations for the ingress.
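
As an illustration, custom ingress annotations could be defined like this (the values are examples, not something taken from my setup):

// Illustrative custom annotations for the nginx ingress controller
const customIngressAnnotations: { [key: string]: string } = {
    "nginx.ingress.kubernetes.io/proxy-body-size": "50m",
    "nginx.ingress.kubernetes.io/proxy-read-timeout": "120"
};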

Now it’s time to list all the helper functions that allow me to streamline the configuration and generate predictable names.

pulumiName(resource: string): string {
    return `${this.settings.application.name}-${resource}`;
}

fullName(): string {
    return `${this.settings.application.name}-${this.settings.gitlab.envName}`;
}

static valueOrDefault(setting: string, defaultValue: string): string {
    return setting === undefined || setting === "" ? defaultValue : setting;
}

static valueOrUndefined<T>(setting: string, fn: returnValue<T>): T | undefined {
    return setting === undefined || setting === "" ? undefined : fn(setting);
}

static configure<T>(element: T, configure?: configure<T>) {
    return configure === undefined ? element : configure(element);
}

static getHostnameFromUrl(url: string): string {
    let result = url.replace("https://", "").replace("http://", "");
    while (result.endsWith("/")) result = result.substr(0, result.length - 1);
    return result;
}

Some GitLab naming conventions don’t give me much comfort when looking at the deployed application. For example, the deployment name is always the environment name, and pod names are derived from the chart name. That’s not very nice, so I used more meaningful names.

I guess all these functions can also be made private.

Finally, I can put it all together in the class constructor.

constructor(settings: AutoDevOpsSettings,
            sidecars?: k8s.types.input.core.v1.Container[],
            configureContainer?: configure<k8s.types.input.core.v1.Container>,
            configurePod?: configure<k8s.types.input.core.v1.PodSpec>,
            serviceAnnotations?: { [key: string]: string },
            ingressAnnotations?: { [key: string]: string }
) {
    this.settings = settings;
    const stableTrack = this.settings.application.track === "stable";
    const deployment = this.createDeployment(sidecars, configureContainer, configurePod);
    const service = this.settings.service.enabled && stableTrack 
        ? this.createService(serviceAnnotations) : undefined;
    const ingress = this.settings.ingress.enabled && service !== undefined
        ? this.createIngress(service, ingressAnnotations)
        : undefined;

    this.result = {
        deployment: deployment,
        service: service,
        ingress: ingress
    };
}

When I create a new instance of the AutoDevOps class, it will do all the work!

Stack settings

For each stack, the Pulumi project contains a separate configuration file. Pulumi config supports complex objects, which I used in the settings.ts code. The configuration can be set using the pulumi config set command, but I populated the initial settings file (Pulumi.dev.yaml) manually to speed things up.

config:
  gl-test:app:
    name: gl-pulumi-test
    port: 5000
    secretChecksum: null
    secretName: null
    tier: web
    track: stable
  gl-test:deploy:
    image: null
    imagePullSecret: gitlab-registry
    imageTag: latest
    namespace: gl-test-namespace
    release: production
    replicas: 1
  gl-test:gitlab:
    app: gl-pulumi-test
  gl-test:ingress:
    enabled: true
    tls:
      enabled: true
  gl-test:prometheus:
    metrics: true
    path: /metrics
  gl-test:service:
    enabled: true
    externalPort: 5000
    type: ClusterIP

Stack deployment file

Now I have to go back to the pulumi/index.ts file, where I instantiate the AutoDevOps class.

import * as k8s from "@pulumi/kubernetes";
import * as kx from "@pulumi/kubernetesx";
import * as pulumi from "@pulumi/pulumi";
import AutoDevOps from "./autoDevOps";
import Config from "./autoDevOps/settings";

const config = new pulumi.Config();
const devOpsConfig = new Config(config);
const settings = devOpsConfig.getAutoDevOpsSettings();

const autoDevOps = new AutoDevOps(
    settings,
    undefined,
    x => {
        return {
            ...x,
            livenessProbe: {
                httpGet: {
                    path: "/health", 
                    scheme: "HTTP", 
                    port: settings.application.port
                }
            },
            readinessProbe: {
                httpGet: {
                    path: "/ping", 
                    scheme: "HTTP", 
                    port: settings.application.port
                }
            }
        };
    }
);

export const name = autoDevOps.result.deployment.metadata.name;

Here I pass undefined as the sidecars list because I don’t use any sidecar containers just yet. The container spec configuration callback allows me to configure the probes. I much prefer this method to specifying the probe type through settings, like it’s done in the chart.
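
To show the flexibility, the same callback slot could just as well configure a different probe type; here is a sketch (not something I use in this project) of a TCP readiness probe:

// Sketch only: the container callback could configure a TCP readiness probe instead
const withTcpProbe = (x: k8s.types.input.core.v1.Container): k8s.types.input.core.v1.Container => ({
    ...x,
    readinessProbe: {
        tcpSocket: {port: settings.application.port},
        initialDelaySeconds: 5,
        periodSeconds: 10
    }
});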

From index.ts I can also deploy any infrastructure required for my application, if needed. I can add it as containers to the same pod, for example, which could work for review environments. Otherwise, I can deploy those components as separate deployments and use their service addresses in my application deployment (connection strings, database users and so on).
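
As a rough sketch of the second option (illustrative only, not part of my actual project), a Redis instance could be deployed from the same index.ts and its address passed to the application:

// Sketch: a throwaway Redis instance for a review environment, deployed from index.ts
const redisLabels = {app: "redis", release: settings.deploy.release};

const redisDeployment = new k8s.apps.v1.Deployment("redis-deployment", {
    metadata: {namespace: settings.deploy.namespace, labels: redisLabels},
    spec: {
        selector: {matchLabels: redisLabels},
        replicas: 1,
        template: {
            metadata: {labels: redisLabels},
            spec: {containers: [{name: "redis", image: "redis:5-alpine", ports: [{containerPort: 6379}]}]}
        }
    }
});

const redisService = new k8s.core.v1.Service("redis-service", {
    metadata: {name: "redis", namespace: settings.deploy.namespace},
    spec: {selector: redisLabels, ports: [{port: 6379, targetPort: 6379}]}
});

// The service address ("redis:6379") could then be passed to the application container
// through the configureContainer callback, e.g. as a connection string environment variable.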

Preparing the .NET app

I used the same Dockerfile as before, since the deployment style doesn’t affect the application container.

Since I created a new project from the WebAPI template, I added a dummy health controller to respond to the liveness and readiness probes (/health and /ping). I also added the prometheus-net.AspNetCore package and the code needed to respond on the /metrics endpoint. I won’t cover it here; the library is well-documented.

Pipeline file

I didn’t want to change my pipeline template (see the previous article), so I used the same template and added a new script to replace the deployment job.

include:
  - project: "gitlab/ci-templates"
    file: "/autodevops-ci.yml"

variables:
  DOTNET_SDK_VERSION: "3.1"
  DOTNET_RUNTIME_VERSION: "3.1"

stages:
  - prebuild
  - build
  - deploy
  - cleanup

version:
  extends: .version
  stage: prebuild

warmer:
  extends: .kanikocache
  stage: prebuild

build:
  extends: .dockerbuild
  stage: build
  dependencies:
    - version

.pulumi: &pulumi |
  function pulumi_deploy() {
    local track="${1-stable}"
    local percentage="${2:-100}"

    local name
    name=$(deploy_name "$track")

    local stable_name
    stable_name=$(deploy_name stable)

    local image_repository
    local image_tag

    if [[ -z "$CI_COMMIT_TAG" ]]; then
      image_repository=${CI_APPLICATION_REPOSITORY:-$CI_REGISTRY_IMAGE/$CI_COMMIT_REF_SLUG}
      image_tag=${CI_APPLICATION_TAG:-$CI_COMMIT_SHA}
    else
      image_repository=${CI_APPLICATION_REPOSITORY:-$CI_REGISTRY_IMAGE}
      image_tag=${CI_APPLICATION_TAG:-$CI_COMMIT_TAG}
    fi

    local replicas
    replicas=$(get_replicas "$track" "$percentage")

    local secret_name
    if [[ "$CI_PROJECT_VISIBILITY" != "public" ]]; then
      secret_name="gitlab-registry"
    else
      secret_name=''
    fi

    cd ./pulumi
    yarn install
    pulumi stack select dev
  
    pulumi config set --path gitlab.app "$CI_PROJECT_PATH_SLUG"
    pulumi config set --path gitlab.env "$CI_ENVIRONMENT_SLUG"
    pulumi config set --path gitlab.envName "$CI_ENVIRONMENT_NAME"
    pulumi config set --path gitlab.envUrl "$CI_ENVIRONMENT_URL"

    pulumi config set --path app.name "$CI_PROJECT_NAME"
    pulumi config set --path app.secretName "$APPLICATION_SECRET_NAME"
    pulumi config set --path app.secretChecksum "$APPLICATION_SECRET_CHECKSUM"
    pulumi config set --path app.track "$track"
  
    pulumi config set --path deploy.image "$image_repository"
    pulumi config set --path deploy.imageTag "$image_tag"
    pulumi config set --path deploy.imagePullSecret "$secret_name"
    pulumi config set --path deploy.replicas "$replicas"
    pulumi config set --path deploy.release "$CI_ENVIRONMENT_NAME"
    pulumi config set --path deploy.namespace "$KUBE_NAMESPACE"
    pulumi config set --path deploy.url "$CI_ENVIRONMENT_URL"

    pulumi up --yes
  }
  
  function deploy_name() {
    local name="$RELEASE_NAME"
    local track="${1-stable}"

    if [[ "$track" != "stable" ]]; then
      name="$name-$track"
    fi

    echo $name
  }

  function get_replicas() {
    local track="${1:-stable}"
    local percentage="${2:-100}"

    local env_track
    env_track=$(echo $track | tr '[:lower:]' '[:upper:]')

    local env_slug
    env_slug=$(echo ${CI_ENVIRONMENT_SLUG//-/_} | tr '[:lower:]' '[:upper:]')

    local new_replicas
    if [[ "$track" == "stable" ]] || [[ "$track" == "rollout" ]]; then
      # for stable track get number of replicas from `PRODUCTION_REPLICAS`
      eval new_replicas=\$${env_slug}_REPLICAS
      if [[ -z "$new_replicas" ]]; then
        new_replicas=$REPLICAS
      fi
    else
      # for all tracks get number of replicas from `CANARY_PRODUCTION_REPLICAS`
      eval new_replicas=\$${env_track}_${env_slug}_REPLICAS
      if [[ -z "$new_replicas" ]]; then
        eval new_replicas=\$${env_track}_REPLICAS
      fi
    fi

    local replicas="${new_replicas:-1}"
    replicas="$((replicas * percentage / 100))"

    if [[ $new_replicas == 0 ]]; then
      # If zero replicas requested, then return 0
      echo "$new_replicas"
    elif [[ $replicas -gt 0 ]]; then
      echo "$replicas"
    else
      # Return one if calculated replicas is zero
      # E.g. 25% of 2 replicas is 0 (integer division)
      echo 1
    fi
  }  

.deploy:
  retry: 2
  allow_failure: false
  image: registry.ubiquitous.no/gitlab/auto-deploy-image:latest
  script:
    - auto-deploy check_kube_domain
    - auto-deploy ensure_namespace
    - auto-deploy create_secret
    - *pulumi
    - pulumi_deploy
    - auto-deploy persist_environment_url
  dependencies:
    - version
    - build

deploy:
  extends: .deploy
  stage: deploy
  environment:
    name: production
    url: http://pulumi-test.$KUBE_INGRESS_BASE_DOMAIN
    kubernetes:
      namespace: pulumi-test

If you compare the new pulumi_deploy function with the original deploy function from the template, you’ll notice that all the Helm-related code is gone. I set the stack settings using pulumi config set; these values would otherwise be passed to Helm as deployment parameters.

Before setting the configuration, the script runs these lines:

cd ./pulumi
yarn install
pulumi stack select dev

I should replace the hard-coded stack name with the GitLab environment name coming from $CI_ENVIRONMENT_SLUG to make the pipeline ready for handling different environments.

In the last line, the function tells Pulumi to deploy the stack. Pulumi then figures out what resources need to be created, changed, or deleted, and does the job!

All together now

Finally, I create a repository in GitLab, add the repository deployment token, and configure Kubernetes as usual. The deployment job runs in the proper Kubernetes context, so Pulumi doesn’t need any additional Kubernetes context or namespace configuration.

When I commit and push my project, it gets built and deployed in a couple of minutes with everything working as it should :)

Now, handling environment deletion is as simple as running pulumi destroy for the chosen stack, which represents the environment.

With Pulumi, you get a bonus if you go to your Pulumi account and check the stack. There, you can see the last deployment together with the GitLab commit reference. Pulumi plays nicely with GitLab CI and links the stack update to the commit.

You can also see the activity history, configuration values and deployed resources. This information is not available to you in any comparable way when using Helm, so it’s really cool to be able to check that everything is deployed as you’d expect.

Of course, some additional work might be needed to support canary deployments and gradual roll-outs. But my experiment shows that it should all be quite trivial with Pulumi too.

Also, secrets handling in GitLab is not very mature at the moment. They are working on integrating the product with HashiCorp Vault, but what if you want to keep your secrets somewhere else? Pulumi integrates with a number of secret-management tools, and all of them are available for storing and retrieving your stack configuration secrets.

Another great opportunity here is to automate service observability by keeping things like dashboards in the repository and deploying them when they change. GitLab already has a great feature for deploying custom metrics from the repository, but it only works with GitLab itself. My organisation uses Datadog, and with the Datadog provider for Pulumi we can create dashboards automatically from the configuration kept in the project repository, enabling more and more GitOps.

Remember that it’s totally possible to create the same deployment project in C#. You’d only need to use a different base container for the auto-deploy image, since it would need the .NET Core runtime.

Have you also tried something awesome with GitLab and Pulumi? Share your experience in the comments!

