GitLab, .NET Core, Kubernetes, and Pulumi - Part 2

This article is a part of the GitLab for .NET developer series.

It’s been a while since I published my last article about replacing Helm charts in the GitLab CI/CD pipeline with Pulumi deployment programs. Since August last year, I’ve tried quite a lot of different things to improve the flow, and I am now ready to share my experience.

What didn’t feel right

First, let me explain why I didn’t think the previous solution was the end of the road, and why I decided to continue my experiments.

  1. Although I like writing code in TypeScript, not all my colleagues do. I wanted to try out the Pulumi .NET SDK instead.
  2. As a consequence, I would be able to put the application code and the deployment code in one solution for increased visibility.
  3. I also needed to replicate the drop-in experience of the AutoDevOps pipeline, where you can reuse a shared way of deploying things within a company. This way, the deployment process governance can still be done by the people who maintain production environments.
  4. As one of the main issues with the Helm chart is its rigidity and inability to accommodate even the slightest deviation from “this is the way”, I didn’t want to end up with a similar solution that wouldn’t allow any deployment customisation without changing the shared code.
  5. Another thing that bothered me was that, even though I moved all the resources previously deployed by Helm to the deployment project, some resources are still created in the CI file itself: the namespace, the GitLab registry secret, and the application secret.
  6. In addition to creating a few resources not managed by Helm, the bash script in the CI file contains a lot of complex logic. It calculates the image tag, the track, some annotations for the deployment, the service and ingress settings, the number of replicas, and so on. I can read and write bash scripts, but it just doesn’t feel right to do so many things there.

The solution

Right now, I am quite confident that most of the issues I listed above are solved. Below you can read about what I’ve done and how it all worked out.

All that work is now available as part of the Ubiquitous.AutoDevops repository and a set of NuGet packages.

Using Pulumi .NET SDK

My first job was to rewrite the previous deployment project in C#. In fact, I did a full rewrite, as there wasn’t that much code anyway.

The C# code is more verbose than TypeScript, I must admit, but it is still manageable. I was also able to move some repetitive code to functions, which made it more readable.

I kept the same settings structure. As before, many of the settings need to be set from CI variables when running the deployment job. Quite a few settings also got default values, which allows me to only configure the things that really differ between repositories. For example, all my services use the .NET Web SDK and expose port 5000, so I don’t need to configure that every time.

You can find all the setting classes (or records) in the repository.

I haven’t been able to find a way to change the YAML deserialization settings in Pulumi, so structured configuration only works when the settings are in Pascal case.

Here is how my semi-default stack config YAML file looks:

config:
  testapp:app:
    Name: testapp
    Port: 5000
    SecretName: null
    Tier: web
    Track: stable
  testapp:deploy:
    Image: null
    ImagePullSecret: gitlab-registry
    ImageTag: latest
    Namespace: testapp
    Release: production
    Replicas: 1
    Url: https://testapp.app.ubiquitous.cloud
  testapp:gitlab:
    App: testapp
  testapp:ingress:
    Enabled: true
    Tls:
      Enabled: true
  testapp:prometheus:
    Metrics: false
  testapp:service:
    Enabled: true
    ExternalPort: 5000
    Type: ClusterIP
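
To illustrate the structured configuration point, here is a rough sketch of what a couple of the setting records could look like, and how they would be read in the Pulumi program. These are simplified, hypothetical examples, not the exact records from the repository; the key thing is that the property names have to match the Pascal-cased keys in the config file.

// Simplified sketch of possible setting records (not the actual repository code).
// Property names must match the config keys exactly, hence the Pascal case.
record AppSettings {
    public string  Name       { get; init; } = "";
    public int     Port       { get; init; } = 5000;
    public string? SecretName { get; init; }
    public string  Tier       { get; init; } = "web";
    public string  Track      { get; init; } = "stable";
}

record ServiceSettings {
    public bool   Enabled      { get; init; } = true;
    public int    ExternalPort { get; init; } = 5000;
    public string Type         { get; init; } = "ClusterIP";
}

// Reading the structured config inside the Pulumi program;
// the default Config namespace is the project name ("testapp" above)
var config  = new Pulumi.Config();
var app     = config.RequireObject<AppSettings>("app");
var service = config.RequireObject<ServiceSettings>("service");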

Some of those settings don’t need to be manually specified in the file as they get assigned during the deployment job. For example, Image, ImageTag, Namespace, and so on.

I prefer to have one stack per environment, so the environment slug is used as the stack name. At this stage, one of the drawbacks is that the stack for each environment must be created upfront, and you need to have a basic config file for it (for example, the production environment needs a Pulumi.production.yaml file). I know how to fix it, but that is a separate topic.

After I was done with the settings, I created a few functions to provision the different Kubernetes resources.

If you look at the deployment code now, you’ll see that it has quite a few optional parameters, which are functions. These are extensibility points, and they add a lot of power to this solution. I will cover that in detail later in this article.
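
To give an idea of what such an extensibility point might look like, here is a minimal sketch, not the actual library code: a helper that builds a deployment and accepts an optional callback, so the caller can tweak the generated arguments before the resource is created. All the names here are assumptions.

using System;
using Pulumi;
using Pulumi.Kubernetes.Apps.V1;
using Pulumi.Kubernetes.Types.Inputs.Apps.V1;
using Pulumi.Kubernetes.Types.Inputs.Core.V1;
using Pulumi.Kubernetes.Types.Inputs.Meta.V1;

// Hypothetical helper: the optional configure callback is the extensibility point
static Deployment CreateDeployment(
    string                  name,
    string                  namespaceName,
    string                  image,
    int                     replicas,
    Action<DeploymentArgs>? configure = null
) {
    var labels = new InputMap<string> { { "app", name } };

    var args = new DeploymentArgs {
        Metadata = new ObjectMetaArgs { Name = name, Namespace = namespaceName },
        Spec = new DeploymentSpecArgs {
            Replicas = replicas,
            Selector = new LabelSelectorArgs { MatchLabels = labels },
            Template = new PodTemplateSpecArgs {
                Metadata = new ObjectMetaArgs { Labels = labels },
                Spec     = new PodSpecArgs {
                    Containers = { new ContainerArgs { Name = name, Image = image } }
                }
            }
        }
    };

    // Let the caller customise anything before the resource is created
    configure?.Invoke(args);

    return new Deployment(name, args);
}

A service-specific deployment project could then call something like CreateDeployment(..., configure: args => { /* add a sidecar, tolerations, etc. */ }) instead of forking the shared code.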

I didn’t implement the HPA (horizontal pod autoscaler) and a few other things, as I rarely need them. The HPA case is, for me, a perfect example of something that only a fraction of GitLab users need, but it was still included in the chart because some customers wanted it. That’s exactly why one shared chart “to rule them all” becomes bloated and hard to use.

At that point, the basic part was done, and I could replicate what the Helm chart does. But that’s not the end of it.

Bringing more resources in

As I mentioned previously, the Helm chart alone is incapable of deploying the whole set of resources for an application. Before executing helm upgrade --install, the CI file needs to ensure that there is a namespace for the environment, and it needs to create two secrets. It means that even if you stop the environment and delete the Helm release, those resources keep hanging around. If you really wanted to clean things up, you’d need to delete them manually.

So why not bring them into the new solution? The only reason why, for example, the application secret is created in the CI file and not in the chart is that it’s very hard, or even impossible, to do in the chart. When you write code in a general-purpose programming language, those issues become irrelevant.
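
For instance, creating the environment namespace from the deployment program takes just a couple of lines. A minimal sketch, with the variable names assumed:

using Pulumi.Kubernetes.Core.V1;
using Pulumi.Kubernetes.Types.Inputs.Core.V1;
using Pulumi.Kubernetes.Types.Inputs.Meta.V1;

// The namespace becomes a stack-managed resource instead of a kubectl call in the CI file
var ns = new Namespace(
    namespaceName,
    new NamespaceArgs {
        Metadata = new ObjectMetaArgs { Name = namespaceName }
    }
);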

Secrets

For example, I can collect all the environment variables and select those that start with K8S_SECRET_:

// Requires: using System.Collections; using System.Collections.Generic; using static System.Environment;
var env = GetEnvironmentVariables();
var vars = new Dictionary<string, string>();

// Pick up every CI variable prefixed with K8S_SECRET_ and strip the 11-character prefix
foreach (DictionaryEntry entry in env) {
    var key = (string) entry.Key;
    if (key.StartsWith("K8S_SECRET_") && entry.Value != null)
        vars[key.Remove(0, 11)] = (string) entry.Value;
}

With the collection populated, creating the application secret takes no time at all:

new Secret(
    secretName,
    new SecretArgs {
        Metadata   = CreateArgs.GetMeta(secretName, namespaceName),
        Type       = "opaque",
        // StringData takes plain-text values; Kubernetes stores them base64-encoded
        StringData = vars
    }
);

But why stop there? Keeping some of the settings in stack configs is easier, as I don’t need to change them in the GitLab CI variables. It can also be more secure, as I can store those settings as secrets. So, I added this:

// Merge the environment variables defined in the stack config into the same collection
if (settings.Env != null) {
    foreach (var envVar in settings.Env) {
        vars[envVar.Name] = envVar.Value;
    }
}

It allows me to add some settings to the stack config file, as plain text or as secrets, which will then be added to the pod environment variables via the application secret:

testapp:env:
  - Name: EventStoreOptions__ClusterId
    Value: c0bxj8q5h41i7drr2i99
  - Name: ConnectionOptions__Protocol
    Value: grpc
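
For reference, the Env entries above could be bound to a simple record like this (a sketch; the actual settings type lives in the repository):

// Hypothetical shape of an Env entry in the settings
record EnvVar {
    public string Name  { get; init; } = "";
    public string Value { get; init; } = "";
}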

The GitLab registry secret is quite trivial, although it required some JSON construction. You can find the code here.
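
If you’re curious what that JSON construction is about, the sketch below shows the general idea rather than the exact repository code: the registry credentials (which would come from CI variables) are packed into a .dockerconfigjson payload and stored in a secret of the kubernetes.io/dockerconfigjson type. The function and parameter names are assumptions.

using System;
using System.Collections.Generic;
using System.Text;
using System.Text.Json;
using Pulumi.Kubernetes.Core.V1;
using Pulumi.Kubernetes.Types.Inputs.Core.V1;
using Pulumi.Kubernetes.Types.Inputs.Meta.V1;

static Secret CreateRegistrySecret(
    string name, string namespaceName, string registry, string user, string password
) {
    // Docker config auth entry: base64("user:password")
    var auth = Convert.ToBase64String(Encoding.UTF8.GetBytes($"{user}:{password}"));

    var dockerConfig = JsonSerializer.Serialize(new {
        auths = new Dictionary<string, object> {
            [registry] = new { username = user, password, auth }
        }
    });

    return new Secret(
        name,
        new SecretArgs {
            Metadata   = new ObjectMetaArgs { Name = name, Namespace = namespaceName },
            Type       = "kubernetes.io/dockerconfigjson",
            StringData = { { ".dockerconfigjson", dockerConfig } }
        }
    );
}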

At this point, the whole Helm chart story is covered. In addition, I moved the three resources that were created outside the Helm release - the namespace and the two secrets - to the stack. By doing so, I ensure that changes in those resources get a proper diff in the stack update overview. Also, when I destroy the stack, all the resources are removed.

What’s next

All of the above allowed me to create a reusable library that can help replace the whole bash-Helm story of GitLab AutoDevOps with a single deployment project. In the next article, I will describe how the library can be used.

