This article is a part of the GitLab for .NET developer series.
In the previous article, I described the work I’ve done to move more parts of the GitLab AutoDevOps deployment to C# code that uses Pulumi. This article explains how the library I created based on all that work can be used in a real-life scenario.
Using Ubiquitous.AutoDevOps
The `Ubiquitous.AutoDevOps` NuGet package is a set of tools that allows you to replace the default GitLab AutoDevOps deployment, which uses bash and Helm, with a single deployment project using Pulumi.
The library itself is nothing more than a consolidated set of classes that represent the usual deployed resources as Pulumi resources. The assumption here is that the structure of the deployment remains the same as GitLab AutoDevOps expects it to be. In particular, a normal deployment would include the following resources:
- Kubernetes namespace
- GitLab registry secret
- Application secret, populated from the repository CI variables
- Deployment with a given number of replicas
- Service for the deployment
- Optional: ingress for the service, using the environment URL
Deployment project
Let’s say we have a service written in .NET that we want to deploy.
First, we need a deployment project. It is a good idea to clearly separate the service source code from the deployment source code. If the service code is in the `src` directory (looking from the repository root), we can create a new project in the `deploy` directory in the repository root and call it `Deploy`. I prefer to keep this project in the same solution as the service itself. The project needs to be a console application; I will be using .NET 6 for this example.
Then, add a `Pulumi.yaml` file to the project with the following content:

```yaml
name: AutoDevOpsSample
runtime: dotnet
description: Sample GitLab deployment with Pulumi
```
Feel free to change the stack name and its description.
Next, add a package reference to `Ubiquitous.AutoDevOps.Stack`. I am using version 0.5.7 of the package for this example.
Simplest case
You can complete the deployment project for the simplest case by removing all the content of `Program.cs` and replacing it with this code:

```csharp
using Ubiquitous.AutoDevOps.Stack;

await Pulumi.Deployment.RunAsync<DefaultStack>();
```
`DefaultStack` is a Pulumi stack that closely resembles the GitLab AutoDevOps deployment, but it also includes things like the Kubernetes namespace and both secrets, which GitLab otherwise creates using bash scripts.
For the simplest scenario, you won’t need a stack configuration file, as it will be created during the deployment.
Now, we need to configure the deployment job in the `.gitlab-ci.yml` file. Here is a bash snippet that can be added to the CI file:
```yaml
.pulumi: &pulumi |
  [[ "$TRACE" ]] && set -x
  export CI_APPLICATION_REPOSITORY=$CI_REGISTRY/$CI_PROJECT_PATH/${CI_COMMIT_BRANCH-$CI_DEFAULT_BRANCH}
  export CI_APPLICATION_TAG=$CI_COMMIT_SHA

  function pulumi_deploy() {
    local track="${1-stable}"
    local percentage="${2:-100}"
    local name
    name=$(deploy_name "$track")
    local stable_name
    stable_name=$(deploy_name stable)
    local image_repository
    local image_tag

    if [[ -z "$CI_COMMIT_TAG" ]]; then
      image_repository=${CI_APPLICATION_REPOSITORY:-$CI_REGISTRY_IMAGE/$CI_COMMIT_REF_SLUG}
      image_tag=${CI_APPLICATION_TAG:-$CI_COMMIT_SHA}
    else
      image_repository=${CI_APPLICATION_REPOSITORY:-$CI_REGISTRY_IMAGE}
      image_tag=${CI_APPLICATION_TAG:-$CI_COMMIT_TAG}
    fi

    cd ./deploy
    pulumi stack select "$CI_ENVIRONMENT_NAME"
    pulumi config set --path gitlab.App "$CI_PROJECT_PATH_SLUG"
    pulumi config set --path gitlab.Env "$CI_ENVIRONMENT_SLUG"
    pulumi config set --path gitlab.EnvName "$CI_ENVIRONMENT_NAME"
    pulumi config set --path gitlab.EnvURL "$CI_ENVIRONMENT_URL"
    pulumi config set --path gitlab.Visibility "$CI_PROJECT_VISIBILITY"
    pulumi config set --path registry.Server "$CI_REGISTRY"
    pulumi config set --path registry.User "${CI_DEPLOY_USER:-$CI_REGISTRY_USER}"
    pulumi config set --secret --path registry.Password "${CI_DEPLOY_PASSWORD:-$CI_REGISTRY_PASSWORD}"
    pulumi config set --path registry.Email "$GITLAB_USER_EMAIL"
    pulumi config set --path app.Name "$CI_PROJECT_NAME"
    pulumi config set --path app.Track "$track"
    pulumi config set --path app.Version "$APPLICATION_VERSION"
    pulumi config set --path deploy.Image "$image_repository"
    pulumi config set --path deploy.ImageTag "$image_tag"
    pulumi config set --path deploy.Percentage "$percentage"
    pulumi config set --path deploy.Release "$CI_ENVIRONMENT_NAME"
    pulumi config set --path deploy.Namespace "$KUBE_NAMESPACE"
    pulumi config set --path deploy.Url "$CI_ENVIRONMENT_URL"
    pulumi up -y -f -r
    pulumi stack tag set app:version "$APPLICATION_VERSION"
    pulumi stack tag set app:image "$image_repository:$image_tag"
    pulumi stack tag set app:url "$CI_ENVIRONMENT_URL"
    pulumi stack tag set app:namespace "$KUBE_NAMESPACE"
  }

  function deploy_name() {
    local name="$RELEASE_NAME"
    local track="${1-stable}"

    if [[ "$track" != "stable" ]]; then
      name="$name-$track"
    fi

    # return the computed name to the caller
    echo "$name"
  }
```
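As a side note, `deploy_name` only builds a release name from the track; it needs to `echo` the result so callers can capture it with command substitution. Here is a minimal stand-alone sketch of that logic, using an assumed `RELEASE_NAME` value:

```shell
# Stand-alone sketch of the deploy_name logic: the stable track uses
# the release name as-is, any other track is appended as a suffix.
# RELEASE_NAME is an assumed example value.
RELEASE_NAME="my-app"

deploy_name() {
  local name="$RELEASE_NAME"
  local track="${1-stable}"
  if [ "$track" != "stable" ]; then
    name="$name-$track"
  fi
  # echo makes the result usable via $(deploy_name ...)
  echo "$name"
}

deploy_name          # -> my-app
deploy_name canary   # -> my-app-canary
```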
Here I assume that for all the other jobs you use the default `auto-deploy-image` container.
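The branch-versus-tag image selection inside the `pulumi_deploy` function above can also be exercised in isolation. This is a simplified sketch with hypothetical values; it ignores the `CI_APPLICATION_REPOSITORY` override for brevity:

```shell
# Simplified sketch of how pulumi_deploy picks the image: branch
# pipelines (no git tag) use a per-branch repository and the commit
# SHA, while tag pipelines use the registry image and the git tag.
resolve_image() {
  commit_tag="$1"     # CI_COMMIT_TAG, empty on branch pipelines
  registry_image="$2" # CI_REGISTRY_IMAGE
  ref_slug="$3"       # CI_COMMIT_REF_SLUG
  sha="$4"            # CI_COMMIT_SHA
  if [ -z "$commit_tag" ]; then
    echo "$registry_image/$ref_slug:$sha"
  else
    echo "$registry_image:$commit_tag"
  fi
}

resolve_image "" "registry.example.com/group/app" "main" "abc1234"
resolve_image "v1.2.0" "registry.example.com/group/app" "main" "abc1234"
```

The first call resolves to a per-branch image tagged with the SHA; the second resolves to the registry image tagged `v1.2.0`.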
Now, let’s see what the GitLab AutoDevOps CI template has for its deployment steps. Here is the `production` template:
```yaml
variables:
  AUTO_DEPLOY_IMAGE_VERSION: 'v2.18.1'

.auto-deploy:
  image: "registry.gitlab.com/gitlab-org/cluster-integration/auto-deploy-image:${AUTO_DEPLOY_IMAGE_VERSION}"
  dependencies: []

.production: &production_template
  extends: .auto-deploy
  stage: production
  script:
    - auto-deploy check_kube_domain
    - auto-deploy download_chart
    - auto-deploy use_kube_context || true
    - auto-deploy ensure_namespace
    - auto-deploy initialize_tiller
    - auto-deploy create_secret
    - auto-deploy deploy
    - auto-deploy delete canary
    - auto-deploy persist_environment_url
  environment:
    name: production
    url: http://$CI_PROJECT_PATH_SLUG.$KUBE_INGRESS_BASE_DOMAIN
  artifacts:
    paths: [environment_url.txt, tiller.log]
    when: always
```
The `deploy` script is not the only one there that touches Kubernetes resources. In fact, `ensure_namespace` and `create_secret` both create resources there. Because our new deployment creates all the necessary Kubernetes resources, there’s no need for those scripts either. So, now I can use the following `production` template (after removing the canary deletion for simplicity):
```yaml
.production: &production_template
  extends: .auto-deploy
  stage: production
  script:
    - auto-deploy check_kube_domain
    - *pulumi
    - pulumi_deploy
    - auto-deploy persist_environment_url
  environment:
    name: production
    url: http://$CI_PROJECT_PATH_SLUG.$KUBE_INGRESS_BASE_DOMAIN
  artifacts:
    paths: [environment_url.txt]
    when: always
```
Using the new template, you will get your application deployed to Kubernetes in a very similar way to the GitLab AutoDevOps job templates, but it won’t be using Helm. You can also destroy any deployment by running `pulumi destroy` for the selected stack (environment), either locally or from a CI job.
Advanced stuff
You don’t necessarily need to configure the whole stack from the CI job script; it’s also possible to do it by adding a stack configuration file to the repository. As my convention is to name stacks after environments, the configuration file needs to be named `Pulumi.<environment>.yaml`, for example `Pulumi.production.yaml`.
The stack configuration could look like this:
```yaml
config:
  AutoDevOpsSample:app:
    Name: sample-app
    Port: 5000
    Tier: web
    Track: stable
  AutoDevOpsSample:deploy:
    Namespace: some-namespace
    Release: production
    Replicas: 1
  AutoDevOpsSample:env:
    - Name: Mongo__Database
      Value: checkinStaging
    - Name: Mongo__ConnectionString
      Value: mongodb+srv://user:<password>@<cluster-host>
  AutoDevOpsSample:gitlab:
    App: gl-pulumi-test
  AutoDevOpsSample:ingress:
    Enabled: false
    Tls:
      Enabled: false
  AutoDevOpsSample:prometheus:
    Metrics: false
    Path: /metrics
  AutoDevOpsSample:service:
    Enabled: true
    ExternalPort: 5000
    Type: ClusterIP
```
As you can see, the stack config has seven sections, which are all used by `DefaultStack`:
- `app`: application-specific settings
  - `Name`: app name, used as the deployment name
  - `Port`: a single port number that the application listens on, default `5000`
  - `PortName`: port name, default `web`
  - `Tier`: app tier, used for the tier label
  - `Track`: app track, used for the track label
  - `ReadinessProbe`: readiness probe URL, default `/ping`
  - `LivenessProbe`: liveness probe URL, default `/health`
- `deploy`: deployment-specific settings
  - `Namespace`: Kubernetes namespace
  - `Replicas`: the total desired number of replicas
  - `Percentage`: the same as the GitLab percentage, default `100`
- `gitlab`: settings for GitLab-specific annotations
  - `App`: app name
  - `Env`: environment
- `prometheus`: settings for metrics collection using Prometheus
  - `Metrics`: enable metrics scraping
  - `Path`: endpoint to collect metrics from
  - `Operator`: use the Prometheus Operator configuration instead of the legacy scraping annotations
- `env`: environment variables to be added to the pod (collection)
  - `Name`: variable name
  - `Value`: variable value
The other settings (`service`, `ingress`, etc.) are the same as you’d configure them from the CI pipeline.
You can also inspect the `AutoDevOpsSettings` class to find all its parts, what properties they have, and what the default values are.
The `DefaultStack` class does the same job as the GitLab AutoDevOps CI bash scripts and Helm chart combined, so it only creates one workload resource (a deployment or stateful set). But the `DefaultStack` class itself only instantiates the `AutoDevOps` class, which creates all the resources, and that class can do much more if necessary.
The `AutoDevOps` constructor accepts quite a lot of optional arguments:

```csharp
public AutoDevOps(
    AutoDevOpsSettings settings,
    IEnumerable<ContainerArgs>? sidecars = null,
    Action<ContainerArgs>? configureContainer = null,
    Action<PodSpecArgs>? configurePod = null,
    Action<DeploymentArgs>? configureDeployment = null,
    Action<ServiceArgs>? configureService = null,
    Dictionary<string, string>? serviceAnnotations = null,
    Dictionary<string, string>? ingressAnnotations = null,
    Dictionary<string, string>? namespaceAnnotations = null,
    Action<Namespace>? deployExtras = null,
    Dictionary<string, string>? extraAppVars = null,
    ProviderResource? provider = null
)
```
As you can see, it’s possible to add sidecar containers, pass functions that configure the workload container, the pod, the deployment, and the service, add annotations, or even create additional Kubernetes resources within a given namespace.
For example, if I need to add a Cloudflare Argo Tunnel sidecar and not use the ingress, I can create a new stack class and use it instead of the default stack:
```csharp
using Deploy.Components;
using Pulumi;
using Pulumi.Kubernetes.Types.Inputs.Core.V1;
using Ubiquitous.AutoDevOps.Stack;
using Ubiquitous.AutoDevOps.Stack.Factories;

namespace Deploy;

public class AppStack : Stack {
    public AppStack() {
        var config             = new Config();
        var settings           = new AutoDevOpsSettings(config);
        var cloudFlareSettings = config.RequireObject<CloudflareSettings>("cloudflare");
        var cloudflare         = new Cloudflare(cloudFlareSettings, settings);

        var autoDevOps = new AutoDevOps(
            settings,
            new[] { cloudflare.GetContainerArgs() },
            container => {
                container.LivenessProbe  = HttpProbe("/health");
                container.ReadinessProbe = HttpProbe("/ping");
            },
            pod => pod.AddVolumes(cloudflare.GetVolumeArgs())
        );

        ProbeArgs HttpProbe(string path) => Pods.HttpProbe(path, settings.Application.Port);

        NamespaceId = autoDevOps.DeploymentResult.Namespace?.Id;
    }

    [Output]
    public Output<string>? NamespaceId { get; set; }
}
```
Then, in `Program.cs` I would use this class:

```csharp
await Pulumi.Deployment.RunAsync<AppStack>();
```
You can do more things here, like creating two deployments instead of one, using a number of init containers, creating cron jobs, and so on.
The `Ubiquitous.AutoDevOps.Stack.Resources` namespace has a number of wrapper classes for different Kubernetes resources:

- `KubeNamespace`
- `KubeSecret`
- `KubeDeployment`
- `KubeStatefulSet`
- `KubeCronJob`
- `KubeService`
- `KubeIngress`
In addition, you can configure your deployments for Prometheus Operator metrics scraping using service or pod monitors, as well as annotate your workloads for Jaeger trace collection, using classes from `Ubiquitous.AutoDevOps.Stack.Addons`.
Finally, you can add resources outside of Kubernetes to your stack. That’s where this story actually started. It can be very useful if you deploy things to Kubernetes, but also need some cloud-native resources that cannot be deployed using Helm.
To be continued
In the final part of this series, I will show how to avoid configuring the stack with bash scripts in the CI file and move the whole thing to code using the Pulumi Automation API.