Managed Prometheus is Live!

Last Thursday we released the newest addition to our set of managed services – Prometheus. As the leading open-source monitoring solution, it has plenty of exciting features in store for you.
Let’s have a look at what Prometheus is, how it can help you monitor your IT infrastructure, and how you can start it on MyNWS.

 

What is Prometheus?

Prometheus is the leading open-source monitoring solution, allowing you to collect, aggregate, store, and query metrics from a wide range of IT systems.
It comes with its own query language, PromQL, which allows you to visualize and alert on the behavior of your systems.
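
To give a feel for PromQL, here are a few typical expressions – note that the metric names are common examples and depend on what your targets actually expose:

```promql
# Per-mode CPU time rate over the last 5 minutes, as exposed by node_exporter
rate(node_cpu_seconds_total{mode!="idle"}[5m])

# 95th percentile request latency, aggregated from a histogram metric
histogram_quantile(0.95, sum by (le) (rate(http_request_duration_seconds_bucket[5m])))

# All scrape targets that are currently down – a common alerting condition
up == 0
```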

Being a pillar of the cloud-native ecosystem, it integrates very well with solutions like Kubernetes, Docker, and microservices in general. In addition, you can monitor services such as databases or message queues.

With our managed Prometheus offering, you can leverage its versatility within minutes, without the hassle of setting up, configuring, and maintaining your own environment.

If this is all you need to know and you want to get started right away – the first month is free!

 

How Does Prometheus Work?

Prometheus scrapes data in a machine-readable format from web endpoints, conventionally on the /metrics path of an application.
If a service or system does not provide such an endpoint, there is most likely a metrics exporter that collects the data locally by other means and makes it available via its own endpoint.
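
For illustration, the machine-readable format served on such an endpoint looks roughly like this (the metric names and values below are made up):

```text
# HELP http_requests_total Total number of handled HTTP requests.
# TYPE http_requests_total counter
http_requests_total{method="get",code="200"} 1027
http_requests_total{method="post",code="500"} 3
# HELP process_resident_memory_bytes Resident memory size in bytes.
# TYPE process_resident_memory_bytes gauge
process_resident_memory_bytes 2.3175168e+07
```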

The collected metrics are stored by Prometheus itself and eventually deleted once they exceed the configured retention period. With PromQL, you can make sense of your metrics visually, either in Prometheus’ web interface or in Grafana.

 

How to Use Managed Prometheus on MyNWS

Spinning up Managed Prometheus on MyNWS is as easy as selecting the product from the list, picking a name for your app, choosing a plan, and clicking on Create. Within minutes, your Managed Prometheus app will be up and running – so what does it have in store?

Managed Prometheus on MyNWS hits the ground running – below are just some of the key features it has to offer:

  • Web access to Prometheus and Grafana – for quick data exploration and in-depth insights
  • User access management based on MyNWS ID – onboard your team
  • Prebuilt dashboards and alarms for Kubernetes – visualize insights immediately
  • Bring your own domain – and configure it easily via the MyNWS dashboard
  • Remote Write capabilities – aggregate data from multiple Prometheus instances in MyNWS
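
As a sketch of the last point: a self-hosted Prometheus can forward its metrics to a central instance via its remote_write configuration. The URL and credentials below are placeholders – the actual values depend on your Managed Prometheus app:

```yaml
# prometheus.yml of the self-hosted (sending) Prometheus instance
remote_write:
  - url: https://prometheus.example.com/api/v1/write   # placeholder – your Managed Prometheus endpoint
    basic_auth:
      username: my-user       # placeholder credentials
      password: my-password
```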

If this sounds good, make sure to give Managed Prometheus a try on MyNWS – the first month is free!

If you have further questions or need assistance along the way, contact our MyEngineers or have a look at our open-source trainings in case you are just starting out with Prometheus.

Rejekts and KubeCon 2024 Review

Last week, our team was present at KubeCon 2024 to learn about the latest and greatest in the cloud-native ecosystem. Back in our cozy (home) offices, we want to share our insights with you! Buckle up, bring your office chairs into an upright position, and let’s start with Rejekts EU before getting to our KubeCon 2024 review.

Rejekts – The Pre-Conference Conference

I made my way to Paris a few days earlier than the rest of the team in order to make it to Cloud-Native Rejekts. Over the past years, the conference has hosted many great but rejected KubeCon talks – this year’s European rendition of KubeCon had an acceptance rate below 10%.

With space for ~300 attendees and 45 talks on 2 tracks – ranging from sponsored keynotes and regular talks to lightning talks on the second day of the conference – Rejekts set the scene for a fantastic cloud-native conference week.

The Rejekts EU 2024 main stage at ESpot in Paris

Cloud-Native Rejekts took place at ESpot, a flashy eSports venue.

Many attendees agreed that the less crowded venue combined with high-quality talks made for a very insightful conference – kind of like a Mini-KubeCon. In addition, folks representing projects and companies from pretty much all cloud-native areas of interest were present.
This led to engaging conversations on the hallway track, revolving around community contributions, new technologies, and marketing strategies.

My Personal Favorites

Based on the event schedule, I decided to focus on talks about Cilium, Kubernetes’ rather new Gateway API, and GitOps. I wasn’t disappointed – there were a few gems among the overall great talks:

In Demystifying CNI – Writing a CNI from scratch Filip Nikolic from Isovalent, creators of Cilium, took the audience on a journey to create their own rudimentary CNI – in Bash. From receiving JSON from the container runtime to creating the needed veth pairs to providing IP addresses to pods, it was a concise walkthrough of a real-world scenario.

Nick Young, one of the maintainers of the Kubernetes Gateway API, told us how to not implement CRDs for Kubernetes controllers in his great talk Don’t do what Charlie Don’t Does – Avoiding Common CRD Design Errors by following Charlie Don’t through the process of creating an admittedly horrible CRD specification.

The third talk I want to highlight was From Fragile to Resilient: ValidatingAdmissionPolicies Strengthen Kubernetes by Marcus Noble, who gave a thorough introduction to this rather new type of policy and even looked ahead to MutatingAdmissionPolicies, which are currently proposed in KEP-3962.

After a bunch of engaging lightning talks at the end of Day 2, I was ready to meet up with the rest of the team and prepare for the main event – so let’s dive into our KubeCon 2024 review!

KubeCon 2024 – AI, Cilium, and Platforms

KubeCon Paris was huge – the biggest in Europe yet. The organizers had said so in their closing keynote at KubeCon 2023 in Amsterdam already, but still: seeing 12,000 people gathered in a single space is something else.

Justin, Sebastian, and Achim waiting for KubeCon to start


Everyone was awaiting the Day 1 keynotes, which often set the tone for the whole conference. This year, the overarching theme was obvious: AI. Every keynote at least mentioned AI, and most of them stated: we’re living in the age of AI.

Interestingly enough, many attendees left the keynotes early, maybe annoyed by the constant stream of news regarding AI we’ve all witnessed over the last year. Maybe also due to the sponsor booths – many of them showcased interesting solutions or at least awesome swag!

What the Team was up to

Maybe they also went off to grab a coffee before attending the first talks of Day 1, of which there were many. Thus, instead of giving you my personal favorites, I asked Justin, Sebastian, and Achim to share their conference highlights and general impressions. Read their KubeCon 2024 review below:

Justin (Systems Engineer):

“I liked the panel discussion about Revolutionizing the Control Plane Management: Introducing Kubernetes Hosted Control Planes best.
As we’re constantly looking for ways to improve our managed Kubernetes offering, we also thought about hiding the control plane from the client’s perspective, so this session was very insightful.
Overall, my first KubeCon has been as I expected it to be – you could really feel how everyone was very interested in and excited about Kubernetes.”

Sebastian (CEO):

“My favorite talk was eBPF’s Abilities and Limitations: The Truth. Liz Rice and John Fastabend are great speakers who explained a very difficult and technical topic with great examples.
I’m generally more interested in core technologies than shiny new tools that might make my life easier – where’d be the fun in that? So eBPF looks very interesting.
Overall, this year’s KubeCon appeared to be better organized than the one in Valencia, which took place in the aftermath of the pandemic. Anyways, no location will beat Valencia’s beach and climate!”

Daniel (Platform Advocate):

“Part of my job is to see how end users consume and build upon Kubernetes. So naturally, I’ve been very excited about the wide range of talks about Platform Engineering going into KubeCon.
My favorite talk was the report on the State of Platform Maturity in the Norwegian Public Sector by Hans Kristian Flaatten.
I experienced the digitalized platforms of the Norwegian public sector back in university as a temporary immigrant, and adoption has seemingly improved tremendously since then.
As a German citizen, I find it almost unbelievable that a public sector would adopt cloud-native technologies at this scale.

While I share Sebastian’s views regarding Valencia (which was my first KubeCon), I liked a lot of things about Paris: the city, the venue, and the increase in attendees and talks to choose from.”

Achim (Senior Manager Cloud):

“I really liked From CNI Zero to CNI Hero: A Kubernetes Networking Tutorial Using CNI – the speakers Doug Smith and Tomofumi Hayashi explained pretty much everything there is to know about CNIs.
From building and configuring CNI plugins to their operation at runtime, they covered many important aspects of the CNI project.
Regarding KubeCon in general, I feel like Sebastian – Paris was well organized, but apart from that, there wasn’t much change compared to previous years. Many sponsors were familiar, and the overall topics of KubeCon remained the same, too.”

KubeCon 2024 Reviewed

Looking back, we had a blast of a week. Seeing old and new faces of the cloud-native landscape, engaging in great discussions about our favorite technologies, and learning about new, emerging projects are just a few reasons why we love attending KubeCon.
We now have one year to implement all of the shiny ‘must-haves’ before we might come back: KubeCon EU 2025 will be held in London from April 1–4, 2025.

The next KubeCons will be in London/Atlanta (2025) and Amsterdam/Los Angeles (2026)


If you work with cloud-native technologies yourself, especially Kubernetes, and would love to attend KubeCon one day, have a look at our open positions – maybe you’d like to be part of our KubeCon 2025 review?
If you need a primer on Kubernetes and what it can do for you, head over to our Kubernetes tutorials – we promise they’re great!

LUKS Encrypted Storage on OpenStack


Thoroughly securing your IT landscape has become more and more important over the last couple of years. With an ever-increasing amount of (user) data to be managed, processed, and stored, encrypting this data should be on your agenda on the way to fully secured IT infrastructure.
In this tutorial, we will therefore look at how to leverage LUKS encrypted storage on OpenStack to encrypt volumes at rest. We will look at two ways of defining and using encrypted volumes in OpenStack: via the web interface and via the OpenStack CLI. Let’s start with the web interface.

Prerequisites

As a user of OpenStack, all necessary prerequisites for using LUKS encrypted storage have normally been configured for you by your administrators. If you administer OpenStack yourself and are looking for configuration instructions, take a look at the official documentation.

To follow along with this tutorial on NWS OpenStack, you will need an NWS ID. You will also need to create an OpenStack project in which you can create volumes and compute instances. If you want to follow along in your terminal, you will additionally need to download the OpenStackRC file for your project. If you need a recap on how to go about this, see our tutorial on starting a project and creating a server (you will need one later) on NWS.

Configuring LUKS Encrypted Storage on the Dashboard

First, let’s see how you can provision LUKS encrypted volumes in Horizon, OpenStack’s web interface. If you’re an NWS customer, you can access Horizon either at https://cloud.netways.de and log in with your NWS ID, or by clicking the “Go to App” button in the top right of your OpenStack project’s overview page at MyNWS.

Screenshot of the MyNWS VPC project overview

Navigate to OpenStack Horizon from the MyNWS dashboard

Creating a New LUKS Encrypted Volume

Once you’ve made it to the Horizon dashboard, navigate to Volumes > Volumes and click on Create Volume in the top right. A form will open, letting you configure the following settings:

  • Volume Name – make sure to choose a meaningful name for your volume here, e.g. encryption-tutorial-volume
  • Description – add information regarding this volume here if needed
  • Volume Source – choose between creating an empty volume or importing an existing image
  • Type – the most important setting, as it defines whether your volume will be encrypted or not – more info below
  • Size – the desired size of your volume
  • Availability Zone – where you want OpenStack to store your volume; in NWS OpenStack, there’s only one availability zone (nova)
  • Group – add your volume to a preexisting volume group if needed

Screenshot of the New Volume dialogue on OpenStack Horizon

Exemplary configuration of a LUKS encrypted volume in NWS OpenStack

In the screenshot above, I configured a volume to use the LUKS type, which is the volume type for LUKS encrypted storage on NWS OpenStack. I also made sure to create a new, empty volume and set the size to 4GB.
If you are following along on an OpenStack environment that isn’t NWS OpenStack, the volume type might be named differently.

On NWS OpenStack, volumes of the LUKS type are encrypted using 256-bit aes-xts-plain64 encryption.

Once you’re satisfied with your volume configuration, click on Create Volume. After a few moments, OpenStack will have provisioned the new, encrypted volume for you. Next, you need to attach it to a compute instance.

Using LUKS Encrypted Volumes

A volume alone won’t get you far, no matter if encrypted or not – you need a client consuming it. For this tutorial, this will be a simple compute instance. Deploy a new server via the NWS OpenStack dashboard or by navigating to Compute > Instances in OpenStack Horizon and clicking on Launch Instance in the top right.
Once the server is up and running, attach the encrypted volume via the OpenStack Horizon dashboard: Expand the dropdown in the Action column of your server’s listing, and choose Attach Volume from the available actions.

Screenshot of the server listing in OpenStack, with the Action menu expanded

Another popup dialogue will open, prompting you to choose the volume to attach from a dropdown of available volumes. Pick the LUKS encrypted volume you created before and confirm by clicking on Attach Volume:

Screenshot of the Attach Volume dialogue in OpenStack Horizon

That’s it – you attached encrypted storage to your server. If you log into the server (e.g. via SSH), you will see a new storage device (probably /dev/sdb). You can use it like any other volume – OpenStack has already decrypted the block storage for you.
This is a good moment to emphasize again that your LUKS encrypted volumes are only encrypted at rest, not in transit – keep that in mind when reasoning about your security posture!

Configuring LUKS encrypted storage in the Terminal

If you prefer a terminal and OpenStack’s CLI over its dashboard, there is of course another way to provision LUKS encrypted storage on OpenStack. Make sure you have your OpenStackRC.sh at hand, source it, and pick the project to work in, if prompted:

source ~/Downloads/nws-id-openstack-rc.sh

Testing authentication and fetching project list ...

Please select one of your OpenStack projects.

1) aaaaa-openstack-bbbbb
2) ccccc-openstack-ddddd
3) eeeee-openstack-fffff
4) ggggg-openstack-hhhhh
5) iiiii-openstack-jjjjj
6) kkkkk-openstack-lllll
7) mmmmm-openstack-nnnnn
8) ooooo-openstack-ppppp
Enter a number: 1
Selected project: aaaaa-openstack-bbbbb

With the correct project picked, you can proceed to create your encrypted volume.

Creating a New LUKS Encrypted Volume

First, you need to identify the storage type providing LUKS encrypted storage in your OpenStack environment. You can list all storage types available to your OpenStack project with this command:

openstack volume type list
+--------------------------------------+--------------------------+-----------+
| ID                                   | Name                     | Is Public |
+--------------------------------------+--------------------------+-----------+
| 2f487bd6-628d-46ba-83c5-21c6dbb4c67d | Ceph-Encrypted           | True      |
| e0704085-2e47-4e3d-b637-ae04e78f5000 | Ceph-Encrypted-High-IOPS | True      |
| 21b793e6-8adf-4c92-9bf9-14f5a7b6161a | LUKS                     | True      |
| 664b6e93-0677-4e11-8cf1-4938237b6ce2 | __DEFAULT__              | True      |
| 0a65e62f-3aad-4c6d-b175-96dedaa7ba1f | Ceph-High-IOPS           | True      |
| c4a685b0-64c4-4565-9b4c-9800056d659d | Ceph                     | True      |
+--------------------------------------+--------------------------+-----------+

On NWS OpenStack, the storage type providing LUKS encryption is conveniently called LUKS. Next, you will have to create a new volume, referencing the identified volume type:

openstack volume create --type LUKS --size 4 encryption-tutorial-volume
+---------------------+------------------------------------------------------------------+
| Field               | Value                                                            |
+---------------------+------------------------------------------------------------------+
| attachments         | []                                                               |
| availability_zone   | nova                                                             |
| bootable            | false                                                            |
| consistencygroup_id | None                                                             |
| created_at          | 2024-03-04T12:41:29.000000                                       |
| description         | None                                                             |
| encrypted           | True                                                             |
| id                  | 9b20a1d8-bfb6-4e4f-bb4b-dbda62e4afc7                             |
| multiattach         | False                                                            |
| name                | encryption-tutorial-volume                                       |
| properties          |                                                                  |
| replication_status  | None                                                             |
| size                | 4                                                                |
| snapshot_id         | None                                                             |
| source_volid        | None                                                             |
| status              | creating                                                         |
| type                | LUKS                                                             |
| updated_at          | None                                                             |
| user_id             | acfef1ea27ec3ac25fa5009238cdeb2cc5ae2c943da7ecb279c43a5a91b8a4bf |
+---------------------+------------------------------------------------------------------+

In the example above, I created a volume encryption-tutorial-volume with a size of 4GB. Next up, you will want to attach the volume to a VM.

On NWS OpenStack, volumes of the LUKS type are encrypted using 256-bit aes-xts-plain64 encryption.

Using LUKS Encrypted Volumes

In order to consume the volume, you will need a server to attach it to. Create a new server using the following command, or deploy one via the NWS or OpenStack dashboards:

openstack server create --flavor s1.small --image "Debian 10" --network public-network test-server

This command will work in any NWS OpenStack project – if you’re following along on a different OpenStack environment, you might have to adapt the referenced flavors, images, and networks.
Once the server has come up, you can go ahead and attach the LUKS encrypted volume you created before:

openstack server add volume --device /dev/sdb test-server encryption-tutorial-volume
+-----------------------+--------------------------------------+
| Field                 | Value                                |
+-----------------------+--------------------------------------+
| ID                    | 9b20a1d8-bfb6-4e4f-bb4b-dbda62e4afc7 |
| Server ID             | 7952e17d-0781-41c3-a1e9-b758574ac336 |
| Volume ID             | 9b20a1d8-bfb6-4e4f-bb4b-dbda62e4afc7 |
| Device                | /dev/sdb                             |
| Tag                   | None                                 |
| Delete On Termination | False                                |
+-----------------------+--------------------------------------+

If you log into your server now (e.g. via SSH), you should see a new storage device /dev/sdb. You can go ahead and format, mount, and consume it like any other OpenStack volume. OpenStack will have decrypted the block storage for you in the background already.
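
As a sketch of those next steps – assuming the device really appeared as /dev/sdb (verify with lsblk first) and that ext4 suits your use case:

```shell
lsblk                          # confirm the new 4G device, e.g. /dev/sdb
sudo mkfs.ext4 /dev/sdb        # create a filesystem on the transparently decrypted volume
sudo mkdir -p /mnt/data
sudo mount /dev/sdb /mnt/data
df -h /mnt/data                # the volume is now ready for use
```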

Summing it up

OpenStack provides you with everything you need to encrypt your storage at rest. You can use either OpenStack’s Horizon dashboard or its CLI to create, encrypt, and attach block storage volumes, without having to change any configuration on your VMs.
The feature comes at very little additional cost (OpenStack stores the encryption keys as Secrets in Barbican, OpenStack’s key manager, which incurs a small, fixed cost per secret in NWS OpenStack), and is a ‘quick win’ in terms of security posture.

Securing ingress-nginx with cert-manager


In one of our first tutorials, we showed you how to get started with ingress-nginx on your Kubernetes cluster. As a next step, we will tell you how to go about securing ingress-nginx with cert-manager by creating TLS certificates for your services!

What is cert-manager?

cert-manager is an incubating CNCF project for the automatic, programmatic provisioning of TLS certificates. It watches annotated resources (e.g. Ingress objects) and a set of CustomResourceDefinitions to request, create, and renew certificates for your workloads.
On the provider side, it supports certificate authorities and other providers such as ACME, GlobalSign, HashiCorp Vault, or Cloudflare Origin CA.

With this initial question answered, let’s get started with securing ingress-nginx with cert-manager!

Installing ingress-nginx

In order to secure ingress-nginx, we need to install ingress-nginx! We will do so using Helm, a package manager for Kubernetes. Installing applications to Kubernetes using Helm is similar to installing packages on an operating system:

  1. Configure the repository
  2. Fetch available packages and versions
  3. Install a package

With Helm, those three steps look as follows:

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install --namespace ingress-nginx --create-namespace ingress-nginx ingress-nginx/ingress-nginx

Why 4x ingress-nginx?!

The third command might look a bit confusing – why do we need ‘ingress-nginx’ that often? The first occurrence is the namespace name, the second is the release name Helm uses to manage the installation, and the third and fourth reference our repository and package name.
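
Annotated, the command breaks down like this:

```shell
# Pattern: helm install [flags] <release-name> <repo>/<chart>
#   --namespace ingress-nginx    (1) the Kubernetes namespace to install into
#   ingress-nginx                (2) the Helm release name
#   ingress-nginx/ingress-nginx  (3+4) the chart 'ingress-nginx' from the repo we added as 'ingress-nginx'
helm install --namespace ingress-nginx --create-namespace \
  ingress-nginx ingress-nginx/ingress-nginx
```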

When the installation command terminates, ingress-nginx will be ready to serve traffic to your cluster! We can move on to installing cert-manager next.

Installing cert-manager

Installing cert-manager is very similar to installing ingress-nginx – we can use Helm again. Add the repository, sync it, and install cert-manager to your cluster. Make sure to set the extra value installCRDs, which will install the aforementioned CustomResourceDefinitions for cert-manager along with the application.

helm repo add jetstack https://charts.jetstack.io/
helm repo update
helm install --namespace cert-manager --create-namespace cert-manager jetstack/cert-manager --set installCRDs=true

Just like with ingress-nginx, cert-manager is readily available to handle TLS certificate creation as soon as the installation command terminates – we’re just missing an Issuer.

Creating an Issuer

As mentioned before, cert-manager integrates with many different certificate vendors but needs to know which one to use for a specific certificate request. This can be configured using cert-manager’s Issuer CRD.
For this tutorial, we will use the ACME issuer, which utilizes Let’s Encrypt to issue valid TLS certificates for our services:

---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  # ClusterIssuer resources are cluster-scoped – no namespace needed
  name: letsencrypt-staging
spec:
  acme:
    # You must replace this email address with your own.
    # Let's Encrypt will use this to contact you about expiring
    # certificates, and issues related to your account.
    email: daniel@example.com
    # we will only use the staging API for this tutorial
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      # Secret resource that will be used to store the account's private key.
      name: acme-staging-key
    # Add a single challenge solver, HTTP01 using nginx
    solvers:
      - http01:
          ingress:
            ingressClassName: nginx

This ClusterIssuer declaration will use Let’s Encrypt’s staging API to issue valid (but untrusted) certificates for our services. It is configured to use HTTP01 challenges hosted by your previously installed ingress-nginx to validate our domains (read more about Let’s Encrypt’s challenge types here).
If you look closely, you can also see that we actually defined a ClusterIssuer, which is an Issuer that can be used and referenced cluster-wide.

You can go ahead and deploy this ClusterIssuer to your cluster like this:

kubectl apply -f clusterissuer.yml

For our Ingress to work though, we need one more thing: connectivity.

Preparing the Ingress

If we want to connect to our Ingress and generate a TLS certificate, we need a DNS entry.
And for a DNS entry, we need our cluster’s Ingress IP. This is the IP ingress-nginx uses for its service of type LoadBalancer, which is in charge of routing traffic into our cluster.

Creating a DNS entry

You can retrieve the publicly reachable IP of your ingress-nginx service by executing the following command:

kubectl get svc -n ingress-nginx ingress-nginx-controller -o jsonpath='{.status.loadBalancer.ingress[0].ip}'

With the public IP address retrieved, you can go ahead and create an A record for one of your (sub)domains with your DNS provider. How to do this depends on your provider and is out of scope for this tutorial.

Installing a Demo Application

The last missing piece of our secure-ingress-puzzle is an application to access from our browsers. We will deploy podinfo, an often-used demo application that comes with a web frontend.
Once more, we will use Helm for the installation:

helm repo add podinfo https://stefanprodan.github.io/podinfo
helm repo update
helm install --namespace podinfo --create-namespace podinfo podinfo/podinfo

After a short while, the application should be deployed to our cluster, with a ClusterIP service we can reference in our Ingress resource in the next step.

Securing ingress-nginx with cert-manager

We now have all the necessary bits and pieces to install and secure our Ingress, all in one go:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-staging
  name: podinfo
  namespace: podinfo
spec:
  ingressClassName: nginx
  rules:
  - host: podinfo.cluster.example.com
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: podinfo
            port:
              number: 9898
  tls: # < placing a host in the TLS config will determine what ends up in the cert's subjectAltNames
  - hosts:
    - podinfo.cluster.example.com
    secretName: podinfo1-tls

Note a few things:

  • we create the Ingress in the same namespace that we deployed podinfo to
  • we add an annotation that will get picked up by cert-manager, telling it to use the ClusterIssuer letsencrypt-staging to issue a certificate
  • our Ingress will route traffic for https://podinfo.cluster.example.com/ to our podinfo service
  • we also specify a hostname for the SAN of the certificate, as well as a certificate secret name

Once you’re satisfied with the configuration of your Ingress, you can go ahead and deploy it:

kubectl apply -f ingress.yml

We will now have to wait 1-2 minutes for a few things to happen:

  • cert-manager to pick up on the annotation and…
    • …create another temporary Ingress for the HTTP01 challenge to be solved
    • …create the requested Certificate resource
    • …create the corresponding secret podinfo1-tls to be used by the Ingress
  • ingress-nginx to update the Ingress with the publicly reachable address.
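
If you want to watch these steps happen, a few kubectl commands can help – note that cert-manager names the Certificate after the secret, so podinfo1-tls below follows from our Ingress spec:

```shell
kubectl get certificate -n podinfo                 # READY should turn True once issued
kubectl describe certificate -n podinfo podinfo1-tls
kubectl get challenges -n podinfo                  # temporary ACME challenge resources
kubectl get secret -n podinfo podinfo1-tls         # the resulting TLS secret
```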

Once all those steps have succeeded, we can open https://podinfo.cluster.example.com/ in our browsers – where we will be greeted by the notorious “Your connection is not private” screen.

Screenshot of Google Chrome's 'Your connection is not private' screen

Our certificates won’t get recognized by most web browsers

This is happening because we specified Let’s Encrypt’s Staging API as issuer for our certificates. After clicking on Show advanced and confirming Proceed to https://podinfo.cluster.example.com/, we will get forwarded to podinfo anyways:

A screenshot of podinfo's web UI

Podinfo’s squid is happily greeting us

It worked! We’re greeted by Podinfo’s mascot, a little squid, applauding us for configuring our Ingress and Issuer correctly.
We can also verify our certificate got issued by cert-manager by inspecting the certificate details in our browsers:
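
Alternatively, you can verify the issuer from the command line with openssl, using the example hostname from this tutorial:

```shell
echo | openssl s_client -connect podinfo.cluster.example.com:443 \
  -servername podinfo.cluster.example.com 2>/dev/null \
  | openssl x509 -noout -issuer -dates
```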

A screenshot of Google Chrome's certificate details view, confirming that the certificate was issued by Let's Encrypt's Staging issuer

The certificate was issued by Let’s Encrypt’s STAGING issuer.

Conclusion

We did it – we secured our ingress-nginx with cert-manager and Let’s Encrypt! All we needed was a ClusterIssuer referencing Let’s Encrypt’s (staging) API, and a matching annotation in our Ingress configuration.
If you feel adventurous, you could now go ahead and use Let’s Encrypt’s production API by removing the staging part from the Issuer’s server URL – then the browser warning regarding the certificate should go away.

Furthermore, if you’re already deploying other applications with Helm, more often than not there are predefined values for Ingress annotations or even built-in cert-manager support for provisioning certificates – check it out!

If this tutorial was too fast-paced for you or you got further questions regarding your Kubernetes deployments and connectivity, don’t hesitate to give our MyEngineers a call – they’d be more than happy to talk to you!

CfgMgmtCamp 2024: Our Recap


Earlier this week, our team drove all the way to Ghent, Belgium to attend the ConfigManagementCamp 2024.
Being a free-to-attend conference right after FOSDEM, it has always been buzzing with fans of Open Source, good conversations, and electrifying ideas for new projects. As this year was no different, we wanted to share some of our impressions with you, so join us for this recap of CfgMgmtCamp 2024!

The Ol’ Reliable

As config management has always been a necessity when dealing with infrastructure and the software running on it, some tools have been around for quite some time now. I am looking at you, Puppet, Terraform, and Ansible!
It was good to see that despite their age, the respective solutions and their ecosystems continue to flourish: We could spot a few talks on event-driven Ansible and learned new ‘hacks’ when operating Puppet.

Spongebob Squarepants holding the 'old reliable' box

Ansible, Terraform, and the like continue being reliable tools with flourishing ecosystems.

Terraform and its fork OpenTofu, which reached GA in January, were also at the center of many talks. Forking a project like Terraform and nurturing a community all the way to its first stable release in just five months shows how important both Terraform and Open Source are to the community.
It will be interesting to see who is going to stick with Terraform and who’s set to move on to OpenTofu, as well as how the fork will diverge from Terraform over time.

In addition, CfgMgmtCamp 2024 was sponsored by both Puppet Labs and Ansible, so attendees could chat with the ‘insiders’ for a bit, and many maintainers from within the community were also available for discussions about their favorite config management tool.

The Shiny New Stuff

Besides reinforcing our knowledge about ‘the ol’ reliables’, we also learned about a bunch of emerging config management solutions, languages, and ideas:

Pkl is the configuration language used at Apple, which open-sourced it just three days before CfgMgmtCamp. We were able to catch a glimpse of its core principles in the first-ever talk on Pkl.
It allows you to define configuration, which you can then export to configuration formats such as JSON or YAML. Check out the Pkl website for more information or take a look at the project’s codebase on GitHub.

Another interesting project presented at CfgMgmtCamp 2024 is winglang. It builds on the idea that a single programming language can define infrastructure and code.
It focuses on abstracting cloud concepts, making it easy to write code that leverages the cloud’s vast offerings.
We especially liked the project’s local simulator, which arranges your defined resources and functionality visually in real time.

The third project that deserves mentioning is System Initiative, a ‘collaborative power tool designed to remove the papercuts from DevOps work’.
You can think of it as DrawIO for Infrastructure, with multiplayer capabilities: It offers a GUI and several components of cloud infrastructure with which you can build your infrastructure. System Initiative will constantly validate operability and state for you while you design your infrastructure.

Our Takeaways of CfgMgmtCamp 2024

In retrospect, we take a few key points back home with us:

Everyone despises YAML, even at ‘YAMLCamp’ – projects like CUElang, Pkl, and winglang hint at that fact. Whether providing handrails in the form of a type system, just to generate YAML at the end of the day anyway, will be enough – we’ll have to see.

Ansible, Puppet, and Terraform are here to stay – at least for the moment. We still observe innovation in the ecosystems, and the community takes matters into their own hands where necessary (Hello OpenTofu!).

Ansible and Terraform in particular were presented in many talks showcasing a wide range of scenarios, and we can vouch for those solutions from our own experience: these tools are great for managing cloud resources, be it managing OpenStack with Terraform or generating dynamic inventories of your infrastructure with Ansible.
And if you don’t feel comfortable plunging into the cold waters of config management just yet, there are always our MyEngineers readily available for you.