Every story begins with people who make a difference. Our interview series gives exactly these people the floor: NWS employees who share their experiences, ideas and insights with us. We want to know what drives them and what we can all learn from them.
We continue our series of interviews with Justin Lamp, Senior Systems Engineer.
What is your position at NETWAYS Managed Services and what are you responsible for?
I work as a Senior Systems Engineer at NETWAYS Managed Services. My focus is Kubernetes: I am responsible for all of our Kubernetes clusters in the broadest sense. I mainly provide support with orchestration and am responsible for the health of the clusters, and I take care of the entire Kubernetes lifecycle – from bootstrapping a cluster through resizing to deleting it. I am also involved in OpenStack, where I mainly handle network topics: network setup, bare metal, OpenStack controllers, hypervisors and connecting hardware to the network.
Where do you actively intervene in architectural or design decisions?
I give my opinion wherever I have the necessary experience, perhaps know an alternative, and it seems sensible to me. Our team is very open: everyone welcomes ideas and suggestions for improvement, so if something occurs to me, I'm happy to speak up.
Which tools and frameworks do you think are indispensable in Kubernetes operation?
I now see cert-manager as the most important tool. It manages certificates, which is absolutely essential in Kubernetes for exposing services to the outside world. The Gateway API has also become central for me. It is not yet widely used in our existing clusters, but that will soon change: there are now many implementations that can replace ingress controllers, so it is simply the new way of exposing services to the outside world, which is why I find the Gateway API very important. It is becoming an integral part of Kubernetes. I also find operators indispensable in the broadest sense. They are the big advantage of Kubernetes: operators simplify and optimize services and workflows by managing certain services for you. For example, for a PostgreSQL cluster (a database cluster) there is an operator called CNPG (CloudNativePG) that takes care of the entire lifecycle of the database cluster, including horizontal scaling, backups and snapshots. This makes it much simpler to run many services when operating clusters.
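The CNPG example can be sketched as a minimal manifest; the name and storage size below are illustrative placeholders, not values from our platform:

```yaml
# Minimal CloudNativePG cluster: once this desired state is applied,
# the operator provisions the instances and handles failover and lifecycle.
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: example-db        # hypothetical name
spec:
  instances: 3            # three PostgreSQL instances with automated failover
  storage:
    size: 10Gi            # illustrative volume size
```

Horizontal scaling is then just a matter of changing `instances` and re-applying the manifest; the operator does the rest.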
How do you implement topics such as RBAC, network policies and secret management?
There are many nuances – things that can, must and should be considered, depending on how you look at it. What I can say in any case is that you have to test a lot. For example, if you write your own operators, they also have to communicate with the Kubernetes API, so you need to make sure you use the right RBAC rules and grant only the minimum permissions necessary. For network policies, you can achieve a lot with Cilium Hubble, which ships with our clusters by default. It lets you observe the network traffic and see exactly where flows are dropped and where traffic is not forwarded or allowed. With that insight you can write network policies and gradually allow more traffic until everything works as desired. For secret management I currently use Sealed Secrets. It offers a simple way to take Kubernetes secrets, encrypt them and commit them to Git. You can then roll them out with the GitOps tool of your choice and even store them in a public repository without running the risk of any secrets being leaked. There are also other options, such as the External Secrets Operator, which lets you manage secrets in stores outside of Kubernetes.
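The gradual allow-listing described here typically starts from a default-deny policy and then opens up individual flows. A hedged sketch with standard Kubernetes NetworkPolicies – the namespace, labels and port are hypothetical:

```yaml
# Step 1: deny all ingress traffic to every pod in the namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: example-app        # hypothetical namespace
spec:
  podSelector: {}               # selects all pods
  policyTypes:
    - Ingress
---
# Step 2: re-allow one observed flow (web pods -> db pods on 5432).
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-web-to-db
  namespace: example-app
spec:
  podSelector:
    matchLabels:
      app: db                   # hypothetical label
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: web
      ports:
        - port: 5432
```

With a tool like Hubble you can watch which flows are still dropped and add further allow rules one by one.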
How does customer feedback flow into technical improvements to the platform?
We are always open to customer feedback, and we implement almost everything that comes in. One example is the Kubernetes API: customers wished for it to stay internal and not be reachable from outside, and that change came about purely as a result of their feedback. So we are very happy to pick up ideas and requests from customers and to evaluate new technologies. Currently, for example, we would like to implement a dual-stack load balancer for a customer.
What was your most complex Kubernetes problem – and how did you solve it?
The most complex Kubernetes problem was Gardener itself, our new platform for deploying Kubernetes, because many things there are nested and interlinked. For example, you first need an initial cluster before you can deploy Gardener at all, which means Gardener cannot manage its own cluster at the start. That creates a chicken-and-egg problem, so you have to bootstrap with other tools.
Which Kubernetes features or trends do you currently find particularly exciting?
I can come back to what I said earlier: the Gateway API. The Gateway API is essentially Ingress 2.0, its replacement. I find it very exciting at the moment because much of it is kept as universal as possible without neglecting the differences between implementations. Unlike with Ingress, you don't have to set countless annotations, which unfortunately cause security problems all too often – see ingress-nginx. With the Gateway API, the goal is to keep the API itself universal so that you can simply swap out your Gateway API controller if necessary. I therefore think this could become very exciting in the future. The Gateway API also offers ways to make AI models available via an API and to control access to them – that was a topic at KubeCon.
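That universality can be illustrated with a minimal sketch of a Gateway plus an attached HTTPRoute; the class, names, hostname and ports below are all placeholders, and swapping the controller only means changing the `gatewayClassName`:

```yaml
# A Gateway (typically owned by the platform team) ...
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: example-gateway
spec:
  gatewayClassName: example-class   # implementation-specific (e.g. Cilium, Envoy Gateway)
  listeners:
    - name: http
      protocol: HTTP
      port: 80
---
# ... and an HTTPRoute (typically owned by the application team).
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: example-route
spec:
  parentRefs:
    - name: example-gateway
  hostnames:
    - "app.example.com"             # hypothetical hostname
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /
      backendRefs:
        - name: example-service     # hypothetical Service
          port: 8080
```

Because routing semantics live in the API itself rather than in controller-specific annotations, the same manifests should work across conformant implementations.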
What motivates you personally about working with Kubernetes?
Kubernetes lets you declare a desired state and return to it at any time without having to do anything yourself. You can also specify something that operators then roll out as if by magic, which makes your life easier. Admittedly, the initial hurdle is very high. But once you've got to grips with it, it's a wonderful tool for running your workloads and building your applications on top of it.
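The declarative model described here can be shown with the smallest possible example – a Deployment whose name and image are placeholders: you declare three replicas, and if a pod dies, the controller recreates it to restore that declared state.

```yaml
# Desired state: three replicas of the app.
# The Deployment controller continuously reconciles reality toward this spec.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app             # hypothetical name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: app
          image: nginx:1.27     # placeholder image
```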