Episode 9: Thorsten Klein, creator of k3d

Thorsten Klein, Dax McDonald, Beyang Liu

Thorsten Klein is the creator of k3d, a tool that lets you run a lightweight Kubernetes cluster (k3s) inside a single Docker container. This makes it much easier to spin up a Kubernetes cluster in places like your dev environment, your CI pipeline, or a low-resource environment like a Raspberry Pi.

Thorsten is a DevOps engineer at Trivago, where he works on developer experience for a team that maintains a set of bare-metal Kubernetes clusters. Dax McDonald, former engineer at Rancher Labs, joins. We chat about the ways in which developers are using k3d, the motivations and inspirations for writing it, and other tools we find useful in the Kubernetes ecosystem.

Show Notes

Thorsten Klein: https://twitter.com/iwilltry42

k3d: https://k3d.io

Docker Compose: https://github.com/docker/compose

BlueJ, Java IDE: https://bluej.org

Trivago: https://github.com/trivago

Bitlocker: https://en.wikipedia.org/wiki/BitLocker

Rancher: https://rancher.com

HashiCorp Nomad: https://www.nomadproject.io

"Explain it like I'm Five" for Kubernetes, from Miguel Mota: https://dev.to/miguelmota/comment/filh

K3s: https://k3s.io

Darren Shepherd, CTO of Rancher: https://twitter.com/ibuildthecloud

minikube: https://minikube.sigs.k8s.io/docs

MicroK8s: https://microk8s.io

etcd: https://etcd.io

SQLite: https://www.sqlite.org/index.html

DQLite: https://dqlite.io

Kine, abstraction layer that swaps out the backend for etcd: https://github.com/rancher/kine

containerd and runc: https://containerd.io, https://github.com/opencontainers/runc

Flannel container networking interface: https://github.com/coreos/flannel

CoreDNS: https://coredns.io

Traefik, ingress controller: https://containo.us/traefik

Helm: https://helm.sh

iptables: https://linux.die.net/man/8/iptables

socat: https://linux.die.net/man/1/socat

k3sup ("ketchup"): https://github.com/alexellis/k3sup

Alex Ellis: https://twitter.com/alexellisuk

podman, alternative container runtime: https://podman.io

Kubeadm: https://kubernetes.io/docs/reference/setup-tools/kubeadm

snapd: https://en.wikipedia.org/wiki/Snap_(package_manager)

Tilt: https://tilt.dev

Skaffold: https://skaffold.dev

k3d demo repository: https://github.com/iwilltry42/k3d-demo

k3s-in-docker, created by Rishabh Gupta, the inspiration for k3d: https://github.com/zeerorg/k3s-in-docker

Tweets about k3d (original announcement, reply from Darren Shepherd): https://twitter.com/Rancher_Labs/status/1113449304281288704, https://twitter.com/ibuildthecloud/status/1113501021102194688

Linkerd: https://linkerd.io

Service mesh: https://www.redhat.com/en/topics/microservices/what-is-a-service-mesh

Deezer k3d fork: https://github.com/deezer/k3d

Argo: https://argoproj.github.io

kube-ps1: https://github.com/jonmosco/kube-ps1

Indent-Rainbow, VS Code plugin for editing k8s YAML: https://github.com/oderwat/vscode-indent-rainbow

RKE, Rancher Kubernetes Engine: https://rancher.com/docs/rke/latest/en

GitOps: https://thenewstack.io/what-is-gitops-and-why-it-might-be-the-next-big-thing-for-devops

Katacoda Kubernetes course: https://www.katacoda.com/courses/kubernetes

CNCF "explain it like I'm 5" comics: https://www.cncf.io/phippy-goes-to-the-zoo-book

k3x, created by Alvaro Saurin: https://github.com/inercia/k3x

Buildkit: https://github.com/moby/buildkit

k3c: https://github.com/rancher/k3c

arkade: https://github.com/alexellis/arkade

Transcript

This transcript was generated using auto-transcription software and the source can be edited here.

Beyang Liu: Alright. So today is a historic moment in Sourcegraph podcast history, because for the first time I am joined by a cohost, Dax McDonald, who is my teammate at Sourcegraph. Uh, and he is fairly active in the Kubernetes and Docker world. So hello, Dax.

Dax McDonald: Hello. Great to be here.

Beyang: And we are joined by Thorsten Klein, an engineer at Trivago and the creator of k3d, a tool that lets you run a lightweight version of Kubernetes inside a single Docker container. Thorsten, welcome to the show.

Thorsten Klein: Hi, thanks for having me.

Beyang: So before we dive into kind of the nitty-gritty details of k3d and where it fits into the Kubernetes ecosystem, uh, we always like to kick things off by just asking our guests how they got into programming and what has been kind of their journey as a programmer.

Thorsten: Um, yeah, so I wish there was kind of one of those marvelous stories which we already heard on the podcast here, but unfortunately there isn't. So, uh, I was already interested in computers as a kid, but mostly as an end user. Um, then in like 10th grade high school, I got a basic computer science introduction course, which was like Java, super simplified, in BlueJ.

Beyang: Oh, BlueJ. I remember I used that in my first class, too.

Thorsten: Yeah. I actually never heard about it again after that one.

Beyang: Same.

Thorsten: Yeah, so we built some simple Snake implementation, and I was like, wow, cool, I can tell a computer to draw a moving line that eats some pixels. And that was like, ooh, amazing. Um, yeah, unfortunately that course got discontinued because, um, too many people lost interest in it.

So after school, I went off to study chemical geology, but, uh, that kind of got too boring after a year of standing in the laboratory all day long. So I dropped that and, um, got into studying computer science in Düsseldorf. And yeah, to pay my rent, I started working as a working student at Trivago in IT support.

So I learned a lot about, uh, people using computers, or, uh, trying to use computers. Um, and I helped my teammates to create some automation scripts with Bash. I got into PowerShell and finally to Python. And I have to admit, I really started liking Python way, way more than Java or C, which I learned in university.

So, uh, I wrote myself some login scripts and BitLocker stuff.

Beyang: Yeah.

Thorsten: That was, that was pretty cool. And in 2017, I then switched to the web performance team inside Trivago to write my bachelor thesis. And that's how I got in touch with the concepts of microservices, containers, container orchestration (not Kubernetes, but Nomad at the time), and also learned programming in Golang.

Um, and yeah, then one year later I finished the thesis. I moved to the marketplace department to create a new kind of DevOps team. Um, and I was handed over a cluster of 20 bare-metal machines running kind of a Compose-based environment, which was Rancher 1.6, and one of my first tasks there was to move that whole environment to Kubernetes,

because Rancher was going to drop support for the old setup by the end of the year. So we moved to 2.0, and that was kind of a bumpy road into Kubernetes with no one else knowing it there. Yeah, that's what I've been doing for the past few months and years: deep diving into Kubernetes, learning all the things about those DevOps kind of buzzwords.

Beyang: At Sourcegraph, we kind of had a similar situation where an engineer on the team said, hey, you know, what's this new Kubernetes thing?

Uh, at that time, I think most people were still using, you know, something like Nomad, or, um, what's the other one, uh, starts with an M, I'm blanking on it. Uh, and then we kind of tried it out. We liked it. And then everyone else was just kind of figuring things out, reading through the documentation and probably doing a lot of stuff wrong.

But, um, I guess now I think you'd be considered one of the leading experts in the Kubernetes world, cause you've contributed such a significant project. So, um, I guess for myself, who, you know, I still consider myself kind of a Kubernetes newb, um, and for a lot of our listeners, the Kubernetes ecosystem can be oftentimes a bit overwhelming, just cause there's so much happening.

Um, so I guess first I'd like to motivate the discussion, just starting with the super basics. I would love to hear your explain-it-like-I'm-five explanation of what Kubernetes is and why it exists.

Thorsten: Yeah. Uh, yeah, first, I guess I wouldn't call myself an expert in Kubernetes. It's like, uh, I also kind of hate reading documentation, so it's mostly trial and error for me. And when I struggle with something, I just try to build a solution myself, because sometimes I'm a little bit too impatient. So that's why I probably contributed more than I learned. Um, yeah. Uh, ELI5 for Kubernetes, that's a difficult thing. I found a really, really good one on dev.to, um, by a user called Miguel Mota. Um, I just assume that all of the listeners know what containers are.

So imagine containers as cattle, which is a cool reference to Rancher, by the way, um, or cows standing on some fields, um, eating some grass. The field is kind of a node, like a physical server or virtual machine, and the grass is the resources that the node has to offer. Then Kubernetes is kind of the rancher that checks the grass on the fields, that enough is available for all the cows to eat.

And if there's not enough, then it just moves the cows around so that all of them have enough to eat. So it just moves them to different fields, or nodes in Kubernetes terms. And if some cows maybe die from some disease, or, um, because the field is poisoned or something, then it takes care of bringing in new cows and keeping the milk production up to the demand by dynamically, uh, scaling up the cows.

Dax: And I think if you, uh, if you continue that analogy, it might be something where you talk about, you know, like actually moving the cows to different pastures would be another huge role of Kubernetes.

Thorsten: That was just one of the best ELI5s, like real ELI5s, that I could find out there.

Beyang: Yeah, I really liked that analogy. I think it's really good. Um, I think it might break down a little bit in what we're going to talk about, though. Cause like k3d is a way to run Kubernetes inside a Docker container. And so if Kubernetes is the rancher and the containers are the cows, then what does it mean when you're running the rancher inside,

uh, a cow?

Thorsten: Well, yeah, that's a difficult one. This is actually just to, uh, bring the whole ranch, uh, into your house, or into your computer. So you don't want to, uh, go out there in the stormy weather and check on the fields and on the cows. So you just bring the whole thing into your home.

Beyang: So the field, the field is production. It can be a scary, unforgiving place. And the home is like your development environment.

Thorsten: Yeah, exactly. Exactly.

Beyang: Makes sense. Cool. So, you know, k3d is a containerized distribution of k3s. So before getting to k3d, um, you know, can you explain what k3s is and why it exists?

Thorsten: Well, first off, I'm kind of not involved with k3s itself, right? So that's a Rancher product. Um, at the basics, uh, k3s is just five less than Kubernetes, right? So k3s versus k8s, that's also where the naming comes from. It's like half of Kubernetes. And, um, it's basically a super lightweight and certified Kubernetes distribution, which wraps all the Kubernetes components in a single multi-call binary and puts it in a simple launcher.

It's like putting all the, um, control plane and the kubelet and scheduler and controller manager all in a single binary, making it easy to deploy to various places. Um, and it also trims some of the Kubernetes components which are mostly not used for the intended use case anyways, to reduce the overall size of the binary.

For example, it, um, trims away the in-tree storage providers or cloud providers, which you probably don't need if you're running Kubernetes, for example, on your Raspberry Pi at home.
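To make that single-binary simplicity concrete, here's roughly what trying k3s on a Linux box or Raspberry Pi looks like. The install script URL is the official one from the k3s docs; the rest is just the standard single-binary flow:

```shell
# Install k3s and start it as a service (official install script):
curl -sfL https://get.k3s.io | sh -

# Alternatively, download the single k3s binary and run the server directly:
sudo k3s server &

# k3s bundles kubectl in the same multi-call binary,
# so you can inspect the one-node cluster right away:
sudo k3s kubectl get nodes
```

The same binary also acts as the agent (`k3s agent`) when joining additional nodes, which is what makes it so easy to deploy everywhere.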

Dax: I've always been curious: when did you first hear about, uh, k3s?

Thorsten: That was actually like two weeks before I started working on k3d, so...

Beyang: Well, you move fast.

Thorsten: Yeah. Um, I just started liking it because it was like super lightweight. Um, at that time I was trying out different development tools for Kubernetes, so the developers in my team at Trivago could run Kubernetes locally.

And then I stumbled upon a tweet by Darren, I think, Darren Shepherd, um, about k3s, and I think it was pretty early stages back then. And I just tried it out on my laptop at home and it was running perfectly. And then I was thinking, hmm, I need to get this onto every developer's machine, which didn't quite work, because back then it was like a binary that would only run on Linux machines.

Beyang: How does k3s differ from other kind of, um, lightweight Kubernetes distributions? Like, there's minikube, um, which I think was kind of the first, and then like MicroK8s, uh, which is, I think, Ubuntu's. Um, how does k3s relate to things like that?

Thorsten: Hmm, yeah. I mean, the biggest difference between all of them compared to k3s is that k3s is way, way lighter. So for example, I'm not sure how it is with minikube actually, or MicroK8s, if they are actually modifying Kubernetes itself, or the upstream code, to trim away the, uh, unneeded parts of it. But k3s also comes with, um, kind of batteries included.

So, for example, it switches out the storage backend. So it doesn't rely on etcd, but uses SQLite, uh, as an embedded process, or DQlite if you're running multi-server setups. But it's using a shim for that, so, um, an abstraction layer on the data store, which is called Kine. That's also a Rancher home-grown project that enables you to swap out the etcd backend for MySQL, PostgreSQL, and stuff like that.

And k3s also comes bundled with everything that you need to run Kubernetes, for example, on your Raspberry Pi at home, which is: containerd and runc as the, uh, runtime environment, uh, Flannel as the container networking interface, CoreDNS, the metrics server, Traefik as the default ingress controller, um, an embedded service load balancer, um, a Helm controller, and many, many more things.

And even the host utilities like iptables, socat, and so on.
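Because k3s ships Kine, the datastore swap Thorsten describes is just a server flag. A hedged sketch (`--datastore-endpoint` is the documented k3s flag; the credentials and hostnames below are made up):

```shell
# Default: embedded SQLite, no extra flags needed.
sudo k3s server

# Via Kine, swap the backend with --datastore-endpoint, e.g. MySQL
# (hypothetical credentials and host):
sudo k3s server \
  --datastore-endpoint='mysql://user:pass@tcp(db.example.com:3306)/k3s'

# PostgreSQL works the same way:
sudo k3s server \
  --datastore-endpoint='postgres://user:pass@db.example.com:5432/k3s'
```

The Kubernetes components still believe they are talking to etcd; Kine translates the etcd API calls to the SQL backend underneath.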

Beyang: Interesting. So if you're creating kind of like a home Raspberry Pi server, then k3s is what you want to use if you want to run Kubernetes on it.

Thorsten: I will definitely suggest that to all my friends. Um, actually, when it comes to home clusters and Raspberry Pis, there's a pretty cool blog post, and also a tool, uh, written by Alex Ellis, who created a project called k3sup, or in short "ketchup", um, to deploy k3s clusters on, for example, Raspberry Pis.

So everywhere you have SSH access to, which is pretty cool.
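For reference, the k3sup flow Thorsten mentions looks roughly like this. The IPs and usernames are hypothetical; all k3sup needs is SSH access to the targets:

```shell
# Install k3s on a Raspberry Pi over SSH and fetch its kubeconfig locally:
k3sup install --ip 192.168.0.101 --user pi

# Join a second Pi to the cluster as an agent:
k3sup join --ip 192.168.0.102 --server-ip 192.168.0.101 --user pi

# k3sup writes a kubeconfig into the current directory by default,
# so you can talk to the cluster straight from your laptop:
export KUBECONFIG="$(pwd)/kubeconfig"
kubectl get nodes
```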

And, uh, coming back to your previous question: um, back then, when I first touched minikube, it required you to spin up a virtual machine on your localhost. I just learned that it now supports Docker and Podman, even bare metal.

But I think under the hood it still uses kubeadm to set up the whole cluster, even if it's multi-node. Um, so there's still some startup time required to get the whole configuration done. And there k3s, I think, has a small benefit, because it has a super fast startup time and everything is kind of bundled in a single process,

thus taking up way less resources. MicroK8s, on the other hand, um, kind of has the same targets as k3s, which is running Kubernetes on developer machines, having Kubernetes for edge computing, Internet of Things, or even CI/CD pipelines. But if I'm not wrong, I think it's still a full-blown Kubernetes distribution.

And also, since it's a Canonical project, uh, it requires snapd to be present on the machine, because it packages Kubernetes as a snap package. And thus it will require a VM to run on Windows or, or even Mac.

Beyang: Makes sense. Kind of turning back to the use case that you were targeting. So, um, you were investigating how to get Kubernetes running in development environments at Trivago. Um, can you kind of paint us a picture of, um, I guess, what development environments looked like before,

um, k3s and k3d were adopted, and what does it kind of look like today? Because I think a lot of people, you know, want to use Kubernetes in production, but then the development side of the story is much less, uh, clear. So, um, yeah, can you talk about that?

Thorsten: Yeah, sure. Um, in terms of Trivago, I can obviously mostly only speak for my department, which is like 50 people, and we are kind of running the backend of Trivago, the marketplace side. So it's not the user-facing website that my team handles. Um, that's also why we're running on bare-metal machines, um, because we have some pretty resource-heavy tools there. Um, and as I said before, when I started, I got handed over a cluster of 20 machines running Rancher 1.6. That was a Docker Compose-based environment, so you would also use the Compose-like files to deploy to production. Uh, Rancher had another abstraction layer called rancher-compose for, um,

kind of auto-scaling, um, features and health checks and stuff like that. And so the local development environment for the developers was obviously also Docker Compose, because it makes the most sense: you spin up your few containers locally, and Docker Compose is super fast and lightweight, and you kind of have it installed alongside Docker on your machine anyways.

And so that was the only thing that made sense. And it was also the simplest thing. And since we switched to Kubernetes in production, we also kind of wanted the local development environments to be as close to production as possible. Currently we're still in the transition period. So we're still halfway using Compose, and even using Docker Compose in our CI systems, but we also have the first projects using, um, k3d or other, um, environments for local development.

Uh, so the goal of k3d, for me, was having, um, a local Kubernetes development environment that would be as simple to use as Docker Compose. It's still a long way to go for that, but I think we are on a pretty good track there.

Beyang: And so in that development environment, um, what does a build process look like? Let's say, do you have like a single script that kind of, uh, spins up a k3d container and then watches the files and rebuilds containers and replaces them as you kind of change the files? Or, you know, what does that look like?

Thorsten: Today, in the few test environments, we're mostly doing it manually, but we had a look into using tools like Tilt or even Skaffold. So I think from one of the last episodes, you know, Tilt: it watches your local files, rebuilds the container and also deploys. And, uh, luckily, Tilt even already knows about k3d, which is pretty cool.

And I think we should work on collaborating even more on that. So it builds the containers and directly uses the k3d image import command to get the images present in your cluster to be used, and then it can update the manifests accordingly.
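Done by hand, the build-and-import loop Thorsten describes looks roughly like this. Image, cluster, and deployment names are hypothetical; `k3d image import` is the v3 command:

```shell
# Rebuild the image on the Docker host:
docker build -t my-app:dev .

# Copy it from the Docker host into the cluster's containerd image store:
k3d image import my-app:dev -c dev-cluster

# Restart the deployment so pods pick up the freshly imported image
# (useful when the tag stays the same between rebuilds):
kubectl rollout restart deployment/my-app
```

Tools like Tilt automate exactly this watch, build, import, redeploy cycle.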

Beyang: I mean, that sounds super cool. I guess, how long between when I change a file and when I can see kind of the change reflected in my development instance? Is that, like, you know, seconds, tens of seconds?

Thorsten: Um, yeah, I guess that kind of depends on the size of your Dockerfile, right? So if you have a super large base image, it obviously can take up to minutes. And this is also kind of one of the weakest points of k3d, I have to admit, because inside k3d there's still k3s at its core, and that's running containerd, uh, plain containerd, while on your local machine you usually have Docker.

And unfortunately, both of them have different image storage formats. So you cannot simply share your local, um, image file system with k3d; that's currently not possible. Um, so you always have to build, push, or import the image, and that can take a few seconds to, um, even minutes, depending on how large the images are and how many nodes you spun up with k3d.

The cool thing is that when you have a non-compiled or an interpreted language like Python, as we have it, you can just mount some source directory into the k3d nodes and then have your app pick that up. For example, if you're building a, I don't know, a Flask application that supports hot code reloading, then you can just mount your source code there, and then your container, or your deployment inside the cluster,

picks up the source code from the host's file system. That's a little faster.
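A sketch of that volume-mount workflow, using k3d v3's `--volume` flag with a node filter. The paths, cluster name, and the Flask setup are hypothetical:

```shell
# Mount the local source tree into the first agent node at creation time
# (k3d v3 node-filter syntax; later versions changed the filter format):
k3d cluster create dev \
  --volume "$(pwd)/src:/src@agent[0]"

# Inside the cluster, a pod scheduled on that node can then mount /src
# as a hostPath volume. If the app runs with hot reloading enabled
# (e.g. Flask with FLASK_ENV=development), edits on the host show up
# in the running container with no image rebuild or import at all.
```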

Beyang: Yeah, because you're not building and importing an image, then it's just, uh, changing the files on disk.

Thorsten: Yeah. There's actually a k3d demo repository in my private space on GitHub. You can try that with an example application, and also with all the k3d, um, scripts and configuration files.

Beyang: Yeah, I'm kind of tempted to ask Dax to look into this for our dev environment at Sourcegraph. Cause, yeah, like, the big upside here is that you have a development environment that's, like, super super close to what you're actually deploying into production, right? So you basically reduce all those kinds of issues where, you know, something weird is happening in production, but when you run it in development, you can't reproduce it, and then you have to go and, like, dive into, okay, what the heck is different about these two kinds of environments?

And that's often a, uh, rabbit hole.

Dax: Exactly. Uh, Thorsten, yeah, what were developers doing, like, uh, during the... you said there's a transition to Kubernetes over, uh, a few years even, it sounds like. How were developers actually testing before? Were they using minikube, or what were they using to test their changes locally?

Thorsten: Um, uh, actually, the transition period was just a few months. I mean, the cluster was fairly small, and luckily we were not, uh, held to, um, a hundred percent uptime. So that was pretty, pretty good for me. Um, yeah, before, they just used Docker Compose, because, uh, we simply couldn't find a solution that would fit all of us back then, because we were having a mixed environment.

Everyone can choose their own hardware, and also operating systems, of course. So we have macOS, we have Linux, different flavors of Linux, of course, uh, and Windows. And back then, minikube wouldn't run on all of them. And, uh, also some other tools that we tried wouldn't run on all of them. So we were kind of forced to stick with what we knew was working, and that was Docker Compose.

And now that we've switched production and everything fully to Kubernetes, we are slowly trying to get rid of Compose in favor of going all in on k3d, or maybe, uh, some other tools even, depending on, uh, on the use cases.

Beyang: I see. So you're still in the process of switching. How many distinct, um, I guess, development environments do you have, uh, in your team of 50? My understanding is, like, some projects still use Docker Compose, but you're trying to switch most projects to using k3d in the developer environment. Is that like on a per-repository basis, or?

Thorsten: Yeah, yeah, that's per repository and also per team. So there are teams that are more data-science related, so they are the last ones to migrate. And then we have some developers who are always on the bleeding edge, who are going with me and just giving it a try. Those are also the ones that are helping me with, um, developing k3d and giving constant feedback to me.

Beyang: Are we talking about like dozens of repositories, or hundreds? Okay.

Cool. I would love to dive into kind of the k3d internals, and to kick things off, um, you know, why is k3d kind of like a nontrivial project?

You know, the straw man here is like, oh, it's just k3s shoved inside a Docker container. Um, you know, shouldn't that have been relatively straightforward to do? You know, I know it's not, but like, you know, what went into that, and what, um, did you have to kind of work on to make it work inside a Docker container?

Thorsten: Um, to be honest, uh, I didn't have to do anything, because the k3s upstream repo already had a Dockerfile. So they were already building the images, and they also had a Docker Compose, uh, file pre-made for that. So, um, and even for inventing k3d, I didn't have to do anything, because I didn't invent it myself.

So back then, when I was looking at k3s and was checking out the tools, I was checking all my usual channels, like browsing through Reddit, obviously not during work time, right, and Twitter. And then on Twitter, I stumbled upon a post by a guy called Rishabh Gupta. And he was writing about, um, a CLI he wrote, called k3s-in-docker, and I was like, wow, k3s is pretty cool.

Docker runs everywhere. That will be the perfect fit for us. So I went there to the repo, I tried it out, and I was like, that's pretty cool. It was kind of the simplest way of doing it, right? So you just have Docker installed, and it just makes, um, uh, exec calls to the operating system to call the Docker binary itself, or the executable, and spin up the containers.

So kind of exactly what Docker Compose does, but more aware of what you're actually doing. So more aware of that you're running k3s in containers. Uh, and that was pretty cool. But then I hit a roadblock, which is: I wanted this hot code reloading, which our Python developers definitely wanted to have for the solution.

And there was no volume mount feature in k3s-in-docker. And so I tried to just create a PR for it, but then, the whole project was written in Rust. Um, and to me, Rust wasn't the simplest language to learn in like half a day. Uh, and as I mentioned earlier, sometimes when I'm excited about trying new things, I can get quite impatient.

So I just thought, yeah, I cannot make this work in half a day by learning Rust, so let's just build it myself. And so I went and rewrote k3s-in-docker in Go, which was then called Tesco or something. And I thought, yeah, that would be a good idea, because Go is a super-easy-to-learn language, in my opinion.

And it's also the language that's used by like 98% of all the tools in the Kubernetes ecosystem. So I thought that would be a great fit. Um, and once I finished that, I also showed it to Rishabh, and he liked it. And then I posted it on Twitter, also with a back reference to the original repository. Um, and it got quite a lot of attention, which I wasn't expecting,

cause I had just gotten onto Twitter and this was like my first tweet. Wow, that's super cool. And then Darren, the, um, architect and CTO of Rancher, somehow stumbled upon my post and got in touch with me by direct message and said, that's a pretty cool project, and asked if I wouldn't like to just include it in the Rancher space, uh, on GitHub. And, uh, yeah, I just thought that was amazing. I was really overwhelmed by the attention it got, and was like, just move it there. And then we had a group chat with Darren, Rishabh, and me, and so we moved the project to the Rancher space on GitHub. Then we froze the old Rust implementation repository.

Um, we thought, obviously, k3d would gain much more attention and love when it's a Rancher community project. That's also when Rishabh got onto it and also started developing it further, in Go. So I kind of just picked that up and rewrote it in another language, but anew, and built it up on a solid base.

Dax: I remember when, uh, like, when we saw k3d coming out, I mean, I personally downloaded it right away and it immediately scratched that itch. It's so great when you can find a developer tool that, just like, immediately was like, yes, this is exactly what I wanted, it simplifies things. Uh, it was like a perfect moment where, all of a sudden, my personal workflow immediately got easier after using it.

So it was, it was really exciting to see that come out.

Thorsten: Yeah, super happy to hear that. It's also a perfect fit for me, too, right? A tool that I myself, uh, use every day and have to use every day, and which also solves some of my problems. It's really great to work on it.

Beyang: Thorsten, you mentioned the initial feature that you wanted, that wasn't available in the existing Rust-based implementation, was the, the volume mount. Um, have there been more features that have been added to k3d, um, over time, that would be of interest?

Thorsten: Ooh, uh, I guess plenty. So we have port mappings now; I don't know if they existed before that. Uh, but first of all, one of the biggest things I did over the last, uh, half a year, or eight to ten months, was rewriting the whole k3d code base to have a proper code structure, because it was my first Go project.

And so there was lots of spaghetti code there.

Beyang: Oh, this is your first Go project as well? So you basically learned Go in half a day in order to write k3d?

Thorsten: No, I used Go before for my bachelor thesis, but that was, this was like writing a Prometheus exporter back then. And this was my, uh, first Go project that was more, more than three files.

Beyang: Wow. That's super impressive.

Thorsten: Yeah, so that's why it grew organically and didn't look really nice. So I took the time, of course, a few months, to rewrite it, created a new repository structure, and also got rid of the,

uh, calls to the Docker executable and replaced them with actual API calls using the Docker SDK. So that's probably a little more robust. And now we have port mappings. Um, since k3s 1.0, we support multi-server clusters, so clusters that can have multiple server and agent nodes. Noteworthy here is that we recently switched from the old worker/master terminology to agent/server,

so just for some people that might get confused by it. And we also introduced a new load balancer feature, which is putting another container next to your k3s nodes, which is doing load balancing over the server nodes that you might spin up. Which also eases, um, the port mappings, because you can map multiple ports just to the load balancer, but it's forwarded to multiple backend nodes.

Cause in Docker, you cannot expose ports after you actually created the containers.
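A sketch of what that looks like with k3d v3. The cluster name and ports are made up; `--servers`, `--agents`, and the `@loadbalancer` node filter are the v3 flags:

```shell
# Create a cluster with 3 server and 2 agent nodes, exposing host port
# 8080 through the load balancer container to port 80 in the cluster:
k3d cluster create ha-demo \
  --servers 3 \
  --agents 2 \
  --port "8080:80@loadbalancer"

# The extra proxy container balances traffic across the nodes, so one
# host port can front multiple backend nodes even though Docker cannot
# add exposed ports to containers after they have been created.
kubectl get nodes
```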

Dax: And just to add a little bit more on the, I think, the multi-server: I've seen other projects, such as Linkerd, use that to, uh, demo a service mesh on a single machine. Um, so I think there's a ton of community excitement about that too, for using, like, multi-server and things like that. Thorsten, have you seen, are there any projects that have used k3d that you're like, wow, that's really cool?

I'm sure there's a lot.

Thorsten: Um, I'm actually astonished that they used it for a demo, that's pretty cool. Oh, um, actually, I know that some projects, um, expressed interest in using it, but I actually don't know how many people are, um, using it. I know that Tilt, for example, is including it in their code base to support it as a development environment.

I recently saw that, uh, the company called Deezer is using it and also extending it for their needs. Apart from that, I kind of remember something in the direction of Argo using it for CI/CD and end-to-end testing, but I cannot recall the details there.

Dax: Yeah, I've definitely seen a ton of community involvement around k3d. And one last question I had: I know I'm still a neophyte at using k3d. Are there any features, is there anything in k3d, you feel like people haven't really discovered yet or taken full advantage of?

I think you recently rewrote a good portion of it for the 3.0 release. Do you have anything that you'd love to highlight for people to go and try?

Thorsten: Oh, that's a pretty good question. So, to highlight what changed in the recent versions: obviously, the whole syntax changed on the CLI. There was a large discussion about whether we should stick to the syntax that you know from kubectl, which is this verb-object kind of thing, like kubectl get nodes.

We decided to go the other way round, Docker-style. So it's now k3d cluster create. There are actually many people asking why create cluster doesn't work anymore, and I'm like, yeah, it's cluster create now, so sorry about that. That's definitely something that people should know. Also, the load balancer that I just mentioned is a pretty new concept.
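For readers following along, the CLI change he describes looks roughly like this (the old v1.x invocations are reconstructed from memory; double-check against the k3d docs for your version):

```shell
# Old verb-object style (k3d v1.x):
#   k3d create --name demo
#   k3d delete --name demo

# New Docker-style noun-verb syntax (k3d v3.x):
k3d cluster create demo
k3d cluster list
k3d cluster delete demo
```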

It just dropped in 3.0, and it makes some things way, way easier. Also, the whole multi-server setup was only just introduced in 3.0 and the development releases, and it's kind of one of the coolest things that we have there. And image importing is one thing that many people are asking for, or were asking for.

And it's even easier now: you can import images either from your local Docker daemon, or you can just have a tar archive somewhere and import directly from that archive all the images that you need to be running in the cluster. Maybe at some point we will add some automation there as well.
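A sketch of the two import paths he mentions (the image and cluster names are placeholders):

```shell
# Import an image from the local Docker daemon into the cluster:
k3d image import my-app:dev -c demo

# Import all images contained in a tar archive:
k3d image import ./images.tar -c demo
```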

Beyang: It seems like a lot of the features you mentioned that have been added are geared more towards a production use case rather than a development use case, like multi-server, that sort of thing. Is that kind of a shift that's taking place? Initially you built it with a developer environment in mind, but the community has taken it and run with it and is now trying to use it in production.

What exactly are the production settings? Is it mostly Raspberry Pi, that sort of thing, or are you also seeing other sorts of production settings?

Thorsten: Hmm. You can certainly run it on a Raspberry Pi. In fact, we just recently added the multi-arch manifest for the Docker containers that we provide, and also improved the multi-arch builds in our CI pipeline. But at least in my mind, it's still not targeted at production. There are just so many pitfalls, which are introduced by the additional Docker layer.

For example, you have to know beforehand which ports you are going to use in all of the applications. So that's kind of suboptimal for a production environment. I think the reason we added those features is that people not only want to develop against it, because for that usually a single-node cluster is enough, or maybe one server and one agent.

They also want to test the features that Kubernetes has. Kind of like: you spin up a cluster with three server nodes and three agent nodes, and then one of them goes down, and what happens? Those are the kinds of scenarios that you may want to test. That's also where the load balancer kicks in, because without the load balancer, which you can also disable when creating the cluster, you wouldn't have a kubectl connection anymore, or no connection to the API server anymore.

And then you know: hmm, maybe in production I should put a load balancer in front of it.
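The failure scenario he describes could be simulated along these lines (the node container naming convention shown is an assumption; check docker ps for the real names on your machine):

```shell
# A cluster with three servers and three agents:
k3d cluster create ha-demo --servers 3 --agents 3

# Knock out one server node by stopping its Docker container:
docker stop k3d-ha-demo-server-1

# kubectl should still reach the API, because the k3d load balancer
# forwards requests to the remaining healthy servers:
kubectl get nodes
```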

Dax: Do you see Kubernetes developers needing to test more of those scenarios? Is multi-master now something that might be in production, but that you actually need to simulate in your development environment? Are you seeing that becoming an increasing trend?

Thorsten: For developers, I actually don't think so, because at least in most environments that I know, the server nodes are restricted or tainted anyway, so there shouldn't be actual workloads running on them. I see it more as something for operators to give a try.

Beyang: Kind of shifting gears a bit, I would love to get your take on what other tools you find useful and valuable in the Kubernetes ecosystem, just because it seems like there are so many. Would you mind sharing with us the sorts of command-line tools, package management, secret management, whatever your team uses for developing against and deploying to Kubernetes? What are the things that you find most useful?

Thorsten: Well, there's a large list. I guess we could spend two hours going through all of them. There are some really obvious things that one might not even think about, like, for example, a plugin for your terminal to show which Kubernetes context and namespace you're currently in.

Because you really don't want to accidentally delete a deployment in production, right? You should know which context you're working against. For zsh, for example, there's kube-ps1, a plugin that always shows you in the prompt which context and namespace you're in.

And then, you know, Kubernetes is like 80% YAML, and sometimes YAML is hell, right? It's indentation, indentation, indentation. Luckily, your editor of choice maybe has some cool features to support you with the YAML and the indentation. I'm personally using Visual Studio Code, and there's a cool plugin called indent-rainbow.

It highlights each level of indentation in a different color, and that's pretty useful, actually. I was actually at an online meetup, I think two months ago, from Rancher, where 50% of the questions related to k3d were: what kind of plugin is he using for VS Code? So I guess people should know.
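For reference, the kube-ps1 prompt plugin mentioned above is typically wired into zsh something like this (the install path is a placeholder; see the kube-ps1 README for the exact setup on your system):

```shell
# In ~/.zshrc, after installing kube-ps1 (e.g. via a package manager):
source /opt/kube-ps1/kube-ps1.sh
PROMPT='$(kube_ps1)'$PROMPT

# The prompt then shows the current context and namespace, e.g.:
# (⎈|prod-cluster:default) ~ %
```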

Beyang: Is there any VS Code plugin that understands the Kubernetes schema, that will actually tell you if there's a semantic error in your YAML?

Thorsten: Yeah, actually there is. There's a Kubernetes plugin and even a Helm plugin that do some kind of real-time linting, but they don't catch all of the errors.

Beyang: I see.

Thorsten: It's kind of like 80%, and then the rest you catch when you try to actually deploy, because even helm lint doesn't catch all the errors.

Beyang: Why is that? Because in theory the schema is well-defined. I believe the source of truth is in Go types or something, and you could probably generate a JSON schema out of that, and YAML is just YAML. Why is it that there's no tool that's a hundred percent accurate so far? Or maybe I'm missing something here.

Thorsten: Yeah, probably I was also missing something, because I have absolutely no clue. I know that there are JSON schemas out there, and I guess they ultimately validate pretty well, so I'm not sure how the VS Code plugin works there. I just know that it happens fairly often to me that it doesn't catch all the errors. But that might be because I'm mostly using Helm templates, and with whatever you put in the template, it cannot check everything.

Right?

Beyang: Yeah, that complicates things a bit.

Thorsten: Yeah, true. So Helm would be one of the next things: we are using Helm for our Kubernetes packaging, distributing charts that we may reuse. An example would be from just recently. Since I'm in a kind of DevOps team, and it may be an anti-pattern, but I'm just gonna tell you the truth: we are running databases on Kubernetes.

Beyang: Interesting. We also do that. Yeah, say more about that.

Thorsten: Yeah, I think it's gotten better now that StatefulSets are out there. We are mostly using Postgres in our teams, in our Python developer teams, and we have quite a lot of small and tiny Postgres databases running everywhere in the cluster. And then, for example, we thought we would just create a Helm chart that everyone could pull as a dependency into their project, and we take care of integrating it.

With backup methodology, services that create metrics out of them, and restoration if you need it. So that's where Helm is a really cool tool. And then our core tools are obviously Kubernetes itself, which in our case is RKE, the Rancher Kubernetes Engine, used to spin up

Kubernetes on bare metal. That's, for example, a Kubernetes distribution which runs in Docker containers, so every component is isolated in its own container, which is pretty cool, because upgrades are fairly safe with it: it just removes the old container, stops it, starts the new one, or does kind of a rolling update there.

So you have almost no downtime of the Kubernetes API. And then we run Rancher on top as the interactive UI for Kubernetes, which is loved by our developers: they don't have to struggle with kubectl, and they have all the power of the Kubernetes API simplified in a nice and tidy UI.

Beyang: You mentioned earlier that your team runs Kubernetes clusters on bare metal, but there's another team that runs kind of the user-facing sites, and they use GKE or some cloud provider. Can you talk about the reasons for that split?

Thorsten: Yeah, so the split is mostly because we at Trivago are lucky: we have the freedom of choice to use whatever technology fits our use cases best. And my team just historically had this cluster, right? The cluster existed and we were running our services on it. We were even running some privileged containers, because there was a service running inside the containers that needed some kernel features.

And you don't want to have that at a cloud provider. Also, they are quite resource-hungry, so running them in a public cloud could get pretty expensive. Then, just a year ago or even a little more, we had multiple container orchestration services across all the teams at Trivago. For some, we were running

Mesos combined with Chronos, and Nomad, and bare-metal Kubernetes based on RKE in our department. Then last year we started consolidating everything a little bit. While most of the teams decided to go with Google Kubernetes Engine, because the services that Google offers in Google Cloud were just a good fit for those teams and they needed all the features like autoscaling, our team decided to stick with bare metal due to the high memory usage and kind of

for economical reasons.

Dax: Thorsten, you said you run Kubernetes on bare metal. I've heard that's fairly difficult. In your experience, has that been true? Are there more challenges to running Kubernetes yourself than with a public cloud offering?

Thorsten: For me, well, I haven't managed too many public cloud Kubernetes clusters, so my experience there is pretty limited. But what I know is that the biggest difference on bare metal is, of course, that you have to manage your control plane yourself. In Google Cloud, for example, Google takes care of managing and upgrading the server nodes.

So that's pretty cool for operators: you cannot screw things up as much as you can on bare metal. And I can tell you, I did screw up several times. For me, it was pretty simple, or pretty easy, up to now to run Kubernetes on bare metal, because there's RKE, as I told you, which is our Kubernetes distribution of choice.

And it made it just so simple to create the cluster: with a single configuration file and a run of rke up, we had everything, including the whole CNI setup, creating multiple nodes, connecting them all together, creating the TLS certificates between the components, and everything. So to me, the experience of running Kubernetes on bare metal was pretty good, actually.

Beyang: So does RKE handle most of the control plane updates for you? Or do you still have to SSH into nodes and run commands manually, that sort of stuff?

Thorsten: Yeah, so RKE is SSH-based, so you need SSH access to the nodes. What you will need is a config file, a cluster.yml file, where you configure your cluster: I have these nodes, these are the IPs, these are the SSH keys that you will need, and this role should be master nodes, or server nodes.

This should be a worker node. And then: I want CoreDNS as my DNS provider, I want, for example, Flannel as my container networking interface, and I want to run Kubernetes version 1.18. Then you run rke up with your config, and it will take care of creating or updating all the control plane nodes and all the worker nodes.

And so far it didn't crash once.
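A minimal sketch of the workflow he walks through, with hypothetical node addresses and an RKE-style cluster.yml (field names follow the RKE documentation, but verify them against your RKE version):

```shell
# Write a minimal RKE cluster definition:
cat > cluster.yml <<'EOF'
nodes:
  - address: 192.168.1.10
    user: ubuntu
    role: [controlplane, etcd]
  - address: 192.168.1.11
    user: ubuntu
    role: [worker]
network:
  plugin: flannel
# kubernetes_version and the DNS provider can also be pinned here;
# see the RKE docs for the exact keys and supported values.
EOF

# Then let RKE create (or update) the cluster over SSH:
rke up --config cluster.yml
```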

Beyang: That's awesome. You mentioned something a little while back that I want to revisit. When you talked about running databases in Kubernetes, you mentioned that that's considered kind of an anti-pattern. Why is that, and is it truly an anti-pattern in your view, or is it something that folks should get more comfortable with?

Thorsten: In my view, it is not an anti-pattern. I think it was actually a long time ago that it was considered one, back when StatefulSets weren't a thing yet and you had to take care of things yourself. We were actually running databases in production on Kubernetes as Deployments instead of StatefulSets.

And then you have to take care of pinning them to specific nodes, which is kind of not the Kubernetes way of doing it, right? Usually the pods are just floating around the cluster, wherever you have enough resources available. But with databases, you have to use local hostPath volumes to get the maximum performance, and then you need to pin them to the nodes.

And since nodes should also be treated as kind of cattle, not pets, that would be kind of an anti-pattern there.

But then StatefulSets came to the rescue to make some things a little simpler.

Dax: Thorsten, since you've been running Kubernetes for a while, do you have any horror stories? Any times when things went absolutely wrong?

Thorsten: Yeah, but those are actually mostly human error. So, deploying on Fridays and stuff like that. What happens fairly often is that people accidentally forget to switch the context and then delete a whole namespace in production. And then you're like, whew, no, that shouldn't happen.

And then it's kind of difficult to repair it to the exact same state as you had before. That's also why patterns like GitOps are emerging right now. The worst thing that happened to me once was that I accidentally ran a Kubernetes upgrade with RKE in production.

Instead of my development environment, that is. I had kind of copy-pasted the configuration file of production and forgot to update one of the IPs, and then I just broke a whole production cluster for a day without even noticing it. That was kind of bad.

Beyang: Wow. Thorsten, you've been in the Kubernetes ecosystem for a while. Meanwhile, there are all these people who want to get into Kubernetes but may find it fairly intimidating, or just a lot to take in. If someone's newish to Kubernetes, maybe they're familiar with Docker but don't have a lot of experience with Kubernetes, where would you recommend folks get started?

Thorsten: Hmm, I guess the most obvious answer here would be the Kubernetes documentation, which is really, really great. Especially recently, they've put a lot of effort into optimizing it, and it's always up to date. You will find blog posts about everything that you can imagine you could do with Kubernetes.

And the coolest thing there is that you can take an interactive introduction course. I think there's also one on Katacoda, where you're actually creating a Kubernetes cluster in your web browser, and then you're running against a live Kubernetes cluster and learning about all the concepts that you have there.

And that's pretty cool. Yeah. And if you want to go the explain-like-I'm-five route, like we did in the beginning: the CNCF, the foundation that Kubernetes belongs to, hosts a few comics that you can read through, in which Phippy, kind of the mascot of those comics, explains to you what Kubernetes is.

So you can go and show them to your kids, and they will understand what Kubernetes is, probably better than me.

Beyang: You can get them started early.

Thorsten: Yeah, exactly.

Beyang: Where do you find out about the latest developments and new tools in the Kubernetes ecosystem?

Thorsten: Hmm, it's mostly Twitter; those are also the places where I tend to see projects like k3d. I got onto Twitter only for those purposes, because I know there are lots of tech folks on Twitter.

Beyang: Are there any accounts that you really like on Twitter that other people should follow?

Thorsten: There are many. Right after he messaged me, I started following Darren Shepherd, and he really tweets a lot. I don't know where he finds the time to tweet that much while being CTO of Rancher and developing all those crazy tools. And you can get some really nice insights on technology in general on Twitter, because

people tend to go there to rant about technology, and mostly you can only really rant about technology if you know the nitty-gritty details, right? So that's always a good starting point for me. And then obviously there's Hacker News, from Y Combinator, and Reddit. So probably what every programmer knows already.

Beyang: From your point of view, what are some projects in the Kubernetes community, or maybe in the k3d community, that you're excited about for the future?

Thorsten: In the k3d community, and in that whole space, there's a pretty cool new project, which is k3x. Yeah, you really gotta love this naming scheme, right?

So k3x is a graphical user interface built on k3d. It's kind of like Docker Desktop on the Mac: you can just go to your tray bar, click on the icon, and say, spin up a new cluster, and it will spin up a new cluster for you and set up the Kubernetes context to interact with it.

And then you have key bindings to spin up new clusters and destroy them. So that really makes some things easier. It's built by someone who is also a contributor to k3d; for example, he brought the k3d-managed registry into the last version. And that's really a thing I'm excited about, too: getting it properly working with the latest version, updating it, and making it available to all the users out there.

And then one project that's probably worth mentioning is k3c, another k3 project from Rancher. That's a Rancher-internal experiment.

Beyang: Is that kind of a Docker replacement, or alternative?

Thorsten: Yeah, you could use it like that. It was kind of an experiment centered around creating a new UX for building container images. It was built on containerd and BuildKit, if I remember correctly. But after playing around with it a bit, the lead developers decided to kind of ditch the whole k3c project and go for a simpler, k3s-built-in approach, which is currently still in very early stages. But once it's finished, this will be the hugest thing that can happen to k3d: you can get rid of this whole image importing and build-push-pull cycle, and maybe sometime in the future you can just build an image and it will pop up in your cluster, ready to use. That will be super cool.

Beyang: So if someone's listening to this and they're all excited about everything going on with k3d, and they want to give k3d a try, what would you advise them to do? Where should they go first?

Thorsten: Yeah, so one more thing that is new is our website: we are now on k3d.io. You can go there and you will see a demo, or a screencast, of k3d in action, and there are links for learning. There's also a whole section dedicated only to installing k3d, and it's a one-liner in every case.

So either you download and run a script, or you use Homebrew, or Linuxbrew, depending on your OS, to install it, or even the AUR, the Arch Linux package manager. Or, if you just want the binary, you use Alex Ellis's other project called arkade, and just arkade get k3d, and there you go: you're able to run k3d, assuming that you have Docker installed.
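The install options he lists look roughly like this (the script URL changes over time, so treat it as an example and get the current one from k3d.io):

```shell
# Option 1: the install script from the k3d repository:
curl -s https://raw.githubusercontent.com/rancher/k3d/main/install.sh | bash

# Option 2: Homebrew / Linuxbrew:
brew install k3d

# Option 3: Alex Ellis's arkade:
arkade get k3d
```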

Beyang: Our guest today has been Thorsten Klein. Thorsten, thanks so much for being on the show.

Thorsten: Yeah. Thanks for having me. It was a pleasure.
