Matt Holt is the author of many popular projects in the Go open-source world, among them the Caddy web server, which pioneered support for HTTP/2 and might still be the only major web server to enable automatic TLS by default.
Matt talks about his motivations for creating Caddy, how the project grew and evolved over time, what it was like to do a complete rewrite from Caddy v1 to v2, and the challenges of maintaining a very popular open-source project. He also talks about his latest project, a TCP multiplexer called Project Conncept.
Caddy v2: https://caddyserver.com/v2
Visual Basic 6, QBasic, MS-DOS 6, Windows 3.1: https://en.wikipedia.org/wiki/Visual_Basic, https://en.wikipedia.org/wiki/QBasic, https://en.wikipedia.org/wiki/MS-DOS#MS-DOS_6.x, https://en.wikipedia.org/wiki/Windows_3.1x
Talk about Caddy 2 and the engineering challenges of long-running Go programs: https://www.youtube.com/watch?v=EhJO8giOqQs
Caddy v2 architecture: https://caddyserver.com/docs/architecture
Caddy modules: https://caddyserver.com/docs/extending-caddy#module-basics
Third party Caddy app for auth, from Paul Greenberg: https://github.com/greenpau/caddy-auth
Papa Parse: https://github.com/mholt/PapaParse
Project Conncept, layer 4 TCP multiplexing: https://github.com/mholt/conncept
This transcript was generated using auto-transcription software and the source can be edited here.
Beyang: All right. I'm here with Matt Holt, creator of the popular Caddy HTTP server and contributor to many, many open source libraries in Go. Matt,
Welcome to the show.
Matt: Glad to be here.
Beyang: Well, we have a lot to cover today, but, as always, before we get into the meat of your open source contributions, I wanted to start off by asking you how you got into programming originally. What was that initial spark for you?
Matt: I think it had something to do with growing up in Iowa, kind of in the countryside, with no one else around to, you know, play with or anything. So I had to kind of make my own fun, and part of that was tinkering on the computer, I guess when I wasn't out working in the stable. So I think I credit it to learning,
oh, what did I learn? Visual Basic 6, and, well, QBasic before that, tinkering around in MS-DOS 6. So yeah, those were fun days, learning how to use Windows 3.1 and seeing if I could cheat at some of the games. I don't know, those were good times. That's what got me into it.
Beyang: You got started early then, with Windows 3.1.
Matt: Yeah. We had no internet, so I had to learn it all from books.
Beyang: That's amazing. You know, I actually didn't know that about you, that you grew up in Iowa. I grew up in Iowa too, although I think I had a far less productive childhood than you, as far as computers go.
Matt: That's okay. You don't need a productive childhood. You just need a fun one.
Beyang: Yeah, absolutely. So in preparing for this conversation, I was trying to think back to how I first met you. We met through the Go open source community, and I think it might've been one of the early GopherCons. Do you remember how we first met?
Matt: Oh man, good question. I don't. GopherCon sounds right. I think I went to the 2014 and 2015 GopherCons, so like the first couple of them, so it very well could have been there. I think I met you and Quinn.
Beyang: Yeah, that was a while ago. Time flies.
Matt: I know so much has changed.
Beyang: Yeah. So, given that you were at the early GopherCons, my sense is that you were a fairly early adopter of Go, tried it out in the fairly early days before too many people had adopted it. What got you into the Go programming language in the first place?
Matt: The company I was working at, I was looking into using Go. This was about right after 1.0 was tagged, so this is back in, like, I don't know, 2011-ish. And we were a C# shop. I was doing front-end stuff at the time, but we were looking at Go for our new backend stack.
And so I wrote one of the first services in Go. It was an address autocomplete API, which I think was running in production until, like, last year. It was this awful Go code, my first Go project. But yeah, it was a breath of fresh air. I was coming from, like, PHP, I had done all my backend in PHP before that, and man, Go is awesome.
And I've been championing it ever since, and just grateful that Google hasn't abandoned it like every other one of their products. So hopefully that trend holds.
Beyang: Yeah. I remember coming to go from mostly Java and I had very much the same reaction as you did. It was, it was definitely a breath of fresh air.
Matt: Yeah. It's nice to be able to have short lines of code and not need your ultra-wide monitor to read the class names. And I know that's like a trope, kind of a joke, but it's so true.
Beyang: Yeah. Yeah,
Matt: Actually, it's funny, because in college, in my upper-division classes, I would write my projects all in Go, you know, when you get to the point where you can choose your own language and everything.
And so I wrote all my stuff in Go, and my programs, using basically the same algorithms as my classmates', all performed faster and used less memory. All my friends who were writing in Python, theirs were really slow. All the friends who wrote code in Java, their VMs would crash because of, like, high memory usage and stuff.
But again, with a comparable code base, comparable algorithms, Go is just so much better. It, like, gave me a competitive advantage in school.
Beyang: You know, from talking to my brother, who had just graduated college, I don't think Go has actually been that widely adopted in the university environment. So if anyone listening to this is currently in college, that's a pro tip for you: write your assignments in Go, and they'll automatically be faster and better than most of your peers'.
Matt: Honestly, without the pain of C.
Beyang: Yeah, totally. So the Caddy HTTP server, that's probably the project that you're most well known for. I think it's probably the most popular project that you've created. For those who are not familiar with it, how would you describe what it is and what it does?
Matt: Yeah, I would say just like you did: it's an HTTP server, so I guess you could compare it to Nginx or Apache or HAProxy and some other popular web servers and proxies. It is both a content origin, so it can serve up static files and render documents and things like that, but it's also a proxy, a very powerful and flexible HTTP reverse proxy.
Beyang: And what is your view of the landscape of HTTP servers and reverse proxies? You mentioned Apache and Nginx and HAProxy. What's your take on that space, and where does Caddy fit in along with those others?
Matt: Yeah. So when we released Caddy back, well, when it was a one-man project, back when I first started it in 2015, when I released it then, the ones I named were the major players, and there's, like, IIS for Windows, you know. But there were like four or five mainstream servers, a couple others like Lighttpd and stuff, but anyway.
It's a huge, it's a huge market. Everyone needs a web server. Even these serverless architectures that came about later, they still use web servers, fun fact. So everyone needs a web server, but there were only a few. And so I needed one that satisfied my requirements, and so I made one, and it was different from all the rest.
And hopefully, if you use it, you kind of get a whiff of that, and you can get a sense of how it's different and hopefully better than what you're used to.
Beyang: My sense is, you know, I've used Nginx before and a little bit of Apache, but when I compare the experience of working with Caddy to those, it seems to me that the developer experience was at the top of your mind when you were building it, because it seems fairly ergonomic, a little bit
more intuitive to me as, like, a developer who traditionally has not spent that much time as a sysadmin or operations-focused person.
Matt: Yeah. I mean, that was kind of the focus, right? Because I was setting up a bunch of random sites, especially for school and work projects, a bunch of little sites, and manually configuring them. And so I needed a server where I could quickly just spin up a site with some semi-advanced functionality like proxying, or
Markdown rendering, which was a big one for me at the time, and templates, like doing includes in my HTML files and stuff, little things like that that were stupid but important for me at the time. And so it was just coming up with a way to configure it, and just, you know, a little command to run it real quick.
I don't think I had the future, or like growth, in mind when I designed it. And really, if you followed the Caddy 1 life cycle, you could definitely tell it got strained after a bit.
So, things like server-side includes, but better, a little more flexible, like including files into HTML files, but also rendering, like, the current time, or doing little transformations on bits of text. Stupid little features like that, but they were kind of hard to come by in other web servers.
Plus something that was easy to configure and just kind of made sense, and the Caddyfile came about through that. So anyway, I just built this server that satisfied my requirements, and it was just a lot easier to use at the time compared to any other servers that were out there.
Beyang: Nice. So you were building it for yourself at first. At what point did you say, hey, this thing that I built for myself might be useful to others? And when did others actually start using it?
Matt: Yeah, so I developed it myself, and then had a classmate who I worked with, and he kind of also gave me a little bit of a nudge back in the day. And then there were a couple other classmates, actually, that contributed to it early on, like the FastCGI middleware to work with PHP sites.
And so I was really grateful for their help. And I realized, when they started contributing to it and using it, maybe I should open source this thing. So I put it on GitHub, and then I put it on Hacker News, which was a mistake. Maybe not, I don't know. It was well received on Hacker News, but when it got to the front page there, it got a lot of feedback that was really helpful and embarrassing at the same time.
Beyang: Yeah, the Hacker News crowd can be very blunt and, yeah, unforgiving at times. But at the same time, you know, now looking back, the project clearly filled a gap; others felt that there was something needed there.
Matt: Yeah, I think so. And back in the day, it was, you know, the easy config, HTTP/2 was cutting edge, and auto-HTTPS wasn't a thing yet. Not quite yet. In fact, you couldn't even configure the TLS settings at all back in the initial release.
Beyang: What were the big features that people found really attractive in those days? I think this was, like, circa 2013, or...
Matt: Oh, no, it was 2015. Yeah. So HTTP/2 had just barely been standardized, or was close to being standardized. That was kind of the main selling point. It was this cross-platform server, so a server that worked the same on Windows, Mac, Linux, FreeBSD, OpenBSD, whatever, and it was an HTTP/2 web server.
And because HTTP/2 is over TLS only, typically, if you enabled TLS, and at the time you had to enable TLS, it would also serve HTTP/2, just like that. It was really cool. And then I think people liked the easy configuration of the Caddyfile. People liked it; they also hated it.
I was in both of those camps. I kind of liked it, but I hated it, and I hated it more as time went on. I hate it to this day, but it remains to this day. We can talk about that later.
Beyang: Yeah. And for those listeners who are less familiar with Caddy, it would behoove us to mention that there's Caddy 1 and Caddy 2. So we're talking about Caddy 1 right now, and we'll get to Caddy 2 a little bit later. But yeah, I'm curious: what did you and others start to hate about Caddy 1 over time?
Matt: I definitely felt the pull in many directions for features and capabilities. And to a certain point it was cool, like, we were able to handle it. I came up with this modular, like, plugin architecture, so you could just add an import and add new functionality to Caddy that way. And to this day we still use that basic idea,
these compile-time plugins, so we can add more features infinitely and not bloat the code base. So that's really nice. But the way that Caddy loaded its configuration and received its configuration and interacted with its environment, so, like, the use of signals and the lack of an API, like a REST API, was very limiting.
So feature requests started to pile up, and I did my best to keep the number of open issues under a hundred, but could only do that for so long. And, you know, I had to close or defer a lot of issues and be like, well, I mean, then Caddy isn't the right tool for the job, even though deep down I'm like, that's stupid. It should work.
Beyang: And at this point, were you still the sole main contributor to the project, or had others joined?
Matt: Well, Caddy 1 had over 250, almost 300 contributors, people who made pull requests, maybe a line or two changed, you know, but plenty of contributors. The problem is getting them to stick. As far as maintainers go, we did have a few in our community who stuck around and developed features in a pretty dedicated sense, but it's been really hard to find maintainers who will, like, scale the project in that sense, the maintainership of it, and find people who will take some responsibility and also enjoy just, like, vetting the code. I guess it's a big ask to put in time on a project
that's not yours, even if it's one that you use. I mean, people will contribute what they want or need, and then they'll leave and maybe continue using it, but they won't contribute again. Yeah, it's been tough to find maintainers, though.
Beyang: Yeah. And I want to dive into the pain points of maintaining a large, wildly popular open source project at scale, sustainably. But before we get into that, I think a lot of people out there would be super glad to have that sort of problem, you know, so many bug reports and feature requests because so many people are using it.
So, to that end, do you remember what the big inflection points were? Was there a point at which things hit kind of a hockey-stick trajectory in terms of more and more people getting on Caddy? What do you think were the factors in its kind of
explosive early growth?
Matt: That's a good question. So again, initially it was the cutting-edge technology and the ease of configuration. And it kind of came to be known as, like, a local developer tool, or even a toy web server, something you would tinker with but never something you would deploy to production, which is unfortunate, because that was always the point of it.
Beyang: Yeah. Why do people perceive it as a non-production server?
Matt: I don't know. Part of it, honestly, I think is psychological. Like, I blame the name. I actually don't like the name Caddy, but it was the most descriptive name I could think of, because it kind of takes care of all these little details for you, so you don't have to worry about them as you play your game, so to speak. It just takes care of all that.
But the name is not as cool as, like, Nginx or Apache.
Matt: It sounds dumb. So that honestly might be one reason. Another reason, and I don't buy it, I hear this a lot: oh, Caddy is so new. Well, CoreDNS is even newer, and it was a Caddy plugin actually, and is now a Caddy fork. And that powers Kubernetes, and
Kubernetes is about the same age as Caddy. And I could name a bunch of others; Kong is out there, I think, launched around the same time. Like, I don't buy the whole "it's too new" thing. I don't know. I think people have just kind of made up their minds when they, like, you make a decision. It's probably no different than, like, meeting a person.
You just make a judgment, whether that's a good thing or a bad thing. But anyway.
Beyang: Yeah. I mean, that's really too bad, because it's a really good production web server. We use it in production, and a lot of our customers do as well.
Matt: Cool. Yeah. And then, back to your first question: I think another thing that did help, though, is the automatic HTTPS. I think that kind of wowed a lot of people. It still wows me today, actually, that it works. The fact that you can turn on a web server with nothing more than a domain name to serve, and
boom, it's over HTTPS in just a couple seconds. It has a certificate, it's managing it, it will keep it renewed, it staples the OCSP response for you. Like, it does HTTPS right, about as well as you can possibly do it, out of the box. No other server does that, still to this day, by default. Like, there are plenty of services
and servers that will do it automatically, that will automate HTTPS, but you have to, like, check a box or turn it on with some config parameter. So anyway.
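As a concrete illustration of what Matt describes, a Caddyfile this small is enough to get automatic HTTPS in Caddy 2 (the domain and path here are placeholders; substitute a real domain that resolves to your server):

```
example.com

root * /var/www
file_server
```

With nothing more than that, Caddy obtains a certificate for the domain, serves the site over HTTPS, and keeps the certificate renewed, with no TLS settings in the config at all.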
Beyang: Yeah, that's pretty interesting, because when I think about the pain points I encounter when standing up a web server, setting up TLS and HTTPS is often extremely annoying. So why do you think it is that other web servers don't prioritize that? Are we in kind of a niche category here, or are they just not as focused on the developer experience, or what is it?
Matt: I don't know. It's hard. I mean, maybe they're afraid of new major versions, but HAProxy has released a version 2.0, and it's still not automatic, still not HTTPS by default. I don't know; honestly, HTTPS just seems like the right default these days.
Beyang: Yeah. Do you have numbers around how many people have downloaded and are using Caddy today?
Matt: I did. Well, no, but yes. I mean, I have a rough idea of how many people have downloaded it from our website and how many people have cloned it from GitHub, and I have a rough idea of how many Docker pulls there are. So that's the other thing: tracking people versus deployments and downloads, it's all different, and it's very nuanced.
But Caddy has well over probably 40 or 50 million Docker pulls, if you consider all the different Docker images. We now have an official one that's a few months old; thanks to community contributions, we've been able to make an official Docker image. And then downloads from our website, a million and a half over the last few years. Git clones, hundreds of thousands. But again, I don't know what that means as far as actual deployment scale.
Matt: Yeah, I tried finding out how widely Caddy is deployed with telemetry, which was also a good research tool, because we were able to learn about the technical landscape of the internet and see what kind of clients are out there, on more than just proprietary networks like Cloudflare's or Google's.
So we could see what kind of, like, TLS client hellos were being advertised and things like that. But it was expensive to keep up, and people treated it toxically, unfortunately, and so I shut it down.
Beyang: Yeah, telemetry is one of those really tricky issues, because there are a lot of people very passionate about data privacy and that sort of thing, and legitimately so in a lot of cases. But at the same time, I think a side of the conversation that often gets lost is: hey, the people who made this application want to gather data so they can make it better for users and learn from it.
And I feel like a lot of applications struggle with that sort of balance.
Matt: And yet people still use software like Linux and Firefox and Chrome and Windows, all of which have telemetry. So, whatever.
Beyang: Yeah. Let's talk about the journey from Caddy 1 to Caddy 2. So Caddy 1 was released in, I believe it was April 2015, and then Caddy 2 was just released earlier this year, in May, right? May the fourth, may the fourth be with you. So my understanding is Caddy 2 was pretty much a rewrite from scratch,
although maybe there were a couple of components that you took over from Caddy 1. Can you talk about the decision to make it a major version upgrade and to revamp the code base?
Matt: Yeah. Well, like I talked about before, all these feature requests had started piling up, and so it was time to rewrite it. And so I started with an empty Go file and, you know, func main, and just started cranking away. The first thing I had to figure out, well,
I became very familiar with all the open issues, and eventually came up with an architecture. This took like four or five months of just pacing around the room and talking through the problems, and, like, spike-coding a bunch of stuff and seeing what would work, doing a lot of research as to how configuration would be loaded, what configuration would look like,
what format it's in, how we can achieve these goals, you know, that we have in these open issues. It was a lot of work. And I was just finishing grad school, at the very end of my thesis, so it was a really busy time. But I came up with, I think, a really awesome design, a fairly novel architecture.
And we were able to close over 400 issues and feature requests with just the new design alone. And I think we have the capability of closing all the remaining ones, if people want to help put in some time and just finish them up.
Beyang: That's awesome. What were the key design and architectural decisions and insights that enabled all this?
Matt: Yeah. So the first one was how to load configuration and how to manage that. In v1, a major problem was that the Caddyfile was only human-readable and -writable, so it was a bit restrictive if you wanted to automate your deployments, which is more and more popular these days. You know, everything should have an API so you can programmatically interact with it.
And so I centered it around a config API, an admin API. And then the other question was, what format? There's actually a YouTube talk on this we can link to in the description, maybe. But anyway, one of the main questions I had to answer was: how do you change certain config parameters in real time, especially if you don't want to change the whole server? And if you do want to change the whole server, how do you do that efficiently? Like, you have a server with a thousand concurrent clients
actively using it; how do you change something? You can't just pull the rug out from under all of them, can you? Well, it turns out you can, and you can actually make it so that it feels like you're just changing one config parameter. Let's suppose you have an API endpoint, and you change this one config parameter using that endpoint.
You can make it work like that, but still actually swap out the whole config with just that one changed parameter. It's really efficient, because it makes things really easy on the garbage collector: it just cleans up the old config that had been provisioned. And so you actually can pull the rug out from under all your concurrent goroutines and put down a new one without anything noticing, and it requires only a single lock, which is really nice.
So you can do dozens of config reloads per second, or whatever your hardware is capable of, no problem. And this way, we also don't need a lock around every single config parameter. You can imagine the concurrency nightmares that go along with that, which also introduces, like, two-way data-binding problems.
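The whole-config swap Matt describes can be sketched in a few lines of Go. This is not Caddy's actual implementation, just an illustration of the idea that replacing one pointer atomically swaps every parameter at once and leaves the old config to the garbage collector (the type and function names here are invented for the sketch):

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// Config stands in for a server's entire configuration.
type Config struct {
	ListenAddr string
	MaxConns   int
}

// current holds a pointer to the active config. Swapping the pointer
// replaces every parameter at once in a single atomic operation.
var current atomic.Value

// Reload installs a complete new config. Goroutines that already read
// the old pointer finish with it undisturbed; new work sees the new
// config, and the garbage collector reclaims the old one once nothing
// references it.
func Reload(c *Config) {
	current.Store(c)
}

// Active returns the config currently in effect.
func Active() *Config {
	return current.Load().(*Config)
}

func main() {
	Reload(&Config{ListenAddr: ":80", MaxConns: 100})

	// "Changing one parameter" is really copying the whole config,
	// tweaking the field, and swapping the new config in.
	next := *Active()
	next.MaxConns = 200
	Reload(&next)

	fmt.Println(Active().ListenAddr, Active().MaxConns) // prints ":80 200"
}
```

No per-field locking is needed: readers either see the old config or the new one, never a half-updated mix.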
So anyway, there were a lot of these config-handling questions I had to answer, and I think we have a really good, simple solution that works really well. We even have graceful reloads working on Windows, which is not something other web servers really offered, because of the way we handle network sockets: we gracefully transfer control of the socket from one configuration over to the next.
We do that in a way that works cross-platform, which is really kind of unique. So there was that issue. The new design also had to answer what the config looks like, and I'll cut to the chase and spare you the details as to why, but we settled on JSON. Whether you like it or not, the config is JSON, and it's actually a very elegant solution
if you look into what you can do with it. Now, writing JSON is obviously a pain, so the answer to that is to have config adapters, as I call them: these little pieces of code, they're Caddy plugins basically, that can change your config from any format you prefer into JSON.
Because one of the reasons we had to pick JSON was that almost anything can evaluate down to a JSON document. There's a bunch of, like, language theory behind why that is, you know, declarative versus imperative syntax and all this stuff. But yeah, you can convert your Caddyfile to JSON,
so anytime you use a Caddyfile with Caddy 2, it just converts it to the JSON under the hood for you. You can convert your Nginx config to Caddy JSON, and even run Caddy directly with your Nginx config like that. TOML and YAML obviously convert to JSON, if you prefer those. And the thing is that with JSON config, we're able to expose just about every single config parameter of the web server in, like, a one-to-one format.
So every field in, like, a struct that's in memory, you can configure via JSON. You have unprecedented amounts of control over the server. Anyway, that's a really long answer, but that's also, I think, where Caddy 2 really shines, so I hope the power users will really appreciate that.
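For a sense of the shape Matt is describing, a minimal Caddy 2 JSON config that serves static files looks roughly like this (structure per the Caddy v2 docs; check them for the exact fields each module accepts):

```json
{
  "apps": {
    "http": {
      "servers": {
        "example": {
          "listen": [":443"],
          "routes": [
            {
              "match": [{ "host": ["example.com"] }],
              "handle": [{ "handler": "file_server", "root": "/var/www" }]
            }
          ]
        }
      }
    }
  }
}
```

And rather than writing this by hand, a config adapter can produce it: running `caddy adapt --config Caddyfile` prints the JSON that the equivalent Caddyfile evaluates to.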
Beyang: Cool. Another feature of Caddy 2 is the extensibility part of it, and I understand that extensibility is achieved through these things called Caddy apps. Can you explain what Caddy apps are and what sorts of things you can do with them?
Matt: Yeah. So a Caddy app, well, Caddy is made of modules. A module is a piece of code, or a plugin basically, that adds something to Caddy's config structure. So if you look at the top level of a Caddy config, of its JSON structure, it has like four fields: there's logging and admin,
and then there's storage, and then there's one called apps. And the only thing Caddy knows how to deal with at its core, and this is a crucial thing to understand about the Caddy architecture, is that it only knows how to load a config. It knows how to work with, like, those four fields, and that's about it.
Everything else is handled by modules. So, for example, under storage at that top level, you would configure various storage modules. It has a default storage module, of course, the file system. But all Caddy does is say, oh, I'm supposed to use the file system storage, and then it loads that, and then the file system module, or whatever storage module it is, will just run with that and do what
it needs to do. And then when it gets to apps, it's just, hey, it's an app, and all it knows how to do is call Start on the app, and it passes in a context. That's literally all Caddy does. And then when the config changes, it calls Stop on the apps. Actually, I don't even think it does that; I think it just cancels the context, which implicitly calls Stop on all the apps, and calls Start on all the new apps.
So that's really all Caddy does: it starts and stops apps. Apps are just pieces of Go code that implement an interface called caddy.App, which has two methods, Start and Stop. So the HTTP server is a Caddy app, a module that extends, you know, the config structure, and when you start it, it loads all the servers that you have configured there and knows how to run them.
And then when Stop is called, it knows how to shut them down gracefully.
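A toy version of that contract might look like the following in Go. The interface mirrors the two methods Matt mentions; the real caddy.App lives in the Caddy codebase, so treat this as a sketch of the idea rather than the actual API:

```go
package main

import "fmt"

// App mirrors the two-method contract described in the interview:
// Caddy's core only knows how to start and stop apps.
type App interface {
	Start() error
	Stop() error
}

// Greeter is a toy app; any long-running piece of Go code could sit here.
type Greeter struct{ running bool }

func (g *Greeter) Start() error {
	g.running = true
	fmt.Println("greeter started")
	return nil
}

func (g *Greeter) Stop() error {
	g.running = false
	fmt.Println("greeter stopped")
	return nil
}

func main() {
	// The core's whole job in miniature: start apps when a config
	// loads, stop them when the config is replaced.
	apps := []App{&Greeter{}}
	for _, app := range apps {
		app.Start()
	}
	for _, app := range apps {
		app.Stop()
	}
}
```

The point of the design is that the core never needs to know what an app does, only that it can be started and stopped.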
Beyang: That's a
Matt: Yeah, so apps can do anything that a long-running program can do. The current apps that ship with standard Caddy are the HTTP app and the TLS app, which manages certificates. This means, by the way, that you can actually run Caddy 2
and manage certificates, meaning obtain them and keep them renewed over the lifetime of the process, without needing to run an HTTP server at all. You just say TLS, and there's a couple other properties, but you just specify the domain names that you need certificates for, and it will just keep them renewed in your storage, which is the file system by default, but it can also be a database or
literally anything. So, I mean, it's super flexible. Like, we're talking unprecedented, unlimited amounts of extensibility, basically, as long as it's written in Go.
Beyang: And why is there that requirement of it being written in Go?
Matt: Yeah, because these are compile-time dependencies. So Caddy is always a static binary, so you can ship around this executable that cross-compiles to basically any platform, and it will just work. You have no external dependencies, not even libc. Now, granted, we don't really have control over third-party apps and modules as to what they do.
Like, they might use cgo, and that's up to them. We discourage that, typically, and I don't think we'll officially distribute any that have external dependencies like that. But yeah, the idea is that if you want to extend Caddy, you do it at compile time, and then you can ship around these static binaries.
You don't have to worry about, like, which Python version you have installed, or which C libraries you've got, and all that stuff on the system. Frankly, you don't need Docker to run Caddy. People use it because they're already ingrained in that ecosystem, but it's just a static binary.
Just run it.
Beyang: Yeah, that's one of the magical things about Go, that you can cross-compile it to basically any, um, computing platform. And it's just that single executable: you drop it in and it just magically works.
Matt: Yeah, so nice.
Beyang: So Go has, um, kind of a plugin package in its standard library, but, uh, you went the route of actually statically compiling, um, Caddy apps into the binary, as opposed to using the plugin mechanism in the Go standard library.
Can you talk about, uh, you know, why you went that route?
Matt: Sure. Yeah. The, um, the Go plugin package is interesting, but it's an experiment and it's not great. I mean, it's not Go's fault. That's just how it is. Um, dynamically linked runtime dependencies are a hard problem, and they make things difficult and tricky. And frankly, there's not a huge win, in my opinion.
I know that people like that way of doing things and expect that, but it's just a lot more reliable, and a lot less stressful too, um, to just get a static binary that has everything you need, and then just ship that. Now, this is difficult when you're talking about certain distribution platforms and packaging systems. For example, packaging a Go program for, like, an official Ubuntu or Debian package is a nightmare.
I don't even know if it's possible, because you have to package every dependency, all the way down to the metal, basically. And that's really hard in practice. So anyway, um, Go plugins are fine, but they're not something that we wanted to rely on. So the Caddy way of doing it is you just add an import to your main package,
and as long as it's compiled in, the plugin will register itself when the program is starting up, and then Caddy can use it. Uh, and we have tools to help manage that pretty easily. So.
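In real Caddy, that means a custom main package that blank-imports the plugin packages you want compiled in. The mechanism underneath is just Go's init-on-import behavior, which this self-contained toy sketch illustrates; the registry and module names here are made up for illustration and are not Caddy's actual API:

```go
package main

import "fmt"

// registry maps module names to a kind, standing in for a real plugin registry.
var registry = map[string]string{}

// register is what a module would call from its own init function.
func register(name, kind string) {
	registry[name] = kind
}

// In a real plugin, this init would live in the plugin's own package, so
// merely importing that package (even with a blank import, `_ "path"`) is
// enough to register the module before main runs.
func init() {
	register("http.handlers.hello", "handler")
}

func main() {
	for name, kind := range registry {
		fmt.Printf("registered %s (%s)\n", name, kind)
	}
}
```

Adding another module is then just another blank import line in main: no registry code changes, and the resulting binary stays fully static.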
Beyang: That makes sense. So going back to Caddy apps and what you can do with them, um, I want to dive into some of the other things that, uh, you know, you and perhaps members of the community have built with Caddy apps. Um, perhaps we'll start with kind of a personal anecdote, which is, uh, you know, at Sourcegraph we support SSO, cause it's often a requirement for the companies that we sell into, you know, large enterprises, things like that. And I was actually the one who wrote a lot of our, uh, original SSO code. Um, I remember at the time, uh, looking into, like, NGINX plugins for SSO, um, but getting kind of lost in the documentation. And, you know, there's always that worry in the back of your mind where, like, maybe it works for the first
one or two cases, but eventually you're going to hit some edge case the plugin doesn't support, and then you're left kind of, you know, hitting a brick wall. And so that's why we ended up just building SSO into the Sourcegraph application itself. But, you know, if I were writing that today, um, knowing what I know about Caddy and, uh, Caddy apps and the plugin architecture, it seems like a much better path would have been to write
SSO support, um, as a Caddy app. And then, because it's just Go code, I could make it talk to whatever, you know, backend or database, uh, I needed it to connect to for, you know, user validation and things like that. Is that right?
Matt: That's exactly a great use case. Yeah. So, um, there's a couple of ways you could do it, right? SSO is a hard problem, but, um, it could be a Caddy app if it's standalone, like, if it can run on its own: you just call Start, and then, you know, it does its thing. Or, if you need it to be, like, an HTTP handler, it could be an HTTP handler module,
and you can plug that in instead. Um, I know that, um, a developer named Paul has been working on some authentication modules for Caddy 2, and he's done some really cool stuff. They're very powerful. They're still kind of in the early days, but he wants people to help test. Um, and,
uh, yeah, so Caddy would be a great fit for things like authentication, anything that, again, is long-running. Another benefit of writing a Caddy module, instead of a separate piece of code that you have to ship and run and manage: if your whole stack is written in Go and they're all Caddy modules, then you ship one binary around and you have all your services in that one central configuration that you manage.
And they all automatically benefit from this real-time online config API, because the config API just manipulates the JSON structure in real time while the server's running. Um, and since it's all one JSON structure, you can just manipulate it, and we can also automatically document it. So if you go to the Caddy website and you look at the documentation, at the JSON config structure, there's all of this automatically generated JSON documentation, which normally would not be that difficult to do, except that, because of its extensible nature, like,
we have the top level, and then we have a few keys or properties, and then we have the apps key. And what's within that? Well, it's impossible to know until we have modules. But this documentation system, yeah, you can just hover over it and see what modules are available there,
click on the one you want to use, and, um, so it's all self-documenting, automatically.
Beyang: That's awesome. I think documentation is such an important part of, uh, you know, developer experience and ergonomics, and I think Caddy does a fantastic job of that. Like, the docs are so user-friendly and easy to explore.
Matt: I'm glad you said that. Can we, can we feature that on the, on the podcast page? Cause I don't hear that too often. I mean, our docs could improve, but, like, they're pretty good. We get a lot of complaints, but they're not actually that bad.
Beyang: Yeah. I mean, I feel like, you know, however much effort you invest into your docs, there's always going to be a long tail of people who will say, like, well, you know, I had a hard time using them, and...
Matt: Yeah. I mean, we can work on that, but they're not that bad.
Beyang: At least from this user's perspective, I've had, you know, an overall very positive experience with the docs.
Um, and so I actually wanted to ask you: what is the key to having good docs? Are there a couple of things that you've found really worked for you and your community?
Matt: Documentation is hard. Uh, I think it's important to scope your documentation and keep it, frankly, pretty narrow. Like, it's not your job... for example, we're a web server. Um, technically we're an application platform, but most people use it as a web server.
Um, our job is not to explain how the internet works. It's not our job to explain how to configure everything, frankly. Like, I know it's funny when you say it like this, and I, I don't,
Beyang: I know, I know exactly what you're talking about though.
Matt: I don't mean to come off condescending, but, like, frankly, it is not the documentation's job to explain how to open, like, ports 80 and 443 for HTTP and HTTPS, typically, um, you know, or how to configure your firewalls and other network things, or how to run a process on Linux and, you know, keep it running after a reboot.
I mean, we have official packages, and we have a tutorial, but really, our docs need to be focused on how to use the software. And so we get a lot of complaints that our docs don't tell you enough, but the reason is because we kind of expect you, frankly, to know how to use your computer and how the internet works,
and, when you're ready, to come learn how to use a web server. Like, Caddy is not a child's toy. It's an advanced tool. That was our biggest mistake in v1: touting that it's easy to use. That was a mistake, because web servers, no matter how easy you make them to use... like, Caddy is not hard to use, but there's a lot going on, and you just need to understand it.
So read the documentation. We have tutorials that new users should go through, but I want everyone, no matter how experienced or expert you are, to go through the getting-started guide. Just go through Getting Started, then go through either the API tutorial or the Caddyfile tutorial, whichever you want.
Um, go through at least those two things. And then the rest of it is reference documentation, mostly. Once you know what you're looking for, just find the reference documentation and then kind of figure it out. And yeah, we'll improve it where there are mistakes or gaps. But, but yeah: focus the docs, you know. You need to separate tutorials from reference, and then also kind of expect something of your users.
It's okay to do that.
Beyang: Yeah, definitely. Uh, I've certainly had the experience, even with something like Sourcegraph, which isn't even an HTTP server... like, you know, sometimes I'll get random emails from people saying, hey, could you help me answer this homework question? Like, this has nothing to do with Sourcegraph. But, uh, you know, I feel like for a lot of people, maybe it comes from a good place,
cause when I think back to when I was very new to development, you can very easily get overwhelmed by just the sheer quantity of stuff that, uh, you know, you may or may not have to comprehend about how the internet works, how computers work. Um, but I guess my advice to anyone who's submitting a question or, um, you know, that sort of thing to an open source project is just, you know, be up front with your level of knowledge.
Like, if you're kind of grasping at straws, just, you know, say that. And, like, oftentimes people are very happy to point you in the right direction. Just err on the side of saying, like, you know, "Hey, I'm new here. I'm trying to get this up and running. I understand this may not be the best forum.
Uh, would love any pointers that people might be able to give." I think that sort of ask will have a much higher likelihood of getting a useful response.
Matt: Definitely. Definitely. And it's hard, cause, like, we developers, we maintainers, have limited time. Um, and I want to help people. I would love to tutor people and help people with, like, their homework problems, but I just have limited time to do that. Um, and yeah, I would also suggest: don't go in guns blazing, blaming the software.
Like, yeah, you may know what you're doing and how to use your computer, but it might not actually be the software's fault. And honestly, I would say more than 50%, close to three quarters... I don't know, I'm spitballing here, but a lot of the forum threads and issues that are opened with the Caddy project, at least, are
not actually issues with Caddy. They're usually issues with, like, a Cloudflare configuration, or a network misconfiguration, or a Docker mess-up. So many times it's Docker, or DNS. Docker and DNS are, like, equally problematic. And, you know, the problem actually has nothing to do with Caddy.
Like, most of the time, Caddy works. The software works. It's just that putting all the pieces together is hard, and I get that. But just don't come in blaming the software. Just be open to possibilities and help us understand your entire setup. And yes, it's a lot of information to write down, and it takes time to
boil it down to its simplest form and make the problem reproducible, so we can, you know, experience the same thing. These are all skills that we want the community to learn, and they're very helpful and very important. Um, but yeah, when we do get good reports, we can fix the bugs.
Beyang: Yeah, absolutely. So, you know, beyond Caddy, you've created or contributed to quite a few other very, very popular open source packages, uh, just to name a few: JSON-to-Go, Papa Parse, archiver, Timeliner, curl-to-Go, checkup. Um, people should really go to your GitHub page and check out, uh, you know, all the amazing stuff that you've done.
Um, but I understand that you're kind of working on a new project now that's, uh, something to do with layer 4 TCP multiplexing.
Matt: Yeah, yeah. That's my latest side project that I'm having fun with. It's a Caddy app. If you're familiar with Caddy's HTTP server, this is a similar idea. It's basically a TCP/UDP proxy and server. Um, so instead of operating at HTTP, the application layer, it operates at the transport layer. Um, so it accepts raw network connections or packets, and then, um, you can configure Caddy to do whatever you want with them.
Uh, everything from... you can first match on the connection. So, similar to the HTTP server, where you have this powerful idea of matchers, um, which are pieces of configuration that select or filter certain requests, right? You can match on HTTP headers, or, um, the path of the request, or the client IP, or the time of day, or whatever else.
It's a filter, basically. You can do this at layer 4 now. You can match on the client IP, of course. Or: does the connection look like an HTTP protocol? So it can read in a few of the bytes and, like, sniff it out. And then, um, does it look like a TLS handshake? Does it look like SSH? Does it look like some database protocol, or whatever?
So you can match on connection type and multiplex various different protocols and connections on a single socket. Um, so that's really cool. And then from there, you can proxy it, with, like, load balancing and health checks, or you can, uh, echo it back if you're debugging or troubleshooting. You can tee it off,
kind of like the Linux tee command. Um, that's kind of useful: sometimes you want to record what's coming in from a client but also still pass it upstream. Um, you can terminate TLS, or not, whatever you want to do. And of course, with TLS termination, you get the benefits of certificate management for free, you know, the automatic TLS. Um, so anyway, it's a really cool, really flexible, um, piece of software that I'm really excited about.
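To make the multiplexing idea concrete, here is a hypothetical sketch of what such a layer-4 config might look like in Caddy's JSON style, routing TLS-looking connections to one upstream and SSH-looking ones to another. Since the project was not public at the time of this conversation, the matcher and handler names here are guesses for illustration only:

```json
{
  "apps": {
    "layer4": {
      "servers": {
        "mux": {
          "listen": [":443"],
          "routes": [
            {
              "match": [{ "tls": {} }],
              "handle": [{ "handler": "proxy", "upstreams": [{ "dial": ["10.0.0.1:443"] }] }]
            },
            {
              "match": [{ "ssh": {} }],
              "handle": [{ "handler": "proxy", "upstreams": [{ "dial": ["10.0.0.2:22"] }] }]
            }
          ]
        }
      }
    }
  }
}
```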
Beyang: Would it be safe to call it, like, Caddy, but for layer 4 instead of layer 7? I hate, I hate to do the X-for-Y thing, but is that kind of, like, a...
Matt: Yeah. Uh, yeah, it's basically the same thing, and I call it Project Conncept, with two N's, like a...
Beyang: Project Conncept. Oh, that's cool. That's a cool name.
Matt: Yeah, it's better than Caddy. Um, so yeah, it's Apache-licensed. It's not, it's not open to the public, um, while I'm developing it. Right now, sponsors get exclusive access. So if you sponsor me, you can get early access, um, and you can test it out, and I'd love your feedback, whoever's listening to this, if you'd like to try it.
Um, hopefully, you know... I have a sponsor goal. If you go to my sponsor page, and once we reach that goal...
Beyang: GitHub Sponsors, right?
Matt: Yup. Go to https://github.com/sponsors/mholt. Um, you'll see that I have a sponsor goal, and once we reach that goal, then I'll open it to the public. And we can get to that goal faster, um, with higher-tier sponsorships, too.
So, like, if you want to advocate for your company to sponsor, um, that would be, like, an ideal arrangement, I think.
Beyang: Yeah, definitely. And I would highly encourage anyone who is, you know, remotely interested in this sort of thing to go and sponsor that project. It seems, uh, extremely useful, extremely cool. And I think, yeah, also, you know, open source contributors and maintainers invest so much time and effort into making these, uh, libraries and tools that we all can use, but, uh, you know, they also have to pay the bills at the end of the day.
Matt: We do. Yeah. Um, I recently put out a tweet, um, comparing open source sponsorships to, like, athletic sponsorships. You know how athletes will be sponsored? Typically athletes are sponsored by, like, one company, or, like, NASCARs are sponsored by a couple, few companies, maybe.
Um, but, like, open source developers are sponsored by a thousand... well, the really popular projects, and Caddy is actually not at this scale, but you look at projects like Vue or whatever, and they have, like, thousands of sponsors, and they're each doing, like, a dollar or five dollars a month. I just thought it'd be so funny
if athletes were sponsored by, like, thousands of their fans at a dollar a month or something. Maybe kind of weird. That's a weird thought. Corporate sponsorship really is the way to go, I think.
Beyang: Yeah. So I just went on and sponsored you, uh, and I encourage everyone else to do the same. Um, and, uh, thank you as well; I feel like this is the least I could do. Um, as kind of a final thought: uh, you know, you've built a fantastic reputation as one of the most active open source contributors in the world of modern web utilities and the Go open source community. Um, looking back on your journey, do you have any lessons or words of advice for people who aspire to, uh, be like you?
Matt: No. I mean... I would not go into open source development thinking that you're going to make a living off of it. Um, I've been very fortunate and pushed really hard to make that happen. And I'm not going to try to discourage you: if you want to do that, go for it. Like, I think that's a great goal. But I think it's better if companies who are already profitable, and developers who work for those companies, who, you know, already have a salary, get into open source at that point.
I think that's good. I think it's healthy for tech companies to be involved in open source. But, um, honestly, I would not over-glorify open source development. Um, it definitely is not perfect and has a lot of issues. But if you like working with communities... like, I'm still personally developing better patience and kindness toward everyone,
so that's something I'm working on... but if you're really into that community side of things, then open source is really great. Um, but it doesn't need to be a goal. You can still be a really successful developer, um, doing a normal day-to-day development job, um, that may not get any recognition, unfortunately. But, um, just please do your job
well, whatever it may be, because we who are using your company's products are relying on that, you know? Um, so.
Beyang: Definitely. And, uh, I really do hope that someone comes along and cracks the very tricky nut of how to enable open source contributors and authors to make a sustainable living off their work, cause I think whoever does that will be doing the world a huge favor.
Matt: Yeah, I agree. I can give you a formula that I think will work, though you have to tailor it to each circumstance. Um, and this is not my own formula, but I've seen it work. And that is: find companies who will sponsor your projects, or sponsor you personally as, as the maintainer, um, where your project or your work benefits the company, either their employees or their customers, because either of those audiences are profitable for the company.
So if you write a developer tool that the company's employees use, then it's in the company's best interest to sponsor your development and, um, keep that project alive, and not even necessarily in direct exchange for your, um, services... although you could, you know, make it a consulting arrangement in that sense, if you wanted to. But just the fact that the project is going to keep going, and keep getting security fixes,
is a very valuable thing. It's also a super good look for the company. Um, imagine this: a company sponsors this little open source project that is actually really important to them and their customers, and they're able to, say, send out an email to all their customers or put on their website, like, "Hey, we sponsor this project, you know, because you use it and you benefit from it, and we want it to stay alive and healthy and well. Um, you know, we're pleased to be behind it for our customers who use it."
Um, that's a really good look. And so this, this can work really well. Um, I'm fortunate in that, right now, my, uh, my living is covered basically by sponsorships: a single corporate sponsor, and then I have many individual sponsors. So I really rely on that, um, to continue working on Caddy full time.
Beyang: As kind of, like, you know, a final question: if people listening want to try out Caddy or Project Conncept or any of your other, uh, you know, work in the Go community, uh, what would you recommend they do?
Matt: Um, yeah, just go to the Caddy website, and you can click download and just get a build for your system. Or you could use any of the packages we have, Docker included. Um, and then just go to the getting-started guide and go through the tutorial, and then find the next tutorial, um, whether it's the Caddyfile or API tutorial.
Um, just start with it and see how useful it can be for you. And then, um, feel free to post in our forums, get involved. And, uh, you know, even if you don't need help with anything, go find someone who is asking. We have a lot of people asking questions about how to get Caddy to work with their setups.
And, uh, again, most of the questions aren't really so much problems in Caddy; they just don't know how to get their setup to work. And so we could really use more people helping on our forums. We have a few wonderful contributors to our forums who are very active and helpful,
um, but it'd be nice if that burden weren't only on them.
Beyang: My guest today has been Matt Holt. Matt, thanks for being on the show.
Matt: Yeah. Thanks so much, Beyang. This has been fun.