Matt Holt is the author of many popular projects in the Go open-source world, among them the popular Caddy web server, which pioneered support for HTTP/2 and might still be the only major web server to support automatic TLS by default.
Matt talks about his motivations for creating Caddy, how the project grew and evolved over time, what it was like to do a complete rewrite from Caddy v1 to v2, and the challenges of maintaining a very popular open-source project. He also talks about his latest project, a TCP multiplexer called Project Conncept.
Caddy v2: https://caddyserver.com/v2
Visual Basic 6, QBasic, MS-DOS 6, Windows 3.1: https://en.wikipedia.org/wiki/Visual_Basic, https://en.wikipedia.org/wiki/QBasic, https://en.wikipedia.org/wiki/MS-DOS#MS-DOS_6.x, https://en.wikipedia.org/wiki/Windows_3.1x
Talk about Caddy 2 and the engineering challenges of long-running Go programs: https://www.youtube.com/watch?v=EhJO8giOqQs
Caddy v2 architecture: https://caddyserver.com/docs/architecture
Caddy modules: https://caddyserver.com/docs/extending-caddy#module-basics
Third party Caddy app for auth, from Paul Greenberg: https://github.com/greenpau/caddy-auth
Papa Parse: https://github.com/mholt/PapaParse
Project Conncept, layer 4 TCP multiplexing: https://github.com/mholt/conncept
Beyang Liu: All right, I’m here with Matt Holt, creator of the popular Caddy HTTP server and contributor to many, many open-source libraries in Go. Matt, welcome to the show.
Matt Holt: Thanks, Beyang. Glad to be here.
Beyang: Well, we have a lot to cover today as always before we get into the meat of your open source contributions I wanted to start off by asking you how you got into programming originally. What was that initial spark for you?
Matt: I think it had something to do with growing up in Iowa in the countryside with no one else around to play with or anything. So I had to make my own fun, and part of that was tinkering on the computer, I guess, when I wasn’t out working in the stable. So I think I credit it to learning, oh, what did I learn? Visual Basic 6, and QBasic before that, tinkering around in MS-DOS 6. Those were fun days, learning how to use Windows 3.1 and seeing if I could cheat some of the games and, I don’t know, those were good times. That’s what got me into it.
Beyang: You got started early then with Windows 3.1.
Matt: Yeah. We had no internet, so I had to learn it all from books.
Beyang: That’s amazing. I actually didn’t know that about you, that you grew up in Iowa. I actually grew up in Iowa too. Although, I think I had a far less productive childhood than you as far as computers go.
Matt: That’s okay. You don’t need a productive childhood. You need a fun one.
Beyang: Yeah, absolutely. So in preparing for this conversation, I was trying to think back to how I first met you. We met through the Go open source community and I think it might’ve been one of the early GopherCons. Do you remember how we first met?
Matt: Oh man. Good question. I don’t. GopherCon sounds right. I think I went to the 2014 or 2015 GopherCons. So the first couple of them. So it very well could have been there. I think I met you and Quinn.
Beyang: Yeah, yeah. That was … Time flies.
Matt: I know. So much has changed.
Beyang: Yeah. Given that you were at the early GopherCons, my sense is that you were a fairly early adopter of Go, tried it out in the fairly early days before too many people had adopted it. What got you into the Go programming language in the first place?
Matt: The company I was working at was looking into using Go. This was right after 1.0 was tagged, so this was back in 20, I don’t know, ’11-ish, and we were a C# shop. I was doing front-end stuff at the time, but we were looking into Go for our new backend stack. So I wrote one of the first services in Go. It was an address autocomplete API, which I think was running in production until last year. It was this awful Go code, my first Go project, but Go was a breath of fresh air. I was coming from PHP. I had done all my backend work in PHP before that, and man, Go was awesome. I’ve been championing it ever since, and I’m just grateful that Google hasn’t abandoned it like every other one of their products. So hopefully that trend holds.
Beyang: Yeah, I remember coming to Go from mostly Java and I had very much the same reaction as you did. It was definitely a breath of fresh air.
Matt: Yeah, it’s nice to be able to have short lines of code and not need your ultra-wide monitor to read the class names. I know that’s a trope, kind of a joke, but it’s so true.
Beyang: Yeah, yeah.
Matt: Actually, it’s funny because in college, in my upper-division classes, once you get to the point where you can choose your own language, I would write my projects in Go. So I wrote all my stuff in Go, and using basically the same algorithms as my classmates, all my programs ran faster and used less memory. All my friends who were writing in Python, their programs were really slow. All the friends who wrote code in Java, their VMs would crash because of high memory usage and stuff. Again, comparable code base, comparable algorithms; Go was just so much better. It gave me a competitive advantage in school.
Beyang: From talking to my brother, who had just graduated from college, I don’t think Go has actually been that widely adopted in the university environment yet. So if anyone is listening to this and they’re currently in college, that’s a pro tip for you. Write your assignments in Go and they’ll automatically be faster and better than most of your peers’.
Matt: Honestly, yeah. Without the pain of C.
Beyang: Yeah, totally. So the Caddy HTTP server, that’s probably the project that you’re most well known for. I think it’s probably the most popular project that you’ve created. For those who are not familiar with it, how would you describe what it is and what it does?
Matt: Yeah, I would say it just like you did. It’s an HTTP server. I guess you could compare it to nginx or Apache or HAProxy and some other popular web servers and proxies. It is both a content origin, so it can serve up static files and render documents and things like that, but it’s also a proxy, a very powerful and flexible HTTP reverse proxy.
Beyang: And what is your view of the landscape of HTTP servers and reverse proxies? You mentioned Apache and nginx and HAProxy. What’s [crosstalk 00:07:35] that space and where does Caddy fit in along with those others?
Matt: Yeah. So when we released Caddy back … Well, it was a one-man project back when I first started it in 2015. When I released it then, the ones I named were the major players, and there was IIS for Windows, but there were four or five mainstream servers, plus a couple of others like lighttpd. So it’s not a huge … It’s a huge market. Everyone needs a web server, and even these serverless architectures that came about later still use web servers. Fun fact. So everyone needs a web server, but there were only a few, so I needed one that satisfied my requirements, and so I made one, and it was different from all the rest. Hopefully if you use it you get a whiff of that and you can get a sense of how it’s different and hopefully better than what you’re used to.
Beyang: My sense is, I’ve used nginx before and a little bit of Apache, but when I compare the experience of working with Caddy to those, it seems to me that the developer experience was at the top of your mind when you were building it, because it seems fairly ergonomic, a little bit more intuitive to me as a developer who traditionally has not spent that much time as a sysadmin or operations-focused person.
Matt: Yeah. Yeah, that was the focus, right, because I was setting up a bunch of random sites, especially for school and work projects, a bunch of little sites, and manually configuring them, and so I needed a server that could quickly just spin up a site with semi-advanced functionality like proxying, and markdown rendering was a big one for me at the time, and templates, like doing includes in my HTML files and stuff. Things like that that were stupid but important for me at the time. So it was just coming up with a way to configure it and a little command to run it real quick.
Matt: I don’t think I had the future or growth in mind when I designed it, and really, if you followed the Caddy v1 lifecycle you could definitely feel … You could tell it got strained a bit. So things like server-side includes, but better, a little more flexible. Including files into HTML files, but also rendering the current time or doing little transformations on the text. So stupid little features like that, but they were hard to come by on other web servers. Plus something that was easy to configure and just made sense. So the Caddyfile came about through that. So anyway, I just built this server that satisfied my requirements and was just a lot easier to use at the time compared to any other servers that were out there.
Beyang: Nice. So you were building it for yourself at first. At what point did you say, "Hey, this thing that I built for myself might be useful to others," and when did others actually start using it?
Matt: Yeah. I developed it myself and then had a classmate who I worked with. He also gave me a little bit of a nudge back in the day. Then there were a couple of other classmates who contributed to it early on, like the FastCGI middleware to work with PHP sites. So I was really grateful for their help. Then I realized, when they started contributing to it and using it, maybe I should open source this thing, so I put it on GitHub and then I put it on Hacker News, which was a mistake. Maybe not, I don’t know. It was actually well received on Hacker News, but when it got to the front page there and got a lot of feedback, that was really helpful and embarrassing at the same time.
Beyang: Yeah, the Hacker News crowd can be very blunt and-
Matt: Ooh boy.
Beyang: Yeah, unforgiving at times.
Beyang: But at the same time, now looking back it’s like, yeah, the project clearly filled a gap; others felt that there was something that was needed there.
Matt: Yeah, I think so, and back in the day it was HTTP/2 and easy config. HTTP/2 was cutting edge, and auto-HTTPS wasn’t a thing yet, not quite yet. In fact, you couldn’t even configure the TLS settings at all in the initial release.
Beyang: What were the big features that people found really attractive in those days? I think this was circa 2013. Is that right?
Matt: No, it was 2015, yeah.
Beyang: Oh 2015, okay.
Matt: HTTP/2 had just barely been standardized, or was close to being standardized. That was the main selling point. It was this cross-platform server, so it’s a server that worked the same on Windows, Mac, Linux, FreeBSD, OpenBSD, whatever. It was an HTTP/2 web server, so if you served … Well, because HTTP/2 is typically over TLS only, if you enabled TLS, and at the time you had to enable TLS, it would also serve HTTP/2 just like that. It was really cool. Then I think people liked the easy configuration of the Caddyfile. People liked it. They also hated it. I was in both of those camps. I kind of liked it but I hated it, and I hated it more as time went on, and I hate it to this day, but it remains to this day and we can talk about that later.
Beyang: Yeah, and for those listeners who are less familiar with Caddy, it would behoove us to mention that there’s Caddy v1 and Caddy v2. So we’re talking about Caddy v1 right now and we’ll get to Caddy v2 a little bit later. But yeah, I’m curious. What did you and others start to hate about Caddy v1 over time?
Matt: I definitely felt the pull in many directions for features and capabilities, and up to a certain point it was cool. We were able to handle it. I came up with this modular plugin architecture so you could just add an import and add new functionality to Caddy that way, and to this day we still use that basic idea, these compile-time plugins. So we can add more features infinitely and not blow up the code base, which is really nice. But the way that Caddy loaded its configuration and received its configuration and interacted with its environment, so the use of signals and the lack of an API, like a REST API, was very limiting. So feature requests started to pile up, and I did my best to keep the number of open issues under 100, but I could only do that for so long, and I had to close or defer a lot of issues and be like, well, then Caddy isn’t the right tool for the job, even though deep down I’m like, that’s stupid. It should work.
Beyang: And at this point were you still the sole main contributor to the project or had others joined?
Matt: Yeah, no, well, Caddy v1 had over 250, almost 300 contributors. People who made pull requests, maybe changing a line or two, but plenty of contributors. The problem is getting them to stick, and so as far as maintainers go, we did have a few in our community who stuck around and developed features in a pretty dedicated sense, but it’s been really hard to find maintainers who will … To scale the project in that sense, the maintainership of it, and find people who will take some responsibility and also enjoy just vetting the code and I guess putting in … It’s a big ask to put in time on a project that’s not yours, even if it’s one that you use. People will contribute what they want or need and then they’ll leave and maybe continue using it, but they won’t contribute to it again. Yeah, it’s been tough to find maintainers though.
Beyang: Yeah. I want to dive into the pain points of maintaining a large, wildly popular open-source project at scale sustainably, but before we get into that, I think a lot of people out there would be … They would be super glad to have that sort of problem, where there are so many bug reports and feature requests because so many people are using it. So to that end, do you remember what the big inflection points were? Was there a point at which things hit a hockey-stick trajectory in terms of more and more people getting onto Caddy? What do you think were the factors in its explosive early growth?
Matt: That’s a good question. So again, very initially it was HTTP/2, the cutting-edge technology, and the ease of configuration. Then it came to be known as a local developer tool, or even a toy web server, something you would tinker with but never something you would deploy into production, which is unfortunate because that was always the point of it.
Beyang: Yeah, why do people perceive it as non-production?
Matt: I don’t know. Part of it, honestly, I think is psychological. I blame the name. I actually don’t like the name Caddy but it was the most descriptive name I could think of because it takes care of all these little details for you so you don’t have to worry about it as you play your games, so to speak, or as you … It just takes care of all that but the name is not as cool as nginx and Apache.
Matt: It sounds dumb, but that honestly might be one reason. Another reason, and I don’t buy … I hear this a lot: "Oh, Caddy is so new." Well, CoreDNS is even newer, and it actually started as a Caddy plugin and is now a Caddy fork, but that powers Kubernetes, and Kubernetes is about the same age as Caddy.
Matt: I could name a bunch of others; Kong, I think, launched around the same time. I don’t buy the whole "it’s too new" thing. I don’t know, I think people have just made up their mind when they … You make a decision … It’s probably no different than meeting a person. You make a judgment, whether that’s a good thing or bad thing, but anyway.
Beyang: Yeah, that’s really too bad because it’s a really good production web server. We use it in production and a lot of our customers do as well.
Matt: Cool. Yeah, then back to your first question here. Another thing that did help, though, was the automatic HTTPS. I think that wowed a lot of people. It still wows me today, actually, that that works. The fact that you can turn on a web server with nothing more than a domain name to serve and boom, it’s served over HTTPS in just a couple of seconds. It has a certificate, it’s managing it, it’ll keep it renewed, it staples the OCSP response for you. It does HTTPS right, about as well as you can possibly do it, out of the box, and no other server does that by default still to this day. There are plenty of services and servers that will do it automatically, that will automate HTTPS, but you have to check a box or turn it on with some config parameter. Anyway.
Beyang: Yeah, that’s pretty interesting, because when I think about the pain points I encounter when standing up a web server, setting up TLS and HTTPS is often extremely annoying. Why do you think it is that other web servers have just not prioritized that? Are we in a niche category here, or are they just not as focused on the developer experience, or what is it?
Matt: I don’t know. It’s hard, but maybe they’re afraid of new major versions. But HAProxy just released a version 2.0 and it’s still not automatic. It’s still not HTTPS by default. I don’t know, honestly. HTTPS just seems like the right default these days.
Beyang: Yeah. Do you have numbers around how many people have downloaded and are using Caddy today?
Matt: I did. Well, no, but yes. I have a rough idea of how many people have downloaded it from our website and how many people have cloned it from GitHub, and I have a rough idea of how many Docker pulls there are. So that’s the other thing: tracking people versus deployments versus downloads. It’s all different and it’s very nuanced, but Caddy has well over probably 40 or 50 million Docker pulls if you consider all the different Docker images. We now have an official one that’s a few months old; thanks to community contributions we’ve been able to make an official Docker image. Then downloads from our website, a million and a half over the last few years, git clones in the hundreds of thousands, but again, I don’t know what that means as far as actual deployment scale. I tried finding out how widely Caddy is deployed with telemetry, which was also a good research tool, because we were able to learn about the technical landscape of the internet and see what kinds of clients are out there on more than just proprietary networks like Cloudflare’s or Google’s.
Beyang: Okay, cool.
Matt: Yeah. So we could see what kind of TLS client hellos were being advertised and things like that but it was expensive to upkeep that and people treated it toxically unfortunately and so I shut it down.
Beyang: Yeah, telemetry is one of those really tricky issues because there are a lot of people very passionate about data privacy and that sort of thing and I think legitimately so in a lot of cases but at the same time I think a side of the conversation that often gets lost is, hey the people who made this application want to gather data so they can make it better for users and learn from it. I feel like a lot of applications struggle with that sort of balance.
Matt: And yet people still use software like Linux and Firefox and Chrome and Windows, all of which emit telemetry. So whatever.
Beyang: Yeah. Let’s talk about the journey from Caddy v1 to Caddy v2. Caddy v1 was released in, I believe it was April 2015 and then Caddy v2 was just released earlier this year in May, right?
Matt: May the fourth, yup.
Beyang: May the fourth be with you.
Matt: [inaudible 00:23:17].
Beyang: So my understanding was Caddy v2 was pretty much a rewrite from scratch although maybe there were a couple components that you took over from Caddy v1 but can you talk about that decision to make it a major version upgrade and to revamp the code base?
Matt: Yeah. Well, like I talked about before, all these feature requests and issues started piling up, and so it was time to rewrite it. So I started with an empty Go file and func main and just started cranking away. First thing I had to figure out was … Well, I became very familiar with all the open issues and eventually came up with an architecture. This took four or five months of just pacing around the room and talking through problems and spike-coding a bunch of stuff and just seeing what would work, doing a lot of research as to … How would configuration be loaded? What would configuration look like? What format is it in? How can we achieve these goals that we have and address these open issues?
Matt: It was a lot of work and I was just finishing grad school. It was right at the very end of my thesis, so it was a really busy time, but I came up with, I think, a really awesome design, a fairly novel architecture, and we were able to close over 400 issues and feature requests with just the new design alone. I think we have the capability of closing all the remaining ones if people want to help put in some time and just finish them up.
Beyang: That’s awesome. What were the key design and architectural decisions and insights that enabled all this?
Matt: Yeah. So the first one was how to load configuration and how to manage that. In v1, a major problem was that the Caddyfile was only human readable and writable. So it was a bit restrictive if you wanted to automate your deployments, which is more and more popular these days. Everything should have an API so you can programmatically interact with it. So I designed it around a config API, or an admin API, and then the other question was, how do you … What format … I have so many … There’s actually a YouTube talk on this we could link to in the description, maybe.
Matt: But anyway, one of the main questions I had to answer was how do you change certain config parameters in real time, especially if you don’t want to change the whole server, and if you do want to change the whole server, how do you do that efficiently? You have a server with a thousand concurrent clients actively using it. How do you change something? You can’t just pull the rug out from under all of them, can you? Well, it turns out you can, and you can actually make it so that it feels like you’re just changing one config parameter.
Matt: Let’s suppose that you have an API endpoint and you change this one config parameter using this endpoint. You can make it work like that, but actually, if you swap out the whole config with just that one changed parameter, it’s still really efficient, because it makes things really easy on the garbage collector to just clean up the one config value that it had provisioned. So you actually can pull the rug out from under all your concurrent goroutines and put down a new one without anything noticing, and it requires only a single lock, which is really nice. So you can do dozens of config [inaudible 00:27:13] per second, or whatever your hardware’s capable of, no problem. Also, this way we don’t need a lock around every single config parameter. You can imagine the concurrency nightmares that go along with that, which also introduces two-way data binding problems.
Matt: So anyway, there were a lot of these config-handling questions that I had to answer, and I think we have a really good, simple solution that works really well. We even have graceful reloads working on Windows, which is not something other web servers really offer, because of the way we handle network sockets and do the graceful … We gracefully transfer control of the socket from one server to the next. We do that in a way that works cross-platform, which is really unique. So there was that issue.
Matt: The new design also had to answer what the config looks like, and I’ll cut to the chase and spare you the details as to why, but we settled on JSON. Whether you like it or not, the config is JSON, and it’s actually a very elegant solution if you look into what you can do with it. Then, writing JSON is obviously a pain, so the answer to that is to have config adapters, as I call them: little pieces of code, basically Caddy plugins, that can change your config from any format you prefer into JSON. One of the reasons we had to pick JSON was because almost anything can evaluate down to a JSON document, and there’s a bunch of language theory behind why that is and declared over [inaudible 00:29:01] and all this stuff.
Matt: But yeah, you can convert your Caddyfile to JSON. So anytime you use a Caddyfile in Caddy v2, it just converts it to JSON under the hood for you. You can convert your nginx config to Caddy JSON and even run Caddy directly with your nginx config like that. [inaudible 00:29:22] obviously convert to JSON if you prefer those. The thing is that with JSON config, we’re able to expose just about every single config parameter of your web server in a one-to-one format. Every field in a struct that’s in memory, you can configure via JSON. So you have unprecedented amounts of control over your web server.
Beyang: That’s awesome.
Matt: Yeah, anyway. That’s a really long answer but that’s also I think where Caddy v2 really shines so I hope the power users will really appreciate that.
Beyang: Cool. Another feature of Caddy v2 is the extensibility part of it, and I understand that extensibility is achieved through these things called Caddy apps. Can you explain what Caddy apps are and what sorts of things you can do with them?
Matt: Yeah, so a Caddy app is … Caddy is made of modules. A module is a piece of code, or a plugin basically, that adds something to Caddy’s config structure. So if you look at the top level of a Caddy config, of its JSON structure, it has four fields. There’s logging and admin, and then there’s storage, and then there’s one called apps. The only thing Caddy knows how to do at its core, and this is a crucial thing to understand about the Caddy architecture, is load a config. It knows how to work with those four fields and that’s about it. Everything else is handled by modules.
Matt: So for example, storage: at that top level you would configure various storage modules. It has a default storage module of course, the file system, but all Caddy does is say, "Oh, I’m supposed to use the file system storage," and then it loads that, and then the file system module, or whatever storage module it is, will just run with that and do what it needs to do. Then when it gets to apps, it’s just, hey, it’s an app, and all it knows how to do is call start on the app, and it passes in a context, and that’s literally all Caddy does. Then when the config changes it calls stop on the apps. Actually, I don’t even think it does that. I think it just cancels the context, which implicitly calls stop on all the apps, and calls start on all the new apps. So that’s really all Caddy does: it starts and stops apps.
Matt: Apps are just pieces of Go code that implement an interface called caddy.App, which has two methods, start and stop. The HTTP server is a Caddy app, a module that extends the config structure; when you start it, it loads all the servers that you have configured there and it knows how to run them, and then when stop is called, it knows how to shut them down gracefully.
Beyang: That’s really cool.
Matt: Yeah, so apps can do anything, anything that a long-running program will do. The apps that currently ship with standard Caddy are the HTTP app, which runs the HTTP servers, and a TLS app, which manages certificates. This means, by the way, that you can actually run Caddy v2 and manage certificates, meaning obtain them and keep them renewed over the lifetime of the process, without needing to run an HTTP server at all. You just say TLS, and then there are a couple of other properties, but you just specify the domain names that you need certificates for and it will just keep them renewed in your storage, which is the file system by default but can also be a database or literally anything. So it’s super flexible here. We’re talking unprecedented and unlimited amounts of extensibility, basically. As long as it’s written in Go.
Beyang: And why is there that requirement of it being written in Go?
Matt: Yeah, because these are compile-time dependencies. So Caddy always is a static binary … Well, yeah. It’s a static binary. So you can ship around this executable that cross-compiles to basically any platform and it will just work. You have no external dependencies, not even libc. Now granted, we don’t really have control over what third-party apps and modules do. They might use cgo, and that’s up to them. We discourage that typically, and I don’t think we’ll officially distribute any that have external dependencies like that, but … Yeah, so the idea is that if you want to extend Caddy, you do it at compile time, and then you can ship around these static binaries. You don’t have to worry about which Python version you have installed or which C libraries you’ve got and all that stuff on a system. Frankly, you don’t need Docker to run Caddy. People do because they’re already ingrained in that ecosystem, but it’s just a static binary, just run it.
Beyang: Yeah, that’s one of the magical things about Go is that you can cross compile it to basically any computing platform and it’s just a single executable. You drop it in and it just magically works.
Matt: Yeah. So nice.
Beyang: So Go has a plugin package in its standard library, but you went the route of actually statically compiling Caddy apps into the code as opposed to using the plugin mechanism in the Go language. Can you talk about why you went that route?
Matt: Sure, yeah. The Go plugin package is interesting, but it’s experimental and it’s not great. It’s not Go’s fault; that’s just how it is. Dynamically linked runtime dependencies are a hard problem, and they make things difficult and tricky, and frankly, there’s not a huge win in my opinion. I know that people like that way of doing things and I respect that, but it’s just a lot more reliable and a lot less stressful to just get a static binary that has everything you need and then just ship that.
Matt: Now, this is difficult when you’re talking about certain distribution platforms and packaging systems. For example, packaging a Go program for Ubuntu, like an official Ubuntu or Debian package, is a nightmare. I don’t even know if it’s possible, because you have to package every dependency all the way down to the metal, basically. That’s really hard in practice. Anyway.
Matt: Anyway, Go plugins are fine, but they’re not something that we wanted to rely on, so the Caddy way of doing it is you just add an import to your main package, and literally, as long as it’s compiled in, the plugin will register itself on init when the program is starting up, and then Caddy can use it, and we have tools to help manage that pretty easily.
Beyang: That makes sense. So going back to Caddy apps and what you can do with them, I want to dive into some of the other things that you and perhaps members of the community have built with Caddy apps. Perhaps I’ll start with a personal anecdote, which is Sourcegraph. We support SSO because it’s often a requirement for the companies that we sell into, large enterprises, things like that. I was actually the one who wrote a lot of our original SSO code. I remember at the time looking into nginx plugins for SSO but getting lost in the documentation, and there’s always that worry in the back of your mind that maybe it works for the first one or two cases, but eventually you’re going to hit some edge case where the plugin doesn’t support what you need to do, and then you’re left hitting a brick wall. That’s why we ended up just building SSO into the Sourcegraph application itself, but if I were writing that today, knowing what I know about Caddy and Caddy apps and the plugin architecture, it seems like a much better path would’ve been to write SSO support as a Caddy app, and because it’s just Go code I could make it talk to whatever backend or database I needed it to connect to for user validation and things like that. Is that kind of the idea?
Matt: Yeah, that’s exactly a great use case. So there’s a couple ways you could do it, right? SSO is a hard problem, but it could be a Caddy app if it’s standalone. If it can run on its own, you just call start and then it does its thing, or if it needs to be an HTTP handler, it could be an HTTP handler module and you can plug that in instead.
Matt: I know a developer named Paul Greenberg has been working on some authentication modules for Caddy v2, and he’s done some really cool stuff. They’re very powerful. They’re still in the early days, but he wants people to help test them. So yeah, Caddy would be a great fit for things like authentication, anything that, again, is long-running. There’s another benefit to writing a Caddy module instead of a separate piece of code that you have to ship and run and manage. If your whole stack is written in Go and they’re all Caddy modules, then you ship one binary around, you have all your services in that one central configuration that you manage, and they all automatically benefit from the real-time online config API, because the config API just manipulates the JSON structure in real time while the server’s running.
Matt: So since it’s all one JSON structure, you can just manipulate it, and we can also automatically document it. If you go to the Caddy website and look at the JSON config structure in the documentation, there’s all this automatically generated JSON documentation. Normally that’s not that difficult to do, except that because of Caddy’s extensible nature, we have the top level, then a few keys or properties, then the apps key, and then what’s within that? Well, it’s impossible to know until we have modules. But with this documentation system, you can just hover over a key and see what modules are available there, and click on the one you want to use. So it’s all self-documenting, automatically.
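To make the one-JSON-structure idea concrete, here is a sketch of a minimal Caddy v2 JSON config and of how a config path like `apps/http/servers/…/listen` addresses into it. The overall shape (apps → http → servers, a `static_response` handler) follows Caddy’s documented config structure, but the server name “example” and the `listenAddrs` helper are invented for this illustration; they are not part of any API.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// A minimal Caddy v2 JSON config: one HTTP server answering every request
// with a static response. "apps" is the extension point -- what is valid
// under it depends entirely on which app modules are compiled in.
const config = `{
  "apps": {
    "http": {
      "servers": {
        "example": {
          "listen": [":8080"],
          "routes": [
            {"handle": [{"handler": "static_response", "body": "hello"}]}
          ]
        }
      }
    }
  }
}`

// listenAddrs walks the JSON the way a config path does:
// apps -> http -> servers -> example -> listen.
func listenAddrs(raw string) ([]string, error) {
	var c struct {
		Apps struct {
			HTTP struct {
				Servers map[string]struct {
					Listen []string `json:"listen"`
				} `json:"servers"`
			} `json:"http"`
		} `json:"apps"`
	}
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		return nil, err
	}
	return c.Apps.HTTP.Servers["example"].Listen, nil
}

func main() {
	addrs, err := listenAddrs(config)
	if err != nil {
		panic(err)
	}
	fmt.Println(addrs) // prints "[:8080]"
	// Against a running Caddy instance, the same structure is live-editable
	// through the admin endpoint (localhost:2019 by default), e.g.:
	//   curl localhost:2019/config/apps/http/servers/example/listen
	//   curl -X POST localhost:2019/load -H 'Content-Type: application/json' -d @config.json
}
```

Because the whole running state is one JSON document, “manipulating the config” really is just manipulating this structure, which is also why the docs can be generated from it.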
Beyang: That’s awesome. I think documentation is such an important part of developer experience and ergonomics, and I think Caddy does a fantastic job of that. The docs are so user-friendly and easy to explore.
Matt: I’m glad you said that. Can we feature that on the podcast page? Because I don’t hear that too often.
Matt: Our docs could improve but they’re pretty good. We get a lot of complaints but they’re not actually that bad.
Beyang: Yeah, I feel like however much effort you invest in your docs there’s always going to be a long tail of people that will say, “Well, I had a hard time using them.”
Matt: Yeah, we can work on that but they’re not there [crosstalk 00:41:14]. Anyway.
Beyang: At least from this user’s perspective, I’ve had an overall very positive experience with the docs.
Beyang: So I actually wanted to ask you, what is the key to having good docs? Are there a couple things that you found that really work for you and your community?
Matt: Documentation is hard. I think it’s important to scope your documentation and keep it, frankly, pretty narrow. It’s not your job to cover everything. We’re a web server, technically an application platform, but most people use it as a web server. Our job is not to explain how the internet works or how to configure your system. Frankly, I know it’s funny when you say it like this and I don’t mean to come off-
Beyang: I know exactly what you’re talking about though.
Matt: I don’t mean to come off condescending, but frankly, it is not the documentation’s job to explain how to open ports 80 and 443, which are typically used for HTTP and HTTPS, or how to configure firewalls and other network things, or how to run a process on Linux after a reboot. We have official packages and we have a tutorial, but really, our docs need to be focused on how to use the software. So we get a lot of complaints that our docs don’t tell you enough, but the reason is that we expect you, frankly, to know how to use your computer and how the internet works, and then, when you’re ready, to come learn how to use a web server.
Matt: Caddy is not a child’s toy. It’s an advanced tool. I think our biggest mistake in v1 was touting it as easy to use. That was a mistake, because no matter how easy you make a web server to use, and Caddy is not hard to use, there’s a lot going on and you just need to understand it. So read the documentation. We have tutorials that users should go through. I want everyone, no matter how experienced or expert you are, to go through the getting started guide. Just go through getting started, then go through either the API tutorial or the Caddyfile tutorial, whatever you want. Go through at least those two things. The rest of it is mostly reference documentation: once you know what you’re looking for, just find it in the reference documentation and figure it out. Yeah, we’ll improve it where there are mistakes or gaps, but focus the docs. You need to separate tutorials from reference, and also expect something of your users. It’s okay to do that.
Beyang: Yeah, definitely. I’ve certainly had that experience, even with something like Sourcegraph, which isn’t even an HTTP server. Sometimes I’ll get random emails from people saying, “Hey, could you help me answer this homework question?” I’m like, “This has nothing to do with Sourcegraph, but…” I feel like for a lot of people it comes from a good place, because when I think back to when I was very new to development, you can easily get overwhelmed by the sheer quantity of stuff you may or may not have to comprehend about how the internet works, how a computer works. But I guess my advice to anyone submitting a question like that to an open source project is to just be up front about your level of knowledge.
Beyang: If you’re grasping at straws just say that. Oftentimes people are very happy to point you in the right direction but just err on the side of saying, “Hey, I’m a newb here. I’m trying to get this up and running. I understand this may not be the best forum. Would love any pointers that people might be able to give.” I think that sort of ask will have a much higher likelihood of getting a useful response.
Matt: Definitely, definitely. It’s hard, because we developers and maintainers have limited time. I want to help people. I would love to tutor people and help them with their homework problems, but I have limited time to do that. And I would also suggest: don’t go in guns blazing, blaming the software. Even if you know what you’re doing and how to use your computer, it might not actually be the software’s fault. Honestly, I would say more than 50%, close to three quarters, I don’t know, I’m spitballing here, but a lot of the forum threads and issues that are opened with the Caddy project, at least, are not actually issues with Caddy. They’re usually issues with a Cloudflare configuration, or a network misconfiguration, or a Docker mess-up. So many times it’s Docker or DNS. Docker and DNS are equally problematic, and the actual problem has nothing to do with Caddy.
Matt: Most of the time Caddy works. The software works. It’s just that putting all the pieces together is hard, and I get that. But don’t come in blaming the software. Be open to possibilities and help us understand your entire setup. Yes, it’s a lot of information to write down, and it takes time to boil it down to its simplest form and make the problem reproducible so we can experience the same thing. These are skills that we want the community to learn; they’re very helpful and very important. And when we do get good reports, we can fix the bugs.
Beyang: Yeah, absolutely. So beyond Caddy, you’ve created and contributed to quite a few other very popular packages in Go. Just to name a few: json-to-go, Papa Parse, archiver, timeliner, curl-to-Go, checkup. People should really go to your GitHub page and check out all the amazing stuff you’ve done. But I understand you’re working on a new project now that has something to do with layer 4 TCP multiplexing?
Matt: Yeah. Yeah, that’s my latest side project that I’m having fun with. It’s a Caddy app, and if you’re familiar with Caddy’s HTTP server, it’s a similar idea. It’s basically a TCP/UDP proxy and server. So instead of operating at HTTP, the application layer, it operates at the transport layer: it accepts raw network connections or packets, and you can configure Caddy to do whatever you want with them. First, you can match on the connection. It’s similar to the HTTP server, where you have this powerful idea of matchers, which are pieces of configuration that select or filter certain requests, right? You can match on HTTP headers, or the path of the request, or the client IP, or the time of day, or whatever else. It’s a filter, basically. You can do that at layer 4 now. You can match on the client IP, of course, or on whether the connection looks like the HTTP protocol. It can read in a few of the bytes and sniff it out: does it look like a TLS handshake? Does it look like SSH? Does it look like some other database protocol, or whatever? So you can match on connection type and multiplex various different protocols and connections on a single socket.
Matt: That’s really cool. Then from there you can proxy it with load balancing and health checks, or you can echo it back if you’re debugging or troubleshooting. You can tee it off, like the Linux tee command. That’s useful sometimes: you want to record what’s coming in from a client but still pass it upstream. You can terminate TLS or not, whatever you want to do. And of course, with TLS termination you get the benefits of certificate management for free, the automatic TLS.
Matt: So anyway, it’s a really cool, really flexible piece of software that I’m really excited about.
Beyang: Would it be safe to call it Caddy but for layer 4 instead of layer 7? I hate to do the x for y thing but is that kind of like a …
Matt: Yeah, yeah. It’s basically the same thing. I call it Project Conncept with two Ns. Like a connection.
Beyang: Oh, that’s a cool name.
Matt: Yeah, it’s better than Caddy.
Beyang: [crosstalk 00:50:24].
Matt: Yeah, it’s Apache-licensed, but it’s not open to the public while I’m developing it. Right now sponsors get exclusive access, so if you sponsor me, you can get early access and test it out, and I’d love your feedback, whoever’s listening to this who would like to try it.
Matt: I have the sponsor goal if you go to my sponsor page and once we reach that goal-
Beyang: This is for GitHub sponsors, right?
Matt: Yeah, github.com/sponsors/mholt. H-O-L-T. You’ll see that I have a sponsor goal, and once we reach that goal I’ll open it to the public. We can get to that goal faster with higher-tier sponsorships, too. If you want to advocate for your company to sponsor, that would be an ideal arrangement, I think.
Beyang: Yeah, definitely. I would highly encourage anyone who is remotely interested in this sort of thing to go and sponsor that project. It seems extremely useful, extremely cool.
Matt: It’s a fun one.
Beyang: Yeah, and also, I think open source contributors and maintainers invest so much time and effort into making these libraries and tools that we all can use. But they also have to pay the bills at the end of the day.
Matt: We do, yeah. I recently put out a tweet comparing open source sponsorships to athletic sponsorships. You know how athletes will be sponsored? Typically an athlete is sponsored by one company, or maybe NASCAR drivers are sponsored by a few companies, but open source developers are sponsored by thousands of sponsors. Well, the really popular projects are. Caddy is actually not at this scale, but you look at a project like Vue or whatever, and they have thousands of sponsors, each giving a dollar or five dollars a month. I just thought it’d be so funny if athletes were sponsored by thousands of their fans at a dollar a month or something.
Matt: It’d be kind of weird. That’s a weird thought. Corporate sponsorship really is the way to go, I think.
Beyang: Yeah. So I just went on and sponsored you and I encourage everyone else to do the same.
Matt: Oh, thanks.
Beyang: Well, thank you so much. I feel like it’s the least I could do.
Matt: No, this is fun.
Beyang: As a final thought, you’ve built a fantastic reputation as one of the most active open source contributors in the world of modern web utilities and the Go open source community. Looking back on your journey, do you have any lessons or words of advice for people who aspire to be like you, Matt?
Beyang: No, no.
Matt: I mean, I just would not go into open source development thinking that you’re going to make a living off of it. I’ve been very fortunate and have pushed really hard to make that happen, and I’m not going to try to discourage you. If you want to do that, go for it. I think that’s a great goal. But I think it’s better when companies that are already profitable, and developers who already have a salary from a company, get into open source at that point. I think that’s good, and it’s healthy for tech companies to be involved in open source. But honestly, I wouldn’t over-glorify open source development. It definitely is not perfect and has a lot of issues. But if you like working with communities and if you like…
Matt: So I’m still personally developing better patience and kindness toward everyone; that’s something I’m working on. If you’re really into that community side of things, then open source is really great, but it doesn’t need to be a goal. You can still be a really successful developer doing a normal day-to-day development job that may not get any recognition, unfortunately. Just please do your job well, whatever it may be, because those of us who use your company’s products are relying on it.
Beyang: Yeah, definitely. I really do hope that someone comes along and cracks the very tricky nut of how to enable open source contributors and authors to make a sustainable living off their work, because I think whoever does that will do a huge favor to the world.
Matt: Yeah, I agree. I can give you a formula that I think will work, though you have to tailor it to each circumstance. This is not my own formula, but I’ve seen it work, and it is: find companies who will sponsor your projects, or sponsor you personally as the maintainer, where your project or your work benefits the company, either their employees or their customers, because either of those audiences is profitable for the company. So if you write a developer tool that the company’s employees use, then it’s in the company’s best interest to sponsor your development and keep that project alive. Not necessarily in direct exchange for your services, although you could make it a consulting arrangement in that sense if you wanted. Just the fact that the project is going to keep going and has some security behind it is a very valuable thing. It’s also a super good look for the company.
Matt: Imagine a company that sponsors a little open source project that is actually really important to them and their customers, and they’re able to send out an email to all their customers, or put on their website, “Hey, we sponsor this project because you use it and benefit from it, and we want it to stay alive and healthy and well. We’re pleased to stand behind our customers who use this.” That’s a really good look. So this can work really well. I’m fortunate in that right now my living is covered basically by sponsorships: a single corporate sponsor, and then many individual sponsors. So I really rely on that to continue working on Caddy full-time.
Beyang: That’s awesome. As a final question, if people listening, they want to try out Caddy or Project Conncept or any of your other work in the Go community, what would you recommend they do?
Matt: Yeah, just go to the Caddy website and click download to get a build for your system, or you could use any of the packages that we have, Docker included. Then go to the getting started guide and go through the tutorial, then find the next tutorial, whether it’s the Caddyfile or API tutorial, and just start playing with it and see how useful it can be for you. Then feel free to post in our forums. Get involved. Even if you don’t need help with anything, go find someone who is asking. We have a lot of people asking questions about how to get Caddy to work with their setups, and again, most of the questions aren’t really problems in Caddy; they just don’t know how to get their setup to work. So we could really use more people helping on our forums. We have a few wonderful contributors who are very active and helpful, but it would be nice if that burden weren’t only on them.
Beyang: My guest today has been Matt Holt. Matt, thanks for being on the show.
Matt: Yeah, thanks so much, Beyang. This has been fun.