Taking the warts off C, with Andrew Kelley, creator of the Zig programming language

Andrew Kelley, Stephen Gutekanst, Beyang Liu

How do you improve on C? In this episode, Andrew Kelley, creator of the Zig programming language and the founder and president of the Zig Software Foundation, joins Beyang Liu, co-founder and CTO of Sourcegraph, and special guest Stephen Gutekanst, software engineer at Sourcegraph, to talk about what it takes to create a new programming language. Along the way, Andrew shares how programmers can get funding for their side projects and hobbies, explains why conditional compilation exposes philosophical differences between Zig and C, and makes the case for why and how Zig can be faster than both C and Rust.


Show Notes

Zig: https://ziglang.org/

OK Web Server: https://okws.org/

Zig Software Foundation: https://ziglang.org/zsf/

LLVM: https://llvm.org/

Swiss Tables: https://abseil.io/blog/20180927-swisstables

Zen of Python (PEP 20): https://peps.python.org/pep-0020/

Ziglyph: https://github.com/jecolon/ziglyph

Tokio: https://github.com/tokio-rs/tokio

River: https://github.com/ifreund/river

TigerBeetle: https://github.com/coilhq/tigerbeetle

Transcript

Beyang Liu:

All right. Hey, everyone. Welcome back to another episode of the Sourcegraph podcast. So a couple months back, I started noticing my teammate, Stephen, tweeting about this thing called Zig. And at first I didn't make much of it, because Stephen is a brilliant programmer who has built a lot of the core functionality up and down the stack at Sourcegraph, who also likes to pursue a lot of side projects, ranging from hacking on search backends and indexing technology to building miniature castles for his many, many cats at his home. But then he kept tweeting about this and all the cool stuff that he is building in Zig. And so I decided to take a closer look, and upon taking that closer look, I found a really cool programming language that was new to me, and a really awesome early community around it. And Zig is this systems programming language intended to replace C, and I'm really glad today to be joined by Stephen, and also the creator of the Zig programming language and the founder and president of the Zig software foundation, Andrew Kelley. Andrew, thanks so much for being with us today.

Andrew Kelley:

Hey, thanks for having me on the podcast.

Beyang Liu:

Cool. And hey, Stephen. Thanks for joining as well.

Stephen Gutekanst:

Yeah. Excited to be here.

Beyang Liu:

Cool. So, before we dive into all things Zig, I just want to ask, Andrew, what was it that got you into programming in the first place? What started this big journey for you?

Andrew Kelley:

I think I have a pretty average story. I was a young child. I liked video games and I especially liked the ones where you could make your own level or something. I remember, for me, Tony Hawk's Pro Skater 2 was a magic experience: making your little rail gaps and naming them and all that stuff. And so when I discovered programming–it was the ultimate game. It's "make your own game," right? So the appeal of the power that you could have over the computer, I think, was what got me into it.

Beyang Liu:

Did you start programming games or was it just that general concept and then you got into programming in a different domain?

Andrew Kelley:

The very, very first thing I did was, I think, intro to programming for dummies or something. I was 12, and it came with a Liberty BASIC CD–this language no one's heard of. It wasn't good. It was just someone's project to teach programming with a basic language. And so, yeah, the games I made were basically just dialogue trees, like, "What's your name?" and "Oh, hi, Name. Do you want to go north or east?"

Beyang Liu:

Got it. Yeah, Stephen, I think you got into programming a lot through video games too, right, if I recall correctly.

Stephen Gutekanst:

Yeah, definitely. I think my earliest stuff was trying to make games in CSS, and CSS was not very well suited to that at the time.

Beyang Liu:

Games in CSS. That's...

Stephen Gutekanst:

Yeah.

Andrew Kelley:

It's very functional.

Stephen Gutekanst:

Yeah. Yeah.

Beyang Liu:

Functional in the programming language sense. Maybe not so functional from a "trying to build game logic in there" sense.

Andrew Kelley:

Yeah. Fair distinction.

Beyang Liu:

Yeah. Cool. So, okay, Andrew, from that early beginning, what was the path to Zig? Was it always in the cards for you that you wanted to create a programming language or was it more windy and circuitous?

Andrew Kelley:

Well, I was just fascinated with every topic on computers. So, shortly after I got out of Liberty BASIC, I found Visual Basic–Visual Basic 6, to be specific. And I loved it. The fact that you open the program, it just gives you a new project, as every application did back in the day, and you could just press Run right away and an empty form would come up and it would say Form One and there'd be nothing on it. And it's just an invitation to just drag something onto the form and just create something. So I really enjoyed playing with Visual Basic and I just tackled everything I could think of. I wanted to make a WYSIWYG website editor. I wanted to make games. I wanted to make utilities. I wanted to make macros to move the mouse and click for you. Anything I can think of–I was transfixed by the magic of just being able to make the computer do what you want. So I had this huge folder full of all my little experiments.

Beyang Liu:

Cool. When did you get into C development?

Andrew Kelley:

Pretty late in my career, actually. So I think there was a big difference in what I was interested in and what I could get paid to do in my career, and as I got further along, I realized that some of the projects I wanted to pursue, I would need even more control over the computer. I couldn't use Python or Perl or JavaScript or something, because I really needed to reach into the operating system and use the low-level interfaces, maybe to detect a MIDI keyboard or... If you're trying to explore everything you can do with a computer, eventually you're going to have to reach into C. And so that's how that happened.

Beyang Liu:

Right. Because C is the closest to the operating system and the lower-level functions.

Stephen Gutekanst:

Did you jump straight to C from these higher-level languages, or was there a struggling, in-between period?

Andrew Kelley:

I definitely learned higher-level languages first, and then I think in college, I wanted to learn C because it seemed like this powerful tool that I could add to my toolbox. But I definitely came from the top down and had to learn what's really going on underneath all these abstractions.

Stephen Gutekanst:

Yeah. Got it.

Beyang Liu:

You were at OkCupid as an engineer for a while. What languages were you working on when you were there? Was it C or more higher-level application stuff?

Andrew Kelley:

That was a C++ codebase with a twist.

Beyang Liu:

Interesting.

Andrew Kelley:

They had their own preprocessor that ran before the C++ preprocessor. Oh yeah. It was called-

Beyang Liu:

Say more.

Andrew Kelley:

Yeah. Because C++ needs more features, right? It's known for not having enough features. Yeah, so it's called Tame. I forget why it was called Tame. So you take the Tame file and it adds new syntax to C++ to do asynchronous stuff, and then it would output C++ generated code and that's what you would compile.

Beyang Liu:

Wow. That's almost on brand for OkCupid, at least from what I know, and I know very little, but my impression is it's a dating application, but it was started by what appeared to be more like systems-y or language-nerdy programmers out of MIT. Is that...

Andrew Kelley:

Yeah, that's completely true, and not only that, but there's a public research paper you can go read called Ok Web Server, and the business is actually named after that paper, not the other way around.

Beyang Liu:

Got it. So, very analytical minds at this company.

Andrew Kelley:

Yeah. They definitely had a lot of fun with technology. I would say they even prioritized that more than the business sense, in the beginning, before the suits took over.

Beyang Liu:

Got it. Got it. Okay. Cool. So you were at OkCupid. Was it at OkCupid when you first started to think about Zig, or what was the origin story behind the Zig project?

Andrew Kelley:

I was actually already three years deep by then.

Beyang Liu:

Oh, so this starts way before that.

Andrew Kelley:

Oh yeah, yeah. Yeah, I would even describe, and I've said this in other places, but I think it's a nice analogy. I've described the Zig project as the flowers growing in the cracks of the cement in my career. So, I mean, I've always loved research projects and learning for the sake of learning. And I want to do programming for fun because I love it as a hobby. So my pattern in my career has always been: save up a bunch of money, quit, take time off to play with side projects, run out of money, get a job, save money... That's the pattern. So I started Zig actually six years ago now, and I actually had spent an entire year unemployed working full-time on Zig for fun before I went to OkCupid.

Beyang Liu:

Got it. So what got you working on Zig in the first place? Were you at another company at the time, or was it during one of your off-periods when you were doing more exploratory work? Can you trace it back to a particular inspiration point, or motivator, I guess?

Andrew Kelley:

Yeah, I sure can. I had taken a year off work and my goal was to not bounce around between a bunch of small projects, but to stick with one and see it through longer and get the experience of working on a project for a longer period of time. There's trade-offs with doing that, but I hadn't tried that before. I wanted that experience. But the project I was doing that with was a digital audio workstation. I still have that project, but it's obviously on ice at this point, but someday I'll pick it back up. But in the back of my head, I just kept having this little thought, "Oh, but this programming language, C++, it's not a good tool for this job. We need a better language to write this music studio with."

Beyang Liu:

And so you started working on Zig. What did V0 of Zig look like? Did you get to the point where it was functional and you started using it for this other project?

Andrew Kelley:

Oh, I think I did play with that a little bit, but it's not to that point yet. I don't think I would seriously use it for that project until we shipped the self-hosted compiler. I think that was the point when it was time to start trying it out for big projects.

Beyang Liu:

Got it. Got it. Cool. What's been the overall lifecycle of the project so far? So it started back in... What year did it start in?

Andrew Kelley:

Can I look it up real quick?

Beyang Liu:

Yeah, sure.

Andrew Kelley:

All right. Here's some keyboard typing for you. I'm just running git log and going to the bottom. All right. First commit is August 5th, 2015.

Beyang Liu:

August 5th, 2015. Okay. So you started then, and then somewhere in there, you ran out of money. You got a job at OkCupid. I assume development continued.

Andrew Kelley:

There was actually another company in between there, too. A job at Backtrace–another startup.

Beyang Liu:

Cool. Cool. And all this time, you're just hacking on Zig on the side, like nights and weekends, that sort of thing?

Andrew Kelley:

Yeah. Yeah. And that was pretty stressful. Having a full-time job is already a drain, and trying to work on a programming language on nights and weekends is taxing and it comes at the cost of health and ability to socialize and all these things.

Beyang Liu:

Yeah. I mean, that's a huge strain. One big commitment or project is enough, and these are two massive big projects. Do you have any lessons or words of advice for other folks out there who might be in a similar situation? Like, "Hey, I want to pursue this thing on the side, but I also don't want to give up the main thing I'm doing."

Andrew Kelley:

Yeah. Well, I don't regret it. It's everyone's job to figure out how to find their own meaning in their life. But in terms of trying to be healthier, I would say it might be easier than you think to try to get funding for your hobbies. I mean, I think that as programmers, we're pretty lucky for a few reasons. So, one is that we can acquire savings pretty fast. We can get compensated pretty well at this point. The mechanisms of capitalism haven't figured out how to exploit our labor yet. We're still on top.

But also, the work that we create–many people benefit, right? So a small number of developers can create software and then a lot of users can benefit. And that equation is like a big O, right? That equation is actually also the equation for making money. If you can do a little bit of work, but then a lot of people benefit and everyone just gives you a little bit of money, then you're actually golden.

And getting donations is not lucrative, right? A very small percent of people will give you money. But the point is that, because you can give software to so many people with only one person who needs to get paid or a small number of people who need to get paid, the numbers do work out okay for a lot of cases. So I guess my advice is just maybe consider that if you get creative, you might be able to make it work. You might be able to swing it without a job.

Stephen Gutekanst:

I was just going to ask, how long did you work on Zig as this side project, hobby project thing before it got to a point where you were like, "Oh, we're getting enough donations that this is something that I can switch over to full-time"?

Andrew Kelley:

Well, at first it was people donating to me directly. So, the Zig Software Foundation comes in at a much later point. At first it was Patreon only. They take 10% of your money, which is wild to me, but that was the best I could do at the time. But you can see the trends, and for me, it was going up with every month and it helped if I did blog posts, and when I did releases with detailed release notes–that helped too.

And so I was working at OkCupid and I was crunching the numbers and I realized, okay, I have this much savings. I have this many donations per month, and it's going up at this rate–as a conservative guess. And then I figured out, "Oh, wow. If I quit now, my savings go down, but before they hit zero, they start going back up." And that was, I think, two years ago. I have a blog post that has a date on it that I can check to find out the exact time.

Stephen Gutekanst:

So it was a pretty gradual thing over time. It wasn't necessarily something that was sudden.

Andrew Kelley:

Yeah. As far as monetary growth of donations and just in general, participation in the project in any form–pull requests, issues, money–it's all been very gradual. There was never a point where it exploded.

Beyang Liu:

Can you talk about the technical evolution of the language from its inception in 2015 up to the point where you started working full-time on it? How did it evolve in that time period?

Andrew Kelley:

Yeah, that's a good question. What are some examples of the technical structure that you're asking about?

Beyang Liu:

I imagine there's a lot of cool language features now, like compile-time code execution, memory allocation, async/await. All these things, I assume that they didn't get added at once? Or maybe they did. Was there an incremental exploration process? How did the spec of the language as it stands today come into focus over time?

Andrew Kelley:

Yeah. That makes sense, what you're asking. Yeah, it was definitely incremental and I've been developing it open source since the very beginning. It was very rare that there was a big un-merged branch. I like to get the code just in use as soon as possible so that you don't have rotting code sitting around.

But anyway, to address the question more directly, the vision that I've had for the project has been very consistent since the beginning, but my attention, motivation, and energy has bounced around across all the different aspects of it. I mean, I'm pretty happy with that actually, because I think that managing your motivation and energy levels is a meta skill that is completely core to what we do to have a successful project. But I also realize that it actually helps with contributors. I kind of think of it like creating a skeleton where you can reveal your vision by doing a proof of concept. If people like the vision then they want to contribute–they see the skeleton and they say, "Oh, I see. I see what we're doing here. I can fill in some of this muscle and do the rest of this work."

I think bouncing around wildly between different aspects of what I wanted to accomplish has actually been a valuable asset. It both helps my motivation to stay fresh and then also just helps people to see the vision and see what the goal is and help fill in the pieces. But there's been a lot of things that have been reworked. So almost no language design decision is in its first state. All my intuitions were wrong. A lot of my implementations were wrong. Everything was wrong. The whole project has been an exercise in humility and paying attention to what's not working and reconsidering how something should be, and then just trying something else.

Stephen Gutekanst:

How much of that was like an iterative process where you really had to let go and experiment on something and try something before you would even be able to see what the right solution there looked like versus being able to design it up front and really reason about it upfront, that sort of thing?

Andrew Kelley:

I think nearly everything was iterative. It's all just been guess-and-check. Like, I think that this language feature will be a good idea, let's try it. Then, I did it, and then I observed how it affected my experience with the language and other people's experiences with the language. It was pretty fast-paced, pretty chaotic, pretty disruptive to trying to actually use the language. But it really let us explore some ambitious ideas. With some experiments, if you take a more measured approach, you wouldn't really get the actual feedback of trying it.

I would say that one feature that had to be a bit more designed was probably the async function feature. The way that Zig has async/await. So, the first thing I tried to do for that one was base it on LLVM's coroutine support. That helped me. It's the same thing. I had the idea, I shipped it, we tried it, and there were just so many problems with it. But doing the first version that didn't work very well helped me understand the problem domain. And so, when I went back in for a design iteration, I was much more–like my second guess was much more on point because we had tried the other thing and at least I understood why it wasn't a good idea.

Beyang Liu:

Very cool. Maybe it'd be good to jump into the language features and what makes Zig unique and how it compares to C, just so folks get a sense of how the language feels and what the developer experience is like.

Andrew Kelley:

I would say that if you are familiar with C, you already know Zig. You're going to breeze through the language reference and map the concepts one-to-one with each other. The differences that you'll find are kind of, I don't know, philosophical differences. Just differences in the details of how certain problems are solved specifically with conditional compilation. The one thing that C programmers are going to run into is they're going to think, "Oh, I want a preprocessor here." And they're going to find that it doesn't exist. There's a different tool to solve this problem and it's running code at compile time.

Beyang Liu:

Talk a bit more about that. What I recall from my C programming days, a lot of #defines and #ifdefs, and that sort of thing. That's all the preprocessor. If I recall, that's mostly for cross-platform builds–if you're building for this OS or that OS, you build different code. How is conditional compilation different from that?

Andrew Kelley:

Well, if you don't mind, I'll get a little Socratic method on you there. Why use #define and not a constant?

Beyang Liu:

Why use #define and not a constant? I think the short answer is that that's just what every other piece of code in the codebase I was contributing to did. I don't think I really know.

Andrew Kelley:

But why though?

Beyang Liu:

I don't really think I thought about it too hard. I guess, maybe originally it was an efficiency thing? Because the #define, it gets stripped away and the constant just gets embedded directly in the binary. Is that the right–is that the reason? I don't actually know.

Andrew Kelley:

That would happen for just constants as well so that wouldn't be it. I'll tell you the answer. The answer is because if you try to use a constant in a place that you'd expect to be able to use it, for example, just the length of an array, it won't work. It will give you a compile error.

Beyang Liu:

Interesting. Oh, because C arrays have to be, what's the phrase, they're statically sized–or the length has to be known at compile time–and the compiler's not smart enough to draw the inference.

Andrew Kelley:

Right, exactly. But it's a constant, right, so don't you think you should know?

Beyang Liu:

Yeah, why does it do that?

Andrew Kelley:

In Zig, we just fixed that so it works. That's it. That's the difference.
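Editor's note: a minimal sketch of the point above. The identifier names are made up for the example, and the syntax reflects Zig at roughly the time of this conversation; details may differ slightly in current versions. An ordinary constant can be used directly as an array length, which is one of the places a C programmer would otherwise reach for #define.

```zig
const std = @import("std");

const max_items = 4; // an ordinary constant; no #define needed

var buffer: [max_items]u8 = undefined; // usable directly as an array length

pub fn main() void {
    std.debug.print("buffer has {d} bytes\n", .{buffer.len});
}
```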

Beyang Liu:

Got it.

Andrew Kelley:

But then, the argument that I make is that if you follow this reasoning right and we take it a little further, I can ask more questions like "Why this?" The answer is always something like, "Well, because if you don't, then something stupid like that happens." If you just fix all these nonsense things, you don't need the preprocessor.

Beyang Liu:

So a lot of it is just like fixing paper cuts and footguns in C?

Andrew Kelley:

Exactly.

Beyang Liu:

Got it.

Andrew Kelley:

So specifically, you mentioned conditional compilation: in C you might put #ifdef windows, here's the Windows code; else, here's the POSIX code or something. In Zig, that's just an if statement. If Windows, then Windows code; else, POSIX code. The key distinction is that Zig cares whether the condition of an if statement is compile-time-known, and in this case it will not evaluate the dead branch. So you can put compile errors in the dead branch of an if statement, and they won't be revealed, because the branch is dead. That's how conditional compilation works.
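Editor's note: a rough sketch of what this looks like in practice. The field names here reflect Zig at roughly the time of this conversation and may have shifted since; the condition is known at compile time, so the branch that doesn't apply is never analyzed.

```zig
const std = @import("std");
const builtin = @import("builtin");

pub fn main() void {
    // The condition is compile-time-known, so the dead branch is never analyzed;
    // it could reference platform-only declarations without breaking other targets.
    if (builtin.os.tag == .windows) {
        std.debug.print("Windows code path\n", .{});
    } else {
        std.debug.print("POSIX code path\n", .{});
    }
}
```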

Beyang Liu:

Got it.

Andrew Kelley:

That's it. Those were the two reasons you needed the preprocessor in C.

Beyang Liu:

Then, I guess in C++, the preprocessor does a bit more. They have the whole generics thing implemented through the preprocessor. But Zig has its own templates. Sorry, I get a little bit loose with the terminology there, but yeah, C++ templates in lieu of generics, and my understanding is Zig has generics built in, so there goes another reason.

Andrew Kelley:

Yeah, it does, but it's kind of the same concept. We already had this concept of if an expression is compile-time-known, and so all I had to do was just add the ability to make that true for a parameter. Then, that's it. That's templates.

Beyang Liu:

Say that again.

Andrew Kelley:

You just have a function where one or more of the parameters are declared that they have to be compile-time-known.

Beyang Liu:

I see.

Andrew Kelley:

And that's it. You just have a function where you know one of the parameters at compile time. That parameter could be, like, a type, for example, or a capacity, an i32, or something like that.

Beyang Liu:

I see.

Andrew Kelley:

So, generics in Zig are just parameters that you know the value of at compile-time.
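Editor's note: a minimal sketch of what Andrew is describing, with made-up names and assuming recent Zig syntax. A function that takes a comptime type parameter and returns a type is all there is to Zig generics.

```zig
const std = @import("std");

// A function with a compile-time-known parameter, returning a type: that's a generic.
fn Pair(comptime T: type) type {
    return struct {
        first: T,
        second: T,
    };
}

test "Pair works for any type known at compile time" {
    const p = Pair(u32){ .first = 1, .second = 2 };
    std.debug.assert(p.first + p.second == 3);
}
```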

Beyang Liu:

Cool. That's pretty elegant.

Andrew Kelley:

That's what I'm saying. I feel like a lot of the language design is just like, well, let's fix the warts of C–and then generics fell into my lap.

Beyang Liu:

Stephen, when you came to Zig, had you done a lot of C and C++ prior to that?

Stephen Gutekanst:

C++, definitely not. I've stayed far, far away from that. C, I've worked a lot with C. I mean, for me, the really elegant thing here is that starting out and trying Zig, I was able to get to a position where I was productive really quickly. That hasn't been true for a lot of languages that I've played around with. I think that the compile-time stuff, when I ran across it, I was like, "Oh, that's kind of interesting. I don't know how I feel about that yet." Then, kind of stumbling across like, "Oh, here's this really natural extension to what you've already learned. Now, if you want to make a generic data structure, you write this in the exact same way that you were running some Zig code just a minute ago." That feels like a very natural learning curve from my perspective.

Beyang Liu:

So it was super easy to get up and running. How would you compare the new developer experience for Zig to other languages? I guess the two things that come to mind for me are Go and Rust, because they're both in this proximity to C, C++. Maybe some people might take issue with that characterization, but how would you compare the onboarding experience to those two languages?

Stephen Gutekanst:

For me, and I have a background in Go—I've worked in Go since 1.0 was released, at least—it's very similar. I think from my standpoint, it was very easy to quickly become productive in a similar way that I think most people can pick up Go in a week or something like that, and start to make changes and get an idea for what that looks like.

I think the difference for me where it kind of differs from a Go onboarding experience is that once you've reached the end of that, it's like, "Oh, well, I'd really like to be able to express this." I think, with comp-time stuff, that becomes a very natural extension of exactly what you've already learned, because it's like, "Oh, okay, I just need to pass in this data type here and then I get a struct that has this data type as a field or something." That sort of flow is very natural; whereas, I think, in a lot of other languages that's kind of stacked on top and it's a very distinct system. I imagine people starting out in C could probably get an idea of basic compilation there and stuff and then find, "Oh, how do I really start to use this whole preprocessor system?" I imagine the same thing's true of C++ but I don't know for sure.

Beyang Liu:

It's interesting that generics just kind of grew out of conditional compilation. It sounds like they fell out of that. Contrast that to the experience in the Go community, where that community's been discussing generics, it feels like, for the better part of the last five years or so. So, that's awesome. Were there any other elegant things that emerged from core starting principles?

Andrew Kelley:

I think generics is the best example of this, but I feel like some things have been this pattern to a lesser extent. Well, maybe another example would be, it's more of a convention thing, but in Zig we don't have a global allocator. If you want memory, you have to pass an allocator parameter around.

Beyang Liu:

Interesting.

Andrew Kelley:

And there's a few reasons to do this, but touching on the part where something else falls into your lap: this, combined with the fact that anything you don't use is not analyzed–Zig is lazy with respect to top-level declarations–means that you can use the standard library everywhere: in kernels, in embedded development. You just swap out the allocator. It gives us the ability to make Zig so much more versatile across all these different use cases.
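Editor's note: a small illustration of the convention Andrew describes, using the post-0.9 allocator interface (earlier Zig versions passed *std.mem.Allocator pointers instead). The helper function is made up for the example; the point is that memory comes from whichever allocator the caller supplies.

```zig
const std = @import("std");

// Hypothetical helper: it allocates, so it takes an allocator explicitly.
// The same code works under malloc, a kernel allocator, an arena, and so on.
fn joinWithComma(allocator: std.mem.Allocator, a: []const u8, b: []const u8) ![]u8 {
    return std.fmt.allocPrint(allocator, "{s},{s}", .{ a, b });
}

test "the caller chooses the allocator" {
    const s = try joinWithComma(std.testing.allocator, "foo", "bar");
    defer std.testing.allocator.free(s);
    std.debug.assert(std.mem.eql(u8, s, "foo,bar"));
}
```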

Beyang Liu:

Let me translate that to new brain for a moment. What you're saying is, in C, it'd be hard to take a dependency on the standard library in all these different situations, because you'd be pulling in the entire standard library. In some cases, it just wouldn't run properly on some of these platforms.

Andrew Kelley:

Yeah, exactly. For example, if you just naively took C and tried to make an operating system kernel, you'd probably try to include string.h just because you want to move some memory around or something. Not found. You don't get it because you're making an operating system. But why, though? You can manipulate strings in memory, in the kernel. That works fine. So, why can't you import string.h?

Beyang Liu:

Yeah, why can't you? I feel like you're taking me back to sophomore year of college right now. We're just sitting in the lab and you're like, "Oh, how hard could this C thing be?" You just include files if you want to import functions. Okay, string.h, that seems like a thing that should be importable. Then, you get an obscure compiler error.

Andrew Kelley:

To a large extent, the Zig project for me was kind of peeling off these abstraction layers that we've all taken for granted and examining their foundational premises and saying, "Is that really how it has to work?" Let's question some of this stuff. Maybe it can be better.

Beyang Liu:

You also have this rule of like, there's no hidden control flow, no hidden memory allocations, no preprocessor, of course, no macros. How has that played out for you? Has that caused any sort of tensions along the way? These things are presumably added to C and other languages because someone found use for them. You've taken them out. How's that going?

Andrew Kelley:

I think that philosophy has been a hard-line stance since the very beginning. I mean, that philosophy defines the vision of Zig. It's the Harry Potter sorting hat of people who visit the Zig homepage. If you read this and you think, "Oh, good," then you could become a Zig programmer. If you read that and you think, "That doesn't make sense, I want those things," then you don't become a Zig programmer. You know what I mean?

Beyang Liu:

Yeah, that makes sense. I've written a lot of Go. It kind of reflects a certain ethos in the Go community, too, which is explicit over implicit–don't do anything magical.

Andrew Kelley:

Same energy with Go, definitely.

Beyang Liu:

Actually, can we revisit the memory allocation question real quick? I just want to make sure I understand and listeners understand. You're saying you're passing in an allocator. So there's no global malloc() function that everyone uses–you pass in an allocator, so you can actually... Presumably these different allocators have different memory-management algorithms in place. Is that the idea?

Andrew Kelley:

Yeah, that's the idea. I didn't even think about all the benefits that this would bring at first, but we've been stumbling on more and more benefits over time. The example I was giving earlier, I'll be more specific: let's say that you have a HashMap data structure. It's a great implementation. Actually, we have an array HashMap which keeps the order of keys. We have a–I think it's called Swiss Tables or something–implementation, if you don't need key order. It's really efficient. I'm really happy with the API. Normally, in C or something, if you had such a data structure, it's going to call malloc(). If you want to use this library inside your kernel or your Arduino or, I don't know, WebAssembly or something, you don't get it, because you can't call malloc() in these cases. Or you can, but you have to add a shim. You have to add this weird glue code.

With Zig, you actually just give the allocator what you want. So, on a desktop, maybe that is malloc(), that's fine. You can pass that in. But if you're writing a kernel, maybe you have a kernel allocator that you're going to use and you don't have to rewrite your HashMap data structure. You can use the one from the standard library.
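Editor's note: a sketch of the reuse Andrew describes, again assuming the post-0.9 allocator interface. The same std.AutoHashMap type is backed first by the testing allocator and then by a fixed buffer, which needs no libc and no heap at all.

```zig
const std = @import("std");

test "same HashMap type, different allocators" {
    // Desktop-style: a heap-backed allocator (this could just as well wrap malloc).
    var map = std.AutoHashMap(u32, u32).init(std.testing.allocator);
    defer map.deinit();
    try map.put(1, 100);

    // Freestanding-style: a fixed buffer; no libc, no heap, suitable for constrained targets.
    var storage: [4096]u8 = undefined;
    var fba = std.heap.FixedBufferAllocator.init(&storage);
    var map2 = std.AutoHashMap(u32, u32).init(fba.allocator());
    defer map2.deinit();
    try map2.put(2, 200);
}
```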

There's other really nice use cases for this, though. As an example, we do have a global allocator that you are encouraged to use for all your unit tests. This one will fail the unit test if there's a memory leak.

Beyang Liu:

That's brilliant.

Andrew Kelley:

By convention, when you just run zig test, you find all the memory leaks immediately before you even run your code-

Beyang Liu:

Wow.

Andrew Kelley:

... as the main application.
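Editor's note: a minimal example of the convention Andrew mentions, assuming current std.testing naming. std.testing.allocator makes `zig test` fail with a report if anything allocated in the test is not freed.

```zig
const std = @import("std");

test "std.testing.allocator reports leaks" {
    const buf = try std.testing.allocator.alloc(u8, 32);
    // Remove this `defer` and `zig test` fails with a leak report
    // instead of silently leaking the allocation.
    defer std.testing.allocator.free(buf);
    buf[0] = 1;
}
```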

Beyang Liu:

Now that you're saying it, it just seems so obvious.

Andrew Kelley:

Right.

Beyang Liu:

Why would you have just one global malloc() function that everyone is forced to use? But I always took it as an assumption, "Oh, malloc() is special."

Andrew Kelley:

And that's like I was saying, right? The big mission objective of the project is, "All right. Let's peel these layers off. Let's examine their premises and let's put them back together. Maybe we'll make some tweaks, maybe we'll change it. Maybe we'll actually completely redo some of these layers."

Beyang Liu:

You were saying earlier that the way you went about this is to just take a look at the paper cuts of C, to take a look at the footguns, and address them one by one and come up with better and more elegant solutions. But now you're getting to things like passing in malloc(), which, I guess, you could argue is addressing that particular paper cut, but that almost feels less like fixing a paper cut and more like you've taken a bigger step–that's a bigger thing.

I don't know if I'm getting across what I'm trying to communicate. But are there these more overarching principles or these big design principles that you've extracted over time from this, let's examine the paper cuts and fix them, mentality?

Andrew Kelley:

Yeah, I think so. We have this thing–inspired by the Zen of Python, I created the Zig Zen–and I think that maybe that could speak to-

Beyang Liu:

Okay, cool.

Andrew Kelley:

... to this question.

Beyang Liu:

Yeah.

Andrew Kelley:

I'll give you one. My favorite one, actually, from the Zen of Zig. It's: avoid local maximums. That's the Zen principle. And the analogy is with math: you have a graph, you have a global maximum, which is the best place to be.

Beyang Liu:

Yup.

Andrew Kelley:

But over here you have a local maximum, which seems like the best place to be, if you get stuck. So you might be here and you might actually have to go downhill to chase the global maximum, but that's what you have to do. That's our mission, right. We're not satisfied with just close enough. We really are willing to break things in order to put them back together, in a better way.

Beyang Liu:

Yeah. And is that applied to the evolution of the design of the language or to programs that are implemented in the language?

Andrew Kelley:

I think it's guidance for both. In terms of design of the actual language. It's what I was saying earlier. It's a willingness to say, "Okay. We made this language feature, but there's a problem with it. We're going to break everyone's code. We're going to do version two. We're going to try again. We'll still have time before we tag 1.0. We're going to use this time to continually reevaluate our assumptions."

Stephen Gutekanst:

Just curious. How has the standard library evolved over time? As somebody who's dived in there a bit, obviously a lot of inspiration has come from, I think, the Go standard library, just based on the way that the I/O packages and stuff are structured. But in other ways, certain things have been omitted, let's say HTTP or something like that. I'm curious, what are the philosophical reasonings behind the way that the standard library ended up how it is, and what do you see the future of that being?

Andrew Kelley:

I think that right now, the status quo is that some parts of it are solid and some parts of it are a complete mess. The main purpose in the beginning, and still for the most part today, is that the standard library is a test bed for the language, primarily. Only after the language is stabilized will it be time, I think, to go in and take a hard, critical look at the standard library and say, "What should a programming language standard library look like?"

And I haven't done that auditing yet. It really is just what did I need while I was working on whatever I was working on? What did other contributors need whenever they were working on whatever they were working on? Who bothered to send a pull request upstream?

Stephen Gutekanst:

Sure.

Andrew Kelley:

It's honestly a mess, from a curation perspective. A lot of the code is actually really high quality. When I say a mess, I mean it's just in terms of what's there and what's not there.

Stephen Gutekanst:

Got it. Okay.

Andrew Kelley:

Yeah. Specifically for web servers–HTTP clients or servers–whether that makes it in or not is yet to be determined. Probably, I'm guessing, whatever we need for the package manager will be in the standard library. That's probably going to be a consideration. And that probably means at least a client. But if you have a client, you probably need to test it with a server. That might mean a server. We'll see.

Beyang Liu:

Is there a set of target programs that you have in the back of your mind when implementing Zig? I think for a lot of new languages, the compiler–the self-hosted compiler–is the original target. For Go, I think one of the reasons the standard library has so much is that the creators were all at Google and they wanted it for applications, a somewhat systems level of programming. For Zig, what is that target, if there is one?

Andrew Kelley:

Yeah. I agree with you that you can definitely see the intended use case coming strongly through in the Go language. It's almost admittedly made for making servers. For Zig, our tagline is a general-purpose programming language, and it's a very intentional and ambitious statement. Because it's saying, "No. Really you can use this language for anything." And that's a large scope. That's a big claim to make. But we're serious about this claim. If you give me an example of any problem domain, I can show you the way that that use case is intended to be handled.

Beyang Liu:

I see.

Andrew Kelley:

And it really is intended. Pick any use case, it's intended that Zig will provide a way for you to optimally exploit your hardware, to support that use case. In a way that is reusable, that you can use the same code for–maybe it's a library. You can use the same code on a different use case.

Beyang Liu:

Got it.

Andrew Kelley:

We were talking about that earlier. Where you could reuse a HashMap, for example, in a kernel desktop application, and it's the same deal. As another example, Go is really good for web servers and they use a... What was it called, a green thread system, where it's M:N threading, and if you start a Go routine, it's doing evented I/O.

Beyang Liu:

Yup.

Andrew Kelley:

Zig actually has a way to do that. It's still experimental, but it's completely supported. Combined with the async/await features of the language and a standard library implementation of an event loop, you can write code that looks a lot like Go code, in the sense that where you'd write a goroutine–like go, call function–in Zig, you'd write async. And semantically, it is doing the same thing, with a different memory layout.

The stack is arranged differently. But semantically you have an event loop and you have M:N threading–it's the same paradigm.

Beyang Liu:

Yeah.

Andrew Kelley:

But the difference here is that you can have a library in Zig that uses async/await and it doesn't rely on the event loop, though. Because you can have the same code compiled with evented mode on, or evented mode off. And it generates the ideal code in either case.
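Editor's note: a very rough sketch of the async syntax as it existed around the time of this conversation. Zig's async/await support was explicitly experimental then and has been reworked since, so treat this as illustrative only; the function and test names are made up.

```zig
const std = @import("std");

fn add(a: i32, b: i32) i32 {
    // In a real program this might suspend on evented I/O; compiled in the
    // default blocking mode it simply runs to completion.
    return a + b;
}

test "calling a function async-style" {
    var frame = async add(1, 2);          // start the call, loosely like `go f()` in Go
    const result = nosuspend await frame; // safe here because add() never suspends
    std.debug.assert(result == 3);
}
```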

Beyang Liu:

Got it. In that sense, if I'm understanding correctly, it does seem like a true successor to C then, I would say, unlike Go, for example, which was, as you said, I think obviously targeting web server applications, or Rust, which arguably is targeting Firefox or the next version of a web browser. Zig is closer to C in the sense that, when I think of C, I think of, "Oh. They needed a programming language that served as an interface to a Unix-based operating system." You had C and shell as the... I guess, in those days, high-level interfaces to the lower-level stuff. Zig is just... You're building this on top of the OS primitives, and you want to make the full power and functionality of that lower level available to the user or the developer in a more accessible way. Would you agree with that statement?

Stephen Gutekanst:

I think so. I think you might be trying to make another point that I'm not quite understanding. But that all sounds like it checks out to me.

Beyang Liu:

I was just trying to clarify my understanding. And, really, place it relative to those other languages too, maybe. And maybe that's a good segue. Because I bet a lot of people listening to this are thinking, "Oh. When I think of modern successors to C, like systems-y low-level languages, Rust gets mentioned a lot."

Beyang Liu:

Stephen, I know you played around in Rust quite a bit. More than played around. Wrote some key services in Rust.

Stephen Gutekanst:

Sure.

Beyang Liu:

Maybe we can talk a little bit about the distinctions between Zig and Rust. I'd be curious to hear Andrew, from your perspective, what are the distinctions and then maybe from Stephen, as a developer using them, what are the differences that you've seen? But yeah, let's start with Andrew.

Andrew Kelley:

Okay. Before I give you a direct answer. I want to touch back on what you were just talking about a moment ago. Here's a way I can maybe categorize some of these things. If you're writing C code, you're writing in the most straightforward way, you're writing blocking... Let's say you're writing a desktop application C code. You're going to make OS calls, you're writing blocking imperative code. And that's a well-understood concept.

Beyang Liu:

Yup.

Andrew Kelley:

If you're writing Go code, you're doing an event loop, always. You're not doing the thing that you're doing in C. It would be very difficult to have a Go library that you call from C. Because Go depends on a hefty run time to do all the event loop stuff. So those are distinct. Rust supports both but there's modes. They're different codebases. In Rust, you can write the kind of C code where you have blocking imperative code and you make OS calls. Or you can do async stuff with Rust where you depend on Tokio, or maybe there's another one. It's very pluggable.

But then you're getting the Go thing, where it's the event-based thing. But it's a different codebase. The thing that's interesting about Zig is that, like Rust, Zig supports both of these use cases, but with the same codebase. You can have a library that both can be compiled for the Go use case and for the C use case. We call it colorblind async functions.

Beyang Liu:

Got it. That's pretty cool.

Andrew Kelley:

I think that does segue into, what is the difference between Zig and Rust? Because they're very natural to try and compare. What's the niche?

And I think there's two axes that you would perhaps pick up on. One would be the complexity of the language. And the other one would be the strategy for safety.

Going back to the complexity of the language, I think it's fair to say that as a language, Rust embraces features. There's more than one way to accomplish some problems. Lifetimes interact with traits and that all interact with–I can't even remember. But there's just a whole bunch of language features, and they're all very handy. There's language features that you can use for all sorts of different patterns. But it does give you overhead. Becoming productive in Rust can take a long time because you have to learn all these things.

And if you're reading someone else's code and they use a feature that you don't know, you haven't learned yet, now you have to pause, go learn that feature, come back to the code. And that's fine, that's a strategy of... Like I said, when you read the "no implicit control flow" part on the Zig website, that's where you fork. Do you choose Zig? Or do you say "No. I want more features in my language."

Beyang Liu:

Yeah.

Andrew Kelley:

Yeah. Whereas in Zig, the goal is, there's only one way to accomplish anything. And there's a very small number of language features. You can learn all of them. And then, whenever you're trying to read someone else's code, you never have to take a break to go learn the language. You can always just keep trying to figure out what the code does.

Beyang Liu:

Yeah.

Andrew Kelley:

That's the complexity difference. And I'm obviously biased for which one I prefer.

Beyang Liu:

Yup.

Andrew Kelley:

That's fine. And then for the safety one. I like to explain it as vertical safety or horizontal safety. Rust has unsafe blocks and you cordon off entire sections of your code. You put a big nuclear warning sign: "Careful. Nothing sacred is done here. Here be dragons!" And you just try not to touch that code. And then everything on top of it: "Yeah. Well, this part's fine." In Zig it's different because there's no unsafe keyword. You can't say, "Oh. This is the code we don't trust." On any line of Zig, you could decide to cast an integer to a pointer and then load a value from it. And that is very unsafe.

Beyang Liu:

Yeah.

Andrew Kelley:

But the thing is, it's actually a common misconception. People think that Zig doesn't care about safety. But there's so much work in the language that goes into safety. It's just everywhere. We do pointer alignment checks–if you do an alignment cast, that's safety-checked. Pointer alignment is safe in Zig.

Beyang Liu:

Yeah.

Andrew Kelley:

If you do integer overflow and you didn't use the wrapping operator, there's a safety check there. If you go out of bounds on an array, there's a safety check there. There are some ways that you can escape–you can do something that is not detected in Zig. For example, if you have a stack variable and you take a pointer to it and that pointer outlives the function call–that's not detected.

Now, we still have a lot of plans. The language is not at 1.0 yet. We still have a lot of plans to add more safety checks. And that is one that I plan to address.

Beyang Liu:

Yeah.

Andrew Kelley:

Right now, it's not addressed. But my point is that safety is there. It's just strewn around in different places and it's much more granular. For me, again, I'm biased. But what I like about it is that you can work on a project where you can't make everything safe. For example, if I'm working on an embedded device and I need to do memory I/O, where I'm writing or reading from registers or something. I'm still getting protected. All the safety features are still activated for me. Even though I'm doing unsafe things, it's not black and white. Zig makes unsafe code safer. Whereas in Rust, I would have to use unsafe and then it's not granular. I've just opted into the dragons, all the way. I think that's how I would describe it as vertical and horizontal safety strategies.

Editor's note: the characterization of unsafe in Rust here is fairly high level and we received feedback that it wasn't precisely accurate. See the official Rust docs for a detailed description of the behavior of unsafe. Thank you to Josh Simmons for providing this feedback and to Andrew Kelley for approving of this note for clarity.
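Editor's note: a sketch of the "granular safety" idea under discussion, using builtin names from roughly that era (@intToPtr has since been renamed in newer Zig versions). The function name and register address are made up; the point is that the unchecked operation needs no special keyword, while ordinary operations stay checked in safe build modes.

```zig
// No `unsafe` keyword: the integer-to-pointer cast is allowed on any line,
// and it is on the programmer to know the (made-up) address is valid.
fn pokeRegisterThenIndex(xs: []const u8, i: usize) u8 {
    const reg = @intToPtr(*volatile u32, 0x40000000);
    reg.* = 1;

    // Meanwhile, ordinary operations stay checked in Debug/ReleaseSafe builds:
    // an out-of-bounds index or an overflowing +1 panics instead of corrupting memory.
    return xs[i] + 1;
}
```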

Beyang Liu:

Yeah. It almost feels like an incremental approach. Like TypeScript took with types in JavaScript at a high level, if you squint.

Andrew Kelley:

Yes. And you know what, I'll even give it a downside. The downside is that it does require some discipline, as a programmer. And as we all know, programmers are not disciplined. You can make abstractions that just have footguns in them in Zig, and there you go–you're going to shoot yourself in the foot all the time, because you put that into the abstraction. But the language does give you the tools to make safe abstractions. And so if you do that, then your codebase will be safe.

Beyang Liu:

Stephen, do you have thoughts here on developer experience: Rust and Zig?

Stephen Gutekanst:

Sure. I think that for me, the really key thing is that onboarding experience, that learning curve. In my experience coming obviously from... Going from Go, for many years, to Zig, I think is not a huge leap. Maybe somebody from another language background would have a different experience. But the level of simplicity in that learning curve for me was really great. And I feel like I actually know most of Zig and I've only been working with it for probably a few months, but I have worked with Rust code on and off for a period of 2–3 years. And every time I jump into another codebase, I'm like, oh, here's a whole bunch of macros. This is kind of another concept and another language that I'm not familiar with. And so diving into a foreign code like that in my experience has been really challenging.

So for me, it really just boils down to that learning curve to be able to feel really productive and feel really confident about actually diving into other code. That's what I like.

Andrew Kelley:

That makes me really happy to hear because that's something that we advertise on the homepage, which is: focus on debugging your application, not your programming language knowledge.

Stephen Gutekanst:

Yeah. Yeah. From my standpoint, it shows through. I think it's-

Andrew Kelley:

Nice.

Stephen Gutekanst:

It's a pleasant experience.

Beyang Liu:

So having touched upon developer experience and developer productivity a bit, what about the other side of things that people always ask about, which is the performance characteristics of the actual programs? So, it's my understanding that Zig has recently moved to being self-hosted, meaning the compiler is written in the language itself. Previously, the compiler was written in C, I'm guessing.

Andrew Kelley:

C++.

Beyang Liu:

C++.

Andrew Kelley:

But in a C style. We just have to do C++ because that's the LLVM API.

Beyang Liu:

Okay. And so the key fact here is that it got a lot faster going from C++ to Zig. So can you talk a bit about that and do Zig programs typically outperform C programs?

Andrew Kelley:

Yeah, I claim that Zig is faster than C and I stand by that assertion. I can give micro-benchmarks of examples of messing around with integers and you can see why the code that's generated is better, but I would also make the argument that, based on the principles of the language, the conventions that we have, the organization of the standard library, there's also the result that, in practice, Zig applications tend to be fast. And I would say faster than C and I would even say faster than Rust.

Beyang Liu:

Wow. Those are bold claims.

Andrew Kelley:

Yeah.

Beyang Liu:

Why is that? Why are they faster? Like in theory, C is the lowest-level thing. How does Zig achieve better performance?

Andrew Kelley:

I have different answers for C and for Rust.

Beyang Liu:

Okay.

Andrew Kelley:

So for C, I'll give you one really small, simple example. So in C, there's this weird thing where, if you use a signed integer, overflow is undefined behavior, but if you use an unsigned integer–this is in C–addition overflow is wrapping.

And in Zig, it's consistent. So in Zig, both signed and unsigned overflow is undefined behavior, or rather, it's illegal behavior. And if you compile in a mode that is allowed to exploit illegal behavior just by assuming that it does not happen, then that would become undefined behavior. And then if you want wrapping integer operations, you have to use an explicit operation for that and then you get what you want.

Beyang Liu:

Got it.

Andrew Kelley:

But my point though is that I can show you a little Godbolt example. So C just doesn't have an addition operation on an unsigned integer that assumes it doesn't overflow. And the fact that in Zig, the optimizer can assume that will not happen generates better code.

So that's like a micro... That's just an example of the kinds of language decisions that went into Zig that backs up my claim.
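Editor's note: a small sketch of the integer-overflow point, with made-up function names. Plain + treats overflow as illegal behavior that the optimizer may assume away; +% is the explicit wrapping operator, which is what C's unsigned addition silently gives you.

```zig
const std = @import("std");

fn addAssumeNoOverflow(a: u32, b: u32) u32 {
    return a + b; // overflow here is illegal behavior; the optimizer may assume it cannot happen
}

fn addWrapping(a: u32, b: u32) u32 {
    return a +% b; // explicit two's-complement wraparound
}

test "wrapping is opt-in" {
    std.debug.assert(addAssumeNoOverflow(1, 2) == 3);
    std.debug.assert(addWrapping(std.math.maxInt(u32), 1) == 0);
}
```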

Beyang Liu:

Got it.

Andrew Kelley:

Yeah. But then also, the thing that actually matters more is just the fact that having to pass around allocators makes you really conscious about how you're using memory and what strategies you're using to manage memory and it makes convenient some patterns that are more efficient. So you could do all this stuff in C, you could do all this stuff in Rust, but people don't, but they do in Zig because that's the most convenient way to do it.

Stephen Gutekanst:

Yeah. I was going to add: being able to swap out allocator implementations and say, "Oh, hey, maybe I'm comfortable with this data structure using a bit more memory," and passing in that allocator, or something like that–being able to experiment with different allocators really easily, in a way that's not linking in a different global allocator, and being able to do that on a per-data-structure level, is pretty cool. I think that leaves a lot of room for optimizing memory allocation at a data structure level.

I was working with Ziglyph last night, which is the Unicode package for Zig, and I saw that the author of that had swapped out the general purpose allocator with an arena allocator and got something like a two-times speed-up on all of their benchmarks. So I think being able to play around with different allocators on a per-data-structure basis is a pretty cool thing that I haven't really seen in practice anywhere else.
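Editor's note: a sketch of the kind of swap Stephen describes, not the actual Ziglyph change, and again assuming the post-0.9 allocator interface. Because the data structure takes an allocator at init, trying an arena is a one-line change.

```zig
const std = @import("std");

test "swap the allocator, keep the data structure" {
    // An arena frees everything at once on deinit; for build-up-then-discard
    // workloads that is often much cheaper than per-allocation bookkeeping.
    var arena = std.heap.ArenaAllocator.init(std.testing.allocator);
    defer arena.deinit();

    var list = std.ArrayList(u32).init(arena.allocator()); // only this line changes
    var i: u32 = 0;
    while (i < 1000) : (i += 1) {
        try list.append(i);
    }
    std.debug.assert(list.items.len == 1000);
}
```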

Beyang Liu:

That is really cool. Okay, then... Yeah, go ahead.

Andrew Kelley:

Go ahead.

Beyang Liu:

I was going to say now do Rust, but what were you going to say?

Andrew Kelley:

About the same thing. I was going to say, let me do Rust now. So for that one, I'll circle back to the other example you brought up, which was self-hosting the Zig compiler and the speed-ups that it brought. So to be clear, the speed-ups that it brought are almost entirely from the fact that now I know what I'm doing, right? And it's the second version of something. You're always going to do better on the second version of something.

Beyang Liu:

Yeah.

Andrew Kelley:

That's where those speed-ups come from.

Beyang Liu:

Yeah.

Andrew Kelley:

But I can tell you what I did, right, and why it's better–and therein kind of lies the justification for my claim.

So what I did is I learned about data-oriented programming. I developed a better intuitive model about how the cache system of CPUs work. And the fundamental premise, or observation I should say, is that if you touch less memory, then the CPU cache will get to have more hits for the memory that you actually do touch. That's the whole observation. So based on that observation, you can make a bunch of code changes and then your code gets faster.

So one of the ways that I did this in the self-hosted compiler was I found the place where we created a lot of objects on the heap, and the objects that we were creating on the heap were IR instructions. So this is a part of a compiler that is an intermediate representation that you use to pass from one component of the compiler to another component of the compiler. You emit these intermediate instructions for the second component to consume. And it's code, right? So you're generating many of them in memory based on the code that the user types.

So based on the observation about CPUs and this knowledge, my hypothesis is, okay, if we make these objects take up less memory, then not only will that just use less memory in the compiler, which is good, it will reduce the pressure on the cache of the CPU and therefore make the code faster. And this turned out to be completely true and ended up being something like a 35% speed-up. But not only that, but that pattern-

Beyang Liu:

Wow. That's big.

Andrew Kelley:

Yeah, it was huge. And that pattern also existed in three other places. So I got that big of a speed-up three other times for doing the same strategy.

Beyang Liu:

It's always amazing to me the impact of increasing that cache hit rate or cache locality, because again, you don't really cover it too much in computer science as a subject. It's all about algorithms and whatever, but man, that is a substantial speed-up.

Andrew Kelley:

Yeah, it's more like computer implementation details 101.

Beyang Liu:

Yeah. Right. Right.

Andrew Kelley:

Yeah. But let me make my point with Rust though because I do have a way to tie it in.

Beyang Liu:

Okay.

Andrew Kelley:

So in order to do this optimization, this data-oriented design reworking, one of the core components is an untagged union. And so in Rust, these are enums, but they're tagged. There is a tag for this data structure, but you put it in a separate array and that's part of the strategy. So you have one array up here and each one is a byte, and the byte says this is the kind of instruction–it's like add, subtract, store, whatever. And then in a different array, each element contains the instruction's data and they're a fixed size. So if you want to find out the tag, you look up index number one and the tag is in this array. If you want to find out the data, you go look in the other array, index one, there's the data. This is called struct of arrays.

And the point of this is that if you tried to put all this data in one array, you would be wasting seven bytes of padding, because if you only need one byte for the tag but then you have like a pointer size field here, you have to have padding there. So you just put them in different arrays and then like poof, the padding goes away and that's better for the cache, right? You're not putting padding in the CPU cache.
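Editor's note: a simplified sketch of the layout Andrew describes, not the actual compiler code; the type and field names are made up. The one-byte tags sit densely in one array and the fixed-size payloads in a parallel array at the same index, so no padding rides along in the cache. The standard library's std.MultiArrayList automates this kind of layout.

```zig
const std = @import("std");

const Tag = enum(u8) { add, sub, load, store };

// Untagged union: the payload carries no inline tag, so there is no
// per-element padding just to sit next to a one-byte tag.
const Data = union {
    bin_op: struct { lhs: u32, rhs: u32 },
    operand: u32,
};

// Struct-of-arrays: look up index i in `tags` for the kind of instruction,
// and index i in `data` for its payload.
const Instructions = struct {
    tags: std.ArrayList(Tag),
    data: std.ArrayList(Data),
};

test "parallel arrays stay in sync by index" {
    var insts = Instructions{
        .tags = std.ArrayList(Tag).init(std.testing.allocator),
        .data = std.ArrayList(Data).init(std.testing.allocator),
    };
    defer insts.tags.deinit();
    defer insts.data.deinit();

    try insts.tags.append(.add);
    try insts.data.append(Data{ .bin_op = .{ .lhs = 1, .rhs = 2 } });
}
```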

Beyang Liu:

Right. Right.

Andrew Kelley:

Total waste. So, the problem is that you can't model this in Rust without unsafe. You have to use unsafe, which is fine, you could do unsafe, right? But no one wants to do that in Rust. No one wants to use unsafe because people come yell at you and make you feel sad. So they don't.

Beyang Liu:

Yeah. I mean, it's called unsafe. That's a loaded word right there.

Andrew Kelley:

Yeah. So people don't do it, right? In practice, people try to write safe Rust code, but you can't fully exploit the hardware of your computer in fully safe Rust code. So, in practice, these strategies are available to Zig programmers to write faster code. And here's the kicker, this code is safe in Zig because we have safety on untagged unions. Like we have both. It's safe.

Beyang Liu:

Got it.

Andrew Kelley:

We didn't have to give up safety to accomplish this, right?

Beyang Liu:

And you were able to have that because you've taken this more incremental approach.

Andrew Kelley:

Yes.

Beyang Liu:

You've been able to add this, whereas in Rust, everything has to fit into this kind of grand-

Andrew Kelley:

Exactly. Exactly.

Beyang Liu:

Consistent universal scheme.

Andrew Kelley:

So this is an example. I mean, it's one little use case. I mean, it's not contrived, it's a real use case–it's the self-hosted compiler–but it's a use case where it's safe in Zig, but if you tried to write this code in Rust, it would be unsafe or it would perform worse.

Beyang Liu:

Yeah. That's very cool. We're almost out of time here. I wanted to cover really quick: are there examples of projects or organizations or companies that are using Zig right now? Anything cool that people can check out and see?

Andrew Kelley:

Yeah. I'll give you two answers to that one. So one would be an open source project to check out. It's called River. I think if you search for it, it'll come up. It's a Linux window manager.

Beyang Liu:

Cool.

Andrew Kelley:

And that's written in Zig by Isaac Freund. And another project that might be fun to check out–it's called TigerBeetle. This is an up-and-coming financial accounting database, and there's a company behind this and they're doing it in Zig. TigerBeetle by coilhq.

Beyang Liu:

Very cool.

Andrew Kelley:

Yeah.

Beyang Liu:

On kind of a final parting note, if folks are listening or watching this and there's one thing that you want to ask them to do immediately after this episode ends, what is that thing?

Andrew Kelley:

Okay. You know what? I'm going to beg for money. If you like what we're doing, we want to pay more volunteers. There are lots of people who are contributing to Zig and they're doing awesome work, but we don't have enough money to pay them. And if we get more donations, we can pay them. Go to ziglang.org for that.

Beyang Liu:

Cool. Go donate. I know I will.

Andrew Kelley:

Aw, thank you.

Beyang Liu:

It's super cool the work that you're doing and thanks for spending the time today with us. This is very enlightening. It was super fun. And yeah, this just seems like a super awesome project. Thanks so much, Andrew.

Andrew Kelley:

Well, thank you for your time and it was really nice to meet you, Stephen.

Stephen Gutekanst:

Likewise. Yeah. Awesome.

This transcript has been lightly edited for clarity and readability.
