35 – Iavor Diatchki

Recorded 2023-09-18. Published 2023-10-17.

Wouter and Niki are joined by Iavor Diatchki to talk about his experience with different Haskell development styles, writing a high-assurance wiki in PHP, and maintaining Haskell code across different GHC releases over multiple decades.

Transcript

This transcript may contain mistakes. Did you find any? Feel free to fix them!

Wouter Swierstra (0:00:18): Welcome to the next episode of the Haskell Interlude. I’m Wouter Swierstra, and I’m joined by Niki Vazou.

Niki Vazou (0:00:25): Hello.

WS (0:00:26): And our guest today is Iavor Diatchki, who will tell us a little bit about the different styles of Haskell development, writing a high-assurance wiki in Haskell, and maintaining Haskell code over different GHC releases spanning multiple decades. So, welcome, Iavor.

Iavor Diatchki (0:00:46): Hello. 

WS (0:00:48): Our usual question for our guests is: How did you get into Haskell and how did you learn about the language, to begin with? 

ID (0:00:53): So, I first learned about Haskell when I was an undergrad, and at the time, I was doing quite a lot of programming in C++ and Java and object-oriented programming and those kinds of things. And I was really into programming languages as well. So, I heard about this functional programming paradigm that I really didn’t know anything about. And so, I kind of googled—or maybe, this being the late ’90s, I used Yahoo or AltaVista or something—to learn what this functional programming thing was. And quite randomly really, I stumbled upon the Gentle Introduction to Haskell, which is a Haskell tutorial that I believe you can even read to this day. I think it’s still around somewhere. And so, I just started reading. I installed Hugs, which at the time was the main sort of way to play around with Haskell. It was a little interpreter. And yeah, I basically went through the tutorial, and I thought it was really cool. There were a lot of things that were pretty unusual. Obviously, coming from an imperative language, I was quite curious: how do you program without mutating variables at all, no loops, and things like that? But I was able to follow along pretty well with this tutorial, and I kind of got really excited about it. And that’s pretty much – yeah, in a way, it was kind of random. If I had found a tutorial on ML first, I might have been an ML programmer.

NV (0:02:27): So, you were totally self-taught?

ID (0:02:29): Oh yeah.

NV (0:02:30): You didn’t have a class or any supervisor?

ID (0:02:34): Not really. I mean –

WS (0:02:35): But wait a minute, this gentle introduction is, we’re talking late ’90s, and then you were an undergraduate student or something?

ID (0:02:43): Yes. So, I was an undergrad from ’97 to 2000. So, that was probably around ’99, I think towards the last year of undergrad. And then I just started searching around. I remember, once I got a little bit comfortable with Haskell, I started reading some papers. And around that time was the stuff about bananas and lenses and barbed wire, about folds and things like that. So, I was kind of excited about that. I remember reading a paper from Erik Meijer and Simon PJ. They were going to have this dependently typed intermediate language called Henk, named after Henk Barendregt, and I thought, “Oh, that seems really cool.” So, through that, I learned a bit about Henk Barendregt and went on to read about the lambda calculus and those kinds of things.

So yeah, pretty much all the programming I learned has been self-taught. I mean, I learned BASIC first, I guess; that was the first language I learned. Then I learned Pascal, then C and C++. Then, in school, we mostly did Java. In school, it was Java and C++, depending on the classes. And I remember in the very last year, we had some kind of a class on, “Oh, here are some other ways to program,” where they taught us a little bit of Scheme. But by that time, I had already learned about Haskell, so I was like, “Oh, this is way cooler than Scheme.” I like the static types.

Anyway, that’s pretty much how I got into Haskell. So, once I started reading the papers about it, I thought, “Oh, it would be really cool to continue studying in that kind of area.” So, I applied to the Oregon Graduate Institute, which is where John Launchbury and Mark Jones were at the time. And Mark Jones was the implementer of Hugs, this little tool I was using. So, I was hoping to work with one of them. At the same time, I had two majors as an undergrad, in Computer Science and in Math, at the University of Cape Town. And there, the Math department was really into topology, and there was one math teacher who was really into category theory. So, we were studying algebra, but very categorically. And so, like many Haskellers, I thought, “Oh, maybe there’s some connection between category theory and Haskell,” which I’m still interested in to this day. I don’t think the connection is nearly as important as people seem to think, but I still think they’re very cool topics.

So, anyway, when I applied to the Oregon Graduate Institute, I told them about my interests, and I guess they accepted me there. So, I just moved to Oregon.

WS (0:05:30): And that must have been early 2000s or so?

ID (0:05:34): That was at the end of 2000. So, I started school in September.

WS (0:05:39): And then you did a PhD with Mark Jones, is that right?

ID (0:05:43): Yep.

WS (0:05:44): And what was this on?

ID (0:05:46): So, before that, I spent a lot of time, perhaps like a lot of other students, wondering what my PhD should be about, because I was interested in compilers and programming languages. So, I did a lot of little bits. I did some work on formalizing the Haskell module system. Then I was quite interested in verification. So, I learned Isabelle, which is not Haskell, but it’s also kind of functional programming, basically higher-order logic. It’s a theorem prover. And for the longest time, we worked on this project called Programatica, which was supposed to be a way to gain assurance that software works as it is intended to.

And at the end, basically, while we were looking at various kinds of software, I got interested in systems software. So, operating system kernels and that kind of thing – I was previously interested in these things. As I mentioned, I had been programming in C++ and C, and I was always interested in just hacking on low-level stuff. And so, the idea came: well, do you have to use these low-level languages to actually program systems software, or can you do it in Haskell? And I mean, there are a lot of reasons why it is hard to do those kinds of things in Haskell. But one thing we played around with, that eventually led to my PhD topic, was that we hacked up the GHC runtime system to put it directly on hardware. Later on at Galois, actually, which is where I work now, there was another project along these lines called HaLVM, and this was kind of a precursor to that. Because there was an observation that the GHC runtime system is pretty much a full operating system. I mean, it manages memory for you, it manages threads, and so on. And so, pretty much, you just had to map the memory manager onto the hardware’s page tables and so on. And we were able to play Snake, written directly in Haskell, on bare metal, and that was kind of cool. But in order to work on the hardware, we had to write quite a lot of drivers from scratch, because GHC can manage memory and fork things, but not if you want to do IO or some networking.

And so, I started trying to write drivers in Haskell, and there are all kinds of things that are tricky about that. But one of the bits that I ended up zooming in on was data layout. So, when you interact with hardware, oftentimes you need a very specific layout for your data structures, because they correspond to various things – like, you write to specific registers and things like that.

Anyway, this eventually led to my PhD topic, which was thinking about what sort of things you can add to a high-level language, like Haskell, to have some control over the representation of data, which is important when you want to interface with other systems, such as hardware or a foreign function interface. So, I’m not sure how successful any of that was. It was a little bit theoretical. It’s certainly not been adopted widely in the (0:08:57 unintelligible), although I’ve implemented variations of it in multiple Galois tools, which have one form or another of various parts of it. It has kind of leaked into various projects.
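To make the data-layout problem concrete: in plain Haskell, talking to a memory-mapped device means packing and unpacking bit fields by hand. A minimal sketch, with an entirely hypothetical register layout (the field names and bit positions are made up for illustration):

```haskell
import Data.Bits (shiftL, shiftR, (.&.), (.|.))
import Data.Word (Word32)

-- Hypothetical device register with three fields packed into 32 bits:
--   bits 0-7  : baud divisor
--   bit  8    : parity enable
--   bits 9-12 : FIFO depth
data UartConfig = UartConfig
  { baudDiv   :: Word32  -- must fit in 8 bits
  , parityOn  :: Bool
  , fifoDepth :: Word32  -- must fit in 4 bits
  }

-- Encode the record into the exact bit layout the device expects.
encode :: UartConfig -> Word32
encode c =
  (baudDiv c .&. 0xff)
    .|. (if parityOn c then 1 `shiftL` 8 else 0)
    .|. ((fifoDepth c .&. 0xf) `shiftL` 9)

-- Decode a register value back into the record.
decode :: Word32 -> UartConfig
decode w = UartConfig
  { baudDiv   = w .&. 0xff
  , parityOn  = (w `shiftR` 8) .&. 1 == 1
  , fifoDepth = (w `shiftR` 9) .&. 0xf
  }
```

The PhD work described here was about moving this kind of layout information into the language itself, so the compiler can check it rather than the programmer maintaining the shifts and masks by hand.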

NV (0:09:09): So, this was not a DSL. This was, you were writing Haskell, and it was going directly to hardware?

ID (0:09:17): Yes. So, the original thing was just all done in Haskell, and then all these experiments with representation of data types and so on were in a little Haskell-like language that I implemented during my PhD study. The original thing was basically Haskell. I mean, you couldn’t do everything, because IO has a lot of functions, and we hadn’t implemented all of those. But for Snake, you could directly write to the screen, that kind of thing. Some kind of low-level mode – I forget what mode it was.

WS (0:09:52): I mean, it’s an interesting idea. And I think in the early 2000s, late ’90s, there was this idea of, “Oh, Haskell – we can actually run these kinds of lazy functions on our machine and compile things. It’s amazing.” And there was this big surge in, “Where can we use these languages?” Looking back at that, are you happy with the progress you made? If you had to do research in this area again today, would you be doing the same kind of thing, or what did you learn?

ID (0:10:28): I mean, I think that the representation of data structures is certainly an important component of this thing. In fact, there was an invited talk at ICFP this year. Christine was the lady – I forgot her name.

WS (0:10:42): Christine Rizkallah.

ID (0:10:43): Yes, right. I talked to her before her invited talk, and some of the things she talked about are actually very similar to what I was doing in my PhD. They had two languages, and one was exactly for data types with representation. So, I was thinking, “Oh, that’s really cool.” It was very much in the same spirit as what I was thinking. And they’re doing it in a similar context, I think, in that they’re trying to write systems software. So, I think that’s an important component. And actually, parts of the things we did are, I think, probably good enough for a lot of the things you would want to do. I think there are more important issues, though, with Haskell and writing drivers or systems software, and this is—well, we perhaps all know—the difficulty of predicting performance. Like, when is garbage collection going to run? And that, I think, is a big issue with Haskell, even when you are not writing something like a device driver, but just normal software.

NV (0:11:47): Is this also because of laziness, that you cannot predict performance?

ID (0:11:53): Yes. Laziness certainly does not help, because a lot of the standard analysis tools people use are not really made for lazy languages. Another thing that is pretty tricky with Haskell is that it is pretty high-level, which is good because that means the compiler can do all kinds of optimizations, but sometimes it is pretty hard for the compiler to know what you want it to do. So, at the Haskell Implementors’ Workshop this year, there were a few examples of that, where GHC, very well-intentioned, goes and transforms your program in various ways, but that results in memory leaks, for example. I always find it amazing. I kind of think of myself as a pretty advanced Haskell programmer. And then somebody tells me what they had to do to write a fast Haskell program, and I don’t think I could do that. I have no patience to – like, I feel it’s almost not Haskell programming anymore. You look at these details of how the compiler works right now, and the next version might not work like that at all.

So, I think it would be interesting – that’s not a direction I’ve seen a lot of research in for Haskell, but I would like to write my program in what I consider to be natural Haskell, and maybe separately, as a separate thing, give guidance to the compiler on how I want it to be compiled, right? I have no idea how to do that, I should say. But it seems an interesting way to separate the issues of specification, in a way (this is what my program should do), from implementation, where you help the compiler in some way. And currently, I feel in Haskell we mix the two up if we want performant code, or oftentimes, we just don’t care about it because it’s not always important.

WS (0:13:46): So that would be things like hints, like “always inline this function first and then try to fire these rewrite rules”? Or are you still looking at what kind of information you need to provide?

ID (0:14:00): Yeah. I mean, certainly with GHC, you can express some of these things, but there are others where, I think, it matters in exactly what order things happen. And I think that the challenge is that there’s potentially a lot of information you could tell the compiler, but it is not at all obvious how to do it in a robust way that scales with the software. So, I just don’t know how to do that.
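For reference, the kinds of hints GHC does support today include inlining pragmas and rewrite rules. A small self-contained sketch (the function names are invented for illustration):

```haskell
-- Ask GHC to inline this small wrapper at every call site.
square :: Int -> Int
square x = x * x
{-# INLINE square #-}

-- A local copy of map, so the rewrite rule below is self-contained.
mapL :: (a -> b) -> [a] -> [b]
mapL f = foldr (\x ys -> f x : ys) []
{-# NOINLINE mapL #-}  -- keep calls intact so the rule can match them

-- Fuse two traversals into one; GHC applies this during simplification.
{-# RULES
"mapL/mapL" forall f g xs. mapL f (mapL g xs) = mapL (f . g) xs
  #-}
```

The limitation discussed above is exactly that these hints are local and sensitive to the order of simplifier phases; there is no robust, whole-program way to say how you want something compiled.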

WS (0:14:25): So, I’ve seen examples where you kind of do, or at least what I’ve tried to do is: if I know that there’s an efficient program, I try to do the proof, right? And I can then read off from the proof, from the very slow thing to the code that I want, which has a single loop. And you can read off, in a way, like, oh, you need to inline this thing first, and then you can see that you have this rewrite rule firing, and then this happens. But even then, if you do this, you have to know exactly what the efficient program is that you want to end up with, and you have to do a lot of manual work. And still, it can be fragile, because maybe the inlining heuristics change, or it does an extra pass somewhere, generating, I don’t know, other stuff, right?

ID (0:15:12): Yeah. So, in that direction, I think we could probably also have a lot more tools that help you with these kinds of things, visualize what’s happening. Having said all of this – I mean, I do Haskell programming for my day job pretty much every day, and these things have not been huge problems for me, mostly because right now I’m not writing device drivers. So, I find that oftentimes Haskell is just good enough. Things could obviously be a lot faster. But for many applications – like, for example, Cryptol – you could probably make the Cryptol type checker faster, but it works well enough. And I think there are also a lot of benefits to just expressing things more or less naturally and not worrying too much about performance. And generally, GHC does an amazing job. I mean, compared to an interpreted language, it still goes way faster.

WS (0:16:09): Yeah. So, you mentioned Cryptol a little bit, but maybe let’s go back to the end of your PhD thesis. That was 2006, okay? And by then you’d spent a few years in academia in Oregon. You had this low-level Haskell programming theme, and there was even – was it House, the Haskell operating system? And a few other things, ideas floating around?

ID (0:16:37): Yeah. So, that was all during my PhD. House, I believe, was the name of this thing that I was talking about earlier, GHC on bare metal. So, after all of that, I had done a lot of low-level hacking, and I was really quite ready to do something not so low-level for a while, because in a PhD, you spend a lot of time zoomed in on one small problem, and you try to work it out in a lot of detail. So, after I finished my PhD, which was, I think, at the end of 2006, I applied to Galois. And Galois is a company that was actually a startup from the Oregon Graduate Institute, the school I went to that I mentioned. Just as I got there to start my PhD, John Launchbury started this company, together with a few postdocs from the school. They decided, “Hey, this Haskell language, we think it’s really cool. Can we make a business around it? Is it plausible to use it in industry?” And so, they started the company. It was called Galois Connections at the time; now it is just Galois.

And so, by 2006, they had been around for six years. And I knew them, because John was still at the school. He taught category theory classes at the Oregon Graduate Institute, so I knew him pretty well. And I knew that they do interesting work with Haskell. So I thought, okay, I’m going to apply there and see if I can get a job for a few years. I wasn’t really sure at the time if I wanted to do academia stuff or industry stuff. I was thinking maybe I’d do industry for a few years and go back to academia. I wasn’t sure, but I wanted to try something different, because with the PhD, I had been solidly in academia for quite some time. And so, I joined the company, and I’ve been there ever since, actually. So, it’s almost, what, 17 years now? So, I guess I liked it.

WS (0:18:37): You must be one of the oldest employees then. Or not oldest in terms of age, but the longest there. Were there many people when you joined?

ID (0:18:45): Yeah, there are still a few other people from then, but not many. I mean, John is still around. There are probably another three engineers, because I know there were a few folks who just celebrated 20 years at the company, and I have not been there for quite 20 years. But certainly, I’m one of the earliest employees that are still around.

NV (0:19:07): So, when you joined, how many people were there?

ID (0:19:09): Oh, it was pretty small. I think maybe about 30, maybe 20 or 30. We are not that much bigger now. We are about 150, I think. But yeah, the company has grown. I mean, it has certainly changed. It’s kind of interesting how a small company grows. And a lot of startups don’t generally last that long, right? So, I think that Galois has been a great place to work. But what’s fun there is that I get to do Haskell and work on all kinds of projects. So, I said that I went into industry, but Galois is kind of an interesting industry company; it’s halfway between industry and academia, I feel, in that we are a company, we try to make a profit and whatnot, but we work on a lot of very researchy kinds of projects. And we also, much like a university, tend to apply for different grants. So, you work on a project. There are very few projects that have run as long as Cryptol, which I mentioned is one of the long-running Galois projects; most of the other projects run for a few years. And then they finish, and hopefully you do some publications along the way. Oftentimes, we reuse components from one project in another. But there are opportunities to work on all kinds of different things, which is pretty fun.

WS (0:20:28): So, Cryptol is maybe one of the first projects you worked on, or is it – 

ID (0:20:32): No. Actually, when I first joined Galois, I didn’t work on Cryptol. At the time, Galois was really into wikis. This was the early days of wikis, 2006. I had a colleague who was really into wikis, and we were going to see: okay, there’s this new wiki technology, can we adapt it to make it more secure, for example? To have people communicate, but maybe you can have multi-level security wikis and things like that, where people can only see some information on the wiki but not other information. And there’s an interesting question of, if you do edits, how you do merges and how you deal with conflicts. So, that was one of the first projects that I worked on. And I don’t think that much came of it, mostly because, well, wikis, as you know, are very unstructured. I think the problem is related to lenses in a way, right? This kind of thing where you edit a partial view of something and have to propagate it back to the big thing. Yeah, it was a difficult problem. It was certainly very fun to work on. So, I remember we were doing a bunch of web stuff. I even got to program PHP at the time, which I hadn’t done before, because a lot of the wiki systems were done with PHP at the time.

NV (0:21:56): So, you didn’t use Haskell for the web programming?

ID (0:21:59): So, we didn’t use Haskell for the web programming, but we used Haskell for the part that merges things and keeps track of the changes. The PHP stuff was for the user interface. Yeah. So, I should say, I use a lot of Haskell, but I also program in JavaScript, and currently I have a project where I do a bunch of Rust. I’m not particularly stuck on a particular programming language.

WS (0:22:23): Okay. So, you worked on the wiki stuff for a while and…?

ID (0:22:28): And then Cryptol came. Yeah. So then, after this whole wiki thing, I was like, “Okay, I want a different project. I want some programming language stuff to do.” And I knew that we had Cryptol, so I started looking at the source code. And also, I think maybe around that time, a lot of the original people who worked on Cryptol had moved away from Galois. So, we didn’t have that many people maintaining it.

WS (0:22:55): So, maybe just for our listeners, you can say something about what is Cryptol.

ID (0:22:59): So, Cryptol is technically a domain-specific language for writing cryptographic algorithms, or at least that’s how it started. The idea was that if you’re going to design a new block cipher (which is of course not something one should do lightly), it would be nice not to have to do that in a low-level language like C. You oftentimes want to be able to experiment, check various properties of your algorithm, and so on. And I think that that was the original motivation for starting Cryptol. Now, that was started before I even joined Galois, but I believe that that was the motivation for it.

In terms of a programming language, it’s not very different from Haskell, in that it is a purely functional language. Where it differs from Haskell is that it is much more restricted in the kinds of types it has. Essentially, it has the type of bits, the type of sequences of something, and tuples and functions and so on.

The sequences are really the big thing in Cryptol. The sequences, unlike arrays in many other programming languages, are indexed by their lengths, kind of like what people sometimes call vectors in dependently typed languages. So, Cryptol is very explicit about keeping track of sequence lengths, and everything is a sequence. You can have a sequence of bits, and it can be a 4-bit sequence, it can be 17 bits. You’re not restricted to the standard 8, 16, 32, or 64 bits, which is very convenient when you do bit twiddling, which you do a lot in crypto algorithms. It is very common: you get a word of bits, you split it up into smaller words, you mix them around in various ways, there are various matrices and whatnot. So, Cryptol helps you make sure that you didn’t mess things up as you put them together. Otherwise, it is much like Haskell, except there are no user-defined data types and no type classes. So, it is simpler in that respect. The syntax is a little bit different, but not all that different.
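A rough Haskell analogue of what length-indexed sequences buy you (Cryptol tracks lengths natively; this GADT sketch is only an illustration of the idea):

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures, TypeFamilies, TypeOperators #-}

-- Unary type-level naturals for tracking lengths in types.
data Nat = Z | S Nat

type family Add (n :: Nat) (m :: Nat) :: Nat where
  Add 'Z     m = m
  Add ('S n) m = 'S (Add n m)

-- A sequence whose length is part of its type.
data Vec (n :: Nat) a where
  Nil  :: Vec 'Z a
  Cons :: a -> Vec n a -> Vec ('S n) a

-- The type alone records that appending adds the lengths,
-- so mismatched widths are rejected at compile time.
append :: Vec n a -> Vec m a -> Vec (Add n m) a
append Nil         ys = ys
append (Cons x xs) ys = Cons x (append xs ys)
```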

WS (0:25:12): Yeah, but it’s a standalone language.

ID (0:25:14): It is a standalone language, and it has an interpreter. Well, it has a whole frontend, type checker and whatnot, and it has an interpreter that allows you to play around with the program, browse what’s in scope. Very similar to GHCi. It also has connections to SMT solvers. In fact, in Cryptol, we use SMT solvers in two different ways. We first use them in the Cryptol type checker, when we make sure that the various lengths of sequences are right. I know Wouter has done a lot of dependent types. The SMT solver can take care of silly things like, “Okay, I need to reassociate the parentheses here,” and it knows about various identities and so on. And then we also have the ability to do symbolic simulation on Cryptol terms. And then we can use the SMT solver, more like the work Niki is doing, to check properties about Cryptol programs, much simpler than liquid types and whatnot. You can write a property, and you can either do QuickCheck-like random testing of the property, or you can see if the SMT solver can prove it.
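In Haskell terms, the testing half of that workflow is like QuickCheck over a bit-twiddling property. A minimal sketch of the kind of property involved (split16 and join16 are made-up helpers):

```haskell
import Data.Bits (shiftL, shiftR, (.&.), (.|.))
import Data.Word (Word16, Word8)
import Test.QuickCheck (quickCheck)

-- Split a 16-bit word into its high and low bytes.
split16 :: Word16 -> (Word8, Word8)
split16 w = (fromIntegral (w `shiftR` 8), fromIntegral (w .&. 0xff))

-- Rejoin the two bytes into a word.
join16 :: (Word8, Word8) -> Word16
join16 (hi, lo) = (fromIntegral hi `shiftL` 8) .|. fromIntegral lo

-- The property: splitting and rejoining is the identity.
prop_splitJoin :: Word16 -> Bool
prop_splitJoin w = join16 (split16 w) == w

main :: IO ()
main = quickCheck prop_splitJoin
```

In Cryptol, the same property can be handed to random testing or to an SMT solver for an actual proof, since the state space here is small and finite.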

So, that’s roughly where it is now. Cryptol is open source and it’s on our website. You can go download it there, and it’s all developed on GitHub. I’m one of the main maintainers of Cryptol. And the way I got into that was right after the whole wiki thing I mentioned, when I was wanting to do some language stuff. My background really is type checkers; I wrote a lot of them during my PhD. So, I started looking at the Cryptol type checker and started cleaning up and refactoring things. And pretty soon, I ended up rewriting a whole lot of the Cryptol front end, basically. At that point, we made what we call Cryptol version 2. And just very recently, we have released Cryptol 3. But Cryptol 2 happened maybe around 2008 or something like that. It was a major rewrite of the Cryptol front end, which I was involved with, and since then, I’ve been maintaining that part. So, the front end really is my main area of expertise.

WS (0:27:33): If you look back on that, it’s been running for a while now. So, it’s like 15 years of industrial Haskell usage. And I know Haskell’s changed a lot in 15 years. Like, how old are GADTs, type families, roles, levity polymorphism? How do you keep up?

ID (0:27:54): Well, we just don’t use those things. You don’t really need them. It’s kind of interesting: we have two long-running projects at Galois that are used by many other related projects. One is Cryptol, which is what I was talking about. The other one is called SAW, with Crucible, which is a very advanced symbolic simulator that does program analysis, mostly of C programs. And they’re written in very different styles, which is interesting. Cryptol is written in, what, basic Haskell. Okay, it uses functions and data types. It’s not quite Haskell 98, but it’s almost Haskell 98. I think maybe we use some multi-parameter type classes or something, but we generally don’t use a lot of fancy Haskell features in Cryptol, certainly not at the type level. We don’t really use GADTs, and I don’t think we use type families or anything like that.

WS (0:28:55): So, even though the language itself has these numeric indices, the actual implementation is simple, you would say?

ID (0:29:05): Yeah. Well, I mean, it is written in simple Haskell. It is still pretty complicated, I think, but at least you don’t need any advanced Haskell to understand what’s happening. I would say the most advanced things are probably not type-related. It uses some of these recursive do kinds of things (so, laziness in a monadic setting), which is also a bit questionable. But when I wrote some of that, I was pretty excited about it. So, it has some complicated things like that in a few places, but mostly it is not super complicated Haskell, which is why it has generally been easy to maintain. With every new GHC release, we often have to fix something or other, but not much, right? Generally, Cryptol tends to be pretty easily upgraded from one version of GHC to another, because GHC tends to not change basic Haskell a lot. And it’s been a good experience, because I find it’s generally a lot easier to get new folks to do something with Cryptol than with this other codebase, which is written with a lot of GADTs and a lot of type families and so on. Because with Cryptol, you can write the program a lot quicker. Well, modify the compiler a lot quicker. Now, maybe you made a mistake, right? I mean, there are a lot fewer checks that GHC does for us. But instead, we write tests, and we do all the things people do, right?
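The “recursive do” mentioned here is GHC’s RecursiveDo extension, where monadic bindings may refer to results defined later in the block. A minimal sketch, assuming a made-up self-referential node as the example:

```haskell
{-# LANGUAGE RecursiveDo #-}
import Data.IORef (IORef, newIORef, readIORef)

-- A node that carries a reference back to itself.
data Node = Node Int (IORef Node)

main :: IO ()
main = mdo
  ref <- newIORef node    -- 'node' is only defined on the next line
  let node = Node 42 ref
  Node n _ <- readIORef ref
  print n                 -- prints 42
```

This works because mdo desugars to mfix, relying on laziness: newIORef never forces its argument, so the knot can be tied. It is exactly the sort of subtle laziness-in-a-monad trick the speaker calls “a bit questionable.”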

Also, what I think helped me with Cryptol is that at one point, later on, I got interested in GHC development. And so, I looked a lot through the GHC compiler. And GHC is also a very complicated piece of code, but much like Cryptol, it doesn’t use a lot of these new features. Well, I guess nowadays it uses a little bit of type families and some cleverness, like this fancy abstract syntax I think they changed to. But at the time I was looking, it was written in vanilla Haskell. What really impressed me is that it’s written in this way where, whenever there is something complicated, there are these notes in the compiler. And Simon PJ, I think, has been very good about pushing people, whenever they do something clever, to write it down. And so, I’ve been trying to copy that practice in Cryptol. I think it is very valuable. I mean, it’s been helpful to me, because sometimes I do something and then I don’t look at it for a few years, and when you look back, it’s very handy to have these kinds of notes that describe it.
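The GHC convention looks roughly like this (the note below is invented to show the shape, not taken from GHC or Cryptol):

```haskell
-- Note [Defaulting sequence lengths]
-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-- A longer explanation of a tricky design decision lives here, next to
-- the code but out of its way: why unconstrained lengths get a default,
-- which alternatives were rejected, and what would break if it changed.

defaultLength :: Maybe Int -> Int
defaultLength = maybe 0 id   -- see Note [Defaulting sequence lengths]
```

The key point of the convention is the one-line pointer at each use site, so the prose explanation is written once and referenced wherever the cleverness surfaces.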

WS (0:31:48): Yeah. How do you keep the two in sync, right? I usually find that when I’m developing stuff (I don’t do any serious development, but I play around and change a bunch of definitions), it’s like, “Oh, it works. Amazing. It passes the tests. I’m done.” And then you need the discipline to fix up the notes and keep everything in sync, right?

ID (0:32:11): I think so. Although generally, the notes tend to be more high-level things that explain an algorithm or something like that, which just doesn’t change all that much. I mean, if you change the algorithm, you should obviously change the note as well, but then you kind of have to rewrite it anyway. So, I don’t think that that’s been that big of a deal. And it’s mostly for the thorny parts, right? You don’t need to describe every little bit. But things do get out of sync.

NV (0:32:40): My experience, and what you also shared about GHC, is that if you start seeing many comments, it’s like something is going wrong. In theory, you could just rewrite and have very simple functions doing most of the work.

ID (0:32:55): Yeah. But anyway, I’m kind of a proponent of this simpler kind of Haskell, because I think that oftentimes, especially people who are new to Haskell tend to jump into these fancy, interesting things, which definitely have their uses. I mean, I play around with them, except that I’m pretty conservative about introducing them in production projects. Because oftentimes, you can do a lot without them. I mean, it’s not like Haskell doesn’t have enough types without the type families and whatnot. It still has an incredibly advanced type system, and with higher-order functions, you can express a lot of things. So, I think with experience, you just get to decide how much you should have the compiler check versus having some kind of invariant where maybe you write a function to check the invariant, and you have some kind of linter that you run with your tests. It’s a balance, basically, I think.
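One common shape for that trade-off is a smart constructor plus a reusable invariant checker; a sketch (the Sorted type is hypothetical):

```haskell
-- Hide the constructor behind a module boundary; the invariant is
-- checked once at construction time rather than encoded in the type.
newtype Sorted a = Sorted [a]   -- invariant: ascending order

mkSorted :: Ord a => [a] -> Maybe (Sorted a)
mkSorted xs
  | isAscending xs = Just (Sorted xs)
  | otherwise      = Nothing

-- The invariant as a plain function, reusable from a test suite or a
-- debug-mode check that runs alongside the tests.
isAscending :: Ord a => [a] -> Bool
isAscending xs = and (zipWith (<=) xs (drop 1 xs))
```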

WS (0:33:51): Okay. So, you mentioned Crucible a little bit, and I know you have this experience report at ICFP where you reflect on dependently typed Haskell in practice or something like that. How do you feel the two styles compare? The two development styles, right?

ID (0:34:07): So, a lot of Crucible was not – I’m definitely not the main developer on that; it was developed by colleagues of mine. And there, at the beginning, it almost started as a bit of an experiment. Like, okay, what if we were to take this kind of thing seriously and try to do everything with GADTs and type families, and try to encode a lot of invariants in the types? I’m familiar with the codebase, and my thinking is that it worked a lot better than I would’ve thought. (0:34:38 unintelligible) mistake about this experiment, but it worked, right? And what I think made it work is that you have to commit. I think that the most difficult bit is if parts of the code are written in a kind of dynamic style and parts of them are (0:34:53 unintelligible). That doesn’t work. You have to jump all the way in, right? Everything statically typed, with all the checks and whatnot. And of course, every now and then, we have cheating axioms, for when GHC can’t prove some kind of equality: we just assert it with an axiom, and that oftentimes has to do with these numeric things.
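A “cheating axiom” in this style typically looks like the following sketch: an equality that GHC’s solver cannot derive on its own is asserted with unsafeCoerce, so it is trusted rather than checked:

```haskell
{-# LANGUAGE DataKinds, TypeOperators #-}
import Data.Type.Equality ((:~:)(Refl))
import GHC.TypeNats
import Unsafe.Coerce (unsafeCoerce)

-- Commutativity of type-level addition: true, but GHC cannot prove it
-- by itself.  The proof obligation is on the programmer; the coercion
-- is a trusted axiom, not a checked fact.
plusComm :: proxy n -> proxy m -> (n + m) :~: (m + n)
plusComm _ _ = unsafeCoerce Refl
```

Pattern matching on the resulting Refl then lets the rest of the statically typed code treat the two lengths as interchangeable.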

Yeah. So, it did actually work. And Crucible is incredibly complicated, not just because of the Haskell, but simply because it is an advanced symbolic simulator. So, it worked better than expected. In general, though, I much prefer the simple Haskell style, in part because my experience as a developer, and as somebody quite familiar with the advanced features of the language, has been that I’m incredibly slow when I have to make changes to Crucible. Any time you make a change, you have to spend a lot of time convincing the compiler that you didn’t make a mistake, which on the one hand is good, but on the other hand, it’s not great. It’s great if you know exactly what change you want to make, right? But if you’re experimenting with things, it is quite difficult to do a quick change and see how things work, whether they pan out. You have to get it all the way correct.

And so, this has always been a big challenge for me with this kind of style. In fact, if I were to do verification, I’d much rather have a program that I can run and test, and separately have properties of the program that maybe hold or don’t hold. I don’t think that the two need to be synced all the time. I would like the program to be temporarily unverified, and then maybe I can stop and verify things. And you cannot currently do that easily, I think, with a sort of dependent Haskell kind of language.

WS (0:36:54): Yeah. So, a few weeks ago, I spoke to Edwin Brady, who works on Idris, and one of the things that’s interesting about Idris is that it’s implemented in Idris, right? So, the Idris compiler uses Idris. And there, they very consciously made the choice to have at least a scope-safe syntax that’s used throughout the compiler, just because they do all kinds of transformations, or they do optimization, or they do a pass to remove dead code, and then they need to bump De Bruijn indices up or down. And he said, “Oh, this has been a great decision for us, because we’ve had so many bugs that would’ve been accepted by the compiler, but now we get warned straight away that this is something we shouldn’t be doing and need to fix.”
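Scope safety of this kind can be had even in quite simple Haskell via the well-known nested-data-type encoding of well-scoped terms (a generic textbook sketch, not Idris’s actual representation):

```haskell
-- The parameter 'a' is the type of free variables, so a variable that
-- escapes its scope is a type error, not a latent compiler bug.
data Term a
  = Var a
  | App (Term a) (Term a)
  | Lam (Term (Maybe a))   -- 'Nothing' is the freshly bound variable

-- Renaming (and hence weakening) falls out as the Functor instance:
-- going under a Lam shifts the "index" by wrapping in Maybe.
instance Functor Term where
  fmap f (Var x)   = Var (f x)
  fmap f (App t u) = App (fmap f t) (fmap f u)
  fmap f (Lam b)   = Lam (fmap (fmap f) b)
```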

On the other hand, I think he was very explicit in saying that this was a sweet spot, somehow just counting scopes or which variables are in scope or something like this. Maybe Idris developers have more experience working with dependent types or thinking in the right ways. I don’t know.

ID (0:38:06): My take on that would be that Idris dependent types are also quite a bit stronger than Haskell dependent types, right? Haskell doesn’t really have dependent types, right? Actually, if I were to do dependently typed programming, I personally would probably be most excited about Lean and what’s happening with that. They seem to be doing dependent types, but with a view to programming rather than a theorem prover. And I mean, we know that programming and theorem proving are sort of the same thing, but they have a different spin. Languages like Agda are quite clearly on the theorem-proving side. I think Agda is a great language for learning dependent types, because everything is very explicit, but programming in Agda is probably not very practical. Things like Idris and Lean seem much more intended for actually writing programs with dependent types, and they’re sound as well, right? While the Haskell dependent types—well, I’m not sure—it seems that they’re not quite sound. I don’t know what the story there is. So yeah, I’m not too excited about this direction in Haskell.

NV (0:39:22): Can you give us a concrete example of this different spin between theorem provers and programming?

ID (0:39:30): Well, I mean, paying attention to performance, for one thing, and having a compiler. So, Lean does a lot of very clever things with reference counting and mutating things in place. I’ve actually implemented some of the stuff they’ve written about in other tools that we have at Galois. Also, I think it has to do with the standard library. I think having IO early on is a good indication that you intend to do some programming as opposed to theorem proving. Obviously, you don’t do IO in a theorem prover. I mean, maybe you use it to load theories or something, but when you think of proofs, IO terms are not proofs. That’s what I mean.

So, Haskell is much more pragmatic, I think, for programming with dependent types than Agda, for example. I don’t know. As I said, what Edwin says makes a lot of sense to me. You have to pick some kinds of invariants; as I say, how much you want to capture in static properties versus dynamic ones. In a way, I don’t think it matters that much. What matters is that you think about what the properties are, right? That has been very valuable. For example, I also work at Galois on another tool called Daedalus, which is a parser generator. That’s also basically a functional programming language, and it has many intermediate languages that it compiles through. And so, we were very explicit that each of them has an interpreter, which essentially gives it semantics, and we have linters that type check the terms and so on.

And what was really useful is that after we do some kind of pass in the compiler, you can run the program before and after the pass and make sure that it still does the same thing, for example. So, this is completely dynamic. It’s not a proof, and you have to write some tests to do it. But I think in the process of writing these things and running them in CI and so on, you catch a lot of the problems.
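A sketch of that dynamic check as a property test, with a toy expression language standing in for the real intermediate languages (Expr, interp, and simplify are all made up for illustration):

```haskell
import Test.QuickCheck

-- A toy intermediate language, with an interpreter as its semantics.
data Expr = Lit Int | Add Expr Expr
  deriving Show

interp :: Expr -> Int
interp (Lit n)   = n
interp (Add a b) = interp a + interp b

-- A constant-folding pass: the transformation under test.
simplify :: Expr -> Expr
simplify (Add a b) =
  case (simplify a, simplify b) of
    (Lit x, Lit y) -> Lit (x + y)
    (a', b')       -> Add a' b'
simplify e = e

-- Random programs, size-bounded so generation terminates.
genExpr :: Int -> Gen Expr
genExpr n
  | n <= 0    = Lit <$> arbitrary
  | otherwise = oneof [Lit <$> arbitrary, Add <$> sub <*> sub]
  where sub = genExpr (n `div` 2)

-- The pass must not change what the program means.
prop_passPreservesSemantics :: Property
prop_passPreservesSemantics =
  forAll (sized genExpr) (\e -> interp (simplify e) == interp e)

main :: IO ()
main = quickCheck prop_passPreservesSemantics
```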

WS (0:41:34): So, your argument, I guess, is whether you enforce the specification dynamically or statically matters a little bit, but what matters most is having a specification at all.

ID (0:41:46): I would say so. I mean, I think this is what happens with many – when you get a lot of security bugs, you see that clearly nobody thought of that, right? It’s not that it’s some very deep thing. It’s that people didn’t care too much about what the spec is. Doing these parser generators, we’ve been testing HTTP parsers, and it’s amazing how buggy HTTP parsers are, simply because people implemented what they thought the thing did, and nobody read the spec. So, the important bit is certainly writing the property down in some form. Now, if you can check it statically, that’s good, but sometimes it’s also convenient to be able to disable it temporarily while you are doing things and not check it, right? I think that that is the big thing with these kinds of dependent types. Well, it’s also dependent types. So, I’d say there are various styles of writing dependent programs.

So, depending on how you write your dependent programs, you can be more or less flexible. Because of course, you can write proofs separately from your implementation in a dependently typed language too, right? There’s no reason why you have to mix the two. At this point, the question becomes: do you want to have the same language that you program in and that you write your proofs in, or do you want to have two separate languages? Even in a dependently typed language, you can write your proofs in a style where the proofs are separate from the program, in a way, right? So, it is a choice even there. The main benefit of a dependently typed language is that you’re using the same language for the proofs and the program, right? Although that’s not always the case, because with tactics and whatnot – like, for example, in Coq, you have Gallina and whatnot.

WS (0:43:33): Well, at least, I guess, the specification language is the same, right? You have a specification language and you have a programming language, and the two are, at least in theory, closely related. And I can imagine that that’s something you see also in Haskell, because you might write a specification as a QuickCheck property that you want to check, and then you can use the same technology to keep everything together somehow. Does that match your experience? I don’t know if you’ve worked in systems where the specification was vastly different from the implementation. I don’t know. That might make –

ID (0:44:09): I mean, mostly I’ve encountered that kind of thing – it was a long time ago. We did some stuff with Isabelle, and then you had an implementation and a specification in Isabelle. And I believe oftentimes when people do that, they try to synthesize the implementation. Like for seL4, I think they had a specification in Isabelle but generated C from that. So, you do want to keep them connected in some way. That is quite important.

WS (0:44:35): Yeah. Yeah, that makes sense. Suppose you had to do a project and you could use whatever language, whatever verification technology. How do you go about deciding whether it should use dependently typed Haskell or not? Or how to set up the CI, or how to set up the test suite, or where to write the specification? You have a bunch of experience with different styles and different projects. How do you choose what to do?

ID (0:45:09): I don’t have a magic answer to that. When I pick a programming language, oftentimes I want it to be statically typed. I mean, it depends on the context, if there is a statically typed option, but I much prefer to use a language like Haskell, or even C++, or Rust or something. Something that checks at least the types somewhat, because I find myself making a lot of mistakes when I program in JavaScript (not TypeScript) or Python or Scheme, without the type checking. I make very silly mistakes. Like, I make typos even. If I just had the scope checking there, it would be great. And it’s really annoying: you run the program, and halfway through it stops because you mistyped a variable or something. So, I definitely prefer a statically typed language.

And then I pick the language that people are most familiar with – I have quite a lot of colleagues – and, I mean, that I’m the most familiar with. In general, if we are doing some kind of tool where performance doesn’t matter hugely, I pick Haskell, simply because I think it’s a very nice language, I’m familiar with it, and it is very good for quickly changing stuff. And a lot of our projects are such that you don’t know from the beginning exactly what you’re trying to do; they have this research aspect to them.

On the other hand, if I anticipate that performance is going to be an issue, then for the programming language I would pick maybe Rust or C++ or something. In terms of writing tests and whatnot, I think the most interesting question there is when you should start writing the tests. Because I find that if you write the tests too early, then you spend an awful lot of time early on just changing the tests, as software early on is very changeable. So, at some point, once things become at least somewhat stable – you’ve reached some kind of API boundary or something – that’s usually a good time to start adding tests. And how we add tests, it’s just kind of standard GitHub Actions and whatnot.

WS (0:47:25): So, it sounds like in the very early days, you want to experiment around very freely, but still have at least some static type safety to stop you making silly mistakes. And once you’ve converged on a design, then you have to think a little bit harder about, okay, what are the invariants or the properties that form part of my specification? And from there, I can write these down and at least incorporate them. But then how do you figure out what things should become tests and what things you maybe want to enforce statically using the type system, or doing a little bit of dependent Haskell or something like that?

ID (0:48:03): It’s just a feeling, really. My guess would be that the next step, after you’ve done all the tests and things have worked for a while, is to go on to verifying the program, right? Except that never happens. First because, well, we don’t have tools that are good enough yet, I think. And also because projects finish, right? But I think that that would be the order of things. And I think if you start with the verified thing, it is quite difficult to have the freedom of quickly changing things. I mean, it’s not surprising, right? You want to test before you prove, because tests are cheap and proving tends to be hard. But I think the big value of proving things is not even that you did the proof, but that you thought about the properties that you wanted to prove. I think that by the time you figure out what the properties are, checking that things work is a lot easier, in a way.

WS (0:49:02): Yeah. So, it’s once again this point of having a specification of what the thing is that we care about. Even articulating that thought is valuable in itself.

ID (0:49:13): I haven’t had a lot of experience with verifying things at work, mostly because things tend to quickly get quite complicated. Although, since we briefly mentioned Daedalus, I can add one thing about that. Well, not Daedalus; it’s actually a Cryptol project I wanted to talk about. So, the current Cryptol project I work on is adding a Rust backend to it – compiling Cryptol to Rust. And as part of that, we are working on writing a runtime system for the compiler. And there’s a whole lot of pretty low-level code that does various bit manipulation and so on, which is incredibly error-prone. So there, I ended up using the Rust equivalent of QuickCheck – Proptest, I think it’s called. And that was super, super useful, because it caught a whole lot of bugs. And I was thinking that the code is just simple enough that we may want to prove some properties of the runtime system; some of these bit manipulation algorithms might be amenable to actual verification. So, for next year, I’m hoping to try to win a project to get some funding to maybe do a little bit of actual verification, to go beyond the tests.

That will be in Rust, not in Haskell, because the runtime system is written in Rust – it’s supposed to run in Rust. But again, Haskell is going to feature somewhere there. If I don’t end up using Haskell directly, I oftentimes write tools to manipulate things in Haskell. It’s very good for putting together a quick tool to do something, you know.

WS (0:50:53): So, we’ve touched on a lot of languages, a lot of technology, kind of a lot of experience in various styles of Haskell development. If there’s something that you feel is missing for the language, either in the ecosystem or the tooling or the type checker or the compiler, if I were to say, “Oh, I won the lottery and I can give Iavor a million dollars to fix up his biggest gripes with the language,” how would you spend that money?

ID (0:51:23): That’s a good question. First, I should say that I actually think that Haskell has, for a while now, felt more or less finished to me. It doesn’t need to change all the time, because it works well. And I also think that the tools have been getting a lot better. There are all kinds of little things I could mention where things don’t work as well as they could, but that’s the case with the tools for pretty much any language. Haskell is by no means the only one.

One thing that I think would be very cool – and very hard to do – is if we could make GHC faster. I think that would be a really good thing, because people complain about how slow the Rust compiler is compared to, I guess, GCC. But GHC, in comparison to the Rust compiler, is a few orders of magnitude slower. It’s not the end of the world, but I do tend to spend a lot of time waiting for things to recompile. I know that that is a very big project, and it’s not an easy thing to do, and I know there is actually ongoing work to try and speed that up. Otherwise, I would spend a lot of money on this kind of research: how do you analyze the runtime behavior of Haskell programs? Because this, I think, is way too – yeah, I don’t think we are doing it the right way. It’s too difficult. There’s got to be a better way to do that. I don’t know how, even if you had a million dollars or something.

WS (0:52:49): Yeah. Who knows? 

ID (0:52:51): Yeah, that’s right. 

WS (0:52:52): Okay. Thanks very much for your time, Iavor.

ID (0:52:55): Yeah, thanks for having me on the episode.

Narrator (0:53:01): The Haskell Interlude Podcast is a project of the Haskell Foundation, and it is made possible by the generous support of our sponsors, especially the Monad-level sponsors: GitHub, Input Output, Juspay, and Meta.

USA