32 – Ranjit Jhala

Recorded 2023-05-11. Published 2023-08-24.

This episode’s guest is Ranjit Jhala. We discuss how Ranjit developed Liquid Haskell as a litmus test, because if Haskell programmers won’t use Liquid Types, no one will. We also hear how writing Haskell is a joy and how you should never underestimate your students.


This transcript may contain mistakes. Did you find any? Feel free to fix them!

Andres Löh (0:00:15): This is the Haskell Interlude. I’m Andres Löh. My co-host today is Matthías Páll Gissurarson. 

Matthías Páll Gissurarson (0:00:21): Hi.

AL (0:00:22): Today’s guest is Ranjit Jhala. We will discuss how Ranjit developed Liquid Haskell as a litmus test, because if Haskell programmers won’t use Liquid Types, no one will. We’ll also hear how writing Haskell is a joy and how you should never underestimate your students. 

Our guest today is Ranjit. Welcome, Ranjit.

Ranjit Jhala (0:00:43): Hi. Hi, Andres. Hi, Matti. Very glad to be on this podcast.

AL (0:00:49): Our first question usually is, how did you get into Haskell? I guess that for you is a long story.

RJ (0:00:57): Well, it’s – yeah, I don’t know if it’s long. As an undergrad, I was very lucky in that I got exposed by my excellent professors to various functional languages. So in fact, my Intro Programming class was taught with Scheme.

AL (0:01:10): Let’s backtrack a little bit more. So you studied Computer Science? 

RJ (0:01:14): Ah, yes. Let’s backtrack even more. So I was very fortunate in that school. The high school I went to had just an incredibly good Computer Science teacher, a very charismatic gentleman, and he was also in drama, like theater. So anyway, he was really good. But during those days, I remember, we just learned BASIC. So literally, the language BASIC, with line numbers. And I remember this gentleman telling me that, “You won’t believe –” this was a boys-only school in the country. “You won’t believe, boys. There are these funny languages where they don’t even have line numbers.” And we all laughed at him at the time. I was like, “What? How can you have a language without line numbers? That makes no sense.”

But anyway, it was all BASIC. And then I went to college in Delhi to study Computer Science because this guy had really – I mean, I just loved the subject. It was great. I loved to program. And there in the Intro Programming class, it was actually quite interesting because they had two. I mean, they were really ahead of the curve. I went to a school called IIT Delhi and they had two Intro Programming classes. One for people who had never seen any programming before. And those students got to program in OCaml, or was it SML? I can’t remember. I think it was OCaml. And the ones who had seen some programming for whatever reason were given Scheme. So, because I had seen some programming, somehow we did Scheme, but this was the first time I’d seen a language without line numbers. And I was like, “Wow, how is this possible?”

Anyways, I was very smitten by Scheme at the time. Everything is recursion; it’s beautiful.

AL (0:02:45): But it’s really interesting that they chose a statically typed language for people who had not seen programming and a dynamically typed language for –

RJ (0:02:55): I think it’s – I mean, it’s a relatively small university with a small department. So I think it was largely guided by the folks who taught the class. And I think I remember the gentleman, or the plural actually, who taught the one for beginners. They were like PL people. They were like, “We’re going to pick a statically typed language and it’s going to be OCaml.” And the gentleman who taught us, the ones with some programming experience – he was a sort of a general computer scientist. He had done some theory and systems and so on. He’s like, “Well, I’m going to pick…” I suspect that was it.

AL (0:03:25): Fair enough. So it was different teachers, and then it was a matter of personal preference. Okay, that makes sense.

RJ (0:03:31): That’s my hunch. And I think they may have all converged. 

AL (0:03:35): I thought that there was this whole pedagogic decision that somehow, I don’t know, if you don’t know very much about computers, then the types help you. But the moment you already know your things, the types get in the way. So I was reading far too much into it. 

RJ (0:03:50): No, I don’t – yeah. I suspect it was just, yeah. Then the PL faculty, who I took several classes with, a lot of their classes were with various ML flavors, right? So I had that. So this is, I remember like ’95 to ’99. So I can’t remember if we were using SML or OCaml at the time, but I think it was SML actually. So anyway, a lot of our programming languages classes were taught with some form of ML. So I was smitten with that.

And then when I went to grad school, I did a lot of – my sort of early PhD work is in verifying sort of C programs. But since you’re already manipulating programs and so on, I mean, functional language is very natural. So all of that stuff. So I worked on a system called Blast, which is a model checker, and Blast was all written in OCaml, because by then, I’d already done a bunch of sort of functional programming in OCaml.

And subsequently, a lot of my research in the early twenty-aughts, as it’s called, right? It all sort of uses OCaml for a bunch of things. And then the way I got to Haskell, in fact, the earliest sort of liquid types paper, that was written for OCaml. It’s a system called DSolve. And somewhere on the internet, probably you can find the code for it. But I went to – I think I did a – what do you call it? Like a brief – I visited Microsoft Research in Cambridge in, I want to say, 2012 or 2011. I forget, but something around that. And I gave a talk on these liquid types for OCaml DSolve, and Simon, Simon Peyton Jones – I shouldn’t just say Simon, because for this audience, there’s – so Simon was in the audience.

AL (0:05:22): Oh, there are many Simons in the Haskell community.

RJ (0:05:24): That’s true. There are lots of – Simon Peyton Jones was – actually, I think Simon Marlow was also in the audience there. So I gave this talk and it was all OCaml and so on. And Simon sort of – then when I went and had lunch with him, I remember his sort of pitch. He’s like, “Look, Ranjit, you should do this for Haskell because this is the best way for you to find out if any of this makes any sense. If you do it for Haskell and Haskell people use it, that means that people might use it. But if Haskell people don’t use it, then it means no one’s going to use it.” And so I was like, “Well, that’s a fair point.”

And so anyway, I guess you know Simon is a very enthusiastic person, so somehow that really kind of kickstarted it. I was like, “Okay, well, let’s try and do this for – let’s try and make this work for Haskell.” And so we sort of regenerated all this stuff.

AL (0:06:14): Your first contact with Haskell, so to speak, was then to try to bring liquid types into Haskell. Was that the start, or was there something before?

RJ (0:06:23): Correct. That was the first contact I had had with Haskell. And I don’t know if I even – as I sort of mentioned to you earlier, there was a very brief earlier contact I had in the ’90s when – I mean, part of the reason I got into academic research in the first place was because I was, again, very lucky. I had some very good mentors in my undergrad institutions, Sanjiva Prasad and Arun-Kumar. And they sent me off to do an internship at INRIA. So it was a summer internship. I was there for a summer working with Professor Thomas Jensen. He’s at INRIA. And we were building a program analysis and I had no idea what a program analysis was. I learned all this stuff about data flow analysis and fixed points and equations, and so on. But one of the folks I was working with there was a Danish PhD student called Tommy Thorne, whose name has sort of – I’ve come across recently in a totally different connection. And the actual analysis that we were working on was being written by Tommy. And Thomas Jensen sort of pointed me to – I was in Tommy’s office and said, “Look, the actual analysis, Tommy is writing in this curious language called Haskell.” And I think it was even Hugs. This was not even like GHC at the time. 

So anyways, I was just kind of squinting at it. I was like, “It looks kind of like OCaml,” because I was familiar with OCaml. But yeah, I think the first programs I probably wrote were in the service of sort of Liquid Haskell as it were.

AL (0:07:41): Okay. But before we get into Liquid Haskell, perhaps we should talk a little bit about what liquid types even are or –

RJ (0:07:49): Right. I should say, I’m trying to – before we go there, I can’t remember if I started working on Liquid Haskell first, or there’s a sort of separate parallel thing where I was just reading about type classes and monads and so on. And at some point, I decided I would switch my graduate programming languages class to sort of just use – I can’t remember which of these things came first, but that was also kind of a parallel track that has very little to do with Haskell. I was just so smitten by all the cool things with type classes and monads and so on.

AL (0:08:17): So you’ve been teaching programming languages at university all this time as well? 

RJ (0:08:22): I’ve been teaching, exactly. So I taught an undergrad and a grad class. And again, the earlier versions were all OCaml, right? I would say till 2010, ’11. And then at some point, I switched it all to Haskell for various reasons. But partly, it was because I think all these nice things like type classes and monads and functional reactive programming, QuickCheck, this, that and the other – actually, all of them, I felt they sort of came out of Haskell. They’re sort of cleanest to express in Haskell, basic effect systems and so on. Anyway, we can talk about that. It’s a parallel track as to how I’ve been using Haskell as it were.

MPG (0:08:57): Yeah. I want to ask, because a lot of people don’t know, like what are liquid types? Can you summarize them for us, right?

RJ (0:09:03): Well, let me briefly try to summarize it. There’s sort of two ways that I think about this. But I’ll try to think about this from the point of view of a Haskell programmer, perhaps. Briefly, the Haskell type system is already quite fantastic. It lets you express all sorts of contracts as types. You sort of code it up as – I mean, you can use plain int, bool, string, but you can then start writing more constrained types. For example, using new types and so on and so forth that a function expects values only of a certain kind and so on, right? 

However, where I think Haskell, what it does not let you do, or does not let you do conveniently, I would say, is when you start requiring types which sort of relate multiple values. So what I mean to say is the Hello World example of this. So the simplest – it’s slightly boring, but the simplest example of this is, I have an integer or a key and I want to use that key to index some data structure. And so now, I mean, the integer on its own is – I mean, it’s an integer. It’s 7. Is it good or bad? Is it valid, invalid? I don’t know. It only makes sense when you sort of talk about it in the context of the data structure, a vector or a list, or whatever that you want to index. And so you want to start talking about types which relate multiple values. So I want to talk about the integer I and some, let’s just say vector v. And I want to say that I is a valid integer to use to index into the vector v. So that is say, I is – the value of I is between 0 and the size of v, right?

And so the kind of thing that liquid types let you do is to sort of start essentially taking your plain Haskell types like int, and then constraining them with sort of logical constraints of the following form. So for example, instead of just saying, “I is an int,” I might say, “I is an int such that I is greater than zero and I is less than size of v.” And this would now let me start writing much more precise types using these kind of logical constraints and still get a certain degree of sort of static checking, right? So the goal is, you see, you want to write these types, but I don’t want to leave them as things that are sort of tested at runtime. I want to be able to kind of statically guarantee that, “Oh, this function was supposed to take as input a vector and an index I, which is valid for that vector.” Well, at all the call sites of that function, in fact, my sort of liquid type checker can prove to you that, “Oh yes, the vectors that you passed in and the indices that you passed in are valid for that vector.”
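To make the indexing example concrete, here is a sketch in Liquid Haskell’s annotation syntax (the function name `at` is illustrative; `len` is Liquid Haskell’s built-in length measure for lists):

```haskell
-- A list-indexing function whose liquid type relates the index i
-- to the list xs: callers must satisfy 0 <= i < len xs statically.
{-@ at :: xs:[a] -> {i:Int | 0 <= i && i < len xs} -> a @-}
at :: [a] -> Int -> a
at (x:_)  0 = x
at (_:xs) i = at xs (i - 1)
at _      _ = error "unreachable: ruled out by the refinement"

-- ok  = at [10, 20, 30] 1   -- accepted: 0 <= 1 < 3
-- bad = at [10, 20, 30] 5   -- the liquid type checker rejects this call
```

Under plain GHC the `{-@ ... @-}` annotations are just comments; it is the Liquid Haskell checker that verifies every call site against the refinement.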

And once you start with this, you can sort of bootstrap this. You can – I mean, there’s a kind of game that you have to play. You want to make the constraints expressive so that you can say more interesting things about your types, but you don’t want to make them too expressive because I also want sort of fast algorithmic checking. And so a lot of the liquid types is, how do you kind of thread this needle? How can you say more and more things while still keeping the checking kind of decidable using something called an SMT solver, which is really what old-fashioned logicians call decision procedures? But yeah, that’s, in a nutshell, what liquid types are, and you can start encoding various domain-specific requirements.

AL (0:12:10): Are liquid types exactly the same as refinement types or is there a difference? 

RJ (0:12:14): So, liquid types are very much a form of refinement types. I mean, broadly speaking, refinement types, you start with like int and you refine it, somehow you constrain it, right? And so there are various different ways in which you can do the refining. So in this particular instance, what we’re talking about is, the refinements are these kind of logical predicates that are drawn from this kind of SMT-decidable logic, as we call it. And there is an extra aspect to the liquid part which is different from sort of classic refinement types, quite important in my opinion and in practice, which is that there’s a fair bit of type inference that goes on. As Haskell programmers, we all love sort of type inference because, I mean, for the most part, you only have to write down types where you kind of need to write them down. For the most part, right?

And the trouble when you have these kinds of fancy type systems, refinement types or dependent types and so on, as I think Benjamin Pierce, who wrote the sort of textbook on type systems, sort of puts it quite nicely in a talk he gave. He says, “The more interesting your types get, the less fun it is to have to write them down everywhere.” And so, liquid types have this additional thing where they are geared so that in a lot of places, you can sort of synthesize what the refinements are quite efficiently. I mean, we could get into the specifics of how that happens, but a lot of it is sort of set up in a fashion so as to facilitate that. 

The kind of place where you would want this quite a bit is at the interface of sort of polymorphism and refinement. So for example, if I have an identity function, a classic identity function, right? Lambda x.x. Hindley–Milner is fantastic at saying, “Oh, you took id of 3, and so at this place, I’m instantiating –” let’s say I knew that 3 was an int. I know that id is polymorphic. And so I just instantiated that a, the a → a. I instantiated that a with an int, right? But now if you are in refinement land, what is the type that I’m instantiating the a with? Because I could say int, but that’s a bit unpleasant because then it means the output I get from id is an arbitrary int. I’ve lost the fact that it was in fact 3. You see what I’m saying?

AL (0:14:25): So you could instantiate to an integer that is exactly 3.

RJ (0:14:29): Exactly. So you want to somehow be able to instantiate it to an integer that’s exactly 3, correct. But this gets more complicated when, for example, instead of – if it was not just 3, but if it was some function that returned some random value that happened to be 3.

AL (0:14:45): Right. You’re right. Or what do you do with a list that contains 1 and 3? 

RJ (0:14:49): Exactly so. And so that’s an excellent – actually, that’s a very nice example, Andres, if you have a list that contains 1 and 3, and I’m sort of mapping over this. So the map a → b, what is the type of a now? It’s not just int. I don’t want to lose that fact. So what can I say about that particular a? And so the liquid type system is sort of set up so that you can sort of – this instantiation happens without the programmer having to spell it out that, “Oh, I’m instantiating a with ints that are either 1 or 3.” That would be really tedious to use, right? So anyway, a lot of the liquid typing is essentially refinement types plus some extra sort of machinery to synthesize the refinements quite invisibly and predictably.
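A small sketch of the instantiation point being made, assuming Liquid Haskell’s type-alias syntax (`Pos` is an illustrative name):

```haskell
-- Pos abbreviates integers refined to be strictly positive.
{-@ type Pos = {v:Int | v > 0} @-}

{-@ xs :: [Pos] @-}
xs :: [Int]
xs = [1, 3]

-- Mapping over xs forces an instantiation of map's type variable.
-- The liquid type checker synthesizes the refinement at that site
-- (here, roughly "v > 0"), so the positivity of the elements is not
-- lost -- and the programmer never writes the instantiation down.
{-@ ys :: [Pos] @-}
ys :: [Int]
ys = map (+ 1) xs
```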

AL (0:15:30): Yeah, as you already mentioned it as well in one of your explanations, could you also contrast liquid types a bit towards dependent types? Because I think there is also quite a bit of an overlap, but they’re nevertheless at least used differently.

RJ (0:15:47): Yeah. I mean, broadly speaking, again, you can think of liquid types as a kind of special case of refinement types, and refinement types as a special case of sort of dependent types. They’re all sort of dependent in the sense that I want the types to depend on values. If I want to have – instead of just saying int → int, I want to be able to name the input integer, right? Integer x and the output int can somehow depend on that, which is to say, refer to the input x. So maybe it’s greater than x. Maybe it’s x + 5. Who knows what? So in that sense, they’re related.

The main difference, I would say, is in the kind of underlying machinery, at least in my head. The underlying theory is totally different, the way these work out. And specifically for refinements, I think there are various technical terms that people use for this. But long story short, we are expressing all the information using these SMT constraints. So you just have this mass of SMT constraints that is expressing the constraints. With dependent types instead, sort of the main machinery that you use for checking is a form of unification. So in a sense, what you do is you say, “Okay –” I’m being very wooly here. You essentially have sort of underlying terms. So I would say int, for example, x int. Suppose I just want to say it returns an integer, the value that is greater than x, right? Or returns the successor of x. So it’s almost like, I would say, take as input n, which is, let’s just say a Nat, and return as output literally the successor of n. And now at the sort of uses, at places where – so for example, in Agda and Idris and so on, in Coq where you have dependent types, the checking there works by trying to unify these indices at various places. And that turns out to be very, very effective too, but it’s sort of effective in different places than where the SMT solver is effective.

So, specifically, the kind of canonical example of where the SMT solver is better is, it’s like when you represent things in classic dependent types, the term x + 0 is not the same as 0 + x, and you have to do a bit of work to show that these two things are the same. Whereas if you represent everything in SMT solvers, well, these are just the same because it’s the theory engine. The SMT’s decision procedure for it is the thing that takes care of it. 
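As a sketch of that canonical example in Liquid Haskell syntax: the refined unit type below states the proposition, and the SMT solver’s linear-arithmetic theory discharges it with no lemma or rewriting (the function name is illustrative):

```haskell
-- The proposition x + 0 == 0 + x lives in the refinement; because
-- linear arithmetic is decided by the SMT solver, the "proof" is
-- just the unit value -- no induction or explicit lemma needed,
-- unlike in a classic dependently typed setting.
{-@ plusZeroComm :: x:Int -> {v:() | x + 0 == 0 + x} @-}
plusZeroComm :: Int -> ()
plusZeroComm _ = ()
```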

The really cool thing that I’m very envious of with Coq and Agda and Idris and so on is that because you use unification, you can do these nice tricks with holes, as you are familiar with, right? You can just leave a hole, and the unification engine will go out and say, “Oh yes, you said successor of underscore. Well, that underscore has to be the n.” That is much harder to do with SMT. And anyway, there are sort of pros and cons of this nature.

MPG (0:18:24): So in the liquid types, you can kind of synthesize the types but not values.

RJ (0:18:29): That’s a very nice way to put it. Yes, you can synthesize the types, you can synthesize the refinements, but it’s much harder to kind of – we don’t have a good story for synthesizing the values. It’s difficult because you have this amorphous SMT blob and you have to kind of squeeze a value out of that. And that’s less obvious how to do. And it’s much easier when you’re using unification. You’re like, well, I have these tricks, like unify against this stuff and give me whatever value, right?

AL (0:18:53): So before I got into contact with Liquid Haskell myself, my only experience with SMT solvers was for solving certain kinds of problems. And they always felt a little bit like sort of a magic box to me where you try to throw a problem at it, and sometimes it solves it for you and sometimes it says, “I don’t know.” And this sort of “I don’t know” is something that you probably don’t really want your type checker to say. If I understand correctly, you’re taking great care that what you’re actually posing to the SMT solver always gives you a meaningful result or –

RJ (0:19:35): Yes. So I mean, I was just rereading an old blog post that I wrote about this subject where I use the word “fanatic” to describe ourselves. So I feel it’s okay. It’s a kind of self-deprecating thing. So one of the kind of design decisions, or maybe – since it’s a Haskell podcast, I should say the “hair shirt” that we have been wearing is to ensure that all the queries that Liquid Haskell or the liquid type checker makes to the SMT engine fall in the decidable fragment. So let me rewind a little bit. SMT solvers are amazing, but they are a much older technology than people think. So I mean, they sort of come out of the work done in the sort of mid ’60s, early ’70s, where folks like Greg Nelson and Derek Oppen and so on were building – essentially, it’s called the Stanford Pascal Verifier, right? So Greg Nelson, if you look at his PhD dissertation, it’s called Techniques for Program Verification. And it’s mostly about building Simplify, which was the kind of proto-SMT solver. And so the way these were all set up was you have a bunch of sort of very nice well-behaved theories—the theory of equality, linear arithmetic, so on and so forth. And then Nelson and Oppen and then various other people showed how you can start combining these theories to get decision procedures for these so-called combined theories, right?

And then in the early 2000s, or late ’90s I should say, there was this kind of SAT revolution where people figured out these really nifty ways—Chaff, the kind of prototypical one, where they figured out these really cool ways in which you could sort of solve propositional SAT queries very, very efficiently. Again, heuristically obviously, it’s sort of – I mean, it’s the canonical NP-complete problem and so on. And SMT was this kind of – what shall I say? Rebranding of a lot of the Nelson and Oppen and all of this entire line of work to sort of make it fit nicely so that you could sort of use SAT solvers as the underlying propositional machinery, right? You have SAT solvers at the bottom, Nelson-Oppen on top, let’s just say, and so on, right? 

And then over time, people started, “Why don’t we add quantifiers, or why don’t we add this?” And then you could start adding different theories and you could do amazing things. The point of this is just to say that SMT solvers are quite magical, but this kind of unpredictable behavior is largely if you go beyond the bounds of these decidable theories. So there are certain theories that are very, very well-behaved. I mean, they all have this exponential SAT thing under the covers, right? But setting that aside, a lot of the other stuff is actually pretty well-behaved. I mean, Z3 or CVC, they will give you an answer if you pose queries in this fragment. So a lot of Liquid Haskell has been designed to ensure that it always stays in this fragment. And this fragment basically means you can’t use quantifiers. You cannot use any kind of quantifiers. I mean, there’s a very special kind of quantifier that you can use, which is sort of right up at the top. You can have a bunch of “free variables” in your formula. And so your formula is always, for all x, y, z, does p imply q, right? They’re all queries of this form. As long as p and q are from certain sort of good theories, life will be good.

So a lot of the work in Liquid Haskell has been, well, we’re wearing this hair shirt. So several other sort of SMT-based verifiers do not have this kind of restriction, right? And if you talk to their designers, they think this is a kind of nutty thing to do. And honestly, maybe they’re right. But like with the hair shirt, I think interesting research has come out of just forcing us to stay in this.

AL (0:23:18): The other tools just let it happen?

RJ (0:23:20): It was just like, “Go for it. Put in for alls.” It’s like, “We’ll do our best.” So Dafny is the kind of canonical one. F* is another one, from Microsoft Research. F* even uses a lot of refinement types ideas, but you can use for alls, and you could just do that. And there you have this thing where you have Maybes, where the SMT solver might say Maybe. Now, we didn’t want this Maybe for two reasons. One is it’s just kind of a philosophical thing. It’s like, I don’t like this thing where it’s – a lot of the maybes are very dependent upon various heuristics in the SMT solver. And I don’t want to kind of – I don’t want to be tethered to those heuristics and say, “Well, it depends on which version of Z3 or whatever you’re using.” I want it to be very predictable.

The second is, remember, I wanted this business of inference. And in the course of checking a single Haskell program, it makes hundreds or thousands of SMT queries in order to figure out, at each of these instantiation sites, what the right predicates, what the right refinements should be. And if you have any unpredictability in that, I mean, the entire inference thing, it’s going to get magnified. So that was another reason why I wanted things to be very, very predictable, because I’m going to be making hundreds or thousands of SMT queries, and I need them all to just say the same thing every time, whether yes or no. So Dafny and F*, for example, do not do that, right? So they don’t do quite so much of this kind of aggressive refinement synthesis, as it were.

AL (0:24:53): For me, it certainly sounds like a sensible decision, but it also – I mean, I’ve been brought up in languages with decidable type checking and whatnot. So to me, it seems strange to give that up. So it’s very nice that Liquid Haskell has that. So you mentioned this talk that you gave at Microsoft Research where Simon told you that you should try to do it in Haskell. So that means that when you started working on liquid types, you did it for other languages, I guess, like for OCaml or for not even functional languages?

RJ (0:25:25): So, in fact, at that point, or by that time, the first PhD student that I had working on it was Pat Rondon. So his system was for OCaml. And then in the second part of his dissertation, Pat sort of built a refinement type checker for C. Which – I mean, I should say how this conversation originally began. I started with this thing of – I’ll sort of explain liquid types from the point of view of Haskell, but in fact, I arrived at liquid types from a very different point of view, which is, remember, my PhD work was in building this verifier for C called Blast. And so this was always at the back of my head. I mean, I remember this because it was a lunch conversation and this is when I was still in grad school. And so when we were building these model checkers, which I don’t want to get into the specifics of, but they sort of work in the Floyd-Hoare verification style, roughly speaking.

We had a lot of trouble with exactly the kind of example that you alluded to, Andres, is what if I have a list of things? List of 1, 2, 3, 4, and then I do stuff over them? Because back at that time, the only way you could talk about lists was to start using these universal quantifiers, right? You would say, “For every x, if x is an element of the list, then x is whatever greater than 0.” And as I have already alluded, for alls are kind of problematic for SMT solvers. It was very flaky and people were designing custom logics and life was really complicated. And so we would just throw up our hands. Well, the moment you have a list or a hash table, it’s like, “Okay, we don’t know. It’s like data inside the list.” And we were doing stuff for device drivers, so it didn’t really matter. But this was like, I mean, come on, how can you – if you want to verify software, you have to be able to say interesting things about basic data types like containers and so on, right?

And so at a lunch in grad school – I was with Professor Alex Aiken, who’s now at Stanford. He’s like, “Well, it’s funny that verification people have so much trouble with simple things like lists and higher-order functions. You can sort of generalize. It seems that types people have solved this problem already. I mean, they’ve solved it for decades. You just say list of a, list of int, and that’s that. You just go on, right?” And so that’s what really kind of put this seed in mind. It’s like, “Wait a minute, he’s right. Why are we trying to do everything in logic when in fact the type system is a logic? Instead of saying, for all x, if x is an element of the list, then x is greater than 0, what if I could just say, list of stuff that is greater than zero?” And so that is the kind of algorithmic path that really –
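The shift being described – from a quantified formula to a refined element type – might look like this in Liquid Haskell (a sketch; the names are illustrative, not the Blast-era encoding):

```haskell
-- Instead of asserting "forall x, if x is an element of the list,
-- then x > 0", the quantifier disappears into the element type:
{-@ positives :: [{x:Int | x > 0}] @-}
positives :: [Int]
positives = [1, 2, 3, 4]

-- Consumers get the per-element fact for free, and no universally
-- quantified formula is ever sent to the SMT solver.
sumPositives :: Int
sumPositives = sum positives
```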

So anyway, I wanted to see if this same method could be made to work back in C, having developed it in a functional setting first. So that’s around 2010, with my student Pat. There’s a lot of shenanigans you have to do. The nice thing about a functional language is you have a sort of a real type system to start with, right? And so in C you have to do all of this extra work where you have to build up something that is a type system, by which I mean something that’s giving you some invariants about the shape of data, to begin with. So anyway, these are the two things I had done prior to starting with Liquid Haskell.

AL (0:28:26): And then basically, I guess you took Simon up on his challenge and you started trying to do this? I mean, how did that work out? I mean, if that was the first time you actually ever really started writing code in Haskell and Haskell is still quite a bit different from OCaml, was it a pleasant experience? Did you have to reinvent everything? Did things just work?

RJ (0:28:53): It was a very pleasant experience. I mean, I think by then already, Cabal and all of this stuff was there – in fact, compared to OCaml, this was pre-Opam days. So I remember things being really like, “Wow, this is really easy. You just do cabal install mumble and stuff actually worked.” So I remember that I didn’t have – so some of it was of course teaching myself Haskell because I’d never – I’d sort of only heard about monads and things like that at the back of my head. But I mean, a lot of these things, because I mean, I’ve done all this work with program analysis, they sort of fit nicely onto things I knew. I still – let me take the opportunity. I always like to promote Graham Hutton’s monad tutorial. I really like it. Somehow, I thought it was great. It’s like, I don’t know what’s the problem. It’s fantastic. I just read it. It mapped very nicely to something in the program analysis literature called summaries, function summaries. And you just think of a monad as kind of composing these transforms or relations. But anyway, that’s an aside.

Yeah, I enjoyed all of this. I mean, the early days, if I recall, I think this is before Niki showed up. So Niki is Niki Vazou, who several of you know, of course. And so I think I had some kind of very prototypy thing that I was just sort of hacking up, which was mostly porting over a lot of the OCaml code. It was also – I remember, because we were working on GHC Core, so one of the things Simon said you should do is, “We’ve done all this work in building this IR called GHC Core and you should try and build on top of that.” And that was a very pleasant experience because it’s basically the Lambda calculus, right, with a few other things. So yeah, a lot of it was sort of spelunking around the insides of GHC. It was quite pleasant.

And then Niki showed up and I think she was very keen to work on this. So for her, she’s like, “Yes, this is exactly the thing.” And so she showed up and then she had this – the first paper we wrote on Liquid Haskell was actually – we sort of cheated in that we pretended Haskell was call by value, because we didn’t sort of thread that needle for a while. But it’s a paper that was in ESOP 2013, which I really like. It’s one of those papers, I feel – some papers, you’re like, “Oh, whatever. It seemed like a good idea at the time.” But this one just seems better and better as time passes, where Niki invented this. And the curious thing about this paper: with my students four or five years before that, we had written a paper with two or three different mechanisms that all seemed kind of ad hoc, and they seemed totally unrelated. And Niki had this very, very clever idea: “Wait a minute, these are actually the same thing.” And they’re this very general thing, what we call abstract refinements now, which let you sort of quantify over refinements, where instead of just saying it’s a list of natural numbers or a list of even numbers, you can just say it’s a list of integers such that some predicate P holds of them. And you can sort of say, “For all p.” And you can do all this in a decidable manner without kind of going off the rails, in a very clean, predictable way.
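
A minimal sketch of the idea, using approximate Liquid Haskell annotation syntax (the annotation lives in a special comment, so GHC sees plain Haskell): the refinement `p` is abstract, so a single signature covers “max of two evens is even,” “max of two naturals is a natural,” and so on:

```haskell
-- Approximate syntax, for illustration: p is an abstract refinement,
-- quantified over, and the result satisfies whatever p the inputs do.
{-@ maxInt :: forall <p :: Int -> Bool>. Int<p> -> Int<p> -> Int<p> @-}
maxInt :: Int -> Int -> Int
maxInt x y = if x >= y then x else y

main :: IO ()
main = print (maxInt 4 10)  -- 10
```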

AL (0:31:49): So this kind of quantification does not map to a quantification that is passed to the SMT solver?

RJ (0:31:54): No, it does not. Exactly. The SMT solver never sees any of this, even though it’s a kind of higher-order specification. But this is the kind of thing I was referring to where we were just forced by – “No, we –” anyway, she came, and that was the first Liquid Haskell paper. And then after that, we wrote several others as part of her dissertation. But yeah, it was very pleasant. Working with Haskell was just fantastic.

AL (0:32:19): So you briefly mentioned that in the beginning, you pretended that Haskell is call by value. I mean, like many, many people do sometimes, I guess. But at the same time, it’s also interesting, I think, because typically when you think about the type system, you wouldn’t think that the evaluation strategy even matters. So why is that an issue?

RJ (0:32:42): That’s right. So when we started off, we were like, “Well, surely it doesn’t matter. Let’s just pretend it’s call by value and let’s go on our merry way.” But remember, unlike the classic sort of Haskell type system, we are relating multiple values, right? So now, let me give you the simplest example of this. Consider the following program. I mean, in retrospect, it seems obvious, but at the time, it was like, “Whoa, what happened here?” Imagine I have the following piece of code. Imagine I have a function called diverge, okay? So diverge takes as input unit and simply calls diverge of unit. Okay? So it just diverges. Okay. Now consider the following piece of code. Let x equals diverge unit in 5 divided by 0. Okay? So I mean, we don’t have text here, but in your head, maybe you can imagine. Let x equals diverge of unit in 5 divided by 0. In OCaml or a call by value language, the division by 0 never happens, but in Haskell, it happens. Okay? And so now, if you pretend that Haskell is call by value, then you could essentially unsoundly say that, “Oh, this divide by 0 never happens.” And the reason you can do that is you can essentially give diverge the following type. You can say, “Diverge is a function that takes as input unit and produces an output unit such that false,” which essentially says it does not produce a value, which is correct. It diverges, it does not produce a value.
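
The example can be sketched directly in Haskell (the names are from the discussion; the exception-handling wrapper is added only to observe the crash):

```haskell
import Control.Exception (ArithException (DivideByZero), evaluate, try)

-- Never produces a value.
diverge :: () -> ()
diverge u = diverge u

-- Under call-by-value, the binding for x would loop forever and the
-- division would never be reached. Under Haskell's lazy evaluation,
-- x is never demanded, so the division by zero actually happens.
example :: Int
example = let x = diverge () in 5 `div` 0

main :: IO ()
main = do
  r <- try (evaluate example) :: IO (Either ArithException Int)
  print (r == Left DivideByZero)  -- True: the "unreachable" crash is reached
```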

However, the big difference between call by name and call by value is that when you have a let binding – I mean, let x equals e1 in e2 – in call by value, when you are evaluating e2, you can be guaranteed that the environment binding x is whatever e1 promised to be, right? And so in this case, if you gave diverge this peculiar type, unit arrow false, then the environment would essentially contain a contradiction – it would contain “x is a unit such that false,” which is to say you can assume false to prove that the division by 0 never happens. That is a perfectly sound thing to do in a call by value language, right? And this is why in classic Floyd-Hoare logic, which is kind of the foundation upon which all kinds of verification is built, you can make this distinction between what is called partial and total correctness. Partial correctness says, if I get to this program point, then this assertion will hold. I’m not telling you that I will get to this program point. So a Hoare triple, written {P} C {Q}, says: if you start executing a command C in a world that satisfies P, and you finish executing the command C and arrive at the end of C, then Q is guaranteed to hold. It does not say that you are guaranteed to arrive there, right? So you don’t have to prove that you will arrive there. But in the Haskell setting, you can’t do this, because every time you have a let, I need to know, is that x really going to be in the environment or not, or is it one of these diverge-type deals, right?
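
In standard notation, one common way to state the contrast (one of several equivalent formulations, writing ⟨C, s⟩ ⇓ s′ for “C started in state s terminates in state s′”):

```latex
% Partial correctness: IF C terminates, Q holds at the end.
\{P\}\; C \;\{Q\} \;\triangleq\;
  \forall s, s'.\; (s \models P \;\wedge\; \langle C, s\rangle \Downarrow s')
  \;\Rightarrow\; s' \models Q

% Total correctness additionally demands termination.
[P]\; C \;[Q] \;\triangleq\;
  \forall s.\; s \models P \;\Rightarrow\;
  \exists s'.\; \langle C, s\rangle \Downarrow s' \;\wedge\; s' \models Q
```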

So anyway, this is the difference. I don’t know if I’ve somewhat answered your question, Andres, but this is why this is a peculiar thing.

AL (0:35:52): All right. So, Liquid Haskell happened. Thanks to you and Niki and probably some others. What about Simon’s measure of success? Is it being used and does it prove that liquid types are useful in general?

RJ (0:36:08): I think it’s – that’s a good question. I think it’s very much – we’re not there yet. There are various people who’ve tried to use it, and there are various people who’ve sort of successfully used it for various things, but I would like it to be more used. Let me put it that way. And I suspect there are various reasons for it. I mean, some of it is simply that the engineering around it is a little, let’s just say opportunistic, or shall we say research quality. I think you are familiar with some of this. So for a long time, one of the big difficulties was, well, I mean, it was this separate tool and you have to download this tool and run it and there’s some janky mechanism to do that. And so, actually, Andres and Alfredo at Well-Typed, they were able to help us to sort of create this GHC plugin, which I thought was a big – that was a big thing because it made it a lot easier. Once the whole GHC plugin ecosystem was there, it made it a lot easier to run Liquid Haskell on sort of general Cabal packages, right? And also to sort of ship specifications as it were. But that was a relatively recent thing. 

The challenges with it are, of course, Niki’s graduated and she’s a professor now. And so how do you sustain this kind of development long term? There are various other people now. For example, Well-Typed helped us, of course. There’s a bunch of folks at Tweag, I should get the name right. Facundo Dominguez and several of his colleagues have been sort of trying it out and helping us in various ways, sort of cleaning up lots of parts of Liquid Haskell. In my opinion, one of the major hassles with it is still things like error messages. They’re a bit kind of gross because when it fails, you’re like, “Well, why did it fail?” And so anyway, that’s something that I would like to sort of improve significantly. 

Another thing that I think maybe Andres and I have talked about in the past – this is a kind of a technical thing, and I mean, there’s a difference of opinion here. So several people, I think, would like a much closer integration with GHC so that it just becomes even easier – so that it, as Simon likes to say, comes in the box, as it were. But I think this is a fairly substantial kind of engineering commitment, and I’m not sure what the easiest way of making that happen is. However, one thing that’s been floated – and Andres again helped with a lot of these early discussions – is to somehow rig it so that at least the Liquid Haskell specifications are not comments. Comments were useful as a kind of demonstrator, but perhaps you can at least get GHC to do some of the parsing of these specifications. Anyway, maybe there’s some more gradual way to bring these things together. One of the things I have – yeah.

AL (0:38:47): Oh no, sorry, I was just going to say, I mean, there are I think two different sides to this, right? There is sort of the user perspective. I mean, users would benefit from the integration directly because the syntax could be easier, right? At the moment, lots of the Liquid Haskell stuff is disguised as Haskell comments, but Liquid Haskell somehow sees it. And in particular, for the type signatures that you mentioned – the refinement types versus Haskell types – you often need to write both, and then you have a certain amount of duplication. But also, I think the tighter integration could help because at the moment, Liquid Haskell is essentially tied to a particular GHC version, right? And the developments are decoupled from each other. So that means that when a new GHC version arrives, at first you don’t have Liquid Haskell for it, and there has to be a specific porting effort. And of course, if they were living together, then it would – well, not automatically happen. That’s the big issue, right? I mean, somebody would still have to maintain it, but it would at least be kind of clear that it has to happen and when it has to happen. And I think both these things would in principle be desirable, but yeah, it’s quite a bit of work.

RJ (0:40:06): It’s a lot of – right. But nevertheless, I mean, the thing I’ve learned – I mean, as you are probably all familiar with, even the smallest amount of friction to be able to use a tool essentially means you don’t want to – unless it’s really kind of, it just works out of the box, as Simon says. It’s much easier if it just works out of the box, right? I think the classic example of this, I would say, is things like Copilot where I don’t have to think about it. It’s just like, it’s in my editor and just does stuff, right? And it’s really quite neat.

AL (0:40:39): Yeah. But the results are rather cool. I mean, it is nice to have it running, right? And it’s impressive what you can do with it, from my own experience. In particular, you can kind of compare the experience to programming in, say, a dependently typed language. I mean, as you said, there are some advantages of those, in particular when it comes to sort of the synthesis of values, or just being able to write underscore somewhere and it just works. But for the synthesis of proofs, I think, it’s the opposite, right? It often just works, right? So you can have rather complicated properties and the SMT solver just does it, right? And no hard work of spelling out a proof in a hundred separate steps.

RJ (0:41:26): Yeah, no, it’s really nice, I have to say. I mean, it’s very pleasant. I mean, part of the reason, at least someone like me, but maybe I can speak for you also, we like Haskell so much. It feels like this kind of conversation, right, the type and the thing of the compiler gives you a little, “No, no, no, not here.” Like, “Oh yes, yes.” Of course, you go and edit it and it really feels – I want it to feel like that, but with fancier types. I don’t want to have a conversation where the programmer has to do too much of the talking. You know what I’m saying? Just enough that, you know – yeah. 

MPG (0:42:01): Right. So I want to go back to some of the things you mentioned at the start, because you do a lot of teaching, right? So you reworked the courses to be all in Haskell. So I wonder, how did that go? And then, do you teach them Liquid Haskell? What happens there? How do we know that people who don’t know anything about Haskell can just start using these liquid types or –

RJ (0:42:26): So that’s an excellent question. So for the first: I want to say starting in 2015, 2016, over several years – three, four, five editions – I have taught like a one- or maybe two-week sort of segment using Liquid Haskell. And then they have a pretty substantial programming assignment. And I’m astonished by how much – and this was pre-LLM days, right? So everybody wrote the specs themselves. I’m quite impressed because – I’ll tell you one of my favorite examples. I had a good time making it up, and it’s in some tutorial somewhere that I wrote. We write a small imperative language – the usual while loops, assignments, branches, statements, and so on. The sort of thing you use to explain how – I think maybe Graham Hutton had this, or somewhere – you sort of build up a series of interpreters to explain the state monad, and you add exceptions and so on, classic. But part of that is you want to then check if the program has undefined variables. By undefined variables, I mean, where you use a variable before you have defined it, right? And you could see this is a kind – I have this syntax tree, which is my program, and the parsing is separate. And you want to write a little refinement that basically describes a program where every variable is defined before it’s used. Okay? And it turns out you can do this. This was one of the exercises. I thought very few people were going to be able to pull this off, but I was shocked that, the year I had them do these exercises, they did it for a large fraction. So you sort of write this thing where these are the variables that are defined, these are the variables that are used.

Anyway, there’s a way in which you could sort of specify this quite compactly. And the cool thing is you can actually even – if you give me an arbitrary program, then of course, I mean, it may be ill-formed. You may have written some rubbish. And so, you can then write a runtime check that says, check well-formed. And then essentially, what you do is you call this check well-formed, and if check well-formed returns true, then you run your interpreter on it, right? And now you can guarantee that your interpreter will never crash in the sense of having to look up the value of a variable x that’s never been defined because of the refinements. Anyway, it’s very pleasant. Lots of students were able to do it. So I was quite happy with that.
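
A minimal sketch of the runtime-check version, with made-up types and names (in the actual assignment the property is expressed as a Liquid Haskell refinement and verified statically):

```haskell
-- A tiny imperative language: expressions and statements.
data Expr = Lit Int | Var String | Add Expr Expr
data Stmt = Assign String Expr | Seq Stmt Stmt

-- Variables an expression uses.
usesE :: Expr -> [String]
usesE (Lit _)   = []
usesE (Var x)   = [x]
usesE (Add a b) = usesE a ++ usesE b

-- checkWF env s: the variables defined after s, or Nothing if some
-- variable is used before it is defined. An interpreter run only on
-- programs that pass this check never fails a variable lookup.
checkWF :: [String] -> Stmt -> Maybe [String]
checkWF env (Assign x e)
  | all (`elem` env) (usesE e) = Just (x : env)
  | otherwise                  = Nothing
checkWF env (Seq s1 s2)        = checkWF env s1 >>= \env' -> checkWF env' s2

main :: IO ()
main = do
  print (checkWF [] (Seq (Assign "x" (Lit 1)) (Assign "y" (Var "x"))))  -- Just ["y","x"]
  print (checkWF [] (Assign "y" (Var "x")))                             -- Nothing
```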

To return to your broader question of how it went teaching Haskell, or rejiggering the classes to teach Haskell – actually, I have a bit of an anecdote there I should tell you, because Niki is also involved. So I started maybe 2012, ’13, ’14 – I forget when it was – I sort of jettisoned my old class, which was very formal, semantics heavy, and I switched it to this stuff. And I was teaching the sort of classic progression – Lambda calculus and then types and then monads, transformers, this, that, and the other. And several of the students said, “Professor, it’s all well and good. We have various assignments, QuickCheck and all that, but you should give us some more substantial projects to do.” And so I said, “Yes, yes, I know I should,” and kept putting it off.

And then Niki graduated and she went to the University of Maryland, where she was a postdoc, and she taught a class. It was an advanced undergrad class. And so she had them do projects. And it was self-selected. So there were like 35 or so, maybe 40-odd students – it’s an elective class, right? And she showed me, “Look, Ranjit, these students are doing these projects.” And so some of them were Liquid Haskell. Actually, there were some very good students who did some astonishing things. I think one of them got turned into a paper, but some others were just like, build something in Haskell, build something cool, right? And she’s like, “Look, so-and-so has built this little GUI here.” And so I said, “Well, that was still 35, 40-odd students. So I don’t know.”

But then over COVID, I think I was teaching on Zoom, and I was very frustrated, “Oh, how are we going to do exams?” And so I said, “Okay, every year, I keep putting off this project, but this year I’m going to make a project. Enough is enough.” And the project was – I had 200 students. This was a graduate sort of class with 200 students. And I was just like, “I’m not going to make an exam. I’m just going to break the students up into groups of four or five, and you have to do a project. And the project, here’s the spec for the project. It’s a command line application. I don’t care what it is. It’s a command line application. You have to give a 15-minute demo of it, so it better be something fun.” And by way of starter code, I pointed them to this very nice library that maybe some of you already know. It’s called Brick, if I remember. It’s basically like some sort of wrapper around the various TTY low-level libraries, but it gives you a very nice declarative way to give sort of nice layouts and so on.

And so we broke those students up into these – 200 students, so maybe there were 50-odd groups, 45 to 50-odd groups. And the only spec I gave them was like, “Make it happen.” And I had some intermediate milestones, like, “Give me your plan. You need to give me a proposal.” I stole this from Niki. Niki said, “No, no, you must make them give a little one-page proposal of what they’ll do, or they’ll leave it till the last minute.” I was just blown away by what people produced. I feel I should – I just don’t have the time – but I would like to turn that into like a two- or three-minute clip. Because when I was seeing these demos – there was me and four other TAs, and because there were lots of groups, we sort of broke it up and I had like a three-hour session. I gave each group 15 minutes to give their demo, and I looked at the code. And I mean, we’re like, “Oh, teaching Haskell is so difficult and monads are hard, and we teach you all these things. I’m showing you Graham Hutton’s notes and all that.” And then I see the actual demos. I’m like, “Wow, if you could produce this, what have I even been teaching you?” To me, it was the most kind of jaw-dropping, humbling – really in the humbling sense of the word. Like, I feel I’m completely superfluous because of the quality –

I’ll give you examples. This one guy, he was like, “Oh, professor, I must apologize. My project is very weak. It’s very –” I mean, just lots of excuses. “My teammates fell sick, blah, blah, blah. It was all on my own.” I said, “Okay, okay, just show me your demo. Let’s see what you have.” So, what did he do? He’s like, “I built a small Emacs clone.” I said, “What? You built an Emacs clone.” He said, “Yes.” “And does this Emacs clone have a – you know, in Emacs, you can customize it. You know, an alias?” I was like, “Does your thing have an alias?” He said, “Of course, professor. Of course, it has an alias.” And he’s like, “Yeah, you can, but can you not?” This guy opens this up. I said, “Wait, did you really do this? Can you write me a macro that – I don’t know, I made something up on the spot – replaces every age in the document with Harry Potter or something like that.” He’s like, “Of course, professor, I can.” And then he said, “Let me show you the whole thing.” And I was like, “Wow, what is this? What is this funny symbol over there?” And this young fellow explains to me, “Professor, there’s a library called Lens. It’s very useful. You must use it.” And this was just one of them. I was like, “Oh, maybe one student has done this.”

I kid you not, of the 15 projects, at least 12 were just phenomenal. People made all kinds of really cool games. Somebody did this thing like an online flight tracker, and it was all in ASCII. So they had the map of the US and they sort of asciified the whole thing. It was just amazing. And it’s like, if in a 10-week class, I teach you like 20% of what you need for these. These people were all operating at like 150%. Anyway, it was funny.

AL (0:49:39): But what’s the lesson here? I mean, is it just that you have very smart students, or is it that you should give projects to students because then they’ll excel? It’s not anything to do with Haskell, right?

RJ (0:49:52): It would very much seem to be, because I think – I mean, these were 200 students who basically have to take this class to graduate. It was not like a highly self-selected group of 12 motivated programming-language, functional-programming enthusiasts, right? I think it was essentially some combination of all of the above: this library was really fun, and I think the students had a good time working with it. Its types are complicated – I mean, it’s not like Fibonacci or factorial, right? They’re doing really cool, fun things. But some of it is also that I think my expectations were just too low. I was like, “Well, let’s just have them do these very constrained things.” But the lesson, to me, is just: give students the leeway. I have been really holding people back by not letting them really take ownership of this. And there were 15 different things. I highly doubt they could have stolen them off the internet – like the Brick fellow, he then reached out to me, because I think he discovered that there were all these forks of his repository. But yeah, it’s been replicated several times. I mean, my colleague Nadia Polikarpova, she did a similar thing and she says, “No, it’s true. It’s not just a flash in the pan.” So yeah, people can build quite astonishing things. Really fun little games and whatnot.

AL (0:51:13): Okay. So you started out as a researcher who had been having a lot of experience with functional programming languages, then you got into Haskell because of this, “Oh, try to do liquid types for Haskell challenge from Simon.” And now you are using Haskell everywhere. Have you tried to do other things except for Liquid Haskell and your courses and Haskell as well?

RJ (0:51:38): Yeah, there are various – so I guess Liquid Haskell is one of the things that I’ve been working on. There were several others – so Eric Seidel, again, is one; I think some of you might know him. Eric helped a lot in the early days with Liquid Haskell. But one of the things that he then started working on, which was a different project – he basically addressed how, when you have a type error, people sort of complain. It’s like, “Oh, well, type errors are hard to explain and so on.” And so one of the things Eric did for his PhD was to show that, you know what, often when you have a type error, you can actually show a concrete execution of the program, at the end of which you see the bad thing happen, where you try to add an int and a string. So it’s not like the type checker is being silly and just rejecting you. This was of course done for OCaml, and I had some early data where I had gathered the kinds of type errors students were making in our classes, right? Just as they’re programming in OCaml. So Eric showed that – if I recall the numbers, about 90% of the time, in about a second – you can produce these concrete witnesses that show, no matter what the unification error is, if you try and run your program on these inputs, you will see the horrible thing happening down there, where you take one and you add a list to it or something like that. So that’s one example.

But I guess different students often have different things that they want to work on. So there was Niki and the Liquid Haskell stuff, but Eric went off in this direction. Another direction Eric went off in is trying to harvest this kind of data to give more precise type error messages. So for example, often the compiler gives you an error message based on its best understanding of where the constraints are infeasible. But in these long data sets, where we had looked at how students then edited their code, we have the kind of ground truth, because I see how you edited the code. And so Eric came up with this clever way in which you can train classifiers to predict, “I know that the compiler says the error is on line 12, but really you should be changing the code that’s on line 42.” So, things like that. So that’s another example. And I have a student, George, for example, who’s sort of taken this and done lots of really nice things with it.

There’s another student right now – actually, Niki is also involved with that – Nico Lehmann, who started off working on a Liquid Haskell project, which I like a lot. It’s about building ORM-style web frameworks using Liquid Haskell. And I think it’s really nice – so we built a couple of these nifty little websites, which use – what was the underlying web framework? I can’t remember. Was it Yesod? It’s been a while. But anyway, the idea is that you can now start using refinement types to give very precise kind of security specifications as to who can access which bits of the data. So like in Yesod, with Persistent, you sort of declare the tables and the rows, but in addition, you can say, “Well –” for example, imagine I had a website that was about, I don’t know, courses, course management. Maybe I can say that only the student can see their own score, or of course, the instructor for that particular class can see the score, and only the instructor can update the score, things like that, right? And so Nico did this very nifty paper with several of my colleagues and with Rose on how you can write these very declarative specifications as part of, essentially, the ORM definition, and then have Liquid Haskell do all the checks. So it was very much in the style you referred to, Andres, where there was a lot of very sophisticated proving going on, but the user never saw any of it because it just worked out of the box.
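
The kind of policy being described can be sketched as a plain Haskell predicate (hypothetical names, not the actual syntax of Nico’s system, where such a predicate would be stated as a refinement on the table definition and enforced statically over every query):

```haskell
type UserId = Int

-- A hypothetical row of a course-management database.
data Score = Score { scoreStudent :: UserId, scoreValue :: Int }

-- Policy: only the student themself or the course instructor may read.
mayReadScore :: UserId -> UserId -> Score -> Bool
mayReadScore viewer instructor s =
  viewer == scoreStudent s || viewer == instructor

main :: IO ()
main = do
  print (mayReadScore 1 9 (Score 1 95))  -- True: reading your own score
  print (mayReadScore 2 9 (Score 1 95))  -- False: someone else's score
  print (mayReadScore 9 9 (Score 1 95))  -- True: the instructor
```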

But Nico has recently been working on trying to apply this idea to Rust. And so he has also gotten me writing some Rust code, which is quite an interesting experience. So we have a system called Flux where, again, we’re trying to go back to this thing of making refinements pleasant. So often, when I give talks on refinement types, people are like, “Yes, this is fine. This works in Haskell, it’s a pure language. What about in an imperative language?” And so I mentioned this early work with C where we had to start with this low-level type system and you sort of keep doing this stuff and then you can layer refinements. And all of it just felt very heavy and unpleasant, in a way that Liquid Haskell just feels pleasant. It’s just like, you write the thing and it just works and you don’t have to write too much. It’s great.

And so now, finally, with Flux, I think we have a story for how refinements work in a very imperative setting with like loops and pointers and all that stuff where it feels like Liquid Haskell. By which, I mean, it just feels like there’s no extra cruft, but it’s also – again, I like to use another line from Simon. Simon likes to say, “When something is free, it just means you’ve already paid for it.” So in a sense, with Rust, the refinements come for free because you already did all this work with the borrow checker, right? You just get your program to type check with Rust, you’ve sort of done a bunch of work. Then now it turns out the refinements just fit really nicely and it’s very pleasant. So anyway, that’s something else that’s going on right now.

MPG (0:56:28): Yeah. So one thing that we kind of always ask about towards the end, how do you foresee the future of Haskell? What changes would you make? What changes are being talked about that you would like, and do you have any ideas of where you see us going in the next 5, 10 years?

RJ (0:56:46): That’s a tricky one. I mean, I can tell you what kinds of things I wish for. I mean, in general, it’s just such a joy to write Haskell, you know what I mean? It’s just, ah. It’s just so pleasant. I mean, there are little things like – I mean, even in VS Code, the editor tooling is so much better now. It mostly kind of just works. The one thing I feel envious about, compared to Rust, is just how fast the compiler is. And of course, I think if you talk to C++ people or whatever, they’re like, “Oh, no, no, the Rust compiler is very slow.” But anyway, compared to GHC, it’s very, very fast. I mean, I just think of the compile times for when I’m working on the Flux code base – I do cargo build and… done. And when I do cabal build in the Liquid Haskell code, it is like, “Ah.” So that would certainly be very high on my list. I saw this recent thing where someone has – there is some dramatic speed-up. I just saw this a few days ago. So anyway, I’m keen to go back to that.

What are other things? I mean, I would certainly like these kinds of fancier refinements and so on to be part of the language. I feel it’s one of those things where – I think what was amazing about type classes is that Phil and company wrote the type classes paper, but they could not have anticipated all the cool things people then started doing with type classes. At the time it was like, “Oh look, we want to overload plus and minus and equals, great.” That was the type class story. But since then, I really feel that was the key to unlocking so many other cool abstractions that were done really well in Haskell, because type classes made that possible.

So I’m curious about if we can get these fancier kinds of specification mechanisms into the types, what sorts of new abstractions they might facilitate, or what kinds of new capabilities they will allow in the kinds of code that you can write from then on, right? I mean, you’ve worked on some of this, Matti, right, where you sort of do a lot of this type-driven synthesis and things like that. All of that, but to make it work for real. I think those would be quite cool if we could figure those out over the next four or five years. But of course, just getting the compiler faster.

MPG (0:59:09): I know the feeling. I was watching some development stream and he was measuring – it took like 16 microseconds or something. It was like there were Greek letters I had never seen before because it was so fast.

RJ (0:59:22): But on the flip side, this is why it’s such a joy. I mean, I don’t know if enough people use this, but it just works now, and I’ve forgotten why it works. But in VS Code and various other editors, you can do this thing where you write – what do they call it? The doctest-style thing, right? The arrow, three times. And then you have some function – let’s just call it foo – and you want to know what is foo of 12. And you just put it in VS Code, and it says, “Evaluate,” and you click, and it’ll print out the answer, right? So it does this thing where it calls out to GHCi. You can’t do this in – I mean, because of the whole purity, this is easy – it’s just ADTs. And it’s great. This is why you can just click. I don’t actually have to recompile everything. So yeah, I miss that in other languages. And I use that a lot to teach too. It’s like, “Look, we can just test it out and see what happened. Look, it crashed.” And I’m just in the editor. I don’t even have to go into GHCi.
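
A minimal example of the style being described (`foo` is the made-up function from the discussion; the `>>>` comment is what the editor turns into an “Evaluate…” code lens):

```haskell
foo :: Int -> Int
foo n = n * 2 + 1

-- In an editor with the Haskell language server, clicking "Evaluate"
-- on the comment below runs the expression in GHCi and inlines the
-- result right under it.
-- >>> foo 12

main :: IO ()
main = print (foo 12)  -- 25
```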

AL (1:00:17): Yeah, I think it’s very nice. Although I think, at least in past versions, you could easily – if you call a function that diverges and eats up your memory, then there’s no way out. Because in GHCi, you can still press Ctrl-C, but in –

RJ (1:00:34): You have to reload the window. I’ve done it. I did it in lecture once, and there were some clever students like, “Well, professor, what happens if we try this?” “Well, let’s see.” And the little spinning wheel came up.

AL (1:00:48): The language server support is certainly amazing. And also the amount of feedback you’re already getting from the type system now, with the holes and everything.

RJ (1:00:57): Exactly. Yeah. It is just great.

AL (1:01:01): And compile times are certainly identified as something that is being actively worked on. And I think they have already improved, but there’s still a long way to go.

RJ (1:01:12): I mean, the other thing is Hoogle – I always miss it in other languages. It’s just great. We’ve got just like – anyway, yeah.

AL (1:01:23): All right. Thank you so much. We’ve been covering a lot of ground. Thanks for taking the time.

RJ (1:01:29): Thanks very much, Andres and Matti. Yeah, it was a pleasure.

Narrator (1:01:35): The Haskell Interlude Podcast is a project of the Haskell Foundation, and it is made possible by the generous support of our sponsors, especially the Monad-level sponsors: GitHub, Input Output, Juspay, and Meta.
