WS: This is the Haskell Interlude. I’m Wouter Swierstra …
NV: … and I am Nikki Vazou …
WS: … and welcome to the second episode of the Haskell Interlude and we’ll be talking to Lennart Augustsson who’s been an active member of the Haskell community for the last 30 years if not more. And we’ll share some of the stories about his experience using Haskell in academia and industry. He’ll say a little bit about what it was like debugging the first Haskell compilers, the first commercial Haskell software that he managed to sell, and he’ll share a little bit about his perspectives on how the language is developing, and dependently typed Haskell. Welcome Lennart, really happy to have you here. So Lennart’s famous for a number of things. Amongst others, he’s worked on numerous Haskell compilers, and compilers for other languages; and just to warm up, I was wondering, Lennart, how did you get into Computer Science?
LA: Um, do you want the long version or the short version?
WS: Oh, the long version, we have all day.
LA: Okay. Uh, so when I was, I don’t know, 10 or 11, computers fascinated me, but I had of course no idea how they worked or anything like that. So when I was in school, maybe 15 or 16, I heard that the school was getting a computer. It was a PDP-8, which was an advanced machine for its day; programs were stored on paper tape. So I decided, okay, the school is going to get this computer, I’d better look into this. So I went to the library to see if I could find any books. And the only book they had at the moment that wasn’t lent out to anyone else was one called “FORTRAN for those who know ALGOL”
WS: It’s a great title.
LA: And of course I knew nothing about programs. I borrowed this book, and nothing in that book made any sense to me. I mean, that almost made me think this programming thing was not for me; I couldn’t understand a thing. So luckily, the next time I went to the library to return that book, they had a book about BASIC that I borrowed, and that made total sense to me all of a sudden. So when the school got the computer, I started writing BASIC programs. That was fun.
WS: And then you went on to study Computer Science, I guess?
LA: Yes. Well, no, so Chalmers didn’t have a Computer Science program, they only had Electrical Engineering. Linköping was the only place in Sweden that had Computer Science at that point, and I’d rather go to Chalmers. So I studied Electrical Engineering, but of course there were lots of Computer Science courses you could take; there just wasn’t a formal Computer Science education.
LA: So I studied that for a while. Uh, and then by the end when I got my Masters I was sort of undecided on what to do. So I figured, well, I’ll learn some more about Computer Science. So I started my PhD at Chalmers.
WS: And how did you then, I mean you started as an Electrical Engineer, but then you somehow picked up Haskell along the way, or how did you learn about functional programming?
LA: So, I should say, I was more interested in sort of systems stuff, parallel computing and those kinds of things. But one of the first courses when I started was a course on denotational semantics by Sören Holmström. And some of the exercises on that course, this was a PhD-level course, were to write little programs in David Turner’s SASL language, “St. Andrews Static Language”. Because, I mean, that’s basically denotational semantics: it’s an untyped, pure functional language, and it fascinated me that you could actually write programs in this style. I had heard rumors about this before, from before I started studying there, that there was some research going on at the Computer Science department into these languages that didn’t have assignment. And that made no sense to me. How could you even have a language where you could do anything if you didn’t have assignment?
So I saw SASL and I thought that was cool. And I found this paper by David Turner in Software Practice and Experience, I can’t remember the name now, it’s his famous paper where he shows how to translate lambda calculus into combinators, and the combinators were very simple. So I decided I needed to implement the language using that, and I did. Mostly, I think, I did it on my home computer; I wrote some stuff in C.
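For readers who want to see what that technique looks like, here is a minimal, hypothetical sketch in modern Haskell of Turner-style bracket abstraction: translating lambda terms into the S, K and I combinators and reducing the result by simple rewriting. The data type and function names here are illustrative, not Turner’s.

```haskell
-- A toy term language: lambda calculus plus the three combinators.
data Term
  = Var String
  | App Term Term
  | Lam String Term
  | S | K | I
  deriving (Eq, Show)

-- Bracket abstraction: remove one binder from a lambda-free body.
--   [x] x       = I
--   [x] y       = K y          (y /= x)
--   [x] (f a)   = S ([x] f) ([x] a)
abstract :: String -> Term -> Term
abstract x (Var y)
  | x == y    = I
  | otherwise = App K (Var y)
abstract x (App f a) = App (App S (abstract x f)) (abstract x a)
abstract _ t         = App K t   -- S, K, I contain no variables

-- Compile a whole term, eliminating lambdas innermost first.
compile :: Term -> Term
compile (Lam x b) = abstract x (compile b)
compile (App f a) = App (compile f) (compile a)
compile t         = t

-- Weak-head reduction of the resulting combinator terms:
--   I x     -> x,   K x y -> x,   S f g x -> f x (g x)
whnf :: Term -> Term
whnf t0 = go t0 []
  where
    go (App f a) args       = go f (a : args)
    go I (x : args)         = go x args
    go K (x : _ : args)     = go x args
    go S (f : g : x : args) = go f (x : App g x : args)
    go h args               = foldl App h args
```

The appeal Lennart describes is visible here: once every variable has been abstracted away, execution is just blind application of three rewrite rules, which is straightforward to implement as a graph interpreter in C, or, as he did later, as generated machine code.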
WS: And that was to implement, write your own SASL implementation.
LA: Uh, yeah, I wouldn’t call it SASL; it was a simple functional language. Actually, it didn’t have a name, but Bengt Nordström, who was my thesis advisor, said it must have a name, you have to give it a name. And so I named it “Simple”.
WS: Okay, fair enough. Yeah. And is that the first functional compiler you remember writing?
LA: Yes, yes, it’s definitely the first one. So this, uh, it could bootstrap, it could compile itself; it was like, I don’t know, 100 lines, maybe 200 lines, something like that, in this little SASL-like language. And it spit out a combinator graph.
LA: And it could also read in the same combinator graph. So it could bootstrap, but you basically had to copy and paste the output together to feed it back as the input the next time.
WS: Yeah. And then how did, I mean, so at the time you had SASL and then later Miranda, and then at some point …
LA: This was well before Miranda.
WS: Yeah, because Miranda was, for me, I think, one of the earlier lazy functional languages that really took off, that had pattern matching and data types and higher-order functions. So it was certainly similar to Haskell in spirit, I guess.
LA: Yes, but this was before those things were popular. So in between SASL and Miranda, David Turner did KRC, the Kent Recursive Calculator. Um, so after I had done that, my little “Simple” language, Thomas Johnsson and I talked about how we should really make a proper compiler that compiled to machine code. I think it was over the summer break that I wrote some strange compiler for a similar simple language into VAX assembly code. It basically did the same thing as the combinator version, but instead of having combinators and an interpreter, it actually spit out machine code that did the same thing the combinators would do. This language did not have a name.
WS: Yeah, it’s the best languages, I suppose.
LA: So, using this language, Thomas and I wrote the compiler for Lazy ML. Around the time we were starting to say we should write the real compiler, Sören Holmström and Kent Petersson had been to Edinburgh, and they brought back a tape that had the ML system on it. And I remember sitting at the terminal and trying it out, typing in the map function or something like that, and it spit out the type of the map function to me, and it seemed like magic. How is this even possible? I mean, this complicated function, and it knows what type it has. So we immediately decided, okay, our language must have type inference. That’s why we called it Lazy ML, because it was a bit like ML. This was not Standard ML, this was well before Standard ML, but we wanted it to be lazy because we thought that was a cool thing. So we worked on the Lazy ML compiler for a number of years, and that work became both his thesis and mine. And then, um, well, I will finally get to Haskell now …
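The “magic” described here is Hindley–Milner type inference. As a small illustration, using today’s GHC rather than the Edinburgh ML system, a definition written with no type signature still gets its most general polymorphic type reconstructed automatically (the name `map'` is just to avoid clashing with the Prelude):

```haskell
-- No type signature is given; the compiler reconstructs the most
-- general polymorphic type by Hindley-Milner inference.
-- In GHCi, ":type map'" reports a type equivalent to
--   (a -> b) -> [a] -> [b]
map' _ []       = []
map' f (x : xs) = f x : map' f xs
```

Because the inferred type is fully polymorphic, the same definition can be used at many types, e.g. `map' (+ 1) [1, 2, 3]` and `map' show [True, False]`.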
WS: It’s fine.
NV: I wanted to ask, like, what year are we on?
LA: Oh, so I started my PhD in 1980. So this first compiler that translated into VAX assembly code, that would probably be 1981. And then, I would say, all our Lazy ML stuff happened from about 1982 to 1988, so that’s a number of years there. Oh, I do have a little story to tell about that compiler. In those days there were no source control systems, or maybe some existed somewhere, but we certainly weren’t using any. I mean, we didn’t know about them, and it seemed like they were for wimps. Also, disk space was precious, so you couldn’t have too many versions of things around. So typically, when we made some changes in the compiler, you recompiled the compiler and saw, yeah, that worked. And then you threw away the old version of the compiler, and you had the new one. So that’s exactly what I did. And then I saw that this new compiler could compile itself, but the generated code was wrong.
WS: Hah, so now what?
LA: Well, they took a backup of our machine once a week. So, I mean, it would have been possible to go back a week and lose work. But I figured, well, I’ll run the machine-code-level debugger on it instead and figure out what’s going wrong. So I just ran that for a while until I found the place where it went wrong, and I patched the machine code so that it would at least do the right thing at that spot. After that, I changed the Makefile we used to build the compiler so that it recompiles itself three times and makes sure that the last three versions are identical to each other.
LA: It was a scary moment. I didn’t want that to happen again.
WS: Yeah, I can imagine.
LA: Anyway, um, so, I promised to finally get to Haskell. I think it was 1989. We had, I don’t think it was ICFP in those days, I think it was FPCA, “Functional Programming and Computer Architecture”, and I know Simon Peyton Jones, I think, had visited Paul Hudak or something, and they had come to the conclusion that we should have one language for people doing lazy functional languages. Because everyone had their own language. They were similar, but not the same. So it was hard to use someone else’s work or their programs or whatever. There were no big programs in those days, so …
WS: But there was Miranda and presumably Lazy ML, and maybe Clean, was Clean around in some form or another perhaps?
LA: And Yale had one called Alfl or something like that. Simon PJ, what did he use? I know he used Lazy ML, but there were like two or three languages around. So we all met in a room and agreed that, yes, we should have one language and we should form a committee, with one person from each of the different sites that were going to be involved. Thomas Johnsson and I were both there, so we tossed a coin to decide which one of us should be on the committee, and he lost the toss, so he had to be on the committee. Uh, so then, I mean, I still went to a meeting or two when they were in a convenient location. And the next year we had the LISP and Functional Programming conference, which was the other one that became ICFP, in Nice, France. And the Glasgow people with Simon had promised that there would be a Haskell compiler by then, that they had been working on it in the spring and there would be one. And then when we got there, they said no, it’s not quite working yet, we don’t have anything we can share. And I thought that’s too bad. I mean, I wasn’t a particular fan of having this Haskell language, but I at least wanted to see what it would be like, and without having a compiler, you couldn’t really see it.
LA: So then I and another guy, Staffan Truvé, said, okay, we’ll make our own Haskell compiler. We have the Lazy ML compiler, how hard can it be? Then, when we got back to Sweden, it turned out Staffan had other things to do, so it was only me. So in July and August, I worked on making a new front end for the Lazy ML compiler, and I actually dug around a little bit in the historical records: the first release of it, called 0.99, was on August 21, 1990.
WS: Okay. And that was “hbc”, the first …
LA: That was “hbc”, yes …
WS: … one of the first Haskell compilers.
LA: Yes. So they had something in Glasgow, but they still hadn’t released anything. So they were slightly upset that I picked the name “hbc” because that’s the name they wanted to use. They had already decided on this name.
WS: Haskell Bytecode Compiler, is it?
LA: No, B is Haskell Curry’s middle initial.
WS: Ah, of course. Right. That makes sense. Actually, one of the alternative names we considered for the podcast was “Babbling Brooks”, which is very funny if you know what Haskell’s middle name is, but otherwise it’s a joke that needs too much explanation, so we kind of gave up on it …
LA: It might have been a better name because according to Haskell’s widow, he never liked the name Haskell.
WS: Yeah, that’s what I heard as well. So the …
NV: Uh, is there a story why the name was picked? The name Haskell.
LA: Haskell. Yes. Uh, so they had some kind of brainstorming session where everyone could suggest as many names as they liked. They wrote them all on the whiteboard, and then everyone could veto names, so they went up to the board and put a stroke through the names they vetoed. And I guess when they were done, the only name that wasn’t vetoed was Haskell. Nobody particularly liked the name, but no one hated it enough to …
WS: I think then the story I heard was that someone had suggested “Curry” and then they said that sounds like it will be too many …
LA: … too many jokes. And I think “Curry” would have been a better name. I mean, it’s good to be able to make jokes.
WS: That’s true.
LA: I still don’t know who this boring person was that said there would be too many jokes.
NV: I think the Coq people are currently suffering because of this.
WS: And you did have a language called Cayenne, of course, a few years later.
LA: Later, yes. I should say something about the name “Coq”. That has nothing to do with Haskell, but there was some little get-together where Gérard Huet, who was sort of the original creator, was at this meeting. He’s retired now, but after a few glasses of wine, I finally got him to admit he had picked this name specifically to annoy English speakers.
WS: So that was the, that was the first release of “hbc”. And then …
WS: … I think Haskell has changed a lot since then. When you were working on “hbc”, what was your sense: did you think this was going to take off at some point, or were you thinking this is still one of these marginal academic languages?
LA: I thought it would be pretty marginal. I mean, it was another lazy functional language, and they were not really that useful. It had a slightly larger user base because we had picked a few academics to get together and do this, but of course, some of those on the Haskell committee never used it anyway. Clean, for instance, did not adopt the Haskell syntax; the MIT people who had “Id”, they didn’t either. But it was interesting nevertheless, and after having used it myself for a bit, I thought it was a very nice language, an improvement over Lazy ML. Also, I strongly, strongly disliked this layout-sensitive thing of Haskell, and I said, well, I’ll never use that, I’ll use curly braces and semicolons. But then, being lazy, I started using the layout, and now I’m never going back to the semicolons again.
WS: Yeah. So you spent quite a while in academia working on Haskell and Haskell compilers, and at some point you transitioned to industry, right?
LA: Yes. Um.
WS: So what spurred you to change?
LA: So this was around 1994, I think. I’d had my PhD for a while, and I felt like I spent far too much time applying for money, trying to get money for the next project, and I really, really did not like writing those applications. So a friend of mine asked me if I was interested in joining a company. Well, actually, I was first asked to join a company called Carlstedt Elektronik, which was trying to build an entirely new computer that was going to be programmed in an entirely new way, and it was a bit too crazy. I turned that down. And I was right in doing that, because that company disappeared. But a bunch of people from that company started a new company called Carlstedt Research & Technology, CRT. And the guy who ran that company asked me if I wanted to join, and I said, yeah, okay, I’ll take some leave of absence from my academic position.
LA: And it turned out the first project I was involved in was to make a new version of a rule programming language that they used at Carmen Systems to optimize aircraft crew planning. So you put down a bunch of rules, things like you have to have at least 15 minutes’ break between flights, and pilots need 45 minutes after a flight to debrief. There are thousands of these rules, and they wanted a better language, but most of all, right then, they wanted something that ran faster than what they had. So I said, oh, I can take this language and write a partial evaluator for it and speed it up. So I wrote one in Haskell in 1995. And I think this program I wrote was in use for like 10 years. It was probably the first commercial Haskell program that actually ran in production.
WS: That’s even before Haskell 98, the language standard, was out. So surely that was 1.3 or 1.4, with monads maybe just in there or something. That must have been really quite early.
LA: Yes, I’ve tried to remember if monads were in there or not; I can’t even remember what that code looked like. It was around that time.
WS: It was 25 years ago, so I’ll forgive you that. I can’t remember the code I wrote last week, so.
LA: Well, me neither. So I developed this on an IBM POWER workstation, and it ran on HP PA machines at Lufthansa in Germany for their aircraft crew planning. A couple of interesting points about this program. The reason it worked well to write this thing was that when they plan, they only plan for one category of people and only a certain route. So say they plan only Europe for captains, which means there are lots and lots of rules in there that do not apply, those for the rest of the world and for non-captains. So a partial evaluator worked really well. And then it turned out, well, it sped up the actual optimization program, which is sort of a genetic-programming kind of thing. But it also sped up compiling the rules, even though there was now the extra partial evaluation step. Because this rule language was translated into C, and then you ran the C compiler on that, and it generated a lot of C code, so it was really slow to compile. But if you ran the partial evaluator, which took like five minutes, it generated a lot less C code, which was faster to compile. So you regained the time …
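The details of Carmen’s rule language aren’t given here, but the shape of the trick can be sketched. Assuming, hypothetically, that each rule has a static part (crew category, region) and a dynamic part (a check on an actual plan), partial evaluation amounts to fixing the static arguments up front and keeping only the residual dynamic checks:

```haskell
-- Toy model of rule specialization. All names are illustrative,
-- not Carmen's actual rule language.
data Category = Captain | CabinCrew deriving (Eq, Show)
data Region   = Europe  | World     deriving (Eq, Show)

-- A rule has a static guard (who/where it applies to) and a
-- dynamic check on some plan value p (here: minutes of rest).
data Rule p = Rule
  { applies :: Category -> Region -> Bool   -- static part
  , check   :: p -> Bool                    -- dynamic part
  }

-- "Partial evaluation": fix the static arguments, and keep only
-- the dynamic residue of the rules that can still fire.
specialize :: Category -> Region -> [Rule p] -> [p -> Bool]
specialize cat reg rules = [ check r | r <- rules, applies r cat reg ]

-- The residual program: run only the surviving checks.
checkAll :: [p -> Bool] -> p -> Bool
checkAll residual p = all ($ p) residual
```

Specializing to, say, captains in Europe discards every rule whose static guard is false, which is the same reason both the optimizer and the generated C code in the real system got so much smaller.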
WS: So generating smaller amounts of C code. I see. Yeah, clever.
And one of the companies I know you for is that you kind of went back to your electrical engineering background and worked at Bluespec for a while, is that right? You spent a lot of time working on the Bluespec compiler.
LA: That’s right. So in 1999 I was still working for this consultancy company. I was also still on leave from Chalmers, so I never quite let go of that. So Arvind, a professor at MIT, saw that all his colleagues were doing startups, because this was the big boom of the late 90s, when the internet was booming incredibly. So Arvind wanted to do a startup too, and he had been working with his students on this rewrite-rule-based language for hardware design. So he asked me, do you want to join a company to make this into a real thing? I thought about it a bit and said, yeah, sure. So I was still formally working for Carlstedt Research & Technology, but I was sort of on loan to Arvind’s company, and then I also had my Chalmers position, so I had two fallbacks, so you can …
WS: Then you can take the gamble of a startup, right?
LA: So I started working on that in 1999, and the actual startup company was not formed until 2000, so I started on it before it quite existed. But at that point, if you wanted money from the venture capitalists, you had to play by their rules, and they said: there is no money to be made making CAD tools for chips. If you want money, you need to be a chip company, you need to make chips. So at Sandburst, as this company we started was called, we said, okay, we’ll make chips for something, what should we do? We’ll pick data communication. It’s big.
LA: But the secret plan was to develop this language. Of course, to actually make these chips, we had to hire a bunch of actual hardware designers, and they were not particularly keen on this new weird language; they wanted to do things the way they always had. Also, Bluespec wasn’t quite ready to be used in anger. So it turned out that the chips we designed were all written in Verilog, just as before. I mean, there was some little bit of prototyping done in Bluespec.
LA: And what happened then, like four years later, 2004 or something, was that Arvind was not happy with this, and he thought we should take Bluespec and turn it into its own company instead of having it within Sandburst. One reason for this was that the venture capitalists had completely changed their minds by this time. They said, oh well, CAD tools, they have a great future, we’re happy to invest in a tools company; this chips thing, that’s dead, there’s … so …
WS: It’s back to the drawing board and uh yeah …
LA: So a new company was formed, but I didn’t join that company. I had my doubts about Bluespec actually being a successful commercial product, and I guess I was both right and wrong, because the company is still around, but “successful” might be an exaggeration. I mean, they basically sell services now, producing Bluespec.
WS: Yeah, well they found their niche but it’s very …
LA: It’s a tough business. Oh, I should point out that the venture capitalists had some experts look at things, and they said, well, your language can’t look like this, because it looked like Haskell. I mean, I designed the actual look of this language, and they said, this is too weird, you have to make it look like a normal language. So the thing that Bluespec the company did was to put a new front end on the Bluespec compiler so that it looked like SystemVerilog, but underneath it was actually still the old stuff.
WS: More curly braces and semicolons.
LA: Oh, absolutely, absolutely.
WS: So after Bluespec, where did you go?
LA: So around 2005, Sandburst was being acquired by Broadcom, and I just wanted to hang around until the acquisition had happened, so that you actually got some money for the shares you had and so on. So I said, well, maybe I can do some work for someone else. I don’t know exactly how it happened, but I had talked to John Launchbury at Galois, a company in Portland, and they were doing some hardware work, compiling to VHDL. I mean, they weren’t making the hardware, but they were generating hardware descriptions. It was some work for the NSA that they did with their language Cryptol, because it was for crypto stuff. So Sandburst said, oh yeah, okay, we can lend you to them. So now I was working for Galois, sort of on leave from, well, not from Sandburst, where I was formally actually at CRT, but I also still had my position at Chalmers, so …
WS: Must be your sparkling personality that whenever you spend any amount of time somewhere they immediately load you to a third party.
LA: But at least I managed to keep all my positions. So I think I worked for them for like six or nine months, something like that, doing this Cryptol stuff. It was interesting; we translated Cryptol to VHDL. We put everything on a DVD and we sent it to the NSA, and then we had no idea if the DVD even got to them. You never got any feedback from them; it was like sending things into a black hole.
WS: Yeah, well, you don’t get any user bug reports, you don’t have any complaints or pull requests; it’s perfect.
LA: So this was Fergus Henderson and I. Fergus Henderson had worked on the Mercury language before that, in Melbourne. But then the acquisition actually happened, and I got my meager amount of money out of it, and I didn’t want to stay on at Sandburst. So I briefly went back to Chalmers at that point, and I taught a course; I was maybe at Chalmers only 25%, I don’t know what I did with the rest. And then there was this thing on the Haskell mailing list, an email from Simon Peyton Jones saying that this bank, Credit Suisse, was looking for people doing functional programming in New York.
LA: And I thought, okay, sounds like an interesting job. I happened to pass through New York for some other reasons, so I went to talk to them, and there were two guys, Neville Dwyer and Howard Mansell, who were enthusiastic about functional programming, because they were doing a lot of spreadsheet stuff. The way they actually came across this was that they wanted something more, some kind of database or some different way of programming for spreadsheets.
So they googled, and they found Simon Peyton Jones’ name, because he had written a paper with some other people about something in spreadsheets, I can’t remember which paper. So they talked to him, and then they got more enthusiastic about the functional programming part. And Howard Mansell had actually taken a Miranda course when he did his undergraduate degree in Kent. So uh …
WS: It’s a small world sometimes.
LA: Yes. And so they didn’t really know why they wanted functional programming but I thought that would be a good fit for their financial stuff.
WS: And that must have been a very different environment. I think um that you worked quite closely with traders who were not …
LA: I worked physically close to the traders.
WS: Yes, yes. On the trading floor even, right?
LA: So I was sitting across from a guy who traded in unleaded gasoline and pork bellies.
LA: He liked to sing songs from musicals; he couldn’t really sing. So yeah, we were sitting on the trading floor, and I was part of the quant group, mostly mathematicians and physicists who wrote the financial models. So I worked closely with them, and they worked more closely with the traders. But it’s a very different environment; it’s often quite loud. Back then, this was in 2006, the traders mostly spent their time watching sports on TV, because there were big TV monitors up around the whole trading floor. And now and then there was frantic activity, when they did things on their computers and were on the phone, buying or selling pork bellies or unleaded gasoline or whatever it was. And then they’d calm down and go back to watching sports again.
WS: Yeah, sounds like a very productive environment.
LA: It took me a long time to get used to it, and when I say a long time, I mean it took years to get used to working in that kind of environment. But it has the advantage that, after that, I have no problem working in open-office settings, because these things can’t disturb me anymore.
WS: You’ve kind of become immunised for this kind of distraction.
LA: Yes. Uh.
WS: You spent a few years at Credit Suisse and then moved to Standard Chartered. Is that right?
LA: Yes. So I was hired as a consultant for Credit Suisse in New York for a year, and after the year was over, I wanted to be closer to Sweden. So I asked them if I could work in London instead, and they said that was fine. So I became a regular employee, moved to London, and worked there for another year. Meanwhile, in New York, they got a new guy to be the head of the quant group.
LA: He was also Swedish, and he came in with the idea that he wanted something like Python to program things. I didn’t try to convert him, but I told him about functional programming and this and that. He had worked for Goldman Sachs before, so he’d experienced their weird functional-ish language that has side effects. And he did not like side effects; he said that was a terrible thing to have. And I said, well, if you do it in Haskell, you have monads and they’re under control, and blah blah blah. So after like six months of talking to him, he had become one of these new converts with the glowy eyes: yes, this is the best thing ever. Anyway, the reason I mention him is that he quit Credit Suisse, took some other job with Lehman Brothers, then quit that and started at Standard Chartered. And in the summer of 2008, he asked me if I wanted to join him at Standard Chartered, and I said, yeah, that sounds like an interesting thing to do. Oh, I should mention my lasting legacy at Credit Suisse. I wrote this little plugin in Haskell for Excel, so you can extend Excel with new functions. It was about the first thing I did at Credit Suisse. It was a little plugin where you could write a function in a string, because there are only so many datatypes in Excel, and you got back an object that represented the function. Then you could use this function in other cells, and you could map it over arrays and things like that with other extensions. Everything else was written in C++, but I wrote this thing in Haskell, and I think the last time I asked someone at Credit Suisse, they were still using this very same plugin.
LA: And they’re still using it. I mean, they recompile things now and then, but they’re still using like GHC 6.2, or whatever the version number was that was current in 2006.
WS: It’s funny what your legacy is, if you look back. You’ve done all this hard work, and then there’s just one thing you did over a long weekend which no one dares touch or ever throw away … it continues to be useful.
LA: It works. They might have rewritten it in C++ by now, I don’t know. That’s what happened to the thing I wrote for Carmen Systems; it was eventually rewritten in C++ by a guy who did his PhD doing functional programming at Chalmers. So it’s a small world.
LA: Yes, so I joined, uh, Standard Chartered Bank in 2008 just before the big bank crash …
LA: … which which was good because it meant when I negotiated my salary and so on, things were great for banks, so they didn’t mind paying me. And …
WS: And there you wrote another Haskell compiler, of course.
LA: Yes, I did. When I arrived there, they had a little bit of Haskell already, because the group was run by this guy who had become a Haskell convert, but they weren’t actually using it much in production. They were using it for generating C++ code from some IDL language that they had, and things like that. But they had made extensions to Excel very similar to the ones we had at Credit Suisse. So they had this plugin with a string-based little language that you convert into an object. I could talk more about this, because it’s all horrible … the object part of it, because Excel doesn’t have objects. But let me ignore that, unless you’re really interested in that part …
WS: I’d skip that part if you don’t mind, I’ll save that for the Excel podcast that I run on the side.
LA: So yeah, they had this little language, which was basically Excel formulas plus let-expressions; that’s what you had in it. And this little language, which was called Lambda by the way, got very popular with traders and people who wrote little snippets of code in Excel; pretty much everything was done in Excel. And their little scripts in this language grew bigger and bigger, and it was really not a good language, because it was just Excel formulas, basically. So after a while I said, we should really have a different language, and let’s not invent our own language; let’s take a small subset of something we know, like Haskell, by which I mean something I know, and implement that. The head of the group, he wasn’t the Haskell fanatic, and he wasn’t that keen on this idea, but he went on vacation for like two or three weeks. So mostly I, but also a bit Neil Mitchell, did an implementation, using the haskell-src-exts package to do all the parsing, and then a type checker for this little language. There was already a runtime for this Lambda language, so to be compatible, we had to target exactly the same, well, it wasn’t quite bytecode, it was more a syntax tree that you interpret, but we had to target that, and that was a strict language. So this little subset of Haskell that we made was strict, because that’s what the whole thing was. Well, he came back from vacation and he thought, oh, this looks great. And so people started using it, and of course, when people start using it, you have to add more features, because they ask for more features. So I added more and more features for the next eight years or so …
WS: I mean, you say this now, and it sounds like, oh yeah, I wrote this one compiler over two weeks, but I think it now powers one of the biggest Haskell code bases that I can think of, there's millions of lines …
LA: I think they have probably about five million lines of code. That would be my guess; last I heard it was maybe four or something, and that was a couple of years ago. Yeah, so they have a lot of code, so they can't get rid of it now.
WS: And they have dozens of Haskell programmers working on this …
LA: They had like 40, 50 people doing this full time …
WS: … so that’s …
LA: … and I see every now and then that they are still trying to hire more people. Yeah, so the people in power within IT and so on, they were very much against this language. It was a weird thing. They tried to limit where this language was allowed to be used; it could only be used within the quant group and so on. But they couldn't stop it from spreading, and so they sort of gave up on trying to limit it and instead joined in and hired a group to work on it. And so there are lots of applications written in this. Neil Mitchell did a fantastic user interface library for this that was made for Windows, so there are lots of GUI apps written in this. And we hired a lot of Haskell people, and the fact that it's strict doesn't seem to affect people that much. It's sometimes a bit of an annoyance, but, uh yeah …
WS: I can imagine it's fairly specific situations where it matters, when you search through a large structure or something and you want to find the first thing rather than traversing the whole tree, like very specific things.
LA: There are, yes, there are situations, and there are things that can't be written as elegantly. You can't make them by composing little reusable pieces. With your search loop, say, you can't write the thing that searches for all of them and then take the first one, like you mentioned. Instead, you have to write a specific function that does the search and stops when it finds the first one. So it's not as elegant, but mostly it's kind of the same.
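[Editor's note: the contrast Lennart describes can be sketched in a few lines of Haskell. Under laziness, the composed version only traverses as much of the structure as it needs; in a strict language, the same composition would evaluate everything, so you write the explicit loop instead.]

```haskell
import Data.List (find)

-- Lazy style: compose reusable pieces. `find` is essentially
-- "filter the whole list, take the first hit", and laziness ensures
-- only the prefix up to the first match is ever traversed.
firstEvenLazy :: [Integer] -> Maybe Integer
firstEvenLazy = find even

-- Strict style: the composition above would traverse everything,
-- so you write a specific loop that stops at the first match.
firstEvenStrict :: [Integer] -> Maybe Integer
firstEvenStrict []     = Nothing
firstEvenStrict (x:xs)
  | even x    = Just x
  | otherwise = firstEvenStrict xs
```

Note that `firstEvenLazy [1..]` terminates even on an infinite list, which is exactly what the strict version of the composed pipeline could not do.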
WS: Yeah, it's something you hear more often, I think, where people say that the biggest side effect of laziness was keeping the language pure, right. That was the important insight, this idea of encapsulating effects. Whether it's in the IO monad or some other way doesn't really matter. But identifying this pure subset …
LA: And that bit of it was already present when I started at Standard Chartered. The head of the group did not like side effects, so there was already something like an IO monad. And this language, by the way, is named Mu, because it's the letter after Lambda, and Lambda was the previous language. It has a better distinction on side effects than Haskell does. It basically has an input monad and an input/output monad. In the input monad, you can do things like reading files, but you can't do things that change the world. Okay, reading a file will change the date stamp on last access, but morally you … And it's an important distinction to have in the environment where these things run. They run sometimes on the desktop, sometimes on some compute farm somewhere, and on the compute farm, if you want to run there, it has access to data. So it can read all the databases, but you're not allowed to change anything when it runs on the compute farm. So it's important to know if your effects are just read effects or read/write effects.
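[Editor's note: Mu's read/read-write split can be sketched in Haskell. Everything below is made up for illustration, the names `Input`, `readFileI`, and `inputToIO` are not Mu's real API; the point is only the shape of the idea: a monad that exposes read-only operations and embeds into IO, but not the other way around.]

```haskell
-- A hypothetical read-only effect monad, wrapping IO but exporting
-- only operations that observe the world rather than change it.
newtype Input a = Input { runInput :: IO a }

instance Functor Input where
  fmap f (Input io) = Input (fmap f io)

instance Applicative Input where
  pure = Input . pure
  Input f <*> Input x = Input (f <*> x)

instance Monad Input where
  Input io >>= k = Input (io >>= runInput . k)

-- Read-only operations live in Input …
readFileI :: FilePath -> Input String
readFileI = Input . readFile

-- … and every Input action can be run inside IO, but an arbitrary
-- IO action (say, writeFile) cannot be smuggled into Input, so code
-- typed in Input is safe to run on the read-only compute farm.
inputToIO :: Input a -> IO a
inputToIO = runInput
```

The safety here rests on the module only exporting read-only primitives; the `Input` constructor itself would stay hidden.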
WS: Yeah, sure.
LA: I think that was quite a nice …
WS: That makes sense. Right. Yeah, I can see that being useful. So after Credit … sorry, Standard Chartered, you then went to, um, X, is that right?
LA: No, that’s not quite right.
WS: There was Facebook or Google in between them?
LA: That was Facebook …
WS: Right, very briefly then …
LA: Yeah. So this was 2017, or was it 2016 maybe? Anyway, I figured it was time to move on. Banking was not as profitable as it used to be … which means it's not as profitable for the people working there. And also I'd done the same thing for seven, eight years. So I talked to Facebook in London and they said, yeah, sure, you could come and join our programming languages group. They were working on a new compiler for Hack. Hack is Facebook's internal version of PHP; it's a lot cleaned up from PHP. They had one group working on basically the type checker for this, but they wanted to extend it to a proper compiler, because the way it worked then, and I'm afraid I think it still works this way, is that you run this type checker to make sure that the program seems right, and then you throw away all that information and you run it through the regular compiler …
LA: … so none of the information you gained is retained. And that seems like a waste. It turns out that the regular compiler they have is actually really good. It's a just-in-time compiler that will JIT things on the hot path, where you usually know what the types are.
LA: So anyway, I joined this group in the fall of 2016, maybe … yeah, I think so … and worked on this for a little bit. Part of the onboarding process when you started at Facebook back then was that you had to go to Mountain View, to the headquarters, and spend a week there. And around that time I got an email from Tamil Sparling(?), who was working at X. We had met at ICFP in Japan. And he said, oh, I didn't know you were looking for a new job; if I had known that in Japan, I would have offered you a job. And so I said, yes, I have a new job, but I can come and talk to you when I'm in Mountain View. Actually, Facebook is not in Mountain View, it's the next town over, but it doesn't matter. So I basically interviewed for X as part of my onboarding at Facebook. I mean, they were very secretive, so I couldn't quite find out what they were doing, but they were using Haskell at this X project and they were using Bluespec to design hardware. So I figured, okay, this sounds like a lot better fit for me than using OCaml at Facebook. So I quit Facebook pretty much immediately. Except I had three months' notice, so I couldn't actually start. So in the spring of 2017, I started at X.
WS: And you spent a few years there on this still slightly secretive, but not as secretive as it has been, kind of Haskell / Coq hardware project.
LA: Well, there was no Coq in this.
LA: Yes. I mean, it's probably still secret, but since the project has been canceled, I don't think anyone cares that much anymore. We were making a chip to accelerate machine learning. In machine learning, the only thing you do, the way it works today, is matrix multiplies. So this was basically a chip that could do matrix multiplies and move data around, because that's the other thing you have to do, you have to move data from somewhere to somewhere else.
LA: The whole chip was designed in Bluespec, mostly by one guy; there were a few more that helped.
LA: So an interesting thing there about Bluespec … When they turned Bluespec into a company, as I said, they couldn't use the Haskell syntax; they had to have this SystemVerilog syntax. But they never threw away the old front end, it was always still part of the compiler, so you could use the Haskell syntax. And the guy at X who did all the code, he didn't like the SystemVerilog syntax, so we only used the Haskell syntax for everything. So this whole chip, which is, I don't know, a few million, tens of millions of transistors, something, has all been designed using Bluespec in the classic syntax, as it's called.
LA: And then I worked on the compiler for this chip. You take basically TensorFlow, but it's been processed to be statically typed, and then you compile that down for this chip.
LA: And after, how long, a year and a half, something like that, our project was moved from X to Google. Let me back up … The idea with X is that Google makes the money, and then there's the Alphabet company on top that owns Google and X and a bunch of other companies. Most of these companies are not making money; X definitely doesn't make money. X is supposed to innovate and come up with new things. So either your project gets canceled at X, most of them do, I mean, that's the whole point of it, or you graduate. And graduating could mean that you start a new company, or that you are reabsorbed back into Google. And that's what happened to us, because Google said, no, we like this technology too much, we don't think there should be a separate company that makes these chips, we want to keep this for ourselves. And so we were reabsorbed into Google again and …
WS: … you were out of a job again?
LA: Uh, no, no. I just had to work in a different place. It meant that the spot where we chose to live, which was carefully picked so I could walk to work at X, well, I could no longer walk to work.
LA: And then, after the pandemic hit, they decided we have to save money, and one way to save money is to shut down costly projects. And in our project, things had not gone as quickly as they should. I mean, they rarely ever do, especially not with these sorts of hardware-related projects, and it was a very tricky architecture to compile for. So they closed down our project.
WS: Yeah. And now it’s Epic Games, I hear.
LA: Yeah, so …
WS: Are you enjoying that?
LA: I am. I'm afraid I can't say anything about what we do. I mean, we're not supposed to talk to anyone in any public media; that has to be done by the people who talk to the press, so …
NV: But it is still Haskell?
LA: It is still Haskell, but I'm probably the only person who uses Haskell on a daily basis. A lot of the people in the group I work in know Haskell. And Tim Sweeney, who is the CEO and the majority shareholder, is a big proponent of declarative languages. He had an invited keynote at POPL in 2006 or so, and you can see from that that he likes these kinds of languages.
WS: I think he was arguing for dependent types even in the POPL keynote.
LA: Indeed, indeed. And he still is!
WS: Yeah. Good to hear, good to hear.
NV: I have a question, a closing one maybe … In all this history, you mentioned all the features of Haskell, and you seem to like all of them. So if you could change something, what would it be?
LA: Change in …?
NV: In Haskell.
LA: In Haskell?
NV: So you could now ….
LA: Oh, records. Records are terrible in Haskell. I would do them differently. And I mean, maybe it’s moving towards that now, but …
WS: What would you change?
LA: I have no particular design, but there are these other Haskell-like languages that have some kind of lightweight, extensible records, where the record type reflects what fields are in the record. Maybe something like that. The way they are in Haskell is quite clunky. Uh, I realize we totally skipped over the Cayenne stuff.
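[Editor's note: a small sketch of the clunkiness Lennart alludes to. In standard Haskell, record field names are ordinary top-level functions, so two records in one module cannot share a field name (without the later `DuplicateRecordFields` extension), and people fall back on prefixing conventions. The types and names below are invented for illustration.]

```haskell
-- Field selectors occupy a flat namespace: both records would like a
-- field called "name", so the convention is to prefix each field with
-- the type name to avoid a clash.
data Person  = Person  { personName  :: String, personAge :: Int }
data Company = Company { companyName :: String }

-- Record update syntax works, but is tied to one concrete type; there
-- is no lightweight way to write code generic over "any record with
-- an age field", which row-typed record systems allow.
birthday :: Person -> Person
birthday p = p { personAge = personAge p + 1 }
```

In a language with extensible records, a function like `birthday` could instead be typed over any record containing an `age` field, with the record type itself listing the fields.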
WS: We can talk about Cayenne a little bit. So …
LA: I mean, uh I …
WS: So that's one of the first papers that I remember reading from you; it was actually about Cayenne and how to translate kind of the Bird and Wadler style equational proofs into …
LA: Ah, yes.
WS: … a kind of proof term in type theory. And Cayenne was interesting, I think, in that it was one of the first languages that I know of that said dependent types are actually interesting for programming, rather than, let's try to design a kind of sound proof system in which we can, you know, reason about the identity function or something, right? And for that reason, one of the first times I encountered dependent types was probably through your work on Cayenne. How did that start? I mean, it seems …
LA: Well …
WS: … another thing, right?
LA: So I read this paper, I think it was Simon Peyton Jones and Daan Leijen who wrote a little paper about how to represent the lambda cube with some simple little language …
WS: Was it “Henk” or something?
LA: Yes, that’s it.
WS: Erik Meijer or … might be involved?
LA: He might have been. I just remember reading that paper and thinking, oh yeah, this looks like fun, I'll write a little type checker for something. And then it slowly grew, and I thought, oh, if you had these features, then you could make it into more of a fully fledged language. I never intended it to be used for serious programming; it was a little experiment. And then, when it was kind of done, I wrote the paper that was in ICFP, I don't know, 99, something, maybe …
WS: Maybe 98, yeah.
WS: And then it won the most influential paper award ten years later.
LA: It did. And I heard before the conference that Bob Harper really hated this paper. So when I did my presentation, I actually had one extra slide that I never got to use, which said, Bob, I'm glad you asked that question. But he never asked a question.
WS: Too bad. Yeah, so one of the directions in which Haskell's heading is of course more and more into this kind of dependent types and singletons and GADTs, and at the same time you've always argued for simple Haskell, right, where you should at least not do too much crazy fancy stuff. So there's a slight contradiction there, I guess.
LA: Yes. I mean, I like dependent types, but they're certainly not simple in Haskell; I think they're quite complicated. If you want a language where it's still fairly simple and elegant, I think it's something like Idris, which is another language that takes dependent types seriously for programming rather than proving things. And I like all kinds of things in Haskell, like GADTs for instance. I don't use them myself, but Kent Petersson and I wrote a paper about something that looked exactly like GADTs in 96, I think it was. But back then it was unacceptable if you couldn't do type inference, and we couldn't figure out how to do type inference, and it turns out to be undecidable. So we never added it as an extension to Haskell, but we had exactly those ideas, and then it actually appeared in Haskell many years later. So I mean, I like them, but I think if you don't design the language for these things from the start, they can become a bit clunky. So Haskell is wrong, I think, for dependent types, because when Haskell was designed, the type and term levels didn't share anything. You can have the same syntax for things on the type level and term level, and they will mean different things. And once you have that, dependent types become very awkward, because you want to merge the type and term levels, and then you have to do things that I think look horrible, like a tick before the constructor name or whatever. I mean, that's …
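[Editor's note: the "tick before the constructor name" refers to GHC's `DataKinds` extension, where data constructors are promoted to the type level; because Haskell's type and term namespaces are separate, the promoted constructor is disambiguated with a leading tick. A minimal example:]

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

-- Zero and Succ are ordinary term-level constructors, but DataKinds
-- also promotes them to the type level, written 'Zero and 'Succ.
data Nat = Zero | Succ Nat

-- A length-indexed vector: the type-level Nat tracks the length.
data Vec (n :: Nat) a where
  Nil  :: Vec 'Zero a
  Cons :: a -> Vec n a -> Vec ('Succ n) a

-- The index rules out calling safeHead on an empty vector at
-- compile time; no Maybe needed.
safeHead :: Vec ('Succ n) a -> a
safeHead (Cons x _) = x
```

The ticks are exactly the kind of syntactic noise that languages designed with a single namespace from the start, like Idris, avoid.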
WS: Yeah, that’s true. It’s hard to add this as an afterthought, right?
LA: It is.
WS: You work with the language and develop software in it for 20 years and then you realize …
LA: But then again, if you say, okay, we're going to make a new language that has been done right, then there are 30 years of software development that has happened in Haskell that you have to catch up with, and you have to make sure it gets popular, so …
WS: Either way, yeah. But I think it's a great effort, right, that these explore just how far we can get with GHC these days.
LA: Yes, I'm totally for trying these things out in Haskell, but it's going to be clunky compared to what it could be if you designed the language for this.
WS: Okay, so I think we have enough, so thanks very much Lennart!
LA: Of course.
WS: And also thanks to my co-host Nikki. And with that, I want to wrap up this second episode of the Haskell Interlude. Stay tuned, because we have a lot of interesting guests coming up.