27 – Christiaan Baaij

Recorded 2023-03-21. Published 2023-05-30.

In this episode Christiaan Baaij is interviewed by Wouter Swierstra and Matthías Páll. Christiaan talks about his work on the Clash compiler, what it is like to found your own company, his desire for ergonomic dependent types, and the foundation of all his success, namely capitalising on luck.

Errata: Around the 21m19s mark Christiaan talks about “his” contributions to GHC with regards to dynamic linking on OSX. Later he remembered that it was actually Moritz Angermann who worked on the symbol limit restrictions. However, Christiaan did some other work on OSX linking and some of the RPATH handling.

Transcript

This transcript may contain mistakes. Did you find any? Feel free to fix them!

Wouter Swierstra (0:00:19): Welcome to the next episode of the Haskell Interlude. I’m Wouter Swierstra.

Matthías Páll (0:00:25): And I’m Matthías Páll.

WS (0:00:27): And joining us today is Christiaan Baaij, who will tell us a little bit about his work on the Clash compiler, what it’s like to found your own company, his desire for ergonomic dependent types, and the foundation of all his success, namely capitalizing on luck.

So welcome, Christiaan. I’m very happy to have you here on the podcast. So our usual question just to welcome our guest is, how did you get into Haskell?

Christiaan Baaij (0:00:56): Thanks, Wouter. I got into Haskell because I was looking for a cool master thesis project. When I went to university, I was kind of doubting between Computer Science and Electrical Engineering. I did do a Computer Science bachelor’s, but then an Electrical Engineering-like master’s at the same university. And from my bachelor’s degree, I really liked working in functional programming, even though, well, I wasn’t that good at it then. I didn’t get high marks for that course, but I did like it. This was in an open-source version of Miranda called Amanda.

WS (0:01:38): Ah, okay. So, that was going to be my question because I remember that Twente, I think where you did your degree, did not use Haskell at the time. I think they used a version of Miranda for a long time even.

CB (0:01:50): Yes. Eventually they did switch to Haskell, but yeah. So for me, it was still Amanda, which is– well, it’s kind of like Haskell, but the IO system was different. You got an infinite list of IO events and you were supposed to deliver an infinite list of pictures to put something on the screen.

WS (0:02:11): I mean, I think Haskell, prior to version 1.3 of the Haskell Report, had the same thing– yeah. But it’s very hard to get this right usually because you have to produce everything just in the right order and you can’t look ahead at the next request before producing the next value. So you have to keep this perfectly in sync, and that’s easy to get wrong.

CB (0:02:35): Yeah. And then, I liked functional programming from my bachelor’s, but I did Electrical Engineering things in my master’s. So my– well, the person who eventually became my PhD supervisor, Jan Kuper, he actually moved from the Theoretical Computer Science group to the Computer Architecture group in our university. Well, he was brought onto a course, as these things happen in academia when you suddenly get moved from one research group to the other. And he saw the way that they were deriving circuits from programs and he was like, “Why are they doing this through a language like C? It seems so weird.” And these kinds of descriptions looked just like the kind of descriptions that I did in my bachelor’s course on functional programming. It was like, we have a state and an input and we get a new state and an output. And this corresponds to many GUI programs or other interactive programs. And this also corresponded well to circuits. So he was trying to build up a new course around this. And then around the time that I was looking for a master’s project, he was like, “Ah, yeah, you still had the version of the course where we weren’t actually able to build the circuit. Wouldn’t it be cool if we had a compiler that could actually generate this language called VHDL (which is like the standard in the circuit design world for describing circuits)? Wouldn’t it be cool if we had a compiler going from Haskell to VHDL?”

WS (0:04:16): And did you consider– I mean, I know there are a lot of embedded languages which are also in this space. Lava is one of the famous ones, and there you can also write your circuit in Haskell and generate synthesizable VHDL or Verilog descriptions from this. But why wasn’t that good enough?

CB (0:04:37): Well, to be fair, mine was a somewhat exceptional master thesis in that it was actually the two of us. It was Matthijs Kooijmann, who started around the same time as me. I was kind of looking at Lava, but he– well, it was sort of fast and loose at our research group in terms of how you would do your master’s projects. It was fine if you would just dive straight in. So he was like, “Well, here’s Simon Peyton Jones’s book on implementing functional languages. Let’s just try what’s there and see if we can write just a compiler for the language and not even consider existing work.”

WS (0:05:19): Fair enough.

CB (0:05:20): And it was kind of serendipitous around that time. So we started with GHC 6.8. This was in 2009. And I think it was GHC 6.6, which was like the first version, or 6.4, I don’t know, just before 6.8 at least, where GHC exposed its internals as a library, a collected set of modules. And so that allowed us to make great leaps because we could just take everything from GHC on the front end.

WS (0:05:52): Yeah. And then what I think this is getting towards, I think what you’re most famous for is your work on Clash, of course, which came out of this master thesis. And correct me if I’m wrong, but what Clash does is it will take GHC Core, which has been through the GHC front end, has been type checked, and is kind of ready to be compiled further and executed. And then from that actually generate VHDL code. Is that right?

CB (0:06:18): Yeah, that’s right. It’s GHC Core. That’s the right level of abstraction. There’s STG below that, but that’s already too sequential, because STG is about how to implement functional languages on stock hardware.

WS (0:06:35): Right.

CB (0:06:37): So that’s already– stock hardware is a CPU and is sequential. Core is still a nice lambda calculus and free from time.

WS (0:06:48): Right. And presumably, this doesn’t work for all of Haskell. It’s not like you can take an arbitrary Haskell program and always generate a circuit for it. Is that fair?

CB (0:07:00): Yeah, that’s fair. And Clash operates like that. I guess that came from our electrical engineering background as well. So we want to build circuits, but we want to build circuits that are fast. And that’s what puts most of the restrictions that we have on Haskell. In principle, I think it would be possible to compile arbitrary Haskell, at least monomorphic code, to circuits, but you quickly end up turning it into a small custom CPU, which is then not necessarily parallel or fast. But in Clash, we do have semantic restrictions on the input, because since we use the GHC front end, there are no syntactic restrictions, but semantically, there definitely are.

WS (0:07:47): So, what kind of things can you not handle?

CB (0:07:49): Basically anything where the recursion depth is dependent on some unknown input, so an input given at runtime. So the standard definition of the Fibonacci sequence where you calculate the nth number, “give me the nth Fibonacci number” or “give me the first N numbers,” when that N comes from the outside. That type of recursion is something we do not handle in Clash.
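
As a minimal sketch of this restriction, in plain Haskell rather than actual Clash code: the first definition below recurses to a depth chosen by a runtime argument, which cannot be turned into a fixed amount of hardware, while the second fixes the depth at compile time, so the loop can be fully unrolled.

```haskell
-- Not synthesisable: the recursion depth depends on the runtime input `n`,
-- so there is no fixed amount of hardware to unroll the loop into.
fib :: Int -> Integer
fib n = go n 0 1
  where
    go 0 a _ = a
    go k a b = go (k - 1) b (a + b)

-- Fine: the depth is a compile-time constant, so the same loop can be
-- unrolled into a fixed chain of eight adders.
fib8 :: Integer
fib8 = fib 8
```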

WS (0:08:20): Right. But for smaller fixed-size circuits manipulating words and so forth, I think that in that space, you’re very, very good.

CB (0:08:32): Yeah, definitely. And we sort of cheat in that: we just unroll, and then the compiler errors out when it says, “Well, I’ve unrolled it a hundred times. I’ll stop now. Please run me again if you think that wasn’t enough.”

WS (0:08:48): Yeah. So actually, one of our previous guests was Gergo Erdi, and he’s written a book on Clash. And I think one of the things which he really liked about the language is, if you look at how people design domain-specific embedded languages, most people go for this deep embedding where they take an abstract syntax tree with all the language constructs and then use essentially Haskell to generate these ASTs. So Haskell acts as a little macro language, in a way, that you can use to build up expressions like this. But the problem is that if the language that you’re embedding has any kind of binding or pattern matching or anything like that, you can’t piggyback on Haskell’s syntax there. So one of the things he really ran into was– his work was on retrocomputing and he wanted to write little processors which could handle some instruction set. And then typically, what you want to do is you want to define a data type for your instructions and then explain how to kind of evaluate every instruction, which means pattern matching and then seeing how many arguments are passed to where. And if you try to do this in a deeply embedded language, it becomes really, really hard. So that’s what he really liked about Clash at least. So that was a very strong selling point for him.

CB (0:10:18): Yeah, definitely.

WS (0:10:20): This is a nicer way to do embedded languages for this kind of construct.

CB (0:10:27): And even at our company, but also with other people that use Clash, even just something as simple as the Maybe type is already very beneficial, especially if you have many Maybe values and you don’t want to mix them up. So normally, in a language like VHDL, you would have just a one-bit indicator saying that something is valid. And if you have too many and there’s a small typo, then the compiler won’t complain, because you picked something of the right width, one bit, but you picked the wrong valid bit, one that belonged to a different–

WS (0:11:05): Right. And doing things like either types or any kind of simple algebraic data types like an instruction set is– and usually, you can encode this by saying we’ll take some fixed bit pattern to represent every constructor in our data type, but then you make a typo or something goes wrong. You don’t have the type checker to help you, I guess.

CB (0:11:31): No. And then you have exhaustiveness checking. So yeah, if you need one value more than some power of two, then there are going to be many illegal values. If you have four constructors, yeah, then everything’s fine. You’re exhaustive when you’ve done all the patterns for two bits. But if you have five constructors, then yeah, you get three bits. So eight possibilities, of which only five actually mean something. So, having something above bit vectors as an abstraction is a good thing to have.
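
A hedged sketch of this point, in plain Haskell; the encoding and function names are illustrative rather than Clash’s actual representation. With an algebraic data type the compiler checks exhaustiveness and keeps validity attached to its payload, while the hand-rolled bit encoding silently admits meaningless opcodes and mismatched valid flags.

```haskell
-- Typed version: five constructors, and the compiler checks exhaustiveness.
data Instr = Add | Sub | Mul | Div | Nop

execute :: Instr -> Int -> Int -> Int
execute Add a b = a + b
execute Sub a b = a - b
execute Mul a b = a * b
execute Div a b = a `div` b
execute Nop a _ = a

-- Hand-rolled version: five opcodes need 3 bits, so three of the eight
-- patterns are meaningless, and nothing catches a typo in the opcodes.
executeRaw :: Int -> Int -> Int -> Int
executeRaw 0 a b = a + b
executeRaw 1 a b = a - b
executeRaw 2 a b = a * b
executeRaw 3 a b = a `div` b
executeRaw 4 a _ = a
executeRaw _ a _ = a   -- opcodes 5, 6, 7: illegal encodings, silently absorbed

-- The Maybe point: validity travels with the payload instead of as a
-- separate one-bit flag that can be paired with the wrong value.
firstValid :: Maybe Int -> Maybe Int -> Int -> Int
firstValid (Just a) _        _   = a
firstValid _        (Just b) _   = b
firstValid _        _        def = def
```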

MP (0:12:07): Are you able to use the types to guide you? Like once it’s down to Core, you have some dictionaries and stuff like that. How do you handle all the dictionary stuff at the circuit level?

CB (0:12:18): Ah, right. So Clash is, I guess, necessarily a whole-program compiler. So we actually load all of the Core definitions from the .hi interface files if they’re there, and we monomorphize and specialize. So we specialize on all the arguments that cannot be represented naively as a circuit, which includes functions, or data types containing functions like dictionaries. So something like map will have basically many specializations of map, specialized on its first argument. We don’t do defunctionalization.

WS (0:13:05): Okay, that was going to be my question. Would that make sense if you’re doing a whole program compiler anyhow?

CB (0:13:10): Well, maybe, but it’s expensive. So maybe it would work out from a compiler performance point of view to do defunctionalization first, but then, after you’ve done defunctionalization, you still want to specialize on the constructor argument. Because normally what you get with defunctionalization is basically an interpreter for all your function arguments. So let’s say you have many calls to map, and for all your unary functions you get some interpreter. Then you map with not, and normally you think, “I’ll get a circuit with just eight negations,” if your vector was eight long. But now suddenly it’s, “Oh, but I’ve seen 20 unary functions, so I’m going to instantiate eight of those interpreters.” And then you have to hope that the downstream tools perhaps optimize that away.

WS (0:14:15): Okay. That’s interesting, because I would’ve expected that defunctionalization, kind of conceptually for me, would take a higher-order function and then create a new data type. And then if you map this to circuits, you would have like one little map circuit, which took some information about which function to map, a first-order representation of the function to map. And then it could just execute that. Whereas if you did specialization, you would generate many different circuits, which are very similar.

CB (0:14:46): Right.

WS (0:14:47): So I expect that that’s maybe a bad thing because you’re generating bigger circuits.

CB (0:14:53): So in Clash, that’s not necessarily true, because every time you apply a function, that corresponds to an instantiation of that function. So whether you have just a single function that’s applied eight times, you get eight circuits, or you have eight functions that each get applied one time, then you still have eight. If you’re compiling a program, you’ll get a large binary because now you have eight functions. But the way that Clash views Haskell programs as circuits, it doesn’t really matter whether you have eight specialized functions and they turn into eight circuits, or you have one circuit description but in your top entity you apply that function eight times. It’s the application of the function that matters, not the function definition.

WS (0:15:46): Of course. Okay. Yeah, that makes sense.
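
A hedged sketch, in plain Haskell, of the contrast in this exchange; the names are hypothetical. Specialisation produces one first-order copy of map per function argument, while defunctionalisation replaces function arguments with a data type plus a shared interpreter, which as hardware means one interpreter instance per element position.

```haskell
-- Specialisation: `map not` becomes its own first-order function, which for
-- a fixed-length input unrolls into a row of inverters.
map_not :: [Bool] -> [Bool]
map_not []       = []
map_not (x : xs) = not x : map_not xs

-- Defunctionalisation: every unary function seen in the program becomes a
-- constructor, and one interpreter is shared by all call sites.
data Fun1 = NotF | IdF | ConstTrueF

applyFun1 :: Fun1 -> Bool -> Bool
applyFun1 NotF       x = not x
applyFun1 IdF        x = x
applyFun1 ConstTrueF _ = True

map_fun1 :: Fun1 -> [Bool] -> [Bool]
map_fun1 _ []       = []
map_fun1 f (x : xs) = applyFun1 f x : map_fun1 f xs
-- As a circuit, every element position instantiates the whole applyFun1
-- interpreter (a multiplexer over all known functions) instead of just the
-- single inverter that `map not` needed.
```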

MP (0:15:48): So kind of more of a meta question, why do you think Haskell is the right language for this? Why not Agda or why not go higher or why not go lower? Why is Haskell the right place?

CB (0:16:03): I think where we definitely depend on Haskell is its laziness, or, well, non-strict semantics, which I think is a better way to put it. In circuits, the non-strict semantics works really well for us in the sense that we can have an argument that means absolutely nothing. But if it never affects the result, then it doesn’t matter what that argument was. And this works in circuits because the way we translate case expressions to circuits is that we turn them into a multiplexer. And so that chooses between alternative operations or alternative paths. And so one of those branches might look at the argument and do some operation on it, which would be meaningless. But that’s not an issue because that alternative is never picked by the multiplexer in that particular case. And so from a circuit point of view, we’re like, “Yeah, okay, that’s something we might want to model in Haskell as an error or some undefined value.” We actually use Haskell’s undefined a lot in Clash. But that shouldn’t affect the output because we’re never looking at it. And if we were working in a strict language, it would be very annoying: if some argument is not going to affect the output, why should it affect the evaluation when we do simulation?
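
A small hedged illustration of the case-to-multiplexer view and why non-strictness matters in simulation, in plain Haskell with made-up names:

```haskell
-- In hardware, both branches exist all the time; `sel` drives a multiplexer
-- that picks one of the two results.
alu :: Bool -> Int -> Int -> Int
alu sel a b = case sel of
  True  -> a + 1
  False -> b * 2

-- In simulation, an operand that the selected branch never uses may be
-- undefined; thanks to laziness it is never forced, matching the circuit,
-- where the multiplexer simply ignores the unused branch's output.
demo :: Int
demo = alu True 41 undefined   -- 42; the False branch is never evaluated
```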

And another aspect in that sense is that in Clash, we model sequential circuits, so circuits that operate over time, as functions that operate on infinite streams. And in Clash, we use the simplest version of streams, which is really just a cons list without nil. And there, we get a lot of tying-the-knot type of recursive descriptions. We have streams that are defined in terms of themselves, because we get something like: some stream s is a zero prepended to s, in a way. Of course, it’s more complex than that. But we get these feedback loops which are very natural in circuits, and we get very natural Haskell descriptions for them. And if we were working in a strict language, like Idris for example, we would have to add Lazy annotations, things like that. And it becomes more difficult to– if we want to bring this to circuit designers, which is eventually one of my goals at least, then there’s this balance between something that works well for Haskell programmers and something that I can still explain to circuit designers. And I think Haskell fits this well. Well, to be fair, I haven’t looked at Agda, but I assume its evaluation mechanism is strict.
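
A minimal sketch of the stream model and the tying-the-knot style described here; this is not Clash’s actual Signal type (which also carries a clock domain), just the cons-list-without-nil idea in plain Haskell.

```haskell
-- A cons list without nil: every clock cycle produces exactly one value.
data Stream a = a :- Stream a
infixr 5 :-

instance Functor Stream where
  fmap f (x :- xs) = f x :- fmap f xs

-- A register: output the initial value first, then whatever came in.
register :: a -> Stream a -> Stream a
register i s = i :- s

-- Tying the knot: a counter whose input is its own output plus one.
counter :: Stream Int
counter = out
  where
    out = register 0 (fmap (+ 1) out)

-- Simulation helper: observe the first n cycles.
sampleN :: Int -> Stream a -> [a]
sampleN n _ | n <= 0 = []
sampleN n (x :- xs)  = x : sampleN (n - 1) xs

-- sampleN 5 counter == [0,1,2,3,4]
```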

WS (0:18:59): Well, it’s not strict, but the problem is that if you want to do any of the kind of examples that you mentioned, where you have streams defined in terms of themselves, you have to use coinduction, or you need to be very careful that the stream that you’re defining is well formed, right? That your–

CB (0:19:18): Yes. So you have to be productive.

WS (0:19:20): Exactly. Yeah.

CB (0:19:21): Or basically, another– I know in Idris you can just say, “Yeah, you can termination check or– yeah, whatever, believe me.” And I don’t know, is that possible in Agda as well?

WS (0:19:32): Yeah. You can say, “This will terminate,” or “Please don’t termination check this function.” Because I know it’s fine, but I don’t have the energy to explain to the machine why it’s fine. So kind of going back a little bit, you were saying how all of this Clash work came out of your master thesis. And then after your master thesis, though, you went on to do a PhD, is that right?

CB (0:20:00): That’s right.

WS (0:20:01): Okay. And what was that like?

CB (0:20:03): Yeah, it was awesome, for me at least. So I was in a house with like 12 students and three of them were doing a PhD, and it seemed they were enjoying themselves. And one of them was in the research group where I went to do my PhD as well. And I just wanted to continue working on Clash after my master thesis. And so, as I explained earlier, it was mostly actually Matthijs Kooijmann that did the first prototype of this rewrite mechanism of monomorphization and specialization. I was more involved in the code generation backend. But I did enjoy actually trying to figure out how to do what we call normalization inside of our compiler, how to do that properly. And it also gave me a lot of time to work on GHC. I was, for example, working on Mac, on OS X at the time. Well, for those that don’t know or that don’t work on GHC, OS X can be– at least back then, it would be annoying because Apple tends to add some onerous restrictions with every version of OS X. So one of the things that I worked on was: Apple suddenly decided to limit the number of symbols that you’re allowed to link against, to have in your linker. Because I think that’s a– I don’t know. I don’t know what exactly was going on, but it feels like they wanted to have the same restrictions for the iWatch as you would have for your laptop, even though they’re not the same systems. So I was involved in those kinds of things.

WS (0:21:47): Okay. So you wrapped up your PhD and when was this?

CB (0:21:50): I wrapped up my PhD in 2015.

WS (0:21:54): 2015. And then I guess you founded a company or was this already before then?

CB (0:22:01): No. So I founded QBayLogic, our company. I co-founded, I should say. Yeah, co-founded in 2016. Yeah, I graduated in 2015. I did a postdoc. That was more of a formality. I was basically– I was on two European projects back to back. They were three-year projects, or there was at least some overlap between them. And basically, we still had money to continue. And so, yeah, I just spent my time continuing Clash. So that was really nice. But in the end, yeah, I didn’t want to continue on the academic path.

So I liked writing the research papers, or the writing part was always– that was fine for me. Well, I can’t speak for everyone during their PhD, but I saw a lot of my colleagues struggling to write their theses. I think I was one of the few people who actually wrote it front to back, so starting with the introduction and ending with the conclusion, and actually enjoying the writing part. So that was fun.

And I very much liked attending ICFP as well, but I didn’t like the prospect of the 1% project lottery that I see in the academic world. That takes too much focus and too much energy. That’s, yeah, something that just did not appeal to me. So I didn’t want that, but I didn’t want to drop Clash either. And my PhD supervisor, Jan Kuper, was close to retirement, and the university, or at least the research group, didn’t want to continue with Clash either. And so that’s when we decided, “Well, we have people internationally that are using Clash. We enjoy it. The students at the University of Twente (where I graduated) also really liked it, so it’s just too good of a thing to let go.” And so that’s why Jan and I decided to start QBayLogic as contract engineers, which is a fancy way of saying, I guess, guns for hire on circuit design. That was the easiest way to get going while trying to keep Clash alive. It allowed us to keep the money coming in. I think that’s how I see why Clash is still there: compared to some of the other Haskell alternatives, and I think you mentioned Lava, what eventually happens is that all the funding runs out and students move on. I think Clash has been able to survive this long because we’ve been able to get the money there.

WS (0:24:55): Was it hard, I mean, to find customers straight off the bat? I mean–

CB (0:25:01): Yeah, this is all– it’s all capitalizing on luck. So you need to be lucky, but then you need to capitalize on it as well. So yeah, I think eventually, I was down to maybe like 500 euros on my personal– on all of our combined accounts until we got the first invoice paid. Yeah, we did get one client. They’re based in Cambridge. Yeah, they were programmers as well that somehow ended up doing circuit design, and they were using Haskell. I think Gergo mentioned her name, Ellie Hermaszewska. She just reached out, because I was presenting at HaskellX, to ask about joining their company. And I met the founder of the company there as well, and I said, “Well, I just started for myself, but I’m eager to help you use Clash at your place and business.” And so that was our first gig, and we worked on AI accelerators for convolutional neural net inference, which was– yeah, I think it was a really cool project.

So we had a Haskell description. We had a compiler that could take a Caffe description of a neural network and create a bespoke circuit for that. And so I wrote all of the configurable parts in Clash, and other people were involved in writing the compiler on top of that, which would instantiate the circuits I had written. And so it was a small team back then. Myrtle is quite a bit bigger now and they have nicer clients. And we still do some work for them as well. And so this was late 2016 that I started for them.

In early 2017, we were approached by a very large company. One of the previous guests has worked for them. It doesn’t narrow it down. But yeah, our contracts always state that we can’t mention them in public. But let’s say that many people will use their services every day and they really liked Clash. Also, one of the guests was familiar with the work. They just wanted to see it mature a bit more. So we did some projects for them where we actually did some– we expanded Clash according to their wishlist. And then from there, more statements of work followed. That helped us a lot to get going. From that second client, we were able to hire two more staff.

WS (0:27:55): Yeah. And how big is QBayLogic then?

CB (0:27:57): These days, we’re 12 people. Ten engineers, but not everyone full-time. And that includes me and also Jan. But Jan is actually– he actually retired in September. He’s still somewhat involved. But I think, yeah, we’re growing organically. It was always– well, being contract engineers, you don’t get investments. You have to be in the black most of the time. You can’t go into the red. 

WS (0:28:29): Did you find that stressful? I mean, you have to kind of live on the edge a little bit, I guess. I mean, if a few big contracts fall through, then yeah, the company must–

CB (0:28:41): Yeah, I think it was more– so 2021 was a rough year for us financially. That was a lot more stressful because we had the employees. Back when it was just me, that was fine. We are very lucky to be in an industry where it’s relatively easy to find a job, maybe now more difficult because of the large layoffs in the tech industry. But I think here locally, they still want people, and probably elsewhere. Maybe you can’t get the 200k, imaginary unicorn salaries, total comp. But no. Well, first of all, I like living here in the Netherlands, my little part of the Netherlands as well. So I would be good there. But yeah, definitely once you have– well, you feel at least a responsibility for your employees, and you want to make sure that they’re not inconvenienced by you not doing well enough to get the money in.

WS (0:29:44): Yeah. Is that something that you would– would you entertain offers in the future if there were investors looking to buy a bit of Clash, or are you very keen to–

CB (0:29:53): Yeah. So actually, we’re moving to what is, I guess, perhaps an investor poison pill: we’re moving to steward ownership. So Jan and I are actually going to fully divest our stock and interests in the company. And QBayLogic will be foundation-owned. We still have to figure this out due to annoying fiscal reasons, because the tax office treats stock like Bitcoin: it assigns some arbitrary value to the stock in your company. And so if you want to get rid of that, you have to pay tax on that, which we don’t want. But that’s not the interesting part. So we’re going to be foundation-owned. And there are a couple of big companies that are foundation-owned as well. One large one here in Europe is Bosch. Maybe people know Carlsberg as well; that’s 51% foundation-owned, and 49% is publicly traded. But Bosch is really big in terms of being foundation-owned, and that’s where we want to go as well, because we want the mission of QBayLogic to be to bring functional programming to the circuit design world. That’s what QBayLogic should be there for. So on the board of this foundation, we’ll also have people from other companies, also steward-owned, and they will make sure that the foundation will protect QBayLogic’s mission, and they’ll have veto rights and whatnot. So that’s what we’re aiming for.

WS (0:31:32): It’s a bit like the German football clubs, is that a good analogy? Which I think are also necessarily owned by the fans, at least 51%. I’m sure one of our listeners will correct me if I’m wrong, but this makes it very unattractive for foreign investors. But on the other hand, it means that you don’t have to get into bed with various, well, shady characters who are looking to launder millions through your football club or something.

CB (0:32:03): Yeah. We want it to be mission first. Doing employee ownership is really difficult. Also, stock options, whatnot. For a growing company– when we first started QBayLogic, we thought we were going to do that, but then you realize what it involves. Also, you reserve enough stock or options for the first 10 employees, and then, okay, what do you do with employee number 11? Do you tell them, “Oh no, you’re not getting any stock,” or do you go back to your 10 employees and say, “We’re going to water it down quite a lot because we expect to grow to 100”? So where you used to have 10% of the company, now you have 1%, because we needed 90% to give to all the new employees. So that’s– yeah.

WS (0:32:48): That’s tricky. Are there any particular applications of Clash that you’re very proud of or out in the wild that you can tell us about?

CB (0:33:00): Yeah. So we did have a public project with a couple of partners. We were actually sort of a subcontractor there. We worked on high-speed laser-based communication between satellites and ground stations. And this was with TNO. I have no idea how to translate that to– it’s a– I think it’s like Fraunhofer, but the Dutch version, and I have no idea how to translate this to non-Europeans.

WS (0:33:37): An applied research center, government-funded research body.

CB (0:33:40): But I think you can also– as a company, you can pay TNO to do research for you. But it was with them. And there we worked on the control parts of the mirrors in the ground station. So you have this satellite communicating with a laser beam to the ground station, but there’s an atmosphere in between the ground station on earth and the satellites in space. And so the laser– or there are actually multiple laser beams, but they get distorted a bit. And so we need to adjust the mirrors in the ground station to correct for the disturbances in the atmosphere, to compensate for that. And what was really nice there is that we were able to apply the techniques that we were teaching our students at university as well, where we would go from a very high-level specification in Haskell. And that would turn into– so this would be what we call an untimed specification, but let’s just call it a regular Haskell specification that maybe would work on lists instead of Clash’s fixed-size lists that we call vectors.

If we would give that to Clash now, it would try to complete its computation in one step, in one clock cycle. And it would do that by making an extremely large circuit, because it would do everything in parallel. And this would be too big to fit on the chip. We use an FPGA (Field Programmable Gate Array), a reconfigurable chip, and that circuit would just be too big to fit on the FPGA. And so instead, we do correct-by-construction transformations on our code. So we would have a circuit that is laid out completely in space and we would, let’s say, fold it in two. So we’d do something in two steps, so it becomes twice as small, but it would take two steps to complete. And that might still be too big, and we would fold it again. And so now it takes four steps and is four times as small. And I think that was a really nice project to work on. And yeah, there’s going to be a follow-up on that as well.
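
A hedged, much-simplified illustration of the space/time folding idea, in plain Haskell rather than the project’s actual transformation machinery: the fully spatial version does all the work at once, while the folded version reuses half the hardware over two steps and computes the same result.

```haskell
-- Fully spatial: eight multipliers and an adder tree; the result in one step.
dotSpatial :: [Int] -> [Int] -> Int
dotSpatial xs ys = sum (zipWith (*) xs ys)

-- Folded in two: four multipliers reused over two steps, carrying an
-- accumulator between steps. Same result, roughly half the area, twice the
-- number of steps.
dotFolded :: [Int] -> [Int] -> Int
dotFolded xs ys = step secondHalf (step firstHalf 0)
  where
    (firstHalf, secondHalf) = splitAt 4 (zip xs ys)
    step pairs acc          = acc + sum [x * y | (x, y) <- pairs]

-- dotSpatial [1..8] [1..8] == dotFolded [1..8] [1..8]
```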

MP (0:35:55): So I’m wondering a bit like, how has the field changed since you started in 2015? Amazon has FPGAs now in the cloud, and that was not there in 2015. So how has the whole circuit field changed since you started? 

CB (0:36:11): Yeah, we were caught off guard by the FPGAs in the cloud. We had been joking that there would be FPGAs in the cloud. But I think, yeah, we were joking about that around 2016. And then end of 2016, beginning of 2017, suddenly they were there. And I think, well, I mean, Wouter has been in the academic circles for a bit longer: the end of Moore’s Law, that was 2006, when suddenly everything had to become parallel. But I think it is true that parallel programming, GPU programming, has become more commonplace. And I think in that sense, FPGAs are slowly becoming more mainstream as well, or they’re becoming less niche. They’re becoming less niche, let’s put it like that. Not more mainstream. That is one of the goals of QBayLogic. Our big hairy audacious goal, as we call it, is by 2030 to have 50% of all FPGA projects be started in a functional language. This is the big goal. We don’t know how to achieve it, but it’s something to work towards. It’s something to inspire everyone in QBayLogic as well. Perhaps it’s not realistic, but it’s a nice goal to have. But yeah, towards that, we’ve already started. So, in the Dutch system, you have the universities and you have the Universities of Applied Sciences. I think Hochschule or– I have no idea how they translate this to a non-Dutch system.

WS (0:37:55): Polytechnics in the UK or the former polytechnics, but– yeah.

CB (0:38:00): We’ve started with one now in Enschede, and we want to move to other polytechnics as well, where we want the circuit design courses, at least for the Electrical Engineering students, to be done in Clash, and also actually convince the Computer Science departments of these polytechnics as well. So at least we start there in the Netherlands, and who knows, maybe people from other countries can help us introduce Clash or functional programming to other polytechnics as well. If you look at the Netherlands, here in Enschede (that’s where I’m from), there are maybe 200 CS students that go to the university, with probably 500 that go to the polytechnic. So it’s a larger audience.

WS (0:38:49): So I was curious, because I imagine that Clash relies quite heavily on GHC, and GHC is very much a moving target because it’s actively developed and maintained, and you get all kinds of new features like linear types or who knows what. And then how does this impact trying to keep this tool alive while other people are kind of changing things underneath you all the time?

CB (0:39:20): Well, it’s not actually that bad. I think if you read the papers, and the blog posts of other GHC contributors, they try to keep the Core language kind of stable. So the surface language will change a lot, but the Core language does not. And so far, sometimes the Core language changes a bit, but not in a way that impacts Clash all that much.

WS (0:39:49): So are you up to date on– what GHC version is Clash using? I mean, you’ve moved on from 6.8 presumably, but–

CB (0:39:58): So currently, we are on 9.2. Well, not the released version, but the development version is on 9.2. And that’s because, well, as the company has been growing, I have less time for it. And I used to be the person that keeps us up to date. And it’s really kind of things that are somewhat small. I think 9.4, this is very GHC specific, but 9.4 eliminated Integer literals from the Core language. And now you have Int literals and BigNat literals, and no longer Integer literals. And so I need to make sure that we deal with that on the Clash side. And very early on in the Clash compiler, I did get kind of tired of GHC’s assorted set of modules. And so we actually created our own Core data type, which is very much like GHC’s. And so we just translate GHC’s Core data type to our own. And so we would have a stable API over our own Core language and we just have to translate. But yeah, now that GHC has dropped the Integer literals and introduced the BigNat literal, we have to deal with those kinds of things. Or multiple home units, a 9.4 feature as well. And this affects our interaction with– as I said earlier, we load definitions from interface files. And for particular reasons, we don’t want all of GHC’s optimizations. And so the GHC API does allow you to say, “Oh, here’s my source file, give me an optimized version of it.” But that’s not what we do. So we need to do this whole chain ourselves: we tell it to parse, desugar, type check, optimize.

WS (0:41:54): And out of curiosity, you mentioned that you’re usually the person who maintains the Clash compiler, but how many people at the company work for customers, and how many work on making Clash a success or maintaining or developing the compiler?

CB (0:42:10): So it is definitely shared. I think, of the engineering staff, there are perhaps two that do not contribute to the ecosystem at all, but the rest always do at least part-time contributions, especially our early staff, Martijn and Leon, our two employees that joined in 2017. They still do work on the internals. But some of the things are related to my PhD, and some of the GHC API things, and as far as transferable knowledge goes, that’s really difficult. They could do it, but it would take them a long time, whilst I had my four to five years of PhD to learn all that, which is kind of unfair.

WS (0:43:00): And how much time do you still have for development if you also have to sign contracts with customers and start a new foundation and all of this other stuff?

CB (0:43:10): Yeah, these days not a lot. So in the last two years, it’s always been the month of January for some reason that I was able to do stuff.

WS (0:43:19): New Year’s resolutions.

CB (0:43:21): Yeah. Or I think it was mostly just contracts ending and new contracts starting then in February that would give me time. And sometimes I work in the evenings, but I think from the compiler point of view, Clash is doing pretty well. We want to grow all the libraries around it. And so that’s where my input is not necessarily needed. It’s just these, we need to upgrade to 9.4 and 9.6 kind of things. But we have clients that are still on GHC 8.6.

WS (0:43:56): So looking ahead, how would you like to see Haskell develop? I think you’re pretty much an industrial power user in a very interesting space. So you must have thought about what’s missing from the language and what you’d like to see improve.

CB (0:44:11): Well, what I would like to see is ergonomic dependent types. The Clash Prelude is very much a kitchen sink of -X language extensions. Well, one of the reasons is this fixed-size list, the vector type. And it’s just annoying to deal with, where we have Nat, the type-level natural. Then we have KnownNat, the constraint-level natural. Then we have SNat, the term-level witness, or the explicit term-level witness, where the KnownNat is the implicit term-level witness. And so these are dependent types, and we use them and we need them as well. So why I would want dependent types is that I hope that this situation becomes nicer.
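
A hedged sketch of the three flavours just listed, using GHC.TypeNats in plain Haskell; the Vec and SNat definitions below mirror the idea behind Clash’s types but are written out here only for illustration.

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures, ScopedTypeVariables, TypeOperators #-}
module NatFlavours where

import Data.Proxy      (Proxy (..))
import GHC.TypeNats    (Nat, KnownNat, natVal, type (+))
import Numeric.Natural (Natural)

-- Nat: a purely type-level natural, e.g. the length index of a vector.
data Vec (n :: Nat) a where
  Nil  :: Vec 0 a
  Cons :: a -> Vec n a -> Vec (n + 1) a

-- SNat: an explicit term-level witness for a type-level natural.
data SNat (n :: Nat) where
  SNat :: KnownNat n => SNat n

-- KnownNat: the implicit witness, a constraint solved by GHC.
vecLength :: forall n a. KnownNat n => Vec n a -> Natural
vecLength _ = natVal (Proxy :: Proxy n)

-- Converting between the implicit and explicit witnesses by hand:
toSNat :: KnownNat n => SNat n
toSNat = SNat

fromSNat :: SNat n -> Natural
fromSNat s@SNat = natVal s
```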

I accept that we can’t have full type inference, which I think– well, we don’t need it everywhere, but we will need it in most places. So there’s actually an Idris frontend for Clash from seven years ago. But I gave up on that because type inference for tying-the-knot type of recursive descriptions does not work. At least in Idris– and I don’t know whether this is a theoretical problem or a practical one, but basically, once you have a tying-the-knot type of recursive description, at least in a language like Idris, there’s no loop breaker. You can’t say, “I’ll just annotate one of the recursive bindings in the loop.” No, you have to annotate all of them. So even for types that are not dependent– like they’re just the regular types that we have in Haskell. Even for those kinds of descriptions, it will say, “No, you need to annotate every binder in the recursive group.” And that just becomes tedious. So that’s why I would still want Dependent Haskell, just so we can get rid of this. Personally, I would also like to see some love for implicit parameters.

WS (0:46:06): Oh, okay. Interesting.

CB (0:46:09): Yeah. We use them a lot in Clash descriptions. And that’s because in Clash, we use– I don’t know what the style is called, I guess applicative style. So yeah, the Reader monad does not really work for us. Monads are somewhat sequential; it’s hard to get away from that. Monads in a way capture the essence of sequencing. So they’re somewhat antithetical to what a circuit is doing. So that’s why the Reader monad doesn’t really work for us. But the implicit parameters, yeah.

WS (0:46:40): They’re not the most widely used and I don’t think they’re the most beautiful feature that Haskell has to offer, but I can imagine they’re very useful for–

CB (0:46:50): Yeah. And so maybe we don’t need all the dynamic binding, but we do need that aspect. As we know, in a way, constraints are implicit parameters, just that they are unique, whereas implicit parameters are not. That’s the whole point of them. In our case, we need them. So we use implicit parameters to distribute ubiquitous arguments like clocks and resets. They need to go to every memory element. And so it’s very tedious if you have to pass them manually. So normally, we’d use a Reader monad for that as Haskell programmers, but we don’t want monadic programming, we want applicative programming. And so currently, yeah, implicit parameters are our only option.
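
A hedged sketch of this implicit-parameter style, in plain Haskell with infinite lists standing in for signals; the Clock, Reset, register and topEntity names are illustrative, not Clash’s actual hidden-clock machinery.

```haskell
{-# LANGUAGE ImplicitParams #-}
module ImplicitClockSketch where

newtype Clock = Clock String
newtype Reset = Reset Bool

-- Every memory element needs the clock and reset, but we don't want to
-- thread them by hand, and a Reader monad would force a monadic style on
-- an otherwise applicative circuit description.
register :: (?clk :: Clock, ?rst :: Reset) => a -> [a] -> [a]
register initial input =
  case ?rst of
    Reset True  -> initial : map (const initial) input  -- held in reset
    Reset False -> initial : input

counter :: (?clk :: Clock, ?rst :: Reset) => [Int]
counter = out
  where
    out = register 0 (map (+ 1) out)

-- At the top entity, the implicit arguments are bound exactly once:
topEntity :: [Int]
topEntity =
  let ?clk = Clock "system"
      ?rst = Reset False
  in counter

-- take 5 topEntity == [0,1,2,3,4]
```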

WS (0:47:36): No, fair enough.

CB (0:47:38): And I guess lastly, performance debugging, better performance debugging, whatever that may mean.

WS (0:47:45): Is that something you run into? I mean, is that when running the Clash compiler that it can take a long time?

CB (0:47:52): Yeah, it can take a long time. And Clash specifications are just Haskell programs. And so when we’re not careful, you might get a space leak. But the Clash compiler can also– we had clients where, for some of their projects, we had to run on the biggest AWS instance with 500 gigs of RAM to generate the circuit, because there’s something wrong in the Clash compiler. Well, it takes longer to figure out where things are going wrong inside a bigger program like the Clash compiler; it’s difficult to pin down where things go wrong. You can look at the heap memory profile, and it’s death by a thousand cuts, and they all seem equally bad. But yeah, I know that people from Well-Typed, I guess, are investing there. Or Hasura and Juspay, I think together with Well-Typed, are investing there. But I definitely want to keep the whole research aspect of Haskell; that’s very important to me. Otherwise, we would never have all of these fancy features that we definitely need for Clash. All the fixed-size vector stuff, and even linear types. I did a blog post where eventually it turned out we were missing one theoretically hard feature. But otherwise, perhaps, linear types with recursive bindings is something that will work out in theory, and then Clash would have a good use for it. And so we would just adopt that. And so definitely, the research aspect should stay.

WS (0:49:33): Yeah. Great. So that’s interesting because usually, I always worry that Haskell is very much a language where you have both academics who are keen to come up with new things and industrial users who want everything to stay the same because they have to maintain a large code base.

CB (0:49:49): Yeah. Well, we did talk about this before the session. There are some aspects of the Haskell ecosystem development that I didn’t like. But that ship has sailed. And actually, I don’t think it had anything to do with academics.

WS (0:50:06): No. Okay. So thanks, Christiaan. That was a lot of fun, and thanks very much for being on the show.

CB (0:50:15): Yeah, thanks for having me.

Narrator (0:50:18): The Haskell Interlude Podcast is a project of the Haskell Foundation, and it is made possible by the support of our sponsors, especially the Monad-level sponsors: Digital Asset, GitHub, Input Output, Juspay, and Meta.
