18 – Matthias Pall Gissurarson & Jimmy Koppel

Recorded 2022-10-17. Published 2022-11-17.

In this episode Matthias Pall Gissurarson & Jimmy Koppel are interviewed by Wouter Swierstra and Niki Vazou. They talk about program synthesis, typed holes, program repair, and generating properties using a new technique called ECTAs.


This transcript may contain mistakes. Did you find any? Feel free to fix them!

Wouter Swierstra: Welcome to the next episode of the Haskell Interlude. In this episode I talk with Matthias, Jimmy, and Niki about program synthesis, typed holes, program repair, and generating properties using a new technique called ECTAs.

WS: Welcome to the next episode of the Haskell Interlude. I’m Wouter Swierstra and I’m joined here with my co-host Niki Vazou. We have two guests tonight, Jimmy Koppel and Matthias Pall Gissurarson, and we’re here to talk about the work that they’ve done in program synthesis and program repair. So let’s start with Matthias. Perhaps you can say a little bit about your background and how you got into Haskell.

Matthias Pall Gissurarson: So it all started in a, you know, classic way, with a programming languages course. I just got fascinated by how easy it was to express the data flow, and it was such a different way of programming, right? And it really helped that at the time I was using XMonad a lot for my window management, and there they were talking about iterating over lists to manage windows, and I was like: wow, this is fancy stuff. That was ten years ago. So after graduating from my undergraduate I was trying to figure out where to go next, and there was this place called Chalmers, and there were a lot of Haskell papers coming from Chalmers, and I’m like: all right, that sounds like a nice place to be. So I went there for my masters and ended up doing a PhD there. I’m in my last year of the PhD there now.

WS: Okay, so you did your undergraduate in Iceland presumably, right? And then moved to Chalmers for your masters.

MP: Yeah, the University of Iceland.

WS: Okay, super. And how do you like Sweden so far?

MP: I mean, it’s nice. It’s very walkable. But it’s also very Nordic, right? So people don’t bother you too much on the street, which is always good. I mean, for the Nordics it’s a good thing, but it’s always nice to be talked to as well. So, you know, pros and cons.

WS: Okay, whether that is a good thing or a bad thing I don’t know. Okay, great. And Jimmy, can you say a little bit about your background, perhaps?

Jimmy Koppel: Ah, sure. So I did my undergrad at Carnegie Mellon, which is not a Haskell place; it’s where SML was king. I was very surprised going in: in high school I was very into Lisp, and CMU Common Lisp comes from there, and I get there and, like, the Lisp people are gone. So after I graduated undergrad, and I actually graduated ahead of plan because I got a huge grant from Peter Thiel to leave school, which I took to mean graduate early, I started a company. I got really excited about the papers that were coming out in 2012 around program repair, and I thought: I’ll put it on a server and make a lot of money. What was controversial about those papers is that, let’s say, program repair that takes 300 servers to run is an absolutely great idea technologically and not a great business idea. But that’s what I was going to start working on, so I thought: okay, we need a language that’s good for hiring people and good for compiler-type stuff. So the choices were pretty much OCaml and Haskell, and at the time I knew a good bit of OCaml and almost zero Haskell. I thought: well, use OCaml, because it’s more popular. But when I started looking into it, everyone who said it’s more popular was out of date; Haskell is more popular now. So I tried Haskell, and I’m absolutely delighted with the choice. It’s now easily my favorite language that normal people have heard of.

WS: Great. So thanks for that background. I think one of the things you spent a lot of time working on is the holes in GHC and filling in these holes. So can you fill us in and say a little bit about how that came about?

MP: Ah, right. So one of the nice things about Haskell is that, you know, we have a lot of information around, right, in the types. I was trying to do a master’s project, actually, on kind of modelling time, like a monadic constraint or something, and I wrote a little bit of stuff and it turned out it was very, very hard to write things, because it was a very complicated stack, right? So I thought: the information is there, the compiler knows what it wants, but it just can’t tell me. That’s when I got into the whole hole-fit synthesis thing, right? It’s like: here’s a perfect place for us to take that information that we already have and run some kind of synthesis to come up with what actually can be used there. And in languages like Haskell, because of all the types, there’s usually not that many things that fit if you look at single identifiers, right?
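
The idea Matthias describes can be seen in a small, self-contained sketch (the function and the suggested fit below are illustrative, not output captured from GHC):

```haskell
-- Writing a typed hole such as
--
--   f :: (a -> b) -> [a] -> [b]
--   f g xs = _
--
-- makes GHC report the hole's inferred type ([b]) together with
-- "valid hole fits": identifiers in scope whose types match.
-- Here we show the program with the hole filled by one such fit.
f :: (a -> b) -> [a] -> [b]
f g xs = map g xs  -- the hole, filled with a suggested fit

main :: IO ()
main = print (f (+ 1) [1, 2, 3 :: Int])
```

Compiling the version with the hole left in place is what triggers the error message containing the candidate fits; the filled-in version above simply runs.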

WS: Yeah, did you have any exposure to Agda when you were at Chalmers?

MP: Yeah. So we have a little joke about that, right? We sit on the fifth floor, we’re the functional programming people, and they do higher-level programming: they sit one level higher than us, on the sixth floor, and that’s where they do the Agda stuff.

WS: Um, yeah, yeah, yeah, yeah, yeah.

MP: So yeah, I did do some Agda, and I was exposed to that. They do proof search there as well, right? And that’s also where this whole thing came from: you can just click one button and it kind of expands the definition and adds more holes, right? And I thought, you know, we should have this also.

Niki Vazou: So do you have an intuition why this already existed in Agda while in Haskell it took so long? And, as you will tell us, I think it’s still not there at the Agda level.

MP: I mean, at the Agda level, I think it’s very intuitive when you’re thinking about it as a proof assistant, right? You want help writing proofs. But it’s also the fact that the more complicated the types get, the harder it is to find things that fit, and it can get hard because you really have to be in a very specific context for things to actually work. Whereas in Haskell we’re more free: there are fewer variables that are bound to something particular, so we can actually suggest a couple of things, whereas in Agda you either end up getting a refl or you end up getting, you know, additional holes.

WS: So yeah, I think one of the things with Agda, though, is that, like you say, the types are already a lot more complicated, so from the very beginning the goal was always to have this interaction mode, right? Because it’s really hard to write a flawless Agda proof term of more than two lines without blinking. So there was always this idea of gradually writing that term together with the programmer, and that’s something which these holes do as well, and it’s really great to see that in GHC. It’s one of my favorite things ever, because I’m also really bad, when the types get complicated, at doing the whole type inference in my head. So I put the hole there and then figure out: okay, wait a minute, I know I want to call this function with, say, three arguments, let me figure out what those are, and then you can iteratively write your program in that style. The other thing about synthesis, though: I don’t want to say Agda is ahead of GHC, but one thing it has going for it is that oftentimes I only use the synthesis when I’m doing a proof, where I don’t really care what term gets generated, which is very different from writing a function where I really care which integer gets returned or something. Whereas if I’m trying to prove something, usually, not always, but usually, I don’t particularly care about the proof being produced, so I’m like: if you can fill anything in, I’m happy. But I do kind of agree with what Niki was suggesting, that the UX in Agda is a little bit smoother than in Haskell. I mean, the holes can be pretty noisy at times, and there’s a lot of information which kind of gets thrown at you. But they’re still awesome, so no complaints from me. Is that something you’ve been thinking about as well?

MP: Ah, yeah. I think also in Agda the interface through the holes is a lot nicer, right? And I think it’s because the Agda interactive mode was kind of developed in lockstep with Agda, so you can see the current context and things like that. Whereas in Haskell we’re currently kind of limited by the error message: what we’re doing is generating an error message, and then we have to find ways for people to query that error message. So we’re still working out what is a useful thing to display for Haskell, because there are also a lot more things in scope, usually, because you’re not working on a specific proof; you’re kind of writing a big program.

WS: Right, right. And then on top of that you typically have a whole bunch of stuff which is overloaded or polymorphic and could fit, right? And then it’s sometimes hard to narrow that down a little bit.

MP: Yeah, exactly, exactly. And I think that’s also why we really benefited from these ECTAs, right? Because they do have a mechanism to say these things should be relevant. So we’re going from kind of naive synthesis to a lot better synthesis, by being able to express that these things are more important than these other things.

WS: So, you mentioned ECTAs briefly. Can you say something about what these are?

JK: Sure. So ECTA stands for equality-constrained tree automata, and what they are is a way to represent the entire space of programs of a set size satisfying some constraint, like being well typed, and then to give you all of them very efficiently.

WS: And so the kind of use case here is that you have a huge space of possible programs that could fit, but you want to pare that down more efficiently. So can you say a little bit about the implementation of these ECTAs, or how they compare to some of the existing techniques, like e-graphs, that people might be familiar with?

JK: Sure. So in the field of program synthesis, one of the major subfields is enumerative program synthesis, where the idea is: I’m going to have some clever data structure, some kind of dynamic programming thing, that’s going to represent a huge, huge number of programs, and then I’m going to try to fish out one that’s good in some way. Or say I have the entire space of programs that works for one example, and I have the entire space of programs that works on another example; then I take the intersection, and the intersection again, and then I draw from the intersection.
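
A toy model of the per-example intersection Jimmy describes, with a made-up candidate set; real enumerative synthesizers represent these spaces with clever data structures rather than plain lists:

```haskell
-- Each candidate "program" is a named function (names invented here).
candidates :: [(String, Int -> Int)]
candidates =
  [ ("id", id), ("double", (* 2)), ("inc", (+ 1)), ("square", \x -> x * x) ]

-- The space of candidates consistent with one input/output example.
spaceFor :: (Int, Int) -> [String]
spaceFor (i, o) = [ name | (name, prog) <- candidates, prog i == o ]

-- Intersecting the spaces for two examples narrows the choice.
main :: IO ()
main = do
  let s1 = spaceFor (2, 4)   -- "double" and "square" both map 2 to 4
      s2 = spaceFor (3, 6)   -- only "double" maps 3 to 6
  print [ n | n <- s1, n `elem` s2 ]
```

Each example's space is large on its own; drawing from the intersection is what makes the answer specific.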

WS: Right. And what do these ECTAs stand for? Some form of tree automata, right?

JK: Yes, equality-constrained tree automata. So the older approaches for representing these huge spaces of programs are version space algebras and e-graphs, and I actually proved that these are, representationally, a subset of something else called tree automata.

WS: Right. So tree automata, I’ve heard of these before, but usually in the context of formal languages, I guess.

JK: Yeah, so tree automata are very similar to context-free grammars. It’s like saying: if you have an expression, a plus, and an expression, you can merge them together and get another expression.

WS: You say these tree automata are a bit like context-free languages. So how does that relate to Haskell programs?

JK: So I talk about tree automata in terms of regular and context-free languages because it’s something people are familiar with that’s quite close to tree automata. And that is one way to think about it: through this lens you can write a tree automaton for all Haskell programs, or all Haskell expressions, or any syntactic class of Haskell, and many other languages. But that’s not the only thing you can use them for: you can use them to write down spaces of things which are quite unlike programs in some language, though you can definitely use them to write down spaces of programs. The really nice thing about tree automata is that, as we found, they’re kind of more general than version space algebras or e-graphs, and they’re very theoretically nice: there’s all this rich history of automata theory behind them, so they’re quite open to being extended. Some of the early work on these other ways of representing spaces of programs, version space algebras, already noted that they are really good at representing the independent join between spaces. Say there are a thousand things on the left and a thousand choices on the right; put them together and you’ve got a million possible choices. But they’re not good at a dependent join: a thousand things on the left and a thousand things on the right, but only the subsets that have a certain thing in common between them, or maybe have to be the same, which is a lot less than a million. The dependent join was still an open problem, and ECTAs are our solution. Whereas any of these things can be really good at representing the space of, say, all Haskell programs, once you restrict it to the space of well-typed Haskell programs, or more complicated subsets, they would blow up, but ECTAs do that very efficiently.
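
The independent versus dependent join distinction can be illustrated with plain lists (the expressions and types below are invented; an ECTA represents the constrained space compactly instead of filtering pairs one by one):

```haskell
-- Each side is a list of (expression, its type) pairs; entries are made up.
left, right :: [(String, String)]
left  = [("xs", "[Int]"), ("n", "Int"), ("s", "String")]
right = [("ys", "[Int]"), ("m", "Int"), ("f", "Int -> Int")]

-- Independent join: every left choice paired with every right choice.
independentJoin :: [((String, String), (String, String))]
independentJoin = [ (l, r) | l <- left, r <- right ]  -- 3 * 3 = 9 pairs

-- Dependent join: only pairs satisfying an equality constraint
-- (here: the two expressions must have the same type).
dependentJoin :: [(String, String)]
dependentJoin = [ (le, re) | (le, lt) <- left, (re, rt) <- right, lt == rt ]

main :: IO ()
main = do
  print (length independentJoin)
  print dependentJoin
```

Filtering like this enumerates the full product first; the point of ECTAs is to represent the dependent join without ever materializing the product.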

NV: And so in practice you used this kind of structure to make Haskell type-directed synthesis faster, right?

JK: Um, that’s right.

WS: Right. So I could imagine that it’s easy to come up with a whole bunch of programs and try enumerating all kinds of Haskell programs, but then the hard thing is that you want to pare down that huge space and find well-typed programs, and maybe even the programs that satisfy certain properties, or have some nice behavior, or would make sense in the context where you’re currently trying to find a term to fill a hole. So what kind of constraints can you put on the terms that you’re looking for, besides just well-typedness?

JK: Yeah. So the well-typedness is not a built-in constraint of ECTAs; it’s something you encode as equalities. In the case of well-typedness that’s kind of straightforward: you want the type of a function’s argument to match the type of the actual supplied argument. With polymorphism it gets a little more interesting, because there the type of something like the identity function is anything to anything, where those two things are equal. But there are quite a few other constraint systems you can encode in these equality constraints. So another thing you can do is represent a bunch of different programs and anti-unify them, and say: this thing here, this number here, could be here either because it’s a constant expression or because it’s being supplied as an argument; each of those you can represent in a compact way. A little more out there: we’ve also implemented a SAT solver in ECTAs, where you say, over here I have only choice variables, and over here I have a variable expression, and whether variable x1 is set to true or false over here must be the same as whether it’s true or false over here.

WS: Yes. And was this more of a thought experiment or a proof of concept, or is it a realistic attempt to write an efficient SAT solver?

JK: Yeah. So actually, I think that if you don’t want to enumerate all solutions to a problem but just find a single one, you probably could write an efficient solver using ECTAs; I think that would be a very interesting project. When I first did it, it was because I still did not understand the power of the constraint language, and doing that showed it’s at least strong enough to be NP-hard. Looking at prior literature, by the way, I’m pretty sure they can be EXPTIME-complete, quite a bit stronger than NP-hard, in their constraint representation power.

MP: And yeah, one of my favourite things about the way Hectare is implemented is that it’s really a lazy implementation of these ECTAs; it really builds on the power of laziness, right? You kind of build up a representation that just says: here are all the terms, this list is a list of all the terms. And then in the background it’s actually generated, right? And what you can do with that is, it’s actually implemented kind of within the list monad, so you can look at the current state and what you’ve seen so far, and you can actually prune certain branches from the expressions that you’re generating. And, you know, I think if you wrote this in a non-lazy language this would be quite hard: there would be a lot of bookkeeping, right?
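
A rough sketch of the lazy list-monad enumeration style being described (the Expr type and the size bound are illustrative, not Hectare's actual representation):

```haskell
-- A tiny expression language: variables and applications.
data Expr = Var String | App Expr Expr deriving Show

-- "This list is a list of all the terms": enumerate expressions up to
-- a size bound in the list monad. Because the list is lazy, consumers
-- that take a prefix, or prune branches, never force the whole space.
enumerate :: Int -> [Expr]
enumerate n
  | n <= 0    = []
  | otherwise =
      [ Var v | v <- ["f", "x"] ] ++
      do fn  <- enumerate (n - 1)
         arg <- enumerate (n - 1)
         pure (App fn arg)

main :: IO ()
main = do
  print (take 3 (enumerate 2))  -- only a prefix is ever generated
  print (length (enumerate 2))
```

In a strict language the same structure would need explicit generators or callbacks; here `take` on the lazy list is the pruning.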

WS: Yeah. So just to follow up on that point: I could imagine that if you’re doing this kind of thing, it’s typical, when you do these search problems, to split the search into two parts, right? One which is generating candidate solutions, and the next part is pruning down bad candidates and finding the thing that you’re looking for. But you mentioned laziness was important, and I agree with that, but is the sharing also important? Is that something that you exploit? Because I could imagine that there are lots of opportunities for sharing if you set things up correctly, because you might kind of enumerate subterms over and over again.

JK: Oh yeah, absolutely. When you go to enumerate, you can reuse partial work you’ve already done. One little clarification on Matthias’s prior comment: the enumeration part definitely takes advantage of laziness, but the main laziness is in the constraint propagation; the data structure itself is strict.

NV: So is this technique Haskell-specific, or can you implement it even for other languages, like OCaml?

JK: Not at all Haskell-specific. So, in general, doing synthesis of a pure functional language is easier, for any technique other than maybe neural ones, than doing synthesis of a language with mutation. But beyond that, in fact, it might actually be easier to do it for OCaml, because, like, yeah, type classes.

NV: So what is the status of synthesis with type classes?

JK: Yeah, so there is an encoding where the type class instance becomes a type of argument, and then the synthesizer is asked to supply that argument.

NV: So it is not affected, right? I didn’t get it; can you maybe give an example?

JK: Sure. So say you want to synthesize something of type Show a => a -> String.

NV: So, Show a => a -> String. Okay.

JK: Yeah. Then you’d have Show as an ordinary type constructor of kind Star to Star, and you’d basically transform the type into what it would be after dictionary passing. But that’s basically it.
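
The dictionary-passing translation Jimmy sketches can be written out by hand; the ShowDict type below is a hypothetical stand-in for the dictionary GHC generates for the Show class:

```haskell
-- A hand-rolled dictionary for the Show class: an ordinary type
-- constructor of kind Star to Star, as described above.
newtype ShowDict a = ShowDict { showD :: a -> String }

-- The constrained type   Show a => a -> String
-- becomes the plain type  ShowDict a -> a -> String,
-- which a type-directed synthesizer can treat like any other function type.
describe :: ShowDict a -> a -> String
describe dict x = "value: " ++ showD dict x

-- The Int instance, as an explicit dictionary value.
intDict :: ShowDict Int
intDict = ShowDict show

main :: IO ()
main = putStrLn (describe intDict (42 :: Int))
```

After this transformation the instance is just another argument to supply, which is exactly what makes the encoding fit the synthesizer.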

NV: Ah, okay. So your synthesizer knows the desugaring into dictionaries, and then everything works, maybe a bit more complicated.

JK: So, the current status: my co-authors from UCSD did more of the front-end work, the actual application; my work is mostly on the core library and constraint solver. So I’m not sure right now whether the program does the desugaring, but we do have benchmarks involving type classes.

NV: Okay, so you do have benchmarks with type classes. And can you maybe say, I don’t know, what is a good benchmark that you have, and what is the time that synthesis requires?

JK: Sure, my pleasure. Let me say a word about our overall benchmark results. Hectare is actually a fourth-generation tool for the problem of synthesizing small snippets based on types and the types of things available. The first generation was a tool called Prospector, from a paper by Ras Bodik’s group called “Jungloid Mining”. Their approach basically only works on functions of one argument, or rather functions where only one argument is in question and the others are fixed: you build a huge tree of types, say this function goes from type A to type B, and when you want to synthesize something you pull together the types of the variables in context, search the tree, and you get a small program. The second generation was a tool called SyPet, by Ruben Martins, who is now a systems scientist at CMU, previously a postdoc in Isil Dillig’s group at UT Austin. He figured out how to actually make this work properly with multiple arguments: you represent all the ways of taking a bunch of typed things to a new typed thing as a Petri net, and then he used his SAT-solving chops to make that work, so it can do type-directed synthesis for Java. But once you get into polymorphism, if you think of supplying the type explicitly to a polymorphic function, the space of types to supply is now infinite. So the third generation was Hoogle+, out of UC San Diego; Zheng Guo is the main author, with Nadia Polikarpova. They got this to work with polymorphism and synthesize Haskell programs, and they do it still by using Petri nets; they tried a ton of things and decided only Petri nets were fast enough.
But it’s still not that fast, and the way it works is: first they try it assuming you only instantiate the polymorphic variables in a certain way, then they try to solve, and they may get a candidate that doesn’t actually typecheck with the real, more precise types; so they expand a bit, try a larger Petri net that’s a little more precise about the polymorphic types, and try again, and keep trying until they get it. So Hectare is a fourth-generation tool which solves the exact same problem as Hoogle+, and here’s our big headline result. On their benchmark set, Hoogle+ solves 39 of 45 benchmarks; we solve 43 of 45. And I should mention that this paper is a collaboration with the Hoogle+ authors, Zheng Guo and Nadia, so it’s not us versus them; it’s us versus other us. So Hectare solves more benchmarks, and for the benchmarks they solve in common it is 7 or 8 times faster in geometric mean, and it does all this using one-tenth as much code, because Hoogle+ has all this infrastructure, and with Hectare you just set up an ECTA for the problem and then you hit solve, or rather enumerate. For many of the benchmarks Hectare is able to enumerate all well-typed terms of that type, given those inputs, faster than Hoogle+ can find a single one. And not only that, it’s also better at higher-order stuff by a lot, since we solve many more things on another benchmark set. I don’t have the numbers for specific benchmarks at hand, but I can say there are at least two benchmarks, I don’t know which ones they are but I’m looking at a graph right now, that Hectare solves in under 3 seconds where Hoogle+ times out after over 300 seconds.

NV: And so is it an open source release? Can every Haskell programmer use it now?

MP: So there’s an ECTA plugin that’s available, and it’s actually being integrated into HLS, the Haskell Language Server, right? So soon enough you’ll be able to, as an optional plugin, turn on the setting and it will start running an ECTA in the background to fill holes for you. Because, I mean, what we did previously was this very naive search where we just tried things and then ran the type checker, and it was very slow, but it worked for single identifiers. Now with the ECTA we can actually start coming up with longer programs, composed of multiple identifiers, and you can make it as big as you want. But the cool thing is that even with the whole Prelude in scope, and everything else in your module, you’re still getting all well-typed candidates of size 5 within a second or two. So it’s fast enough to be interactive.

NV: So you both mentioned, I guess, completeness, giving all the possible solutions to something that you care about. But I think, for me as a user, this sounds scary. I don’t want to see all of them; I want to see the one that I want. So do you have any heuristics? How are you dealing with all these alternatives?

MP: So currently there are not many heuristics at work. We try to make sure that candidates that use local things are higher up, but this whole ranking of candidates is a very difficult problem, right? In other areas, like people doing program repair for Java, they use a lot of neural nets to try and rank candidates, but we haven’t really found a good way to do that yet. So the current kind of thing you should do is just run it with not too many expressions at once, and then you can kind of bump it up as you go. It’s really good if you’re running an automated tool that can actually look at all the expressions, but to use it interactively, it does become quite hard to present the best ones first.

NV: But why not just use the methods that the Java synthesizers use? Is something different in the Haskell world?

MP: Ah, no, it’s just that we haven’t trained a network yet, right? And it’s also just, I want to say, a logistical challenge: are we going to ship a matrix of weights with the Haskell Language Server to rank hole-fit candidates? It’s definitely something you can do, but I don’t think people would appreciate starting up VS Code and then their GPU going to 100% running some neural network just because they’re trying to fill a hole. They already have a neural network in their brain to kind of do that, right? So it’s also a UX challenge: now we have the technology, how do we make it nice for people to use? And there are projects like the Hazel project, right? They’re not for Haskell; they have their own language where typed holes and hole fits are built in from scratch, and they have a very nice editor, and they can show you values of expressions based on different choices in the environment while you’re programming. So there are people tackling the UX problem right now, but how to scale it up to Haskell itself, that’s another problem. So we’re getting there, yeah.

WS: But it could be that there are two things, right? One is that you need to start somewhere, and once you collect more data maybe there are certain patterns that emerge once you see people actually choosing. Like you said, people favor expressions that use a lot of local stuff over expressions which refer to some module where you just forgot to hide some of the imports or something. And on the other hand, you could imagine, once you have this whole space of candidate solutions, interacting with the user once again and saying stuff like: oh, did you mean, okay, what should the result be here if the list happens to be empty, or something, I don’t know, right? And then pruning the search space based on that.

MP: Right, right. Yeah, and there are efforts to do this kind of example-guided synthesis, especially in functional languages. I think Steve Zdancewic and his student, I’m blanking out on his name, it was Peter-Michael Osera I think, were doing this; it was called Myth, right? It actually took examples and figured out the function that you wanted to synthesize based on that. But in that setting, they really define a very nice language: everything is an algebraic constructor, so you always have finitely many cases. You don’t pick any integer; you always have a zero or a succ of zero. But the problem is also: how do we scale that approach to a language like Haskell, right? Where, yeah, exactly.
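
A miniature of the example-guided setting being described, using Peano naturals; the candidate set below is invented for illustration, and Myth itself works quite differently (type-and-example-directed deduction rather than testing a fixed candidate list):

```haskell
-- With an algebraic encoding of naturals there are only finitely many
-- constructors to consider, which is what makes example-guided search
-- tractable in languages like Myth's.
data Nat = Zero | Succ Nat deriving (Eq, Show)

-- A made-up pool of candidate functions.
candidates :: [(String, Nat -> Nat)]
candidates =
  [ ("const Zero", const Zero)
  , ("id",         id)
  , ("succ",       Succ)
  ]

-- Keep the candidates consistent with every input/output example.
consistent :: [(Nat, Nat)] -> [(String, Nat -> Nat)] -> [String]
consistent exs cs = [ name | (name, f) <- cs, all (\(i, o) -> f i == o) exs ]

main :: IO ()
main = print (consistent [ (Zero, Succ Zero)
                         , (Succ Zero, Succ (Succ Zero)) ] candidates)
```

With arbitrary integers or type classes in play, as in full Haskell, the candidate space is no longer a small finite list, which is the scaling problem mentioned above.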

WS: So yeah, it’s no longer a toy, right? You have a gazillion type classes, and functions which are highly overloaded, and sequence, which will fit anywhere, and all of these things. Sure.

MP: Yeah, exactly. But it’s a very interesting space to be in, right? It feels like we have a lot of technologies kind of converging on this: how can we use these all together to achieve the dream of just having your compiler write the program? You just give it some suggestions and then it writes it for you, right? But we’re not done yet, and that’s why it’s still an interesting place to be in, I think.

WS: Sure. So you already mentioned program repair briefly. Can you say something about what you mean by that?

MP: Right. So program repair is this thing where you have a program which has a bug, and then you have some way of specifying the bug, usually a test suite that says these tests fail. And then the idea is: can we automatically come up with modifications to the original program so that the spec is now valid, so that all the tests pass? And that’s an interesting space, because it also uses synthesis a lot, but instead of having to come up with a solution in two seconds...

WS: Sure.

MP: ...you can run it in the background on some continuous integration system for hours, right? So you actually have the luxury of exploring all these things. And what we did is, we wrote this tool called PropR, which does this for Haskell programs. It takes a Haskell program and a set of tests, we were focusing on properties, so property-based tests via QuickCheck, and it figures out from the properties which parts of the program are broken. We use this cool thing in GHC called Haskell Program Coverage, which tells you that during this run these parts of the program, these expressions, were touched. So we find the failing properties, and what QuickCheck does is that it shrinks the input down, so you get a very small example that shows you exactly what went wrong. Then we run that, we see which parts of the code are involved in that fault, and then we take the code and we perforate it, as we call it: we poke out the expressions that were involved, replacing them with typed holes, and then we run the whole typed-hole synthesis, by default with single identifiers. But we also use these hole-fit plugins, which actually allow you to modify the synthesis: we can add things like the Jungloid approach that Jimmy mentioned, things mined from the code base, and we can do more sophisticated synthesis via ECTAs, by just extending the valid hole fits. Because you already have the idea: we want to fix this thing, this is the context, now feed it into some synthesizer, and what do we get out? And then we try to combine those solutions to fix the whole program, because it’s usually not enough to fix just one place, right? You have to fix multiple places at once.
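
The "shrink to a small failing input" step of this workflow can be mimicked in a few lines of plain Haskell; this is a hand-rolled stand-in, not PropR's or QuickCheck's actual machinery (which generates random inputs and then shrinks them):

```haskell
-- A buggy definition: suppose the reverse was accidentally dropped.
brokenReverse :: [Int] -> [Int]
brokenReverse xs = xs

-- Two properties in the QuickCheck style. The first still holds for
-- the buggy version; the second exposes the fault.
prop_reverseTwice :: [Int] -> Bool
prop_reverseTwice xs = brokenReverse (brokenReverse xs) == xs

prop_reverseHead :: [Int] -> Bool
prop_reverseHead xs = null xs || last xs == head (brokenReverse xs)

-- Try small inputs in order of size, so the first failure found is
-- already a small counterexample, playing the role of shrinking.
smallestCounterexample :: ([Int] -> Bool) -> Maybe [Int]
smallestCounterexample p =
  case [ xs | xs <- smallInputs, not (p xs) ] of
    []      -> Nothing
    (x : _) -> Just x
  where smallInputs = [ take n [1 ..] | n <- [0 .. 4] ]

main :: IO ()
main = do
  print (smallestCounterexample prop_reverseTwice)
  print (smallestCounterexample prop_reverseHead)
```

The small failing input is then run under coverage to see which expressions it touches, and those are the ones that get poked out as typed holes.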

WS: Yeah, and is this easier? Because it seems like, by poking holes in it, if you look at just the HPC coverage, you're throwing away an awful lot of information. Because I'd hope that you're looking for the smallest change which will let the test suite pass, as opposed to throwing away half your program and then hoping that you can synthesize it back without the bugs somehow.

MP: Right. I mean, we do it in kind of a lazy way, right? We poke out all possible combinations of holes, and then we explore them in order of size. So you've thrown out less of the program in the beginning, right?
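[Editor's note: the "explore in order of size" idea can be sketched with plain list functions: enumerate all non-empty subsets of pokeable locations, smallest first, so early candidates discard as little of the program as possible. The `locations` list is a hypothetical stand-in for expressions flagged by coverage.]

```haskell
module Main where

import Data.List (sortOn, subsequences)

-- Hypothetical expression locations flagged by Haskell Program Coverage.
locations :: [Int]
locations = [1, 2, 3]

-- All ways of poking holes, explored smallest-first: fewer holes means
-- less of the original program is thrown away early in the search.
pokeSets :: [[Int]]
pokeSets = sortOn length (drop 1 (subsequences locations))  -- drop the empty set

main :: IO ()
main = print (take 4 pokeSets)  -- the single-hole perforations come first
```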

WS: Right, right, that makes sense. And is this also something which could be useful... You mentioned this as program repair, where you're trying to track down a bug in a program. But what happens if you have a type error in your program, which happens an awful lot, and then I'd really like to know how to fix it? I've gotten pretty good at understanding GHC's error messages, but that would also be super helpful. Or maybe I do some kind of refactoring, where I change a function so it takes an extra argument, and then I usually have to manually go through all the calls to this function and patch them up. Is that another use case of this kind of tech?

MP: Ah, so that's definitely a use case, right? But what we do now is very reliant on being able to run the program. Um, and you know, there are... so I had another project, a plugin called WRIT, which actually allowed you to ignore type errors in some cases. And another thing you can do is make things dynamic or something like that, to try and get away with it. But I mean, they're not quite built into GHC. You can just...

WS: Um, right, yeah, deferred type errors or something like that, right? Yeah, yeah.

MP: Defer it, and then you kind of run it, and when it hits that error it prints out the error, right? But because our technique is based on this Haskell Program Coverage thing to localize the fault, to figure out where exactly it happened, we can't apply it out of the box. But it's definitely something we could do, especially because GHC is usually very specific: you have a type error, and it says this is the part that is causing the type error. Um, but then you also get into the whole...
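[Editor's note: the deferral being discussed is GHC's `-fdefer-type-errors` flag: type errors are downgraded to warnings at compile time and only raised as runtime exceptions when the ill-typed expression is actually forced. A minimal sketch:]

```haskell
{-# OPTIONS_GHC -fdefer-type-errors #-}
module Main where

import Control.Exception (SomeException, evaluate, try)

-- Ill-typed on purpose: with -fdefer-type-errors GHC compiles this
-- anyway, and forcing 'oops' raises the deferred error at runtime.
oops :: Int
oops = True

main :: IO ()
main = do
  r <- try (evaluate oops) :: IO (Either SomeException Int)
  putStrLn $ case r of
    Left _  -> "deferred type error raised at runtime"
    Right _ -> "no error"
```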

WS: Sure sure yeah.

MP: Ah, blame thing, right? Like, is it the user of the type or is it the producer of the type that's at fault, and which one should we change? Because if you change one type, then you have to fix all the places it's used. So the simplest fix would be to roll back the type change, but that's not what you want, right? You want to do the whole change, fix all the other places. So it becomes a bit hard to do. But the people doing gradual types have a lot of work on this kind of blame calculus, figuring out what's allowed and who's at fault. So there's a lot of tech out there that we could definitely apply to fix type errors. But right now we focus only on test-suite errors, on programs that actually run.

WS: Yeah, that makes sense, thanks. Um, so another thing that you've mentioned you've been thinking about is in the area of what QuickSpec does. For listeners that don't know: many people will be familiar with QuickCheck, where you kind of generate unit tests, basically. But there's also a related idea called QuickSpec, where you start with some API, and then QuickSpec will try to find properties for you. It does this in a super smart way, where it has lots of hypotheses and then tries to invalidate those by generating tests and showing that two terms cannot be equal. But you've also done work on something called Spectacular. Can you tell us about that?

MP: Ah, right. So this was a project that Jimmy suggested. He knew that I was doing program synthesis, and he wanted to see: basically, can we use the ECTAs to make that a lot faster? And then we kind of went off and did that. So we took the ECTA that's in Hectare. And QuickSpec is a very, very clever tool, right? What it actually does is, it's not just generating hypotheses, it's actually generating and maintaining a set of unique terms.

WS: Sure, yeah, yeah.

MP: So if you have an x and you have a y, it makes sure that you're not mixing those together, and then it creates bigger terms by combining the unique terms, and then checking: if I combine these two, is that equal to some other unique term that I have? And if so, then we have a property: this combination is the same as this other combination. And then slowly you build up bigger and bigger terms by comparing all of these. And this is where the laziness of the ECTA that I was talking about earlier comes in. When we're generating all valid typed terms, we can actually look at the set of unique terms that we already have, and when we see that the term tree we're about to look at contains a unique term, we can prune that away, so we're not going to look at it at all. So the fact that you can really control the enumeration of the ECTA during enumeration really helps us do this. Um, and what we've seen so far is that we're able to handle bigger modules, bigger APIs, as you said, bigger specs, because the ECTAs are very good at producing well-typed programs in a very fast way, where QuickSpec kind of runs for a long time and runs out of memory, because it's just a very big spec.
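[Editor's note: a toy version of the QuickSpec-style loop described here, assuming nothing about the real library: evaluate small candidate terms over `reverse` and `++` on a few test inputs, treat terms that agree on every input as equal, and report the discovered laws. Real QuickSpec uses random testing and much cleverer term management.]

```haskell
module Main where

-- Test inputs playing the role of QuickCheck's random cases.
inputs :: [[Int]]
inputs = [[], [1], [1,2], [3,1,2]]

-- Candidate terms in one variable xs, paired with their meaning.
terms :: [(String, [Int] -> [Int])]
terms =
  [ ("xs",                   id)
  , ("reverse xs",           reverse)
  , ("reverse (reverse xs)", reverse . reverse)
  , ("xs ++ []",             (++ []))
  ]

-- Two terms are considered equal if they agree on every test input.
agree :: ([Int] -> [Int]) -> ([Int] -> [Int]) -> Bool
agree f g = all (\xs -> f xs == g xs) inputs

-- Maintain a set of unique terms; each new term that matches an
-- existing unique term yields a conjectured law.
laws :: [String]
laws = go [] terms
  where
    go _ [] = []
    go uniq ((n, f) : rest) =
      case [ u | (u, g) <- uniq, agree g f ] of
        (u : _) -> (n ++ " == " ++ u) : go uniq rest
        []      -> go ((n, f) : uniq) rest

main :: IO ()
main = mapM_ putStrLn laws
```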

WS: So we've talked a lot about program synthesis, and I feel like it's an exciting time for the area, because it's an old idea but it's suddenly starting to work in some non-trivial cases. Um, so can you maybe both give your perspective on where this is heading, like where will we be 5 years from now? Matti, do you want to start?

MP: Right, right. Um, yeah, so this was actually an interesting discussion that came up with the program-repair folks, who do a lot of program repair, and everyone there is trying to use neural networks, right? But one of the problems they have with neural networks is that you have to teach the neural network syntax as well, and then you end up in a situation where it generates patches, but like 70% of them don't even compile.

WS: Right, right.

MP: Whereas the techniques that we're using are more classical: we're doing inference, we're doing constraint solving, we're doing this bottom-up thing where we only check well-typed programs. Um, and I think in the future where we're going is, we're going to be using neural networks for the high-level sketch, like Copilot now, which kind of generates the program, and then we're going to be using these lower-level techniques that are correct by construction to really repair the program and come up with the correct program. So I think in the future we're going to be doing a lot of this combination, because, as we've hinted at earlier, these classical techniques don't really scale. If you want to synthesize something that's 5 terms, that's fine, but you're not going to synthesize a full module of hundreds or thousands of lines with so many expressions going on. But the neural networks, they're really good at the high level: here's a thousand lines, but it doesn't quite work, right? So...

WS: Um, no right.

MP: A combination of these techniques is, I think, where we're going. And it's a really exciting time, because the neural networks are here, and the classical techniques are also all here, right? So I think in the next five, ten years we'll be able to really integrate these and, I want to say, revolutionize programming.

WS: So you're looking at a model where the computer really is writing... like, in the same way that you have interactive proof assistants and tactics in Coq, or automation in Agda, where the computer is helping you to write large parts of the program. And then you're using some AI techniques for the big picture, and then of course it's maybe not entirely well-typed, but then you kind of zoom in on the errors and say, oh, but of course I just meant that there, and then fix it from there with some other input, right? Like, oh, I want these tests to pass, I want, you know...

MP: Yeah, exactly, right? Yeah, yeah, exactly. And then the idea is the focus becomes writing the spec, and actually saying what the program should do, and not...

WS: ...and this program to have this type, or something. Okay, so...

MP: Exactly, not writing out how it should do it, right? Yeah.

WS: Right, right. Okay, thanks very much! Jimmy?

JK: Um, yes, I'd say the direction I see is similar to Matti's, but it's a bit less awesome a take. So people talk about AI seizing jobs all the time, and say, well, it's expected, for instance, that the number of lawyers will go way down; certainly the number of stenographers has gone way down in the last hundred years. But I think most programmers are fine for at least another decade; their jobs are safe from AIs. Whereas as a tool developer who's spent his life in more symbolic techniques, I'm kind of afraid of the AI taking my job. Copilot has beaten us at our own game. Um, and moreover, it's not always clear how to integrate a large language model with anything else, because they're very specialized sequence models, and we don't understand how they work. And their learning power is immense: every single sentence or every single line of code you feed into a large language model, it's learning a little bit. And so there's kind of chatter in the AI community that for anything else you can add, wait another year or even less, and a bigger large language model can do it for you. So let's say it's more power for the human race, up until the point where they start, um, turning the entire world into white noise, because that makes it very easy to predict the next token. But it's a bit less power to those of us whose skill set is very highly trained. And the guy who sits in between is Swarat Chaudhuri at UT Austin: he thinks that with neurosymbolic programming, he wants to make it so that a single programmer can, in one year, build an entire operating system themselves, and I think we can get there.

WS: Well, if Copilot has already learned how to write an operating system, it's usually pretty good at doing the same trick again once it's learned it, right? So, okay, thanks very much for your time, thanks for the interview, and thanks to all our listeners for joining us.

MP: Thank you.

JK: Thanks again.

NV: Thanks!

Sponsors: The Haskell Interlude Podcast is a project of the Haskell Foundation, and it is made possible by the support of our sponsors, especially the Monad-level sponsors: Digital Asset, GitHub, Input Output, Juspay, and Meta.
